Sort latitude and longitude coordinates into clockwise ordered quadrilateral
**Problem**
Users can provide up to four latitude and longitude coordinates, in any order. They do so with Google Maps. Using Google's `Polygon` API (v3), the coordinates they select should be used to highlight the selected area between the four coordinates.
**Question**
How do you sort an array of latitude and longitude coordinates in (counter-)clockwise order?
**Solutions and Searches**
*StackOverflow Questions*
- [Drawing resizable (not intersecting) polygons](https://stackoverflow.com/questions/2353409/drawing-resizable-not-intersecting-polygons)
- [How to sort points in a Google maps polygon so that lines do not cross?](https://stackoverflow.com/questions/2374708/how-to-sort-points-in-a-google-maps-polygon-so-that-lines-do-not-cross)
- [Sort Four Points in Clockwise Order](https://stackoverflow.com/questions/242404/sort-four-points-in-clockwise-order)
*Related Sites*
- <http://www.daftlogic.com/projects-google-maps-area-calculator-tool.htm>
- <http://en.literateprograms.org/Quickhull_%28Javascript%29>
- <http://www.geocodezip.com/map-markers_ConvexHull_Polygon.asp>
- <http://softsurfer.com/Archive/algorithm_0103/algorithm_0103.htm>
*Known Algorithms*
- Graham's scan (too complicated)
- Jarvis March algorithm (handles N points)
- Recursive Convex Hull (removes a point)
**Code**
Here is what I have so far:
```
// Ensures the markers are sorted: NW, NE, SE, SW
function sortMarkers() {
    var ns = markers.slice( 0 );
    var ew = markers.slice( 0 );

    ew.sort( function( a, b ) {
        if( a.position.lat() < b.position.lat() ) {
            return -1;
        }
        else if( a.position.lat() > b.position.lat() ) {
            return 1;
        }
        return 0;
    });

    ns.sort( function( a, b ) {
        if( a.position.lng() < b.position.lng() ) {
            return -1;
        }
        else if( a.position.lng() > b.position.lng() ) {
            return 1;
        }
        return 0;
    });

    var nw;
    var ne;
    var se;
    var sw;

    if( ew.indexOf( ns[0] ) > 1 ) {
        nw = ns[0];
    }
    else {
        ne = ns[0];
    }

    if( ew.indexOf( ns[1] ) > 1 ) {
        nw = ns[1];
    }
    else {
        ne = ns[1];
    }

    if( ew.indexOf( ns[2] ) > 1 ) {
        sw = ns[2];
    }
    else {
        se = ns[2];
    }

    if( ew.indexOf( ns[3] ) > 1 ) {
        sw = ns[3];
    }
    else {
        se = ns[3];
    }

    markers[0] = nw;
    markers[1] = ne;
    markers[2] = se;
    markers[3] = sw;
}
```
Thank you.
|
Given the points:
```
4 +        [d]            [g]
  |
3 [a]         [e]
  |
2 +          [f]         [h]
  |
1 +   [b]
  |
0 +----+---[c]---+----+----+----+
  0    1    2    3    4    5    6
```
you want to find the following bound walk:
```
4 +     ___[d]------------[g]
  |  __/                    \
3 [a]/        [e]__          \
  |  \         \_  ```---     \
2 +   \       `[f]     \___[h]
  |    \     __/
1 +   [b] __/
  |      \ /
0 +----+--`[c]---+----+----+----+
  0    1    2    3    4    5    6
```
?
If this is correct, here's a way:
- find the upper most point, Ptop, in the set of points. In case of a tie, pick the point with the smallest x coordinate
- sort all points by comparing the slopes mi and mj of the lines that each pair of points Pi and Pj (excluding Ptop!) make when passing through Ptop
- if mi and mj are equal, let the point Pi or Pj closest to Ptop come first
- if mi is positive and mj is negative (or zero), Pj comes first
- if both mi and mj are either positive or negative, let the point belonging to the line with the largest slope come first
Here's a quick demo for the map:

(I know little JavaScript, so I might, or probably have, violated some JavaScript code conventions...):
```
var points = [
    new Point("Stuttgard", 48.7771056, 9.1807688),
    new Point("Rotterdam", 51.9226899, 4.4707867),
    new Point("Paris", 48.8566667, 2.3509871),
    new Point("Hamburg", 53.5538148, 9.9915752),
    new Point("Praha", 50.0878114, 14.4204598),
    new Point("Amsterdam", 52.3738007, 4.8909347),
    new Point("Bremen", 53.074981, 8.807081),
    new Point("Calais", 50.9580293, 1.8524129),
];

var upper = upperLeft(points);

print("points :: " + points);
print("upper :: " + upper);
points.sort(pointSort);
print("sorted :: " + points);

// A representation of a 2D Point.
function Point(label, lat, lon) {
    this.label = label;
    this.x = (lon + 180) * 360;
    this.y = (lat + 90) * 180;

    this.distance = function(that) {
        var dX = that.x - this.x;
        var dY = that.y - this.y;
        return Math.sqrt((dX*dX) + (dY*dY));
    }

    this.slope = function(that) {
        var dX = that.x - this.x;
        var dY = that.y - this.y;
        return dY / dX;
    }

    this.toString = function() {
        return this.label;
    }
}

// A custom sort function that sorts p1 and p2 based on the slope
// they form with the upper most point from the array of points.
function pointSort(p1, p2) {
    // Exclude the 'upper' point from the sort (which should come first).
    if(p1 == upper) return -1;
    if(p2 == upper) return 1;

    // Find the slopes of 'p1' and 'p2' when a line is
    // drawn from those points through the 'upper' point.
    var m1 = upper.slope(p1);
    var m2 = upper.slope(p2);

    // 'p1' and 'p2' are on the same line towards 'upper'.
    if(m1 == m2) {
        // The point closest to 'upper' will come first.
        return p1.distance(upper) < p2.distance(upper) ? -1 : 1;
    }

    // If 'p1' is to the right of 'upper' and 'p2' is to the left.
    if(m1 <= 0 && m2 > 0) return -1;

    // If 'p1' is to the left of 'upper' and 'p2' is to the right.
    if(m1 > 0 && m2 <= 0) return 1;

    // It seems that both slopes are either positive, or negative.
    return m1 > m2 ? -1 : 1;
}

// Find the upper most point. In case of a tie, get the left most point.
function upperLeft(points) {
    var top = points[0];
    for(var i = 1; i < points.length; i++) {
        var temp = points[i];
        if(temp.y > top.y || (temp.y == top.y && temp.x < top.x)) {
            top = temp;
        }
    }
    return top;
}
```
Note: you should double- or triple-check the conversions from `lat,lon` to `x,y` as I am a novice when it comes to GIS!!! But perhaps you don't even need to convert anything. If you don't, the `upperLeft` function might just return the lowest point instead of the highest, depending on the locations of the points in question. Again: triple-check these assumptions!
When executing the snippet above, the following gets printed:
```
points :: Stuttgard,Rotterdam,Paris,Hamburg,Praha,Amsterdam,Bremen,Calais
upper :: Hamburg
sorted :: Hamburg,Praha,Stuttgard,Paris,Bremen,Calais,Rotterdam,Amsterdam
```
**Alternate Distance Function**
```
// Note: toRad() is not built into JavaScript; define it on Number first:
if (typeof Number.prototype.toRad === "undefined") {
    Number.prototype.toRad = function() {
        return this * Math.PI / 180;
    };
}

function distance(lat1, lng1, lat2, lng2) {
    var R = 6371; // earth's mean radius in km
    var dLat = (lat2 - lat1).toRad();
    var dLon = (lng2 - lng1).toRad();
    var a = Math.sin(dLat/2) * Math.sin(dLat/2) +
            Math.cos(lat1.toRad()) * Math.cos(lat2.toRad()) *
            Math.sin(dLon/2) * Math.sin(dLon/2);
    var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a));
    return R * c;
}
```
|
Convert UTC date from scientific notation to Java.util.Date
I'm trying to convert the `created_utc` date from Reddit's JSON to a `Date` object, but I keep getting an "Unparseable" error. An example of their dates is: `created_utc": 1.43701862E9,` which I'm told is a Unix timestamp.
From my research this code should convert it:
```
String date = "1.43701862E9";
java.util.Date time = new java.util.Date((long)date*1000);
```
but obviously I'm getting an error on multiplying the date by 1000.
This is the code I normally use to convert string dates:
```
String date = "1.43701862E9";
Calendar cal = Calendar.getInstance(TimeZone.getDefault());
SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss z");
format.parse(date);
```
|
This should work for you:
```
public static void main(String[] args) {
String date = "1.43701862E9";
java.util.Date time = new java.util.Date(Double.valueOf(date).longValue()*1000);
System.out.println(time);
}
```
Output:
```
Wed Jul 15 23:50:20 EDT 2015
```
---
Since the value uses scientific notation you can't parse the `String` using the `Long` class: `Long.parseLong(String s)` (nor can you simply cast a `String`, as you're trying). Instead, I used the [`Double.valueOf()`](http://docs.oracle.com/javase/7/docs/api/java/lang/Double.html#valueOf(java.lang.String)) method and preserved the `long` value using `.longValue()`.
|
decimal data entry in ASP MVC
```
public Decimal SalePrice { get; set; }
```
and
```
<%= Html.TextBoxFor(Model => Model.SalePrice) %>
```
What is a good way to ensure validation or proper input entry from the user? Things like only allowing numeric entries with up to two decimal places?
|
A regular expression like the following should work:
```
\A\d+(\.\d{1,2})?\Z
```
This matches input like:
```
2.00
25.70
04.15
2.50
525.43
423.3
52
```
And, as Mike suggested, you could use this in a data validation attribute:
```
[RegularExpression(@"\A\d+(\.\d{1,2})?\Z", ErrorMessage="Please enter a numeric value with up to two decimal places.")]
public Decimal SalePrice { get; set; }
```
**Edit:** In response to your two questions:
**1) This validates on submit right and not when we lose focus of that field?**
Assuming all you've added is the attribute, then yes, validation occurs on submit. Technically, validation occurs once the form parameters are bound to the model. However, to actually make use of this, you need to check the model state in your controller:
```
public ActionResult MyController(MyModel model)
{
    if (ModelState.IsValid)
    {
        // do stuff
    }
    else
    {
        // Return view with the now-invalid model
        // if you've placed error messages on the view, they will be displayed
        return View(model);
    }
}
```
To have the validation occur client-side in addition to server-side, you'd need to use javascript. A basic example of this using Microsoft AJAX validation is at [Scott Gu's blog](http://weblogs.asp.net/scottgu/archive/2010/01/15/asp-net-mvc-2-model-validation.aspx).
**2) Can you show me the regex where the max entry cannot exceed 100.00 and min entry cannot be below 1.00**
You could probably do this in regex somehow, but regex isn't really designed for anything more than pattern matching. A better way to do this would be to add a range validation attribute, in addition to your regex attribute. So now your property would look like:
```
[RegularExpression(@"\A\d+(\.\d{1,2})?\Z", ErrorMessage="Please enter a numeric value with up to two decimal places.")]
[Range(1.00m, 100.00m)]
public Decimal SalePrice { get; set; }
```
The above code is untested but the general approach should work.
|
SPARQL "ASK" query to check if object property exist between two classes (Not b/w individuals)
I want to query whether a certain ObjectProperty (OP) exists between two classes inside an OWL2 file. I'm using the JENA API to construct the SPARQL queries.
What I have tried till now:
First I used the SELECT query to check the classes for a given OP:
```
" { SELECT ?domain ?range WHERE {\n" +
":isManagedBy rdfs:domain ?domain; \n" +
" rdfs:range ?range. \n } }" +
"}";
```
Then I wrapped it with an ASK query:
```
" ASK WHERE { \n" +
" { SELECT ?domain ?range WHERE {\n" +
":isManagedBy rdfs:domain ?domain; \n" +
" rdfs:range ?range. \n } }" +
"}";
```
It seems to give me the answer, but I think I'm mixing up too many things in this query.
My goal is to query whether a certain fact exists inside the OWL file or not (Boolean answer).
Eg: OWL Snippet
```
<owl:ObjectProperty rdf:ID="isManagedBy">
    <rdf:type rdf:resource="owl#FunctionalProperty" />
    <rdfs:domain rdf:resource="#FunctionManagement" />
    <rdfs:range rdf:resource="#SymposiumPlanner2013"/>
</owl:ObjectProperty>
```
What I would like to check: whether `isManagedBy(FunctionManagement, SymposiumPlanner2013)` exists or not.
|
I think that the possible duplicate ([How to query Classes with Object Property in Sparql](https://stackoverflow.com/questions/18458150/how-to-query-classes-with-object-property-in-sparql)) I linked may be close enough to answer your question, but there's no need to wrap a `select` query in an `ask` query here. You're asking about whether your data contains the triples:
```
:isManagedBy rdfs:domain :FunctionManagement .
:isManagedBy rdfs:range :SymposiumPlanner2013 .
```
You just want a query that `ask`s whether that data is present:
```
prefix : <…>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
ask {
:isManagedBy rdfs:domain :FunctionManagement .
:isManagedBy rdfs:range :SymposiumPlanner2013 .
}
```
You can even use some abbreviations to make that a bit more concise:
```
prefix : <…>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
ask {
:isManagedBy rdfs:domain :FunctionManagement ;
rdfs:range :SymposiumPlanner2013 .
}
```
|
the index value of attribute
I am using the following code
...
...
```
for($i=0; $i<90; $i++){
?>
    <a id='read[<?php print $i; ?>]' href="<?php print $textToshow; ?>"> Text Shown</a>
<?php } ?>
```
I want to know the id of the `<a>` element when a user clicks on it. Something like read[1], read[2], etc.
|
```
$('a').click(function( e ) {
    alert(this.id);
    // e.preventDefault(); // Uncomment this line if you don't want
});                        // to follow the link's href.
```
This assigns a `click` event to *all* `<a>` elements that will alert its ID when clicked.
Uncomment the [`e.preventDefault()`](http://api.jquery.com/event.preventdefault/) line to prevent the default behavior of the link (following its `href`).
It would probably be best to add a class attribute to the links, and select using that:
```
$('a.someClass').click(function( e ) {
    alert(this.id);
    // e.preventDefault(); // Uncomment this line if you don't want
});                        // to follow the link's href.
```
This selects `<a>` elements with the class `"someClass"` using [the class selector](http://api.jquery.com/class-selector/).
|
How does isNew() tell if the session is a new one or is already in use?
How does calling [`isNew()`](http://docs.oracle.com/javaee/1.2.1/api/javax/servlet/http/HttpSession.html#isNew%28%29) on the session object check if the session is a new one or is already in use?
I read that `isNew()` returns true if the client has not yet responded with the session ID. But what does that mean? Please explain.
|
>
> I read that isNew() returns true if the client has not yet responded with the session ID. But what does it mean ?
>
>
>
Consider that the server is currently processing a request. There are two scenarios with respect to session handling.
- In the new session scenario, a new session is being created for the user / client by the server. (The client may have supplied no session id in the request, or it may have supplied a session id that the server thinks is invalid.) The application code of the servlet decides a session is required (e.g. because it has some information it wants to store there), and attempts to fetch it with the "create if not present" flag. The servlet infrastructure realises that there is no current session, creates a new one with a new session id, and saves it in the session store. At the completion of the request, the session id is returned to the client; e.g. as a cookie, or as a URL with session id attached.
- In the existing session scenario, the client has included a session id in the request; e.g. as a session cookie, or as a session id in the request URL. The servlet infrastructure recognizes this id, looks it up in its session store and (if necessary) recreates the `HttpSession` object containing the session state retrieved from the session store. When the application code of the servlet attempts to access the session, it gets this `HttpSession` object, not a new one. The session state can then be used and updated by the servlet as it processes the request.
In the first scenario, calling `isNew()` on the session object will return `true` because this is a new session.
In the second scenario, calling `isNew()` on the session object will return `false` because this is NOT a new session.
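To make the two scenarios concrete, here is a minimal, hypothetical servlet sketch (the class name and messages are illustrative, not from the question):
```
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class SessionDemoServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Fetch the session with "create if not present" semantics.
        HttpSession session = request.getSession(true);
        if (session.isNew()) {
            // Scenario 1: no (valid) session id came with this request;
            // the container has just created a brand new session.
            response.getWriter().println("New session: " + session.getId());
        } else {
            // Scenario 2: the request carried a session id the container
            // recognizes; the existing session state was retrieved.
            response.getWriter().println("Existing session: " + session.getId());
        }
    }
}
```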
>
> How does calling isNew() on the session object,check if the session is a new one or is already in use ?
>
>
>
The servlet infrastructure knows which of the two scenarios occurred because it did the session creation or session lookup. The most obvious implementation of `isNew()` is to include a `private boolean` field in the `HttpSession` object, and return the value of that field as the result of `isNew()`. The field would be initialized by the servlet infrastructure according to how it obtained the session object.
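A minimal sketch of that idea (illustrative only; real servlet containers are far more elaborate):
```
// Illustrative sketch, not real container code.
class SessionSketch {
    private final boolean isNew;

    SessionSketch(boolean isNew) {
        // The servlet infrastructure sets this: true when it creates the
        // session, false when it finds the session in its session store.
        this.isNew = isNew;
    }

    public boolean isNew() {
        return isNew;
    }
}
```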
If it would help, [How do servlets work? Instantiation, sessions, shared variables and multithreading](https://stackoverflow.com/questions/3106452) explains the larger context in which this all occurs.
|
Foreach row display as column
I have a pretty standard database like:
```
id | name | last_name | gender
-----------------------------------------------
1 | John | Doe | Male
2 | Jane | Smith Dolores Clayborne | Female
3 | Paul | Paulson | Male
```
and I want to display it in an HTML table, but every row in the DB needs to become a column in the HTML table:
```
id | 1 | 2 | 3
---------------------------------------------
name | John | Jane | Paul
---------------------------------------------
last_name | Doe | Smith | Paulson
| | Dolores |
| | Clayborne |
---------------------------------------------
gender | Male | Female | Male
```
If I go with:
```
foreach($data as $row) {
    echo "<tr><td>" . $row->id . "</td></tr>";
    echo "<tr><td>" . $row->name . "</td></tr>";
    echo "<tr><td>" . $row->last_name . "</td></tr>";
    echo "<tr><td>" . $row->gender . "</td></tr>";
}
```
I get all the data in one long column. How do I break the column after every SQL row?
Note: `$data` is an array of objects that contain properties with the field values (you can probably figure that out from the example).
**EDIT:**
I found the solution, see my answer, it's simple and elegant.
|
I figured it out; turns out it's a pretty simple solution with a few loops and pre-populating an array before writing rows and cells.
I tested it, works like a charm. ;) It might be useful for someone, so here it is:
```
foreach($records as $key => $row) {
    foreach($row as $field => $value) {
        $recNew[$field][] = $value;
    }
}
//This creates a new array composed/transposed with the field names as keys and
//the "rowed" values as sub-arrays.

echo "<table>\n";
foreach ($recNew as $key => $values) // For every field name (id, name, last_name, gender)
{
    echo "<tr>\n";                    // start the row
    echo "\t<td>" . $key . "</td>\n"; // create a table cell with the field name
    foreach ($values as $cell)        // for every sub-array iterate through all values
    {
        echo "\t<td>" . $cell . "</td>\n"; // write cells next to each other
    }
    echo "</tr>\n"; // end row
}
echo "</table>";
```
|
Scroll issue on .animate() and .prop()?
I have two divs with the same class. If I scroll one div, the other div's scroll position is set to 0. I am able to achieve this easily with `.prop()`. But when I use `.animate()` it only happens once and then stops working (I commented out that code in my example snippet). **What I want is that when the scroll position is reset to zero it should animate, i.e. the scroll comes to 0 with an animation, as it does with `.animate()`.**
>
> **Note: Classes of divs will be same and there can be more divs too.**
>
>
>
Here is the code I have tried; please tell me where I went wrong.
```
$(document).ready(function() {
    $('.swipe_div').scroll(function() {
        // $(this).siblings(".swipe_div").animate({scrollLeft: 0},100);
        $(this).siblings(".swipe_div").prop({
            scrollLeft: 0
        });
    });
});
```
```
body,
html {
    width: 100%;
    height: 100%;
    background-color: green;
    padding: 0;
    margin: 0;
}

.swipe_div {
    display: block;
    float: left;
    width: 100%;
    height: 100px;
    overflow-x: scroll;
    background-color: white;
}

.content,
.operation,
.swipe_container {
    display: block;
    float: left;
    height: 100%;
}

.swipe_container {
    width: 150%;
}

.content {
    display: flex;
    align-items: center;
    justify-content: flex-end;
    flex-direction: row;
    text-align: right;
    font-size: 30pt;
    width: 67%;
    background-color: grey;
}

.operation {
    width: 33%;
    background-color: red;
}
```
```
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>

<div class="swipe_div">
  <div class="swipe_container">
    <div class="content">
      >
    </div>
    <div class="operation">
    </div>
  </div>
</div>

<div class="swipe_div">
  <div class="swipe_container">
    <div class="content">
      >
    </div>
    <div class="operation">
    </div>
  </div>
</div>
```
|
When you're animating `scrollLeft` you're activating `scroll()` on the sibling, which in turn tries to animate scroll on the div you're actively scrolling. So you need to mark when you start scrolling and `throttle()` all subsequent calls to `scroll()` until you're done scrolling.
`trailing:true` calls it one more time after it hasn't been called for *throttle\_interval* (`250` in this example), turning the `scrolling` marker back to `false`:
```
$(document).ready(function() {
    var scrolling;
    $('.swipe_div').scroll(_.throttle(function() {
        if (!scrolling) {
            scrolling = true;
            $(this).siblings(".swipe_div").animate({scrollLeft: 0}, 150);
        } else {
            scrolling = false;
        }
    }, 250, {leading: true, trailing: true}));
});
```
```
body,
html {
    width: 100%;
    height: 100%;
    background-color: green;
    padding: 0;
    margin: 0;
}

.swipe_div {
    display: block;
    float: left;
    width: 100%;
    height: 100px;
    overflow-x: scroll;
    background-color: white;
}

.content,
.operation,
.swipe_container {
    display: block;
    float: left;
    height: 100%;
}

.swipe_container {
    width: 150%;
}

.content {
    display: flex;
    align-items: center;
    justify-content: flex-end;
    flex-direction: row;
    text-align: right;
    font-size: 30pt;
    width: 67%;
    background-color: grey;
}

.operation {
    width: 33%;
    background-color: red;
}
```
```
<script src="https://cdn.jsdelivr.net/lodash/4.17.4/lodash.min.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>

<div class="swipe_div">
  <div class="swipe_container">
    <div class="content">
      >
    </div>
    <div class="operation">
    </div>
  </div>
</div>

<div class="swipe_div">
  <div class="swipe_container">
    <div class="content">
      >
    </div>
    <div class="operation">
    </div>
  </div>
</div>
```
I tested it for a bit and actually discovered a small glitch/limitation: the throttle interval has to be smaller than the animation time. If it is not, the animation will outlast the throttle interval and trigger, in turn, the closing animation on the original scrolled element.
But this is the web (impossible is nothing): if and when your animation has to be longer than the throttle interval, you will have to mark the initial element with a class that excludes it from being animated. The class is then removed using a timeout on completion of the animation, equal to the throttle interval:
```
$(document).ready(function() {
    var scrolling;
    $('.swipe_div').scroll(_.throttle(function() {
        if (!scrolling) {
            scrolling = true;
            $(this).addClass('original');
            $(this).siblings(".swipe_div:not(.original)").animate(
                {scrollLeft: 0},
                250,
                function() {
                    setTimeout(function() {
                        $('.swipe_div').removeClass('original')
                    }, 150)
                }
            );
        } else {
            scrolling = false;
        }
    }, 150, {leading: true, trailing: true}));
});
```
|
Using strace for a C file
I have a C file that I want to trace what is going on. I'm trying to use:
```
strace -o trace.txt random.c
```
But it says that:
```
strace: Can't stat 'random.c': No such file or directory
```
I've been looking up strace for the past hour now, and don't exactly know what I'm doing wrong.
Is there an easier command to just trace what is compiled/executed?
Does 'strace' only work on shell files?
|
As [thrig says](https://unix.stackexchange.com/questions/340351/using-strace-for-a-c-file#comment601677_340351), you must compile your C program (using a compiler such as `gcc` or `clang`), then use `strace` to run the compiled binary.
```
ek@Io:~$ cat >hello.c <<'EOF'
> #include <stdio.h>
>
> int main(void)
> {
> puts("Hello, world!");
> return 0;
> }
> EOF
ek@Io:~$ gcc -ansi -pedantic -Wall -Wextra -g -o hello hello.c
ek@Io:~$ strace ./hello
execve("./hello", ["./hello"], [/* 19 vars */]) = 0
brk(NULL) = 0x220f000
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f8000316000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=156046, ...}) = 0
mmap(NULL, 156046, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f80002ef000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\t\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1864888, ...}) = 0
mmap(NULL, 3967392, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f7fffd2a000
mprotect(0x7f7fffee9000, 2097152, PROT_NONE) = 0
mmap(0x7f80000e9000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1bf000) = 0x7f80000e9000
mmap(0x7f80000ef000, 14752, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f80000ef000
close(3) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f80002ee000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f80002ed000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f80002ec000
arch_prctl(ARCH_SET_FS, 0x7f80002ed700) = 0
mprotect(0x7f80000e9000, 16384, PROT_READ) = 0
mprotect(0x600000, 4096, PROT_READ) = 0
mprotect(0x7f8000318000, 4096, PROT_READ) = 0
munmap(0x7f80002ef000, 156046) = 0
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 2), ...}) = 0
brk(NULL) = 0x220f000
brk(0x2230000) = 0x2230000
write(1, "Hello, world!\n", 14Hello, world!
) = 14
exit_group(0) = ?
+++ exited with 0 +++
```
You may still of course use whatever options you need for `strace`, such as `-o`.
`strace` will not succeed at running C source code. Even if you give it a path with a `/` in it to insist that it attempt to do so, it will fail:
```
ek@Io:~$ strace hello.c
strace: Can't stat 'hello.c': No such file or directory
ek@Io:~$ strace ./hello.c
execve("./hello.c", ["./hello.c"], [/* 19 vars */]) = -1 EACCES (Permission denied)
write(2, "strace: exec: Permission denied\n", 32strace: exec: Permission denied
) = 32
exit_group(1) = ?
+++ exited with 1 +++
```
Even if you marked your `.c` file executable, it still cannot be run:
```
ek@Io:~$ chmod +x hello.c
ek@Io:~$ strace ./hello.c
execve("./hello.c", ["./hello.c"], [/* 19 vars */]) = -1 ENOEXEC (Exec format error)
write(2, "strace: exec: Exec format error\n", 32strace: exec: Exec format error
) = 32
exit_group(1) = ?
+++ exited with 1 +++
```
In short, you have to compile your C program and run the compiled binary rather than the source code file itself. Just as you cannot run a `.c` file normally (i.e., `./hello.c` fails), you cannot run it with `strace` either.
|
Haskell `$` in F# possible?
In Haskell, we can write
`print $ abs $ 3 - 5`
using `$` .
In F#, we can write
`printfn "%d" << abs <| 3 - 5`
However, in many cases in F#, it would also be useful to have the same functionality as `$`, since the above are simply expressions with binary operators.
The trick of Haskell's `$` is that it has just about the lowest precedence among its binary operators.
So, I investigated again
<https://learn.microsoft.com/en-us/dotnet/fsharp/language-reference/symbol-and-operator-reference/#operator-precedence>
and I observe, unfortunately, that there are no lower-precedence operators that can be overloaded safely, because all of them seem essential.
Do you have any ideas on this?
Do you think Haskell `$` in F# is possible??
|
Like Haskell, F♯ also allows you to define *new* infix operators, not just overload existing ones. Unlike Haskell however, you can't select precedence (aka fixity) at will, nor select any non-letter Unicode symbol. Instead, you must choose from `!`, `$`, `%`, `&`, `*`, `+`, `-`, `.`, `/`, `<`, `=`, `>`, `?`, `@`, `^`, `|`. The fixity is then determined by analogy with the standard operators, in a way I don't entirely understand; what does seem to hold is that for any single-symbol infix operator, you can make a custom one with the same fixity by adding a `.` in front.
So, to get a lowest-fixity operator you'd have to call it `.|`. However, `|` is left-associative, so you couldn't write `printfn "%d" .| abs .| 3 - 5`. However I'd note that in Haskell, your example would also rather be written `print . abs $ 3 - 5`, and that can indeed be expressed in F♯:
```
let (.|) f x = f x
printfn "%d" << abs .| 3 - 5
```
To transliterate `print $ abs $ 3 - 5`, you'd need a *right*-associative operator. The lowest-precedence custom right-associative operator I can manage to define is `.^`, which indeed gets the example working:
```
let (.^) f x = f x
printfn "%d" .^ abs .^ 3 - 5
```
However, this operator doesn't have very low precedence, in particular it actually has higher precedence than the composition operators!
---
Really, you shouldn't be doing any of this and instead just use `printfn "%d" << abs <| 3 - 5` as you originally suggested. It's standard, and it also corresponds to the preferred style in Haskell.
|
Use execCommand to remove an element so it is undoable
So I am trying to add a delete button that appears on `hover` of elements inside a `contenteditable`, and clicking the delete button removes the element. Right now, I have something like this:
```
button.on("click", function() {
    item.remove();
    button.hide();
});
```
However, I would like to make the removal undoable, so the user could press `command+z` to `undo` the deletion. As far as I know, you would have to use `execCommand` to delete the element, in order to do this (A custom solution for undoing is not an option). Is there any way to somehow use `execCommand` to delete a specific node (even something unselectable, like an `iframe`)?
|
There is a `delete` command. You can set the selection to encompass the element in question, call the command and it will be undoable, at least in the following simple example, which works in all the browsers I tried:
```
function deleteElement(id) {
    var el = document.getElementById(id);
    var range = document.createRange();
    range.selectNode(el);
    var sel = window.getSelection();
    sel.removeAllRanges();
    sel.addRange(range);
    document.execCommand("delete", false, null);
}
```
```
<input type="button" onmousedown="deleteElement('toBeDeleted'); return false" value="Delete">
<div contenteditable="true">Use the button above to delete <b id="toBeDeleted">this bold text</b></div>
```
|
JWT (Token based authentication) vs Session / Cookies - Best Usage
I've been reading up on this topic a lot but could not find a good answer that I was looking for.
So my understanding of the pros and cons of JWT vs Session is
JWT
pro
- more scalable since no DB look up on server side. (assuming stateless JWT)
con
- storage of token on client side needs to be well thought out. (cookie w/ httpOnly is preferable over local storage but cookie has 4kb size limit)
- not immediately revocable
- permissions can go stale until the next refresh
Session
pro
- arguably more secure since you are only passing around session id (opaque ref), easier to protect against CSRF than XSS, etc.
- changes on user are reflected immediately.
con
- less scalable than token
So given my understanding,
1. which approach do websites that support huge numbers of users (Amazon, Uber) use? Is using sessions with a distributed cache good enough?
2. what is the real-life use case where it makes more sense to use JWT (token-based) over session-based auth?
Thank you!
|
JWTs were never designed for handling sessions. They are a way of exchanging integrity-protected messages between services. Have a look at this article: [http://cryto.net/~joepie91/blog/2016/06/13/stop-using-jwt-for-sessions/](http://cryto.net/%7Ejoepie91/blog/2016/06/13/stop-using-jwt-for-sessions/) which explains why using JWTs for handling sessions is not a good idea.
You can also read about the BFF pattern: <https://curity.io/resources/learn/the-bff-pattern/> where you use a lightweight backend component for handling tokens, and still deal only with sessions in the frontend. Because it's a light component it's easy to scale it - e.g. it can be a lambda function.
So in my opinion, there are no really good use cases where you genuinely prefer JWT-based sessions over cookie-based sessions, but (as with any strong opinion) this may trigger a discussion ;)
|
Assigned pointer field becomes nil
I have a struct:
```
type user struct {
    Id string
    ..
    data_ptr *userData
}
```
And I store slice of users in global scope:
```
type Hall struct {
    users []user
}

var hall = Hall{} //global
```
Finally, http handler:
```
func dataHandler(response http.ResponseWriter, request *http.Request) {
    userExist, user_ptr := hall.haveUserId() //works fine
    switch requestType {
    case "load":
        user_ptr.loadData() //data loaded and user_ptr.data_ptr is set
    case "newData":
        user_ptr.data_ptr = newData // <-- this is it, now previously set data_ptr == nil
```
So, why the heck, I mean I send "load" request, it loads data, sets `data_ptr` for `user_ptr`. But on next call, "newData" request, `user_ptr.data_ptr` is `nil`?
Just in case, here is `loadData()`:
```
func (p *user) loadData(userId string) {
    ..
    data := userData{}
    p.data_ptr = &data
}
```
**EDIT:** where `user_ptr` comes from:
```
func (h *Hall) haveUserId(id string) (bool, *user) {
    for _, u := range h.users {
        if u.Id == id {
            fmt.Println("UID found")
            return true, &u
        }
    }
    return false, nil
}
```
|
This is because you operate on a *copy* and not on the slice element itself.
In your `haveUserId()` function the `for ... range` makes a copy of the elements it loops over, and you return the address of this copy. And so later you will modify this copy which is independent from the value in the slice. So if later you check the address in the slice element, it will still be unchanged (`nil`).
Possible fix: return the address of the slice element: `&h.users[i]`
```
func (h *Hall) haveUserId(id string) (bool, *user) {
    for i := range h.users {
        if h.users[i].Id == id {
            fmt.Println("UID found")
            return true, &h.users[i]
        }
    }
    return false, nil
}
```
To demonstrate this, see this example:
```
type Point struct{ x, y int }

ps := []Point{{1, 2}, {3, 4}}
fmt.Println(ps) // Output: [{1 2} {3 4}]

for _, v := range ps {
    v.x += 10 // Modifies just the copy
}
fmt.Println(ps) // Output (unchanged): [{1 2} {3 4}]

for i := range ps {
    ps[i].x += 10 // Modifies value in slice
}
fmt.Println(ps) // Output (changed): [{11 2} {13 4}]
```
Try it on the [Go Playground](http://play.golang.org/p/4B54nwVPA9).
|
Issue with getting database via Sitecore API
We noticed a slight oddity in the Sitecore API code. The code is below for your reference. The code is trying to get a database by doing `new Database(database)`. But randomly it was failing.
This code worked for a while with `Database db = new Database(database);` but started failing randomly yesterday. When we changed the code to `Database db = Database.GetDatabase(database);`, the code started working again. What is the difference between the two approaches and what is recommended by Sitecore?
I've seen this happen in two environments now - multiple times in production and a couple of times in my development environment.
```
public static void DeleteItem(string id, string database)
{
    //get the database
    Database db = new Database(database);

    //get the item
    Item item = db.GetItem(new ID(id));

    if (item != null)
    {
        using (new Sitecore.SecurityModel.SecurityDisabler())
        {
            //delete the item
            item.Delete();
        }
    }
}
```
|
A common way you will see people get a specific database is:
```
Sitecore.Data.Database master = Sitecore.Configuration.Factory.GetDatabase("master");
```
This is equivalent to `Sitecore.Data.Database.GetDatabase("master")`.
When you call either of these methods it will first check the cache for the database. If not found it will build up the database with all of the configuration values within the config file via reflection. Once the database is created it will be placed in the cache for future use.
When you use the constructor on the database it simply creates a rather empty database object. I am rather surprised to hear it was working at all when you used this method.
The proper approach to get a **specific** database would be to use:
```
Sitecore.Configuration.Factory.GetDatabase("master");
// or
Sitecore.Data.Database.GetDatabase("master");
```
If you are looking to get the database used with the current request (aka context database) you can use `Sitecore.Context.Database`. You can also use `Sitecore.Context.ContentDatabase`.
|
Customize icon for "Add to home screen"
Are there any alternatives to the default option in web browsers like Chrome that allows me to add a link or button in my webapp and create an icon on the home screen?
For example, I have my webapp and I want a link or button in the main.html that executes a script and creates the icon on the home screen. I don't think that is possible. So are there any alternatives that simulate this default option in web browsers?
Also I want to specify the icon that shows on the home screen. How can I do that?
|
**I want to specify the icon that shows in the home screen. How can i do that?**
You can use `<link rel="apple-touch-icon" sizes="128x128" href="niceicon.png">` (yes, even for Android device).
Please check the [iOS documentation](https://developer.apple.com/library/content/documentation/AppleApplications/Reference/SafariWebContent/ConfiguringWebApplications/ConfiguringWebApplications.html) for detailed information on Apple devices. Please note you can even define an icon for the entire website.
Please check the [Android documentation](https://developer.chrome.com/multidevice/android/installtohomescreen) for detailed information on Android devices. If `<link rel="apple-touch-icon" sizes="128x128" href="niceicon.png">` does not work, you can try `<link rel="icon" sizes="192x192" href="nice-highres.png">`, which is the recommended method on Android.
**Are there any alternatives that simulate this default option in web browsers?**
You can check <http://cubiq.org/add-to-home-screen> for an alternative. "add-to-home-screen" will show an overlaying message encouraging user to add the web app to the homescreen.
Note: Thanks to adam0404's comment, cubiq.org link is broken now. Fortunately, the "add-to-home-screen" library was uploaded to GitHub, please check <https://github.com/cubiq/add-to-homescreen>
|
Wrap current session in a screen session
I started a big operation, but I didn't start a screen session, so I can't disconnect without canceling the operation.
Is there a way to inject screen so that the operation continues and sends its output to the screen session, and I can safely disconnect my SSH session?
|
After browsing your "No." link I actually found [this answer](https://serverfault.com/questions/55880/moving-an-already-running-process-to-screen/55882#55882) on serverfault from earl:
>
> However, for your actual problem, there's another thing you could try: after having
> launched your job from the terminal, background it by typing ctrl-z and then bg.
> After that, detach the job from it's parent shell - in bash: disown -h %<jobid>. After
> that, you can safely close the terminal and the job will continue running.
>
>
>
And this one from the same question from Rob Wells:
>
> $ screen -S my_process
>
> $ [retty](http://pasky.or.cz/%7Epasky/dev/retty/) $(pgrep my_process)
>
> /redraw
>
>
>
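Putting earl's suggestion together, the whole flow looks roughly like this (the job id `%1` is an assumption; check yours with `jobs`):
```
$ ./long_running_job     # started without screen
^Z                       # Ctrl-Z suspends the running job
$ bg                     # resume it in the background
$ disown -h %1           # detach job 1 from this shell
$ exit                   # now it is safe to close the SSH session
```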
|
Summing values of Hive array types
Hive has this pretty nice array type that is very useful in theory, but when it comes to practice I found very little information on how to do any kind of operations with it.
We store a series of numbers in an array-type column and need to SUM them in a query, preferably from the n-th to the m-th element. Is it possible with standard HiveQL, or does it require a UDF or a custom mapper/reducer?
Note: we're using Hive 0.8.1 in an EMR environment.
|
I'd write a simple `UDF` for this purpose. You need to have `hive-exec` in your build path.
E.g. in case of `Maven`:
```
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>0.8.1</version>
</dependency>
```
A simple raw implementation would look like this:
```
package com.myexample;

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.IntWritable;

public class SubArraySum extends UDF {

    public IntWritable evaluate(ArrayList<Integer> list,
            IntWritable from, IntWritable to) {

        IntWritable result = new IntWritable(-1);
        if (list == null || list.size() < 1) {
            return result;
        }

        int m = from.get();
        int n = to.get();

        //m: inclusive, n:exclusive
        List<Integer> subList = list.subList(m, n);
        int sum = 0;
        for (Integer i : subList) {
            sum += i;
        }
        result.set(sum);
        return result;
    }
}
```
Next, build a jar and load it in Hive shell:
```
hive> add jar /home/user/jar/myjar.jar;
hive> create temporary function subarraysum as 'com.myexample.SubArraySum';
```
Now you can use it to calculate the sum of the array you have.
E.g:
Let's assume that you have an input file with tab-separated columns in it:
```
1 0,1,2,3,4
2 5,6,7,8,9
```
Load it into mytable:
```
hive> create external table mytable (
          id int,
          nums array<int>
      )
      ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
      COLLECTION ITEMS TERMINATED BY ','
      STORED AS TEXTFILE
      LOCATION '/user/hadoopuser/hive/input';
```
Execute some queries then:
```
hive> select * from mytable;
1 [0,1,2,3,4]
2 [5,6,7,8,9]
```
Sum it in range m,n where **m=1, n=3**
```
hive> select subarraysum(nums, 1,3) from mytable;
3
13
```
Or
```
hive> select sum(subarraysum(nums, 1,3)) from mytable;
16
```
|
Write an Rcpp function to detect if a NumericMatrix has any NA values
I want to write an Rcpp function with a NumericMatrix argument that returns true if any of the matrix elements are NA, and false otherwise. I tried looping `is_na` over all the columns but I am looking for a neater way. I am also concerned about speed.
```
bool check(NumericMatrix M){
    int n = M.ncol();
    for(int i = 0; i < n; i++){
        if( is_na( M(_,i) ) ){ return true; }
    }
    return false;
}
```
|
Rcpp sugar can replicate the operation by combining `is_na()` and `any()`. `is_na()` will detect missing values and `any()` checks whether any value is `TRUE`. Note, to retrieve a boolean value, `any()` must be used with `is_true()`.
```
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
bool contains_na(NumericMatrix M){
    return is_true(any(is_na(M)));
}
```
Test case:
```
A = matrix(1:4, nrow = 2)
contains_na(A)
# [1] FALSE
M = matrix(c(1, 2, NA, 4), nrow = 2)
contains_na(M)
# [1] TRUE
```
|
Set encoding when converting text file to pdf using itext
I'm working on getting iText to output my UTF-8 encoded text correctly; the input file contains symbols like ° and Latin characters (é, è, à...).
But I didn't find a solution. This is the code I'm using:
```
BufferedReader input = null;
Document output = null;

System.out.println("Convert text file to pdf");
System.out.println("input : " + args[0]);
System.out.println("output : " + args[1]);

try {
    // text file to convert to pdf as args[0]
    input = new BufferedReader(new FileReader(args[0]));

    // letter 8.5x11
    // see com.lowagie.text.PageSize for a complete list of page-size constants.
    output = new Document(PageSize.LETTER, 40, 40, 40, 40);

    // pdf file as args[1]
    PdfWriter.getInstance(output, new FileOutputStream(args[1]));
    output.open();
    output.addAuthor("RealHowTo");
    output.addSubject(args[0]);
    output.addTitle(args[0]);

    BaseFont courier = BaseFont.createFont(BaseFont.COURIER, BaseFont.CP1252, BaseFont.EMBEDDED);
    Font font = new Font(courier, 12, Font.NORMAL);
    Chunk chunk = new Chunk("", font);
    output.add(chunk);

    String line = "";
    while (null != (line = input.readLine())) {
        System.out.println(line);
        Paragraph p = new Paragraph(line);
        p.setAlignment(Element.ALIGN_JUSTIFIED);
        output.add(p);
    }

    System.out.println("Done.");
    output.close();
    input.close();
    System.exit(0);
}
catch (Exception e) {
    e.printStackTrace();
    System.exit(1);
}
}
```
Any idea will be appreciated.
|
When I look at your code, I see a number of things that are odd.
1. You say you require UTF-8, but you create a `BaseFont` object using `BaseFont.CP1252` instead of `BaseFont.IDENTITY_H` (which is the "encoding" you need when you work with Unicode).
2. You use the standard Type 1 font Courier, which is a font that doesn't know how to render é,è,à... and a font that is never embedded. As documented, the `BaseFont.EMBEDDED` parameter is ignored in this case!
3. You don't use this font with an object that has actual content. The actual content is put into a `Paragraph` that is created using the default font "Helvetica", a font that doesn't know how to render é,è,à...
To solve this, you need to create the `Paragraph` with the appropriate font. That is **NOT** a standard type 1 font, but something like `courier.ttf`. You also need to use the appropriate encoding: `BaseFont.IDENTITY_H`.
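Here is a minimal sketch of those fixes inside the existing `try` block (untested; the font path is an assumption, point it at any Unicode-capable `.ttf` on your system):
```
BaseFont unicode = BaseFont.createFont("c:/windows/fonts/cour.ttf",
        BaseFont.IDENTITY_H, BaseFont.EMBEDDED);
Font font = new Font(unicode, 12, Font.NORMAL);

String line;
while (null != (line = input.readLine())) {
    // Pass the font explicitly so the actual content uses it.
    Paragraph p = new Paragraph(line, font);
    p.setAlignment(Element.ALIGN_JUSTIFIED);
    output.add(p);
}
```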
|
laravel 4, AVG and COUNT inside a join query
I'm trying to execute a query with avg and count inside the result rows, something like this:
```
SELECT r.id, avg( p.puntuacio ), count(p.puntuacio)
FROM receptes AS r, puntuacio_receptes_usuaris AS p
WHERE r.id = p.recepta_id
GROUP BY r.id
```
But I don't know how I can do this in Laravel, because in Eloquent I can't write avg or count inside a result row.
Thanks a lot
|
Query Builder (Fluent) method:
```
DB::table(DB::raw('receptes as r, puntuacio_receptes_usuaris as p'))
    ->select(array('r.id', DB::raw('avg( p.puntuacio ) as avg_p'), DB::raw('count(p.puntuacio) as count_p')))
    ->where('r.id', DB::raw('p.recepta_id'))
    ->groupBy('r.id')
    ->get();
```
This should work without any problem, but if you want to do it with your Eloquent model, I recommend using `JOIN` instead of using two tables in `FROM`.
You can access the avg and count results by accessing `avg_p` and `count_p`.
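For instance, a small usage sketch (the `$rows` variable name is just illustrative):
```
// $rows holds the result of the query above
foreach ($rows as $row) {
    echo $row->id . ': avg ' . $row->avg_p . ', count ' . $row->count_p . "\n";
}
```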
**Note**:
- `DB::raw()` instructs Laravel not to escape what's within, [see the doc](http://four.laravel.com/docs/queries#raw-expressions).
|
Force Inno Setup to create log even on install failure
I have an application with an Inno Setup installer. The application runs in the background. This means that when I upgrade the application (installing a new version over the old one), I need the installer to close the application, so all files can be replaced gracefully.
But it seems that sometimes it fails to close the application for some unknown reason. To get the reason, I'd like to log it. The problem is that since the installer can't replace the files, the user gets an error message, which stops the installer from finishing. And Inno Setup writes the log only after successfully finishing the installation.
The actual question is - is the last statement correct, or is there a way to configure the Inno Setup installer so it will create a log file even on an aborted installation? I have an idea to make my very own manual log file, which would be created on installation start and updated every time some method is executed, but I don't want to reinvent the wheel.
|
>
> And Inno Setup writes log only after successfully finishing installation.
>
>
>
No that's not correct.
The log is started as soon as the installer starts, even before it eventually fails.
So there's some log always (if enabled), no matter what.
The moment the installer window opens, the log already looks like:
```
2016-05-13 12:30:45.712 Log opened. (Time zone: UTC+02:00)
2016-05-13 12:30:45.712 Setup version: Inno Setup version 5.5.9 (u)
2016-05-13 12:30:45.712 Original Setup EXE: C:\path\mysetup.exe
2016-05-13 12:30:45.712 Setup command line: /SL5="$C0338,134364,121344,C:\path\mysetup.exe" /SPAWNWND=$200C5A /NOTIFYWND=$110C08 /log=install.log
2016-05-13 12:30:45.712 Windows version: 10.0.10586 (NT platform: Yes)
2016-05-13 12:30:45.712 64-bit Windows: Yes
2016-05-13 12:30:45.712 Processor architecture: x64
2016-05-13 12:30:45.712 User privileges: Administrative
2016-05-13 12:30:45.712 64-bit install mode: No
2016-05-13 12:30:45.712 Created temporary directory: C:\Users\martin\AppData\Local\Temp\is-K3VS6.tmp
```
---
The log is created only if:
- you ask for it using `/log=c:\path\setup.log` switch (as you can see in my log above) or
- the log is forced using the [`SetupLogging=yes` directive](https://jrsoftware.org/ishelp/index.php?topic=setup_setuplogging), in which case the log is created in `%TEMP%\Setup Log yyyy-mm-dd #xxx.txt` (see the snippet below).
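Forcing the log from the script takes a single directive in the `[Setup]` section of your `.iss` file:
```
[Setup]
SetupLogging=yes
```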
|
What is the difference in these two declarations?
```
List<String> someName = new ArrayList<String>();
ArrayList<String> someName = new ArrayList<String>();
```
1. Does it impact performance in any way?
2. The first one is a List of Objects and the latter one is an ArrayList of Objects. Correct me if I am wrong. I got confused because ArrayList implements the List interface.
3. Why do people declare it like this? Does it help in any situations?
4. When I am receiving some email addresses from the DB, what is the best way to collect them? A List of email address objects?
5. Finally one unrelated question... can an interface have two methods with the same name and signature, or the same name with different signatures?
|
The difference between the declarations is more one of style. It is preferable to declare variables using the abstract, rather than the concrete implementation, because you can change the implementation choice later without changing the variable type. For example, you might change the List to use a LinkedList instead.
If you always use the abstract type (interface or abstract class) wherever you can, especially in method signatures, the client code is free to use whatever implementation they prefer. This makes the code more flexible and easier to maintain.
This is true even of variable declarations. Consider this:
```
public abstract class MyListUsingClass {
    private List<String> list;

    protected MyListUsingClass(List<String> list) {
        this.list = list;
    }
    ...
}
```
If the variable `list` was declared as `ArrayList`, then *only* ArrayLists would be accepted in the constructor. This would be a poor choice: Always try to let the client code chose the implementations they want to use.
Regarding your last question: Interfaces have the same restrictions for methods as classes do, so yes you can [overload methods](http://download.oracle.com/javase/tutorial/java/javaOO/methods.html).
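A tiny illustration (a hypothetical interface, not from the question):
```
interface Formatter {
    String format(int value);            // OK
    String format(int value, int width); // OK: same name, different signature
    // String format(int value);        // error: duplicate name and signature
}
```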
|
Convert an array to dictionary with value as index of the item and key as the item itself
I have an array such as -
```
arr[0] = "Name";
arr[1] = "Address";
arr[2] = "Phone";
...
```
I want to create a `Dictionary<string, int>` such that the array values will be the dictionary keys and the dictionary values will be the index, so that I can get the index of a column by querying its name in `O(1)`. I know this should be fairly simple, but I can't get my head around it.
I tried -
```
Dictionary<string, int> myDict = arr.ToDictionary(x => x, x => indexOf(x))
```
however, this returns -
```
{(Name, 0), (Address, 0), (Phone, 0),...}
```
I know this happens because it is storing the index of the first occurrence, but that's not what I'm looking to do.
|
You can use the overload of `Select` which includes the index:
```
var dictionary = array.Select((value, index) => new { value, index })
                      .ToDictionary(pair => pair.value, pair => pair.index);
```
Or use `Enumerable.Range`:
```
var dictionary = Enumerable.Range(0, array.Length).ToDictionary(x => array[x]);
```
Note that `ToDictionary` will throw an exception if you try to provide two equal keys. You should think carefully about the possibility of your array having two equal values in it, and what you want to happen in that situation.
I'd be tempted just to do it manually though:
```
var dictionary = new Dictionary<string, int>();
for (int i = 0; i < array.Length; i++)
{
    dictionary[array[i]] = i;
}
```
|
How to Unit Test with ActionResult?
I have a xUnit test like:
```
[Fact]
public async void GetLocationsCountAsync_WhenCalled_ReturnsLocationsCount()
{
    _locationsService.Setup(s => s.GetLocationsCountAsync("123")).ReturnsAsync(10);
    var controller = new LocationsController(_locationsService.Object, null)
    {
        ControllerContext = { HttpContext = SetupHttpContext().Object }
    };

    var actionResult = await controller.GetLocationsCountAsync();

    actionResult.Value.Should().Be(10);
    VerifyAll();
}
```
Source is
```
/// <summary>
/// Get the current number of locations for a user.
/// </summary>
/// <returns>A <see cref="int"></see>.</returns>
/// <response code="200">The current number of locations.</response>
[HttpGet]
[Route("count")]
public async Task<ActionResult<int>> GetLocationsCountAsync()
{
    return Ok(await _locations.GetLocationsCountAsync(User.APropertyOfTheUser()));
}
```
The value of the result is null, causing my test to fail, but if you look at `ActionResult.Result.Value` (an internal property) it contains the expected resolved value.
See the following screen capture of the debugger.
[](https://i.stack.imgur.com/MscJr.png)
How do I get the actionResult.Value to populate in a unit test?
|
At run time your original code under test would still work because of the implicit conversion.
But based on the provided debugger image it looks like the test was asserting on the wrong property of the result.
So while changing the method under test allowed the test to pass, it would have worked when run live either way
`ActionResult<TValue>` has two properties that are set depending on what is returned from the action that uses it.
```
/// <summary>
/// Gets the <see cref="ActionResult"/>.
/// </summary>
public ActionResult Result { get; }
/// <summary>
/// Gets the value.
/// </summary>
public TValue Value { get; }
```
[Source](https://github.com/aspnet/Mvc/blob/master/src/Microsoft.AspNetCore.Mvc.Core/ActionResultOfT.cs)
So when the controller action returned using `Ok()` it would set the `ActionResult<int>.Result` property of the action result via implicit conversion.
```
public static implicit operator ActionResult<TValue>(ActionResult result)
{
    return new ActionResult<TValue>(result);
}
```
But the test was asserting the `Value` property (refer to image in OP), which in this case was not being set.
Without having to modify the code under test to satisfy the test it could have accessed the `Result` property and make assertions on that value
```
[Fact]
public async Task GetLocationsCountAsync_WhenCalled_ReturnsLocationsCount() {
    //Arrange
    _locationsService
        .Setup(_ => _.GetLocationsCountAsync(It.IsAny<string>()))
        .ReturnsAsync(10);
    var controller = new LocationsController(_locationsService.Object, null) {
        ControllerContext = { HttpContext = SetupHttpContext().Object }
    };

    //Act
    var actionResult = await controller.GetLocationsCountAsync();

    //Assert
    var result = actionResult.Result as OkObjectResult;
    result.Should().NotBeNull();
    result.Value.Should().Be(10);
    VerifyAll();
}
```
|
Android - Understanding View.getLocalVisibleRect(Rect)
I'm looking for any clue to understand this method.
There is no info in [official Android's SDK documentation](http://developer.android.com/reference/android/view/View.html#getLocalVisibleRect%28android.graphics.Rect%29) about it.
What kind of rectangle does it return?
Is it filled with raw coordinates like in MotionEvent?
What if this view isn't visible? Does it return null? Or some rectangle with some kind of VIEW_INVISIBLE values inside?
Can anyone who has experience of working with this method give me a hand?
|
From the [JavaDoc of **getGlobalVisibleRect**](https://github.com/android/platform_frameworks_base/blob/1c394559d9aa2c160a929164b761728a79eb8bed/core/java/android/view/View.java#L10402):
```
/**
* If some part of this view is not clipped by any of its parents, then
* return that area in r in global (root) coordinates. To convert r to local
* coordinates (without taking possible View rotations into account), offset
* it by -globalOffset (e.g. r.offset(-globalOffset.x, -globalOffset.y)).
* If the view is completely clipped or translated out, return false.
*
* @param r If true is returned, r holds the global coordinates of the
* visible portion of this view.
* @param globalOffset If true is returned, globalOffset holds the dx,dy
* between this view and its root. globalOffet may be null.
* @return true if r is non-empty (i.e. part of the view is visible at the
* root level.
*/
```
getLocalVisibleRect calls getGlobalVisibleRect and then makes it local as suggested:
```
r.offset(-offset.x, -offset.y); // make r local`
```
So:
- It doesn't return a rectangle, it returns a boolean. But it can fill in the rectangle you pass, which must be an `android.graphics.Rect`;
- The rectangle `r` will be filled with local coordinates;
- I'm not sure, but I think it's the same for visible and invisible views, while it should return false for views with `visibility="gone"` - see the sketch below.
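A minimal usage sketch (assuming `view` is any attached `View`; `Rect` is `android.graphics.Rect`):
```
Rect visible = new Rect();
if (view.getLocalVisibleRect(visible)) {
    // Part of the view is visible; 'visible' now holds that part's bounds
    // in the view's own (local) coordinate space.
} else {
    // The view is completely clipped or translated out of the visible area.
}
```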
|
What are "multikernel computers"?
What does the term 'modern multikernel computers' refer to? Distributed systems or multi-core computers?
I saw it in a one-line question, so there was no context to help! Google didn't help either.
|
>
> distributed systems or multi-core computers?
>
>
>
Both. It's something like treating a multicore computer like a distributed system.
The *Multikernel* approach is described in this paper by Andrew Baumann et al.: [The Multikernel: A new OS architecture for scalable multicore systems](http://research.microsoft.com/pubs/101903/paper.pdf) (published by the ACM, 2009). You'll find every possible information there, but here are the most important parts from the abstract.
First, why is there are need for something like this?
>
> Commodity computer systems contain more and more processor
> cores and exhibit increasingly diverse architectural tradeoffs, including memory hierarchies, interconnects, instruction sets and
> variants, and I/O configurations. Previous high-performance computing systems have scaled in specific cases, but the dynamic nature
> of modern client and server workloads, coupled with the impossibility of statically optimizing an OS for all workloads and hardware
> variants pose serious challenges for operating system structures.
>
>
>
Then, what *is* multikernel?
>
> We argue that the challenge of future multicore hardware is best
> met by embracing the networked nature of the machine, rethinking
> OS architecture using ideas from distributed systems. We investigate a new OS structure, the multikernel, that treats the machine as a
> network of independent cores, assumes no inter-core sharing at the
> lowest level, and moves traditional OS functionality to a distributed
> system of processes that communicate via message-passing.
>
>
>
Finally, you can even try it. [Barrelfish](http://www.barrelfish.org/) is a multikernel operating system, released under the MIT open source license, created by the ETH Zürich and Microsoft.
Related to this, yet a bit earlier, also by Andrew Baumann et al.: [Your computer is already a distributed system. Why isn’t your OS?](http://www.barrelfish.org/barrelfish_hotos09.pdf).
|
PHP: Download file script not working on iPad
I have a file download script that I have written, which reads files from below public\_html and allows the user to download them after checking to see if the user is logged in and that the file is a valid file for them to download.
An issue I've recently come across is that on an iPad it just fails to do anything when the link is clicked.
Example download file code after all the checks have been done:
```
header("Pragma: public");
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
header("Cache-Control: private",false);
header("Content-Type: application/msword");
header("Content-Disposition: attachment; filename=\"file.doc\";" );
header("Content-Length: 50688");
readfile(SITE_PATH .'/files/file.doc');
```
This script has been tested and checked on PC, Mac and Linux machines in multiple browsers (FF, Opera, IE6-9, Chrome, Safari) and all seem to work fine, so it must be something that the iPad does differently.
I'd imagine it's something to do with the iPad not actually having a file structure as such to download files to, but I'm not certain.
Has anyone come across this problem before? If so, is there a fix?
|
iOS Safari does not support file downloads.
Update: but if you are looking to open the .doc files on iPad, then yes, you can do that.
Use the following:
```
header("Pragma: public");
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
header("Cache-Control: private",false);
header("Content-Type: application/msword");
readfile('file.doc');
```
The only difference between your code and mine is that I removed the attachment header.
Just remove these headers:
```
header("Content-Disposition: attachment; filename=\"file.doc\";" );
header("Content-Length: 50688");
```
Actually, you can check the client operating system: if it is iOS, then don't add the download headers, like this:
```
header("Pragma: public");
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
header("Cache-Control: private",false);
header("Content-Type: application/msword");
if (!Operating_System_Is_IOS)
{
header("Content-Disposition: attachment; filename=\"file.doc\";" );
header("Content-Length: 50688");
}
readfile(SITE_PATH .'/files/file.doc');
```
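A minimal sketch of that `Operating_System_Is_IOS` check via the User-Agent header (the matching pattern is an assumption, not bulletproof):
```
// Rough iOS detection via User-Agent; adjust the pattern as needed.
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
$operatingSystemIsIOS = (bool) preg_match('/iPad|iPhone|iPod/', $ua);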
|
Enumerate all Keychain items in my iOS application
What's the easiest way to programmatically (from within my app) get all items stored in the keychain?
It probably has something to do with SecItemCopyMatching(), but the documentation for that function is not very clear (and I failed to find a decent sample on the web).
|
`SecItemCopyMatching` is the right call for that. First we build our query dictionary so that the items' attributes are returned in dictionaries, and that all items are returned:
```
NSMutableDictionary *query = [NSMutableDictionary dictionaryWithObjectsAndKeys:
(__bridge id)kCFBooleanTrue, (__bridge id)kSecReturnAttributes,
(__bridge id)kSecMatchLimitAll, (__bridge id)kSecMatchLimit,
nil];
```
As `SecItemCopyMatching` requires at least the class of the returned `SecItem`s, we create an array with all the classes…
```
NSArray *secItemClasses = [NSArray arrayWithObjects:
(__bridge id)kSecClassGenericPassword,
(__bridge id)kSecClassInternetPassword,
(__bridge id)kSecClassCertificate,
(__bridge id)kSecClassKey,
(__bridge id)kSecClassIdentity,
nil];
```
...and for each class, set the class in our query, call `SecItemCopyMatching`, and log the result.
```
for (id secItemClass in secItemClasses) {
[query setObject:secItemClass forKey:(__bridge id)kSecClass];
CFTypeRef result = NULL;
SecItemCopyMatching((__bridge CFDictionaryRef)query, &result);
NSLog(@"%@", (__bridge id)result);
if (result != NULL) CFRelease(result);
}
```
In production code, you should check that the `OSStatus` returned by `SecItemCopyMatching` is either `errSecItemNotFound` (no items found) or `errSecSuccess` (at least one item was found).
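For instance, a minimal sketch of that check (variable names as above):
```
OSStatus status = SecItemCopyMatching((__bridge CFDictionaryRef)query, &result);
if (status == errSecSuccess) {
    NSLog(@"%@", (__bridge id)result);
} else if (status != errSecItemNotFound) {
    // Any other status is a real error worth logging.
    NSLog(@"SecItemCopyMatching failed: %d", (int)status);
}
if (result != NULL) CFRelease(result);
```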
|
Two ISPs and multipath gateway configuration
I have two different ISPs. I want to set up some kind of load balancing that will distribute packets to those providers. I know this can be done using different routing tables, but I wanted to use something called a "multipath gateway".
I've configured both interfaces in the `/etc/network/interfaces` file. Both of the connections work separately. I replaced the default gateways with the one below:
```
# ip route add default \
nexthop via 192.168.1.1 dev bond0 weight 1 \
nexthop via 10.143.105.17 dev wwan0 weight 1
```
I added masquerade targets in `iptables` on both of the interfaces:
```
iptables -t nat -A POSTROUTING -o wwan0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o bond0 -j MASQUERADE
```
Also I set reverse path filtering to loose mode via `sysctl`:
```
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
```
This setup works. Packets (connections) are sent via both interfaces. There's just one problem I don't get.
When I want to check my IP address using the following commands:
```
$ curl text.whatisyourip.org
$ curl eko.one.pl/host.php
```
The IP address is different in both cases, which means that the mechanism works well. I can also see it working in `wireshark`. But when I try to send, for instance, multiple requests to the first of the domains above, I always get the same IP address in response. So it looks like packets destined for a specific IP address always go through the same interface. I'm just wondering why. Is there any mechanism that remembers the destination IP addresses of previous requests and makes subsequent requests to the same addresses go through the same interface?
|
I've managed to solve the problem. In [this link](https://kernelnewbies.org/Linux_4.4#Networking) you can read the following:
>
> IPv4: Hash-based multipath routing. When the routing cache was removed
> in 3.6, the IPv4 multipath algorithm changed from more or less being
> destination-based into being quasi-random per-packet scheduling. This
> increased the risk of out-of-order packets and made it impossible to
> use multipath together with anycast services. In this release, the
> multipath routing implementation is replaced with a flow-based load
> balancing based on a hash over the source and destination addresses
> merge commit
>
>
>
So even though the routing cache was removed in kernel 3.6, flows are still pinned to a path: the hash over the source and destination addresses decides which route is used. That's why packets to the same destination always go through the same interface.
|
C++ variadic template with doubles
The following code
```
#include <initializer_list>
#include <vector>
template<int ...>
const std::vector<int>*make_from_ints(int args...)
{ return new std::vector<int>(std::initializer_list<int>{args}); }
```
is compiling (with GCC 6.3, on Debian/Sid/x86-64) correctly, and I expect it for a call like
```
auto vec = make_from_ints(1,2,3);
```
to return a pointer to some vector of integers containing 1, 2, 3.
However, if I replace `int` by `double`, that is if I add the following (in the same `basiletemplates.cc` file ...) code:
```
template<double ...>
const std::vector<double>*make_from_doubles(double args...)
{ return new std::vector<double>(std::initializer_list<double>{args}); }
```
I'm getting a compile error:
```
basiletemplates.cc:8:17: error: ‘double’ is not a valid type
for a template non-type parameter
template<double ...>
^~~
```
and I don't understand why. After all both `int` and `double` are scalar numerical POD types (predefined in the C++11 standard).
How to get a template variadic function to be able to code:
```
auto dvec = make_from_doubles(-1.0, 2.0, 4.0);
```
and get a pointer to some vector of doubles containing -1.0, 2.0, 4.0 ?
BTW, compiling for C++14 (with `g++ -Wall -std=c++14 -c basiletemplates.cc`), and using `clang++` (version 3.8.1) instead of `g++`, doesn't change anything.
|
```
template<int ...>
const std::vector<int>*make_from_ints(int args...)
{ return new std::vector<int>(std::initializer_list<int>{args}); }
```
The snippet above has a multitude of issues:
- Returning a `const std::vector<int>*` instead of a `std::vector<int>` and unnecessarily using dynamic allocation.
- Even if you wanted to use dynamic allocation, you should use `std::make_unique` instead of `new`.
- You defined `make_from_ints` to be a template function that takes any number of `int` template parameters, but you're not giving those `int`s a name - you cannot ever use them!
- Your signature is actually being parsed as `make_from_ints(int args, ...)` - this is a C `va_args` signature that has nothing to do with variadic templates.
- The correct syntax for an argument pack is `type... name`.
If you want to accept any number of arguments of a specific type that works nicely with *template argument deduction*, the easiest way is to use a regular *variadic template* that accepts an arbitrary amount of types and `static_assert`s their type *(or uses `std::enable_if` for SFINAE-friendliness)*. Here's an example:
```
template <typename... Ts>
auto make_from_ints(Ts... xs)
{
static_assert((std::is_same<Ts, int>::value && ...));
return std::vector<int>{xs...};
}
template <typename... Ts>
auto make_from_doubles(Ts... xs)
{
static_assert((std::is_same<Ts, double>::value && ...));
return std::vector<double>{xs...};
}
```
Usage:
```
for(auto x : make_from_ints(1,2,3,4)) std::cout << x << " ";
std::cout << "\n";
for(auto x : make_from_doubles(1.0,1.5,2.0,2.5)) std::cout << x << " ";
```
>
> 1 2 3 4
>
>
> 1 1.5 2 2.5
>
>
>
[**live example on wandbox**](https://wandbox.org/permlink/vSwLIjqucC27OScg)
---
Note that I'm using a [C++17 *fold expression*](http://en.cppreference.com/w/cpp/language/fold) to check if all `Ts...` are of a particular type here:
```
static_assert((std::is_same<Ts, int>::value && ...));
```
If you do not have access to C++17 features, this can be easily replaced with something like:
```
template <typename... Ts>
constexpr auto all_true(Ts... xs)
{
for(auto x : std::initializer_list<bool>{xs...})
if(!x) return false;
return true;
}
// ...
static_assert(all_true(std::is_same<Ts, int>{}...));
```
|
Scroll a hidden view/layout from bottom
This is what I want to achieve :
[](https://i.stack.imgur.com/LJbwj.png)
I wanted to use AbsoluteLayout but it is deprecated.
So I made a RelativeLayout beneath the blue view in the image above, and then put everything inside a ScrollView, but the hidden view is still 'on' the blue view, and not below it. Also, the screen scrolls, but the hidden part is just cut off, and instead I see my app's default background.
Any ideas?
EDIT:
My current try:
```
<?xml version="1.0" encoding="utf-8"?>
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:fillViewport="true"
android:layout_height="wrap_content">
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:animateLayoutChanges="true"
android:orientation="vertical" >
<ImageView
android:id="@+id/imageView"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:scaleType="centerCrop"
android:src="@drawable/imageView" />
<LinearLayout
android:id="@+id/centerHolder"
android:layout_width="300dp"
android:layout_height="wrap_content"
android:layout_centerInParent="true"
android:orientation="vertical" >
.....
.....
</LinearLayout>
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="1000dp"
android:layout_below="@id/main_holder"
android:background="@color/black_color">
</RelativeLayout>
</RelativeLayout>
</ScrollView>
```
|
I am taking this from a project of mine which displays a `RecyclerView` where you can add data if you click on a row - because the click "opens" the bottom sheet.
```
<android.support.design.widget.CoordinatorLayout
android:layout_width="match_parent"
android:layout_height="match_parent">
<RelativeLayout
android:id="@+id/rl_main"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".view.fragment.BlockFragment">
<include
android:id="@+id/ll_header"
layout="@layout/layout_header_names" />
<include
android:id="@+id/divider_header"
layout="@layout/layout_divider_horizontal"
android:layout_width="match_parent"
android:layout_height="1dp"
android:layout_below="@+id/ll_header" />
<android.support.v7.widget.RecyclerView
android:id="@+id/rv_block"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_above="@+id/divider_footer"
android:layout_below="@+id/divider_header" />
<include
android:id="@+id/divider_footer"
layout="@layout/layout_divider_horizontal"
android:layout_width="match_parent"
android:layout_height="1dp"
android:background="#767676"
android:layout_above="@+id/ll_footer" />
<include
android:id="@+id/ll_footer"
layout="@layout/layout_footer_score"
android:layout_width="match_parent"
android:layout_height="?attr/actionBarSize"
android:layout_alignParentBottom="true"/>
</RelativeLayout>
<!-- Here comes my bottom sheet.
It is wrapped inside a FrameLayout, because an include cannot
have a behaviour. The included layout is every layout you
can imagine - mine is a RelativeLayout with two EditTexts
for example. The layout_behaviour is the second important line. -->
<FrameLayout
android:id="@+id/container_bottom_sheet"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:background="#e3e3e3"
app:layout_behavior="android.support.design.widget.BottomSheetBehavior">
<include layout="@layout/layout_bottom_sheet"/>
</FrameLayout>
</android.support.design.widget.CoordinatorLayout>
```
For the behaviour itself, you'll need to get the `FrameLayout` (the `View` with the `app:layout_behavior="android.support.design.widget.BottomSheetBehavior"`).
```
private BottomSheetBehavior bottomSheetBehavior;
bottomSheetBehavior = BottomSheetBehavior.from((FrameLayout) findViewById(R.id.container_bottom_sheet));
//for the sheet to "peek":
bottomSheetBehavior.setPeekHeight(200);
//now you can set the states:
bottomSheetBehavior.setState(BottomSheetBehavior.STATE_COLLAPSED);
bottomSheetBehavior.setState(BottomSheetBehavior.STATE_EXPANDED);
```
It is also possible to set a `BottomSheetCallback()` in which you can get all the state changes and also the slideOffset!
```
bottomSheetBehavior.setBottomSheetCallback(new BottomSheetBehavior.BottomSheetCallback() {
@Override
public void onStateChanged(@NonNull View bottomSheet, int newState) {
switch (newState) {
case BottomSheetBehavior.STATE_DRAGGING:
case BottomSheetBehavior.STATE_EXPANDED:
break;
case BottomSheetBehavior.STATE_COLLAPSED:
default:
}
}
@Override
public void onSlide(@NonNull View bottomSheet, float slideOffset) {
}
});
```
|
When does XNA discard Render Target contents?
I understand that Render Targets in XNA are volatile, but how volatile are they? I can't find any documentation that tells *when exactly* their contents are discarded. Is it just when you start drawing to them, or could it be at any time?
I would like to simply draw to a render target once and then use it as a Texture2D indefinitely. Is this possible? Would I need to enable `RenderTargetUsage.PreserveContents` for this to work properly? I have read that PreserveContents is very slow on Xbox and phone hardware.
|
Jonathan has given a great explanation of `RenderTargetUsage`, but I'd like to clarify some things with regard to the original question, involving the use of a `RenderTarget2D` as a `Texture2D`:
Put simply, `RenderTargetUsage` controls the behaviour of the surface when it is set on the device as a render target. If it is not set to preserve the contents, they will be cleared. On Xbox 360 this happens as a quirk of the hardware (for performance). On Windows this is emulated to match Xbox 360 by simply clearing the surface when you set it.
**But, if you are using it as a texture, this is irrelevant.** It's only an issue when you want to render to the one surface multiple times (setting it to the device each time) without losing the previous contents.
If you only set it as the render target once, at the start of your game, it won't matter what the preserve setting is because it never gets set as a render target again.
**What you do care about is** the `IsContentLost` property ([MSDN](http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.rendertarget2d.iscontentlost.aspx)) or the `ContentLost` event ([MSDN](http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.rendertarget2d.contentlost.aspx)). These will be set/raised if the contents that you rendered to the render target were lost (due to device changes, eg: going full-screen).
If you're re-rendering the render target each frame, then you don't need to check for content-lost. But if you expect to create your render targets at the start of your game and continue using them throughout, you need to ensure they haven't been lost after the frame on which they were created.
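A minimal sketch of that check (assuming a `RenderTarget2D` field named `cachedTarget` and a redraw helper of your own):
```
// Before using the cached target as a texture each frame:
if (cachedTarget.IsContentLost)
{
    // Contents were lost (e.g. a device reset), so re-render them once.
    RedrawCachedTarget(); // hypothetical helper that redraws the target
}
spriteBatch.Draw(cachedTarget, Vector2.Zero, Color.White);
```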
(Note that you don't need to worry about content-lost happening part-way through your `Draw` call. I'm reasonably certain that it can only ever happen between frames.)
|
How to set deadline for BigQuery on Google App Engine
I have a Google App Engine program that calls BigQuery for data.
The query usually takes 3 - 4.5 seconds and is fine but sometimes takes over five seconds and throws this error:
>
> DeadlineExceededError: The API call urlfetch.Fetch() took too long to respond and was cancelled.
>
>
>
This [article](https://developers.google.com/appengine/articles/deadlineexceedederrors) shows the deadlines and the different kinds of deadline errors.
Is there a way to set the deadline for a BigQuery job to be above 5 seconds? Could not find it in the BigQuery API docs.
|
BigQuery queries are fast, but often take longer than the default App Engine urlfetch timeout. The BigQuery API is async, so you need to break up the steps into API calls that each take less than 5 seconds.
For this situation, I would use the [App Engine Task Queue](https://developers.google.com/appengine/docs/python/taskqueue/overview-pull#Python_Using_the_task_queue_REST_API_with_the_Python_Google_API_library):
1. Make a call to the BigQuery API to insert your job. This returns a JobID.
2. Place a task on the App Engine task queue to check the status of the BigQuery job with that ID.
3. If the BigQuery Job Status is not "DONE", place a new task on the queue to check it again.
4. If the Status is "DONE," then make a call using urlfetch to retrieve the results.
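A rough sketch of those steps with the Python API client and the task queue (the handler URLs, helper names, and the query body are illustrative assumptions, not from any official sample):
```
# Hypothetical sketch: insert an async BigQuery job, then poll it from tasks.
from google.appengine.api import taskqueue

def start_query(service, project_id):
    body = {'configuration': {'query': {'query': 'SELECT ...'}}}
    job = service.jobs().insert(projectId=project_id, body=body).execute()
    job_id = job['jobReference']['jobId']
    # Enqueue a task that will check on the job shortly.
    taskqueue.add(url='/tasks/check_job', params={'job_id': job_id},
                  countdown=2)

def check_job(service, project_id, job_id):
    job = service.jobs().get(projectId=project_id, jobId=job_id).execute()
    if job['status']['state'] != 'DONE':
        # Not finished yet: re-enqueue and try again in a few seconds.
        taskqueue.add(url='/tasks/check_job', params={'job_id': job_id},
                      countdown=5)
    else:
        # Finished: this call is itself short enough for urlfetch.
        return service.jobs().getQueryResults(
            projectId=project_id, jobId=job_id).execute()
```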
|
Can multiple threads write data into a file at the same time?
If you have ever used p2p downloading software, you know it can download a file with multiple threads yet create only one file. So I wonder how the threads write data into that file: sequentially or in parallel?
Imagine that you want to dump a big database table to a file: how would you make this job faster?
|
You can use multiple threads to write to a file, e.g. a log file, but you have to co-ordinate your threads as @Thilo points out. Either you need to synchronize file access and only write whole records/lines, or you need a strategy for allocating regions of the file to different threads, e.g. re-building a file with known offsets and sizes.
This is rarely done for performance reasons as most disk subsystems perform best when being written to sequentially and disk IO is the bottleneck. If CPU to create the record or line of text (or network IO) is the bottleneck it can help.
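For example, a sketch of the region-allocation strategy in Java (assuming each worker is handed a non-overlapping offset up front):
```
import java.io.RandomAccessFile;

class ChunkWriter {
    // Each worker writes only its own region; since regions never overlap,
    // the writes themselves need no locking.
    static void writeChunk(String path, long offset, byte[] data) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile(path, "rw")) {
            file.seek(offset);
            file.write(data);
        }
    }
}
```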
>
> Imagine that you want to dump a big database table to a file: how would you make this job faster?
>
>
>
Writing it sequentially is likely to be the fastest.
|
Prop and bool in Coq
How can I use a comparison of two natural numbers in an if-statement?
```
if 1 = 2 then 1 else 2
```
`1 = 2` is of course `Prop` and not `bool`.
|
I don't understand how dfan's answer is related to the question...
Of course, `1 = 2` is a `Prop`, it is the statement that 1 is equal to 2. Hopefully you don't have a proof of this statement...
What you want is a function that, given two natural numbers, `1` and `2`, returns `true` if they are equal, and `false` if they aren't.
The library `Coq.Arith.EqNat` gives you such a function, named `beq_nat`.
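For example, a quick sketch using it (assuming the import):
```
Require Import Coq.Arith.EqNat.
(* beq_nat returns a bool, so it works directly in an if: *)
Eval compute in (if beq_nat 1 2 then 1 else 2). (* = 2 *)
```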
In fact, you might want something even better, a function that returns a proof of equality or a proof of difference:
```
(* In Coq.Arith.Peano_dec *)
Theorem eq_nat_dec : forall n m, {n = m} + {n <> m}.
(* ^ a proof that n = m
^ or a proof that n <> m *)
```
`if` is overloaded to work with such things, so you can even write:
```
if eq_nat_dec 2 3 then ... else ...
```
|
if localStorage key value doesn't exist
I am trying to hide my div if there is no localStorage key value.
With the line below I achieved only hiding the div when the localStorage key is completely deleted, but I need to do the same if the localStorage **key hasn't got any value** at all, just `[]`.
```
window.localStorage.getItem('items') === null
```
How would it be performed?
|
You can add required conditions using the `OR` operator `||`
```
var items = window.localStorage.getItem('items')
if (items === null || items.length === 0)
{
// items is null or '' (empty string); note that getItem returns a string,
// so a stored '[]' is a two-character string - see the EDIT below
}
```
If you have to check for `undefined` somewhere as well you can change `=== null` to `== null` or expand with an extra condition like this
```
if (items === undefined || items === null || items.length === 0)
{
// items is undefined, null or '' (empty string)
}
```
**EDIT:** Here is what you can do to get the array directly
```
var items = JSON.parse(window.localStorage.getItem('items'))
if (items === null || items.length === 0)
{
// items is null or []
}
```
|
Time series cross section forecasting with R
I have a (I suspect) simple question. I have time series cross section data on voting behaviour in the Council of the European Union (the monthly number of yes, no and abstentions for each member state from 1999 to 2007). So basically the variables are counts, thus a Poisson/negative binomial regression would be appropriate, possibly with lagged dependent variables on the right hand side to control for time dependencies. I have seen papers with people using such negative binomial models to forecast, for instance the number of monthly legislative acts adopted in the future, and I have three questions in this regard:
1. How can i run a negative binomial regression on panel data without making any inferential mistakes?
2. How can I use a negative binomial model with lags to forecast future values of the dependent variable.
3. Can this be done in R?
Thomas
|
After a bit of research, I can give a partial answer. In his [book](http://rads.stackoverflow.com/amzn/click/0262232197) Wooldridge discusses Poisson and negative binomial regressions for cross-section and panel data, but for regression with lagged variables he only discusses Poisson regression. Maybe negative binomial is discussed in the new edition. The main conclusion is that a random effects Poisson regression with a lagged dependent variable can be estimated by a mixed effects Poisson regression model. The detailed description can be found [here](http://www.cemmap.ac.uk/wps/cwp0218.pdf). Mixed effects Poisson regression in R can be estimated with **glmer** from package **lme4**. To adapt it to work with panel data, you will need to create the lagged variable explicitly. Then your estimation command should look something like this:
```
glmer(y~lagy+exo+(1|Country),data,family=quasipoisson)
```
You should also look into the **gplm** package suggested by @dickoa, but be sure to check whether it supports lagged variables. Yves Croissant, the creator of the gplm and plm packages, writes wonderful code, but unfortunately in my personal experience the code is not tested enough, so bugs crop up more frequently than in standard R packages.
|
Column does not exist in the IN clause, but SQL runs
I have a query that uses the `IN` clause. Here's a simplified version:
```
SELECT *
FROM table A
JOIN table B
ON A.ID = B.ID
WHERE B.AnotherColumn IN (SELECT Column FROM tableC WHERE ID = 1)
```
`tableC` doesn't have a `Column` column, but the query executes just fine with no error message. Can anyone explain why?
|
This will work if a table in the *outer* query has a column of that name. This is because column names from the outer query are available to the subquery, and you could be deliberately meaning to select an outer query column in your subquery SELECT list.
For example:
```
CREATE TABLE #test_main (colA integer)
CREATE TABLE #test_sub (colB integer)
-- Works, because colA is available to the sub-query from the outer query. However,
-- it's probably not what you intended to do:
SELECT * FROM #test_main WHERE colA IN (SELECT colA FROM #test_sub)
-- Doesn't work, because colC is nowhere in either query
SELECT * FROM #test_main WHERE colA IN (SELECT colC FROM #test_sub)
```
As Damien observes, the safest way to protect yourself from this none-too-obvious "gotcha" is to get into the habit of qualifying your column names in the subquery:
```
-- Doesn't work, because colA is not in table #test_sub, so at least you get
-- notified that what you were trying to do doesn't make sense.
SELECT * FROM #test_main WHERE colA IN (SELECT #test_sub.colA FROM #test_sub)
```
|
How to find out how much memory a program is taking
I use htop to watch my processes, but most of the services running are multi-threaded, so they are shown on several lines with a % of memory use, which in some cases is the same for all, and sometimes varies among them.
Say in the case of Firefox it can list 15 processes each consuming 13% of memory.
Can I find out how much memory Firefox, MySQL, some fcgi script or anything else is consuming, since the htop output seems misleading?
|
Most top implementations have a way to turn the display of threads on or off.
- htop: in the “Setup / Display options” menu, “Hide userlands threads”.
- Linux top: press `H` to toggle the showing of threads (but they're off by default).
- OpenBSD top: press `T` to toggle the showing of threads (but they're off by default).
Note that memory mappings, and hence memory occupation, are a property of a process, so you'll always see the same numbers for every thread in a process. If you see different numbers, it means there are multiple processes.
There's no easy way to find out the total memory consumption of a set of processes because the concept isn't well-defined. Some of the memory may be shared; this happens all the time with shared libraries, and in addition related processes (such as multiple instances of a server) are more likely to use shared memory to exchange data. If you just add the figures, you'll often get a number that's a lot larger than the actual used memory.
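For a rough per-process (not per-thread) overview you can also ask `ps` directly, keeping the shared-memory caveat above in mind:
```
ps -eo pid,comm,rss --sort=-rss | head
```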
|
Eclipse key mappings confused
I'm trying to use eclipse on Win 7, but for some reason it is screwing up my keys.
For example, when I press '{' it ends up displaying a '^'. I'm pretty sure that this is what would happen if I had a French keyboard layout, but I'm pretty sure that my language settings in Windows 7 are set to use Canadian English. Firefox and Notepad don't have a problem, so I guess that means it's Eclipse.
Any idea how to fix it? Right now I'm just copying and pasting symbols lol.
|
Windows XP has a feature that swaps keyboard layout if you hit a key combination (default is left alt + shift). Since it appears to remap on a per-application basis, you might think it is a problem with Eclipse since it won't appear to affect another application. This feature is especially a problem with Eclipse since some useful keybindings may use alt + shift + something and if you don't press the something part, your keyboard layout will switch.
To disable, open the Text Services and Input Languages dialog (easiest way is to right click on the task bar, choose Toolbars > Language bar then right click on the Language bar icon and select Settings...)
In the dialog, click on the Key Settings button. In the Advanced Key Settings, select Switch between input languages and click Change Key Sequence... Deselect the Switch input languages and Switch keyboard layouts checkboxes then hit OK.
While you're in the Advanced Key Settings dialog, if you have any of the language specific hotkeys set, you may want to disable those as well. Just select Switch to (language) and then repeat deselecting the Enable key sequence checkbox and hit OK.
By the way, if you don't want to disable this and your keyboard gets remapped to a foreign language layout, just toggle it back by pressing Alt+Shift (or whatever the combo is).
|
How can I make text in a UILabel shrink font size
If a UILabel contains too much text, how can I set up my label so that it shrinks the font size?
Here is how I am setting up my UILabel:
```
descriptionLabel = [[UILabel alloc] initWithFrame:CGRectMake(200, 30, 130, 150)];
[descriptionLabel setFont:[Utils getSystemFontWithSize:14]];
[descriptionLabel setBackgroundColor:[UIColor clearColor]];
[descriptionLabel setTextColor:[UIColor whiteColor]];
descriptionLabel.numberOfLines = 1;
[self addSubview:descriptionLabel];
```
|
```
descriptionLabel.adjustsFontSizeToFitWidth = YES;
descriptionLabel.minimumFontSize = 10.0; //adjust to preference obviously
```
The following example is tested and verified on iPhone Simulator 3.1.2:
```
UILabel *descriptionLabel = [[UILabel alloc] initWithFrame:CGRectMake(90, 0, 200, 30)];
descriptionLabel.font = [UIFont systemFontOfSize:14.0];
descriptionLabel.minimumFontSize = 10.0;
descriptionLabel.adjustsFontSizeToFitWidth = YES;
descriptionLabel.numberOfLines = 1;
descriptionLabel.text = @"supercalifragilisticexpialidocious even thought he sound of it is something quite attrocious";
```
|
How to convert a InputStream to ZIP format?
I have an InputStream object which is actually a ZIP file. I want to change it back to a ZIP file and save it. I am using DWR's FileTransfer class object to receive the uploaded data from the client.
[FileTransfer](http://www.jarvana.com/jarvana/view/org/directwebremoting/dwr/3.0.M1/dwr-3.0.M1-javadoc.jar!/org/directwebremoting/io/FileTransfer.html) has 3 methods; [getInputStream()](http://www.jarvana.com/jarvana/view/org/directwebremoting/dwr/3.0.M1/dwr-3.0.M1-javadoc.jar!/org/directwebremoting/io/FileTransfer.html#getInputStream%28%29) is one of them. It returns an InputStream from the FileTransfer object.
In my case, the FileTransfer object holds the ZIP file as well as the InputStream object.
I have done a lot of searching on Google, but I am not able to find one example that illustrates InputStream-to-ZIP conversion.
**Update**
```
String zipName = file.getName();
String zipType = file.getMimeType();
InputStream zipStream = file.getInputStream();
ZipInputStream zis = new ZipInputStream(zipStream);
System.out.println("File Name: "+zipName+"\n"+"File Type: "+zipType);
int c;
File f2 = new File(DATA_STORE_LOC+dat+".zip");
path.setPath2(DATA_STORE_LOC+dat+".zip");
FileOutputStream fos = new FileOutputStream(f2);
ZipOutputStream zos = new ZipOutputStream(fos);
c = zis.read();
System.out.println(c);
while ((c = zis.read(BUFFER)) != -1) {
zos.write(BUFFER, 0, c);
}
zos.close();
zis.close();
```
I tried this code, thinking of a typical file copy program. I know it is wrong; I just tried it. It gives me `java.util.zip.ZipException: ZIP file must have at least one entry`.
Any suggestion would be really appreciated!
|
See the java2s examples, [input](http://www.java2s.com/Tutorial/Java/0180__File/0602__ZipInputStream.htm) and [output](http://www.java2s.com/Tutorial/Java/0180__File/0601__ZipOutputStream.htm). If you have more questions feel free to ask them :)
For clarity, in [this input example](http://www.java2s.com/Tutorial/Java/0180__File/CreateZipInputStreamfromFileInputStream.htm) you should do something like:
```
// FileInputStream fin = new FileInputStream(args[i]);
ZipInputStream zin = new ZipInputStream(ft.getInputStream());
```
As Don Roby correctly said, if you just want to copy you need not know the file structure and you could use for example [static IOUtils.copy(in, out)](http://commons.apache.org/io/api-1.4/org/apache/commons/io/IOUtils.html#copy%28java.io.InputStream,%20java.io.OutputStream%29) to copy the file.
Further, if you do wish to extract the ZIP file contents, you should not plainly copy bytes. The ZIP file has a structure, and you extract Entries from the ZIP file, and not just bytes (see the example). Every Entry is a (compressed) file (or the data thereof) with the original name:
```
ZipEntry ze = null;
while ((ze = zin.getNextEntry()) != null) {
System.out.println("Unzipping " + ze.getName());
FileOutputStream fout = new FileOutputStream(ze.getName());
for (int c = zin.read(); c != -1; c = zin.read()) {
        fout.write(c);
    }
    zin.closeEntry();
    fout.close();
}
```
Please note the [javadoc of getNextEntry()](http://download.oracle.com/javase/6/docs/api/java/util/zip/ZipInputStream.html#getNextEntry%28%29):
>
> Reads the next ZIP file entry and positions the stream at the beginning of the entry data.
>
>
>
This positioning is crucial to get to the zipped file contents, and not the metadata.
And I do believe that you accidentally remove the first int:
```
c = zis.read(); // removing the first
while ((c = zis.read(BUFFER)) != -1) { // so you start with the second?
```
I believe you mix 2 idioms:
```
c = zis.read();
while(c != -1) {
...
c = zis.read();
}
```
and:
```
int c;
while ((c = zis.read(BUFFER)) != -1) { // so you start with the second?
...
}
```
I think you can see the difference :)
|
Finding Intersection of NSMutableArrays
I have three NSMutableArrays containing names that are added to the lists according to different criteria.
Here are my arrays in pseudocode:
```
NSMutableArray *array1 = [@"Jack", @"John", @"Daniel", @"Lisa"];
NSMutableArray *array2 = [@"Jack", @"Bryan", @"Barney", @"Lisa",@"Penelope",@"Angelica"];
NSMutableArray *array3 = [@"Jack", @"Jerome", @"Dan", @"Lindsay", @"Lisa"];
```
I want to find a fourth array which includes the intersection of those three arrays. In this case for example it will be:
```
NSMutableArray *array4 = [@"Jack",@"Lisa"];
```
Because all three arrays have Jack and Lisa as an element. Is there a way of simply doing this?
|
Use `NSMutableSet`:
```
NSMutableSet *intersection = [NSMutableSet setWithArray:array1];
[intersection intersectSet:[NSSet setWithArray:array2]];
[intersection intersectSet:[NSSet setWithArray:array3]];
NSArray *array4 = [intersection allObjects];
```
The only issue with this is that you lose ordering of elements, but I think (in this case) that that's OK.
---
As has been pointed out in the comments (thanks, [Q80](https://stackoverflow.com/users/889424)!), iOS 5 and OS X 10.7 added a new class called `NSOrderedSet` (with a `Mutable` subclass) that allows you to perform these same intersection operations while still maintaining order.
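A sketch of that order-preserving variant on those systems (the API mirrors `NSMutableSet`):
```
NSMutableOrderedSet *intersection = [NSMutableOrderedSet orderedSetWithArray:array1];
[intersection intersectOrderedSet:[NSOrderedSet orderedSetWithArray:array2]];
[intersection intersectOrderedSet:[NSOrderedSet orderedSetWithArray:array3]];
NSArray *array4 = [intersection array];
```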
|
How many tables can be created in a mysql database?
How many tables can be created in a mysql database ?
And how many columns can be created in a mysql table ?
How many rows can be inserted into a mysql table ?
|
>
> **How many tables can be created in a mysql database ?**
>
>
>
MySQL has [no limit on the number of databases.](http://dev.mysql.com/doc/refman/5.0/en/database-count-limit.html) The underlying file system may have a limit on the number of tables. Individual storage engines may impose engine-specific constraints. InnoDB permits up to 4 billion tables.
>
> **And how many columns can be created in a mysql table ?**
>
>
>
There is a hard limit of 4096 columns per table, but the effective maximum may be less for a given table. The exact limit depends on [several interacting factors.](http://dev.mysql.com/doc/refman/5.0/en/column-count-limit.html)
>
> **How many rows can be inserted into a mysql table ?**
>
>
>
The number of rows is limited by the [maximum size allowed for a table](http://dev.mysql.com/doc/refman/5.0/en/table-size-limit.html). This is OS-dependent. You can impose a limit on the number of rows by setting `MAX_ROWS` at table creation time.
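For example, a minimal sketch of setting that hint at creation time (table and column names are illustrative):
```
CREATE TABLE t (id INT) MAX_ROWS = 1000000;
```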
### Reference: [Limits in MySQL](http://dev.mysql.com/doc/refman/5.0/en/limits.html)
|
CUDA/C++: Passing \_\_device\_\_ pointers in C++ code
I am developing a Windows 64-bit application that will manage concurrent execution of different CUDA algorithms on several GPUs.
My design requires a way of passing pointers to device memory
around C++ code (e.g. remembering them as members in my C++ objects).
I know that it is impossible to declare class members with `__device__` qualifiers.
However I couldn't find a definite answer whether assigning a `__device__` pointer to a normal C pointer and then using the latter works. In other words: is the following code valid?
```
__device__ float *ptr;
cudaMalloc(&ptr, size);
float *ptr2 = ptr;
some_kernel<<<1,1>>>(ptr2);
```
For me it compiled and behaved correctly but I would like to know whether it is guaranteed to be correct.
|
No, that code isn't strictly valid. While it might work on the host side (more or less by accident), if you tried to dereference `ptr` directly from device code, you would find it would have an invalid value.
The correct way to do what your code implies would be like this:
```
__device__ float *ptr;
__global__ void some_kernel()
{
float val = ptr[threadIdx.x];
....
}
float *ptr2;
cudaMalloc(&ptr2, size);
cudaMemcpyToSymbol("ptr", ptr2, sizeof(float *));
some_kernel<<<1,1>>>();
```
for CUDA 4.x or newer, change the `cudaMemcpyToSymbol` to:
```
cudaMemcpyToSymbol(ptr, ptr2, sizeof(float *));
```
If the static device symbol `ptr` is really superfluous, you can just to something like this:
```
float *ptr2;
cudaMalloc(&ptr2, size);
some_kernel<<<1,1>>>(ptr2);
```
But I suspect that what you are probably looking for is something like the [thrust library `device_ptr` class](https://thrust.github.io/doc/classthrust_1_1device__ptr.html), which is a nice abstraction wrapping the naked device pointer and makes it absolutely clear in code what is in device memory and what is in host memory.
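A small sketch of that wrapper (assuming Thrust is available; `n` and the kernel are illustrative):
```
#include <thrust/device_ptr.h>
#include <thrust/device_malloc.h>
#include <thrust/device_free.h>

const size_t n = 1024;
// device_ptr documents in the type system that this points to device memory.
thrust::device_ptr<float> dptr = thrust::device_malloc<float>(n);
some_kernel<<<1,1>>>(thrust::raw_pointer_cast(dptr));
thrust::device_free(dptr);
```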
|
How can I plotly a ggplot treemap?
I am looking to get these gradient colors on the map:
```
ramp <- colorRamp(c("royalblue4", "white"))
ramp.list <- rgb( ramp(seq(0, 1, length = 15)), max = 255)
```
But also, and more importantly, I am looking to add plotly characteristics to the graph (especially hovering text output). This is my data:
```
structure(list(topic = structure(c(9L, 8L, 4L, 7L, 2L, 6L, 1L, 3L,
5L, 10L, 13L, 11L, 12L), .Label = c("Apple", "Avocado", "Banana",
"Carrot", "Mango", "Mushroom", "Onion", "Orange", "Pineapple",
"Strawberry", "Sweet-lemon", "Watermelon", "Wildberry"), class = "factor"),
n = structure(c(4L, 3L, 9L, 11L, 12L, 2L, 1L, 6L, 10L, 5L,
7L, 8L, 1L), .Label = c("23", "24", "36", "42", "43", "46",
"48", "52", "56", "61", "82", "94"), class = "factor")), class = "data.frame", row.names = c(NA,
-13L))
```
And this is what I've tried:
```
library(ggplot2)
library(plotly)
ramp <- colorRamp(c("royalblue4", "white"))
ramp.list <- rgb( ramp(seq(0, 1, length = 15)), max = 255)
g <- ggplot(dtd7, aes(area = n, fill = topic, label = as.character(topic))) +
geom_treemap()+
geom_treemap_text(fontface = "italic", colour = "white", place = "centre") +
theme(legend.position = "none")
ggplotly(g)
```
|
Using treemap traces you can display hierarchical datasets.
Accordingly, the problem with your code snippet from the comments `plot_ly(dtd7, ids = ~topic, values = ~n, parents = ~topic, type = 'treemap')` is that you are assigning the same data to `ids` and `parents`.
Please check the following:
```
library(plotly)
dtd7 <- structure(
list(
topic = structure(
c(9L, 8L, 4L, 7L, 2L, 6L, 1L, 3L,
5L, 10L, 13L, 11L, 12L),
.Label = c("Apple", "Avocado", "Banana", "Carrot", "Mango","Mushroom", "Onion", "Orange", "Pineapple", "Strawberry", "Sweet-lemon", "Watermelon", "Wildberry"),
class = "factor"
),
n = structure(
c(4L, 3L, 9L, 11L, 12L, 2L, 1L, 6L, 10L, 5L,
7L, 8L, 1L),
.Label = c("23", "24", "36", "42", "43", "46", "48", "52", "56", "61", "82", "94"),
class = "factor"
)
),
class = "data.frame",
row.names = c(NA,-13L)
)
p <- plot_ly(
dtd7,
labels = ~ topic,
parents = NA,
values = ~ n,
type = 'treemap',
hovertemplate = "Ingredient: %{label}<br>Count: %{value}<extra></extra>"
)
p
```
[](https://i.stack.imgur.com/6iSoz.png)
|
How to setup a grid as template for an Items control?
I'm trying to create an `ItemsControl` that uses a grid as its `ItemsPanel` in such a way that it has two columns, where the first column's width is the width of the widest item in that column, and it has as many rows as needed to display all the items.
Basically, I want the following, but somehow within an `ItemsControl` so that I can bind to a collection of objects:
```
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="auto"/>
<ColumnDefinition Width="*"/>
</Grid.ColumnDefinitions>
<Grid.RowDefinitions>
<RowDefinition Height="auto"/>
<RowDefinition Height="auto"/>
<RowDefinition Height="auto"/>
</Grid.RowDefinitions>
<Label Content="{Binding Items[0].Header}"/>
<TextBox Text="{Binding Items[0].Content}" Grid.Column="1"/>
<Label Content="{Binding Items[1].Header}" Grid.Row="1"/>
<TextBox Text="{Binding Items[1].Content}" Grid.Row="1" Grid.Column="1"/>
<Label Content="{Binding Items[2].Header}" Grid.Row="2"/>
<TextBox Text="{Binding Items[2].Content}" Grid.Row="2" Grid.Column="1"/>
</Grid>
```
Edit: Rachel's answer worked great; here is a working example.
(I moved the Grid.IsSharedSizeScope="True" to the ItemsPanel; not sure if Rachel meant to put it in the ItemTemplate, which didn't work.)
```
namespace WpfApplication23
{
public partial class Window1 : Window
{
public List<Item> Items { get; set; }
public Window1()
{
Items = new List<Item>()
{
new Item(){ Header="Item0", Content="someVal" },
new Item(){ Header="Item1", Content="someVal" },
new Item(){ Header="Item267676", Content="someVal" },
new Item(){ Header="a", Content="someVal" },
new Item(){ Header="bbbbbbbbbbbbbbbbbbbbbbbbbb", Content="someVal" },
new Item(){ Header="ccccccc", Content="someVal" }
};
InitializeComponent();
DataContext = this;
}
}
public class Item
{
public string Header { get; set; }
public string Content { get; set; }
}
}
<Window x:Class="WpfApplication23.Window1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<ItemsControl ItemsSource="{Binding Items}">
<ItemsControl.ItemsPanel>
<ItemsPanelTemplate>
<StackPanel Grid.IsSharedSizeScope="True"/>
</ItemsPanelTemplate>
</ItemsControl.ItemsPanel>
<ItemsControl.ItemTemplate>
<DataTemplate>
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition SharedSizeGroup="ColumnOne" />
<ColumnDefinition Width="*" />
</Grid.ColumnDefinitions>
<Label Content="{Binding Header}"/>
<TextBox Text="{Binding Content}" Grid.Column="1"/>
</Grid>
</DataTemplate>
</ItemsControl.ItemTemplate>
</ItemsControl>
</Window>
```
|
There are multiple problems here for an `ItemsControl`:
- Getting your first column to match the width of the largest item
- Generating a dynamic number of rows
- Generating more than one item for each iteration of the `ItemsControl`
The last one is really the biggest problem, because an `ItemsControl` wraps each `ItemTemplate` in a `ContentPresenter`, so there is no default way of creating more than one item in the panel for each iteration of the `ItemsControl`. Your end result would look like this:
```
<Grid>
...
<ContentPresenter>
<Label Content="{Binding Items[0].Header}"/>
<TextBox Text="{Binding Items[0].Content}" Grid.Column="1"/>
</ContentPresenter>
<ContentPresenter>
<Label Content="{Binding Items[1].Header}" Grid.Row="1"/>
<TextBox Text="{Binding Items[1].Content}" Grid.Row="1" Grid.Column="1"/>
</ContentPresenter>
<ContentPresenter>
<Label Content="{Binding Items[2].Header}" Grid.Row="2"/>
<TextBox Text="{Binding Items[2].Content}" Grid.Row="2" Grid.Column="1"/>
</ContentPresenter>
</Grid>
```
My best suggestion would be to create an `ItemTemplate` that contains a 1x2 `Grid`, and use `Grid.IsSharedSizeScope` to make the width of the first column shared. (The `ItemsPanelTemplate` would remain the default `StackPanel`.)
This way, the end result would look like this:
```
<StackPanel>
<ContentPresenter>
<Grid IsSharedSizeScope="True">
<Grid.ColumnDefinitions>
<ColumnDefinition SharedSizeGroup="ColumnOne" />
<ColumnDefinition Width="*" />
</Grid.ColumnDefinitions>
<Label Content="{Binding Header}"/>
<TextBox Text="{Binding Content}" Grid.Column="1"/>
</Grid>
</ContentPresenter>
<ContentPresenter>
<Grid IsSharedSizeScope="True">
<Grid.ColumnDefinitions>
<ColumnDefinition SharedSizeGroup="ColumnOne" />
<ColumnDefinition Width="*" />
</Grid.ColumnDefinitions>
<Label Content="{Binding Header}"/>
<TextBox Text="{Binding Content}" Grid.Column="1"/>
</Grid>
</ContentPresenter>
...
</StackPanel>
```
|
Sync data between Google Firestore and Google Sheets using Cloud Functions/Admin SDK
While using Cloud Firestore as a data backend, I need to share some data collections with non-tech site managers (editors, sales teams, etc.). Also, I wish to give these people access to edit the data stored in Cloud Firestore.
Google Sheets is a very familiar tool for site managers, and it could save me the time of developing a CRUD admin-panel-like interface from scratch for data updating and viewing.
This Stack Overflow [answer](https://stackoverflow.com/questions/46640981/how-to-import-csv-or-json-to-firebase-cloud-firestore/47491711#47491711) shows how to send data (including nested levels) using a Cloud Function, and this GitHub [library](https://github.com/grahamearley/FirestoreGoogleAppsScript) can get data from Firestore using Google Apps Script (I wish to do it using Cloud Functions or the Firebase Admin SDK), but I am still trying to figure out how to make an end-to-end Sheets-based interface.
Please advise if there are any better alternatives to achieve the same objective. I'm facing some difficulties switching from SQL databases and Django's auto-created admin interfaces to the Firebase-Firestore NoSQL world.
|
I understand that you want to be able to call a Cloud Function from a Google Sheet in order to build an "end-to-end Sheets based interface" for Firestore.
You can use the [UrlFetchApp](https://developers.google.com/apps-script/reference/url-fetch/url-fetch-app) Class to make a request to fetch the URL of an [HTTP Cloud Function](https://firebase.google.com/docs/functions/http-events).
Your Apps Script code would be like:
```
function callSimpleHTTPCloudFunction() {
const url = "https://xxxxxxxx.cloudfunctions.net/simpleHttp";
const response = UrlFetchApp.fetch(url, {
method: 'get'
})
const respObj = JSON.parse(response.getContentText());
Logger.log(respObj);
}
```
While your Cloud Function would be like:
```
exports.simpleHttp = functions.https.onRequest((req, res) => {
res.send({ msg: 'simpleHttp' });
});
```
This is a very simple example of Cloud Function, but you can adapt this Cloud Function to read and write data from/to Firestore. Have a look at this official video for a starting point: <https://www.youtube.com/watch?v=7IkUgCLr5oA&t=1s&list=PLl-K7zZEsYLkPZHe41m4jfAxUi0JjLgSM&index=3>
---
Now, if you want to authenticate your users in such a way that you can control who can access your data through the Cloud Function, it is going to be a bit more complex.
There is an official Cloud Function Sample which shows "how to restrict an HTTPS Function to only the Firebase users of your app": <https://github.com/firebase/functions-samples/tree/master/authorized-https-endpoint>
As explained in the code comments: "The Firebase ID token needs to be passed as a Bearer token in the Authorization HTTP header like this: `Authorization: Bearer <Firebase ID Token>`. When decoded successfully, the ID Token content will be added as `req.user`."
So you need, in your Apps Script code, to generate a Firebase ID Token for the Firebase user. For that we will use the [Firebase Auth REST API](https://firebase.google.com/docs/reference/rest/auth/). In this example we will use the email of the user authenticated in the Google Sheet (`Session.getActiveUser().getEmail()`) as the Firebase User Name.
As explained in the doc, to call the Firebase Auth REST API, you need to obtain a Web API Key for your Firebase project, through the project settings page in your Firebase admin console.
The following Apps Script function will do the job:
```
function getToken() {
const userName = Session.getActiveUser().getEmail();
const pwd = 'xyz' //For example get the password via a prompt.
//This is NOT the password of the account authenticated with Google Sheet, but the password of the Firebase user. In this example, the emails are the same but they are different accounts.
const verifyPasswordUrl = "https://www.googleapis.com/identitytoolkit/v3/relyingparty/verifyPassword?key=[API_KEY]" //Replace with your Web API Key
const payload = JSON.stringify({"email":userName,"password": pwd,"returnSecureToken": true});
const verifyPasswordResponse = UrlFetchApp.fetch(verifyPasswordUrl, {
method: 'post',
contentType: 'application/json',
muteHttpExceptions: true,
payload : payload
});
const token = JSON.parse(verifyPasswordResponse.getContentText()).idToken;
return token;
}
```
Then, still in Apps Script, you use the token in the call to the Cloud Function, as follows:
```
function callSecuredHTTPCloudFunction() {
const authHeader = {"Authorization": "Bearer " + getToken()};
const url = "https://us-central1-<yourproject>.cloudfunctions.net/securedHttp/";
const response = UrlFetchApp.fetch(url, {
method: 'get',
headers: authHeader,
muteHttpExceptions: true,
});
Logger.log(response);
//Here do what you want with the response from the Cloud Function, e.g. populate the Sheet
}
```
The Cloud Function code would be as follows, adapted from the official example.
```
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const cors = require('cors')({
origin: true
});
const express = require('express');
const cookieParser = require('cookie-parser')();
const app = express();
// Express middleware that validates Firebase ID Tokens passed in the Authorization HTTP header.
// The Firebase ID token needs to be passed as a Bearer token in the Authorization HTTP header like this:
// `Authorization: Bearer <Firebase ID Token>`.
// when decoded successfully, the ID Token content will be added as `req.user`.
const validateFirebaseIdToken = (req, res, next) => {
console.log('Check if request is authorized with Firebase ID token');
if (
!req.headers.authorization ||
!req.headers.authorization.startsWith('Bearer ')
) {
console.error(
'No Firebase ID token was passed as a Bearer token in the Authorization header.',
'Make sure you authorize your request by providing the following HTTP header:',
'Authorization: Bearer <Firebase ID Token>'
);
res.status(403).send('Unauthorized');
return;
}
let idToken;
if (
req.headers.authorization &&
req.headers.authorization.startsWith('Bearer ')
) {
console.log('Found "Authorization" header');
// Read the ID Token from the Authorization header.
idToken = req.headers.authorization.split('Bearer ')[1];
console.log(idToken);
} else {
// No cookie
res.status(403).send('Unauthorized');
return;
}
admin
.auth()
.verifyIdToken(idToken)
.then(decodedIdToken => {
console.log('ID Token correctly decoded', decodedIdToken);
req.user = decodedIdToken;
return next();
})
.catch(error => {
console.error('Error while verifying Firebase ID token:', error);
res.status(403).send('Unauthorized');
});
};
app.use(cors);
app.use(cookieParser);
app.use(validateFirebaseIdToken);
app.get('/', (req, res) => {
res.send(`Your email is ${req.user.email}`);
});
// This HTTPS endpoint can only be accessed by your Firebase Users.
// Requests need to be authorized by providing an `Authorization` HTTP header
// with value `Bearer <Firebase ID Token>`.
exports.securedHttp = functions.https.onRequest(app);
```
---
You can very well write a similar function with a POST and a payload in order to send data from the Google Sheet to the Cloud Function and then write to Firestore.
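A sketch of that POST variant (the endpoint and payload shape are illustrative assumptions):
```
function writeRowToFirestore(rowData) {
  const url = "https://us-central1-<yourproject>.cloudfunctions.net/securedHttp/";
  const response = UrlFetchApp.fetch(url, {
    method: 'post',
    contentType: 'application/json',
    headers: {"Authorization": "Bearer " + getToken()},
    payload: JSON.stringify(rowData),
    muteHttpExceptions: true
  });
  Logger.log(response);
}
```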
Finally, note that you could implement the same approach for calling, from the Google Sheet, the [Firestore REST API](https://firebase.google.com/docs/firestore/use-rest-api) instead of calling Cloud Functions.
|
Generating even-sized clusters in scikit-learn
I'm attempting to generate approximately even-sized clusters of a PCA'd feature set in Scikit-learn, but I'm not having any luck. I'm only familiar with KMeans clustering, and with that algorithm the largest cluster contains the majority of the examples (in the case of K=2 it's 80%, in K=4 it's 65%, etc...). I've done some basic experiments with default MeanShift and AffinityPropagation, but I've had no luck there.
This is where a graduate degree would have come in handy, but in the meantime can anyone point me in the direction of some good resources on what types of clustering algorithms can control cluster size (specifically any that are implemented in sklearn!)
I realize this question is super vague, but I'm not sure what information is relevant to the problem. My data set begins as a combination of normalized continuous variables and one-hot encodings for categorical variables. 36 original features are reduced with PCA to 20 features that describe 99%+ of the variance. Attempts to modify my pre-PCA data set don't really affect how the clustering divides up the examples.
Thanks for any suggestions/input!
|
This [tutorial discusses a k-means variation to produce clusters of the same size](http://elki.dbs.ifi.lmu.de/wiki/Tutorial/SameSizeKMeans). It's for ELKI, but you could implement the same in sklearn, too.
See also:
- [Clustering procedure where each cluster has an equal number of points?](https://stats.stackexchange.com/questions/8744/clustering-procedure-where-each-cluster-has-an-equal-number-of-points)
- <https://stackoverflow.com/questions/8796682/group-n-points-in-k-clusters-of-equal-size>
- <https://stackoverflow.com/questions/5452576/k-means-algorithm-variation-with-equal-cluster-size>
|
Find 2nd largest value in an array that has duplicates of the largest integer
I'm trying to find the second largest number in an array of numbers, but the greatest number appears twice, so I can't just remove it from the array and select the new highest number.
`array = [0, 3, 2, 5, 5]` (therefore `3` is the 2nd largest value)
I have this code where I can explicitly return 3, but it wouldn't work on other arrays:
```
function getSecondLargest(nums) {
var sorted_array = nums.sort(function (a,b) {return a - b;});
var unique_sorted_array = sorted_array.filter(function(elem, index, self) {
return index === self.indexOf(elem);
})
return unique_sorted_array[unique_sorted_array.length - 2];
}
```
If I wanted to make it more dynamic, is there a way that I could identify the greatest value of the array, then compare that against each iteration of the array?
I was thinking of something along the lines of:
```
var greatestNum = sortedArray[sortedArray.length - 1];
while(greatestNum != i) do {
//check for the first number that doesn't equal greatestNum
}
```
Any help would be appreciated.
|
You can simply create a [**Set**](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set) first, then [**sort**](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort) it in descending order and take the element at index 1:
```
let array = [0, 3, 2, 5, 5]
let op = [...new Set(array)].sort((a,b) => b-a)[1]
console.log(op)
```
For those who think in terms of efficiency, this is the best way IMO:
```
let array = [0, 3, 2, 5, 5]
let max = -Infinity
let secondMax = -Infinity
for(let i=0; i<array.length; i++){
    if(array[i] > max){
        secondMax = max
        max = array[i]
    } else if(array[i] > secondMax && array[i] !== max){
        // update the runner-up, skipping duplicates of the current max
        secondMax = array[i]
    }
}
console.log(secondMax)
```
|
How does this kmalloc() function work?
I am reading JamesM's kernel development tutorial, and in the "chapter" that explains how to set up the paging memory management mechanism, he shows the kmalloc() function, which should allocate some memory.
This is the function :
```
u32int kmalloc(u32int sz) {
u32int tmp = placement_address;
placement_address += sz;
return tmp;
}
```
I am a little bit confused about it.
How does this function allocate memory?
It returns the address where the allocated chunk should be, but I really don't understand how the actual allocation works.
|
This is the simplest memory allocation mechanism. `placement_address` is initialized when booting to point to a start of some big, contiguous memory block. This `kmalloc` assigns sequential chunks of it as requested.
I think the problem comes from understanding of the word "allocate". Here it's used in the very basic meaning of the term: it tells the caller where the requested chunk is and takes note that it's no longer available, so it won't be returned again. Nothing more. There is no deallocation, no OOM, no boundaries, and all interaction with physical memory (if it happened) was done once at startup.
Think of a ticket seller with a [roll of tear-off tickets](https://www.google.com/search?q=raffle%20ticket%20roll&tbm=isch): you say "sz tickets please", so they start at the current end of the roll (`placement_address`), they advance their "end" `sz` places (`placement_address += sz`) and hand you the previous "end" (`return tmp`). The tearing away of paper tickets has no equivalent here. You're wondering "how was the roll printed", but "allocation" doesn't mean printing, it only means handing the roll out in pieces.
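To make the ticket analogy concrete, here is a sketch of two successive calls (the start address 0x100000 is just an assumption for illustration):

```
u32int placement_address = 0x100000; /* set at boot to the start of free memory */

u32int a = kmalloc(16); /* returns 0x100000; placement_address becomes 0x100010 */
u32int b = kmalloc(32); /* returns 0x100010; placement_address becomes 0x100030 */
/* there is no kfree(): chunks are handed out and never reclaimed */
```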
|
How can I extract a BMP from an ICO file?
I'm doing some work where I'm programmatically downloading icons from sites specified in an OpenSearch document and I need to extract the first image (for now) if it is in ICO format. I'm able to read the ICO file header with no problems and slice out the first image file. However, after reading the [Wikipedia entry explaining the file format](http://en.wikipedia.org/wiki/ICO_%28file_format%29) I've discovered that, if the image is in bitmap format, then the file is incomplete (it's missing the header). So I need to reconstruct this header before I can save the data to a file, but I'm having a bit of difficulty.
According to the [Wikipedia entry for BMP file format](http://en.wikipedia.org/wiki/BMP_file_format), the header is 14 bytes long and should contain the following:
```
**Offset Data**
0x0000 "BM", for our intents and purposes
0x0002 Size of the bitmap file in bytes
0x0006   Dependent on the application creating the file
0x0008   Dependent on the application creating the file
0x000A Offset of the image data/pixel array
```
I figured that the size of the bitmap file in bytes would be the size of the extracted image + the 14 bytes for the header, but I'm unsure what to write at 0x0006, 0x0008 and how to get the location of the pixel array to write at 0x000A.
I've read the article a few times, but I must admit my head is hurting a little. This is my first experience at doing this sort of thing. Can anybody help me work out how to get the pixel array location?
|
`0x0006` and `0x0008` are reserved, you should simply put zeros there. As to `0x000A`, that's the position at which the actual image data starts in the file. Normally, the header you have here is followed by the DIB header (starting at offset `0x000E`) and the first four bytes of the DIB header are its size. So you take the size of the DIB header, add its starting offset (`0x000E`) and what you've got is the position where the actual data starts - put that at position `0x000A`.
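As a sketch of that arithmetic (Python here purely for illustration, since the question names no language, and `build_bmp` is a hypothetical helper), you could build the 14-byte header like this:

```
import struct

def build_bmp(dib_and_pixels):
    # the DIB header's size is stored in its own first 4 bytes (little-endian)
    dib_size = struct.unpack_from('<I', dib_and_pixels, 0)[0]
    file_size = 14 + len(dib_and_pixels)   # 14-byte header + the extracted data
    pixel_offset = 14 + dib_size           # 0x000E + DIB header size
    header = struct.pack('<2sIHHI', b'BM', file_size, 0, 0, pixel_offset)
    return header + dib_and_pixels
```

Note this ignores the XOR/AND-mask adjustments discussed further down.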
Here is example data from a random bitmap file:
```
42 4D "BM"
2E 78 08 00 Size of the entire bitmap file (0x8782E meaning 555054 bytes)
00 00 creator1, reserved
00 00 creator2, reserved
36 00 00 00 Image data starts at offset 0x36 because the next 0x28 bytes are DIB header
28 00 00 00 DIB header started and its size is 0x28 (40 bytes)
another 36 bytes
FF FF FF First pixel of the image (white as it happens)
```
If you take the [favicon on serverfault.com](http://sstatic.net/serverfault/img/favicon.ico) as an example, you would take the part of the file between offset `0x0016` and `0x013E` and prepend it with `42 4D 36 01 00 00 00 00 00 00 36 00 00 00`. Which gives you a sort of correct bitmap file - and IrfanView will even display it. However, data stored in ICO files and BMP files is not quite the same, because ICO files need to store transparency information. Which is why this favicon has size 16x32 according to its DIB header rather than the expected 16x16.
From [Wikipedia](http://en.wikipedia.org/wiki/ICO_%28file_format%29):
>
> Images with less than 32 bits of color depth follow a particular format: the image is encoded as a single image consisting of a color mask (the "XOR mask") together with an opacity mask (the "AND mask"). The XOR mask must precede the AND mask inside the bitmap data; if the image is stored in bottom-up order (which it most likely is), the XOR mask would be drawn below the AND mask.
>
>
>
In our particular case this means that from 256 bytes of image data the first 64 bytes are the XOR mask, the last 64 bytes are the AND mask, and only the middle part is our image. In our particular case you could change the start of the image data (offset `0x000A`) to 0x76 to skip the XOR mask. Then you would also change the image height in the DIB header (offset `0x0016`) to 0x10 to make sure the AND mask is ignored. Here these manipulations will give you a valid bitmap, pretty much like what you expected. In the general case, however, it might be better to consider the masks rather than ignore them.
|
How to trigger event in silverlight animation by keyframe?
I need to call a method when the animation gets to a certain keyframe. Is it possible to trigger an event when an animation gets to a certain keyframe? If not is there a better way of triggering an event at a certain time?
|
Silverlight timelines are very limited when it comes to events. As far as I can tell, only the Completed event is supported. What you could do though is have two timelines inside a single storyboard where the second timeline is updating a bound property that you could watch.
Maybe something like:
```
<Storyboard>
    <DoubleAnimationUsingKeyFrames ... />
    <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="TriggerEvent">
        <ObjectKeyFrame KeyTime="00:00:01" Value="True" />
    </ObjectAnimationUsingKeyFrames>
</Storyboard>
```
Then in your code behind for the control, define a dependency property called TriggerEvent of type Boolean. When it changes to true, call your method.
**Another option however**, which is probably better actually, would be to split your original animation into two parallel timelines and hook up a Completed event handler to the first timeline (which you'd use to call your method) then on the second timeline, use the BeginTime property to synchronize the two animations so that the second one picks up just as the first one is completing.
```
<Storyboard>
<!-- Timeline 1 -->
<DoubleAnimationUsingKeyFrames Completed="MyCompletedHandler" ... />
<!-- Timeline 2 -->
<DoubleAnimationUsingKeyFrames BeginTime="00:00:01" ... />
</Storyboard>
```
|
\_\_init and \_\_exit macros usage for built-in and loadable modules
I was reading about linux kernel development and I just read some text that I don't understand. Here is the paragraph, which talks about the \_\_init and \_\_exit macros for modules:
>
> This demonstrates a feature of kernel 2.2 and later. Notice the change
> in the definitions of the init and cleanup functions. The \_\_init macro
> causes the init function to be discarded and its memory freed once the
> init function finishes for built−in drivers, but not loadable modules.
> If you think about when the init function is invoked, this makes
> perfect sense.
>
>
> There is also an \_\_initdata
> which works similarly to \_\_init but for init variables rather than
> functions.
>
>
> The \_\_exit macro causes the omission of the function when
> the module is built into the kernel, and like
> \_\_init, has no effect for loadable modules. Again, if you consider when the cleanup function runs, this makes perfect sense.
>
>
>
I get the point; the macro `__init` causes the init function to be discarded and its memory freed once the init function finishes for built-in drivers. But *why* not for loadable modules? I couldn't make sense of it.
I know it's a silly thing, but I thought about it for some time and couldn't comprehend it fully. Why for built-in drivers but not for loadable modules? Variables, addresses, etc. assigned in `__init` would be required for both, right?
|
You're right; even in a module there could be functions that you really don't need after initialization, and so they could in principle be removed from memory. The reason `__init` has no effect for modules is more about how easy it would be to implement.
[This answer to a question about the nature of `__init`](https://stackoverflow.com/a/8832142/460258) sheds some light on the subject. Essentially, the kernel build system looks for all of the functions flagged with `__init`, across all of the pieces of the kernel, and arranges them so that they will all be in the same block of memory.
Then, when the kernel boots, it can free that one block of memory all at once.
This pre-sorting idea doesn't work so well with modules. The init code has to be loaded when the module is loaded, so it can't share space with other init code. Instead, the kernel would have to pick a few hundred bytes out of each module and free them individually.
However, hardware page sizes are typically 4KB, so it's hard to free up memory in chunks of less than that. So trying to free the `__init` functions in each individual module is probably more trouble than it's worth.
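For reference, a minimal module skeleton using these macros (standard kernel boilerplate, not specific to any driver):

```
#include <linux/init.h>
#include <linux/module.h>

/* placed in the .init section: freed after boot when built in, kept for modules */
static int __init hello_init(void)
{
    return 0;
}

/* omitted entirely when the code is built into the kernel */
static void __exit hello_exit(void)
{
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
```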
|
Nativescript Use LIstPicker with JS object
I'm trying to use ListPicker with an object array and my list gets rendered with the label displaying [object Object] for all elements.
I would like to specify which property to use as the "label" for the listpicker.
Unfortunately, the NativeScript ListPicker only accepts an array of strings, and I can't use my object array directly as the label will be whatever `toString()` returns.
I found an alternative solution based on: <https://github.com/NativeScript/NativeScript/issues/1677>
but my app uses page-router-outlet and doesn't use a Page element, so I have no way of using the proposed approach above. Given this scenario, are there any possible ways of using ListPicker with object arrays, or any workaround that doesn't rely on the Page element's loaded event?
|
You don't have to use any loaded event at all. Simply override the `toString` method and pass the items to the ListPicker:
```
public countries: any[] = [
{
value: 0,
name: 'Sweden',
toString: () => {
return 'Sweden';
}
},
{
value: 1,
name: 'Denmark',
toString: () => {
return 'Denmark';
}
},
{
value: 2,
name: 'Norway',
toString: () => {
return 'Norway';
}
},
{
value: 3,
name: 'Finland',
toString: () => {
return 'Finland';
}
},
{
value: 4,
name: 'Iceland',
toString: () => {
return 'Iceland';
}
},
];
```
Pass them to the picker:
```
<ListPicker [items]="countries"></ListPicker>
```
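If typing out every `toString` feels repetitive, you could derive the same structure from plain data instead (a sketch of the same idea):

```
public countries: any[] = ['Sweden', 'Denmark', 'Norway', 'Finland', 'Iceland']
    .map((name, value) => ({
        value,
        name,
        toString: () => name  // ListPicker displays whatever toString() returns
    }));
```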
|
Can you remove sakai core tools you don't want/need?
Something I've been wondering recently, is it possible to actually "remove" core tools from the sakai vanilla build without a huge effort (editing loads of config files)?
I know about stealthing tools (<https://confluence.sakaiproject.org/display/DOC/Provisional+Tools>) and I "think" there's some way to "disable" tools (or is that just stealthing?), but simply to remove the possibility of potential problems and lower the service memory footprint + startup time, it would be nice if there was a supported means to "not have X, Y or Z tool in the service at all".
I've never tried just removing jars to see what happens, but I suspect that mightn't be a good idea, and that the build probably needs to be compiled up with only the desired tools deployed to the webapp directory, which I would think means changing a whole load of maven files so that a "mvn clean install sakai:deploy" produces something lighter.
|
The Sakai architecture is actually more akin to a lot of loosely (or tightly in some cases) coupled tools than a unified system. This is an advantage from the perspective that you can do exactly the thing you want to do here. It is a disadvantage from a unified user experience perspective (though that is not an architectural limitation but rather a side effect of how the tool teams were run early on in the project).
If you want to remove a tool (like Samigo for this example) then you can simply delete the war file (and directory) related to it from your TOMCAT\_HOME/webapps directory. Run this from your tomcat home directory:
```
rm -rf webapps/samigo-app*
```
When you startup tomcat, the tool will not be loaded and things will work fine (assuming there is not another tool or part of Sakai that expects that one to be there). Some tools like resources (sakai-content-tool) should not be removed for that reason (though stealthing them would be fine).
Please note that only removing the tool will not save you as much as you might hope, since there is also a service related to most tools which lives in TOMCAT\_HOME/components. The service component is actually an exploded war file (basically the same as the tool webapp) but it has no interface and has to follow some Sakai conventions in order to load properly. In the case of Samigo again, you could remove it like so (running from your tomcat home):
```
rm -rf components/samigo-pack
```
You should NOT do this while the system is running. You should also NOT remove the API jars from shared.
When you restart Sakai after removing the component you will see a more significant drop in the resources since the tool service is no longer loaded in memory and initialized. I saw about a 5-second reduction in startup time (90s to 85s) and about a 25MB reduction in JVM memory used (from 795 to 770) by removing Samigo and its service.
Your best bet would be to "trial and error" out the optimal solution for your situation: try removing a tool and its service (if it has one) and see if things start up without errors and if the tools you do use work as expected.
Also, please note that removing a tool will NOT remove the tool pages in existing courses. You will end up with a page which simply displays nothing (because Sakai sees it as an empty page in the course now). If you add the tool back into the system then it will appear on the page again.
**UPDATE**: If you want to remove the blank tool page there is one easy option. The easy option is to just go into the site and remove the page the tool is on. This can be done from the Sites admin tool.
Alternatively, you could go into the database and remove all the pages which contain the specific tool id. This is pretty risky though so I don't recommend it.
Generally, the removal of a tool like this would happen before the tool is used in production so hopefully this is a rare case.
|
How can I add text labels to a Plotly scatter plot in Python?
I'm trying to add text labels next to the data points in a Plotly scatter plot in Python but I get an error.
How can I do that?
Here is my dataframe:
```
world_rank university_name country teaching international research citations income total_score num_students student_staff_ratio international_students female_male_ratio year
0 1 Harvard University United States of America 99.7 72.4 98.7 98.8 34.5 96.1 20,152 8.9 25% NaN 2011
```
Here is my code snippet:
```
citation = go.Scatter(
x = "World Rank" + timesData_df_top_50["world_rank"], <--- error
y = "Citation" + timesData_df_top_50["citations"], <--- error
mode = "lines+markers",
name = "citations",
marker = dict(color = 'rgba(48, 217, 189, 1)'),
text= timesData_df_top_50["university_name"])
```
The error is shown below.
```
TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('<U32') dtype('<U32') dtype('<U32')
```
|
You can include the text labels in the `text` attribute. To make sure that they are displayed on the scatter plot, set `mode='lines+markers+text'`. See the [Plotly documentation on text and annotations](https://plotly.com/python/text-and-annotations/#text-on-scatter-plots-with-graph-objects). I included an example below based on your code.
```
import plotly.graph_objects as go
import pandas as pd
df = pd.DataFrame({'world_rank': [1, 2, 3, 4, 5],
'university_name': ['Harvard', 'MIT', 'Stanford', 'Cambridge', 'Oxford'],
'citations': [98.8, 98.7, 97.6, 97.5, 96]})
layout = dict(plot_bgcolor='white',
margin=dict(t=20, l=20, r=20, b=20),
xaxis=dict(title='World Rank',
range=[0.9, 5.5],
linecolor='#d9d9d9',
showgrid=False,
mirror=True),
yaxis=dict(title='Citations',
range=[95.5, 99.5],
linecolor='#d9d9d9',
showgrid=False,
mirror=True))
data = go.Scatter(x=df['world_rank'],
y=df['citations'],
text=df['university_name'],
textposition='top right',
textfont=dict(color='#E58606'),
mode='lines+markers+text',
marker=dict(color='#5D69B1', size=8),
line=dict(color='#52BCA3', width=1, dash='dash'),
name='citations')
fig = go.Figure(data=data, layout=layout)
fig.show()
```
|
How many bytes are IV and Keys for AES?
I'm using AESCryptoServiceProvider in C#/.Net Framework and I'm wondering how large, in bytes, the IV and Keys are. I'm pretty sure that this class follows the specifications of AES so if anyone has any ideas, I'd be happy to hear it :)
|
The IV size is 16 bytes, and the default key size is 32 bytes (16 and 24 are also allowed). You can use the [BlockSize](http://msdn.microsoft.com/en-us/library/system.security.cryptography.symmetricalgorithm.blocksize.aspx), [KeySize](http://msdn.microsoft.com/en-us/library/system.security.cryptography.aescryptoserviceprovider.keysize.aspx), [LegalBlockSizes](http://msdn.microsoft.com/en-us/library/system.security.cryptography.symmetricalgorithm.legalblocksizes.aspx), and [LegalKeySizes](http://msdn.microsoft.com/en-us/library/system.security.cryptography.symmetricalgorithm.legalkeysizes.aspx) properties, respectively. The API provides these values in bits.
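You can confirm this yourself with a quick sketch (the properties report sizes in bits, so divide by 8 for bytes):

```
using System;
using System.Security.Cryptography;

class AesSizes
{
    static void Main()
    {
        using (var aes = new AesCryptoServiceProvider())
        {
            Console.WriteLine(aes.BlockSize / 8); // 16 -> the IV is 16 bytes
            Console.WriteLine(aes.KeySize / 8);   // 32 -> the default key is 32 bytes
        }
    }
}
```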
|
Add MIME mapping in web.config for IIS Express
I need to add a new MIME mapping for .woff file extensions to IIS Express.
If I add the following snippet to the "applicationhost.config" of IIS Express it works fine:
```
<staticContent lockAttributes="isDocFooterFileName">
<mimeMap fileExtension=".woff" mimeType="font/x-woff" />
...
```
But I would actually like to do add it to my "web.config" so that not every developer would need to change their "applicationhost.config" locally.
So I removed it again from the "applicationhost.config" file and added the following snippet to the project's "web.config":
```
<system.webServer>
...
<staticContent>
<mimeMap fileExtension=".woff" mimeType="font/x-woff" />
</staticContent>
</system.webServer>
```
Unfortunately it doesn't seem to work that way because when I try to access a .woff file I end up with a HTTP 404.3 error.
What am I doing wrong?
|
Putting it in the "web.config" works fine. The problem was that I got the MIME type wrong. Instead of `font/x-woff` or `font/x-font-woff` it must be **`application/font-woff`**:
```
<system.webServer>
...
<staticContent>
<remove fileExtension=".woff" />
<mimeMap fileExtension=".woff" mimeType="application/font-woff" />
</staticContent>
</system.webServer>
```
See also this answer regarding the MIME type: <https://stackoverflow.com/a/5142316/135441>
**Update 4/10/2013**
>
> Spec is now a recommendation and the MIME type is officially: [`application/font-woff`](http://www.w3.org/TR/WOFF/#appendix-b)
>
>
>
|
(Yet Another) List Aliasing Conundrum
I thought I had the whole list alias thing figured out, but then I came across this:
```
l = [1, 2, 3, 4]
for i in l:
i = 0
print(l)
```
which results in:
```
[1, 2, 3, 4]
```
So far so good.
However, when I tried this:
```
l = [[1, 2], [3, 4], [5, 6]]
for i in l:
i[0] = 0
```
I get
```
[[0, 2], [0, 4], [0, 6]]
```
Why is this?
Does this have to do with how deep aliasing goes?
|
`i = 0` is very different from `i[0] = 0`.
Ignacio has explained the reasons, succinctly and correctly. So I'll just try to explain in a more simple words what's actually going on here.
In the first case, `i` is just a label pointing at some object (one of the members in your list). `i = 0` changes the reference to some other object, so that `i` now references the integer `0`. The list is unmodified, because you never asked to modify `l[0]` or any element of `l`, you only modified `i`.
In the second case, `i` is also just a name pointing at one of the members in your list. That part is no different. However, `i[0]` is now calling `.__getitem__(0)` on one of the list members. Similarly, `i[0] = 'other'` would be like doing `i.__setitem__(0, 'other')`. It is not simply pointing `i` at a different object, as a regular assignment statement would; it's actually mutating the object that `i` refers to.
An easy way to think of it is always that names in Python are just labels for objects. A scope or namespace is just like a dict mapping names to objects.
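A quick way to see both behaviours side by side:

```
l = [[1, 2], [3, 4]]

i = l[0]
i = 0          # rebinds the name i to the integer 0; l[0] is untouched
print(l)       # [[1, 2], [3, 4]]

i = l[0]
i[0] = 0       # mutates the object that both i and l[0] refer to
print(l)       # [[0, 2], [3, 4]]
```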
|
Upgrade from gdb 7.7 to 7.8
How do I upgrade my GDB debugger from the current version, which is 7.7, to the next version, which is 7.8? I'm working on Ubuntu 14.04.1.
|
gdb 7.8 is currently not available in the [trusty repo](http://packages.ubuntu.com/search?suite=trusty&keywords=gdb), but you can install it from source.
Open a terminal and type the following commands:
```
wget http://ftp.gnu.org/gnu/gdb/gdb-7.8.tar.xz
tar -xf gdb-7.8.tar.xz
cd gdb-7.8/
./configure
make
sudo cp gdb/gdb /usr/local/bin/gdb
```
It will install `gdb` in `/usr/local/bin/` directory. As `/usr/local/bin/` is searched before `/usr/bin/` whenever a command is executed, running `gdb` will execute `gdb 7.8`.
Once installed, you can check gdb version using
```
gdb --version
```
It should output
```
GNU gdb (GDB) 7.8
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "i686-pc-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".
```
If you want to uninstall it simply remove `gdb` from `/usr/local/bin/` by executing
```
sudo rm /usr/local/bin/gdb
```
|
Color classification library
Status: [I am developing my own library](https://stackoverflow.com/questions/6846935/color-classification-library/6850754#6850754)
---
Question:
Is there any library that can do color classification?
I imagine the workflow like this:
```
>>> import colorclassification
>>> classifier = colorclassification.Classifier
>>> color = classifier.classify_rgb([255, 255, 0])
['yellow']
>>> color = classifier.classify_rgb([255, 170, 0])
['orange']
```
The library doesn't necessarily have to be for Python. Any language where I can view the source code of the module/library will do fine.
|
One way you could do this is just by finding the "closest" color. Suppose we have a collection of colors. It doesn't have to cover all 16777216 possible RGB values, and it doesn't even need to be in RGB, but for the sake of simplicity, it might look something like this:
```
colors = {'red': (255,0,0),
'green': (0,255,0),
'blue': (0,0,255),
'yellow': (255,255,0),
'orange': (255,127,0),
'white': (255,255,255),
'black': (0,0,0),
'gray': (127,127,127),
'pink': (255,127,127),
'purple': (127,0,255),}
```
Let's define a mechanism that tells us what we really *mean* by "closest" color. In this case, I'll use a simple cartesian distance, but anything that can compare two colors for how similar they are will do.
```
def distance(left, right):
return sum((l-r)**2 for l, r in zip(left, right))**0.5
class NearestColorKey(object):
def __init__(self, goal):
self.goal = goal
def __call__(self, item):
return distance(self.goal, item[1])
```
And that's actually all we need. We can use the built-in `min()` (or `max()` if your similarity function returns higher values for more similar colors):
```
>>> min(colors.items(), key=NearestColorKey((10,10,100)))
('black', (0, 0, 0))
>>> min(colors.items(), key=NearestColorKey((10,10,200)))
('blue', (0, 0, 255))
>>> min(colors.items(), key=NearestColorKey((100,10,200)))
('purple', (127, 0, 255))
>>>
```
|
Unnormalized result of Word Mover's Distance with SpaCy
I'm trying to use the Fast Word Mover's Distance library with SpaCy, using the same example as in the [GitHub](https://github.com/src-d/wmd-relax) readme:
```
import spacy
import wmd
nlp = spacy.load('en_core_web_md')
nlp.add_pipe(wmd.WMD.SpacySimilarityHook(nlp), last=True)
doc1 = nlp("Politician speaks to the media in Illinois.")
doc2 = nlp("The president greets the press in Chicago.")
print(doc1.similarity(doc2))
```
The result is:
```
6.070106029510498
```
I don't know how to interpret it, since usually distance is normalized (0 to 1). In the readme file, the result for this is not available, hence I'm not sure if my result is wrong or the scale for this measurement is different.
|
A short answer: **don't interpret it**. Use it just like this: the lower is the distance, the more similar are sentences. For virtually all practical applications (e.g. KNN) this is enough.
Now the long answer: word mover distance (read [the paper](http://www.cs.cornell.edu/~kilian/papers/wmd_metric.pdf)) is defined as the weighted average of distances between best matching pairs of "non-stop" words. So if you want to normalize it into (0, 1), you need to divide this best sum by its worst case.
The problem is that in `spacy` word vectors are not normalized (check it by printing `[sum(t.vector**2) for t in doc1]`). Therefore, the maximal distance between them is unlimited. And if you do normalize them, the new WMD will not be equivalent to original WMD (i.e. it will sort pairs of texts differently). Therefore, there is no obvious way to normalize the original spacy-WMD distances that you demonstrated.
Now let's pretend that word vectors are unit-normalized. If it is the case, then the maximal distance between two words is the diameter of a unit sphere (that is, 2). And the maximal weighted average of many 2s is still 2. So you need to divide the distance between texts by 2 to make it fully normalized.
You can build word vector normalization into WMD calculation by inheriting the class you use:
```
import wmd
import numpy
import libwmdrelax
class NormalizedWMDHook(wmd.WMD.SpacySimilarityHook):
def compute_similarity(self, doc1, doc2):
"""
Calculates the similarity between two spaCy documents. Extracts the
nBOW from them and evaluates the WMD.
:return: The calculated similarity.
:rtype: float.
"""
doc1 = self._convert_document(doc1)
doc2 = self._convert_document(doc2)
vocabulary = {
w: i for i, w in enumerate(sorted(set(doc1).union(doc2)))}
w1 = self._generate_weights(doc1, vocabulary)
w2 = self._generate_weights(doc2, vocabulary)
evec = numpy.zeros((len(vocabulary), self.nlp.vocab.vectors_length),
dtype=numpy.float32)
for w, i in vocabulary.items():
v = self.nlp.vocab[w].vector # MODIFIED
evec[i] = v / (sum(v**2)**0.5) # MODIFIED
evec_sqr = (evec * evec).sum(axis=1)
dists = evec_sqr - 2 * evec.dot(evec.T) + evec_sqr[:, numpy.newaxis]
dists[dists < 0] = 0
dists = numpy.sqrt(dists)
return libwmdrelax.emd(w1, w2, dists) / 2 # MODIFIED
```
Now you can be sure that your distance is properly normalized:
```
import spacy
nlp = spacy.load('en_core_web_md')
nlp.add_pipe(NormalizedWMDHook(nlp), last=True)
doc1 = nlp("Politician speaks to the media in Illinois.")
doc2 = nlp("The president greets the press in Chicago.")
print(doc1.similarity(doc2))
print(doc1.similarity(doc1))
print(doc1.similarity(nlp("President speaks to the media in Illinois.")))
print(doc1.similarity(nlp("some irrelevant bullshit")))
print(doc1.similarity(nlp("JDL")))
```
Now the result is
```
0.469503253698349
0.0
0.12690649926662445
0.6037049889564514
0.7507566213607788
```
P.S. You can see that even between two very unrelated texts this normalized distance is much less than 1. This is because in reality word vectors don't cover the whole unit sphere - instead, most of them are clustered on several "continents" on it. Therefore, the distance even between very different texts would be typically less than 1.
|
How to extract data from Tumblr API (JSON)?
I have set up a Tumblr account and registered my application to authenticate it.
Tumblr Documentation: <http://www.tumblr.com/docs/en/api/v2>
I understand the API outputs JSON like this:
```
{
"meta": {
"status": 200,
"msg": "OK"
},
"response": {
"blog": {
"title": "David's Log",
"posts": 3456,
"name": "david",
"url": "http:\/\/david.tumblr.com\/",
"updated": 1308953007,
"description": "<p><strong>Mr. Karp<\/strong> is tall and skinny, with
unflinching blue eyes a mop of brown hair.\r\n
"ask": true,
"ask_anon": false,
"likes": 12345
}
}
}
```
That's fine, but the documentation ends there. I have no idea how to get this information and display it on my site.
I thought the way you would get it would be something like:
```
$.ajax({
url: "http://api.tumblr.com/v2/blog/myblog.tumblr.com/info?api-key=myapikey",
dataType: 'jsonp',
success: function(results){
console.log(results);
}
});
```
But this does nothing.
Can anyone help me out? Thanks
|
`results` is now the object you can use to reference the JSON structure. When you console.log the results object, it should appear in the Javascript developer console where you can explore the object tree.
## The response object
So when your success callback receives the response, the following should be available to you:
`results.meta.status` => 200
`results.meta.msg` => "OK"
`results.response.title` => "David's Log"
`results.response.posts` => 3456
`results.response.name` => "david"
`results.response.url` => "<http://david.tumblr.com/>"
`results.response.updated` => 1308953007
`results.response.description` => "`<p><strong>Mr. Karp</strong>..`"
`results.response.ask` => true
`results.response.ask_anon` => false
`results.response.likes` => 12345
---
## Writing to the page
If you actually want to see something written to your page you'll need to use a function that modifies the DOM, such as document.write, or, since you're using jQuery, `$("#myDivId").html(results.response.title);`
Try this:
- Add `<div id="myDivId"></div>` somewhere in the of your page, and
- Add `$("#myDivId").html(results.response.title);` in your success callback function
---
```
$.ajax({
url: "http://api.tumblr.com/v2/blog/myblog.tumblr.com/info?api_key=myapikey",
dataType: 'jsonp',
success: function(results){
// Logs to your javascript console.
console.log(results);
// writes the title to a div with the Id "myDivId" (ie. <div id="myDivId"></div>)
$("#myDivId").html(results.response.title);
}
});
```
|
How to use YUI Compressor in Ant build script for javascript and css
After a couple of days searching for how to use the YUI Compressor in an Ant build script I have finally got it working. Many old examples (<2010) exist for creating an Ant task and using that within your build script, but that was overkill for me.
Many of the examples are also old and require some greater knowledge of Ant or configuring Ant tasks. The solution below is simply what was quick, easy and effective for me.
|
The below was added to one of my `<target>` tags to have **all** the javascript files in a single directory compressed. These files retain their original name. To do this for CSS simply switch the 'js' to 'css' and update the paths accordingly.
This was done with YUI Compressor 2.4.7, and I run the Ant build script in Eclipse Juno without any changes to class paths or other modifications of settings.
```
<!-- Minimizing Javascript files -->
<echo message="Compressing Javascript files at location: ${build.root}/resources/js/*.js" />
<java jar="c:/dev/lib/yuicompressor-2.4.7/build/yuicompressor.jar" fork="true">
<arg value="${build.root}/resources/js/*.js" /> <!-- input path for JS files -->
<!--<arg value="-v" /> --><!-- Turn on verbose -->
<arg value="-o" />
<arg value="'.js$:.js'" />
<arg value="${build.root}/resources/js/*.js" /> <!-- output path for JS files -->
<classpath>
<pathelement location="c:/dev/lib/yuicompressor-2.4.7/build/yuicompressor.jar"/>
</classpath>
</java>
```
Please feel free to improve this answer. The solution above works for me, but I'm no expert.
|
Remove certain consecutive duplicates in list
I have a list of strings like this:
```
['**', 'foo', '*', 'bar', 'bar', '**', '**', 'baz']
```
I want to replace the `'**', '**'` with a single `'**'`, but leave `'bar', 'bar'` intact. I.e. replace any consecutive number of `'**'` with a single one. My current code looks like this:
```
p = ['**', 'foo', '*', 'bar', 'bar', '**', '**', 'baz']
np = [p[0]]
for pi in range(1,len(p)):
if p[pi] == '**' and np[-1] == '**':
continue
np.append(p[pi])
```
Is there any more pythonic way to do this?
|
Not sure about pythonic, but this should work and is more terse:
```
star_list = ['**', 'foo', '*', 'bar', 'bar', '**', '**', 'baz']
star_list = [i for i, next_i in zip(star_list, star_list[1:] + [None])
if (i, next_i) != ('**', '**')]
```
The above copies the list twice; if you want to avoid that, consider Tom Zych's method. Or, you could do as follows:
```
from itertools import islice, izip, chain
star_list = ['**', 'foo', '*', 'bar', 'bar', '**', '**', 'baz']
sl_shift = chain(islice(star_list, 1, None), [None])
star_list = [i for i, next_i in izip(star_list, sl_shift)
if (i, next_i) != ('**', '**')]
```
This can be generalized and made iterator-friendly -- not to mention more readable -- using a variation on the `pairwise` recipe from the `itertools` docs:
```
from itertools import islice, izip, chain, tee
def compress(seq, x):
seq, shift = tee(seq)
shift = chain(islice(shift, 1, None), (object(),))
return (i for i, j in izip(seq, shift) if (i, j) != (x, x))
```
Tested:
```
>>> list(compress(star_list, '**'))
['**', 'foo', '*', 'bar', 'bar', '**', 'baz']
```
|
Can a speculatively executed CPU branch contain opcodes that access RAM?
As I understand, when a CPU speculatively executes a piece of code, it "backs up" the register state before switching to the speculative branch, so that if the prediction turns out wrong (rendering the branch useless) -- the register state would be safely restored, without damaging the "state".
So, my question is: *can a speculatively executed CPU branch contain opcodes that access RAM?*
I mean, accessing the RAM isn't an "atomic" operation - one simple opcode reading from memory can cause actual RAM access, if the data isn't currently located in the CPU cache, which might turn out as an extremely time consuming operation, from the CPU perspective.
And if such access is indeed allowed in a speculative branch, is it only for read operations? Because, I can only assume that reverting a write operation, depending on its size, might turn out extremely slow and tricky if a branch is discarded and a "rollback" is performed.
And, for sure, read/write operations are supported, to some extent at least, due to the fact that the registers themselves, on some CPUs, are physically located on the CPU cache as I understand.
So, maybe a more precise formulation would be: *what are the limitations of a speculatively executed piece of code?*
|
The cardinal rules of speculative out-of-order (OoO) execution are:
1. Preserve the illusion of instructions running sequentially, in program order
2. Make sure speculation is contained to things that can be rolled back if mis-speculation is detected, and that can't be observed by other cores to be holding a wrong value. Physical registers, the back-end itself that tracks instruction order yes, but not cache. Cache is coherent with other cores so stores must not commit to cache until after they're non-speculative.
OoO exec is normally implemented by treating *everything* as speculative until retirement. Every load or store could fault, every FP instruction could raise an FP exception. Branches are special (compared to exceptions) only in that branch mispredicts are not rare, so a special mechanism to handle [early detection and roll-back for branch misses](https://stackoverflow.com/questions/50984007/what-exactly-happens-when-a-skylake-cpu-mispredicts-a-branch) is helpful.
---
Yes, cacheable loads can be executed speculatively and OoO because they have no side effects.
Store instructions can also be executed speculatively thanks to the store buffer. **The actual execution of a store just writes the address and data into the store buffer.** (related: [Size of store buffers on Intel hardware? What exactly is a store buffer?](https://stackoverflow.com/questions/54876208/size-of-store-buffers-on-intel-hardware-what-exactly-is-a-store-buffer) gets more technical than this, with more x86 focus. This answer is I think applicable to most ISAs.)
Commit to L1d cache happens some time *after* the store instruction retires from the re-order buffer (ROB), i.e. when the store is known to be non-speculative, the associated store-buffer entry "graduates" and becomes eligible to commit to cache and become globally visible. A store buffer decouples execution from anything other cores can see, and also insulates this core from cache-miss stores so it's a very useful feature even on in-order CPUs.
**Before a store-buffer entry "graduates", it can just be discarded along with the ROB entry that points to it, when rolling back on mis-speculation.**
(This is why even strongly-ordered hardware memory models still allow StoreLoad reordering <https://preshing.com/20120930/weak-vs-strong-memory-models/> - it's nearly essential for good performance not to make later loads wait for earlier stores to actually commit.)
The store buffer is effectively a circular buffer: entries allocated by the front-end (during alloc/rename pipeline stage(s)) and released upon commit of the store to L1d cache. (Which is kept coherent with other cores via [MESI](https://en.wikipedia.org/wiki/MESI_protocol)).
Strongly-ordered memory models like x86 can be implemented by doing commit from the store buffer to L1d in order. Entries were allocated in program order, so the store buffer can basically be a circular buffer in hardware. Weakly-ordered ISAs can look at younger entries if the head of the store buffer is for a cache line that isn't ready yet.
Some ISAs (especially weakly ordered) also do merging of store buffer entries to create a single 8-byte commit to L1d out of a pair of 32-bit stores, [for example](https://stackoverflow.com/a/54225485/224132).
---
**Reading cacheable memory regions is assumed to have no side effects and can be done speculatively by OoO exec, hardware prefetch, or whatever**. Mis-speculation can "pollute" caches and waste some bandwidth by touching cache lines that the true path of execution wouldn't (and maybe even triggering speculative page-walks for TLB misses), but that's the only downside1.
MMIO regions (where reads *do* have side-effects, e.g. making a network card or SATA controller do something) need to be marked as uncacheable so the CPU knows that speculative reads from that physical address are not allowed. [If you get this wrong, your system will be unstable](https://stackoverflow.com/questions/46118893/arm-prefetch-workaround) - my answer there covers a lot of the same details you're asking about for speculative loads.
High performance CPUs have a load buffer with multiple entries to track in-flight loads, including ones that miss in L1d cache. (Allowing hit-under-miss and miss-under-miss even on in-order CPUs, stalling only if/when an instruction tries to read load-result register that isn't ready yet).
In an OoO exec CPU, it also allows OoO exec when one load address is ready before another. When data eventually arrives, instructions waiting for inputs from the load result become ready to run (if their other input was also ready). So the load buffer entries have to be wired up to the scheduler (called the reservation station in some CPUs).
See also [About the RIDL vulnerabilities and the "replaying" of loads](https://stackoverflow.com/questions/56187269/about-the-ridl-vulnerabilities-and-the-replaying-of-loads) for more about how Intel CPUs specifically handle uops that are waiting by aggressively trying to start them on the cycle when data might be arriving from L2 for an L2 hit.
---
**Footnote 1**: This downside, combined with a timing side-channel for detecting / reading micro-architectural state (cache line hot or cold) into architectural state (register value) is what enables Spectre. (<https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)#Mechanism>)
**Understanding Meltdown as well is very useful for understanding the details of how Intel CPUs choose to handle fault-suppression for speculative loads that turn out to be on the wrong path.** <http://blog.stuffedcow.net/2018/05/meltdown-microarchitecture/>
---
>
> And, for sure, read/write operations are supported
>
>
>
Yes, by decoding them into logically separate load / ALU / store operations, if you're talking about modern x86 that decodes instructions to uops. The load works like a normal load, the store puts the ALU result in the store buffer. All 3 of the operations can be scheduled normally by the out-of-order back end, just like if you'd written separate instructions.
If you mean *atomic* RMW, then that can't really be speculative. Cache is globally visible (share requests can come at any time) and there's no way to roll it back (well, except [whatever Intel does for transactional memory](https://www.realworldtech.com/haswell-tm/3/)...). You must not ever put a wrong value in cache. See [Can num++ be atomic for 'int num'?](https://stackoverflow.com/questions/39393850/can-num-be-atomic-for-int-num/) for more about how atomic RMWs are handled, especially on modern x86, by delaying response to share / invalidate requests for that line between the load and the store-commit.
However, that doesn't mean that `lock add [rdi], eax` serializes the whole pipeline: [Are loads and stores the only instructions that gets reordered?](https://stackoverflow.com/questions/50494658/are-loads-and-stores-the-only-instructions-that-gets-reordered/50496379) shows that speculative OoO exec of *other* independent instructions can happen around an atomic RMW. (vs. what happens with an exec barrier like `lfence` that drains the ROB).
Many RISC ISAs only provide atomic RMW via [load-linked / store-conditional](https://en.wikipedia.org/wiki/Load-link/store-conditional) instructions, not a single atomic RMW instruction.
>
> [read/write ops ...], to some extent at least, due to the fact that the registers themselves, on some CPUs, are physically located on the CPU cache as I understand.
>
>
>
Huh? False premise, and that logic doesn't make sense. Cache has to be correct at all times because another core could ask you to share it at any moment. Unlike registers which are private to this core.
Register files are built out of SRAM like cache, but are separate. There are a few microcontrollers with SRAM *memory* (not cache) on board, and the registers are memory-mapped using the early bytes of that space. (e.g. AVR). But none of that seems at all relevant to out-of-order execution; cache lines that are caching memory are definitely not the same ones that are being used for something completely different, like holding register values.
It's also not really plausible that a high-performance CPU that's spending the transistor budget to do speculative execution at all would combine cache with register file; then they'd compete for read/write ports. One large cache with the sum total read and write ports is much more expensive (area and power) than a tiny fast register file (many read/write ports) and a small (like 32kiB) L1d cache with a couple read ports and 1 write port. For the same reason we use split L1 caches, and have multi-level caches instead of just one big private cache per core in modern CPUs. [Why is the size of L1 cache smaller than that of the L2 cache in most of the processors?](https://stackoverflow.com/questions/4666728/why-is-the-size-of-l1-cache-smaller-than-that-of-the-l2-cache-in-most-of-the-pro/38549736#38549736)
---
**Related reading / background**:
- <https://stackoverflow.com/tags/x86/info> has some good CPU-architecture links.
- <https://www.realworldtech.com/haswell-cpu/5/> David Kanter's Haswell deep-dive.
- [Size of store buffers on Intel hardware? What exactly is a store buffer?](https://stackoverflow.com/questions/54876208/size-of-store-buffers-on-intel-hardware-what-exactly-is-a-store-buffer)
- [what is a store buffer?](https://stackoverflow.com/questions/11105827/what-is-a-store-buffer)
- [How do the store buffer and Line Fill Buffer interact with each other?](https://stackoverflow.com/questions/61129773/how-do-the-store-buffer-and-line-fill-buffer-interact-with-each-other)
- [Out-of-order execution vs. speculative execution](https://stackoverflow.com/questions/49601910/out-of-order-execution-vs-speculative-execution) - *Everything* is speculative until retirement. My answer there focuses on the Meltdown aspect.
- <http://blog.stuffedcow.net/2018/05/meltdown-microarchitecture/>
- [What exactly happens when a skylake CPU mispredicts a branch?](https://stackoverflow.com/questions/50984007/what-exactly-happens-when-a-skylake-cpu-mispredicts-a-branch)
- <https://en.wikipedia.org/wiki/MESI_protocol#Store_Buffer>
- <https://en.wikipedia.org/wiki/Write_buffer> (not a great article, but mentioned for completeness).
- [How does memory reordering help processors and compilers?](https://stackoverflow.com/questions/37725497/how-does-memory-reordering-help-processors-and-compilers) (StoreLoad reordering allows for a store buffer and is essentially necessary for good performance.)
---
- <https://en.wikipedia.org/wiki/Memory_disambiguation> - how the CPU handles forwarding from the store buffer to a load, or not if the store was actually younger (later in program order) than this load.
- <https://blog.stuffedcow.net/2014/01/x86-memory-disambiguation/> - **Store-to-Load Forwarding and Memory Disambiguation in x86 Processors**. Very detailed test results and technical discussion of store-forwarding, including from narrow loads that overlap with different parts of a store, and near cache-line boundaries. (<https://agner.org/optimize/> has some simpler-to-understand but less detailed info about when store-forwarding is slow vs. fast in his microarch PDF.)
- <https://github.com/travisdowns/uarch-bench/wiki/Memory-Disambiguation-on-Skylake> - modern CPUs dynamically predict memory dependencies for loads when there are earlier stores with unknown address in flight. (i.e. store-address uop not executed yet.) This can result in having to roll back if the prediction is wrong.
- [Globally Invisible load instructions](https://stackoverflow.com/questions/50609934/globally-invisible-load-instructions) - store forwarding from loads that *partially* overlap a recent store and partially don't gives us a corner case that sheds some light on how CPUs work, and how it does/doesn't make sense to think about memory (ordering) models. Note that C++ std::atomic can't create code that does this, although C++20 std::atomic\_ref could let you do an aligned 4-byte atomic store that overlaps an aligned 8-byte atomic load.
|
TypeError: First argument must be a string or Buffer. Javascript
```
var formData = {
name: 'TestDeck',
description: 'This is a test deck for my api',
private: false,
shareable: false,
ttsLanguages: [],
blacklistedSideIndices: [],
blacklistedQuestionTypes: [],
gradingModes: [],
imageAttribution: 'https://www.logogarden.com/wp-content/uploads/lg-index/Example-Logo-6.jpg',
imageFile: fs.readFile('retext.png', 'utf8')
}
function createDeck(connection) {
request.post({
url: '<url>',
formData: formData,
headers: {
'Content-Type': 'multipart/form-data'
},
json: true
}),
function(err, resp, body) {
}
}
```
I am getting the error: TypeError: First argument must be a string or Buffer.
I honestly have no idea why, need help.
|
There are several problems in the code.
1. You get `TypeError: First argument must be a string or Buffer` because you are trying to send the boolean value `false` in form data -- HTML forms do not support boolean values. In HTML, a checked checkbox will send its value, while an unchecked checkbox won't.
To fix the issue, you can change `false` to `'FALSE'`(string) and parse it in server side.
2. The use of `fs.readFile('retext.png', 'utf8')` is incorrect. To attach file in the form, the right way is: `imageFile: fs.createReadStream('retext.png')`.
3. When `formData: formData` is used in `request.post(...)`, the `Content-Type` of the HTTP request would be `multipart/form-data` automatically, you don't need to define `Content-Type` header again.
Moreover, it is incorrect to set `json: true`, which will make `Content-Type` be `application/json`. This conflict will confuse the `request` module, and may cause problems in some JavaScript environments.
4. The callback function `function(err, resp, body){...}` should be part of `request.post(...)`, maybe it is a typo.
In summary, the correct code would look like:
```
var formData = {
name: 'TestDeck',
description: 'This is a test deck for my api',
private: 'FALSE',
shareable: 'FALSE',
ttsLanguages: [],
blacklistedSideIndices: [],
blacklistedQuestionTypes: [],
gradingModes: [],
imageAttribution: 'https://www.logogarden.com/wp-content/uploads/lg-index/Example-Logo-6.jpg',
imageFile: fs.createReadStream('retext.png')
}
function createDeck(connection) {
request.post({
url: '<url>',
formData: formData
}, function(err, resp, body) {
})
}
```
|
Set f:param value with JavaScript
Is it possible to do:
jsf code (pseudo):
```
...
<f:param name="arg" value="document.getElementById('naming').text()">
<h:inputText id="naming"></h:inputText>
...
```
I mean the approach where `<f:param>` is set with JS.
Is it bad practice?
Thanks for help.
|
You need to use a4j's `commandButton` and `actionParam` to be able to pass a dynamic param back to the server.
Additionally, you need an attribute on your bean that will receive the param value.
Example:
```
<a4j:commandButton action="#{myBean.action}" value="Submit!">
<a4j:actionParam name="arg" noEscape="true" value="getTheValue()" assignTo="#{myBean.myBeanArg}" />
</a4j:commandButton>
```
Here `myBean.myBeanArg` will receive the value returned by the javascript function `getTheValue()`.
Notice the `noEscape="true"` attribute. This is needed because otherwise the data inside `value` would be enclosed in single quotes and escaped, resulting in no javascript execution. As stated in the [documentation](http://docs.jboss.org/richfaces/3.3.X/3.3.1.GA/en/devguide/html/a4j_actionparam.html):
>
> It is possible to use JavaScript expression or function in the "value"
> attribute. In this case the "noEscape" attribute should be set to
> "true". The result of this JavaScript invocation is sent to the server
> as a value of `<a4j:actionparam>`.
>
>
>
|
An Entity with identical table data
Before I elaborate the problem, I'm well aware the database isn't designed conventionally. Sadly, I can't change this particular database due to how it is integrated, so I've got a potential solution but that won't be implemented for several months. In the mean time I need to work around the following:
The problem is I need to build an Entity that would represent our `Accounts`. But the problem is, our database implements the following structure:
- Invoiced Table
- Non-Invoiced Table
My Entity represents the exact same data in both of those tables (same column names, duplicated under all conditions), except one represents invoiced customers while the other represents non-invoiced customers. But since it isn't one table with a flag to indicate invoiced versus non-invoiced, how can my Entity link to both of those tables?
Since both tables represent separate names, I can't use the `[Table("...")]` or the auto mapping capabilities. I hate asking such a question, but I can't find any documentation on how to handle such an issue.
|
You could use table-per-concrete class inheritance then define the table names on the derived types:
```
public abstract class Account
{
// common entity code here
...
}
public class InvoicedAccount : Account {}
public class NonInvoicedAccount: Account {}
public class YourContext : DbContext
{
public DbSet<InvoicedAccount> InvoicedAccounts { get; set; }
public DbSet<NonInvoicedAccount> NonInvoicedAccounts { get; set; }
protected override void OnModelCreating( DbModelBuilder modelBuilder )
{
modelBuilder.Entity<InvoicedAccount>().Map( m =>
{
m.MapInheritedProperties();
m.ToTable( "InvoicedAccountTable" );
} );
modelBuilder.Entity<NonInvoicedAccount>().Map( m =>
{
m.MapInheritedProperties();
m.ToTable( "NonInvoicedAccountTable" );
} );
}
}
```
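With that mapping in place, each DbSet reads from its own table while sharing the common `Account` code. A usage sketch (assuming the context above):

```
using (var ctx = new YourContext())
{
    var invoiced = ctx.InvoicedAccounts.ToList();    // queries InvoicedAccountTable
    var pending = ctx.NonInvoicedAccounts.ToList();  // queries NonInvoicedAccountTable
}
```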
|
Border shadow on one edge of a CSS triangle
I have this CSS triangle:
<http://codepen.io/orweinberger/pen/myEoVa>
CODE:
```
*,
*:before,
*:after {
box-sizing: border-box;
}
.triangle {
position:absolute;
bottom:0;
right:0;
display: inline-block;
vertical-align: middle;
}
.triangle-0 {
width: 200px;
height: 200px;
border-bottom: solid 100px rgb(85,85,85);
border-right: solid 100px rgb(85,85,85);
border-left: solid 100px transparent;
border-top: solid 100px transparent;
}
.text {
color:#fff;
position:absolute;
bottom:0;
right:0;
}
```
Is it possible to add a shadow to one of the edges, similar to this?
<http://codepen.io/orweinberger/pen/ByzbKX>
|
You can use another approach for the triangle to be able to apply a box-shadow to it:
```
body {
overflow: hidden;
}
div {
position: absolute;
bottom: 0;
right: 0;
height: 150px;
width: 213px;
background: lightgrey;
-webkit-transform-origin:100% 0;
-ms-transform-origin:100% 0;
transform-origin: 100% 0;
-webkit-transform: rotate(-45deg);
-ms-transform: rotate(-45deg);
transform: rotate(-45deg);
box-shadow: 0px -3px 5px 0px #656565;
}
```
```
<div></div>
```
More info here on [triangles with transform rotate](https://stackoverflow.com/a/24808936/1811992)
|
C++: Is It OK to Inherit from a Class and Its Protected Member Type?
Is the following code C++ standard compliant?
```
struct B
{
protected:
struct Type {};
};
struct D : B, B::Type
{};
int main()
{
D d;
return 0;
}
```
I tried it on [Compiler Explorer](https://godbolt.org/).
MSVC(VS 2017 RTW) accepts it.
gcc(7.3) and clang(6.0.0) reject it.
|
The code is standard compliant and was since C++11, but was not in C++03.
C++11 through C++17 say this in the introduction to section [class.access] , Member Access Control:
>
> All access controls in clause [class.access] affect the ability to access a class member name from the declaration of a particular entity, including parts of the declaration preceding the name of the entity being declared and, if the entity is a class, the definitions of members of the class appearing outside the class's *member-specification*.
>
>
>
In those same Standard versions, an example follows which is very much like your question, but even a bit trickier:
>
> [Example:
>
>
>
> ```
> class A {
> ...
> protected:
> struct B { };
> };
> ...
>
> struct D: A::B, A { };
>
> ```
>
> ... The use of `A::B` as a *base-specifier* is well-formed because `D` is derived from `A`, so checking of *base-specifiers* must be deferred until the entire *base-specifier-list* has been seen. -*end example*]
>
>
>
But I see the same results you did: g++ and clang++ both reject these programs, no matter what `-std=` argument I give. This is a pair of compiler bugs.
C++03 has this instead of the first paragraph I quoted above:
>
> All access controls in clause [class.access] affect the ability to access a class member name from a particular scope. The access control for names used in the definition of a class member that appears outside of the member's class definition is done as if the entire member definition appeared in the scope of the member's class....
>
>
>
The *base-specifier* of a class definition is not in that class's scope, so C++03 does not allow using a protected or private name as the name of a base class for a derived class that otherwise has access to that name.
|
How to disable requiretty for a single command in sudoers?
I want to disable requiretty so that I can sudo within scripts, but I'd rather only disable it for a single command rather than everything. Is that possible within the sudoers config?
|
You can override the default setting for options such as `requiretty` for a specific user or for a specific command (or for a specific run-as-user or host), but not for a specific command when executed as a specific user.
For example, assuming that `requiretty` is set in the compile-default options, the following `sudoers` file allows both `artbristol` and `bob` to execute `/path/to/program` as root from a script. `artbristol` needs no password, whereas `bob` must enter a password (presumably `tty_tickets` is off and `bob` entered his password on some terminal recently).
```
artbristol ALL = (root) NOPASSWD: /path/to/program
bob ALL = (root) /path/to/program
Defaults!/path/to/program !requiretty
```
If you want to change the setting for a command with specific arguments, you need to use a command alias (this is a syntax limitation). For example, the following fragment allows `artbristol` to run `/path/to/program --option` in a script, but not `/path/to/program` with other arguments.
```
Cmnd_Alias MYPROGRAM = /path/to/program --option
artbristol ALL = (root) /path/to/program
artbristol ALL = (root) NOPASSWD: MYPROGRAM
Defaults!MYPROGRAM !requiretty
```
|
How to drop a unique constraint from table column?
I have a table 'users' with 'login' column defined as:
```
[login] VARCHAR(50) UNIQUE NOT NULL
```
Now I want to remove this unique constraint/index using SQL script. I found its name **UQ\_*users*\_7D78A4E7** in my local database but I suppose it has a different name on another database.
What is the best way to drop this unique constraint? Or at least any...
Thanks.
|
SKINDER, your code does not use the column name. The correct script is:
```
declare @table_name nvarchar(256)
declare @col_name nvarchar(256)
declare @Command nvarchar(1000)
set @table_name = N'users'
set @col_name = N'login'
select @Command = 'ALTER TABLE ' + @table_name + ' drop constraint ' + d.name
from sys.tables t
join sys.indexes d on d.object_id = t.object_id and d.type=2 and d.is_unique=1
join sys.index_columns ic on d.index_id=ic.index_id and ic.object_id=t.object_id
join sys.columns c on ic.column_id = c.column_id and c.object_id=t.object_id
where t.name = @table_name and c.name=@col_name
print @Command
--execute (@Command)
```
|
PKCS#1 and PKCS#8 format for RSA private key
Can someone help me understand how an RSA key is literally stored in these formats? I would like to know the difference between the PKCS formats vs. the encodings (DER, PEM). From what I understand, PEM is more human readable. Is PEM/DER for keys/certs similar to UTF-8/16 for characters? What is the significance of DER/PEM? Sorry for the many questions, but I'm fed up with googling and getting vague answers. Thanks.
|
PKCS#1 and PKCS#8 (Public-Key Cryptography Standard) are standards that govern the use of particular cryptographic primitives, padding, etc. Both define file formats that are used to store keys, certificates, and other relevant information.
PEM (*Privacy-Enhanced Mail*) and DER (*Distinguished Encoding Rules*) are a little bit more interesting. DER is the binary ASN.1 encoding for keys and certificates etc., which you'll be able to Google plenty about. Private keys and certificates are encoded using DER and can be saved directly like this. However, these files are binary and can't be copied and pasted easily, so many (if not most?) implementations accept PEM-encoded files as well. PEM is basically base64-encoded DER: a header line, optional metadata, the base64-encoded DER data, and a footer line make up a PEM file.
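For illustration, here are a few `openssl` commands that convert between these formats (a sketch; the file names are hypothetical and the key is assumed to be an unencrypted RSA key):
```
# PKCS#1 PEM -> DER (same key material, binary encoding)
openssl rsa -in key-pkcs1.pem -outform DER -out key-pkcs1.der

# PKCS#1 PEM -> PKCS#8 PEM (a different wrapper around the same key material)
openssl pkcs8 -topk8 -nocrypt -in key-pkcs1.pem -out key-pkcs8.pem
```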
|
combinatorial hardware multiplication in verilog
Suppose I have a multiplier code like this,
```
module multiply(
output [63:0] result,
input [31:0] a,
input [31:0] b
);
assign result = a * b;
endmodule
```
This produces a lot of gates.
What preferable method should be used to implement combinatorial multiplier?
|
Hardware multipliers are big, you just have to live with it!
Multipliers get bigger as their input bit widths grow. So if you don't need the full 32 bits on one of your operands, reducing that operand to its minimum width will reduce the size of the resulting hardware.
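For instance, a minimal sketch of that width reduction, assuming one operand is known to fit in 8 bits:
```
module multiply_narrow(
    output [39:0] result, // 32 + 8 bits hold the full product
    input [31:0] a,
    input [7:0] b
);
    // Synthesizes a 32x8 multiplier instead of a 32x32 one
    assign result = a * b;
endmodule
```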
If you're multiplying by a fixed number, I think the compiler can make some optimizations to limit the size of the hardware too. Or you can use different encoding schemes for the fixed number such as [CSD](http://en.wikipedia.org/wiki/Canonical_signed_digit) that will reduce the number of adders in the multiplier, further reducing its area.
If you need loads of multipliers and have a fast clock, maybe you can reuse a single hardware multiplier for many calculations. This means writing some control/pipelining logic to schedule your multiplies, and you might need some memory, but it can save you area overall. You'd be designing a mini-DSP datapath in this case.
|
How do I execute a Linux command whilst using the less command or within the man pages?
I generally pipe the output of `ls` to `less` and would like to execute a command while it is paging. E.g. I come across a file that I would like to delete, so I would like to execute `rm {filename}` whilst still paging. I would also hope I can use the same method while perusing man pages. If not, how is it different?
|
You can access the command line using bang (`!`) within less.
So for example, if you type:
```
touch temp.txt
ls | less
!rm temp.txt
```
And temp.txt should be gone.
**Edit:** By default it seems that man now uses `less` to page (for some reason I thought it used `more`; maybe in the past it did). You can use the same trick, but it requires the full path (e.g. /home/user/...) to get it to work.
This is because invoking man changes the current working directory. On my machine (xubuntu, using `xfce-terminal`) it goes to `/usr/share/man`. If your console displays the CWD you can see it change, or you can see it from within `man` by entering:
```
!pwd
```
|
Honeycomb and TabHost specs
I have a question about Honeycomb's backward compatibility. I have an app that supports 2.1 or higher and seems to mostly work on Honeycomb except when they start a TabActivity.
In particular, when I add tabs to the TabHost, I get the following exception
android.content.res.Resources$NotFoundException: Resource ID #0x0
When looking at the code that throws this exception, I see that it's the tab spec that has a label and an icon. Inside the code, the LabelAndIconIndicatorStrategy tries to inflate the layout file R.layout.tab\_indicator, which doesn't appear to be available.
```
TabHost.TabSpec spec; // Reusable TabSpec for each tab
Intent intent; // Reusable Intent for each tab
// Create an Intent to launch an Activity for the tab (to be reused)
intent = new Intent().setClass(_gameActivity, ScoreGameActivity.class);
intent.putExtra(GameChangerConstants.STREAM_ID, _stream.pk().toString());
// Initialize a TabSpec for each tab and add it to the TabHost
spec = _gameTabHost.newTabSpec("score_game").setIndicator("Score", res.getDrawable(R.drawable.icon_field_gloss)).setContent(intent);
_gameTabHost.addTab(spec);
```
Is there a new way of creating tabs for Honeycomb that I don't know about? I've pored over the documentation but haven't seen anything that indicates a problem with what I've done.
I'd like to avoid having to use fragments at this point until we can do a more comprehensive restructuring of our UI widgets and I'd like to better understand this issue.
|
I believe I've found a solution, but because people are curious, here is the stacktrace I got when I ran into this problem:
```
05-17 13:09:53.462: ERROR/CustomExceptionHandler(500): Uncaught throwable in thread Thread[main,5,main]
android.content.res.Resources$NotFoundException: Resource ID #0x0
at android.content.res.Resources.getValue(Resources.java:1014)
at android.content.res.Resources.loadXmlResourceParser(Resources.java:2039)
at android.content.res.Resources.getLayout(Resources.java:853)
at android.view.LayoutInflater.inflate(LayoutInflater.java:389)
at android.widget.TabHost$LabelAndIconIndicatorStrategy.createIndicatorView(TabHost.java:568)
at android.widget.TabHost.addTab(TabHost.java:226)
at com.myApp.ui.TabDialog.addTab(TabDialog.java:80)
...
```
At that line, I have code roughly equivalent to what sparky saw:
```
spec = myTabHost.newTabSpec("score_game").setIndicator("Score", res.getDrawable(R.drawable.icon_field_gloss)).setContent(intent);
myTabHost.addTab(spec);
```
Note that `myTabHost` is a TabHost and `spec` is a TabSpec.
Previously, I was initializing `myTabHost` like this:
```
//WRONG - This can CRASH your app starting at Android SDK 3.0
TabHost myTabHost = new TabHost(getContext());
```
To fix this problem, I started initializing the TabHost by doing this:
```
TabHost myTabHost = new TabHost(getContext(), null);
```
And that fixed it! I would love to find a root cause, but I haven't yet been able to figure it out.
|
A working example of the CLI and STI instructions in x86 16-bit assembly
I would like a practical example of the `cli` and `sti` instructions in x86 16-bit assembly, i.e. code written in this language that lets me learn by practice what these instructions are for and go further than the theory.
I know the documentation says that `cli` disables the interrupt flag and `sti` enables it, and that the interrupt flag does not affect the handling of non-maskable interrupts (NMI) or software interrupts generated by the `int` instruction.
In the tutorial I follow I have this code:
```
mov ax, 0x8000
cli
mov ss, ax
mov sp, 0xF000
sti
```
- My tests make me say that `cli` and `sti` are useless in the example given in the course: after several tests I was able to verify that the results are always the same whether I keep `cli` and `sti` or remove them.
- The explanations of the usefulness of `cli` and `sti` offered on the various discussion threads for this course example are purely theoretical, i.e. you have to put `cli` and `sti` for safety, to avoid bugs and crashes. One person on Discord says there's a one-in-a-million chance that something goes wrong when I initialize the segments and the stack offset. Which means he could never have verified his theoretical explanation himself; he just accepts the theory, with no curiosity to go further and experiment, and it is impossible to verify by practice since there is a one-in-a-million chance of having a problem.
- The various documentation sites offer strictly no other practical example that really demonstrates what `cli` and `sti` do and how they are useful; they just copy and paste the documentation without sample code, i.e. `cli` sets the interrupt flag to 0, `sti` sets it to 1, and while it is disabled hardware interrupts are ignored. Zero usage examples, just the theoretical sentence, and nothing in practice makes it possible to test this kind of thing. There is indeed an example in some French documentation, but it is as useless as the example in the tutorial I follow for truly understanding: it initializes a segment and puts `cli` and `sti` before and after the line of code, and if we remove `cli` and `sti` the result is the same no matter what (maybe there is a one-in-a-million chance of a problem if we remove them, which conveniently means I can never check the theory in practice).
- Another person on Discord tells me that he has experimented a bit with all this, that he has coded in assembly for a while, and that in his experience he understands why you have to put `cli` and `sti`: because otherwise it can cause problems, so you put them, and that's all. When I ask him for a practical example (which should be right up his alley since he has practiced), he doesn't give one because he is not at home, but instead hands me yet another theoretical write-up explaining how it is useful. So obviously one can explain in great detail how useful it is but never demonstrate the utility with a practical example in x86 16-bit assembly.
I should specify that I am not familiar with hardware interrupts; I have only tested software interrupts, which one can invoke with `int`.
I'm in kernel mode. I want a practical example where the code runs into trouble with hardware interrupts, then another example where `cli` and `sti` fix the problem.
|
Of course the whole point of `cli/sti` is to manage the handling of hardware interrupts, so you need some understanding of how hardware interrupts work in general.
Here's a brief overview: a hardware device connected to the CPU can trigger a hardware interrupt (in the case of an 8086, by connecting a high voltage to the INTR pin on the CPU chip, and using other pins to signal which interrupt vector should be called). When this occurs, *if* the interrupt flag is set, then the CPU completes the instruction that is currently executing, then pushes CS, IP and FLAGS onto the stack, and jumps to the address specified in the appropriate entry of the interrupt vector table, which is the low 1024 bytes of memory (0000:0000-0000:0400). The programmer should have previously set this entry to point to a block of code (interrupt handler) that is to run in response. The interrupt handler would do whatever is necessary to deal with the hardware interrupt and then execute `IRET` to return to whatever code was interrupted. Examples of devices causing hardware interrupts would be: key pressed on keyboard, byte arrives on serial port, timer interrupt (MS-DOS sets up the external timer to generate an interrupt at 18.2 Hz, i.e. every 55 ms).
If the interrupt flag is not set, nothing happens, but the interrupt handler will be called when the flag eventually is set again.
---
So, you would clear the interrupt flag whenever you do not want an interrupt to occur. This would normally be because you are working with some resource that is shared between the current code and the interrupt handler, such that there would be a conflict if the interrupt handler were to run at this moment.
For example, let's consider the timer interrupt. A simple handler might do nothing but increment a counter, so that the main thread of execution can tell how much time has passed. (The 8086 didn't have any other built-in clock hardware.) If a 16-bit counter is enough, you could simply have:
```
ticks DW 0
handler:
inc word ptr [ticks]
iret
main_code:
mov ax, [ticks] ; now ax contains the number of ticks
```
But at 18.2 Hz, we get very close to 65536 ticks per hour (I think that's why the number 18.2 was chosen), so the counter will overflow about every hour. That is not good if you need to keep track of time intervals longer than that, so we should use a 32-bit counter instead. Since x86-16 has no 32-bit arithmetic instructions, we have to use an `ADD/ADC` pair. Our code could look like:
```
ticks DD 0
handler:
add word ptr [ticks], 1
adc word ptr [ticks+2], 0
iret
main_code:
mov ax, [ticks]
;;; BUG what if interrupt occurs here ???
mov dx, [ticks+2]
; now dx:ax contains the 32-bit number of ticks
```
But this code has a bug. If by chance the timer interrupt should occur between the instructions marked `BUG`, the main code will get the low and high words of `ticks` out of sync. Suppose for instance that the value of `ticks` is `0x1234ffff`. The main code loads the low word, `0xffff`, into ax. Then the timer interrupt occurs and increments `ticks`, so that it is now `0x12350000`. The interrupt handler returns and the main code does `mov dx, [ticks+2]`, getting the value `0x1235`. So now the main code has loaded the value `0x1235ffff`, which is very wrong: it is an entire hour later than the actual time.
We could fix this by using `cli/sti` to disable interrupts, so that an interrupt cannot occur at the site labeled `BUG`. Corrected code would look like:
```
main_code:
cli
mov ax, [ticks]
mov dx, [ticks+2]
sti
```
In the particular case of a 32-bit counter, there happen to be other ways to fix this issue without disabling interrupts, but you get the idea. You can imagine some more complex data structure that both the handler and the main code might need to use: an I/O buffer, some larger struct with information about an I/O event that just occurred, a linked list, etc.
---
The CPU's registers are also a shared resource, such as the SS:SP example that you noticed. Suppose the stack is currently at `1234:5678` and the main code wants to switch it to `2222:4444`. You would think to do:
```
switch_stack:
mov ax, 0x2222
mov ss, ax
;;; BUG: what if interrupt occurs here?
mov sp, 0x4444
```
If an interrupt were to occur at line `BUG`, the value of SS:SP would be `2222:5678`, and this is where the CPU would push the CS/IP/FLAGS values before jumping to the handler. This would be really bad, since that isn't the correct location of either the old or the new stack. There might be important data at that address, which the CPU is now overwriting, and so we are now going to have a hard-to-reproduce memory corruption bug on our hands.
So we would likewise think to fix it with
```
switch_stack:
mov ax, 0x2222
cli
mov ss, ax
;;; interrupt can't occur here!
mov sp, 0x4444
sti
```
Now it so happens that this is actually a special case. Since this is a situation where forgetting to disable interrupts would be particularly nasty, the 8086 designers decided to do a little favor for programmers. The `mov ss, reg` instruction has a very special feature where it will automatically disable interrupts for one instruction. So in fact, if you code `mov ss, ax` followed **immediately** by `mov sp, 0x4444`, an interrupt cannot occur between them, and the code is actually safe without `cli/sti`.
But let me emphasize again that this is a **unique special case**. I believe it is only `mov ss, reg` and `pop ss` that have such functionality, so examples like the 32-bit ticks counter really do need `cli/sti`. And in fact, if you had reversed the two instructions and coded `mov sp, 0x4444` followed by `mov ss, ax` (which on its face would appear just as good), you would again have a bug, and the interrupt handler could be called with the stack pointing to `1234:4444`. Also, as @ecm noted in a comment, [some early 8086/8088 chips](https://books.google.ca/books?id=1L7PVOhfUIoC&pg=PA492&lpg=PA492&dq=defective%208088%20ss&source=bl&ots=zkBNXaMt1l&sig=ACfU3U2AUbHacKU7bUd82_b3ORmcSji5HQ&hl=en&sa=X&ved=2ahUKEwjdkqKp1aHkAhUCJDQIHXFmDakQ6AEwAXoECAkQAQ#v=onepage&q=defective%208088%20ss&f=false) had a hardware bug (?) where the "disable interrupts for one instruction" feature didn't work, so on such chips you would also have to use `cli/sti`. (Or maybe this feature was not actually part of the spec until later?)
The 386 added an `lss` instruction to load both the stack segment and stack pointer in a single instruction, which was a more robust way to address this issue. It was also more important in that case because in virtual 8086 mode, `cli/sti` would not execute directly but instead would trap to the operating system, which was very slow and best to avoid if possible.
---
You suggest that maybe this has such low probability that we wouldn't really need to worry about it. Let's look at our 32-bit timer example and imagine an application with an "alarm clock" feature. While doing other work, it periodically checks the ticks counter, let's say about 100 times per second, to see if a specified time has passed, and if so, it does something to alert the user. If you leave out the `cli/sti`, then if an interrupt occurs there with the low word equal to `0xffff` (which happens once per hour), it will think the time is one hour later than it is, and so may issue an alert up to one hour too soon. (If you want to be more dramatic, replace "issue an alert" with "activate dangerous machinery", "fire missiles", etc.)
The `mov ax, mem` instruction on the 8086 took 10 clock cycles, so there are 1000 clock cycles per second when we are vulnerable. The original IBM PC was clocked at 4.7 MHz, so we have about a 1/4700 chance at the top of every hour of the bug triggering. If you ship your application to 50,000 users, and each of them uses it for 8 hours per day, then with a little math, you can work out that you can expect to receive 425 complaints about this bug within the first week of release. Your boss is going to be pretty mad.
And remember, we are back in the mid-1980s and there is no Internet, so you're going to have to mail every one of your 50,000 customers a floppy disk with the patch. At the cost of a couple dollars plus postage, this bug has cost the company about $100,000. In contrast, your salary as an entry-level programmer in 1984 is about $20,000 per year. How do you like your odds of keeping your job?
|
How to paste all including shapes and column widths in Excel VBA
I am using the below code to paste a row onto each new sheet. I am trying to get it to paste everything, including shapes and column widths. `ActiveSheet.Paste` includes shapes but not column widths. I have tried `Sh.Range("1:1").PasteSpecial xlPasteAll` but this pastes neither the shapes nor the column widths.
I know I need to incorporate `xlPasteColumnWidths` but am not sure how to do this with `ActiveSheet.Paste`.
```
Private Sub Workbook_NewSheet(ByVal Sh As Object)
Sheets("Template").Range("1:1").Copy
Sh.Range("1:1").Select
ActiveSheet.Paste
End Sub
```
|
Is this what you are trying?
```
Dim wsInput As Worksheet, wsOutput As Worksheet
Set wsInput = Sheets("Template")
Set wsOutput = Sheets("Whatever") '<~~ Change as applicable
wsInput.Rows(1).Copy wsOutput.Rows(1)
wsInput.Rows(1).Copy
wsOutput.Rows(1).PasteSpecial Paste:=xlPasteColumnWidths, _
Operation:=xlNone, _
SkipBlanks:=False, _
Transpose:=False
```
>
> I am trying to make it work on creation of a new sheet. My knowledge of vba is zero so I'm sure it is not the correct way, however I have managed to get it work. I will add it as an answer. Please comment if there is a better way! – aye cee 2 mins ago
>
>
>
If you want to perform the copy paste when a new sheet is added then you need to make slight amends to the above code as shown below.
```
Private Sub Workbook_NewSheet(ByVal Sh As Object)
Dim wsInput As Worksheet
Set wsInput = Sheets("Template")
wsInput.Rows(1).Copy Sh.Rows(1)
wsInput.Rows(1).Copy
Sh.Rows(1).PasteSpecial Paste:=xlPasteColumnWidths, _
Operation:=xlNone, _
SkipBlanks:=False, _
Transpose:=False
End Sub
```
|
How do I know the last sched time of a process
I've currently run into an issue where a process seems to be stuck somehow; it just doesn't get scheduled, and its status is always 'S'. I have monitored the sched\_switch\_task trace via debugfs for a while and didn't see the process get scheduled. So I would like to know: when was the last time this process was scheduled by the kernel?
Thanks a lot.
|
It *might* be possible using the info in `/proc/pid#/sched` file.
In there you can find these parameters (depending on the OS version, mine is opensuse 3.16.7-21-desktop):
```
se.exec_start : 593336938.868448
...
se.statistics.wait_start : 0.000000
se.statistics.sleep_start : 593336938.868448
se.statistics.block_start : 0.000000
```
The values represent timestamps relative to the system boot time, but in a unit which may depend on your system (in my example the unit is 0.5 msec, for a total value of ~6 days 20 hours and change).
In the last 3 parameters listed above, at most one appears to be non-zero at any time, and I suspect that the respective non-zero value records the time when the process last entered the corresponding state (with the process actively running when all are zero).
So if your process is indeed stuck the non-zero value would have recorded when it got stuck.
Note: this is mostly based on observations and assumptions - I didn't find these parameters documented anywhere, so take them with a grain of salt.
Plenty of other scheduling info in that file, but mostly stats and without documentation difficult to use.
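To pull just those fields for a given process (the pid 1234 below is a placeholder):
```
grep -E 'exec_start|wait_start|sleep_start|block_start' /proc/1234/sched
```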
|
Is pairwise from itertools recipes always giving the same results as zip(a, a[1:])?
For quite a while I used the pairwise function from the itertools recipes. (<https://docs.python.org/3.4/library/itertools.html#itertools-recipes>)
```
def pairwise(iterable):
"s -> (s0,s1), (s1,s2), (s2, s3), ..."
a, b = tee(iterable)
next(b, None)
return zip(a, b)
```
But isn't that always giving the same result as `zip(iterable, iterable[1:])`?
|
`zip(iterable, iterable[1:])` only works on things that can be sliced. `pairwise` works on arbitrary iterables.
To demonstrate, here’s an obscenely complex version of the typical functional infinite list (generator, here) of the Fibonacci sequence:
```
from itertools import tee
def pairwise(iterable):
"s -> (s0,s1), (s1,s2), (s2, s3), ..."
a, b = tee(iterable)
next(b, None)
return zip(a, b)
def cons(x, xs):
yield x
yield from xs
def take(count, l):
it = iter(l)
for i in range(count):
yield next(it)
def lazy_yield_from(get_iterable):
yield from get_iterable()
infinite_fibonacci, _if = tee(
cons(1, cons(1, lazy_yield_from(lambda: (a + b for a, b in pairwise(_if))))))
print(list(take(5, infinite_fibonacci)))
```
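For a more down-to-earth illustration: a generator has no `__getitem__`, so the slicing variant fails outright while `pairwise` works fine (a minimal sketch):
```
def gen():
    yield from [1, 2, 3]

print(list(pairwise(gen())))  # [(1, 2), (2, 3)]
# zip(gen(), gen()[1:])       # TypeError: 'generator' object is not subscriptable
```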
|
Correct ISO value for AVFoundation camera
While working with `AVFoundation` I use these two properties to know the minimum and the maximum ISO a device camera can use:
```
AVCaptureDeviceFormat *activeFormat = _videoDevice.activeFormat;
minISO = activeFormat.minISO;
maxISO = activeFormat.maxISO;
```
But these lines give me the following results back:
```
minISO = 23
maxISO = 736
```
But there is no such thing as ISO 23 or 736. In fact, if you set the camera to ISO 23 it will adjust itself to ISO 50, and ISO 736 will make the camera adjust to ISO 800.
Is there a way to know the real ISOs a given camera can use before setting it up?
|
You can define a list of standard ISO values:
```
let isos: [Float] = [32, 50, 64, 80, 100, 125, 160, 200, 250, 320, 400, 500, 640, 800, 1000, 1250, 1600, 1800]
```
And after initializing the capture device, filter by the min and max ISO values:
```
var isos_available: [Float] = []
for one_iso in isos {
if one_iso >= min_iso && one_iso <= max_iso {
isos_available.append(one_iso)
}
}
```
Also for the shutter speed:
```
var shutters: [Float] = [1, 2, 4, 8, 15, 30, 60, 125, 250, 500, 1000, 2000, 4000, 8000]
var shutters_available: [Float] = []
let min_seconds = CMTimeGetSeconds(self.device.activeFormat.minExposureDuration)
let max_seconds = CMTimeGetSeconds(self.device.activeFormat.maxExposureDuration)
for one_shutter in shutters {
let seconds = 1.0 / Float64(one_shutter)
if seconds >= min_seconds && seconds <= max_seconds {
shutters_available.append(one_shutter)
}
}
```
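Once you have values from the filtered lists, a sketch of applying them with `setExposureModeCustom` (assuming `device` is your `AVCaptureDevice`; the ISO and shutter values are illustrative, and this uses the modern Swift spelling of the API):
```
do {
    try device.lockForConfiguration()
    // 1/125 s shutter at ISO 100, both taken from the filtered lists above
    let duration = CMTimeMake(value: 1, timescale: 125)
    device.setExposureModeCustom(duration: duration, iso: 100, completionHandler: nil)
    device.unlockForConfiguration()
} catch {
    print("Could not lock device for configuration: \(error)")
}
```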
|
Fabric API key not valid
I am trying to install Fabric to my iOS app. I downloaded the Fabric app to install Crashlytics but when I add the run script in build phase provided by the app, after I build the project I get the error:
>
> line 2: 1:myfirebaseid:ios:myfirebaseid=1:my:app:id: command not
> found Fabric.framework/run 1.7.0 (208) error: Fabric: Configuration
> Issue
>
>
> Fabric API key not valid. Your Fabric run script build phase should
> contain your API key: ./Fabric.framework/run INSERT\_YOUR\_API\_KEY
> INSERT\_YOUR\_BUILD\_SECRET
>
>
>
Some answers suggested removing Fabric from the plist, which I did, and the error went away. But later I wasn't able to complete the installation of Fabric, and I think that is the reason. So where can I find my Fabric API key and secret?
|
After logging in to **Fabric.io**, select **Settings**
[](https://i.stack.imgur.com/rtxGu.png)
in that page select **Organizations**
[](https://i.stack.imgur.com/NqFl9.png)
Select your app organization
[](https://i.stack.imgur.com/0QtGE.png)
In your fabric organization page select API Key
[](https://i.stack.imgur.com/s4lMh.png)
And add that number in your info.plist
[](https://i.stack.imgur.com/S537o.png)
and here in your shell run script
[](https://i.stack.imgur.com/f7049.png)
|
Trying to call tape.gradient on a non-persistent tape while it is still active
Why is TensorFlow giving me the runtime error (in the title)?
I'm using WinPython3.5.4.2 and have installed TensorFlow 1.8.0. I've been following the tutorial at <https://www.tensorflow.org/get_started/eager> up to the section titled "Training loop".
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-9-e08164fd8374> in <module>()
14 for x, y in train_dataset:
15 # Optimize the model
---> 16 grads = grad(model, x, y)
17 optimizer.apply_gradients(zip(grads, model.variables),
18 global_step=tf.train.get_or_create_global_step())
<ipython-input-7-08164b502799> in grad(model, inputs, targets)
6 with tf.GradientTape() as tape:
7 loss_value = loss(model, inputs, targets)
----> 8 return tape.gradient(loss_value, model.variables)
C:\[deleted]\WinPython3.5.4.2\python-3.5.4.amd64\lib\site-packages\tensorflow\python\eager\backprop.py in gradient(self, target, sources, output_gradients)
765 flat_grad = imperative_grad.imperative_grad(
766 _default_vspace, self._tape, [target], flat_sources,
--> 767 output_gradients=output_gradients)
768
769 if not self._persistent:
C:\[deleted]\WinPython3.5.4.2\python-3.5.4.amd64\lib\site-packages\tensorflow\python\eager\imperative_grad.py in imperative_grad(vspace, tape, target, sources, output_gradients)
61 """
62 return pywrap_tensorflow.TFE_Py_TapeGradient(
---> 63 tape._tape, vspace, target, sources, output_gradients) # pylint: disable=protected-access
RuntimeError: Trying to call tape.gradient on a non-persistent tape while it is still active.
```
|
I suspect in your sample you're invoking `tape.gradient()` within the `with tf.GradientTape()` context as opposed to outside of it. Changing from:
```
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, model.variables)
```
to
```
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
# Notice the change in indentation of the line below
return tape.gradient(loss_value, model.variables)
```
should cause the error to go away.
TensorFlow operations executed within the context of a `GradientTape` are "recorded" so that the recorded computation can be later differentiated. This recording costs memory (since tensors materialized by intermediate operations have to be kept alive). Invoking `tape.gradient()` within the `GradientTape` context manager would mean that the gradient computation should be recorded as well and tensors created during the gradient computation need to be kept alive. Often this isn't what the user intended - the `tape.gradient()` call is only accidentally inside the context manager, leading to a larger memory footprint than necessary. Hence the error. Though, arguably the error message string isn't particularly well phrased (and I believe will be improved in releases after TensorFlow 1.8).
Quoting from the [documentation](https://www.tensorflow.org/api_docs/python/tf/GradientTape)
>
> By default, the resources held by a `GradientTape` are released as soon as `GradientTape.gradient()` method is called. To compute multiple gradients over the same computation, create a `persistent` gradient tape. This allows multiple calls to the `gradient()` method as resources are released when the tape object is garbage collected.
>
>
>
So, if you really do want to record the gradient computation (for example, to compute second order derivatives), then you could create a persistent tape and keep the `.gradient()` call inside the context manager. For example:
```
x = tfe.Variable(3.0)
with tf.GradientTape(persistent=True) as g:
y = x * x
dy = g.gradient(y, x)
d2y = g.gradient(dy, x)
print(dy)
print(d2y)
```
Eager execution is a relatively new feature in TensorFlow, feedback on it is more than welcome. If you think that the error message could be better (it could be!) and/or the default should be changed (for example, persistent by default and users particularly concerned about memory overheads could explicitly choose a non-persistent tape) - don't hesitate to chime in by providing [feedback on GitHub](https://github.com/tensorflow/tensorflow/issues/new)
Hope that helps!
|
OpenGL, Rotation around a point doesn't work still around the origin
I did the (translate, rotate, translate) thing but it still rotates around the origin. All I did is that when you press the r key, the trick (translate, rotate, translate) runs. The result is that it is still rotating around the origin.
```
#include <gl/glut.h>
void OnKeyPress(unsigned char key, int x, int y)
{
if (key == 27)
exit(0);
switch(key)
{
case 'r':
case 'R':
// trick start here *********************
glTranslatef(-60,-20,0);
glRotatef(10, 0, 0, 1);
glTranslatef(60, 20, 0);
glutPostRedisplay();
break;
};
}
void OnDisplay()
{
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(0.0f,0.5f,1.0f);
glBegin(GL_TRIANGLES);
glVertex2f(20, 20);
glVertex2f(60, 20);
glVertex2f(20, 100);
glEnd();
glFlush();
}
int main( int argc, char *argv[])
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA);
glutInitWindowPosition(100, 100);
glutInitWindowSize(800, 600);
glutCreateWindow("OpenGL Lab1" );
glutDisplayFunc(OnDisplay);
glutKeyboardFunc(OnKeyPress);
glClearColor(1.0, 1.0, 1.0, 0.0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-100, 100, -100, 100);
glutMainLoop();
return 0;
}
```
|
You need to switch the order of your `glTranslate` calls, because the last transformation done is the first one applied to the vertex. See, every transformation is a matrix that your current matrix is multiplied by. The resulting matrix is then multiplied with the vertex in order to determine its final position.
So if you have:
```
glTranslatef(-60,-20,0); (T1)
glRotatef(10, 0, 0, 1); (R)
glTranslatef(60, 20, 0); (T2)
```
and also have an old matrix Mo stored from the previous frame, you will have:
```
M = (((Mo * T1) * R) * T2)
```
in the end, this M is multiplied by each vertex Vo, to determine its final position:
```
V = M * Vo
```
Matrix multiplications are non-commutative, but they are associative, thus the parenthesis order does not matter, and all these transformations are mathematically equivalent to:
```
V = Mo * T1 * R * (T2 * Vo)
```
Note that (T2 \* Vo) is a vertex transformed by T2 (your last translation). If we call it V1, we have:
```
V = Mo * T1 * (R * V1)
```
See that V1, your original vertex transformed by T2, is now in turn being transformed by the rotation R in (R \* V1), resulting in another vertex, now translated and then rotated. Keep going on solving the expression right to left and you will have all the transformations applied to the vertex in the inverse order on where they were "called" in the OpenGL code.
I assume you know why you have to translate-rotate-untranslate in order to have the rotation around the point you want, but you got it wrong when you said that the "thing" was being rotated around the origin in your code. In fact, if you pay attention, you will notice it was being rotated around point (-60, -20).
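Concretely, here is a sketch of the key handler with the two translations swapped, assuming the intended pivot is the point (60, 20) used in the original code:
```
case 'r':
case 'R':
    glTranslatef(60, 20, 0);   /* applied to the vertex last: move the pivot back */
    glRotatef(10, 0, 0, 1);    /* rotate around the origin */
    glTranslatef(-60, -20, 0); /* applied to the vertex first: move (60,20) to the origin */
    glutPostRedisplay();
    break;
```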
Finally, you are not using the model-view matrix, but doing all the transformations on the projection matrix (which incidentally works, since you are not resetting it every frame, as most applications do). You should not be doing this; instead, use GL\_MODELVIEW for this kind of transformation. Try finishing your initialization code with:
```
...
Set_Transformations(); // i.e. your existing projection setup (gluOrtho2D, etc.)
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glutMainLoop();
}
```
|
Truncated element side-by-side with non-truncated element
I have a container with two parts of text: Variable length part and static part.
I wish for the variable length part to be truncated (a-la `text-overflow: ellipsis`) so that the container remains at one line at all times.
Here's an image to illustrate:
[](https://i.stack.imgur.com/ROVsN.png)
---
Now, I know I can solve it fairly simply with `display: flex` on the container like so:
```
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
.container {
display: flex;
}
.variable {
margin: 0;
text-overflow: ellipsis;
overflow: hidden;
white-space: nowrap;
}
.static {
padding: 0 5px;
white-space: nowrap;
}
```
```
<span class="container">
<span class="variable">Lorem ipsum dolor sit amet, consectetur adipisicing elit. Eum dolorum repudiandae in, delectus similique quos natus facilis non odit laudantium unde nostrum aperiam doloremque magni cum ipsum recusandae repellat iusto quo architecto. Ea reiciendis, natus at fuga officiis, cupiditate voluptatem in, sed quae vero est nesciunt quidem tempora, dignissimos sapiente cumque eveniet. Eaque, fugiat voluptates mollitia veniam, blanditiis aut deleniti. Nesciunt sapiente vitae aut, fugit in commodi neque hic reiciendis, et cupiditate possimus, quod quaerat ducimus ea cumque nihil praesentium. Nesciunt velit magni asperiores optio quam, provident eum earum? Illo et illum, autem ratione! Perferendis non facere, ratione quas beatae?</span>
<span class="static">| Static Text</span>
</span>
```
However, I require IE8 support or higher. So flexbox is decidedly out.
I've tried
- `display: table` and `display: table-cell`, but that doesn't play nice with variable lengths. The only solution I've found was to set the width on the variable length text which is unacceptable.
- Playing with floats, but even there, without setting the width for the either text container, I couldn't get the behavior specified above.
Does anyone have other ideas? I'd like to avoid the "I'll assume that the static part takes ~25% so I'll give 75% to the variable width one".
|
Here is a solution with a nested CSS table. The key is to set the inner table to `table-layout:fixed;`, which is required for CSS `ellipsis` to work.
**[jsFiddle](https://jsfiddle.net/ysyurxam/)**
```
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
.container {
display: table;
width: 100%;
}
.variable,
.static {
display: table-cell;
white-space: nowrap;
}
.variable {
width: 100%;
}
.static {
padding: 0 5px;
}
.variable-table {
display: table;
table-layout: fixed;
width: 100%;
}
.variable-cell {
display: table-cell;
text-overflow: ellipsis;
overflow: hidden;
}
```
```
<span class="container">
<span class="variable">
<span class="variable-table">
<span class="variable-cell">Lorem ipsum dolor sit amet, consectetur adipisicing elit. Eum dolorum repudiandae in, delectus similique quos natus facilis non odit laudantium unde nostrum aperiam doloremque magni cum ipsum recusandae repellat iusto quo architecto. Ea reiciendis, natus at fuga officiis, cupiditate voluptatem in, sed quae vero est nesciunt quidem tempora, dignissimos sapiente cumque eveniet. Eaque, fugiat voluptates mollitia veniam, blanditiis aut deleniti. Nesciunt sapiente vitae aut, fugit in commodi neque hic reiciendis, et cupiditate possimus, quod quaerat ducimus ea cumque nihil praesentium. Nesciunt velit magni asperiores optio quam, provident eum earum? Illo et illum, autem ratione! Perferendis non facere, ratione quas beatae?
</span>
</span>
</span>
<span class="static">| Static Text</span>
</span>
```
|
Does python's hash function remain identical across different versions?
I'm currently using `hash` on tuples of integers and strings (and nested tuples of integers and strings, etc.) in order to compute the uniqueness of some objects. Barring hash collisions, I wonder: is the `hash` function on those data types guaranteed to return the same result for different versions of Python?
|
No. Apart from long-standing differences between 32- and 64-bit versions of Python, the hashing algorithm [was changed in Python 3.3](http://docs.python.org/3/reference/datamodel.html#object.__hash__) to resolve a security issue:
>
> By default, the **hash**() values of str, bytes and datetime objects are “salted” with an unpredictable random value. Although they remain constant within an individual Python process, they are not predictable between repeated invocations of Python.
>
>
> This is intended to provide protection against a denial-of-service caused by carefully-chosen inputs that exploit the worst case performance of a dict insertion, O(n^2) complexity. See <http://www.ocert.org/advisories/ocert-2011-003.html> for details.
>
>
> Changing hash values affects the iteration order of dicts, sets and other mappings. Python has never made guarantees about this ordering (and it typically varies between 32-bit and 64-bit builds).
>
>
>
As a result, from 3.3 onwards `hash()` is not even guaranteed to return the same result across different invocations of the same Python version.
|
what do you mean by interactive shell?
UNIX: The Complete Reference, Second Edition
by Kenneth H. Rosen et al.
>
> You can start another shell after you log in by using the name of the
> shell as a command; for example, to start the Korn shell, you could
> type ksh at the command prompt. This type of shell is not a login
> shell, and you do not have to log in again to use it, but it is still
> an interactive shell, meaning that you interact with the shell by
> typing in commands (as opposed to using the shell to run a script, as
> discussed in Chapter 20). The instances of the shell that run in a
> terminal window when you are using a graphical interface are also
> interactive non-login shells. When you start a non-login shell, it
> does not read your .profile, .bash\_profile, or .login file (or your
> .logout file), but it will still read the second shell configuration
> file (such as .bashrc). This means that you can test changes to your
> .bashrc by starting another instance of the shell, but if you are
> testing changes to your .profile or .login, you must log out and then
> back in to see the results.
>
>
>
I was going through the above lines and I don't understand what is meant by an interactive shell.
Is it true that .profile is not read if I am using a terminal?
Moreover, what does it mean to say that the Bourne shell is not an interactive shell while bash/csh are interactive shells?
|
An interactive shell is simply any shell process that you use to type commands, and get back output from those commands. That is, a shell with which you interact.
So, your login shell is interactive, as are any other shells you start manually, as described in the excerpt you quoted in your question. By contrast, when you run a shell script, a non-interactive shell is started that runs the commands in the script, and then exits when the script finishes.
The Bourne shell can be used as an interactive shell, just like `bash` or `tcsh`. In fact, many systems, such as FreeBSD, use `sh` as the default user shell. Modern shells like `bash`, `zsh`, `tcsh`, etc have many features that Bourne shell doesn't have, that make them more comfortable and convenient for interactive use (command history, completion, etc).
Interactive non-login shells (that is, shells you start manually from another shell or by opening a terminal window) don't read your `.login` or `.profile` files. These are only read and executed by login shells (shells started by the `login` system process, or by your X display manager), so the commands and settings they contain are only applied once, at the beginning of your login session. So, when you start a terminal, the shell that it spawns for you does not read your login files (`.login` for c-style shells, `.profile` for bourne style shells), but it does read the `.cshrc`, `.bashrc` etc files.
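If you ever need to check whether the shell you're in is interactive, a common test in bourne-style shells is to look for the `i` flag in the special `$-` variable:
```
case $- in
*i*) echo "This shell is interactive" ;;
*) echo "This shell is not interactive" ;;
esac
```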
|
Remove all the items of Infragistics UltraListView in C#.net
I have an `UltraListView` and I want to remove all of its items in a method.
I am using the Infragistics UltraListView in C#.NET.
|
The [Items](http://help.infragistics.com/Help/Doc/WinForms/2013.2/CLR4.0/html/Infragistics4.Win.UltraWinListView.v13.2~Infragistics.Win.UltraWinListView.UltraListView~Items.html) property of the [UltraListView](http://help.infragistics.com/Help/Doc/WinForms/2013.2/CLR4.0/html/Infragistics4.Win.UltraWinListView.v13.2~Infragistics.Win.UltraWinListView.UltraListView_members.html) is an [UltraListViewItemsCollection](http://help.infragistics.com/Help/Doc/WinForms/2013.2/CLR4.0/html/Infragistics4.Win.UltraWinListView.v13.2~Infragistics.Win.UltraWinListView.UltraListViewItemsCollection_members.html) which can be cleared:
```
listView.Items.Clear();
```
|
How do I correctly pass a block handler into this function in Swift?
After reading about Swift closures and trying to use the same syntax to pass an anonymous function into the handler, I cannot get it to compile. How can I correctly re-create the following functioning Objective-C code in Swift?
Here is the Objective-C code I am starting from:
```
[self.motionManager
startAccelerometerUpdatesToQueue:[[NSOperationQueue alloc] init]
withHandler:^(CMAccelerometerData *data, NSError *error)
{
dispatch_async(dispatch_get_main_queue(),
^{
float xx = data.acceleration.x;
float yy = -data.acceleration.y;
float angle = atan2(yy, xx);
self.dropitBehavior.gravity.angle = angle;
});
}
];
```
Here are few failed attempts at re-creating the code in swift:
```
self.motionManager.startAccelerometerUpdatesToQueue(NSOperationQueue(), withHandler: {
(data: CMAccelerometerData(), error: NSError()) -> Void = {
dispatch_async(dispatch_get_main_queue()) {
var xx = data.acceleration.x
var yy = -data.acceleration.y
var angle = atan2(yy, xx)
self.dropitBehavior.gravity.angle = angle
}
}
})
self.motionManager.startAccelerometerUpdatesToQueue(NSOperationQueue(), withHandler: {
(data: CMAccelerometerData(), error: NSError()) {
dispatch_async(dispatch_get_main_queue()) {
var xx = data.acceleration.x
var yy = -data.acceleration.y
var angle = atan2(yy, xx)
self.dropitBehavior.gravity.angle = angle
}
}
})
```
|
By doing `CMAccelerometerData()` and `NSError()` you are actually calling the initializers of those classes. You just need to use their types. However, because in objective-C, pointers can be nil, when you translate the types to Swift, you need to use optionals. The convention is to use [Implicitly Unwrapped Optionals](https://developer.apple.com/library/prerelease/ios/documentation/swift/conceptual/swift_programming_language/TheBasics.html#//apple_ref/doc/uid/TP40014097-CH5-XID_436). Also, you separate the parameters of an anonymous closure with `in` not additional curly brackets:
```
self.motionManager.startAccelerometerUpdatesToQueue(NSOperationQueue(), withHandler: {
(data: CMAccelerometerData!, error: NSError!) in
// internal implementation
})
```
Also, because the types can be inferred from the parameter type, you don't have to even specify the types for the parameters:
```
self.motionManager.startAccelerometerUpdatesToQueue(NSOperationQueue(), withHandler: {
(data, error) in
// internal implementation
})
```
Also, if a block is the last parameter to a method / function call, you can define it outside of the parenthesis:
```
self.motionManager.startAccelerometerUpdatesToQueue(NSOperationQueue()) {
(data, error) in
// internal implementation
}
```
This way you don't need the closing `)` after the closure.
That creates the final version with your internal implementation:
```
self.motionManager.startAccelerometerUpdatesToQueue(NSOperationQueue()) {
(data, error) in
dispatch_async(dispatch_get_main_queue()) {
var xx = data.acceleration.x
var yy = -data.acceleration.y
var angle = atan2(yy, xx)
self.dropitBehavior.gravity.angle = angle
}
}
```
|
ASP MVC Razor encode special characters in input placeholder
This is my code:
Model:
```
[Required]
[DataType(DataType.Text)]
[Display(Name = "Your company's name")]
public string CompanyName { get; set; }
```
View:
```
@Html.TextBoxFor(m => m.CompanyName, new { @class = "account-input", @placeholder = @Html.DisplayNameFor(m => m.CompanyName), @id = "companyname" })
```
It will be rendered like this:
>
> Your company's name
>
>
>
html output:
```
<input class="account-input" data-val="true" data-val-required="The Your company's name field is required." id="companyname" name="CompanyName" placeholder="Your company&#39;s name" type="text" value="">
```
It should be look like this:
>
> Your company's name
>
>
>
Why does the text not render correctly and how can I prevent this?
I already tried this:
```
@Html.TextBoxFor(m => m.CompanyName, new { @class = "account-input", @placeholder = @Html.Raw(@Html.DisplayNameFor(m => m.CompanyName)), @id = "companyname" })
```
and this
```
@Html.TextBoxFor(m => m.CompanyName, new { @class = "account-input", @placeholder = @Html.Encode(@Html.DisplayNameFor(m => m.CompanyName)), @id = "companyname" })
```
|
I think this post will help you:
[HTML encode decode c# MVC4](https://stackoverflow.com/questions/9050145/html-encode-decode-c-sharp-mvc4)
I think there are other ways to get this behaviour, but this is one option of using the TextBox:
```
@Html.TextBox("CompanyName", HttpUtility.HtmlEncode("Your company's name"))
```
There is also `HttpUtility.HtmlDecode`, which might help with your save action.
**update**
if you wrap `HttpUtility.HtmlDecode` around your place holder:
```
@Html.TextBoxFor(m => m.CompanyName, new { @class = "account-input",
@placeholder = HttpUtility.HtmlDecode(Html.DisplayNameFor(x => x.CompanyName).ToHtmlString()),
@id = "companyname" })
```
the placeholder returns as:
placeholder="Your company's name"
|
ROW\_NUMBER() function returns varchar value instead of int
I'm using the `ROW_NUMBER()` function in SQL Server to create a new column called `RowNumber`.
But when I later reference this new field in the `WHERE` clause of the same query, I get this error, even though the field should be an integer:
>
> Conversion failed when converting the varchar value 'RowNumber' to data type int
>
>
>
Here is my query:
```
SELECT
ROW_NUMBER() OVER(PARTITION BY c.Node_base ORDER BY sum(c.Score) DESC) AS "RowNumber",
c.Node_base, c.Node_forslag, sum(Score)
FROM
t_input as c
WHERE 'RowNumber' < 11
GROUP BY c.Node_base, c.Node_forslag
ORDER BY c.Node_base desc
```
|
You can't reference a calculated column from the `SELECT` clause in the `WHERE` clause - and even if you could, single quotes introduce a string literal, rather than a column reference. Use a CTE or subquery.
```
;With Sums as (
SELECT c.Node_base, c.Node_forslag,SUM(c.Score) as TotScore
FROM t_input as c
GROUP BY c.Node_base, c.Node_forslag
), NumberedRows as (
SELECT *,ROW_NUMBER() OVER(PARTITION BY Node_base ORDER BY TotScore DESC) AS RowNumber
FROM Sums
)
select * from NumberedRows
WHERE RowNumber <11
order by Node_base desc
```
There's no need to quote the name `RowNumber` at all - it's not a reserved word, and it contains no special characters.
|
Multithreaded access to memory
Good morning,
Say I have six different threads, and I want to share the same data with each of them at the same time. Can I make a class variable with the data I want to share and have each thread access that memory concurrently without degrading performance, or is it preferable to pass a true copy of the data to each thread?
Thank you very much.
|
It depends entirely on the data;
- if the data is immutable (or mutable but you don't actually mutate it), then chuck all the threads at it - great
- if you need to mutate it, but no two threads will ever depend on the data mutated by another - great
- if you need to mutate it, and there are conflicts but you can sensibly synchronize access to the data such that there is no risk of two threads deadlocking etc - great, but not always trivial
- if it is not safe to make any assumptions, then a true clone of the data is the *safest* approach, but has the most overhead in terms of data duplication; if the data is cheap to copy, this may be fine - and indeed may outperform synchronization
- if the threads *do* co-depend on each other, then you have no option other than to figure out some kind of sensibly locking strategy; again - to stress: deadlocks are a problem here - some ideas:
- always provide a timeout when obtaining a lock (see the sketch after this list)
- if you need to lock two items, it may help to try locking both *eagerly* (rather than locking one at the start, and the other after you've done lots of changes) - then you can simply release and re-take the locks, without having to either undo changes, or put the changes back into a particular state
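
As a sketch of the lock-with-timeout advice (the `Account` class and the five-second timeout are illustrative choices, not a prescription):
```
using System;
using System.Threading;

class Account
{
    private readonly object _sync = new object();
    private decimal _balance;

    public void Deposit(decimal amount)
    {
        // A timeout turns a potential deadlock into a recoverable error
        // instead of a thread that hangs forever.
        if (!Monitor.TryEnter(_sync, TimeSpan.FromSeconds(5)))
            throw new TimeoutException("Could not acquire the account lock.");
        try
        {
            _balance += amount;
        }
        finally
        {
            Monitor.Exit(_sync);
        }
    }
}
```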
|
.asObservable dont want to work with Observable.forkJoin
I have service:
```
export class ConfigService {
private _config: BehaviorSubject<object> = new BehaviorSubject(null);
public config: Observable<object> = this._config.asObservable();
constructor(private api: APIService) {
this.loadConfigs();
}
loadConfigs() {
this.api.get('/configs').subscribe( res => this._config.next(res) );
}
}
```
---
Trying to call this from component:
```
...
Observable.forkJoin([someService.config])
.subscribe( res => console.log(res) ) //not working
someService.config.subscribe( res => console.log(res) ) // working
...
```
How can i use `Observable.forkJoin` with `Observable` variable `config`?
I need to store the configs in the service and wait until they and the other requests have finished before stopping the loader.
|
Since you're using `BehaviorSubject` you should know that you can call `next()` and `complete()` manually.
The `forkJoin()` operator emits only when all of its source Observables have emitted at least one value **and** they have all completed. Since you're using a Subject and the `asObservable` method, the source Observable never completes and thus the `forkJoin` operator never emits anything.
Btw, it doesn't make much sense to use `forkJoin` with just one source Observable. Also maybe have a look at `zip()` or `combineLatest()` operators that are similar and maybe it's what you need.
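For instance, a minimal sketch with `combineLatest`, which emits as soon as each source has produced at least one value and does not require completion (`otherService.something` is a hypothetical second source):
```
Observable.combineLatest(someService.config, otherService.something)
    .subscribe(([config, something]) => console.log(config, something));
```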
Two very similar question:
- [Observable forkJoin not firing](https://stackoverflow.com/questions/42809658/observable-forkjoin-not-firing/42809962#42809962)
- [ForkJoin 2 BehaviorSubjects](https://stackoverflow.com/questions/39722440/forkjoin-2-behaviorsubjects/39727134#39727134)
|
Is there a built in way to identify instances of a class?
I am doing some diagnostic logging with one of my C#.NET projects and I would like to be able to log an identifier that represents a specific instance of a class. I know I could do this with a static variable that just gets incremented every time a class instance is created but I am just wondering if there is any built-in way in the .NET framework to do this. Maybe using reflection or something.
|
Just to add to what [Henk said in his answer](https://stackoverflow.com/questions/3569776/is-there-a-built-in-way-to-identify-instances-of-a-class/3569800#3569800) about `GetHashCode`, and to mitigate some of the negative comments he received on that answer:
There is a way to call `GetHashCode` on **any** object that is independent of that object's value, regardless of whether or not its type has overridden `GetHashCode`.
Take a look at [`System.Runtime.CompilerServices.RuntimeHelpers.GetHashCode`](http://msdn.microsoft.com/en-us/library/11tbk3h9.aspx).
This value is not guaranteed to be unique, of course. Neither is a `Guid` (though for that *not* to be unique would involve odds that are legitimately microscopic).
I'd say your gut was right about the static counter variable. I should mention, though, that simply incrementing it using the `++` operator in each object's constructor is not thread-safe. If it's possible you could be instantiating instances of a class from multiple threads you would want to use [`Interlocked.Increment`](http://msdn.microsoft.com/en-us/library/dd78zt0c.aspx) instead.
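A minimal sketch of that counter approach (the `Widget` class is hypothetical):
```
using System.Threading;

public class Widget
{
    private static int _instanceCount;
    public int InstanceId { get; }

    public Widget()
    {
        // Interlocked.Increment is atomic, so the id stays unique even when
        // instances are constructed from multiple threads at once.
        InstanceId = Interlocked.Increment(ref _instanceCount);
    }
}
```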
|
Modifying Wordpress post status on Publish
I am attempting to validate fields on a custom post type in the admin panel Edit Post page.
When the user clicks "Publish" I want to validate fields in the POST data, and change the post\_status to "pending" if the data does not pass the tests. When this occurs, I'd also like to add errors to the page in the admin notices area.
I've been trying this with an added hook to the "wp\_insert\_post" action which also saves our own data. I'm not certain of the order of operations, but I'm assuming that the wp\_insert\_post events happen first, and then my function gets called via the hook.
The problem is that it's the Wordpress function which is doing the post publish actions, so by the time I get to validate data, Wordpress has already saved the post with a status of "publish". What I need to do is either prevent that update, or change the status back to "pending", but I'm having little success in finding a way to do this within the API.
So, here's an order of operations I'd like to effect:
```
1. admin user edits post data and clicks "Publish"
2. via wp_insert_post, my data validation and post meta save routine is called
3. If data passes validation, post status is "published"
4. Otherwise, post status set to "pending" & message shown in admin notice area
```
Surely someone has done this, but extensive Googling just leads me to the same seemingly irrelevant pages. Can someone point me in the right direction here? Thanks in advance-
**UPDATE:**
So, RichardML was indeed correct, hooking to the wp\_insert\_post\_data filter gave me the right place to validate admin post edit page fields. I'm updating this however to note what the rest of the solution is, specifically getting the reason reported in the admin notice area.
First off, you can't just output data or set a field, because the admin page is the result of a redirect; by the time the admin post page is rendered again, the `admin_notices` action has already fired. The trick is something I picked up from another forum, and it's hackish, but it works.
What you'll need to do in your validation filter function, if you determine that you need to display errors, is use `update_option()` to store a blog option with a unique name (I used `publish_errors`). Its value should be HTML code in a div with a class of "error".
You will also need to add an action hook for `admin_notices`, pointing at a function which checks for the existence of the `publish_errors` option and, if it finds it, prints it to the page and deletes it with `delete_option()`.
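A minimal sketch of that option-based approach (the callback name is illustrative):

```
// In the validation filter, when validation fails:
update_option('publish_errors',
    '<div class="error"><p>Post saved as pending: required fields are missing.</p></div>');

// Print and clear the stored notice on the next admin page load.
add_action('admin_notices', 'my_show_publish_errors');
function my_show_publish_errors() {
    $errors = get_option('publish_errors');
    if ($errors) {
        echo $errors;                    // already wrapped in a div with class "error"
        delete_option('publish_errors'); // show the notice only once
    }
}
```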
|
You can use the [`wp_insert_post_data`](http://codex.wordpress.org/Plugin_API/Filter_Reference/wp_insert_post_data) filter to inspect and modify post data before it's inserted into the database.
---
In response to your update, I don't think it's necessary to temporarily add an option to the database. It should be possible to simply add a query string variable to the WordPress redirect, something like this:
```
add_filter('wp_insert_post_data', 'my_post_data_validator', '99');
function my_post_data_validator($data) {
if ($data['post_type'] == 'post') {
        // validate the submitted fields here; if the post data is invalid:
$data['post_status'] = 'pending';
add_filter('redirect_post_location', 'my_post_redirect_filter', '99');
}
return $data;
}
function my_post_redirect_filter($location) {
    remove_filter('redirect_post_location', __FUNCTION__, '99'); // detach so the flag is added only once
return add_query_arg('my_message', 1, $location);
}
add_action('admin_notices', 'my_post_admin_notices');
function my_post_admin_notices() {
if (!isset($_GET['my_message'])) return;
switch (absint($_GET['my_message'])) {
case 1:
$message = 'Invalid post data';
break;
default:
$message = 'Unexpected error';
}
echo '<div id="notice" class="error"><p>' . $message . '</p></div>';
}
```
|
Placement of WITH(NOLOCK) in nested queries
In the following query, where would I place `WITH(NOLOCK)`?
```
SELECT *
FROM (SELECT *
FROM (SELECT *
FROM (SELECT *
FROM (SELECT *
FROM dbo.VBsplit(@mnemonicList, ',')) a) b
JOIN dct
ON dct.concept = b.concept
WHERE b.geo = dct.geo) c
JOIN dct_rel z
ON c.db_int = z.db_int) d
JOIN rel_d y
ON y.rel_id = d.rel_id
WHERE y.update_status = 0
GROUP BY y.rel_id,
d.concept,
d.geo_rfa
```
|
You should not put `NOLOCK` anywhere in that query. If you are trying to prevent readers from blocking writers, a much better alternative is `READ COMMITTED SNAPSHOT` (a one-line database setting, sketched after the links below). Of course, you should read about it first, just like you should read about `NOLOCK` before blindly throwing it into your queries:
- [Is the `NOLOCK` SQL Server hint bad practice?](https://stackoverflow.com/questions/1452996/is-the-nolock-sql-server-hint-bad-practice)
- [Is `NOLOCK` always bad?](https://dba.stackexchange.com/questions/10655/is-nolock-always-bad)
- [What risks are there if we enable read committed snapshot in SQL Server?](https://dba.stackexchange.com/questions/5014/what-risks-are-there-if-we-enable-read-committed-snapshot-in-sql-server)
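A minimal sketch of enabling it, assuming a placeholder database name:

```
-- Run once per database; YourDatabase is a placeholder name.
ALTER DATABASE YourDatabase
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE; -- rolls back open transactions so the change can take effect
```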
Also, since you're using SQL Server 2008, you should probably replace your `VBSplit()` function with a table-valued parameter; this will be much more efficient than splitting up a string, even if the function is written in CLR as implied.
First, create a table type that can hold appropriate strings. I'm going to assume the list is guaranteed to be unique and no individual mnemonic word can be > 900 characters.
```
CREATE TYPE dbo.Strings AS TABLE(Word NVARCHAR(900) PRIMARY KEY);
```
Now, you can create a procedure that takes a parameter of this type, and which sets the isolation level of your choosing in one location:
```
CREATE PROCEDURE dbo.Whatever
@Strings dbo.Strings READONLY
AS
BEGIN
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL --<choose wisely>;
SELECT -- please list your columns here instead of *
FROM @Strings AS s
INNER JOIN dbo.dct -- please always use proper schema prefix
ON dct.concept = s.Word
...
END
GO
```
Now you can simply pass a collection (such as a DataTable) in from your app, be it C# or whatever, and not have to assemble or deconstruct a messy comma-separated list at all.
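For illustration, a hedged sketch of the calling side in C# (`connectionString` and the sample words are placeholders):

```
using System.Data;
using System.Data.SqlClient;

static class TvpDemo
{
    static void Run(string connectionString)
    {
        // Build an in-memory table whose single column matches dbo.Strings.
        var table = new DataTable();
        table.Columns.Add("Word", typeof(string));
        table.Rows.Add("GDP");
        table.Rows.Add("CPI");

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.Whatever", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            var p = cmd.Parameters.Add("@Strings", SqlDbType.Structured);
            p.TypeName = "dbo.Strings"; // must match the table type created above
            p.Value = table;
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* consume columns here */ }
            }
        }
    }
}
```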
|
Does Ubuntu 16.04 LTS have Blu-ray support?
I read some time ago that Ubuntu doesn't support burning Blu-ray discs. Does 16.10 now support it? I've tried using K3b, but it displays a blank window; I've also tried some command-line tools, to no avail.
|
There are two aspects:
- Formatting Blu-ray content for entertainment video players. This is indeed in bad shape, because nobody seems to be masochist enough to implement UDF 2.5 filesystem production. That's what the article shown by Terrance describes.
- Burning data to Blu-ray media. This is known to work with the GUI programs K3b and Xfburn. They can put readily formatted images onto BD-R and BD-RE media with one or more layers (25, 50, or 100 GB). They can also produce ISO 9660 filesystems for data storage.
Command-line programs which can write to Blu-ray are growisofs, cdrskin, cdrecord, and xorriso. Assuming we have the data preformatted in the file image.iso:
```
growisofs -Z /dev/sr0=image.iso
cdrskin -v dev=/dev/sr0 -eject image.iso
cdrecord -v dev=/dev/sr0 -eject image.iso
xorriso -as cdrecord -v dev=/dev/sr0 -eject image.iso
```
The program wodim is not suitable, even when it runs under the name "cdrecord".
Have a nice day :)
Thomas
|
Properly handle the deletion of an object only if there are no relationships?
I have a `Person` entity which belongs to a `Department` in a one to many relationship.
I would like to be able to delete the `Department` when there are no more `Person`s associated with it (either through the deletion of a `Person` entity, or a change to a `Person`'s `department` attribute). Right now, I'm trying to do so with the following handler for `NSManagedObjectContextObjectsDidChangeNotification` (currently just trying to detect deletions and delete appropriately):
```
- (void)managedObjectDidChange:(NSNotification *)notification {
    NSSet *deletedObjects = [[notification userInfo] objectForKey:NSDeletedObjectsKey];
    for (NSManagedObject *obj in deletedObjects) {
        if ([obj.entity.name isEqualToString:@"Person"]) {
            NSLog(@"Person deleted");
            NSManagedObject *department = [(Person *)obj department];
            NSLog(@"%lu", (unsigned long)[[department valueForKey:@"person"] count]);
            if ([[department valueForKey:@"person"] count] == 0) {
                NSLog(@"Department has no more people associated with it");
                // deletion code
            }
        }
    }
}
```
However, the count of the number of people associated with the department doesn't change when I delete a person. I am not performing a fetch on the `Department` entity. Is that something I should be doing?
|
Easiest way: implement `willSave` in your Department entity class (You are giving each entity its own class, right? \*), have your department check itself for `[self isDeleted] == NO` and `[[self person] count] == 0`, and delete itself if so. (The check of `isDeleted` isn't optional, because an entity changing itself inside `willSave` triggers another call to willSave. The docs for `NSManagedObject willSave` have more info.) This postpones the delete until the context is flushed back to disk, which shouldn't be a huge problem.
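A minimal sketch of that `willSave` approach, assuming the to-many relationship is named `person` as in the question:

```
// In the Department NSManagedObject subclass:
- (void)willSave {
    [super willSave];
    // The isDeleted guard prevents the recursive willSave call noted above.
    if (![self isDeleted] && [[self valueForKey:@"person"] count] == 0) {
        [[self managedObjectContext] deleteObject:self];
    }
}
```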
If you do need the Department to delete itself the instant the last person leaves it, have your Department entity observe itself with KVO. Register it as an observer on its own "person" property in `awakeFromFetch` and `awakeFromInsert`, and unregister in `willTurnIntoFault`. When the person property changes, check it for empty. This is more work, so only try it if the first way doesn't work for you.
Finally, remember that the Person->Department relationship delete rule needs to be set to 'nullify' (when Person is deleted, it is removed from Department's persons), not 'no action' (when Person is deleted, you take responsibility for cleaning up the Department yourself) or 'cascade' (when any Person is deleted, its Department is deleted too!)
(\* MOGenerator is a very nice helper for maintaining per-entity classes. <http://rentzsch.github.com/mogenerator/> )
|