In SwiftUI how do I put the tabs in a TabbedView at the top of the view?
I have a view with tabs on the bottom; one of the views has subviews. To separate the logic visually, I put the tabs of the subview at the top of the view with the following code, and it works perfectly:
```
self.tabbar.frame = CGRect(x: 0,
                           y: view.safeAreaInsets.top,
                           width: self.view.frame.size.width,
                           height: 50)
```
How do I do this in `SwiftUI`?
|
To do this, you could create your tabs view as a container for the individual tabs, something like this:
```
struct TabbedView: View {
    @State private var selectedTab: Int = 0

    var body: some View {
        VStack {
            Picker("", selection: $selectedTab) {
                Text("First").tag(0)
                Text("Second").tag(1)
                Text("Third").tag(2)
            }
            .pickerStyle(SegmentedPickerStyle())

            switch selectedTab {
            case 0: FirstTabView()
            case 1: SecondTabView()
            case 2: ThirdTabView()
            default: FirstTabView() // switching on an Int must be exhaustive
            }
        }
    }
}
```
Doing this, you are conditionally populating the "Tab page" based on the value of the segmented control.
By using `@State` and `$selectedTab` the segmented control will update the selectedTab value and then re-render the view which will replace the page based on the new value of `selectedTab`.
**Edit**
Switches now work in SwiftUI beta.
|
How to return value from thread (java)
I made a thread like the one below:
```
public class MyThread implements Runnable {
    private int temp;

    public MyThread(int temp) {
        this.temp = temp;
    }

    @Override
    public void run() {
        temp += 10;
        return;
    }

    public int getTemp() {
        return temp;
    }
}
```
but when I try to use `temp` via `getTemp()` I get 0:
```
class Main {
    public static void main(String[] args) {
        MyThread foo = new MyThread(10);
        Thread a = new Thread(foo);
        a.start();
        int aa = foo.getTemp();
        System.out.println(aa);
    }
}
```
I just want the calculation I did in the thread to be stored in some variables for later use.
|
Or simply add
```
...
a.start();
a.join(); // Add this
...
```
to wait for the thread to finish before getting the result.
Your problem is that you're trying to get the result before it has been calculated, so you should wait for the thread to finish before getting the result. This answer is perhaps not the best, but it is the simplest. As other people had already used the `Executors` class, I didn't want to repeat their answers. I would, however, familiarise yourself with `Thread` and its methods before moving on to `Executors`; from your post it appears you may be a novice in this area, and it will help you get a better understanding of threads.
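For reference, here is a complete, runnable sketch of the fix (the class names follow the question; the `compute` helper is added purely for illustration):

```java
// A complete, runnable sketch of the fix: call join() before reading the result.
class MyThread implements Runnable {
    private int temp;
    MyThread(int temp) { this.temp = temp; }
    @Override public void run() { temp += 10; }
    int getTemp() { return temp; }
}

class JoinDemo {
    static int compute() {
        MyThread foo = new MyThread(10);
        Thread a = new Thread(foo);
        a.start();
        try {
            a.join(); // wait for run() to finish; join() also guarantees we see its write to temp
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return foo.getTemp();
    }

    public static void main(String[] args) {
        System.out.println(compute()); // prints 20
    }
}
```

Besides waiting for the thread, `join()` establishes a happens-before relationship, so the main thread is guaranteed to see the updated value of `temp`.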
Thanks to [l4mpi (on the meta site)](https://meta.stackoverflow.com/questions/271700/do-we-upvote-correctness-or-usefulness#comment95629_271700) for pointing out the lack of explanation.
|
Access to HttpContext via static class works "correctly" with different requests
I found [this article](https://www.quickdevnotes.com/better-approach-to-use-httpcontext-outside-a-controller-in-net-core-2-1/) while trying to solve a problem that requires some headers in non-controllers.
I am quite skeptical about the approach and the author is not responding. My major concern is about the approach of having global static `HttpContext`. I was thinking that it **should not** work with two requests. An example of this case is below (together with the approach presented in the article I mentioned):
```
public static class AppContext
{
public static IHttpContextAccessor HttpContextAccessor { get; set; }
public static void Configure(IHttpContextAccessor accessor)
{
HttpContextAccessor = accessor;
}
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env,
IHttpContextAccessor contextAccessor)
{
AppContext.Configure(contextAccessor);
...
}
[Route("api/[controller]")]
[ApiController]
public class ExampleController : ControllerBase
{
[HttpGet("{number}")]
public IActionResult Example(int number)
{
if (number == 1)
{
Thread.Sleep(10000);
}
var result = AppContext.HttpContextAccessor.HttpContext.Request.GetDisplayUrl();
return Ok(result + " " + number);
}
}
```
I want to mention that the author uses the name **App**Context for this static class, and that is exactly what I'd expect (and it is indeed useless then).
However, what confuses me is the actual behavior. I am debugging this by placing a breakpoint at the line with `var result = ...`. I first send a request with `number` = 1, which will sleep for a bit, and then I send a second request with a different value for `number`. I skip the breakpoint for the second request and wait for the first request (with `number` = 1) to stop there. Then I check what `GetDisplayUrl()` returns: it returns a path ending with `/1` (which is indeed the path of the request that slept for 10 seconds). I would have expected it to end with `/2`, since the static `IHttpContextAccessor` field of the static class `AppContext` has been overwritten by the second request in the `Configure()` method.
I believe I am missing something crucial and would be glad if you also provided some sources that I (and others confused) could use to fill the gap.
Could you also give me some more insights on using the approach? Does testability suffer (as I'm using the static class everywhere in the application) and in what way?
|
There are a few things going on here. Technically this will work, simply because `IHttpContextAccessor` is a singleton, so there's nothing technically wrong with persisting it in a static field. Either way, it lasts the life of the application.
`HttpContext`, itself, is scoped, but that's not what's being set here. So, as long as you have access to `IHttpContextAccessor`, you can technically get access to `HttpContext`, though it may be null, depending on where you attempt to do it (i.e. outside the request pipeline).
However, this is just such bad practice it's not even funny. Statics should largely be avoided for good code. They are not testable and they serve to hide dependencies, making your code harder to understand and more fragile.
I've seen some people do something similar to this, but that was to make `HttpContext` itself appear as if it was static, with the goal only of supporting legacy code that assumed a static `HttpContext`. This solution does not help there, as you'd have to change legacy code either way. As such, it's completely useless.
If you need access to `HttpContext` outside of places where it intrinsically exists such as controllers, pages, and views, then simply inject `IHttpContextAccessor` there, and use it directly. This whole `AppContext` thing is a joke and should die in fire.
|
nicely print out a matrix in mathematica
I have a list of lists (matrix) in mathematica. I want to print it out nicely, with the max in every row in bold. How to do this?
Or maybe even further, with a circle outside such max, is this possible?
|
You can use `MatrixForm` to print out matrices nicely:
```
data = RandomInteger[100, {5, 5}];
data // MatrixForm
```
yields

You can draw a circle around the maximum in each row as follows:
```
Map[# /. m : Max[#] :> Framed[m, RoundingRadius -> 1000] &, data] //
MatrixForm
```
yielding

`RoundingRadius -> 1000` uses a ridiculously large parameter to get circles. You may need to adjust the constant depending upon the scale of your display.
You can replace `Framed[m...]` with `Style[m, Bold]` if you prefer bolding.
|
UTF-8 encoded JSON file, trying to parse with JSON module - wide character
I have a pretty simple script in Perl:
```
use JSON;
use open qw/ :std :encoding(utf8) /;
#my $ref = JSON::decode_json($json_contents);
my $path = "/home/chambres/web/x.org/public_html/cgi-bin/links/admin/booking_import/import/file.json";
my $json_contents = slurp_utf8_file($path);
my $ref = JSON->new->utf8->decode($json_contents);
sub slurp_utf8_file {
my @back;
#open my $in, '<:encoding(UTF-8)', $_[0] or die $!;
open my $in, "<$_[0]" or die $!;
while (<$in>) {
push @back, $_
}
close ($in);
return join("", @back);
}
```
The file is encoded in UTF-8 in Notepad++:
[](https://i.stack.imgur.com/JqElz.jpg)
...yet when I run my script I get:
```
perl test.cgi
Wide character in subroutine entry at test.cgi line 11.
```
Line 11 is:
```
my $ref = JSON->new->utf8->decode($json_contents);
```
I'm baffled as to what I'm doing wrong. Maybe I just need a break! Any advice would be much appreciated!
|
You are trying to double-decode UTF-8:
```
#!/usr/bin/perl
use strict;
use warnings;
use JSON;
use Data::Dumper;
open(my $fh, '<:encoding(UTF-8)', $ARGV[0]) or die $!;
my @lines = <$fh>;
close($fh) or die $!;
# Wide character in subroutine entry at dummy.pl line 14.
#my $ref = JSON->new->utf8->decode(join('', @lines));
# OK, no warning: the text is already decoded, so skip ->utf8.
my $ref = JSON->new->decode(join('', @lines));
print Dumper($ref);
exit 0;
```
Test run
```
$ cat dummy.json
{
"path": "ä⁈"
}
# with ->utf8
$ perl dummy.pl dummy.json
Wide character in subroutine entry at dummy.pl line 14.
# without ->utf8
$ perl dummy.pl dummy.json
$VAR1 = {
'path' => "\x{e4}\x{2048}"
};
```
|
How can I access my Javascript Variables in PHP?
I have a file called `lightstatuspage.php` and within it, I have HTML, JavaScript and PHP code. I have used some variables within the JavaScript part of the code and I am trying to send these variables to the server by passing them to the PHP part of the code. However, this is not working.
I am using `$.post("lightstatuspage.php", slider_val);` and then in the PHP part, I am calling the variable by doing `$_GET['rangeslider_val'];`.
What am I doing wrong and what can I do differently to get the variable from JavaScript and send it to the server?
```
function show_value(x)
{
document.getElementById("slider_value").innerHTML=x;
event.preventDefault();
var slider_val = x;
var query = new Parse.Query(Post);
query.first({
success: function(objects){
objects.set("slider_val", slider_val);
objects.setACL(new Parse.ACL(Parse.User.current()));
return objects.save();
window.location.href = "lightstatuspage.php?rangeslider_val=" + slider_val;
}
})
}
```
The PHP code is:
```
<?php
$_GET['rangeslider_val'];
print($rangeslider_val);
?>
```
|
First, add jQuery
```
<script src='https://code.jquery.com/jquery-1.11.3.min.js'></script>
```
at the end of the page, just before the closing `body` tag.
To send a JavaScript variable to PHP, the best way is to use Ajax. Add the following code to your JavaScript.
**Do not forget that the code below should run on an event, for example a button click or something like that.**
```
$( document ).ready(function() {
var x = $('#input1').val();
//or var x= 15;
$.post("lightstatuspage.php",
{
'rangeslider_val': x
},
function(data, status){
//alert("Data: " + data + "\nStatus: " + status);
// you can show alert or not
});
});
```
and in PHP you can use:
```
$value = $_POST['rangeslider_val'];
```
Now your variable is in `$value` in PHP.
P.S.: the backend page and the HTML page should be on the same domain, or you will have to deal with **Cross-Origin Resource Sharing**.
---
**Second Solution**
Since the asker accepted this solution, here is the code:
```
$.get("lightstatuspage.php?rangeslider_val=" + slider_val, function(res) {
    console.log(res);
});
```
The second way differs only in using the GET method instead of POST.
---
**Third Solution**
If you don't want to use jQuery in your project and need pure JavaScript, you can use the code below:
```
var str = "Send me to PHP";
var xmlhttp = new XMLHttpRequest();
xmlhttp.onreadystatechange = function() {
if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
console.log(xmlhttp.responseText);
}
};
xmlhttp.open("GET", "lightstatuspage.php?rangeslider_val=" + str, true);
xmlhttp.send();
```
|
New to Android - Drawing a view at runtime
Hi all,
I'm just getting started with developing for Android. I'm looking to port one of my iPhone applications, but I'm kind of at a loss for how to draw a view at runtime (a view not declared in the XML). Basically, I want to draw a simple rectangle, but then be able to manipulate its frame after being drawn.
Sorry if this is a really, really simple question, but I can't seem to find some equivalent to the iPhone SDK here.
Thanks in advance!
|
It sounds like you want to experiment with 2D graphics - for that, you should use a `Canvas`. You can control the drawing of the Canvas through the `invalidate()` method, which tells Android to redraw the whole thing triggering your customised `onDraw()` method. You mention not wanting to use the XML file, but that is the simplest way to put in a Canvas - you don't have to define its contents in the XML file, but simply tell the layout file it's there. A powerful but simple way to put a Canvas in your application is to customise a View. For example, include in your XML file a `<your.package.CustomView android:.../>` element. Then declare the `CustomView extends View` class. Any kind of drawing you want to do, put in the onDraw() method.
For example, to draw a rectangle, do something like this.
```
// First you define a colour for the outline of your rectangle
Paint rectanglePaint = new Paint();
rectanglePaint.setARGB(255, 255, 0, 0);
rectanglePaint.setStrokeWidth(2);
rectanglePaint.setStyle(Style.STROKE);

// Then create yourself a Rectangle
Rect rectangle = new Rect(left, top, right, bottom); // in pixels

// And here's a sample onDraw()
@Override
public void onDraw(Canvas canvas) {
    rectangle.offset(2, 2);
    canvas.drawRect(rectangle, rectanglePaint);
}
```
Every time `invalidate()` is called from your program, the view will be redrawn and the rectangle moved 2px down and to the right. Note: the redrawing only happens when the main thread is 'waiting'. In other words, if you have a loop calling `invalidate()` several times, the View won't actually be redrawn until the loop finishes. You can get around this, but that adds more complication. For an example of how that's done, look at the LunarLander example game from Google - it's a simple game demonstrating a custom View, two threads, and how to implement continuous animation.
|
Altering a data frame in R
I have a data frame that has the first column go from 1 to 365 like this
```
c(1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2...
```
and the second column has times that repeat over and over again like this
```
c(0,30,130,200,230,300,330,400,430,500,0,30,130,200,230,300,330,400,430,500...
```
so for every 1 value in the first column I have a corresponding time in the second column then when I get to the 2's the times start over and each 2 has a corresponding time,
occasionally I will come across
```
c(3,3,3,3,3,3,3,3,3,4,4,4,4,4,4,4,4,4,4...
c(0,30,130,200,230,330,400,430,500,0,30,130,200,230,300,330,400,430,500...
```
Here one of the 3's is missing and the corresponding time of 300 is missing with it.
How can I go through my entire data frame and add these missing values? I need a way for R to go through and identify any missing values then insert a row and put the appropriate value, 1 to 365, in column one and the appropriate time with it. So for the given example R would add a row in between 230 and 330 and then place a 3 in the first column and 300 in the second. There are parts of the column that are missing several consecutive values. It is not just one here and there
|
EDIT: Solution with all 10 times clearly specified in advance and code tidy up/commenting
You need to create another `data.frame` containing every possible row and then `merge` it with your `data.frame`. The key aspect is the `all.x = TRUE` in the final merge, which forces the gaps in your data to be highlighted. I simulated the gaps by sampling only 15 of the first 20 possible day/time combinations in `your.dat`.
```
# create vectors for the days and times
the.days = 1:365
the.times = c(0,30,100,130,200,230,330,400,430,500) # the 10 times to repeat
# create a master data.frame with all the times repeated for each day, taking only the first 20 observations
dat.all = data.frame(x1=rep(the.days, each=10), x2 = rep(the.times,times = 365))[1:20,]
# mimic your data.frame with some gaps in it (only 15 of 20 observations are present)
your.sample = sample(1:20, 15)
your.dat = data.frame(x1=rep(the.days, each=10), x2 = rep(the.times,times = 365), x3 = rnorm(365*10))[your.sample,]
# left outer join merge to include ALL of the master set and all of your matching subset, filling blanks with NA
merge(dat.all, your.dat, all.x = TRUE)
```
Here is the output from the merge, showing all 20 possible records with the gaps clearly visible as `NA`:
```
x1 x2 x3
1 1 0 NA
2 1 30 1.23128294
3 1 100 0.95806838
4 1 130 2.27075361
5 1 200 0.45347199
6 1 230 -1.61945983
7 1 330 NA
8 1 400 -0.98702883
9 1 430 NA
10 1 500 0.09342522
11 2 0 0.44340164
12 2 30 0.61114408
13 2 100 0.94592127
14 2 130 0.48916825
15 2 200 0.48850478
16 2 230 NA
17 2 330 0.52789171
18 2 400 -0.16939587
19 2 430 0.20961745
20 2 500 NA
```
|
Methodology for calculating variable importance in dataset using regression
I am trying to come up with an understanding of the magnitude of the effect of various variables on a (measurable) continuous target variable from data based on a survey. There are 2000 variables consisting of survey questions as well as some index and socioeconomic variables. Both categorical and continuous variables are present.
All of the variables are actually grouped into 1 of 5 categories. Consider these as question types. Ideally, I would like to have a chart that breaks down, in percentage terms, the importance of question type 1 on the target, etc. The breakdown for the 5 categories would add up to 100%.
Here is my idea.
1. For each categorical variable, make sure that the base level in the factor (using R) makes logical sense. Most of the categorical questions have been pre-processed so that the missing, unavailable, not administered "answers" to the questions are labeled as NA. However, this is still considered an answer option in the analysis as opposed to a missing value, because there may be valuable information from this. This "NA" designation can be set as the base level for many of the questions.
2. Run a multiple linear regression on the raw data.
3. Check for multi-collinearity and re-run, excluding the variables with high VIFs.
4. Examine the distribution of questions in each category. Create weights to ensure that the actual distribution of categorial + continuous questions in each category is roughly equal. Otherwise, if 1 category tends to have more questions, the purported impact will be skewed. It may make sense to use only the number of categorical questions in each category for the weights.
5. Normalize the coefficients first by dividing by the variance of the variable. This is performed because of course the variables will have both positive and negative effects on the target variable.
6. Standardize the coefficients by subtracting the value of the minimum coefficient and then dividing by the range of the coefficients.
7. Multiply each coefficient by its weight and then take the sum.
8. Express each (variable, weight) product as a percentage of the total in the step above.
9. Group the questions according to the categories lookup.
Is this a valid approach? What are the pitfalls and major assumptions? Because there are so many variables, I am ignoring p-values completely. Is that appropriate?
Any suggestions or ideas on an alternative approach welcome. Thank you.
|
There is a substantial literature on relative importance so I would suggest using an established technique. In her paper "Estimators of relative importance in linear regression based on variance decomposition" [here](http://www.tandfonline.com/doi/abs/10.1198/000313007X188252) Grömping suggests criteria for choosing between methods and then gives a detailed comparison between two methods: LMG which obtains the Model I sums of squares for each predictor and then averages over all the possible orders of fitting the predictors, and PMVD which is a weighted version of LMG. In an earlier paper discussing her software "Relative importance for linear regression in R: the package relaimpo" [here](https://www.jstatsoft.org/article/view/v017i01) she also discusses four other versions.
|
Check whether the email textbox value exists in database
I want to check the value in text box is present in database and clear the box if the value exists in the database without refreshing the whole page.
**HTML:**
```
<p>Email<input type="text" name="email" id="email" size=18 maxlength=50 required></p>
```
**Query:**
```
$echeck="select email from register where email='$email'";
$echk=mysql_query($echeck);
$ecount=mysql_num_rows($echk);
if($ecount!=0)
{
echo ("<SCRIPT LANGUAGE='JavaScript'>
var asign=document.getElementById(email); //now show null value
asign.value="";
</SCRIPT>");
}
```
If I use alert it will refresh the page.
```
window.alert('Email Id already exist');
```
|
try to implement something like this:
```
<script type="text/javascript">
function checkMailStatus(){
//alert("came");
var email=$("#email").val();// value in field email
$.ajax({
type:'post',
url:'checkMail.php',// put your real file name
data:{email: email},
success:function(msg){
alert(msg); // your message will come here.
}
});
}
</script>
<p>Email<input type="text" name="email" id="email" onblur="checkMailStatus()"size=18 maxlength=50 required></p>
```
your php: **checkMail.php**
```
$echeck="select email from register where email=".$_POST['email'];
$echk=mysql_query($echeck);
$ecount=mysql_num_rows($echk);
if($ecount!=0)
{
echo "Email already exists";
}
```
|
Image detection features: SIFT, HISTOGRAM and EDGE
I am working on developing an object classifier using 3 different features, i.e. SIFT, HISTOGRAM and EDGE.
However, these 3 features have vectors of different dimensions, e.g. SIFT = 128 dimensions, HIST = 256.
Now these features cannot be concatenated into one vector due to their different sizes. What I am planning to do, though I am not sure it is correct, is this:
for each feature I train the classifier separately, then I apply classification separately for the 3 different features, then count the majority and finally declare the image with the majority of votes.
Do you think this is a correct way?
|
There are several ways to get classification results that take into account multiple features. What you have suggested is one possibility where instead of combining features you train multiple classifiers and through some protocol, arrive at a consensus between them. This is typically under the field of **ensemble methods**. Try googling for boosting, random forests for more details on how to combine the classifiers.
However, it is not true that your feature vectors cannot be concatenated because they have different dimensions. You can still concatenate the features together into a huge vector. E.g., joining your SIFT and HIST features together will give you a vector of 384 dimensions. Depending on the classifier you use, you will likely have to normalize the entries of the vector so that no one feature dominates simply because, by construction, it has larger values.
**EDIT** in response to your comment:
It appears that your histogram is some feature vector describing a characteristic of the entire object (e.g. color) whereas your SIFT descriptors are extracted at local interest keypoints of that object. Since the number of SIFT descriptors may vary from image to image, you cannot pass them directly to a typical classifier as they often take in one feature vector per sample you wish to classify. In such cases, you will have to build a **codebook** (also called visual dictionary) using the SIFT descriptors you have extracted from many images. You will then use this codebook to help you derive a SINGLE feature vector from the many SIFT descriptors you extract from each image. This is what is known as a "**bag of visual words (BOW)**" model. Now that you have a single vector that "summarizes" the SIFT descriptors, you can concatenate that with your histogram to form a bigger vector. This single vector now summarizes the ENTIRE image/(object in the image).
For details on how to build the bag of words codebook and how to use it to derive a single feature vector from the many SIFT descriptors extracted from each image, look at this book (free for download from author's website) <http://programmingcomputervision.com/> under the chapter "Searching Images". It is actually a lot simpler than it sounds.
Roughly, just run KMeans to cluster the SIFT descriptors from many images and take their centroids (which is a vector called a "visual word") as the codebook. E.g. for K = 1000 you have a 1000 visual word codebook. Then, for each image, create a result vector the same size as K (in this case 1000). Each element of this vector corresponds to a visual word. Then, for each SIFT descriptor extracted from an image, find its closest matching vector in the codebook and increment the count in the corresponding cell in the result vector. When you are done, this result vector essentially counts how often the different visual words appear in the image. Similar images will have similar counts for the same visual words and hence this vector effectively represents your images. You will also need to "normalize" this vector to make sure that images with different number of SIFT descriptors (and hence total counts) are comparable. This can be as simple as simply dividing each entry by the total count in the vector or through a more sophisticated measure such as tf/idf as described in the book.
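To make the counting step concrete, here is a hypothetical sketch in Java: map each descriptor to its nearest codebook centroid and build a normalized histogram. The k-means step that produces the codebook is assumed to have run already; all names and the 2-D toy data are illustrative.

```java
// Hypothetical sketch of the bag-of-visual-words counting step.
class BowDemo {
    static double[] bowVector(double[][] descriptors, double[][] codebook) {
        double[] counts = new double[codebook.length];
        for (double[] d : descriptors) {
            int best = 0;
            double bestDist = Double.POSITIVE_INFINITY;
            for (int k = 0; k < codebook.length; k++) {
                double dist = 0;
                for (int j = 0; j < d.length; j++) {
                    double diff = d[j] - codebook[k][j];
                    dist += diff * diff; // squared Euclidean distance to centroid k
                }
                if (dist < bestDist) { bestDist = dist; best = k; }
            }
            counts[best]++; // increment the cell of the closest visual word
        }
        // normalize so images with different numbers of descriptors are comparable
        double total = 0;
        for (double c : counts) total += c;
        if (total > 0) for (int k = 0; k < counts.length; k++) counts[k] /= total;
        return counts;
    }

    public static void main(String[] args) {
        double[][] codebook = {{0, 0}, {10, 10}};          // K = 2 "visual words"
        double[][] descriptors = {{1, 1}, {9, 9}, {10, 10}};
        double[] v = bowVector(descriptors, codebook);
        System.out.println(v[0] + " " + v[1]);             // 1/3 and 2/3
    }
}
```

In practice the descriptors would be 128-dimensional SIFT vectors and K would be in the hundreds or thousands, but the logic is the same.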
I believe the author also provides Python code on his website to accompany the book. Take a look or experiment with it if you are unsure.
More sophisticated method for combining features include Multiple Kernel Learning (MKL). In this case, you compute different kernel matrices, each using one feature. You then find the optimal weights to combine the kernel matrices and use the combined kernel matrix to train a SVM. You can find the code for this in the Shogun Machine Learning Library.
|
Task.Run to increase parallelism of IO-bound operations?
I'm getting a bit confused with `Task.Run` and all I read about it on the internet. So here's my case: I have some function that handles incoming socket data:
```
public async Task Handle(Client client)
{
while (true)
{
var data = await client.ReadAsync();
await this.ProcessData(client, data);
}
}
```
but this has a disadvantage that I can only read next request once I've finished processing the last one. So here's a modified version:
```
public async Task Handle(Client client)
{
while (true)
{
var data = await client.ReadAsync();
Task.Run(async () => {
await this.ProcessData(client, data);
});
}
}
```
It's a simplified version. For a more advanced one I would restrict the maximum number of parallel requests, of course.
Anyway this `ProcessData` is mostly IO-bound (doing some calls to dbs, very light processing and sending data back to `client`) yet I keep reading that I should use `Task.Run` with CPU-bound functions.
Is that a correct usage of `Task.Run` for my case? If not what would be an alternative?
|
Conceptually, that is a fine usage of `Task.Run`. It's very similar to how ASP.NET dispatches requests: (asynchronously) reading a request and then dispatching (synchronous or asynchronous) work to the thread pool.
In practice, you'll want to ensure that the result of `ProcessData` is handled properly. In particular, you'll want to observe exceptions. As the code currently stands, any exceptions from `ProcessData` will be swallowed, since the task returned from `Task.Run` is not observed.
IMO, the cleanest way to handle per-message errors is to have your own `try`/`catch`, as such:
```
public async Task Handle(Client client)
{
while (true)
{
var data = await client.ReadAsync();
Task.Run(async () => {
try { await this.ProcessData(client, data); }
catch (Exception ex) {
// TODO: handle
}
});
}
}
```
where the `// TODO: handle` is the appropriate error-handling code for your application. E.g., you might send an error response on the socket, or just log-and-ignore.
|
Procedure of object: how to change the Self?
I have
```
Type
TProcOfObject = Procedure of Object;
var
MyProc: TProcOfObject;
```
now if I do
```
MyProc := MyObject.MyProc
```
then when I call `MyProc`, `Self` will be equal to `MyObject` (I do not yet fully understand where `Self` is stored in `MyProc`). Is there a way to call `MyProc` with another value than `MyObject` for `Self`?
|
>
> I do not yet fully understand where self is stored in MyProc
>
>
>
A method pointer is represented by the [`TMethod`](https://docwiki.embarcadero.com/Libraries/en/System.TMethod) record, which contains 2 pointers as members - `Data` points to the `Self` object, and `Code` points to the beginning of the method's code.
When a method pointer is invoked as a function at compile-time, the compiler outputs codegen which executes the `Code` passing in the `Data` as the `Self` parameter.
>
> Is there a way to call MyProc with another value than MyObject for Self?
>
>
>
You can type-cast the method pointer to `TMethod` to access its inner pointers, eg:
```
var
MyProc: TProcOfObject;
...
TMethod(MyProc).Data := ...; // whatever value you want Self to be
TMethod(MyProc).Code := ...; // whatever function you want to call
...
MyProc();
```
|
Why override equals instead of using another method name
This seems like a silly question, but why do we override the `equals` method instead of creating a new method with a new name and comparing using that?
If I didn't override `equals`, does that mean both `==` and `equals` check whether both references point to the same memory location?
|
>
> This seems like a silly question, but why do we override the `equals` method instead of creating a new method with a new name and comparing using that?
>
>
>
Because all standard collections (`ArrayList`, `LinkedList`, `HashSet`, `HashMap`, ...) use `equals` when deciding if two objects are equal.
If you invent a new method these collections wouldn't know about it and not work as intended.
The following is very important to understand: if a collection such as `ArrayList` calls `Object.equals`, this call will, at runtime, resolve to the overridden method. So even though you invent classes that the collections are not aware of, they can still invoke methods, such as `equals`, on those classes.
>
> If I didn't override `equals`, does that mean both `==` and `equals` check whether both references point to the same memory location?
>
>
>
Yes. The implementation of `Object.equals` just performs a `==` check.
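To make that concrete, here is a minimal sketch (the `Point` class is illustrative): `ArrayList.contains` calls `equals`, so it only finds logically equal objects once `equals` is overridden.

```java
import java.util.Objects;

// A Point with value-based equality; with the default Object.equals
// (reference identity), two Points with the same coordinates would not be equal.
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y); // keep hashCode consistent with equals
    }
}

class ContainsDemo {
    public static void main(String[] args) {
        java.util.List<Point> list = new java.util.ArrayList<>();
        list.add(new Point(1, 2));
        // true, because contains() uses the overridden equals;
        // without the override it would be false
        System.out.println(list.contains(new Point(1, 2)));
    }
}
```

Note that whenever you override `equals` you must also override `hashCode`, or hash-based collections such as `HashSet` and `HashMap` will misbehave.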
|
Obtain one number from numpy subarrays given its peer
I have a array with pairs of numbers (subarrays of size 2) like this:
```
pairs = np.array([[1, 2],[5, 12],[9, 33],[9, 1],[34,7]])
```
and the peers array, like this:
```
nums = np.array([1,12,9])
```
What I want to do is recover the peers of the numbers contained in the `nums` array. The result should be
```
result = np.array([2, 5, 33, 9, 1])
```
**How can I achieve this using numpy functions?** What I am doing now is a lazy comparison (actually using lists): for each element in the `nums` array I check whether it is contained in any subarray, and I store the values.
|
If the order of the elements doesn't matter, you can easily use:
```
>>> np.concatenate((pairs[:,1][np.in1d(pairs[:,0], nums)],
pairs[:,0][np.in1d(pairs[:,1], nums)]))
array([ 2, 33, 1, 5, 9])
```
**EDIT:**
To conserve the order, you can use [advanced indexing](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing):
```
>>> pairs[np.in1d(pairs, nums).reshape((pairs.shape[0],2))[:,[1,0]]]
array([ 2, 5, 33, 9, 1])
```
|
FILESYSTEM vs SQLITE, while storing up-to 10M files
I would like to store up to 10M files on a 2TB storage unit. The only properties I need are restricted to filenames and their contents (data).
The files max-length is 100MB, most of them are less than 1MB. The ability of removing files is required, and both writing and reading speeds should be a priority - while low storage efficiency, recovery or integrity methods, are not needed.
I thought about NTFS, but most of its features are not needed, while can't be disabled and considered to be an overhead concern, a few of them are: creation date, modification date, attribs, journal and of course permissions.
Due to the native features of a filesystem which are not needed, would you suggest I'll use SQLITE for this requirement? or there's an obvious disadvantage that I should be aware about? (one would guess that removing files will be a complicated task?)
(SQLITE will be via the C api)
My goal is to use a more suited solution to gain performance. Thanks in advance - Doori Bar
|
If your main requirement is performance, go with the native file system. DBMSs are not well suited to handling large BLOBs, so SQLite is not an option for you at all (I don't even know why everybody considers SQLite to be a plug for every hole).
To improve performance on NTFS (or any other file system you choose), don't put all files into a single folder; instead, group files by the first N characters of their file names, or by extension.
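As a sketch of that grouping idea (illustrative only; `root` and `depth` are made-up parameters), a small Python helper that maps a filename into nested subdirectories based on its leading characters could look like this:

```python
import os

def bucketed_path(root, filename, depth=2):
    """Place a file under root/<c1>/<c2>/... based on the first `depth`
    characters of its name, so no single directory holds millions of files."""
    prefix = list(filename[:depth]) or ["_"]  # fallback bucket for empty names
    return os.path.join(root, *prefix, filename)

print(bucketed_path("data", "report2024.bin"))  # data/r/e/report2024.bin on POSIX
```

This caps the per-directory entry count at roughly `alphabet_size ** depth` buckets; whether it actually helps depends on the file system, as noted in the correction below.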
Other file systems also exist on the market, and some of them may offer the possibility of disabling unneeded features. You can check the [comparison on Wikipedia](http://en.wikipedia.org/wiki/Comparison_of_file_systems) and evaluate them.
**Correction:** I've made some tests (not very extensive though) that show no performance benefit in grouping files into subdirectories for most types of operations, and NTFS quite efficiently handled 26^4 empty files named from AAAA to ZZZZ in a single directory. So you need to check efficiency for your particular file system.
|
LRU cache design
A Least Recently Used (LRU) cache discards the least recently used items first.
How do you design and implement such a cache class? The design requirements are as follows:
1) Find the item as fast as we can.
2) Once the cache misses and the cache is full, we need to replace the least recently used item as fast as possible.
How to analyze and implement this question in terms of design pattern and algorithm design?
|
A linked list + hashtable of pointers to the linked list nodes is the usual way to implement LRU caches. This gives O(1) operations (assuming a decent hash). Advantage of this (being O(1)): you can do a multithreaded version by just locking the whole structure. You don't have to worry about granular locking etc.
Briefly, the way it works:
On an access of a value, you move the corresponding node in the linked list to the head.
When you need to remove a value from the cache, you remove from the tail end.
When you add a value to cache, you just place it at the head of the linked list.
Thanks to doublep, here is a site with a C++ implementation: [Miscellaneous Container Templates](https://launchpad.net/libmct).
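A minimal sketch of this design in Python (illustrative only, not the implementation from the linked library): `collections.OrderedDict` is itself backed by a hash table plus a doubly linked list, so it gives exactly the O(1) behavior described above.

```python
from collections import OrderedDict

class LRUCache:
    """LRU cache: hash table + linked list, O(1) get/put."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # keeps keys in recency order

    def get(self, key):
        if key not in self.data:
            return None                    # cache miss
        self.data.move_to_end(key)         # move to the "head" (most recently used)
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict from the "tail" (least recently used)

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" is now the most recently used
cache.put("c", 3)      # evicts "b"
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

For the multithreaded version mentioned above, one coarse lock around `get`/`put` would suffice, precisely because each operation is O(1).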
|
Button that runs a php script without changing current page
I have a webpage that generates a table from mysql. I have a button at the beginning of each row. I would like it so if the user decides to press on the button, the contents of that individual row are written to a new table in MySQL.
Currently I am thinking of just having the button link (via href) to another PHP script that connects to MySQL and inserts the row into a table; however, I think that will redirect my current page.
I would like the button to run the script without redirecting my current page, so that the user can continue analyzing the table without the page having to reload every time.
This is what my current table looks like. This is just a snippet, and the table can be very large (hundreds of rows)

|
In order to do this client side, there are a couple of ways I can think of off hand to do this:
**Javascript**
You can include a Javascript library (like the ever popular jQuery library) or code it yourself; either way, you could implement this as an XMLHttpRequest issued from a click handler on the button. Using a library is going to be the easiest way.
**An iframe**
Create a hidden iframe:
```
<iframe style="display:none;" name="target"></iframe>
```
Then just set the target of your anchor tag to be the iframe:
```
<a href="your_script.php" target="target">...</a>
```
Whenever someone clicks on the link, the page will be loaded in the hidden iframe. The user won't see a thing change, but your PHP script will be processed.
Of the two options, I'd recommend the Javascript library unless you can't do that for some reason.
|
Elixir: How to convert a keyword list to a map?
I have a keyword list of Ecto changeset errors I'd like to convert to a map so that the Poison JSON parser can correctly output a list of validation errors in the JSON format.
I get a list of errors as follows:
```
[:topic_id, "can't be blank", :created_by, "can't be blank"]
```
...and I'd like to get a map of errors like so:
```
%{topic_id: "can't be blank", created_by: "can't be blank"}
```
Alternatively, if I could convert it to a list of strings, I could use that as well.
What is the best way to accomplish either of these tasks?
|
What you have there isn't a keyword list, it is just a list with every odd element representing a key and every even element representing a value.
The difference is:
```
[:topic_id, "can't be blank", :created_by, "can't be blank"] # List
[topic_id: "can't be blank", created_by: "can't be blank"] # Keyword List
```
A keyword list can be turned into a map using [Enum.into/2](https://hexdocs.pm/elixir/Enum.html#into/2)
```
Enum.into([topic_id: "can't be blank", created_by: "can't be blank"], %{})
```
Since your data structure is a list, you can convert it using [Enum.chunk\_every/2](https://hexdocs.pm/elixir/Enum.html#chunk_every/2) and [Enum.reduce/3](https://hexdocs.pm/elixir/Enum.html#reduce/3)
```
[:topic_id, "can't be blank", :created_by, "can't be blank"]
|> Enum.chunk_every(2)
|> Enum.reduce(%{}, fn ([key, val], acc) -> Map.put(acc, key, val) end)
```
You can read more about Keyword lists at <http://elixir-lang.org/getting-started/maps-and-dicts.html>
|
thumbs.db the file is open in windows explorer
I constantly have the problem that I cannot delete, move, or rename folders on network drives in Windows 7 (this also applies to Windows 10) because of the thumbs.db file. Windows complains: "The action can't be completed because the file is open in Windows Explorer."
I found something in a long thread that works for me.
|
**Short answer:**
Windows 7: In Explorer, change the file display setting from "Details" to "Content".
Windows 10: In Explorer: View -> Layout -> List.
Now the thumbs.db file can be removed.
**Long answer:**
<https://social.technet.microsoft.com/Forums/windows/en-US/ca2cbc1a-362f-4f01-a8f8-6f05112f1915/windows-7-bug-explorer-locks-thumbsdb-in-most-recently-viewed-folder?forum=w7itprogeneral>:
>
> I accidentally came across a slightly easier fix for this bug. Instead of setting "Turn off the display of thumbnails and only display icons on network folders" to Enable, **I simply changed the Windows Explorer display setting from "Details" to "Content**". Then I was able to delete my Thumbs.db files without any complaints from Windows 7, even though I have thumbnails enabled. After deleting the offending file(s), I just restore the display to "Details" again.
>
>
>
thanks my unknown hero robster8192
|
Rails validation :if => Proc.new or lambda?
I have found that all the examples (including the Rails documentation) that I have seen for the `:if` option of validation methods use `Proc.new` instead of `lambda`, for example:
```
class Foo < ActiveRecord::Base
validates_presence_of :name, :if => Proc.new{|f| .... } # why not lambda here?
end
```
Is there any reason for this?
As far as I know, a lambda:
1. Is stricter with arguments.
2. A `return` statement in a lambda block returns from the block, not from the calling function.
Both seem to be desirable behaviors for the `:if` option mentioned above; is there anything I am missing?
|
>
> Both seem to be desirable behaviors for the :if option mentioned above; is there anything I am missing?
>
>
>
I'm guessing that:
It's more desirable to allow Procs, as they don't care about the number of arguments. So I could easily write either of the below:
```
validates_presence_of :name, :if => Proc.new{|f| f.display_name.blank? } # I care about 'f' here as I need it to check something.
```
... and:
```
validates_presence_of :secret_sauce, :if => Proc.new{ MyApp::REQUIRE_SECRET_SAUCE } # I don't care about any arguments being passed in.
```
This may seem like a minor thing, but I guess it adds to the flexibility.
|
POST request and Node.js without Nerve
Is there any way to accept POST type requests without using Nerve lib in Node.js?
|
By default the **http.Server** class of Node.js accepts [any http method](http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Request_methods).
You can get the method using `request.method` ([api link](http://nodejs.org/docs/v0.4.7/api/http.html#request.method)).
Example:
```
var sys = require('sys'),
http = require('http');
http.createServer(function (request, response) {
response.writeHead(200, {'Content-Type': 'text/plain'});
response.write(request.method);
response.end();
}).listen(8000);
sys.puts('Server running at http://127.0.0.1:8000/');
```
This will create a simple http server on the port 8000 that will echo the method used in the request.
If you want to get a *POST* you should just check the `request.method` for the string "POST".
Since version 0.1.90, the function to close the response is `response.end` instead of `response.close`. Besides the name change, `end` can also send data and close the response after that data is sent, unlike `close`. ([api example](http://nodejs.org/docs/v0.4.7/api/synopsis.html#synopsis))
|
Proper use of karma-commonjs with Jasmine 2
I've spent a fair amount of time trying to debug this, and figured I would ask. I even created a [GitHub repository](https://github.com/danielzen/karma-commonjs-test) but won't rely on it, so here goes. I'm trying to take advantage of CommonJS syntax within the Karma test runner using PhantomJS. For my module I created the simplest thing I could think of:
```
exports.returnYes = function() {
return "Yes";
};
```
The Jasmine test is:
```
var returnYes = require("../js/returnYes").returnYes;
describe("returnYes", function() {
it("should return Yes", function() {
expect(returnYes()).toBe("Yes");
});
});
```
And, if I do a `jasmine init` I can run it from the command line thanks to `jasmine-npm` by simply typing `jasmine` with output:
```
$ jasmine
Started
.
1 spec, 0 failures
Finished in 0.003 seconds
```
Now to try and get it to work inside karma:
I create my [karma.conf.js](https://github.com/danielzen/karma-commonjs-test/blob/master/karma.conf.js) with frameworks `jasmine` and `commonjs`, and I add `commonjs` as a preprocessor.
I try to do a `karma run` and I find that it can't find `global` which is part of `getJasmineRequireObj` in [jasmine.js](https://github.com/jasmine/jasmine/blob/master/lib%2Fjasmine-core%2Fjasmine.js) where it declares `jasmineGlobal = global;`
The command line output is a little hard to read, but here it is:
```
$ karma run
[2015-06-27 17:41:35.266] [DEBUG] config - Loading config /Users/zen/Projects/karma-commonjs-test/karma.conf.js
##teamcity[enteredTheMatrix]
##teamcity[testSuiteStarted nodeId='1' parentNodeId='0' name='karma.conf.js' nodeType='config' locationHint='config:///Users/zen/Projects/karma-commonjs-test/karma.conf.js']
##teamcity[testSuiteStarted nodeId='2' parentNodeId='1' name='PhantomJS 1.9.8 (Mac OS X 0.0.0)' nodeType='browser']
##teamcity[testStarted nodeId='3' parentNodeId='2' name='Error' nodeType='browserError']
##teamcity[testFailed nodeId='3' error='yes' message='ReferenceError: Can|'t find variable: global|nat http://localhost:9876/base/node_modules/jasmine-core/lib/jasmine-core/jasmine.js?68f13ab3f93af5a219b9fe8409f8763b31998bba:27']
##teamcity[testSuiteFinished nodeId='2']
##teamcity[testSuiteFinished nodeId='1']
```
For good measure, here are the devDependencies in my package.json:
```
"devDependencies": {
"jasmine-core": "^2.3.4",
"karma": "^0.12.37",
"karma-commonjs": "0.0.13",
"karma-jasmine": "^0.3.5",
"karma-phantomjs-launcher": "^0.2.0",
"phantomjs": "^1.9.17"
}
```
I'm not sure why I can't find `global`. Any help would be greatly appreciated!!! :)
|
It seems like my whole problem came down to this line in karma.conf.js (not shown in my original question):
```
preprocessors: {
'**/*.js': ['commonjs']
},
```
For some reason, `jasmine.js` is not happy being pre-processed by commonjs, and `'**/*.js'` says to go through all subdirectories (which is probably overkill), including `node_modules`, which contains `jasmine-core/jasmine.js`.
So I can either make my pre-processor more specific (best practice):
```
preprocessors: {
'spec/*.js': ['commonjs'],
'js/*.js': ['commonjs']
},
```
but as a test to see if any other files would give me a problem, I tried:
```
preprocessors: {
'**/!(jasmine).js': ['commonjs'],
},
```
And everything worked as well. Bottom line: do not process jasmine.js through the commonjs preprocessor!
|
Is it possible to direct a single audio stream to a separate output device?
I listen to a streaming radio broadcast on my Ubuntu GNOME PC for most of the day. I'd like to send its output to my desktop speakers, while continuing to send the rest of the output to my headphones.
Would it be possible to direct a single audio stream to a specific output device, while allowing all others to continue to use the default?
|
To supplement what StallionSA wrote above, you do need a second audio device, but it does not have to be a sound card in the physical sense of a card which you insert into a slot on your mainboard.
If you do a search on the web you will find many USB "sound card" devices which could be used to connect your headphones, ranging from budget models to quite expensive USB DACs specifically designed for quality headphone listening.
Once you have your USB sound card plugged in (which in most cases should be plug and play, so long as it conforms to the USB audio conventions), you would then need to configure `/etc/modprobe.d/alsa-base.conf` to set the USB device as default ALSA sound card 0 and the other as sound card 1, and then use pavucontrol to direct the application streaming the radio station to the sound card to which the speakers are connected.
It is also possible to do this via the environment variable PULSE_SINK at the time of launching the application, e.g. in a Bourne shell:
```
PULSE_SINK="alsa_output.usb-Creative_Technology_SB_X-Fi_Surround_5.1_000004it-00-S51.analog-stereo" mplayer "http://mozart.wkar.msu.edu/wkar-fm-mp3" &
```
The list of available sinks (Pulseaudio term for output device) can be found using
```
pactl list short sinks
```
|
2 containers using the same port in Kubernetes pod
I have the same problem as the following:
[Dual nginx in one Kubernetes pod](https://stackoverflow.com/questions/54289786/dual-nginx-in-one-kubernetes-pod)
In my Kubernetes `Deployment` template, I have 2 containers that are using the same port 80.
I understand that containers within a `Pod` are actually under the same network namespace, which enables accessing another container in the `Pod` with `localhost` or `127.0.0.1`.
It means containers can't use the same port.
It's very easy to achieve this with the help of `docker run` or `docker-compose`, by using `8001:80` for the first container and `8002:80` for the second container.
Is there any similar or better solution to do this in a Kubernetes Pod, without separating these 2 containers into different Pods?
|
Basically I totally agree with *@David's* and *@Patric's* comments but I decided to add to it a few more things expanding it into an answer.
>
> I have the same problem as the following: [Dual nginx in one Kubernetes pod](https://stackoverflow.com/questions/54289786/dual-nginx-in-one-kubernetes-pod)
>
>
>
And there is already a pretty good answer to that problem in the mentioned thread. From a technical point of view it provides a ready solution to your particular use case; however, it doesn't question the idea itself.
>
> It's very easy to achieve this with the help of docker run or
> docker-compose, by using 8001:80 for the first container and 8002:80
> for the second container.
>
>
>
It's also very easy to achieve in **Kubernetes**. Simply put both containers in different `Pods` and you will not have to manipulate the nginx config to make it listen on a port different than `80`. Note that the two docker containers you mentioned don't share a single network namespace, and that's why they can both listen on port `80`, which is mapped to different ports on the host system (`8001` and `8002`). This is not the case with **Kubernetes** *Pods*. Read more about **microservices architecture**, and especially how it is implemented on **k8s**, and you'll notice that placing a few containers in a single `Pod` is a really rare use case and definitely should not be applied in a case like yours. There should be a good reason to put 2 or more containers in a single `Pod`. Usually the second container has some complementary function to the main one.
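As an illustration of that split (all names and the image tag below are made up), each nginx gets its own `Deployment`, so each `Pod` has its own network namespace and both servers can listen on port 80:

```yaml
# Two independent Deployments; each Pod has its own network namespace,
# so both nginx containers can listen on containerPort 80.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-first          # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels: {app: nginx-first}
  template:
    metadata:
      labels: {app: nginx-first}
    spec:
      containers:
      - name: nginx
        image: nginx:1.25    # illustrative tag
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-second         # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels: {app: nginx-second}
  template:
    metadata:
      labels: {app: nginx-second}
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Each Deployment can then be exposed through its own `Service`, and the two can be scaled independently.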
There are [3 design patterns for multi-container Pods, commonly used in **Kubernetes**](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns/): sidecar, ambassador and adapter. Very often all of them are simply referred to as **sidecar containers**.
Note that 2 or more containers coupled together in a single `Pod` in all the above-mentioned use cases *have totally different functions*. Even if you put more than one container in a single `Pod` (which is most common), in practice it is never a container of the same type (like two nginx servers listening on different ports in your case). They should be complementary, and there should be a good reason why they are put together, why they should start and shut down at the same time, and why they share the same network namespace. A sidecar container with a monitoring agent running in it has a complementary function to the main container, which can be e.g. an nginx webserver. You can read more about container design patterns in general in [this](https://techbeacon.com/enterprise-it/7-container-design-patterns-you-need-know) article.
>
> I don't have a very firm use case, because I'm still
> very new to Kubernetes and the concept of a cluster.
>
>
>
So definitely don't go this way if you don't have particular reason for such architecture.
>
> My initial planning of the cluster is putting all my containers of the system
> into a pod. So that I can replicate this pod as many as I want.
>
>
>
You don't need a single `Pod` to replicate it. You can have in your cluster a lot of `replicaSets` (usually managed by `Deployments`), each of them taking care of running the declared number of replicas of a `Pod` of a certain kind.
>
> But according to all the feedback that I have now, it seems like I going
> in the wrong direction.
>
>
>
Yes, this is definitely the wrong direction, but that was actually already said. I'd only like to highlight why this particular direction is wrong. Such an approach is totally against the idea of *microservices architecture*, which is what **Kubernetes** is designed for. Putting all your infrastructure in a single huge `Pod` and binding all your containers tightly together makes no sense. Remember that a `Pod` *is the smallest deployable unit in **Kubernetes***, and when one of its containers crashes, the whole `Pod` crashes. There is no way to manually restart just one container in a `Pod`.
>
> I'll review my structure and try with the
> suggests you all provided. Thank you, everyone! =)
>
>
>
This is a good idea :)
|
Implement a rule to access physically the server room
We have a server room and right now it's like the Wild West: whoever is strongest can get in and do whatever they want.
I would like to prepare a list of rules to monitor access and record who got in, so we can track who did what, whether to troubleshoot problems or to determine if someone stole gear.
I thought about keeping the keys in a single, secure place and giving them only to a person who signs a register when taking the keys and when returning them (both signatures with time details).
Is it a good idea?
Can it be improved without getting too much in the way of people who need to work quickly?
Thanks!
|
Don't use keys, use access cards: access cards can be logged.
Institute a policy of documentation to track changes: the access logs will be a backup only, so if someone forgets to log a change to the system you can ask them. Every change should be written up afterwards.
Most importantly: if you don't trust your admins, get new ones. It's impossible to force limits on an admin's access to your network. It's counter-productive and will alienate them.
If the problem is just one of everyone trying to do what they think best and interfering with each other, consider selecting a chief admin. This can be difficult; some sysadmins (while competent) are poor at relationship-managing and consensus-building. If you have such an admin, count yourself lucky, give them more responsibility, and a pay raise.
|
MDC-like (Mozilla Documentation Center) resource for WebKit?
Playing around with the latest HTML5 features feels nice, but it tends to be quite troublesome in the end, as standards and their browser-specific implementations continue to evolve. Mozilla has a great resource - the [MDN Doc Center](https://developer.mozilla.org/en-US/docs), where they continuously document all the changes that Gecko undergoes. I wonder if there is anything similar for WebKit, for both Safari and Chrome?
|
Here are the doc centres for all major browsers:
These are reference docs; the IE/FF + Opera links below are basically the same kind of resource as the Safari link.
**Chrome:** (thanks to @TomTu)
<https://www.chromium.org/developers>
**Firefox:**
<https://developer.mozilla.org/>
**Opera:**
<https://www.opera.com/docs/specs/presto23/#html> (very very good)
**Safari:**
<https://developer.apple.com/library/archive/documentation/AppleApplications/Reference/SafariCSSRef/Introduction.html#//apple_ref/doc/uid/TP30001267-SW1>
**IE:**
<https://msdn.microsoft.com/en-us/library/aa902517.aspx>
|
css background image is being cut off
I have an unordered list and the background image is being cut off when trying to place it next to the text.
I'm using jQuery to add the class to the anchor tag to display the image, and it's working fine; the only problem is that the image gets cut off. I've been playing around with the CSS but can't figure out how to make the image display properly. It seems like the < li > is hiding the image behind it somehow. Can I place the image in front of the < li > to make it display, or am I missing something else?
Can someone help me? Thanks.
Here's the HTML:
```
<ul id="nav">
<li>
<a class="folder_closed">Item 1</a>
<div style="display:none">Content for item 1</div>
</li>
</ul>
```
Here's the CSS:
```
ul#nav{
margin-left:0;
margin-right:0;
padding-left:0px;
text-indent:15px;
}
#nav > li{
vertical-align: top;
text-align:left;
clear: both;
margin-left:0px;
margin-right:0px;
padding-right:0px;
padding-left:15px;
}
.folder_open{
position:relative;
background-image: url(../images/maximize.png);
background-repeat: no-repeat;
background-position: -5px 1px;
}
.folder_closed{
position:relative;
background-image: url(../images/minimize.png);
background-repeat: no-repeat;
background-position: -5px 1px;
}
```
|
This sounds like a line-height issue. Just for experimentation, try setting the LI to "line-height: 40px;" and see if your image shows completely.
One of the things I do in this case is I use some absolute positioning. First to set it up you have to have your UL and LIs relatively-positioned:
```
<style type="text/css">
ul, li {
position: relative;
}
</style>
<ul>
<li> ... </li>
<li> ... </li>
<li> ... </li>
</ul>
```
Then add some padding to the left side of the LI:
```
<style type="text/css">
li {
padding-left: 30px;
}
</style>
```
In this case you're using an <A> anchor w/ some class styling. Break up the <A> into two As:
```
<li>
<a class="folder_icon folder_closed"></a>
<a class="folder_title">... your title ...</a>
... your other content ...
</li>
```
And then turn one of the As into blocked display:
```
<style type="text/css">
li .folder_icon {
position: absolute;
left: 0px;
top: 0px;
display: block;
width: 16px;
height: 16px;
}
li .folder_closed {
background-image: url("../images/minimize.png");
background-repeat: no-repeat;
background-position: -5px 1px;
}
</style>
```
How is that?
|
Why SynchronizationContext does not work properly?
I have following code:
```
[TestMethod]
public void StartWorkInFirstThread()
{
if (SynchronizationContext.Current == null)
SynchronizationContext.SetSynchronizationContext(
new SynchronizationContext());
var syncContext = SynchronizationContext.Current;
Console.WriteLine("Start work in the first thread ({0})",
Thread.CurrentThread.ManagedThreadId);
var action = ((Action) DoSomethingInSecondThread);
action.BeginInvoke(CallbackInSecondThread, syncContext);
// Continue its own work
}
private static void DoSomethingInSecondThread()
{
Console.WriteLine("Do something in the second thread ({0})",
Thread.CurrentThread.ManagedThreadId);
}
private void CallbackInSecondThread(IAsyncResult ar)
{
Console.WriteLine("Callback in the second thread ({0})",
Thread.CurrentThread.ManagedThreadId);
var syncContext = (SynchronizationContext) ar.AsyncState;
syncContext.Post(CallbackInFirstThread, null);
}
private void CallbackInFirstThread(object obj)
{
Console.WriteLine("Callback in the first thread ({0})",
Thread.CurrentThread.ManagedThreadId);
}
```
I expect the last method to be executed in the first thread, i.e. the initial thread the SynchronizationContext is taken from, because I call the `Post()` method of this context. I.e. something like this:
```
Start work in the first thread (28)
Do something in the second thread (17)
Callback in the second thread (17)
Callback in the first thread (28)
```
Isn't that the point of SynchronizationContext? But actually I get the following output:
```
Start work in the first thread (28)
Do something in the second thread (17)
Callback in the second thread (17)
Callback in the first thread (7)
```
**What is the problem? Does something go wrong with SynchronizationContext, or do I have some misunderstanding?**
**Update:** I call this method as a unit test using Resharper test runner.
|
See <http://www.codeproject.com/KB/threads/SynchronizationContext.aspx>
There is the answer you need. You must override `SynchronizationContext` to make it handle your operations properly.
Read starting from:
>
> Notice that DoWork is executed on
> thread 11, the same thread as Run1.
> Not much of a SynchronizationContext
> into the main thread. Why? What's
> going on? Well... This is the part
> when you realize that nothing is for
> free in life. Threads can't just
> switch contexts between them, they
> must have an infrastructure built-in
> into them in order to do so. The UI
> thread, for example, uses a message
> pump, and within its
> SynchronizationContext, it leverages
> the message pump to sync into the UI
> thread.
>
>
>
|
How to write an async method with out parameter?
I want to write an async method with an `out` parameter, like this:
```
public async void Method1()
{
int op;
int result = await GetDataTaskAsync(out op);
}
```
How do I do this in `GetDataTaskAsync`?
|
You can't have async methods with `ref` or `out` parameters.
Lucian Wischik explains why this is not possible on this MSDN thread: <http://social.msdn.microsoft.com/Forums/en-US/d2f48a52-e35a-4948-844d-828a1a6deb74/why-async-methods-cannot-have-ref-or-out-parameters>
>
> As for why async methods don't support out-by-reference parameters?
> (or ref parameters?) That's a limitation of the CLR. We chose to
> implement async methods in a similar way to iterator methods -- i.e.
> through the compiler transforming the method into a
> state-machine-object. The CLR has no safe way to store the address of
> an "out parameter" or "reference parameter" as a field of an object.
> The only way to have supported out-by-reference parameters would be if
> the async feature were done by a low-level CLR rewrite instead of a
> compiler-rewrite. We examined that approach, and it had a lot going
> for it, but it would ultimately have been so costly that it'd never
> have happened.
>
>
>
A typical workaround for this situation is to have the async method return a Tuple instead.
You could re-write your method as such:
```
public async Task Method1()
{
var tuple = await GetDataTaskAsync();
int op = tuple.Item1;
int result = tuple.Item2;
}
public async Task<Tuple<int, int>> GetDataTaskAsync()
{
//...
return new Tuple<int, int>(1, 2);
}
```
|
Difference between Force Unwrapping Optionals and Implicitly Unwrapped Optionals
I was very confused about forced unwrapping and implicit unwrapping at first. Now, the following understanding comes from my self-study:
There is **no** action available for implicit unwrapping, but there is something called implicitly unwrapped Optionals. Implicitly unwrapped Optionals and normal Optionals are both Optionals, the difference being that when accessing an implicitly unwrapped Optional, you confidently know that there is a valid value under the hood, ready for use. Normal Optionals need **`if let` binding** or a **forced unwrapping (`!`) action** to access the possible values behind the optional variables.
**Summary**:
***Forced unwrapping*** is an **action** done on the normal Optionals.
***Implicitly unwrapped Optionals*** are **Optionals**, usually used for class initialization and will pass values without exclamation mark when used.
**Question**:
Am I right? If my understanding is not precise, I would appreciate it if you correct me.
Thanks
|
## First of all let's define an Optional
An Optional value is a **container** of some type (`Int`, `String`, `UIColor`, ...), it could contain the value (`1`, `"Hello world"`, `.greenColor()`, ...) or `nil`.
```
let anOptionalInt: Int? = 1
let anotherOptionalInt: Int? = nil
```
[](https://i.stack.imgur.com/tNmSW.png)
When in Swift we see an Optional value we think:
>
> Ok this could contain the actual value or `nil`
>
>
>
## Force unwrapping
It's the action of **extracting** the value contained inside an `Optional`.
This operation is dangerous because you are telling the compiler: **I am sure this Optional value does contain a real value, extract it!**
```
let anOptionalInt: Int? = 1
let anInt: Int = anOptionalInt!
```
Now `anInt` contains the value 1.
[](https://i.stack.imgur.com/253ui.png)
If we perform a force unwrap on an Optional value that happens to contain `nil`, we get a `fatalError`: the app crashes and there is no way to recover.
```
let anotherOptionalInt: Int? = nil
let anotherInt = anotherOptionalInt!
fatal error: unexpectedly found nil while unwrapping an Optional value
```
[](https://i.stack.imgur.com/BWIeE.png)
## Implicitly unwrapped optionals
When we define an Implicitly unwrapped optional, we define a container that will **automatically** perform a force unwrap each time we read it.
```
var text: String! = "Hello"
```
If now we read `text`
```
let name = text
```
we don't get an Optional `String` but a plain `String`, because `text` automatically unwrapped its content.
However, `text` is still an Optional, so we can put a `nil` value inside it:
```
text = nil
```
But as soon as we read it (and it contains `nil`) we get a fatal error because we are unwrapping an optional containing `nil`
```
let anotherName = text
fatal error: unexpectedly found nil while unwrapping an Optional value
```
|
Custom dependency property binding
I have a problem with a custom dependency property binding.
We have:
A custom user control with one dependency property and a binding to self:
```
<UserControl x:Class="WpfApplication1.SomeUserControl"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
mc:Ignorable="d"
d:DesignHeight="300" d:DesignWidth="300" DataContext="{Binding RelativeSource={RelativeSource Mode=Self}}">
<Grid>
<Label>
<Label.Template>
<ControlTemplate>
<Label Content="{Binding MyTest}"/>
</ControlTemplate>
</Label.Template>
</Label>
</Grid>
```
... and code of control:
```
public partial class SomeUserControl : UserControl
{
public SomeUserControl()
{
InitializeComponent();
}
public static readonly DependencyProperty MyTestProperty = DependencyProperty.Register("MyTest", typeof(int), typeof(SomeUserControl));
public int MyTest
{
get { return (int)GetValue(MyTestProperty); }
set { SetValue(MyTestProperty, value); }
}
}
```
I trying to use this control with binding to some simple property of simple model class:
```
<UserControl x:Class="WpfApplication1.AnotherUserControl"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:wpfApplication1="clr-namespace:WpfApplication1"
mc:Ignorable="d"
d:DesignHeight="300" d:DesignWidth="300" DataContext="{Binding RelativeSource={RelativeSource Mode=Self}}">
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="Auto"/>
<ColumnDefinition Width="Auto"/>
</Grid.ColumnDefinitions>
<wpfApplication1:SomeUserControl MyTest="{Binding Path=Model.MyNum}" Grid.Column="0"/>
<Label Content="{Binding Path=Model.MyNum}" Grid.Column="1"/>
</Grid>
```
... with code
```
public partial class AnotherUserControl : UserControl
{
public MyModel Model { get; set; }
public AnotherUserControl()
{
Model = new MyModel();
Model.MyNum = 1231;
InitializeComponent();
}
}
```
... and model:
```
public class MyModel:INotifyPropertyChanged
{
private int _myNum;
public int MyNum
{
get { return _myNum; }
set { _myNum = value; OnPropertyChanged("MyNum");}
}
public event PropertyChangedEventHandler PropertyChanged;
[NotifyPropertyChangedInvocator]
protected virtual void OnPropertyChanged(string propertyName)
{
PropertyChangedEventHandler handler = PropertyChanged;
if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
}
}
```
But the binding is not working. It compiles without errors. The binding works with a standard WPF Label control (with the same model), but does not work with my custom control and custom property. Please help me understand the reason for this problem and how to solve it.
Thanks)
|
You should use ElementName Binding in your `SomeUserControl`.
```
<UserControl x:Class="WpfApplication1.SomeUserControl"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
mc:Ignorable="d"
d:DesignHeight="300" d:DesignWidth="300"
x:Name="uc">
<Grid>
<Label>
<Label.Template>
<ControlTemplate>
<Label Content="{Binding MyTest, ElementName=uc}"/>
</ControlTemplate>
</Label.Template>
</Label>
</Grid>
```
here some more information why you should not set the datacontext of a usercontrol to self/this.
[Data Binding in WPF User Controls](https://stackoverflow.com/questions/11226843/data-binding-in-wpf-user-controls)
|
"list index out of range" when using sys.argv[1]
I am writing a simple Python client and server, which works fine passing the server address within my code; however, I want the user to be able to enter the server address and throw an error if it's incorrect. When I have the code below I get an error message from the terminal: "list index out of range".
```
server = (sys.argv[1])
serverAdd = (server, '65652') # server address and port number
```
Can anyone help me with this please.
When I run my client program in python I want to be able to enter a address to connect to and store that in server. I run the program directly from the command line by typing programname.py. The server is already running listening for incoming connections.
|
With this Python:
```
import sys
print(sys.argv)
```
And invoked with this command:
```
>python q15121717.py 127.0.0.1
```
I get this output:
```
['q15121717.py', '127.0.0.1']
```
I think you are not passing an argument to your Python script.
Now you can change your code slightly to take a server from the command line or prompt for a server when none is passed. In this case you would look at something like this:
```
if len(sys.argv) > 1:
print(sys.argv[1])
else:
print(input("Enter address:"))
```
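Putting both ideas together for the client in the question, one possible helper could look like this (the `get_server`/`make_server_addr` names are made up for illustration, and the `65652` port comes from the question; note that socket APIs expect the port as an `int`, not the string `'65652'`):

```python
import sys

def get_server(argv=None):
    """Return the server address from argv if given, otherwise prompt for it."""
    args = sys.argv if argv is None else argv
    if len(args) > 1:
        return args[1]
    return input("Enter address: ")

def make_server_addr(server, port=65652):
    # Socket APIs expect (host, port) with an integer port
    return (server, port)

# Usage:  server_addr = make_server_addr(get_server())
```

Run as `programname.py 127.0.0.1` to pass the address on the command line, or just `programname.py` to be prompted for it.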
|
Structuring a PhoneGap jQuery Mobile Application
I am currently building a game in PhoneGap using the jQuery Mobile framework. What I have ended up with is lots of spaghetti code, with one HTML file and several JS classes.
I was interested to know if there are any good guides to create a structured jQuery Mobile Application that follows MVC pattern.
I found a [good guide](http://www.sencha.com/learn/a-sencha-touch-mvc-application-with-phonegap/) for creating a MVC App with Sencha Touch. I was looking for something similar with jQuery Mobile.
|
I have a rather large application and this is how I have it structured
```
css
-- all css files
images
-- all image files
js
controller.js -- page events and element actions. Also contains PhoneGap specific methods
core
forms.js -- working with forms, saving info
mobile.js -- basic definitions, AJAX communications
encrypt.js -- encryption
global.js -- helper functions
storage.js -- database storage wrapper
cw
client.js -- a client object, > 400 lines of js code
Objects.js -- all other needed objects with <50 lines of js code each
question.js -- a question object, > 500 lines of js code
service.js -- a service object, > 700 lines of js code
jq
jquery-ui.min.js
jquery.min.js
jquery.mobile.min.js
phonegap-1.1.0.js
add_client.html
clients.html
client_list.html
index.html -- the only file that is structured like a real full html file
manager.html
schedule.html
service.html
```
Aside from my index.html file, all other .html files are stubs. They only contain the `<div data-role='page'></div>` and other needed html elements that define the page and its intended functionality.
I develop the app on VS2010, using Chrome as my debugger. When I feel good with my progress, I copy everything over to my mac to a folder in an Eclipse project ( for Android devices ) which is also a linked reference in my xCode project ( for iOS devices ).
I have been working on this project for about 3-4 months now and once I got past the learning curve of jQM and PhoneGap, have been making very good progress with this structure.
|
How do I add a drop shadow to an SVG path element?
I've been attempting to apply a drop shadow to my SVG path. I came across the `filter` option, which I applied to the path with `-webkit-filter: drop-shadow( -5px -5px 10px #000 );`, but it didn't seem to take effect.
[Here's a fiddle with my SVG path demonstrating the problem](http://jsfiddle.net/nkcxq3be/6/)
|
Within your JSFiddle, I deleted your CSS and added a filter definition. It seems to work:
```
<svg width="100%" height="300px">
<defs>
<filter id="filter1" x="0" y="0">
<feOffset result="offOut" in="SourceAlpha" dx="-5" dy="-5" />
<feGaussianBlur result="blurOut" in="offOut" stdDeviation="3" />
<feBlend in="SourceGraphic" in2="blurOut" mode="normal" />
</filter>
</defs>
<path stroke-linejoin="round" stroke-linecap="round" stroke="red" stroke-opacity="1" stroke-width="5" fill="none" class="leaflet-clickable" d="M1063 458L1055 428L1034 433L1030 421L1017 423L911 452L895 455L885 441L859 424L809 410L788 394L774 377L744 309L730 313L727 304L669 319L599 341L596 331L491 364L488 357L498 343L490 343L450 352L417 256L371 270L366 253L355 256L351 242L217 282L194 210L191 196L166 113L45 147L44 140L13 150" filter="url(#filter1)"></path>
</svg>
```
Maybe a few tweaks to the dx, dy, and stdDeviation values will get it just the way you want.
|
Why offset specific in loop mount?
I am studying the command [here](http://www.biosignalpi.org/index.php/software/development-guide/how-to-install-qt-qwt-and-overclock-rpi) in the post about *How to compile and install Qt, qwt and overclock the RPI*
```
sudo mount -o loop,offset=62914560
<date>-wheezy-raspbian.img /mnt/rasp-pi-rootfs
```
I do `fdisk 2016-02-26-raspbian-jessie.img` and I get
```
Disk: 2016-02-26-raspbian-jessie.img geometry: 976/128/63 [7870464 sectors]
Signature: 0xAA55
Starting Ending
#: id cyl hd sec - cyl hd sec [ start - size]
------------------------------------------------------------------------
1: 0C 0 130 3 - 8 40 32 [ 8192 - 122880] Win95 FAT32L
2: 83 8 40 33 - 489 232 63 [ 131072 - 7739392] Linux files*
3: 00 0 0 0 - 0 0 0 [ 0 - 0] unused
4: 00 0 0 0 - 0 0 0 [ 0 - 0] unused
```
---
Why is this specific offset used in the mount command?
|
As 62914560 points exactly 60MiB into the file, I think the best guess would be that the Raspian disk image is actually partitioned. The offset tells `mount` (or actually `losetup`) the actual offset of the *root* file-system (I suggest this is the second of two partitions, the first most-probably being `/boot` resp. the bootloader/firmware files).
The problem here is that even though the `loop` driver actually supports partitioned images, the number of maximum partitions per `loop` device has to be specified as a module parameter when loading the module (or on the kernel command line). As there are many distros out there that won't do this by default, `...,offset=XXX` is the most reliable way to cope with partitioned images when `loop` uses the default parameter (which is 0, hence no partition support).
You can test whether your `loop` driver was loaded with partition-support by looking into `/sys/module/loop/parameters/max_part`. On my current system (ArchLinux), after loading `loop` without parameters this is:
```
$ cat /sys/module/loop/parameters/max_part
0
```
To enable partitioning-support, you will have to unload `loop` and load it again with the desired value for the `max_part` options, e.g.
```
# modprobe -r loop
# modprobe loop max_part=8
```
After this, you could try to manually set-up the loop-device for your image by doing
```
# losetup /dev/loop0 /path/to/<date>-wheezy-raspbian.img
```
Now, you should not only see `/dev/loop0` representing the whole image, but (as long as my theory is correct ;) also have `/dev/loop0p1`, `/dev/loop0p2`, etc., for all partitions in the image (see `losetup` script [example](https://raspberrypi.stackexchange.com/a/99531/154463)).
**Edit:**
If you want to do this yourself the tedious way (I'd suggest simply reloading `loop` with the correct `max_part` option and simply using the partitions), you could find out which offset is required by using `fdisk` directly on the image-file (shown with an ArchLinux ISO, as I had it on hand, but the idea is the same):
```
$ fdisk -l archlinux-2016.03.01-dual.iso
Disk archlinux-2016.03.01-dual.iso: 268.3 MiB, 281339392 bytes, 549491 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2237702c
Device Boot Start End Sectors Size Id Type
archlinux-2016.03.01-dual.iso1 * 0 1452031 1452032 709M 0 Empty
archlinux-2016.03.01-dual.iso2 172 63659 63488 31M ef EFI (FAT-12/16/32)
```
The second partition starts at sector 172 with a sector size of 512 bytes. Multiplying both values gives you the offset in bytes, thus to mount the partition, you'll use:
```
# mount -o loop,offset=$((172*512)) archlinux-2016.03.01-dual.iso /mnt
# ls -l /mnt
total 4
drwxr-xr-x 4 root root 2048 Mar 1 15:49 EFI
drwxr-xr-x 3 root root 2048 Mar 1 15:49 loader
```
Voila.
|
How can I protect data in a SQLite database in Android?
I'm currently developing an Android game which saves data into a SQLite database. This is not really "sensitive" data, but I don't want users to be able to **modify** it (for obvious reasons of game balance, as it would be cheating). And it's quite easy to access and modify a SQLite db **when your phone is rooted** (there are plenty of applications for that in the market).
So should I even worry about that, or consider that users with a rooted phone can do whatever they want, including cheating, and that's their choice? Or could I somehow **encrypt** the data I don't want them to modify, or add an **MD5 checksum** or something similar?
Another approach would be to abandon SQLite altogether and use some kind of binary files with game data.
Please let me know if some of you already encountered similar issues, and what are the approaches you followed, as well as the "good practices" in the Android gaming development community.
|
Root access for everybody and security are mutually exclusive.
Any application or user with root permissions can read *and modify* each and every file on your system, as well as all of the main memory. That doesn't leave many places to store a potential encryption key for the database.
You could hide parts of the key in the executables, configuration files etc, but everything you could come up with would be nothing more than obfuscation and security by obscurity.
If a user opts to grant root access to everybody, that's their decision, and it's not your job as an app developer to prevent any harm that might be caused.
Update:
[Storing API keys in Android, is obfustication enough?](https://stackoverflow.com/questions/4671859/storing-api-keys-in-android-is-obfustication-enough) is a pretty similar issue - it's about protecting API keys, but it's the same situation with regards to your options.
|
Identify Unique Browser / tab instance, e.g. User has 2 tabs open on mysite.com-identify each
I have read [this](http://www.pcreview.co.uk/forums/unique-browser-id-t3951101p2.html) and many other links the past few days.
The problem is that I need to have a unique identifier for each tab or browser that a user has open for mysite.com (example site name)
I cannot use a unique session, as when I open mysite.com and have e.g. selected "carrots" in the session, then all the other tabs/browsers for mysite.com now has "carrots" in the specific session value.
But still the server obviously identifies each browser/tab uniquely. Is there a way to get hold of this unique browser/tab ID, or to create a unique one?
I am not referring to generating a unique ID oneself via JavaScript; I saw some good examples of that on StackOverflow. The problem is that I'd need to implement it on each master page and carry it around between master/non-master pages. Also, in my instance it won't work, as I can have the browser open and it would generate 20 different IDs if I ran it 20 times across a few pages, e.g. when I cannot transfer the value across ports.
So please, not to waste everyone's time, this is not a random GUID creation question, this question relates to obtaining a unique recreatable ID (if it gets lost)for each tab/browser, preferably identifying by the link to the server.
|
This is a big topic you've touched on.
If you only need to persist the ID during a form submit, you can take the approach of the client-side `window.name` property and use `window.onload` to populate a hidden field with that value.
But this won't work when user is following hyper-links (click on a link on a site).
onload: check if `window.name` is set; if not, assign it a GUID; then copy `window.name` into a hidden field. So, on submit, your form will have the hidden field carrying the `window.name` GUID.
You can also use the HTML5 `window.sessionStorage`, which will solve all your troubles, but that doesn't work on older browsers.
If you need support backward-browser AND non-submit type, then you will need to write some pretty complex cookie/session management on the client side. Not pretty. Just hope the above 2 is sufficient.
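A minimal sketch of the `window.name` approach described above (the `makeTabId` helper and the `tabId` hidden-field id are made-up names for illustration, and the identifier is not a real RFC 4122 GUID; substitute a proper generator if you need one):

```javascript
// Produce an identifier that is unique enough to distinguish tabs
function makeTabId() {
  return 'tab-' + Date.now().toString(36) + '-' +
         Math.random().toString(36).slice(2, 10);
}

// Browser wiring: window.name survives navigation within the same tab,
// so every page in that tab sees the same ID.
// window.addEventListener('load', function () {
//   if (!window.name) { window.name = makeTabId(); }
//   document.getElementById('tabId').value = window.name;  // hidden form field
// });
```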
|
Problem with Mergesort in C++
```
vector<int>& mergesort(vector<int> &a) {
if (a.size() == 1) return a;
int middle = a.size() / 2;
vector<int>::const_iterator first = a.begin();
vector<int>::const_iterator mid = a.begin() + (middle - 1);
vector<int>::const_iterator last = a.end();
vector<int> ll(first, mid);
vector<int> rr(mid, last);
vector<int> l = mergesort(ll);
vector<int> r = mergesort(rr);
vector<int> result;
result.reserve(a.size());
int dp = 0, lp = 0, rp = 0;
while (dp < a.size()) {
if (lp == l.size()) {
result[dp] = (r[rp]);
rp++;
} else if (rp == r.size()) {
result[dp] = (l[lp]);
lp++;
} else if (l[lp] < r[rp]) {
result[dp] = (l[lp]);
lp++;
} else {
result[dp] = (r[rp]);
rp++;
}
dp++;
}
a = result;
return a;
}
```
It compiles correctly but while execution, I am getting:
>
> This application has requested the
> runtime to end it in an unusual way.
>
>
>
This is a weird error.
**Is there something that is fundamentally wrong with the code?**
|
One problem is with the usage of `reserve()` (either use `resize()` or append items with `push_back()` instead of accessing the index).
---
```
if (a.size() == 1) return a;
int middle = a.size() / 2;
vector<int>::const_iterator first = a.begin();
vector<int>::const_iterator mid = a.begin() + (middle - 1);
vector<int>::const_iterator last = a.end();
vector<int> ll(first, mid);
vector<int> rr(mid, last);
```
This could be another problem. If the size is 2, then `ll` would end up being an empty vector, and this function doesn't appear to handle this. There doesn't seem to be much reason to subtract 1 from middle anyway.
---
It is also possible that you are copying things around far more than needed: you shouldn't need the `l` and `r` vectors (because they will just be copies of `ll` and `rr`), similarly I don't think you need the `result` vector, since you could just write the merged results right back to `a`.
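Putting those three fixes together, one possible corrected version might look like this (a sketch, not the only way to fix it; note that it sorts `a` in place and returns nothing, which removes the extra copies):

```cpp
#include <vector>

// In-place merge sort: split at the true middle (no "- 1"), recurse,
// then merge the two halves straight back into `a`.
void mergesort(std::vector<int>& a) {
    if (a.size() <= 1) return;                 // also handles the empty vector
    const std::size_t middle = a.size() / 2;   // both halves are non-empty
    std::vector<int> left(a.begin(), a.begin() + middle);
    std::vector<int> right(a.begin() + middle, a.end());
    mergesort(left);
    mergesort(right);
    std::size_t lp = 0, rp = 0, dp = 0;
    while (lp < left.size() && rp < right.size())
        a[dp++] = (left[lp] <= right[rp]) ? left[lp++] : right[rp++];
    while (lp < left.size())  a[dp++] = left[lp++];
    while (rp < right.size()) a[dp++] = right[rp++];
}
```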
|
Can you do keyframed speed changes with video?
I'm making a video and I want to do keyframed speed changes. That is, I want to have a video gradually get faster or slower at different points. I have been a user of kdenlive for awhile but have unfortunately found that is unable to do this. Can any recommend software that can definitely do this?
|
After a year of searching I've finally come across an answer. For this you'll need Blender. I'm using 2.56 beta.
Open Blender and switch to the Video Editor

Add a video to the editor

To make things easier combine the audio and video clips (make a meta clip)

With the clip selected add a Speed Control effect

In the effect properties on the right-hand side first untick "Stretch to input strip length". Then hover your mouse over Speed Factor and Press "I" on your keyboard. This field will go yellow.

Move to a different point in the movie. Change the value in the Speed Control field and then hover your cursor over the field and press "I" key again. You'll see the graph on the left-hand side change to reflect your actions

Do this a few more times and you've just changed the speed of your movie using keyframes!
There are many bug reports and feature requests in programs for easy keyframe editing of the speed of movies:
**Openshot**
[Bug 524364](https://bugs.launchpad.net/openshot/+bug/524364)
[Bug 506096](https://bugs.launchpad.net/openshot/+bug/506096)
**Kdenlive**
[Bug 336](http://www.kdenlive.org/mantis/view.php?id=336)
[Bug 397](http://www.kdenlive.org/mantis/view.php?id=397)
[Bug 289](http://www.kdenlive.org/mantis/view.php?id=289)
**VLMC**
[Bug 205](http://trac.videolan.org/vlmc/ticket/205)
**Novacut**
[Bug 680865](https://bugs.launchpad.net/novacut/+bug/680865)
|
how to place legend horizontaly above barplot in r
I have to place my legend horizontally above the barplot. I have searched a lot but could not find a satisfactory answer. I have found an answer [here](https://stackoverflow.com/questions/3932038/plot-a-legend-outside-of-the-plotting-area-in-base-graphics) but it does not solve my problem.
|
I don't know what your data looks like and what you want your legend to be but horizontal legends are produced by setting `horiz = TRUE`. You can place a legend on the top of a plot using `"top"` as the legend position. If you want it outside your plot region you can move it upward using `inset` (you'll have to play around with the value a bit) and setting `xpd = TRUE` (which basically allows you to write outside the plot region):
Example:
```
barplot(c(10,2,7), col = 1:3)
legend("top", fill = 1:3, legend = c("A", "B", "C"),
horiz = TRUE, inset = c(0,-0.1), xpd = TRUE)
```
produces the following graph:
barplot with horizontal legend on top
[](https://i.stack.imgur.com/Jzqtz.png)
|
Moving a type with non-movable member
In his "C++ Move Semantics" book, Nicolai M. Josuttis states that moving an object that contains a non-movable member among otherwise movable members with the generated move constructor moves all members but the non-movable one (which is copied). Below is a code snippet which is a variation of the example in the book.
```
#include <iostream>
class Copyable {
std::string name;
public:
explicit Copyable(std::string name): name(std::move(name)) {}
// Copying enabled
Copyable(const Copyable& other): name(other.name) {
std::cout << "Copyable copy ctor" << std::endl;
}
Copyable& operator=(const Copyable& other) {
name=other.name;
std::cout << "Copyable op=" << std::endl;
return *this;
}
// Moving disabled with no copy fallback
Copyable(Copyable&&) = delete;
Copyable& operator=(Copyable&&) = delete;
};
class Movable {
std::string name;
public:
explicit Movable(std::string name): name(std::move(name)) {}
// Copying enabled
Movable(const Movable& other): name(other.name) {
std::cout << "Movable copy ctor" << std::endl;
}
Movable& operator=(const Movable& other) {
name=other.name;
std::cout << "Movable op=" << std::endl;
return *this;
}
// Moving enabled
Movable(Movable&& other) noexcept: name(std::move(other.name)) {
std::cout << "Movable move ctor" << std::endl;
}
Movable& operator=(Movable&& other) noexcept {
name = std::move(other.name);
std::cout << "Movable move op=" << std::endl;
return *this;
}
};
class Container {
Copyable copyable;
Movable movable;
public:
Container(Copyable copyable, Movable movable): copyable(copyable), movable(std::move(movable)) {}
// Both copying and moving enabled by default
};
int main() {
Copyable c{"copyable"};
Movable m{"movable"};
Container container{c, m};
Container container2{std::move(container)};
}
```
Compiled with GCC on x86-64 with C++17 standard following output is produced:
Container created and initialized:
```
Copyable copy ctor
Movable copy ctor
Copyable copy ctor
Movable move ctor
```
Container moved:
```
Copyable copy ctor
Movable copy ctor
```
No move ctor is called for the movable member once the container is moved.
According to the book move ctor should be called for the Movable member, shouldn't it?
|
I'm not sure what the author of the book meant, but here is a quote from [cppreference](https://en.cppreference.com/w/cpp/language/move_constructor):
>
> The implicitly-declared or defaulted move constructor for class T is defined as deleted if any of the following is true:
>
>
> - T has non-static data members that cannot be moved (have deleted, inaccessible, or ambiguous move constructors);
> - [...]
>
>
>
So the move constructor of `Container` is implicitly-declared (aka "generated") as deleted. You cannot use it. Instead, `std::move(container)` binds to the const reference of the copy constructor and that is what is called.
Note that if you try to declare that move constructor as default like so, the compiler should give you an error message:
```
Container(Container&& other) noexcept = default;
```
For example, GCC says:
>
> error: use of deleted function 'Container::Container(Container&&)'
>
> note: 'Container::Container(Container&&)' is implicitly deleted because the default definition would be ill-formed
>
>
>
You can get the behavior described in the book, but you have to write it yourself. Something like this:
```
Container(Container&& other) :
copyable(other.copyable),
movable(std::move(other.movable)) {
}
```
... though I don't know why you would ever want to do that. In a concrete scenario, there must be a good reason for not being able to move `Copyable`. To be fair, [a copyable-but-not-movable type does not seem to be very useful](https://stackoverflow.com/questions/14323093/are-there-any-use-cases-for-a-class-which-is-copyable-but-not-movable). Still, I would not expect the behavior described in the book to be the default (implicit) one.
|
ExtJs TreeGrid with editor column. Exists?
I need the ability to edit values in a grid column of a treegrid. Simply adding an editor to the column's config didn't help. I'm using the ExtJS 4 treepanel component.

Any ideas?
|
Yup, it exists.
I would recommend using the latest version, which at this time is 4.1 Release Candidate 1.
Use an Ext.Tree.Panel
add a Grid Editing plugin, like this:
```
plugins:[
Ext.create('Ext.grid.plugin.CellEditing', {
clicksToEdit:2
})
]
```
make at least one of the columns editable like this
```
editor:{
xtype:'textfield'
}
```
You are basically combining this:
<http://docs.sencha.com/ext-js/4-0/#!/example/tree/treegrid.html>
and this:
<http://docs.sencha.com/ext-js/4-0/#!/example/grid/cell-editing.html>
|
Forward slash / in CSS border radius syntax
Recently I came across following border radius syntax unknown to me:
```
.myClass{
border-radius: 30% / 20%;
}
```
Can anyone explain the syntax.
And is it compatible with IE8?
|
From [W3C](http://www.w3.org/TR/css3-background/#border-radius) :
>
> If values are given before and after the slash, then the values before
> the slash set the horizontal radius and the values after the slash set
> the vertical radius. If there is no slash, then the values set both
> radii equally.
>
>
>
As far as the support goes, IE8 doesn't support `border-radius` property be it whatever syntax you write in. There are polyfills available like [CSS3 Pie](http://css3pie.com/) if you want to make `border-radius` work on IE8.
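To make the expansion concrete, the slash shorthand distributes the two radii to every corner's longhand property (horizontal value first, then vertical), so the two rules below are equivalent:

```css
.myClass {
  border-radius: 30% / 20%;
}
/* ...expands to: */
.myClass {
  border-top-left-radius:     30% 20%; /* horizontal vertical */
  border-top-right-radius:    30% 20%;
  border-bottom-right-radius: 30% 20%;
  border-bottom-left-radius:  30% 20%;
}
```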
You can check on [CanIUse](http://caniuse.com/#feat=border-radius) for `border-radius` support across browsers.
|
Does Resharper tell me a css class is unknown because it's on a CDN?
Resharper tells me, "*Unknown css class 'container-fluid'*" for this line in my site's \_SiteLayout.cshtml file:
```
<header class="container-fluid">"
```
I do have this in my <head> section:
```
<link href="http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-combined.no-icons.min.css" rel="stylesheet">
```
...and I believe "container-fluid" is there. Is this simply a matter of Resharper not being able to find the class because it's remote?
|
ReSharper cannot access this file since it is not yet rendered into the project.
For example, when you have a CSS file within your project, ReSharper can simply search through it looking for class names. However, when a CSS file is outside of your project (remote, or on a CDN), ReSharper cannot access it since it is not included yet.
Once your project is running within a browser and the file from the CDN is called, that is when it is included within your project.
Therefore, if you would like to know which class names exist or not when working with ReSharper, it is usually best to download them and include them within your project :)
**EDIT:**
After researching into this, I *think* I have found a way to allow ReSharper to search within external sources and uncompiled code.
Go to: ReSharper -> Options -> Tools -> External Sources

More information in the official documentation for [ReSharper | Options | Tools | External Sources](https://www.jetbrains.com/help/resharper/Reference__Options__Tools__External_Sources.html).
|
How can explicitly reset a template fragment cache in Django?
I am using Memcache for my Django application.
In Django, developers can use template fragment caching to only cache a section of a template. <https://docs.djangoproject.com/en/dev/topics/cache/#template-fragment-caching>
I was wondering if there is a way to explicitly change the value of a template fragment cache section say in views.py. For instance, could one use a method akin to cache.set("sidebar", "new value") except for a template fragment cache?
Thank you for your help.
|
In theory, yes. You first have to create a template cache key in the same pattern used by Django, which can be done with [this snippet of code](https://gist.github.com/1359075):
```
from django.utils.hashcompat import md5_constructor
from django.utils.http import urlquote
def template_cache_key(fragment_name, *vary_on):
"""Builds a cache key for a template fragment.
This is shamelessly stolen from Django core.
"""
base_cache_key = "template.cache.%s" % fragment_name
args = md5_constructor(u":".join([urlquote(var) for var in vary_on]))
return "%s.%s" % (base_cache_key, args.hexdigest())
```
You could then do something like `cache.set(template_cache_key(sidebar), 'new content')` to change it.
However, doing that in a view is kind of ugly. It makes more sense to set up [post-save signals](https://docs.djangoproject.com/en/1.3/topics/signals/) and expire cache entries when models change.
The above code snippet works for Django 1.2 and below. I'm not sure about Django 1.3+ compatibility; [`django/templatetags/cache.py`](https://code.djangoproject.com/browser/django/trunk/django/templatetags/cache.py) will have the latest info.
For Django 1.7, [django/core/cache/utils.py](https://github.com/django/django/blob/master/django/core/cache/utils.py) has a usable function.
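Since `md5_constructor` and `urlquote` are gone from modern Django, the same snippet can be expressed with plain standard-library calls. Note that the exact key format is an internal detail that varies across Django versions, so check the `make_template_fragment_key` helper shipped with your release before relying on it:

```python
import hashlib
from urllib.parse import quote

def template_cache_key(fragment_name, *vary_on):
    """Rebuild the fragment cache key in the old 'template.cache.<name>.<md5>'
    format used by the snippet above (newer Django versions use a different
    prefix -- see django/core/cache/utils.py)."""
    args = hashlib.md5(
        ":".join(quote(str(var)) for var in vary_on).encode()
    ).hexdigest()
    return "template.cache.%s.%s" % (fragment_name, args)

# e.g. cache.set(template_cache_key("sidebar"), "new content", 300)
```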
|
c# List Sort : by Chinese strokes
I have a webpage in which I have to sort a list by the number of Chinese strokes.
I have created an application containing code like this:
```
List<Student> stuList = new List<Student>() {
new Student("上海"),
new Student("深圳"),
new Student("广州"),
new Student("香港")
};
System.Globalization.CultureInfo strokCi = new System.Globalization.CultureInfo("zh-tw");
System.Threading.Thread.CurrentThread.CurrentCulture = strokCi;
//stuList.Sort();
```
but there is an error: `At least one object must implement IComparable.`
What does this mean and how can I fix it?
|
You need to have your `Student` class implement the `IComparable` interface. This requires implementing a method `CompareTo`, which can simply return the result of calling `CompareTo` between the strings you're trying to order by.
For example, if the constructor initializes a `name` field, you might have something like this:
```
public class Student : IComparable
{
private string name;
public Student(string name)
{
this.name = name;
}
public int CompareTo(object other)
{
Student s = other as Student;
if (s == null)
{
throw new ArgumentException("Students can only compare with other Students");
}
return this.name.CompareTo(s.name);
}
}
```
|
Individual search filters for fields in django-rest-framework
Django REST framework has a `SearchFilter` backend that allows a single query against the searchable fields:
```
class OrganizationViewSet(viewsets.ModelViewSet):
queryset = Organization.objects.all()
serializer_class = OrganizationSerializer
pagination_class = CustomPagination
filter_backends = (filters.SearchFilter, DjangoFilterBackend)
filter_fields = ('sector', 'industry', 'marketplace')
search_fields = ('symbol',)
```
This way, when I query `...?search=AMZ`, it will only return records with a case-insensitive match in the `symbol` field.
If I add another element into the `search_fields` tuple, it will look for this same search string in **both**.
Is there a way to define these search fields individually that will allow me to do something like:
`?search_symbol=AMZ&search_name=Good` so that it looks for objects that have `AMZ` in `symbol` field and `good` in `name` field?
|
To achieve this you will need a custom filter backend extending the `rest_framework.filters.SearchFilter` class. Specifically in the `rest_framework.filters.SearchFilter` class there is a method `get_search_terms`:
```
def get_search_terms(self, request):
"""
Search terms are set by a ?search=... query parameter,
and may be comma and/or whitespace delimited.
"""
params = request.query_params.get(self.search_param, '')
return params.replace(',', ' ').split()
```
We can override this method in our own `CustomSearchFilter` class to take control of how you specify the search terms within the url, for example:
```
class CustomSearchFilter(SearchFilter):
search_field_prefix = "search_"
def get_search_terms(self, request):
# get search fields from the class
search_fields = getattr(request.resolver_match.func.view_class, 'search_fields', list())
params = []
# iterate over each query paramater in the url
for query_param in request.query_params:
# check if query paramater is a search paramater
if query_param.startswith(CustomSearchFilter.search_field_prefix):
                # extract the field name, handling field names that themselves contain <CustomSearchFilter.search_field_prefix>
field = CustomSearchFilter.search_field_prefix.join(
query_param.split(CustomSearchFilter.search_field_prefix)[1:]
)
# only apply search filter for fields that are in the views search_fields
if field in search_fields:
params.append(request.query_params.get(query_param, ''))
return params
```
Now replace the `filters.SearchFilter` with your new `CustomSearchFilter` in your views `filter_backends`.
Hope this helps, I've written this without testing the code myself so let me know how you go!
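The prefix-stripping trick above can be checked in isolation. Here is a standalone sketch of that logic (plain Python, no DRF required), including the edge case where the field name itself contains the prefix:

```python
SEARCH_FIELD_PREFIX = "search_"

def extract_field(query_param, prefix=SEARCH_FIELD_PREFIX):
    """Return the field name for a '?search_<field>=' parameter, or None.

    Splitting on the prefix and rejoining everything after the first piece
    handles field names that themselves contain 'search_'.
    """
    if not query_param.startswith(prefix):
        return None
    return prefix.join(query_param.split(prefix)[1:])
```

For example, `extract_field("search_symbol")` gives `"symbol"`, while an unrelated parameter such as `page` yields `None`.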
|
How to get the first date and last date of current quarter in java.util.Date
I need to get the **first date of the current quarter** as a java.util.Date object and the **last date of the current quarter** as a java.util.Date object.
I'm using the following methods to get the first and last dates of the current month.
```
private Date getThisMonthFirstDate(){
Calendar calendar = new GregorianCalendar();
calendar.set(Calendar.HOUR_OF_DAY, 0);
calendar.set(Calendar.MINUTE, 0);
calendar.set(Calendar.SECOND, 0);
calendar.set(Calendar.MILLISECOND, 0);
calendar.set(Calendar.DAY_OF_MONTH, 1);
return calendar.getTime();
}
private Date getThisMonthLastDate(){
    Calendar calendar = new GregorianCalendar();
calendar.set(Calendar.HOUR_OF_DAY, 0);
calendar.set(Calendar.MINUTE, 0);
calendar.set(Calendar.SECOND, 0);
calendar.set(Calendar.MILLISECOND, 0);
calendar.set(Calendar.DAY_OF_MONTH,1);
calendar.add(Calendar.MONTH, 1);
calendar.add(Calendar.DATE, -1);
return calendar.getTime();
}
```
Is there a way to modify that function to achieve this or could anyone point out a better way?
Assume that Q1 = Jan Feb Mar, Q2 = Apr, May, Jun, etc.
|
Here is a solution for Java 7 or older (otherwise, I suggest checking the other answers):
```
private static Date getFirstDayOfQuarter(Date date) {
Calendar cal = Calendar.getInstance();
cal.setTime(date);
cal.set(Calendar.DAY_OF_MONTH, 1);
cal.set(Calendar.MONTH, cal.get(Calendar.MONTH)/3 * 3);
return cal.getTime();
}
private static Date getLastDayOfQuarter(Date date) {
Calendar cal = Calendar.getInstance();
cal.setTime(date);
cal.set(Calendar.DAY_OF_MONTH, 1);
cal.set(Calendar.MONTH, cal.get(Calendar.MONTH)/3 * 3 + 2);
cal.set(Calendar.DAY_OF_MONTH, cal.getActualMaximum(Calendar.DAY_OF_MONTH));
return cal.getTime();
}
```
***Note 1:*** Months in [java.util.Calendar](https://docs.oracle.com/javase/7/docs/api/java/util/Calendar.html) are represented as integers starting from 0 (Jan) through 11 (Dec)
***Note 2:*** integer division in Java truncates toward zero, which for non-negative values is the floor. So 2/3 = 0, 4/3 = 1 and so forth.
So, `cal.get(Calendar.MONTH)/3 * 3` calculates zero-based index of quarter: 0(Q1), 1(Q2), 2(Q3), 3(Q4).
*Example:* Feb in Calendar is 1. So, 1 / 3 \* 3 = 0.
If Feb date is supplied, then start of the quarter is 1st of Jan (because we got 0) and the end of quarter is last day of month 0 + 2 = 2 (Mar)
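As a standalone sanity check of the `month/3 * 3` arithmetic, the mapping can be pulled out into a small helper (illustrative, not part of the original answer; months are the zero-based `Calendar` values, 0 = Jan ... 11 = Dec):

```java
// Maps a zero-based Calendar month to the first/last month of its quarter.
// Integer division floors non-negative values: Jan/Feb/Mar (0,1,2) -> 0, etc.
class QuarterMath {
    static int quarterStartMonth(int month) {
        return month / 3 * 3;
    }

    static int quarterEndMonth(int month) {
        return month / 3 * 3 + 2;
    }
}
```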
|
Maximum Concurrency Throttle
I expect there are many possible solutions to this question. I can come up with a few myself, some clearly better than others, but none that I am certain are optimal, so I'm interested in hearing from the real multi-threading gurus out there.
I have circa 100 pieces of work that can be executed concurrently, as there are no dependencies between them. If I execute these sequentially my total execution time is approx 1m 30s. If I queue each piece of work in the thread pool it takes approx 2m, which suggests to me that I am trying to do too much at once and that context switching between all these threads is negating the advantage of having them.
So based on the assumption (please feel free to shoot me down if this is wrong) that if I only queue up to the number of cores in my system (8 on this machine) pieces of work at any one time, I will reduce context switching and thus improve overall efficiency (other process threads notwithstanding, of course), **can anyone suggest the optimal pattern/technique for doing this?**
BTW I am using smartthreadpool.codeplex.com, but I don't have to.
|
A good threadpool already tries to have one active thread per available core. This isn't a matter of having one thread for work per core though, as if a thread is blocking (most classically on I/O) you want another thread using that core.
The .NET threadpool, or the Parallel class, might be worth a try instead.
If your CPU is hyper-threaded (8 virtual cores on 4 physical) this could be an issue. On average hyper-threading makes things faster, but there are plenty of cases where it makes them worse. Try setting affinity to every other core and see if it gives you an improvement - if it does, then this is likely a case where hyper-threading is bad.
Do you have to gather results together again, or share any resources between the different tasks? The cost of doing this could well be greater than the savings of multi-threading. Perhaps it is being done unnecessarily though - e.g. if you are locking on shared data that is only ever read, you don't actually need the lock with most data structures (most, but not all, are safe for concurrent reads if there are no writes).
The partitioning of the work could be an issue too. Say the single-threaded approach works its way through an area of memory, but the multi-threaded approach gives each thread its next bit of memory to work with round-robin. Here there'd be more cache-flushing per core as the "good next bit" is actually being used by another core. In this situation, splitting work into bigger chunks can fix it.
There are plenty of other factors that can make a multi-threaded approach perform worse than a single-threaded one, but those are a few I can think of immediately.
Edit: If you are writing to a shared store, it could be worth trying a run where you just throw away any results. That could narrow down whether that's where the issue lies.
|
Write a table (of doubles) to binary file IO C++
I need to write a matrix with four columns ("g\_Grid.r", "g\_Grid.t", "g\_Grid.b", "g\_Grid.ph")
Normally, I write to file using file stream:
```
ofstream fout;
fout.open("GRID.dat");
for (int m=0;m<N_PH;++m)
{
for (int k=0;k<N_B;++k)
{
for (int j=0;j<N_T;++j)
{
for (int i=0;i<N_R;++i)
{
fout << setprecision(32) << g_Grid.r[i]<<" "<<g_Grid.t[j]<<" "<<g_Grid.b[k]<<" "<<g_Grid.ph[m]<< endl;
}
}
}
}
fout.close();
```
It works fine, but now I'm dealing with a very large (long) matrix and it takes ages to write the formatted output (".txt"). Since I really do NOT need the file to be formatted, I prefer to write it as a binary file.
**QUESTION:** given four vectors/arrays (r,t,b,ph), how to write **a table** to binary file? (I use matlab to read this file later, and also need to read it as a binary table)
**EDIT**
Normally, I use the following simple code to write binary file:
```
ofstream myFile (fileName, ios::out | ios::binary);
double val;
for (int m=0;m<N_PH;++m)
{
for (int k=0;k<N_B;++k)
{
for (int j=0;j<N_T;++j)
{
for (int i=0;i<N_R;++i)
{
val = g_N.Amp[m][k][j][i];
myFile.write(reinterpret_cast<const char*>(&val), sizeof(val));
}
}
}
}
myFile.close();
```
But this one will yield only "one-column" while it will be read by matlab later.
|
To write binary data into a file, you first need to open the file in binary mode and then use `fstream::write`. It takes a `char *` argument and writes a number of bytes from that address - the argument requires a cast if the data is not an array of `char` or a pointer to `char`.
So as to avoid having several calls to write, it's better to stuff the data into a structure.
So, the following should do what you need:
```
ofstream fout;
fout.open("GRID.dat", ios::binary);
for (int m=0;m<N_PH;++m)
{
for (int k=0;k<N_B;++k)
{
for (int j=0;j<N_T;++j)
{
for (int i=0;i<N_R;++i)
{
struct X
{
double a, b, c, d;
} x;
x.a = g_Grid.r[i];
x.b = g_Grid.t[j];
x.c = g_Grid.b[k];
x.d = g_Grid.ph[m];
fout.write(reinterpret_cast<char *>(&x), sizeof(x));
}
}
}
}
fout.close();
```
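To sanity-check the format (and mirror what MATLAB will see), you can read the packed records straight back. This round-trip sketch is illustrative; it assumes the struct has no padding between its four doubles, which holds on common platforms:

```cpp
#include <fstream>
#include <vector>

struct Row { double a, b, c, d; };

// Write each row as one packed binary record.
void write_rows(const char* path, const std::vector<Row>& rows) {
    std::ofstream out(path, std::ios::binary);
    for (const Row& r : rows)
        out.write(reinterpret_cast<const char*>(&r), sizeof r);
}

// Read records back until EOF.
std::vector<Row> read_rows(const char* path) {
    std::vector<Row> rows;
    std::ifstream in(path, std::ios::binary);
    Row r;
    while (in.read(reinterpret_cast<char*>(&r), sizeof r))
        rows.push_back(r);
    return rows;
}
```

In MATLAB, a file written this way can then be read with something like `fread(fid, [4, Inf], 'double')` and transposed to recover the four-column table.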
|
How add a 'Star \*' after a django ModelForm CharField?
I have some required fields in my Django ModelForm. How can I add a red star (\*) after the required fields?
|
I'm going to assume you want this to happen automatically, so here's one of a few different ways:
```
{% for field in form %}
<label for="{{ field.auto_id }}">{{ field.label_tag }}
{% if field.field.required %}
<span class="required">*</span>
{% endif %}
</label>
{% endfor %}
```
Then you can style the asterisk using CSS.
Or, you can add the asterisk using CSS instead if you want:
```
<style type="text/css">
span.required:after { content: '*'; }
</style>
{% for field in form %}
<label for="{{ field.auto_id }}">
{% if field.field.required %}
<span class="required">{{ field.label_tag }}</span>
{% else %}
{{ field.label_tag }}
{% endif %}
</label>
{% endfor %}
```
This one is probably a better option if you want to do other things with the required field as well.
However, if you will not be accessing the fields individually (such as using {{ form.as\_p }}), then you can add a property to your ModelForm:
```
class FooForm(forms.ModelForm):
required_css_class = 'required'
```
That will define all fields that are required as having the 'required' class (and thus, you can use the CSS code I mentioned above to add an asterisk (or whatever else you want to do with it).
|
Http status 500 Error instantiating servlet class
I'm using a Tomcat 8 server and I'm getting the following error.
[](https://i.stack.imgur.com/7XZaz.png)
[](https://i.stack.imgur.com/dZwbJ.png)
Its URL is `http://localhost:8080/WeatherWebApp` When I submit the details, it gives this error.
Here is WeatherServlet.java class
```
package org.akshayrahar;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
public class WeatherServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
WeatherServlet(){
}
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
System.out.println("again");
response.setContentType("text/html");
PrintWriter out=response.getWriter();
out.println("akshay rahar");
}
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
}
}
```
Here is web.xml file
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE xml>
<web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee">
<display-name>WeatherWebApp</display-name>
<welcome-file-list>
<welcome-file>index.html</welcome-file>
</welcome-file-list>
<servlet>
<servlet-name>HelloServlet</servlet-name>
<servlet-class>WeatherServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>HelloServlet</servlet-name>
<url-pattern>/CurrentWeather</url-pattern>
</servlet-mapping>
</web-app>
```
Here is index.html file too:-
```
<!DOCTYPE html>
<html>
<head>
<title>Weather App</title>
<link rel="stylesheet" type="text/css" href="stylesheet.css">
<script >
function initMap() {
var input =document.getElementById('pac-input');
var autocomplete = new google.maps.places.Autocomplete(input);
}
</script>
<script src="https://maps.googleapis.com/maps/api/js?key=AIzaSyACZhmkSaIz436Dt3kHt_cVEYKN-gHzfYo&libraries=places&callback=initMap"async defer></script>
</head>
<body>
<h1>Find weather of any city in the world</h1>
<h2>Enter City Name</h2>
<form id="form" action="CurrentWeather" method="GET">
<input id="pac-input" type="text" name="cityname">
</form><br>
<div class="button1">
<button type="submit" form="form" value="Submit">Submit</button>
</div>
</body>
</html>
```
I've also mentioned the stylesheet.css file in a comment. Please check it.
|
The error shows that Tomcat is unable to create an instance of your `WeatherServlet` class.
You should make its constructor and the overridden methods `public` too. You can even make use of the default constructor by removing the less accessible one. Also make sure the `<servlet-class>` entry in your `web.xml` uses the fully-qualified name `org.akshayrahar.WeatherServlet`, since the class is declared in the `org.akshayrahar` package:
```
public class WeatherServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
public WeatherServlet(){
}
public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
System.out.println("again");
response.setContentType("text/html");
PrintWriter out=response.getWriter();
out.println("akshay rahar");
}
public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
}
}
```
|
How to obtain a BigInt random number within a range in julia?
I need to obtain a random number between 1 and a `BigInt` in Julia, but I couldn't find out in the documentation how this can be done. The code below is what I thought would work:
```
julia> rand(BigInt(1):BigInt(2^1000))
ERROR: integer division error
in randu at random.jl:158
in rand at random.jl:178
in rand at random.jl:187
```
**edit**: GregS mentioned `2^1000` would wrap around zero. Actually, `2^1000` results in zero, so the code above is wrong. But using `BigInt(2)^1000` doesn't work:
```
julia> rand(BigInt(1):BigInt(2)^1000)
ERROR: InexactError()
in convert at gmp.jl:108
in colon at range.jl:38
julia> rand(BigInt(1):BigInt(2)^BigInt(1000))
ERROR: InexactError()
in convert at gmp.jl:108
in colon at range.jl:38
```
What is the fastest way to get this done? (The numbers should be uniformly distributed.)
Thanks!
|
This is available after all if you use `ccall`. I'm sure at some point it will be easy, but here's a way to do this now; I did not find a way to do it from Base, but will amend if things change. There are 2 calls that need to be made for this to work. From GMP's docs, I chose `mpz_urandomm`:
## GMP Support
> — Function: void **mpz\_urandomm (mpz\_t rop, gmp\_randstate\_t state, const mpz\_t n)**
> Generate a uniform random integer in the range 0 to n-1, inclusive.
>
> The variable state must be initialized by calling one of the gmp\_randinit functions (Random State Initialization) before invoking this function.
You must first initialize the random number generator. I did this suboptimally and will update with something more refined.
> — Function: void **gmp\_randinit\_default (gmp\_randstate\_t state)**
> Initialize state with a default algorithm. This will be a compromise between speed and randomness, and is recommended for applications with no special requirements. Currently this is gmp\_randinit\_mt.
## ccall method
### Initialize RNG
Not having an elegant way to declare gmp\_randstate\_t, just declare a big buffer. This is important otherwise a segfault occurs.
```
julia> buffer = Array(Uint8,32);
julia> ccall((:__gmp_randinit_default,:libgmp),Void,(Ptr{Uint8},),buffer);
```
### Generate Random Numbers
Create BigInt, x to store the result
```
julia> x = BigInt(0)
0
```
Set y to MaxRange
```
julia> y = BigInt(2)^1000
10715086071862673209484250490600018105614048117055336074437503883703510511249361224931983788156958581275946729175531468251871452856923140435984577574698574803934567774824230985421074605062371141877954182153046474983581941267398767559165543946077062914571196477686542167660429831652624386837205668069376
```
Generate random x
```
julia> ccall((:__gmpz_urandomm,:libgmp),Void,(Ptr{BigInt},Ptr{Uint8},Ptr{BigInt}),&x,buffer,&y)
```
verify
```
julia> x
9301165293246235069759966068146313776551258669855356477271940698500929939755418247622530571466332330697816620308003246225290293476785304004840090056840661553451916748315356563734257724978000166406621823207925733850455027807451108123161768212073821382033500073069184011344280494573919716117539236653172
```
etc...
```
julia> ccall((:__gmpz_urandomm,:libgmp),Void,(Ptr{BigInt},Ptr{Uint8},Ptr{BigInt}),&x,buffer,&y)
```
```
julia> x
5073599723113217446035606058203362324610326948685707674578205618189982426100515602680640230141018758328161278469759835943678360952795440512680380424413847653984694781421269745198616340362470820037933917709243387214511018480191308767310495781355601069937334945556566243556239048498564021992916827796124
```
>
|
Video.js : show big play button at the end
I would like to show the big play button at the end of the video, so user can replay it easily.
It seems that this big play button is shown by default (every post I read is about hiding it instead of showing it...), but that is not the case for me...
I have tried to edit the following function (in the video.dev.js file) but nothing has changed:
```
vjs.Player.prototype.onEnded = function(){
if (this.options_['loop']) {
this.currentTime(0);
this.play();
}
else { // I am not using loop mode
this.bigPlayButton.show();
this.pause();
}
};
```
Thanks for your responses.
|
There are a few ways you can do this. You can show the button when the video ends with the [API](https://github.com/videojs/video.js/blob/master/docs/api.md):
```
videojs("myPlayer").ready(function(){
var myPlayer = this;
myPlayer.on("ended", function(){
myPlayer.bigPlayButton.show();
});
});
```
Or if you do want to modify `video.dev.js` you just need to uncomment the line that does the same thing:
```
vjs.BigPlayButton = vjs.Button.extend({
/** @constructor */
init: function(player, options){
vjs.Button.call(this, player, options);
if (!player.controls()) {
this.hide();
}
player.on('play', vjs.bind(this, this.hide));
// player.on('ended', vjs.bind(this, this.show)); // uncomment this
}
});
```
Or with CSS you could force the button to be showed whenever the video is not playing (ended or paused):
```
.video-js.vjs-default-skin.vjs-paused .vjs-big-play-button {display:block !important;}
```
The posts you've seen about hiding it probably refer to version 3 of video.js, which did show the play button at the end.
|
jquery draggable containment array values for scaled container
If anyone could help me figure out how to keep draggable elements contained in a div that changes scale based on window size, I'd really appreciate any guidance.
If I do:
```
element.draggable({
cursor: "move",
containment: '#container'
});
```
What will happen is it gives me the containment for the regular size of the container. So if I have a `transform: scale(1.5)`, there will be space in the container that the draggable element cannot reach.
I've also tried `containment: 'parent'` but that gets very glitchy.
**EDIT**
I've found out how to get the top and left containment but I can't figure out how to get the right and bottom.
```
var containmentArea = $("#container");
containment: [containmentArea.offset().left, containmentArea.offset().top, ???, ???]
```
I've tried **width** and **height** from `containmentArea[0].getBoundingClientRect()` but that doesn't seem to be the right move either.
---
[Here is a jsfiddle of some example code.](http://jsfiddle.net/z0gqy9w2/1/)
|
A version with resetting the coordinates in the drag event (since they were being recalculated already for the scale transformations), without using the containment:
```
var percent = 1, containmentArea = $("#container");
function dragFix(event, ui) {
var contWidth = containmentArea.width(), contHeight = containmentArea.height();
ui.position.left = Math.max(0, Math.min(ui.position.left / percent , contWidth - ui.helper.width()));
ui.position.top = Math.max(0, Math.min(ui.position.top / percent, contHeight- ui.helper.height()));
}
$(".draggable").draggable({
cursor: "move",
drag: dragFix,
});
//scaling here (where the percent variable is set too)
```
[Fiddle](http://jsfiddle.net/z0gqy9w2/7/)
In the example, the width and height of the container are obtained inside the drag event; you could also store them when scaling for better performance. By having them calculated inside the event, they still work after rescaling, although the `percent` variable still has to be set. To be truly generic, it could be obtained inside the event as well (and `ui.helper.parent()` could be used instead of a fixed container).
Since the offset inside the drag event is (0,0) relative to the container (at least it is for the current setup), took the liberty of simplifying `originalleft + (position - originalposition)/percent` to `position / percent`.
Start offset didn't seem to be necessary any more, so left it out in the fiddle, but it can be re-added if needed.
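The core of `dragFix` is just a scale-then-clamp of each coordinate. Pulled out as a pure function (an illustrative sketch with hypothetical names, not part of the fiddle), it is easy to reason about on its own:

```javascript
// Scale a raw coordinate back by the zoom factor, then clamp it so the
// helper stays inside the container on that axis.
function clampPosition(raw, percent, containerSize, helperSize) {
  return Math.max(0, Math.min(raw / percent, containerSize - helperSize));
}
```

For `percent = 1.5`, a 400px-wide container and a 100px helper, a raw position of 900 clamps to 300 (the right edge) and -30 clamps to 0.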
|
Remove Words Less Than 4 Characters from Pandas Series
I am trying to remove all words with fewer than 4 characters from each scalar value in a Pandas Series. What is the best way to do it? Here is my failed attempt:
```
df['text'] = df['text'].str.join(word for word in df['text'].str.split() if len(word)>3)
```
I receive the following error message:
>
> AttributeError: 'generator' object has no attribute 'join'
>
>
>
I based my attempt off of this post regarding the same in a string: [Remove small words using Python](https://stackoverflow.com/questions/12628958/remove-small-words-using-python)
Side note: if it's better to tokenize my words before removing those with fewer than 4 characters, please let me know.
EDIT: Each scalar value contains sentences, so I want to remove any words shorter than 4 characters within each value.
|
Using regex with [`.str.findall`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.findall.html) and [`.str.join`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.join.html) appears to be fastest:
```
df['text'].str.findall('\w{4,}').str.join(' ')
```
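A quick check of what that one-liner produces on a tiny series (`\w{4,}` matches runs of at least four word characters):

```python
import pandas as pd

s = pd.Series(["The quick brown fox", "jumped over the lazy dog"])
print(s.str.findall(r'\w{4,}').str.join(' ').tolist())
# ['quick brown', 'jumped over lazy']
```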
**Timings**
Using the following setup:
```
df = pd.DataFrame({'text':["The quick brown fox", "jumped over the lazy dog", "foo bar baz", 'words exceeding desired length']})
df = pd.concat([df]*10**4, ignore_index=True)
def pir2(df):
t = df.text.str.split(expand=True).stack()
return t.loc[t.str.len() >= 4].groupby(level=0).apply(' '.join)
```
I get the following timings:
```
%timeit df['text'].str.findall('\w{4,}').str.join(' ')
10 loops, best of 3: 44.8 ms per loop
%timeit df.text.apply(lambda i: ' '.join(filter(lambda j: len(j) > 3, i.split())))
10 loops, best of 3: 79.3 ms per loop
%timeit df['text'].str.split().map(lambda sl: " ".join(s for s in sl if len(s) > 3))
10 loops, best of 3: 87.2 ms per loop
%timeit pir2(df)
1 loop, best of 3: 2.87 s per loop
```
|
empty string is not a valid pathspec
After reading [Git - git-submodule Documentation](https://git-scm.com/docs/git-submodule), I decided to import two files from my previous project into the root directory of my new project, because I don't want to manually synchronize these two files.
However, an error occurs when I executed the following command:
```
$ git submodule add -b master -f --name latexci -- https://github.com/donizyo/LaTeX-Travis.git .
fatal: empty string is not a valid pathspec. please use . instead if you meant to match all paths
usage: git submodule--helper clone [--prefix=<path>] [--quiet] [--reference <repository>] [--name <name>] [--depth <depth>] --url <url> --path <path>
--prefix <path> alternative anchor for relative paths
--path <path> where the new submodule will be cloned to
--name <string> name of the new submodule
--url <string> url where to clone the submodule from
--reference <repo> reference repository
--dissociate use --reference only while cloning
--depth <string> depth for shallow clones
-q, --quiet Suppress output for cloning a submodule
--progress force cloning progress
```
I'm using *Git Bash 2.19.1.windows.1* to execute git commands.
|
The path (`.`) should be a non-existent folder, like `Latex-Travis`.
Instead, here '`.`' is interpreted as an empty path (since it is referring to your parent repo), and as illustrated in [`ruby-git/ruby-git` issue 345](https://github.com/ruby-git/ruby-git/issues/345#issuecomment-364917289):
>
> This really was deprecated [since Git 2.16](https://raw.githubusercontent.com/git/git/master/Documentation/RelNotes/2.16.0.txt):
>
>
> An empty string as a pathspec element that means "everything" i.e. 'git add ""', is now illegal. We started this by first deprecating and warning a pathspec that has such an element in 2.11 (Nov 2016).
>
>
>
I am not aware of a submodule being added directly as the parent repo folder itself: I have always added one as a parent repo *subfolder*.
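To see the difference, here is a self-contained sketch using a throwaway local repository as the submodule URL (the `protocol.file.allow` override is needed on recent Git versions for local-path submodules; the names are illustrative):

```shell
src=$(mktemp -d) && parent=$(mktemp -d)
git -C "$src" init -q
git -C "$src" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m init
cd "$parent" && git init -q
# the trailing path argument must name a new subfolder, not '.'
git -c protocol.file.allow=always submodule add "$src" LaTeX-Travis
```

After this, `.gitmodules` exists in the parent repo and the submodule content lives under `LaTeX-Travis/`.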
|
Get dynamic parameter referenced in Annotation by using Spring SpEL Expression
What I am trying to do is to have an Annotation which looks a lot like the @Cacheable Annotation Spring is providing.
Used on top of a method it looks like the following:
```
@CleverCache(key = "'orders_'.concat(#id)")
public Order getOrder(int id) {
```
When I do the same using `@Cacheable`, it is somehow able to interpret this SpEL expression and generate a key which has the value `orders_1234` (for id=1234).
My matching advice looks like the following:
```
@Around("CleverCachePointcut(cleverCache)")
public Object clevercache(ProceedingJoinPoint joinPoint, CleverCache cleverCache) throws Throwable {
String expression = cleverCache.key();
//FIXME: Please add working code here :D - extracting the key by interpreting the passed SpEL Expression in expression
```
I definitely get the expression there, but I haven't yet figured out how to correctly evaluate the SpEL expression.
Another supported syntax should be `key = "T(com.example.Utils).createCacheKey(#paramOfMethodByName)"`, where a static helper for creating a key is invoked.
Any idea how this could work? The code where I have the snippets from is available at: <https://github.com/eiselems/spring-redis-two-layer-cache/blob/master/src/main/java/com/marcuseisele/example/twolayercache/clevercache/ExampleAspect.java#L35>
Any help is really appreciated!
|
It is actually quite simple to evaluate SpEL, if you have the necessary context information. Please refer to [this article](https://www.baeldung.com/spring-expression-language#parsing) in order to find out how to programmatically parse SpEL.
As for that context information, you did not explain much about the types of methods you annotated by `@CleverCache`. The thing is, the pointcut intercepts all annotated methods and I do not know if each one's first parameter is an `int` ID. Depending on the answer to this question it is easier (just one simple case) or more difficult (you need if-else in order to just find the methods with an integer ID) to get the ID argument value from the intercepted method. Or maybe you have all sorts of expressions referencing multiple types and names of method parameters, instance variables or whatever. The solution's complexity is linked to the requirements' complexity. If you provide more info, maybe I can also provide more help.
---
**Update:** Having looked at your GitHub repo, I refactored your aspect for the simple case:
```
package com.marcuseisele.example.twolayercache.clevercache;
import lombok.extern.slf4j.Slf4j;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.expression.EvaluationContext;
import org.springframework.expression.Expression;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.expression.spel.support.StandardEvaluationContext;
import org.springframework.stereotype.Component;
import java.util.Map;
import java.util.concurrent.TimeUnit;
@Aspect
@Component
@Slf4j
public class ExampleAspect {
private static final ExpressionParser expressionParser = new SpelExpressionParser();
private Map<String, RedisTemplate> templates;
public ExampleAspect(Map<String, RedisTemplate> redisTemplateMap) {
this.templates = redisTemplateMap;
}
@Pointcut("@annotation(cleverCache)")
public void CleverCachePointcut(CleverCache cleverCache) {
}
@Around("CleverCachePointcut(cleverCache) && args(id)")
public Object clevercache(ProceedingJoinPoint joinPoint, CleverCache cleverCache, int id) throws Throwable {
long ttl = cleverCache.ttl();
long grace = cleverCache.graceTtl();
String key = cleverCache.key();
Expression expression = expressionParser.parseExpression(key);
EvaluationContext context = new StandardEvaluationContext();
context.setVariable("id", id);
String cacheKey = (String) expression.getValue(context);
System.out.println("### Cache key: " + cacheKey);
long start = System.currentTimeMillis();
RedisTemplate redisTemplate = templates.get(cleverCache.redisTemplate());
Object result;
if (redisTemplate.hasKey(cacheKey)) {
result = redisTemplate.opsForValue().get(cacheKey);
log.info("Reading from cache ..." + result.toString());
if (redisTemplate.getExpire(cacheKey, TimeUnit.MINUTES) < grace) {
log.info("Entry is in Grace period - trying to refresh it");
try {
result = joinPoint.proceed();
redisTemplate.opsForValue().set(cacheKey, result, grace+ttl, TimeUnit.MINUTES);
log.info("Fetch was successful - new value will be returned");
} catch (Exception e) {
log.warn("An error occured while trying to refresh the value - extending the old one", e);
//TODO: think about only adding 5 minutes on top of grace, or 50% of ttl on top of grace
//if protected by a circuit breaker we could go REALLY low here
redisTemplate.opsForValue().getOperations().expire(cacheKey, grace+ttl, TimeUnit.MINUTES);
}
}
} else {
result = joinPoint.proceed();
log.info("Giving from method ..." + result.toString());
redisTemplate.opsForValue().set(cacheKey, result, ttl + grace, TimeUnit.MINUTES);
}
long executionTime = System.currentTimeMillis() - start;
log.info("{} executed in {} ms", joinPoint.getSignature(), executionTime);
log.info("Result: {}", result);
return result;
}
}
```
The diff looks like this:
[](https://i.stack.imgur.com/qT1X9.png)
|
Counting regular working days in a given period of time
I need some help. I need to count regular working days in a given date period. For example, in our country we have 5 regular working days, Monday to Friday, so in code I need to exclude Saturdays and Sundays in my computations.
I need an algorithm something like this in C#:
```
int GetRegularWorkingDays(DateTime startDate, DateTime endDate)
{
int nonWorkingDays = ((endDate - startDate) % 7) * 2;
return (endDate - startDate) - nonWorkingDays;
}
```
I know my draft is way way off :(. Thanks in advance. =)
**PS: Guys please up-vote the best/fastest/most efficient answer below. Thanks =)**
|
Check out [this example on Code Project](http://www.codeproject.com/KB/cs/datetimelib.aspx) that uses a very efficient way that doesn't involve any looping ;)
It uses this algorithm:
>
> 1. Calculate the number of time span in terms of weeks. Call it, W.
> 2. Deduct the first week from the number of weeks. W= W-1
> 3. Multiply the number of weeks with the number of working days per
> week. Call it, D.
> 4. Find out the holidays during the specified time span. Call it, H.
> 5. Calculate the days in the first week. Call it, SD.
> 6. Calculate the days in the last week. Call it, ED.
> 7. Sum up all the days. BD = D + SD + ED - H.
>
>
>
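Since external links can rot, here is a sketch of steps 1-7 (in Python for brevity; holidays omitted, and the function name is illustrative). The idea is to count the full weeks with one multiplication and inspect only the few leftover days:

```python
from datetime import date, timedelta

def working_days(start, end):
    """Count Mon-Fri days in [start, end] without looping over every day."""
    if start > end:
        start, end = end, start
    total_days = (end - start).days + 1
    full_weeks, extra = divmod(total_days, 7)
    count = full_weeks * 5  # every full week contributes exactly 5 workdays
    # only the (at most 6) leftover days need individual inspection
    for i in range(extra):
        if (start + timedelta(days=full_weeks * 7 + i)).weekday() < 5:
            count += 1
    return count
```

Subtracting the holidays that fall on weekdays in the period (step 4/7) would complete the algorithm.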
|
Give user write access to folder
How do i give the user `'testuser'` write privileges on the folder: `/var/www/test/public_html`
|
The simplest way is to use `chown`:
```
sudo chown -R testuser:testuser /var/www/test/public_html
```
This will make the user & group `testuser` the owner of the file. **IF YOU ARE USING A DIFFERENT USERNAME** *(run `whoami` or look before the `@` of your terminal prompt, or be lazy and replace `testuser` with `$USER`)*, use that username instead. For instance user `Flora colossus` may have the username `groot`, in which case you would run `sudo chown -R groot:groot`... . If in doubt use the manual pages linked below.
or to use `chmod` (read and use **carefully**):
```
sudo chmod -R 777 /var/www/test/public_html
```
Which will allow read-write-execute permissions for the owner, group, and **any other users**. The execute bit is required for directories to work; files can get by with `666` permissions (strictly speaking *most* files shouldn't need the execute permission, but this is least likely to break stuff and does not require `find` etc). `chmod` is much more difficult to 'undo' if needed than the other options.
Here are manual pages on [`chown`](http://ss64.com/bash/chown.html) and [`chmod`](http://ss64.com/bash/chmod.html) (these can also be found by running `man chown` and `man chmod`.)
I should add you can give groups of users write access as well (examples [here](https://unix.stackexchange.com/a/116073/52956) and [here](https://askubuntu.com/a/51337/178596)).
Also beware giving global write access with the `chmod` command if you have not as trustworthy users/scripts running on the server etc - I recommend changing the group or the user permissions instead. If using `chmod` please read up on this and understand what it is doing.
|
How can I use ffmpeg to output a screenshot gallery / mosaic?
I am wondering how to use ffmpeg to create a video thumbnail gallery, something like this (taken from a VLC forum):
[](https://i.stack.imgur.com/vrfMg.jpg)
I know Totem can do this, but Totem does not support the video format that I am using.
[This article](http://digitalstudio7.blogspot.com/2013/11/create-video-thumbnails-linux.html) says that ffmpeg can do it.
|
## Create a mosaic of screenshots from a movie with `ffmpeg`

If you're using anything older than Ubuntu 15.04, then the so-called "`ffmpeg`" package from the repository refers to a fake version from the Libav fork which does not have the functionality you need, so you will have to [download a static build of `ffmpeg`](http://ffmpeg.org/download.html) or follow a [step-by-step guide to compile `ffmpeg`](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu).
### Example command using [`select`](http://ffmpeg.org/ffmpeg-filters.html#select_002c-aselect), [`scale`](http://ffmpeg.org/ffmpeg-filters.html#scale), and [`tile`](http://ffmpeg.org/ffmpeg-filters.html#tile) filters:
```
./ffmpeg -i input -vf "select=gt(scene\,0.4),scale=160:-1,tile" -frames:v 1 \
-qscale:v 3 preview.jpg
```
In this example the output will be `960x450`. You can add an additional scale filter if you want to change that, or you can change the size of each tile. Example for 600 pixel wide output:
```
./ffmpeg -i input -vf "select=gt(scene\,0.4),scale=160:-1,tile,scale=600:-1" \
-frames:v 1 -qscale:v 3 preview.jpg
```
You can even add text to the output. This example will add 24 pixels of black padding to the top of the image and add the text "Iron Man" in the center of the padding.
```
./ffmpeg -i input -vf "select=gt(scene\,0.4), \
scale=160:-1, \
tile, \
scale=600:-1, \
pad=iw:ih+24, \
drawtext=fontsize=30:box=1:fontfile=/usr/share/fonts/TTF/Vera.ttf:text='Iron Man':x=(w-text_w)/2:0"
-frames:v 1 -qscale:v 3 preview.jpg
```
### Other stuff
- You can control output quality with `-qscale:v`. Effective range is a linear scale of 2-31; where 2 is best quality.
- See the [`select` filter documentation](http://ffmpeg.org/ffmpeg-filters.html#select_002c-aselect) for more info.
|
Why Console in Eclipse or IntelliJ always null?
>
> **Possible Duplicate:**
>
> [System.Console() returns null](https://stackoverflow.com/questions/4203646/system-console-returns-null)
>
>
>
Code:
```
public class Demo {
public static void main(String[] args){
Console c = System.console();
if (c == null) {
System.err.println("No console.");
System.exit(1);
}else {
System.out.println("Console is.");
System.exit(0);
}
}
}
```
It always prints "No console." Why? How do I fix it? Thanks.
|
You don't have any console associated.
As per [javadoc](http://docs.oracle.com/javase/6/docs/api/java/lang/System.html#console%28%29)
>
> Returns the unique Console object associated with the current Java
> virtual machine, if any, otherwise null
>
>
>
EDIT:
From [Console javadoc](http://docs.oracle.com/javase/6/docs/api/java/io/Console.html).
Whether a virtual machine has a console is dependent upon the underlying platform and also upon the manner in which the virtual machine is invoked. If the virtual machine is started from an interactive command line without redirecting the standard input and output streams then its console will exist and will typically be connected to the keyboard and display from which the virtual machine was launched. If the virtual machine is started automatically, for example by a background job scheduler, then it will typically not have a console.
|
Incorrect time zone conversion with datetime in python
>
> **Possible Duplicate:**
>
> [Python datetime object show wrong timezone offset](https://stackoverflow.com/questions/6410971/python-datetime-object-show-wrong-timezone-offset)
>
>
>
I have a problem with conversion between timezones in Python, using the `pytz` library (last version 2012h). Here in Brussels we have normally UTC+1 hour in winter and UTC+2 hours in summer.
I have the following code :
```
from datetime import datetime
import pytz
brussels_tz = pytz.timezone('Europe/Brussels')
utc_tz = pytz.utc
def main():
intermdate = datetime(2012, 07, 15, 8, 0, 0, 0, brussels_tz)
utcdate = intermdate.astimezone(utc_tz)
print "Brussels time is %s" % intermdate
print "UTC time is %s" % utcdate
if __name__ == '__main__':
main()
```
The problem is that I get the following result :
```
Brussels time is 2012-07-15 08:00:00+00:00
UTC time is 2012-07-15 08:00:00+00:00
```
So no difference. In my opinion the result should be (in summer):
```
Brussels time is 2012-07-15 08:00:00+02:00
UTC time is 2012-07-15 06:00:00+00:00
```
If I use the timezone Europe/Paris (normally the same time as in Brussels) I get even more strange results:
```
Paris time is 2012-07-15 08:00:00+00:09
UTC time is 2012-07-15 07:51:00+00:00
```
A 9-minute difference!?
Could anybody help me?
|
You need to use the [`.localize()` method](http://pytz.sourceforge.net/#localized-times-and-date-arithmetic) to move a datetime into a timezone:
```
intermdate = brussels_tz.localize(datetime(2012, 07, 15, 8, 0, 0, 0))
utcdate = intermdate.astimezone(utc_tz)
```
The output is then:
```
Brussels time is 2012-07-15 08:00:00+02:00
UTC time is 2012-07-15 06:00:00+00:00
```
See the [`pytz` documentation](http://pytz.sourceforge.net/#localized-times-and-date-arithmetic):
>
> Unfortunately using the tzinfo argument of the standard datetime constructors ‘’does not work’’ with pytz for many timezones.
>
>
>
> ```
> >>> datetime(2002, 10, 27, 12, 0, 0, tzinfo=amsterdam).strftime(fmt)
> '2002-10-27 12:00:00 AMT+0020'
>
> ```
>
>
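As an aside (my addition, not part of the original answer): on Python 3.9+, the standard-library `zoneinfo` module avoids this pitfall entirely, since its tzinfo objects are safe to pass straight to the `datetime` constructor:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

brussels = ZoneInfo("Europe/Brussels")
# Safe with zoneinfo: tzinfo in the constructor yields the correct DST offset
local = datetime(2012, 7, 15, 8, 0, tzinfo=brussels)
utc = local.astimezone(timezone.utc)
print(local)  # 2012-07-15 08:00:00+02:00
print(utc)    # 2012-07-15 06:00:00+00:00
```

No `.localize()` step is needed, because `ZoneInfo` objects compute the offset lazily per datetime instead of baking a fixed offset into the tzinfo the way `pytz` does.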
|
Git and changelog guideline
I'm a *noob* with git and I would like to know the best practice for creating tags and generating/maintaining a changelog.
Until now, I have always created a readme.md and manually listed the most important changes in a "changelog" section.
For example:
>
> version 1.0.1
>
>
> - changed layout in home
> - added slideshow in gallery
> - etc..
>
>
>
How can I create a changelog (maybe in markdown format) with these characteristics?
- title of changes based on personalized title/tags title/date
- changes based on tags or commits
I do not have a clear idea yet, so suggestions are appreciated.
Thanks in advance :)
|
There are no common guidelines yet, but a lot of good rules are explained on **[keep a changelog](http://keepachangelog.com)**.
And as they mention:
>
> Don’t let your friends dump git logs into changelogs.
>
>
>
And I fully agree with it.
That's why I created [GitHub Changelog Generator](https://github.com/skywinder/github-changelog-generator), which generates a changelog according to all these guidelines.
Actually it works with **GitHub only**, but it presents a good example of how a changelog should look.
It automatically generates a changelog from your tags, issues, labels and pull requests.
For example: This changelog was generated by this script: **[CHANGELOG.md](https://github.com/skywinder/ActionSheetPicker-3.0/blob/master/CHANGELOG.md)**
>
> # Changelog
>
>
> ## [1.2.5](https://github.com/skywinder/github-changelog-generator/tree/1.2.5) (2015-01-15)
>
>
> [Full Changelog](https://github.com/skywinder/github-changelog-generator/compare/1.2.4...1.2.5)
>
>
> **Implemented enhancements:**
>
>
> - Use milestone to specify in which version bug was fixed [#22](https://github.com/skywinder/github-changelog-generator/issues/22)
>
>
> **Fixed bugs:**
>
>
> - Error when trying to generate log for repo without tags [#32](https://github.com/skywinder/github-changelog-generator/issues/32)
>
>
> **Merged pull requests:**
>
>
> - PrettyPrint class is included using lowercase 'pp' [#43](https://github.com/skywinder/github-changelog-generator/pull/43) ([schwing](https://github.com/schwing))
> - support enterprise github via command line options [#42](https://github.com/skywinder/github-changelog-generator/pull/42) ([glenlovett](https://github.com/glenlovett))
>
>
>
|
Writing composable asynchronous monads from ffi
Right now I have an async function that works something like this:
```
foo = do
ayncGetNumber "/numberLocation" \a ->
(trace <<< show) a
```
but this callback style is not composable (to my understanding). I would like it to work like this:
```
foo = do
a <- ayncGetNumber "/numberLocation"
(trace <<< show) a
```
or
```
foo = ayncGetNumber "/numberLocation" >>= show >>> trace
```
But I can't figure out how to escape from the callback and make it composable.
|
You might want to consider using the continuation monad transformer `ContT` from the `purescript-transformers` package.
The definition of `ContT` in that package is given as
```
newtype ContT r m a = ContT ((a -> m r) -> m r)
```
If you set `r` to be `Unit`, and `m` to be `Eff eff`, you get something which looks a bit like the type of `asyncGetNumber` inside the constructor:
```
(a -> Eff eff Unit) -> Eff eff Unit
```
Now, making sure `eff` contains the effect you need, you should be able to wrap up your
`asyncGetNumber` function for use in `ContT`:
```
asyncGetNumberCont :: ContT Unit (Eff SomeEffects) Number
asyncGetNumberCont = ContT $ \callback ->
asyncGetNumber "/numberLocation" callback
```
or just
```
asyncGetNumberCont = ContT $ asyncGetNumber "/numberLocation"
```
Note that the argument to `ContT` takes the callback as an argument.
You can now compose asynchronous computations in series using do notation:
```
do n <- asyncGetNumberCont
m <- asyncGetNumberCont
return (n + m)
```
You can even create an applicative functor which wraps `ContT` and supports parallel composition of asynchronous computations. You might also be interested in the `purescript-node-thunk` package which provides this sort of functionality out of the box.
*Edit*: Another option is to create your own `ContT`-like type using a foreign type, distinct from `Eff`. You might do this as follows:
```
-- Ignore effects to keep things simple
-- The runtime representation of 'Async a' is a function which takes a callback,
-- performs some side effects an returns.
foreign import data Async :: * -> *
-- Make an async computation from a function taking a callback
foreign import makeAsync
"function makeAsync(f) {\
\ return function(k) {\
\ f(function(a) {\
\ return function() {\
\ k(a)();\
\ };\
\ })();\
\ };\
\}" :: forall a eff. ((a -> Eff eff Unit) -> Eff eff Unit) -> Async a
-- Now we need to define instances for Async, which we can do using FFI
-- calls, for example:
foreign import fmapAsync
"function fmapAsync(f) {\
\ return function (comp) {\
\ return function (k) {\
\ comp(function(a) {\
\ k(f(a));\
\ });\
\ };\
\ };\
\}" :: forall a b. (a -> b) -> Async a -> Async b
instance functorAsync :: Functor Async where
(<$>) = fmapAsync
```
and so on. This gets cumbersome quickly since you're essentially repeating the implementation of `ContT`.
In addition, without rewrite rule support in the compiler, there is no way to get the inlined binds like you would get with `Eff`.
|
Ionicons: How to add custom icons?
How can I add custom icons to Ionicons?
I could not yet find a step by step instruction what to do.
There are many really basic icons (like the paragraph icon, for example) which are not provided by Ionicons, so I somehow have to add them on my own.
|
Download the fonts to include in our project. Go ahead and download the latest Font Awesome release and extract them. Copy the following font files into your www/fonts directory:
1. fontawesome-webfont.ttf
2. fontawesome-webfont.eot
3. fontawesome-webfont.svg
4. fontawesome-webfont.woff
5. fontawesome-webfont.otf
Next we need to edit www/css/style.css to include this new font set for use in our project. Add the following lines:
```
@font-face {
font-family: 'fontawesome';
src:url('../fonts/fontawesome-webfont.eot');
src:url('../fonts/fontawesome-webfont.eot') format('embedded-opentype'),
url('../fonts/fontawesome-webfont.woff') format('woff'),
url('../fonts/fontawesome-webfont.ttf') format('truetype'),
url('../fonts/fontawesome-webfont.svg') format('svg');
font-weight: normal;
font-style: normal;
}
```
Every time we want to use the Font Awesome set we just set our font family as fontawesome.
Get more custom icons at <https://icomoon.io/>.
|
The Parent Variable is not updating in the child Component in Angular
I have a variable in the parent component like this:
ParentComponent
```
export class ParentComponent {
variable = {};
varExample = false;
@ViewChild('child') child: ChildComponent;
someFunction () {
this.variable['id'] = {};
for (let i = 0; i < 10; i++) {
this.variable['id'][i] = i*2;
}
this.varExample = true;
}
otherFunction () { // called when i click a button
this.someFunction ();
console.log(this.child.variable); // {}
console.log(this.child.varExample); // true
this.child.initVars();
}
}
```
Parent HTML
```
<app-child #child [variable]="variable" [varExample]="varExample"></app-child>
```
ChildComponent
```
export class ChildComponent {
@Input() variable: any;
@Input() varExample: boolean;
initVars() {
console.log(this.variable); // {}
console.log(this.varExample); // true
}
}
```
This is just an example of my implementation and yes I have all imports.
In those `console.log(this.variable)` calls I get an empty object (`{}`), but `varExample` still works fine.
Why is the child's `variable` always empty? Doesn't Angular detect changes in this type of object (`{}`)?
Can someone help me?
|
1. The answer is already in the question. For `varExample` you reassign its value (`this.varExample = ...`). But for `this.variable` only its contents are changed. When the underlying reference doesn't change, Angular won't detect the changes.
You could either use the solution from @user2846469 or use spread syntax to adjust the values and reference inline.
**Parent**
```
someFunction() {
this.variable["id"] = {};
for (let i = 0; i < 10; i++) {
this.variable = {
...this.variable,
id: {
...this.variable["id"],
[i]: i * 2
}
};
}
this.varExample = true;
}
```
2. Using `@ViewChild` to trigger another component's function is inelegant. Instead you could use `OnChanges` hook in the child component. It will be triggered whenever a change is detected in any of the `@Input` variables.
**Child**
```
import { Component, Input, OnChanges, SimpleChanges } from "@angular/core";
export class ChildComponent implements OnChanges {
@Input() variable: any;
@Input() varExample: boolean;
ngOnChanges(changes: SimpleChanges) {
if (changes.variable && changes.variable.currentValue) {
console.log(changes.variable.currentValue);
}
}
}
```
Working example: [Stackblitz](https://stackblitz.com/edit/angular-ivy-7zdkwt?file=src/app/child/child.component.ts)
|
Spark streaming custom metrics
I'm working on a Spark Streaming program which retrieves a Kafka stream, does very basic transformation on the stream and then inserts the data to a DB (voltdb if it's relevant).
I'm trying to measure the rate at which I insert rows into the DB. I think [metrics](http://metrics.dropwizard.io/3.1.0/manual/) can be useful (using JMX). However, I can't find how to add custom metrics to Spark. I've looked at Spark's source code and also found [this thread](http://apache-spark-developers-list.1001551.n3.nabble.com/Registering-custom-metrics-td9030.html#a9968), but it doesn't work for me. I also enabled the JMX sink in the conf.metrics file. The problem is that I don't see my custom metrics with JConsole.
Could someone explain how to add custom metrics (preferably via JMX) to spark streaming? Or alternatively how to measure my insertion rate to my DB (specifically VoltDB)?
I'm using spark with Java 8.
|
Ok after digging through the [source code](https://github.com/apache/spark) I found how to add my own custom metrics. It requires 3 things:
1. Create my own custom [source](https://github.com/apache/spark/blob/3c0156899dc1ec1f7dfe6d7c8af47fa6dc7d00bf/core/src/main/scala/org/apache/spark/metrics/source/Source.scala). Sort of like [this](https://github.com/apache/spark/blob/3c0156899dc1ec1f7dfe6d7c8af47fa6dc7d00bf/streaming/src/main/scala/org/apache/spark/streaming/StreamingSource.scala)
2. Enable the Jmx sink in the spark metrics.properties file. The specific line I used is: `*.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink` which enable JmxSink for all instances
3. Register my custom source in the SparkEnv metrics system. An example of how to do can be seen [here](http://mail-archives.us.apache.org/mod_mbox/spark-user/201501.mbox/%3CCAE50=dq+6tdx9VNVM3ctBMWPLDPbUAacO3aN3L8x38zg=xb6VQ@mail.gmail.com%3E) - I actually viewed this link before but missed the registration part which prevented me from actually seeing my custom metrics in the JVisualVM
I'm still struggling with how to actually count the number of insertions into VoltDB because the code runs on the executors but that's a subject for a different topic :)
I hope this will help others
|
Posting to Bottle server. How do I get the request data?
I'm having trouble getting the request data on the server side. I'm a newbie when it comes to web code, so bear with me. Here's my (simplified) code:
**html**
```
<form id="form" action="http://localhost:8080/report" enctype="multipart/form-data" method="POST">
<div name="navigation-bar" id="navigation-bar">
<input type="submit" value="Post" onclick="post()" />
</div>
</form>
```
**js**
```
function post() {
$.post("http://localhost:8080/report",
{"id": 1},
function(data) {
alert(data);
},
"json");
}
```
**py**
```
@route('/report', method='POST')
def report():
payload = request.json
return payload
```
`payload` is always `None`. What is the correct way to get the data when sending with jQuery?
|
Setting the data type to `"json"` only tells jQuery how to interpret the *response*; it doesn't change the format of the request, which is still sent as `application/x-www-form-urlencoded`.
But `bottle`'s [json handling](http://bottlepy.org/docs/stable/api.html#bottle.BaseRequest.json) only works if the request is sent using the correct content type of `application/json`, so if you want to use that, you need to manually set the content type for the request and encode the request data to json:
```
function post() {
$.ajax({
url: "/report",
type: "POST",
data: JSON.stringify({"id": 1}),
contentType: "application/json",
dataType: "json",
success: function(data) {
alert(data);
}
});
}
```
That, however, only works for submission using JavaScript; the normal form submission won't work. That means you should change your submit button to a normal button, or make sure your onclick returns false.
|
TypeScript constructor syntax
In TypeScript, does anyone know what this means?
```
constructor(obj?:any){
this.id = obj && obj.id || null;
}
```
Here is my guessing:
It seems like: if (obj is not null) and (obj.id has a value), then assign obj.id; otherwise, assign null to this.id??
I looked in around 15-20 places (sites/documents), but I still cannot find the answer. I am sorry, I have just started learning TypeScript. Does anyone know any good references? Thank you for your help in advance!
|
Your intuition is good! Here's the summary: operators like `if`, `&&`, and `||` applied to non-Boolean types in JavaScript cause those types to be evaluated as Boolean. Most commonly, this is used to help figure out if something is `null` (false) or an object (true).
Note that `truthyValue && obj` will give you `obj`, not `true`, so that syntax is useful for null-coalescing. eg: `name = mightBeNull && mightBeNull.name`.
A more verbose way to write this constructor would be:
```
if (obj) {
this.id = obj.id;
}
else {
this.id = null;
}
```
It's something of a code style issue - it's certainly less clear what the intent is, but when you're among many experienced JavaScript programmers, it may make sense to favor short code with patterns people are used to, rather than fully self-documenting routine patterns.
(Finally, note that the above explanation applies equally to JavaScript and TypeScript)
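As a hedged aside (not from the original answer), the same short-circuit pattern exists in Python: `and`/`or` return one of their operands rather than a Boolean, which enables the same null-coalescing trick, along with the same pitfall when a value is falsy but valid (e.g. `0`):

```python
def make_id(obj=None):
    # Mirrors `this.id = obj && obj.id || null` from the question
    return obj and obj.get("id") or None

print(make_id({"id": 7}))  # 7    (obj truthy, id truthy)
print(make_id(None))       # None (obj falsy, `and` short-circuits)
print(make_id({"id": 0}))  # None (pitfall: 0 is falsy, so the `or` kicks in)
```

The last case is exactly why modern JavaScript added the nullish operators `??` and `?.`, which only treat `null`/`undefined` as "missing".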
|
Add widgets on the fly in pyside
Very new to PySide, so this may be a stupid question. I want to create a PySide UI which has a variable number of items in it, has the possibility to add items while it is running, and, to make it even more complex, also needs a scroll bar to fit it all on screen!
This is what I've got right now:
```
import sys
from PySide import QtGui
from PySide import QtCore
class example(QtGui.QWidget):
def __init__(self, parent= None):
super(example, self).__init__()
grid = QtGui.QGridLayout()
grid.setSpacing(10)
self.widget = QtGui.QWidget()
self.layout = QtGui.QGridLayout(self)
for i in range(5):
btn = QtGui.QPushButton("test"+str(i))
self.layout.addWidget(btn,i,0)
btn.clicked.connect(self.buttonClicked)
self.count = i
self.widget.setLayout(self.layout)
self.scroll = QtGui.QScrollArea()
self.scroll.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn)
self.scroll.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
self.scroll.setWidget(self.widget)
grid.addWidget(self.scroll,3,0)
self.setLayout(grid)
def buttonClicked(self):
title = QtGui.QLabel('Title'+str(self.count))
self.layout.addWidget(title,self.count + 1,0)
self.count += 1
self.widget.addLayout(self.layout,0)
self.scroll.addWidget(self.widget,0)
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
dialog = example()
dialog.show()
sys.exit(app.exec_())
```
But somehow the layout gets messed up when adding items through one of the buttons.
Does anybody have an idea how to fix this?
Thanx!
|
You're not far off. The key piece you're missing is [QScrollArea.setWidgetResizable](https://qt-project.org/doc/qt-4.8/qscrollarea.html#widgetResizable-prop), which will ensure the scrollarea automatically resizes its viewport to fit the contents.
I've made some other adjustments to your example and added comments where appropriate:
```
class example(QtGui.QWidget):
def __init__(self, parent= None):
super(example, self).__init__()
grid = QtGui.QGridLayout()
grid.setSpacing(10)
self.widget = QtGui.QWidget()
# set the widget as parent of its own layout
self.layout = QtGui.QGridLayout(self.widget)
for i in range(5):
btn = QtGui.QPushButton("test"+str(i))
self.layout.addWidget(btn,i,0)
btn.clicked.connect(self.buttonClicked)
# following lines are redundant
# self.count = i
# self.widget.setLayout(self.layout)
self.scroll = QtGui.QScrollArea()
# need this so that scrollarea handles resizing
self.scroll.setWidgetResizable(True)
# these two lines may not be needed now
self.scroll.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn)
self.scroll.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
self.scroll.setWidget(self.widget)
grid.addWidget(self.scroll, 3, 0)
self.setLayout(grid)
def buttonClicked(self):
title = QtGui.QLabel('Title' + str(self.layout.count()))
self.layout.addWidget(title)
# following lines are redundant
# self.layout.addWidget(title, self.count + 1, 0)
# self.count += 1
# self.widget.addLayout(self.layout,0)
# self.scroll.addWidget(self.widget,0)
```
|
Pathfinding Algorithm For Pacman
I wanted to implement the game Pacman. For the AI, I was thinking of using the A\* algorithm, having seen it on numerous forums. However, I implemented the Breadth First Search for some simple pathfinding (going from point a to point b with certain obstacles in between) and found it gave the optimum path always. I guess it might be because in a game like pacman which uses simple pathfinding, there is no notion of costs in the graph. So, will it be OK if I use BFS instead of A\* for pathfinding in Pacman?
|
For path-finding, note the following
- BFS will look at a lot more nodes than A\* will, which makes it much slower
- A\* will come up with the same answer as BFS
- A\* is really easy to implement
- Use [Manhattan Distance](http://en.wikipedia.org/wiki/Taxicab_geometry) as your heuristic - this is insanely easy to implement, and leads to very efficient searches
- Look at <http://theory.stanford.edu/~amitp/GameProgramming/Heuristics.html> for more information (the entire series is really interesting)
If you're talking about the ghost AI, check out the page Chad mentioned: [The Pac-Man Dossier](https://www.gamasutra.com/view/feature/3938/the_pacman_dossier.php?print=1) - the ghosts actually just use the euclidean distance when determining how to make it to their target tiles, which makes them very bad at finding Pac Man in some cases.
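To make the comparison concrete, here is a minimal A* sketch in Python using the Manhattan-distance heuristic on a 4-connected grid (the function name and grid encoding are illustrative, not from the answer):

```python
import heapq

def astar(grid, start, goal):
    """A* over a grid of 0 (free) / 1 (wall) with 4-way movement.
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance to the goal (admissible on a 4-way grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, cell)
    came_from = {}
    g_score = {start: 0}
    closed = set()
    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]                # walk parent links back to the start
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cur in closed:
            continue
        closed.add(cur)
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_score.get(nb, float("inf")):
                    g_score[nb] = ng
                    came_from[nb] = cur
                    heapq.heappush(open_heap, (ng + h(nb), ng, nb))
    return None
```

Replacing the heuristic with a constant 0 turns this into uniform-cost search (equivalent to BFS on a unit-cost grid), which illustrates the point above: A* returns the same optimal path while typically expanding far fewer nodes.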
|
Does IE11 auto update to Edge?
I am wondering about future browser support...
Will IE11 auto update to Microsoft Edge, or will we be stuck with a huge chunk of IE11 users in the future (much like IE6)?
|
Internet Explorer and Microsoft Edge are two separate browsers.
Microsoft Edge is prominently featured on the Start screen and app list on Windows 10, so most consumers will discover Microsoft Edge and be "upgraded" to it in that sense when they upgrade to Windows 10 or buy a new device with Windows 10 preinstalled. IE11 remains available for enterprise users and consumers who need compatibility with legacy sites and technologies like ActiveX, but [it won't be updated with any new web platform features](https://stackoverflow.com/questions/35184048/what-is-after-internet-explorer-11-on-windows-7-how-well-will-es2016-be-support/35184087#35184087).
You only really have to support IE11 if a significant portion of your userbase consists of Windows 7 and 8.1 users (as Microsoft Edge is only available on Windows 10), or if you are maintaining a site that was built using older technologies.
See the [FAQ](https://dev.windows.com/en-us/microsoft-edge/platform/faq) for more details.
|
Order of evaluation and undefined behaviour
Speaking in the context of the C++11 standard (which no longer has a concept of sequence points, as you know) I want to understand how two simplest examples are defined.
```
int i = 0;
i = i++; // #0
i = ++i; // #1
```
There are two topics on SO which explain those examples within the C++11 context. [Here](https://stackoverflow.com/questions/4176328/undefined-behavior-and-sequence-points/4183735#4183735) it was said that `#0` invokes UB and `#1` is well-defined. [Here](https://stackoverflow.com/questions/3852768/unsequenced-value-computations-a-k-a-sequence-points/3852774#3852774) it was said that both examples are undefined. This ambiguity confuses me much. I've read this well-structured [reference](http://en.cppreference.com/w/cpp/language/eval_order) three times already but the topic seems to be way too complicated for me.
.
Let's analyze the example `#0`: `i = i++;`.
Corresponding quotes are:
>
> - The value computation of the built-in postincrement and postdecrement
> operators is sequenced before its side-effect.
> - The side effect (modification of the left argument) of the built-in
> assignment operator and of all built-in compound assignment operators
> is sequenced after the value computation (but not the side effects) of
> both left and right arguments, and is sequenced before the value
> computation of the assignment expression (that is, before returning
> the reference to the modified object)
> - If a side effect on a scalar object is unsequenced relative to another
> side effect on the same scalar object, the behavior is undefined.
>
>
>
As I get it, the side effect of the assignment operator is not sequenced with side effects of it's left and right arguments. Thus the side effect of the assignment operator is not sequenced with the side effects of `i++`. So `#0` invokes an UB.
.
Let's analyze the example `#1`: `i = ++i;`.
Corresponding quotes are:
>
> - The side effect of the built-in preincrement and predecrement
> operators is sequenced before its value computation (implicit rule due
> to definition as compound assignment)
> - The side effect (modification of the left argument) of the built-in
> assignment operator and of all built-in compound assignment operators
> is sequenced after the value computation (but not the side effects) of
> both left and right arguments, and is sequenced before the value
> computation of the assignment expression (that is, before returning
> the reference to the modified object)
> - If a side effect on a scalar object is unsequenced relative to another
> side effect on the same scalar object, the behavior is undefined.
>
>
>
I cannot see how this example is different from `#0`. This seems to be UB to me for the very same reason as `#0`: the side effect of the assignment is not sequenced with the side effect of `++i`. It seems to be UB. The topic linked above says it is well-defined. Why?
.
***Question***: how can I apply the quoted rules to determine whether the examples are UB? As simple an explanation as possible would be greatly appreciated. Thank you!
|
Since your quotes are not directly from the standard, I will try to give a detailed answer quoting the relevant parts of the standard. The definitions of "side effects" and "evaluation" is found in paragraph 1.9/12:
>
> Accessing an object designated by a volatile glvalue (3.10), modifying an object, calling a library I/O function, or calling a function that does any of those operations are all *side effects*, which are changes in the state of the execution environment. *Evaluation* of an expression (or a sub-expression) in general includes both value computations (including determining the identity of an object for glvalue evaluation and fetching a value previously assigned to an object for prvalue evaluation) and initiation of side effects.
>
>
>
The next relevant part is paragraph 1.9/15:
>
> Except where noted, evaluations of operands of individual operators and of subexpressions of individual expressions are unsequenced. [...] The value computations of the operands of an operator are sequenced before the value computation of the result of the operator. If a side effect on a scalar object is unsequenced relative to either another side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined.
>
>
>
Now let's see, how to apply this to the two examples.
```
i = i++;
```
This is the postfix form of increment and you find its definition in paragraph 5.2.6. The most relevant sentence reads:
>
> The value computation of the ++ expression is sequenced before the modification
> of the operand object.
>
>
>
For the assignment expression see paragraph 5.17. The relevant part states:
>
> In all cases, the assignment is sequenced after the value computation of the right and left operands, and before the value computation of the assignment expression.
>
>
>
Using all the information from above, the evaluation of the whole expression is (this order is not guaranteed by the standard!):
- value computation of `i++` (right hand side)
- value computation of `i` (left hand side)
- modification of `i` (side effect of `++`)
- modification of `i` (side effect of `=`)
All the standard guarantees is that the value computations of the two operands is sequenced before the value computation of the assignment expression. But the value computation of the right hand side is only "reading the value of `i`" and **not** modifying `i`, the two modifications (side effects) are not sequenced with respect to each other and we get undefined behavior.
What about the second example?
```
i = ++i;
```
The situation is quite different here. You find the definition of prefix increment in paragraph 5.3.2. The relevant part is:
>
> If x is not of type bool, the expression ++x is equivalent to x+=1.
>
>
>
Substituting that, our expression is equivalent to
```
i = (i += 1)
```
Looking up the compound assignment operator `+=` in 5.17/7 we get that `i += 1` is equivalent to `i = i + 1` except that `i` is only evaluated once. Hence, the expression in question finally becomes
>
> i = ( i = (i + 1))
>
>
>
But we already know from above that the value computation of the `=` is sequenced after the value computation of the operands and the side effects are sequenced before the value computations of `=`. So we get a well-defined order of evaluation:
1. compute value of `i + 1` (and `i` - left hand side of inner expression)(#1)
2. initiate side effect of inner `=`, i.e. modify "inner" `i`
3. compute value of `(i = i + 1)`, which is the "new" value of `i`
4. initiate side effect of outer `=`, i.e. modify "outer" `i`
5. compute value of full expression.
---
(#1): Here, `i` is only evaluated once, since `i += 1` is equivalent to `i = i + 1` except that `i` is only evaluated once (5.17/7).
|
Docker-Compose: Only one usage of each socket address (protocol/network address/port) is normally permitted
I'm trying to run docker-compose.yml from here: <https://github.com/Project-Books/book-project#running-the-app>.
I tried to run a docker-compose file in Intellij IDEA Community Edition - using **Docker plugin 202.7319.5**
Here's the docker-compose.yaml file used: <https://github.com/Project-Books/book-project/blob/master/docker-compose.yml>
Here's the details about Docker Desktop installed:
```
OS: Windows
Version: 2.3.0.4(46911)
Channel: Stable
Engine: 19.03.12
Compose: 1.26.2
```
Output I'm getting in the console:
```
ERROR: for book-project_mysql_1 Cannot start service mysql: Ports are not available: listen tcp 0.0.0.0:3306: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
ERROR: for mysql Cannot start service mysql: Ports are not available: listen tcp 0.0.0.0:3306: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
Encountered errors while bringing up the project.
```
|
Port 3306 is already in use by another application (often a locally installed MySQL server). You can either stop that application or map MySQL to another host port.
example docker-compose:
```
version: '3'
services:
mysql:
image: mysql:latest
hostname: mysql
restart: always
environment:
MYSQL_ROOT_PASSWORD: rootpassword
MYSQL_DATABASE: bookproject
MYSQL_USER: dbuser
MYSQL_PASSWORD: dbpassword
ports:
- "3307:3306"
volumes:
- db_data:/var/lib/mysql
- ./src/main/resources/db/init.sql:/data/application/init.sql
command: --init-file /data/application/init.sql
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
links:
- mysql:db
ports:
- "8081:80"
bookapp:
build: ./
restart: on-failure
ports:
- "8080:8080"
environment:
      - WAIT_HOSTS=mysql:3306 # inside the compose network the container still listens on 3306; only the host port was remapped to 3307
- WAIT_HOSTS_TIMEOUT=300
- WAIT_SLEEP_INTERVAL=30
- WAIT_HOST_CONNECT_TIMEOUT=30
#- DEFAULT_PATH=<Target path in windows>
depends_on:
- mysql
- phpmyadmin
volumes:
db_data:
```
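Before remapping, it can help to find out which process is already holding 3306. A sketch that tries the common tools (`ss` on Linux, `lsof` on macOS; on Windows you would use `netstat -ano | findstr :3306` instead):

```shell
port=3306
if command -v ss >/dev/null 2>&1 && ss -ltn | grep -q ":$port "; then
  status="port $port is already in use"
elif command -v lsof >/dev/null 2>&1 && lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1; then
  status="port $port is already in use"
else
  status="port $port looks free here (on Windows try: netstat -ano | findstr :$port)"
fi
echo "$status"
```

Once you know the owning PID, stop that service or keep the port remapped as shown in the compose file.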
|
Async Task.Run Not Working
I simply wrote below codes and I expect to have 3 text files with async feature in C# but I do not see anything:
```
private async void Form1_Load(object sender, EventArgs e)
{
Task<int> file1 = test();
Task<int> file2 = test();
Task<int> file3 = test();
int output1 = await file1;
int output2 = await file2;
int output3 = await file3;
}
async Task<int> test()
{
return await Task.Run(() =>
{
string content = "";
for (int i = 0; i < 100000; i++)
{
content += i.ToString();
}
System.IO.File.WriteAllText(string.Format(@"c:\test\{0}.txt", new Random().Next(1, 5000)), content);
return 1;
});
}
```
|
There are a few potential issues:
1. Does `c:\test\` exist? If not, you'll get an error.
2. As written, your [`Random`](http://msdn.microsoft.com/en-us/library/system.random.aspx) objects might generate the same numbers, since the current system time is used as the seed, and you are doing these at about the same time. You can fix this by making them share a `static Random` instance. Edit: [but you need to synchronize the access to it somehow](http://blogs.msdn.com/b/pfxteam/archive/2009/02/19/9434171.aspx). I chose a simple `lock` on the `Random` instance, which isn't the fastest, but works for this example.
3. Building a long `string` that way is very inefficient (e.g. about 43 seconds in Debug mode for me, to do it once). Your tasks might be working just fine, and you don't notice that it's actually doing anything because it takes so long to finish. It can be made much faster by using the [`StringBuilder`](http://msdn.microsoft.com/en-us/library/system.text.stringbuilder.aspx) class (e.g. about 20 ms).
4. (this won't affect whether or not it works, but is more of a stylistic thing) you don't need to use the `async` and `await` keywords in your `test()` method as written. They are redundant, since `Task.Run` already returns a `Task<int>`.
This works for me:
```
private async void Form1_Load(object sender, EventArgs e)
{
Task<int> file1 = test();
Task<int> file2 = test();
Task<int> file3 = test();
int output1 = await file1;
int output2 = await file2;
int output3 = await file3;
}
static Random r = new Random();
Task<int> test()
{
return Task.Run(() =>
{
var content = new StringBuilder();
for (int i = 0; i < 100000; i++)
{
content.Append(i);
}
int n;
lock (r) n = r.Next(1, 5000);
System.IO.File.WriteAllText(string.Format(@"c:\test\{0}.txt", n), content.ToString());
return 1;
});
}
```
|
Using Multicore in R for a pentium 4 HT machine
I am using a Pentium 4 HT machine at the office for running R. Some of the code requires the plyr package, and I usually need to wait 6-7 minutes for the script to finish running, while my processor is only half utilized.
I have heard of using the multicore package in R to better utilize the multicore processor. Is my case suitable for this?
Thanks!
|
There are a bunch of packages out there to do multicoring. See `doMPI`, `doSNOW`, `doMC` and `doSMP`. They are all front ends for other programs that run parallelization (like MPI/OpenMPI, the multicore package...). On Windows, I've had good experience with [`doSMP`](http://www.r-statistics.com/2010/04/parallel-multicore-processing-with-r-on-windows/), while on Linux doMC looks promising (with some support for Windows emerging, but some people have doubts about emulation of "fork").
That being said, I concur with Vince's comments about the need to write a `plyr`-like function to use the power of parallel computing. You could write your own function that emulates `plyr` (or edit `plyr`) that uses `%dopar%` (see the `foreach` package as well).
Two "CPU usage history" windows could mean two cores or multi-threading. For instance, I have an i7-920 processor with 4 cores, but I see 8 history windows, because each core is multi-threaded.
Excuse my vocabulary and/or logic, but I would be that fish in Vince's post when it comes to these sort of things.

|
when i update to webpack5, there is a error: configuration has an unknown property 'before'
When I update webpack 4 to 5, this error appears.
I have a webpackDevServer.js which includes the code that triggers the error:
```
// webpackDevServer.js
module.exports = function(proxy, allowedHost) {
return {
before(app, server) {
if (fs.existsSync(paths.proxySetup)) {
// This registers user provided middleware for proxy reasons
require(paths.proxySetup)(app);
}
// This lets us fetch source contents from webpack for the error overlay
app.use(evalSourceMapMiddleware(server));
// This lets us open files from the runtime error overlay.
app.use(errorOverlayMiddleware());
// This service worker file is effectively a 'no-op' that will reset any
// previous service worker registered for the same host:port combination.
// We do this in development to avoid hitting the production cache if
// it used the same host and port.
// https://github.com/facebook/create-react-app/issues/2272#issuecomment-302832432
app.use(noopServiceWorkerMiddleware());
},
};
};
```
I use the above file in a start.js file. When I run the project, I type `node scripts/start.js`:
```
// start.js
...
const createDevServerConfig = require('../config/webpackDevServer.config');
...
const serverConfig = createDevServerConfig(
proxyConfig,
urls.lanUrlForConfig
);
const devServer = new WebpackDevServer(compiler, serverConfig);
```
then it throws an error
```
configuration has an unknown property 'before'. These properties are valid:
object { bonjour?, client?, compress?, dev?, firewall?, headers?, historyApiFallback?, host?, hot?, http2?, https?, injectClient?, injectHot?, liveReload?, onAfterSetupMiddleware?, onBeforeSetupMiddleware?, onListening?, open?, openPage?, overlay?, port?, proxy?, public?, setupExitSignals?, static?, stdin?, transportMode?, useLocalIp? }
```
here is my package.json
```
"webpack": "^5.20.2",
"webpack-dev-server": "^4.0.0-beta.0",
"webpack-manifest-plugin": "2.0.4",
"workbox-webpack-plugin": "^6.1.0"
```
|
You have to change `before` to `onBeforeSetupMiddleware`. Here is the link with the migration description from v3 to v4: <https://github.com/webpack/webpack-dev-server/blob/master/migration-v4.md>
In case something changes in the migration guide, the details are attached below:
```
v3:
module.exports = {
devServer: {
after: function (app, server, compiler) {
app.get("/some/path", function (req, res) {
res.json({ custom: "response" });
});
},
},
};
v4:
module.exports = {
devServer: {
onAfterSetupMiddleware: function (devServer) {
devServer.app.get("/some/path", function (req, res) {
res.json({ custom: "response" });
});
},
},
};
```
|
How to watermark PDFs using text or images?
I have a bunch of PDF documents in a folder and I want to augment them with a watermark. What are my options from a Java serverside context?
Preferably the watermark will support transparency. Both vector and raster is desirable.
|
Please take a look at the [TransparentWatermark2](http://itextpdf.com/sandbox/stamper/TransparentWatermark2) example. It adds transparent text on each odd page and a transparent image on each even page of an existing PDF document.
This is how it's done:
```
public void manipulatePdf(String src, String dest) throws IOException, DocumentException {
PdfReader reader = new PdfReader(src);
int n = reader.getNumberOfPages();
PdfStamper stamper = new PdfStamper(reader, new FileOutputStream(dest));
// text watermark
Font f = new Font(FontFamily.HELVETICA, 30);
Phrase p = new Phrase("My watermark (text)", f);
// image watermark
Image img = Image.getInstance(IMG);
float w = img.getScaledWidth();
float h = img.getScaledHeight();
// transparency
PdfGState gs1 = new PdfGState();
gs1.setFillOpacity(0.5f);
// properties
PdfContentByte over;
Rectangle pagesize;
float x, y;
// loop over every page
for (int i = 1; i <= n; i++) {
pagesize = reader.getPageSizeWithRotation(i);
x = (pagesize.getLeft() + pagesize.getRight()) / 2;
y = (pagesize.getTop() + pagesize.getBottom()) / 2;
over = stamper.getOverContent(i);
over.saveState();
over.setGState(gs1);
if (i % 2 == 1)
ColumnText.showTextAligned(over, Element.ALIGN_CENTER, p, x, y, 0);
else
over.addImage(img, w, 0, 0, h, x - (w / 2), y - (h / 2));
over.restoreState();
}
stamper.close();
reader.close();
}
```
As you can see, we create a `Phrase` object for the text and an `Image` object for the image. We also create a `PdfGState` object for the transparency. In our case, we go for 50% opacity (change the `0.5f` into something else to experiment).
Once we have these objects, we loop over every page. We use the `PdfReader` object to get information about the existing document, for instance the dimensions of every page. We use the `PdfStamper` object when we want to stamp extra content on the existing document, for instance adding a watermark on top of each single page.
When changing the graphics state, it is always safe to perform a `saveState()` before you start and to `restoreState()` once you're finished. Your code will probably also work if you don't do this, but believe me: it can save you plenty of debugging time if you adopt the discipline to do this, as you can get really strange effects if the graphics state is out of balance.
We apply the transparency using the `setGState()` method and depending on whether the page is an odd page or an even page, we add the text (using `ColumnText` and an `(x, y)` coordinate calculated so that the text is added in the middle of each page) or the image (using the `addImage()` method and the appropriate parameters for the transformation matrix).
Once you've done this for every page in the document, you have to `close()` the `stamper` and the `reader`.
**Caveat:**
You'll notice that pages 3 and 4 are in landscape, yet there is a difference between those two pages that isn't visible to the naked eye. Page 3 is actually a page of which the size is defined as if it were a page in portrait, but it is rotated by 90 degrees. Page 4 is a page of which the size is defined in such a way that the width > the height.
This can have an impact on the way you add a watermark, but if you use `getPageSizeWithRotation()`, iText will adapt. This may not be what you want: maybe you want the watermark to be added differently.
Take a look at [TransparentWatermark3](http://itextpdf.com/sandbox/stamper/TransparentWatermark3):
```
public void manipulatePdf(String src, String dest) throws IOException, DocumentException {
PdfReader reader = new PdfReader(src);
int n = reader.getNumberOfPages();
PdfStamper stamper = new PdfStamper(reader, new FileOutputStream(dest));
stamper.setRotateContents(false);
// text watermark
Font f = new Font(FontFamily.HELVETICA, 30);
Phrase p = new Phrase("My watermark (text)", f);
// image watermark
Image img = Image.getInstance(IMG);
float w = img.getScaledWidth();
float h = img.getScaledHeight();
// transparency
PdfGState gs1 = new PdfGState();
gs1.setFillOpacity(0.5f);
// properties
PdfContentByte over;
Rectangle pagesize;
float x, y;
// loop over every page
for (int i = 1; i <= n; i++) {
pagesize = reader.getPageSize(i);
x = (pagesize.getLeft() + pagesize.getRight()) / 2;
y = (pagesize.getTop() + pagesize.getBottom()) / 2;
over = stamper.getOverContent(i);
over.saveState();
over.setGState(gs1);
if (i % 2 == 1)
ColumnText.showTextAligned(over, Element.ALIGN_CENTER, p, x, y, 0);
else
over.addImage(img, w, 0, 0, h, x - (w / 2), y - (h / 2));
over.restoreState();
}
stamper.close();
reader.close();
}
```
In this case, we don't use `getPageSizeWithRotation()` but simply `getPageSize()`. We also tell the `stamper` *not* to compensate for the existing page rotation: `stamper.setRotateContents(false);`
Take a look at the difference in the resulting PDFs:
In the first screen shot (showing page 3 and 4 of the resulting PDF of `TransparentWatermark2`), the page to the left is actually a page in portrait rotated by 90 degrees. iText however, treats it as if it were a page in landscape just like the page to the right.

In the second screen shot (showing page 3 and 4 of the resulting PDF of `TransparentWatermark3`), the page to the left is a page in portrait rotated by 90 degrees and we add the watermark as if the page is in portrait. As a result, the watermark is also rotated by 90 degrees. This doesn't happen with the page to the right, because that page has a rotation of 0 degrees.

This is a subtle difference, but I thought you'd want to know.
If you want to read this answer in French, please read [Comment créer un filigrane transparent en PDF?](http://www.developpez.net/forums/blogs/133351-blowagie/b432/creer-filigrane-transparent-pdf/)
|
How to retrieve half of records from a table - Oracle 11g
How can I retrieve *(select)* half of the records from a table? For example, for a table with 1000 rows, retrieve 500 (50%) of them. (In this case I can use rownum because we know the exact number of rows (1000) - `select * from table where rownum <= 500` - but I would have to count every table to build the statement.)
What do you think is the best way to do this?
|
Well, you could count the rows and select half:
```
select *
from my_table
where rownum <= (select count(*)/2 from my_table)
```
That would *tend* to select rows that are contiguous within the physical segments.
Or ...
```
select *
from (select rownum rn, t.* from my_table t)
where mod(rn,2) = 0
```
That would *tend* to select "every other" row, so you'd get a pretty even spread from the physical data segments.
Or ...
```
select *
from my_table sample (50)
```
That would be approximately half of the rows.
Or ...
```
select *
from my_table sample block (50)
```
That would be the rows from approximately half of the data blocks below the high water marks of the segments.
Probably lots of different ways available, and which one you want probably depends on whether you want the selected pseudo-randomly or not.
If you want to use the output of the query, use something like:
```
select ...
from (select *
from my_table
where rownum <= (select count(*)/2 from my_table)) my_table
join ...
```
In that circumstance the SAMPLE syntax would be more compact.
|
How to create groups and assign permission during project setup in django?
I know I can create user groups and assign permission to them from the admin area in a django project. I can also create a group and assign permission to it by importing `Group` and `Permission` model from django's auth module.
What I want to know is whether there is any way I can create groups and assign permissions to them when I set up the project. So, say I have four types of users: **Admin**, **Developer**, **Tester** and **Project Manager**. These are basically user groups which would have different permission levels. I did not customize the `User` model and can only differentiate users by the groups they are assigned to. So is there a way to create these groups and assign the required permissions to them, like when permissions are created for the admin when I run `python manage.py migrate`?
|
You can define a [`post_migrate`](https://docs.djangoproject.com/en/1.10/ref/signals/#post-migrate) signal to create required `User` and `Group` model instances if they don't exist already.
When you create an application using `python manage.py startapp <app_name>`, it creates an `AppConfig` class in the apps.py file.
You can specify which signal to call in the `AppConfig` class definition. Say the signal handler is called `populate_models`. In that case, modify the AppConfig to look like the following:
```
from django.apps import AppConfig
from django.db.models.signals import post_migrate
class MyAppConfig(AppConfig):  # renamed so it does not shadow the base class
name = 'app'
def ready(self):
from .signals import populate_models
post_migrate.connect(populate_models, sender=self)
```
And in signals.py define the `populate_models` function.
```
def populate_models(sender, **kwargs):
    from django.contrib.auth.models import User, Group, Permission
# create groups
# assign permissions to groups
# create users
```
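Fleshing out the placeholder comments, the body of `populate_models` might look like the sketch below. The group names come from the question; the permission codename `change_project` is a made-up example you would replace with one of your own:

```python
def populate_models(sender, **kwargs):
    # Deferred imports: by the time post_migrate fires, Django's app
    # registry is ready, so importing models here is safe.
    from django.contrib.auth.models import Group, Permission

    # Create the four groups from the question if they don't exist yet.
    for group_name in ("Admin", "Developer", "Tester", "Project Manager"):
        Group.objects.get_or_create(name=group_name)

    # Example: give Developers a model permission.
    # "change_project" is a hypothetical codename for illustration.
    developers, _ = Group.objects.get_or_create(name="Developer")
    try:
        perm = Permission.objects.get(codename="change_project")
        developers.permissions.add(perm)
    except Permission.DoesNotExist:
        pass  # permissions for your models are created by migrate itself
```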
|
How can I search for broken links of a website using Java?
I would like to scan some websites looking for broken links, preferably using Java. Any hints on how I can start doing this?
(I know there are some websites that do this, but I want to make my own personalized log file)
|
Writing a web-crawler isn't as simple as just reading the static HTML; if the page uses JavaScript to modify the DOM, it gets complex. You will also need to look out for pages you've already visited (aka spider traps). If the site is pure static HTML, then go for it... but if the site uses jQuery and is large, expect it to be complex.
If your site is all static, small and has little or no JS then use the answers already listed.
Or
You could use [Heritrix](http://crawler.archive.org/) and then later parsed it's crawl.log for 404's. [Heritrix doc on crawl.log](http://crawler.archive.org/articles/user_manual/analysis.html#logs)
Or **if you must write your own:**
You could use something like [HTMLUnit](http://htmlunit.sourceforge.net/) (it has a JavaScript engine) to load the page, then query the DOM object for links. Then place each link in a "unvisited" queue, then pull links from the unvisited queue to get your next url to load, if the page fails to load, report it.
To avoid duplicate pages (spider traps) you could hash each link and keep a HashTable of visited pages (see [CityHash](http://code.google.com/p/cityhash/) ). Before placing a link into the unvisited queue check it against the visited hashtable.
To avoid leaving your site, check that the URL is in a safe-domain list before adding it to the unvisited queue. If you want to confirm that the off-domain links are good, keep them in an offDomain queue. Then later load each link from this queue using `url.getContent()` to see if they work (faster than using HTMLUnit, and you don't need to parse the page anyway).
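The queue/visited-set logic above can be sketched independently of any HTTP library. In this sketch the "site" is an in-memory map from URL to outgoing links, and a link counts as broken when its target page does not exist; in a real crawler that `containsKey` test would be an HTTP request (e.g. via HTMLUnit):

```java
import java.util.*;

public class Main {
    // Breadth-first link check with a visited set to avoid spider traps.
    static Set<String> findBrokenLinks(Map<String, List<String>> site, String start) {
        Set<String> visited = new HashSet<>();      // pages already crawled
        Set<String> broken = new LinkedHashSet<>(); // dead links found
        Deque<String> unvisited = new ArrayDeque<>();
        unvisited.add(start);
        while (!unvisited.isEmpty()) {
            String url = unvisited.poll();
            if (!visited.add(url)) continue;        // already seen: skip (trap avoidance)
            for (String link : site.getOrDefault(url, List.of())) {
                if (!site.containsKey(link)) broken.add(link); // would be an HTTP check
                else unvisited.add(link);
            }
        }
        return broken;
    }

    public static void main(String[] args) {
        Map<String, List<String>> site = Map.of(
            "/", List.of("/a", "/missing"),
            "/a", List.of("/", "/b"),   // "/" again forms a cycle; visited set handles it
            "/b", List.of("/dead"));
        Set<String> broken = findBrokenLinks(site, "/");
        if (!broken.equals(Set.of("/missing", "/dead"))) throw new AssertionError(broken);
        System.out.println(broken);
    }
}
```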
|
Why does JavaScript's getElementsByClassName provide an object that is NOT an array?
I'm trying to get a list in JavaScript (not using jQuery) of all the elements on the page with a specific class name. I therefore employ the getElementsByClassName() function as follows:
```
var expand_buttons = document.getElementsByClassName('expand');
console.log(expand_buttons, expand_buttons.length, expand_buttons[0]);
```
Note that I have three anchor elements on my page with the class 'expand'. This console.log() outputs
```
[] 0 undefined
```
Next, for kicks, I threw expand\_buttons into its own array as follows:
```
var newArray = new Array(expand_buttons);
console.log(newArray, newArray.length);
```
This suddenly outputs
```
[NodeList[3]] 1
```
and I can click through the nodelist and see the attributes of the three 'expand' anchor elements on the page. It's also worth noting that I was able to get my code working in a [w3schools test page](http://www.w3schools.com/jsref/tryit.asp?filename=tryjsref_doc_getelementsbyname).
It may also be of note that my use of document.getElementsByName actually does output (to the console) an array of elements, but when I ask for its length, it tells me 0. Similarly, if I try to access an array element using `array_name[0]` as normal, it outputs 'undefined', despite there clearly being an element inside of an array when I print the object to the console.
Does anybody have any idea why this might be? I just want to loop through DOM elements, and I'm avoiding jQuery at the moment because I'm trying to practice coding with vanilla JavaScript.
Thanks,
ParagonRG
|
It's not so much a JavaScript thing as it is a web browser thing. That API is supplied by a native object (the `document` object), and by the DOM spec it returns a NodeList object. You can treat a NodeList like an array, and it's similar, but distinctly different (as you've noticed).
You can always copy a NodeList to a new array:
```
var nodeArr = Array.prototype.slice.call(theNodeList, 0);
```
or in modern ES2015 environments:
```
var nodeArr = Array.from(theNodeList);
```
JavaScript always exists in some runtime context, and the context can include all sorts of APIs that provide facilities to JavaScript code. A web browser is one of those contexts. The DOM is specified in a way that's not especially partial to JavaScript; it's a language-neutral interface definition.
I guess the short version of this answer would be, "because it just does."
|
How to hide legend selectively in a plotly line plot?
I'm struggling to hide the legend for some but not all of the lines in my line plot. Here is what the plot looks like now.
### [Plot:](https://i.stack.imgur.com/bz3fe.jpg)
[](https://i.stack.imgur.com/ZItnr.png)
Essentially I want to hide the legend for the light grey lines while keeping it in place for the coloured lines.
Here's my code:
```
import plotly.graph_objects as go
fig = go.Figure()
fig.update_layout(autosize=False, width=800, height=500, template='none')
fig.update_layout(title = 'Title', xaxis_title = 'Games', yaxis_title = 'Profit')
for team in rest_teams:
fig.add_traces(go.Scatter(x=df['x'], y = df[team], name = team, line = {'color': '#F5F5F5'}))
for team in big_eight:
line_dict = {'color': cmap[team]}
fig.add_traces(go.Scatter(x=df['x'], y = df[team], name = team, line = line_dict))
fig.show()
```
I can update layout with
```
fig.update_layout(showlegend=False)
```
which hides the whole thing and isn't optimal. Help would be appreciated.
|
If I understand your desired output correctly, you can use `showlegend = False` for the traces where you've set a grey color with `color = #F5F5F5`:
```
for c in cols1:
fig.add_trace(go.Scatter(x = df.index, y = df[c], line_color = '#F5F5F5',
showlegend = False))
```
And then leave that out for the lines you'd like colors assigned to, and make sure to include `name = c` in:
```
for c in cols2:
fig.add_trace(go.Scatter(x = df.index, y = df[c],
name = c))
```
### Plot:
[](https://i.stack.imgur.com/jPCPf.png)
### Complete code:
```
import plotly.graph_objects as go
import plotly.express as px
import pandas as pd
df = px.data.stocks()
df = df.set_index('date')
fig = go.Figure()
cols1 = df.columns[:2]
cols2 = df.columns[2:]
for c in cols1:
fig.add_trace(go.Scatter(x = df.index, y = df[c], line_color = '#F5F5F5',
showlegend = False))
for c in cols2:
fig.add_trace(go.Scatter(x = df.index, y = df[c],
name = c))
fig.update_layout(template='none')
fig.show()
```
|
Mouse Lag in ubuntu Gnome
I just installed Ubuntu GNOME and I am having trouble with the mouse.
When I move out of a window, a little mouse icon sticks to the screen. I have checked the drivers. I also tried GNOME in Arch Linux, Fedora and SUSE with the same issue.
My version is 16.04 LTS, just set up with nothing installed on it.
|
I saw this and I really think it solved my problem on a fresh install of 16.04 LTS:
"<https://superuser.com/questions/528727/how-do-i-solve-periodic-mouse-lag-on-linux-mint-mate>"
To save some peeps the trouble of going to the link here are the specific instructions I followed (copy and paste each line on a terminal):
```
sudo su -
modprobe drm_kms_helper
echo N > /sys/module/drm_kms_helper/parameters/poll
echo "drm_kms_helper" >> /etc/modprobe.d/local.conf
echo 'drm_kms_helper' >> /etc/modules-load.d/local.conf
echo "options drm_kms_helper poll=N" >> /etc/modprobe.d/local.conf
```
Credit goes to the original poster, Mr. Indrek.
|
Debugging Quicklook Plugin in Xcode
I am trying to debug a quicklook plugin in Xcode 4.6. I have created the executable in Edit Scheme.
Now, when I build the project the plugin is not installed to the "/Library/Quicklook" path.
Rather I didn't find it anywhere.
I want to run the plugin in debug mode and want to hit the breakpoints so that I can figure out where the plugin crashes on different files.
|
You can use breakpoints and the `NSLog` function for debugging a QL plugin. But first you need to configure the environment.
- Select your project on Project Navigator
- Then select QL plugin target
- Go to '**Build Phases**' tab and add new phase '**Copy files**'
- Select '**Absolute Path**' on destination drop down menu and set `~/Library/QuickLook` as subpath
- Open terminal and copy `qlmanage` to your project's root directory with command `cp /usr/bin/qlmanage PROJECT_ROOT_DIR`
- Then select menu **Product -> Scheme -> Edit Scheme...**
- Select '**Run**' on schemes list
- On info tab select executable drop down menu, then '**Other...**' and select `qlmanage` binary you have copied to project's root directory
- On arguments tab add row for '**Arguments Passed On Launch**' and set value to `-p FULL_PATH_TO_FILE_FOR_PREVIEW`
After all steps you can run your project and debug your code.
|
How to make normalize to work on all types of arrays in Julia?
`normalize([1 2])` won't work; only `normalize([1, 2])` does.
How to make it accept all types of lists, arrays and vectors?
|
`normalize` works on vectors, and `[1 2]` is a 2-dimensional matrix.
```
julia> [1, 2] isa Vector
true
julia> [1 2] isa Vector
false
```
You can make a flattened version of a matrix with [:], as in:
```
julia> [1 2][:] isa Vector
true
```
So you can call normalize([1 2][:]) without a problem:
```
julia> normalize([1, 2]) == normalize([1 2][:])
true
```
Note that using vec also works and avoids one extra copy:
```
julia> normalize([1, 2]) == normalize(vec([1 2]))
true
```
To avoid *any* copies, you can use normalize! instead of normalize to change the vector in-place (note that normalize! prefers floating point, not integer values).
Look up reshape() if you need to get the original [1 2] shape back.
|
TypeScript allows a function as React prop that conflicts with in-param of function signature?
Why does the TypeScript type checker allow a prop with a function parameter that does not strictly match the definition?
Specifically I define a function `callbackImpl = (str: string): number`, and give it as a React prop parameter defined as `callback(parameter: string | undefined): number;`, which surprisingly works.
This is unintuitive to me, and in my opinion quite dangerous!
But! It does not work to call `callbackImpl(undefined)` which I think is correct.
A complete example:
```
import React from "react";
interface Props {
callback(parameter: string | undefined): number;
}
class A extends React.Component<Props> {
componentDidUpdate() {
this.props.callback(undefined);
}
}
class B extends React.Component {
private callbackImpl = (str: string): number => {
// Will crash if str is undefined
return str.length;
};
// THIS IS NOT ALLOWED! And rightly so!
private callLocalWithUndefined() {
// TS2345: Argument of type 'undefined' is not assignable to parameter of type 'string'.
this.callbackImpl(undefined);
}
render() {
return (
<React.Fragment>
<A
// This is obviously just as illegal as what happens in callLocalWithUndefined,
// since callbackImpl explicitly does not accept undefined as the first parameter,
// but no type errors here!?
callback={this.callbackImpl}
/>
</React.Fragment>
);
}
}
```
I have set `"strict": true,` in `tsconfig.json`
Here is a more complete `tsconfig.json` listing, with some local stuff omitted.
```
{
"compilerOptions": {
"outDir": "./dist/",
"sourceMap": true,
"noImplicitAny": true,
"moduleResolution": "node",
"module": "esnext",
"allowSyntheticDefaultImports": true,
"target": "es6",
"jsx": "react",
"allowJs": true,
"strict": true,
"noEmitOnError": true,
"plugins": [],
"baseUrl": "./",
"paths": {
// Omitted
},
"lib": [
"es2017", // Object.entries support
"dom"
],
"types": ["gapi", "gapi.auth2", "node"]
},
"exclude": [
"node_modules"
]
}
```
Am I doing something wrong? Are my tsconfig settings wrong? Am I misunderstanding something?
Thanks!
# Edit
Additional resources after the answer by Titian Cernicova-Dragomir
- [Strict Function Types](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-2-6.html) as described by the TS 2.6 Release Notes. Describes that methods are not covered by Strict Function Type checking.
- And in the [regular documentation](https://www.typescriptlang.org/tsconfig#strictFunctionTypes)
|
You are right, this is unsafe. There was a rationale for allowing this behavior: to ease migration from JS to TS. The good news is that you can opt out of allowing it by using `strictFunctionTypes`.
For example this code
```
declare const callbackImpl: (str: string) => number
let callback: (parameter: string | undefined) => number;
callback = callbackImpl
callback(undefined);
```
The code above compiles with `strictNullChecks` even though it's not fully type safe as you noticed. But it will fail to compile with both `strictNullChecks` and `strictFunctionTypes`
**Note** I assumed you are already using `strictNullChecks`; if you are not, then the code works as expected, because without this option `string|undefined` is by definition just `string`.
**Edit**
The above generic explanation was posted before the question included the actual code. The reason the compiler does not catch the error is that you define `callback` as a method. If you define it as a function field, the compiler catches the error. I am still unsure why. See below:
```
interface Props {
callback: (parameter: string | undefined) => number;
}
class A extends React.Component<Props> {
componentDidUpdate() {
this.props.callback(undefined);
}
}
class B extends React.Component {
private callbackImpl = (str: string): number => {
// Will crash if str is undefined
return str.length;
};
render() {
return (
<React.Fragment>
<A
// Error now
callback={this.callbackImpl}
/>
</React.Fragment>
);
}
}
```
**Edit**
This behavior is by design. The `--strictFunctionTypes` flag does not apply to methods, as stated in the original [PR](https://github.com/Microsoft/TypeScript/pull/18654):
>
> The stricter checking applies to all function types, except those originating in method or constructor declarations. Methods are excluded specifically to ensure generic classes and interfaces (such as Array) continue to mostly relate covariantly. The impact of strictly checking methods would be a much bigger breaking change as a large number of generic types would become invariant (even so, we may continue to explore this stricter mode).
>
>
>
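To see why this hole matters at runtime, here is a small sketch. The `as` cast stands in for what bivariant method positions permit implicitly; under `strictFunctionTypes` the equivalent direct assignment of a function property would be rejected:

```typescript
// An implementation that assumes `string` crashes when handed `undefined`
// through the wider signature the checker let slip past.
const impl = (str: string): number => str.length;

// The cast simulates what bivariant method checking allows implicitly:
const widened = impl as (p: string | undefined) => number;

let crashed = false;
try {
  widened(undefined); // impl tries to read .length of undefined
} catch {
  crashed = true;
}
console.log(crashed); // true
```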
|
Typescript/Javascript: using tuple as key of Map
Hit this odd bug in my code and I can't figure out how to get a constant-time lookup from a Map when using a tuple as my key.
Hopefully this illustrates the issue, and the workaround I'm using now just to get it to work:
hello.ts:
```
let map: Map<[number, number], number> = new Map<[number, number], number>()
.set([0, 0], 48);
console.log(map.get([0,0])); // prints undefined
console.log(map.get(String([0, 0]))); // compiler: error TS2345: Argument of type
// 'string' is not assignable to parameter of type '[number, number]'.
//the work-around:
map.forEach((value: number, key: [number, number]) => {
if(String(key) === String([0, 0])){
console.log(value); // prints 48
}
})
```
To compile (transpile?) I'm using:
```
tsc hello.ts -target es6
```
tsc version 2.1.6
Tried several things to make the Map.get() method to work, not having much success.
|
In JavaScript (and as an extension, TypeScript), no two arrays are equal except if they refer to the same array (i.e., when changing the elements of one also would change the elements of another). If you create a new array with the same elements, it would not consider it to be equal to any existing one.
Because Maps consider such equality when looking up elements, if you store a value with an array as a key, you can only get the value out again if you pass in the exact same array reference as a key again:
```
const map: Map<[ number, number], number> = new Map<[ number, number ], number>();
const a: [ number, number ] = [ 0, 0 ];
const b: [ number, number ] = [ 0, 0 ];
// a and b have the same value, but refer to different arrays so are not equal
a === b; // = false
map.set(a, 123);
map.get(a); // = 123
map.get(b); // = undefined
```
One simple workaround for this is to use strings or numbers as keys, as these are always considered equal when they have the same value:
```
const map: Map<string, number> = new Map<string, number>();
const a: [ number, number ] = [ 0, 0 ];
const b: [ number, number ] = [ 0, 0 ];
const astr: string = a.join(','); // = '0,0'
const bstr: string = b.join(','); // = '0,0'
// astr and bstr have the same value, and are strings so they are always equal
astr === bstr; // = true
map.set(astr, 123);
map.get(astr); // = 123
map.get(bstr); // = 123
```
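The join-to-string idea can be wrapped up so callers keep a tuple-shaped API. A minimal sketch (the `TupleMap` name and shape are my own, not a standard type):

```typescript
// Hypothetical helper: serializes tuple keys to strings internally,
// so lookups work by value rather than by reference.
class TupleMap<V> {
  private inner = new Map<string, V>();
  private key(t: [number, number]): string {
    return t.join(',');
  }
  set(t: [number, number], v: V): this {
    this.inner.set(this.key(t), v);
    return this;
  }
  get(t: [number, number]): V | undefined {
    return this.inner.get(this.key(t));
  }
}

const m = new TupleMap<number>();
m.set([0, 0], 48);
console.log(m.get([0, 0])); // 48 -- a fresh tuple now finds the value
```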
|
How to start a Scala akka actor
The class below causes an error at the line `new HelloWorld`:
```
Exception in thread "main" akka.actor.ActorInitializationException: You cannot create an instance of [HelloWorld] explicitly using the constructor (new). You have to use one of the 'actorOf' factory methods to create a new actor. See the documentation.
at akka.actor.ActorInitializationException$.apply(Actor.scala:219)
at akka.actor.Actor$class.$init$(Actor.scala:436)
at HelloWorld.<init>(HelloWorld.scala:4)
at Driver$.main(HelloWorld.scala:38)
at Driver.main(HelloWorld.scala)
```
So I try: `val hw = actorOf(new HelloWorld)`
But this causes a compiler error :
```
not found: value actorOf
```
How should HelloWorld below be implemented?
Reading other Scala docs, an act method is required to be defined within the class that extends Actor, and then the start method is invoked on this class. Is there a reason for using actorOf instead of defining an act method?
The class below is taken from the Scala Akka docs <http://doc.akka.io/docs/akka/2.2.0/scala.html>:
```
import akka.actor.Actor
import akka.actor.Actor._
import akka.actor.Props
class HelloWorld extends Actor {
override def preStart(): Unit = {
// create the greeter actor
val greeter = context.actorOf(Props[Greeter], "greeter")
// tell it to perform the greeting
greeter ! Greeter.Greet
}
def receive = {
// when the greeter is done, stop this actor and with it the application
case Greeter.Done => context.stop(self)
}
object Greeter {
case object Greet
case object Done
}
class Greeter extends Actor {
def receive = {
case Greeter.Greet =>
println("Hello World!")
sender ! Greeter.Done
}
}
}
object Driver {
def main(args: Array[String]) {
new HelloWorld
}
}
```
|
You need to edit your main as shown below. Secondly, on line 5, you need to change it to `context.actorOf(Props(new Greeter))`. This is because your `Greeter` does not have an `apply` function defined, hence you need to create the Greeter object manually yourself.
Working code below:
```
import akka.actor.{Actor, ActorSystem, Props}
class HelloWorld extends Actor {
override def preStart(): Unit = {
// create the greeter actor
val greeter = context.actorOf(Props(new Greeter), "greeter")//line 5
// tell it to perform the greeting
greeter ! Greeter.Greet
}
def receive = {
// when the greeter is done, stop this actor and with it the application
case Greeter.Done => context.stop(self)
}
object Greeter {
case object Greet
case object Done
}
class Greeter extends Actor {
def receive = {
case Greeter.Greet =>
println("Hello World!")
sender ! Greeter.Done
}
}
}
object Driver {
def main(args: Array[String]) {
val system = ActorSystem("Main")
val ac = system.actorOf(Props[HelloWorld])
}
}
```
|
Can I have any books about Azure Data Lake internals?
I don't want to use ADL and ADLA as a black box. I need to understand how the gears turn under the hood so I can use them efficiently.
Where can I find information that describes the internals:
1. how U-SQL query is processed
2. how parallelism is worked
3. how storage is organized in ADL at low level
4. how DB's storage is organized in ADL at low level (is it rowstore or columnstore)
5. how partitioning is organized
6. etc
There exist a lot of books and whitepapers that describe RDBMS engines' internals. Does anything similar exist for ADL/ADLA?
There are a lot of people who work on Azure. Could you publish any drafts/whitepapers to use as-is (unofficially)?
|
Some of that information is available in presentations we have given. For example you can find some of these presentations on my slideshare account at: <http://www.slideshare.net/MichaelRys>.
To answer some of your questions above:
The current clustered index version of U-SQL tables is stored in your catalog folder as so-called structured stream files. These are highly compressible, scaled-out files that use a row-oriented structure with self-contained metadata and statistics (more detailed stats can be created). The table construct provides two-level partitioning: addressable partitions and internal distribution schemes (HASH, RANGE, etc.). Both help with parallelization, although distribution schemes are more for performance while partitions are more for data lifecycle management. There is no hard limit on their size, although the sweet spot is 1 GB to 4 GB per distribution bucket.
1 AU is basically 1 container. And ADLS is NOT HDFS architecturally but offers the WebHDFS API for compatibility.
|
Debunking wrong CLT statement
The central limit theorem (CLT) gives some nice properties about converging to a normal distribution. Prior to studying statistics formally, I was under the extremely wrong impression that the CLT said that data approached normality.
I now find myself arguing with collaborators about this. I say that $68\%$ of the data need not be within one standard deviation of the mean when we have non-normal distributions. They agree but then say that, by the CLT, since we have many observations (probably 50,000), our data are very close to normal, so we can use the empirical rule and say that $68\%$ of the data are within one standard deviation of the mean. This is, of course, false. The population does not care how many observations are drawn from it; the population is the population, whether we sample from it or not!
What would be a good way to explain why the central limit theorem is not about the empirical distribution converging?
|
This is quite a ubiquitous misunderstanding of the central limit theorem, which I have also encountered in my statistical teaching. Over the years I have encountered this problem so often that I have developed a Socratic method to deal with it. I identify a student that has accepted this idea and then engage the student to tease out what this would logically imply. It is fairly simple to get to the *reductio ad absurdum* of the false version of the theorem, which is that *every sequence of IID random variables has a normal distribution*. A typical conversation would go something like this.
>
> **Teacher:** I noticed in this assignment question that you said that because $n$ is large, the data are approximately normally distributed. Can you take me through your reasoning for that bit?
>
>
> **Student:** Is that wrong?
>
>
> **Teacher:** I don't know. Let's have a look at it.
>
>
> **Student:** Well, I used that theorem you talked about in class; that main one you mentioned a bunch of times. I forget the name.
>
>
> **Teacher:** The central limit theorem?
>
>
> **Student:** Yeah, the central limit theorem.
>
>
> **Teacher:** Great, and when does that theorem apply?
>
>
> **Student:** I think if the variables are IID.
>
>
> **Teacher:** And have finite variance.
>
>
> **Student:** Yeah, and finite variance.
>
>
> **Teacher:** Okay, so the random variables have some *fixed* distribution with finite variance, is that right?
>
>
> **Student:** Yeah.
>
>
> **Teacher:** And the distribution isn't changing or anything?
>
>
> **Student:** No, they're IID with a fixed distribution.
>
>
> **Teacher:** Okay great, so let me see if I can state the theorem. The central limit theorem says that if you have an IID sequence of random variables with finite variance, and you take a sample of $n$ of them, then as that sample size $n$ gets large the distribution of the random variables converges to a normal distribution. Is that right?
>
>
> **Student:** Yeah, I think so.
>
>
> **Teacher:** Okay great, so let's think about what that would mean. Suppose I have a sequence like that. If I take say, a thousand sample values, what is the distribution of those random variables?
>
>
> **Student:** It's approximately a normal distribution.
>
>
> **Teacher:** How close?
>
>
> **Student:** Pretty close I think.
>
>
> **Teacher:** Okay, what if I take a billion sample values. How close now?
>
>
> **Student:** Really close I'd say.
>
>
> **Teacher:** And if we have a sequence of these things, then in theory we can take $n$ as high as we want can't we? So we can make the distribution as close to a normal distribution as we want.
>
>
> **Student:** Yeah.
>
>
> **Teacher:** So let's say we take $n$ big enough that we're happy to say that the random variables basically have a normal distribution. And that's a fixed distribution right?
>
>
> **Student:** Yeah.
>
>
> **Teacher:** And they're IID right? These random variables are IID?
>
>
> **Student:** Yeah, they're IID.
>
>
> **Teacher:** Okay, so they all have the same distribution.
>
>
> **Student:** Yeah.
>
>
> **Teacher:** Okay, so that means the *first* value in the sequence, it also has a normal distribution. Is that right?
>
>
> **Student:** Yeah. I mean, it's an approximation, but yeah, if $n$ is really large then it effectively has a normal distribution.
>
>
> **Teacher:** Okay great. And so does the second value in the sequence, and so on, right?
>
>
> **Student:** Yeah.
>
>
> **Teacher:** Okay, so really, as soon as we started sampling, we were already getting values that are essentially normal distributed. We didn't really need to wait until $n$ gets large before that started happening.
>
>
> **Student:** Hmmm. I'm not sure. That sounds wrong. The theorem says you need a large $n$, so I guess I think you can't apply it if you only sampled a small number of values.
>
>
> **Teacher:** Okay, so let's say we are sampling a billion values. Then we have large $n$. And we've established that this means that the first few random variables in the sequence are normally distributed, to a very close approximation. If that's true, can't we just stop sampling early? Say we were going to sample a billion values, but then we stop sampling after the first value. Was that random variable still normally distributed?
>
>
> **Student:** I think maybe it isn't.
>
>
> **Teacher:** Okay, so at some point its distribution changes?
>
>
> **Student:** I'm not sure. I'm a bit confused about it now.
>
>
> **Teacher:** Hmmm, well it seems we have something strange going on here. Why don't you have another read of the material on the central limit theorem and see if you can figure out how to resolve that contradiction. Let's talk more about it then.
>
>
>
That is one possible approach, which seeks to reduce the false theorem down to the *reductio* which says that every IID sequence (with finite variance) must be composed of normal random variables. Either the student will get to this conclusion, and realise something is wrong, or they will defend against this conclusion by saying that the distribution changes as $n$ gets large (or they may handwave a bit, and you might have to lawyer them to a conclusion). Either way, this usually provokes some further thinking that can lead them to re-read the theorem. Here is another approach:
>
> **Teacher:** Let's look at this another way. Suppose we have an IID sequence of random variables from some other distribution; one that is **not** a normal distribution. Is that possible? For example, could we have a sequence of random variables representing outcome of coin flip, from the Bernoulli distribution?
>
>
> **Student:** Yeah, we can have that.
>
>
> **Teacher:** Okay, great. And these are all IID values, so again, they all have the same distribution. So every random variable in that sequence is going to have a distribution that is *not* a normal distribution, right?
>
>
> **Student:** Yeah.
>
>
> **Teacher:** In fact, in this case, every value in the sequence will be the outcome of a coin flip, which we set as zero or one. Is that right?
>
>
> **Student:** Yeah, as long as we label them that way.
>
>
> **Teacher:** Okay, great. So if all the values in the sequence are zeroes or ones,
> no matter how many of them we sample, we are always going to get a histogram showing values at zero and one, right?
>
>
> **Student:** Yeah.
>
>
> **Teacher:** Okay. And do you think if we sample more and more values, we will get closer and closer to the true distribution? Like, if it is a fair coin, does the histogram eventually converge to where the relative frequency bars are the same height?
>
>
> **Student:** I guess so. I think it does.
>
>
> **Teacher:** I think you're right. In fact, we call that result the "law of large numbers". Anyway, it seems like we have a bit of a problem here doesn't it. If we sample a large number of the values then the central limit theorem says we converge to a normal distribution, but it sounds like the "law of large numbers" says we actually converge to the true distribution, which isn't a normal distribution. In fact, it's a distribution that is just probabilities on the zero value and the one value, which looks nothing like the normal distribution. So which is it?
>
>
> **Student:** I think when $n$ is large it looks like a normal distribution.
>
>
> **Teacher:** So describe it to me. Let's say we have flipped the coin a billion times. Describe the distribution of the outcomes and explain why that looks like a normal distribution.
>
>
> **Student:** I'm not really sure how to do that.
>
>
> **Teacher:** Okay. Well, do you agree that if we have a billion coin flips, all those outcomes are zeroes and ones?
>
>
> **Student:** Yeah.
>
>
> **Teacher:** Okay, so describe what its histogram looks like.
>
>
> **Student:** It's just two bars on those values.
>
>
> **Teacher:** Okay, so not "bell curve" shaped?
>
>
> **Student:** Yeah, I guess not.
>
>
> **Teacher:** Hmmm, so perhaps the central limit theorem doesn't say what we thought. Why don't you read the material on the central limit theorem again and see if you can figure out what it says. Let's talk more about it then.
>
>
>
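The coin-flip point from the second dialogue can be checked directly with a quick simulation (TypeScript here, though any language works): however many flips we take, the empirical distribution stays concentrated on 0 and 1. It is the sample mean, not the data, that the limit theorems speak about.

```typescript
// Draw n Bernoulli(p) observations.
function bernoulli(n: number, p: number): number[] {
  return Array.from({ length: n }, () => (Math.random() < p ? 1 : 0));
}

const draws = bernoulli(100000, 0.5);

// The data never look normal: every observation is exactly 0 or 1.
const support = Array.from(new Set(draws)).sort((a, b) => a - b);
console.log(support); // [0, 1]

// The sample mean, by contrast, concentrates near 0.5 -- this is what the
// law of large numbers (and, after standardizing, the CLT) describes.
const mean = draws.reduce((a, b) => a + b, 0) / draws.length;
console.log(Math.abs(mean - 0.5) < 0.02); // true (the mean's sd is about 0.0016)
```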
|
Weak Reference to NSTimer Target To Prevent Retain Cycle
I'm using an `NSTimer` like this:
```
timer = [NSTimer scheduledTimerWithTimeInterval:30.0f target:self selector:@selector(tick) userInfo:nil repeats:YES];
```
Of course, `NSTimer` retains the target which creates a retain cycle. Furthermore, `self` isn't a UIViewController so I don't have anything like `viewDidUnload` where I can invalidate the timer to break the cycle. So I'm wondering if I could use a weak reference instead:
```
__weak id weakSelf = self;
timer = [NSTimer scheduledTimerWithTimeInterval:30.0f target:weakSelf selector:@selector(tick) userInfo:nil repeats:YES];
```
I've heard that the timer **must** be invalidated (i guess to release it from the run loop). But we could do that in our dealloc, right?
```
- (void) dealloc {
[timer invalidate];
}
```
Is this a viable option? I've seen a lot of ways that people deal with this issue, but I haven't seen this.
|
The proposed code:
```
__weak id weakSelf = self;
timer = [NSTimer scheduledTimerWithTimeInterval:30.0f target:weakSelf selector:@selector(tick) userInfo:nil repeats:YES];
```
has the effect that (i) a weak reference is made to self; (ii) that weak reference is read in order to provide a pointer to `NSTimer`. It won't have the effect of creating an `NSTimer` with a weak reference. The only difference between that code and using a `__strong` reference is that if self is deallocated in between the two lines given then you'll pass `nil` to the timer.
The best thing you can do is create a proxy object. Something like:
```
[...]
@implementation BTWeakTimerTarget
{
    __weak id target;
SEL selector;
}
[...]
- (void)timerDidFire:(NSTimer *)timer
{
if(target)
{
[target performSelector:selector withObject:timer];
}
else
{
[timer invalidate];
}
}
@end
```
Then you'd do something like:
```
BTWeakTimerTarget *target = [[BTWeakTimerTarget alloc] initWithTarget:self selector:@selector(tick)];
timer = [NSTimer scheduledTimerWithTimeInterval:30.0 target:target selector:@selector(timerDidFire:) ...];
```
Or even add a class method to BTWeakTimerTarget of the form `+scheduledTimerWithTimeInterval:target:selector:...` to create a neater form of that code. You'll probably want to expose the real `NSTimer` so that you can `invalidate` it, otherwise the rules established will be:
1. the real target isn't retained by the timer;
2. the timer will fire once after the real target has begun (and probably completed) deallocation, but that firing will be ignored and the timer invalidated then.
|
printf vs cout performance
```
void perfprint(unsigned int count)
{
char a[100] = "fosjkdfjlsjdflw0304802";
for(unsigned int i = 0;i<count;++i)
{
printf("%s", a);
}
}
void perfcout(unsigned int count)
{
char a[100] = "fosjkdfjlsjdflw0304802";
for(unsigned int i = 0;i<count;++i)
{
cout << a;
}
}
```
Environment : C++, VS 2010, Windows 7, 32-bit, Core-i7, 4GB, 3.40 GHz
I tested both the functions with `count = 10000` for `5` times each.
Measured the performance using `QueryPerformanceCounter`.
`perfprint` > `~850` milliseconds (Avg of 5 runs)
`perfcout` > `~9000` milliseconds (Avg of 5 runs)
Does this mean `printf` is `~10x` faster than `cout`?
**Edit:**
With /Ox, /Ot, No debug information in Release build
and with `std::ios_base::sync_with_stdio(false);` in `perfcout` method,
result is same for `cout` i.e. `~9000 millisecs`
**Edit 2:**
To conclude, `cout` is faster than `printf`. The reason of the observations above were due to console output. When redirecting output to file, things turned on its head!
|
I don't have VS 2010 installed any more, but I did a quick test with VS 2013 and 2015. I modified your code slightly to reduce duplication, and include timing code, giving this:
```
#include <iostream>
#include <cstdio>
#include <chrono>
#include <string>
template <class F>
int perf(F f) {
using namespace std::chrono;
const int count = 1000000;
char a[100] = "fosjkdfjlsjdflw0304802";
auto start = high_resolution_clock::now();
for (unsigned i = 0; i < count; i++)
f(a);
auto end = high_resolution_clock::now();
return duration_cast<milliseconds>(end - start).count();
}
int main() {
std::cerr << "cout: " << perf([](char const *a) { std::cout << a; }) << "\n";
std::cerr << "printf: " << perf([](char const *a) { printf("%s", a); }) << "\n";
}
```
With optimization turned off, `cout` showed up as slightly faster (e.g., 358 ms vs. 460 for `printf`) but measuring speed with optimization turned off is fairly meaningless.
With optimization turned on `cout` won by an even larger margin (191 ms vs 365 ms for `printf`).
To keep these meaningful, I ran them all with the output redirected to a file. Without that, essentially all you'd measure would be the speed of the console driver, which is essentially meaningless and worthless.
|
Set default java compliance level for Maven projects in Eclipse
The default compliance level for Maven is 1.5 and every time that I update a maven project in Eclipse it sets back the compliance level from 1.6 to 1.5 which is really annoying for me.
I know that I can set the target to 1.6 in the POM file but the problem is that I cannot set this in the parent POM and expect the children to inherit it. So I have to do it for every single Maven module. How can I set it in my Maven project or in the whole eclipse once for a "lifetime" without modifying every single Maven module!?
|
>
> I know that I can set the target to 1.6 in pom file but the problem is that I cannot set this in the parent pom and expect the children to inherit it.
>
>
>
Setting the `<source>` and `<target>` version in the parent pom does work.
For example, in my parent pom, I have:
```
<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.3.2</version>
<configuration>
<source>1.6</source>
<target>1.6</target>
<encoding>UTF-8</encoding>
</configuration>
</plugin>
</plugins>
</pluginManagement>
```
If you are having problems, you might want to check that:
- the child specifies the correct version of the parent;
- the parent pom specifies both `source` and `target` values;
- you have run `mvn install` on the parent; and
- `mvn help:effective-pom` on the child project shows the expected source/target values.
After modifying the poms, you may need to select both projects and use *Maven->Update Project*.
|
How is CSS applied by the browser, and are repaints affected by it?
Let's say we have an HTML page with a single stylesheet `<link>`. How does the browser take the rules in this stylesheet and apply it to the HTML? I'm not asking about how to make it faster, I want to know how the rendering itself is handled.
Does it apply each rule one-by-one as it parses the stylesheet and render the result progressively? Or, are the CSS file's contents completely downloaded, then fully evaluated, and *then* applied to the HTML all at once? Or something else?
I ask this after posting an answer earlier on [a question about CSS rule order affecting rendering speed](https://stackoverflow.com/questions/6944190/does-the-order-of-rules-in-a-css-stylesheet-affect-rendering-speed/6945000), with the assumption that the styles were rendered *as* the stylesheet loaded, so the first rules would be applied before the last ones, and not all at once. I'm not sure where I picked up the idea, it's just something I have always thought.
I tried a demo on my server that looked like this:
```
<!DOCTYPE html>
<html>
<head>
<title>Test</title>
<link rel="stylesheet" href="test.css" />
</head>
<body></body>
</html>
```
`test.css` contents:
```
html { background:green }
/* thousands of lines of irrelevant CSS to make the download slow */
html { background:red }
```
Testing in Firefox 5, I expected to see green at first, then turn to red. It didn't happen. I tried with two separate stylesheets with conflicting rules and got the same results. After many combinations, the only way I got it to work was an inline `<style>` block in the `<head>`, with the conflicting rules coming from a `<link>` in the `<body>` (the body itself was completely empty except for the link tag). Even using an inline `style` attribute on the `<html>` tag, and then loading this stylesheet did not create the flicker that I expected.
Are repaints affected in *any* way by the CSS, or is the final output applied all at once after the entire stylesheet is downloaded and its rules computed to what the final output should be? Do CSS files download in parallel with the HTML itself or block it (like script tags do)? How does this actually work?
I am not looking for optimization tips, I'm looking for authoritative references on the subject, so that I can cite them in the future. It's been very difficult to search for this information without turning up tons of unrelated material. Summary:
- Is all CSS content downloaded before *any* of it is applied? (reference please)
- How is this affected by things like `@import`, multiple `<link>`s, inline style attributes, `<style>` blocks in the head, and different rendering engines?
- Does the download of CSS content block the downloading of the HTML document itself?
|
>
> How does the browser take the rules in this stylesheet and apply it to the HTML?
>
>
>
Typically this is done in a streaming fashion. The browser reads the HTML tags as a stream, and applies what rules it can to the elements it has seen so far. (Obviously this is a simplification.)
An interesting related Q&A: [Use CSS selectors to collect HTML elements from a streaming parser (e.g. SAX stream)](https://stackoverflow.com/questions/4656975/use-css-selectors-to-collect-html-elements-from-a-streaming-parser-e-g-sax-stre) ~~(a diversion while I search for the article I have in mind).~~
---
Ah, here it is: [Why we don't have a parent selector](http://snook.ca/archives/html_and_css/css-parent-selectors).
>
> We often think of our pages as these full and complete documents full of elements and content. However, browsers are designed to handle documents like a stream. They begin to receive the document from the server and can render the document before it has completely downloaded. Each node is evaluated and rendered to the viewport as it is received.
>
>
> Take a look at the body of an example document:
>
>
>
> ```
> <body>
> <div id="content">
> <div class="module intro">
> <p>Lorem Ipsum</p>
> </div>
> <div class="module">
> <p>Lorem Ipsum</p>
> <p>Lorem Ipsum</p>
> <p>Lorem Ipsum <span>Test</span></p>
> </div>
> </div>
> </body>
>
> ```
>
> The browser starts at the top and sees a `body` element. At this point,
> it thinks it's empty. It hasn't evaluated anything else. The browser
> will determine what the computed styles are and apply them to the
> element. What is the font, the color, the line height? After it
> figures this out, it paints it to the screen.
>
>
> Next, it sees a `div` element with an ID of `content`. Again, at this
> point, it thinks it's empty. It hasn't evaluated anything else. The
> browser figures out the styles and then the `div` gets painted. The
> browser will determine if it needs to repaint the body—did the element
> get wider or taller? (I suspect there are other considerations but
> width and height changes are the most common effects child elements
> have on their parents.)
>
>
> This process continues on until it reaches the end of the document.
>
>
> [](http://www.youtube.com/watch?v=ZTnIxIA5KGw)
>
>
> ### CSS gets evaluated from right to left.
>
>
> To determine whether a CSS rule applies to a particular element, it
> starts from the right of the rule and works its way left.
>
>
> If you have a rule like `body div#content p { color: #003366; }` then
> for every element—as it gets rendered to the page—it'll first ask if
> it's a paragraph element. If it is, it'll work its way up the DOM and
> ask if it's a `div` with an ID of content. If it finds what it's looking
> for, it'll continue its way up the DOM until it reaches the `body`.
>
>
> By working right to left, the browser can determine whether a rule
> applies to this particular element that it is trying to paint to the
> viewport much faster. To determine which rule is more or less
> performant, you need to figure out how many nodes need to be evaluated
> to determine whether a style can be applied to an element.
>
>
>
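The right-to-left evaluation described above can be sketched as a toy matcher (TypeScript, handling only descendant combinators and `tag#id` compound parts; a real engine handles vastly more, but the cheap-rejection shape is the same):

```typescript
interface DomNode {
  tag: string;
  id?: string;
  parent?: DomNode;
}

// Does one compound part like "div#content" match a single node?
function matchPart(n: DomNode, part: string): boolean {
  const [tag, id] = part.split("#");
  return n.tag === tag && (id === undefined || n.id === id);
}

// Right-to-left matching: the key (rightmost) part must match the element
// itself; each remaining part may match any ancestor further up the tree.
function matches(el: DomNode, selector: string): boolean {
  const parts = selector.trim().split(/\s+/).reverse();
  if (!matchPart(el, parts[0])) return false; // cheap rejection first
  let node = el.parent;
  for (const part of parts.slice(1)) {
    while (node && !matchPart(node, part)) node = node.parent;
    if (!node) return false;
    node = node.parent;
  }
  return true;
}

// The example document's paragraph:
const body: DomNode = { tag: "body" };
const content: DomNode = { tag: "div", id: "content", parent: body };
const p: DomNode = { tag: "p", parent: content };
console.log(matches(p, "body div#content p")); // true
console.log(matches(p, "body div#sidebar p")); // false
```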
---
>
> So why was the stylesheet content not applied progressively (green first, then red)?
>
>
>
I *think* the answer is that external stylesheets are *parsed* as they are downloaded, but not *applied* until the entire stylesheet has been parsed. Surely, in parsing a stylesheet, the browser optimizes away unnecessary and redundant CSS rules.
I don't have any proof to back that up right now, but that explanation sounds reasonable to me and agrees with what you're seeing, both with external and inline styles.
|
Swift regex - How to extract a matching group
This is a question about Regular Expressions in Swift. I have found various tutorials that mention how to do searching and matching but nothing about grouping:
Here is a simple Match statement:
```
if Regex("\\w{4}").test("ABCD") {
println("matches pattern")
}
```
But what if i wanted to extract the 3rd letter:
```
if Regex("\\w{2}(\\w)\\w").test("REGX") {
print("Found a match with \$1")
}
```
What is the correct syntax to print matching groups?
Or more specifically - how would I extract **2044** from `2014-10-29T20:44:00`
|
You can use `NSRegularExpression`.
```
var pattern = ".+T(\\d\\d):(\\d\\d).+"
var string = "2014-10-29T20:44:00"
var error: NSError? = nil
var regex = NSRegularExpression(pattern: pattern, options: NSRegularExpressionOptions.DotMatchesLineSeparators, error: &error)
var result = regex?.stringByReplacingMatchesInString(string, options: nil, range: NSRange(location:0,
length:countElements(string)), withTemplate: "$1$2")
```
I would be inclined to do this without regular expressions, though:
```
var charSet = NSCharacterSet(charactersInString: "T:")
var array = string.componentsSeparatedByCharactersInSet(charSet)
var result2 = "\(array[1])\(array[2])"
```
This breaks the datetime string into an array of substrings separated by either T or :, so the hours and minutes end up in the second and third elements of the returned array.
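For comparison (not Swift, but the capture-group mechanics are the same idea), extracting `2044` with a regex in TypeScript/JavaScript looks like:

```typescript
// Capture the hour and minute groups and concatenate them.
const input = "2014-10-29T20:44:00";
const match = /T(\d{2}):(\d{2})/.exec(input);
const hhmm = match ? match[1] + match[2] : undefined;
console.log(hhmm); // "2044"
```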
|
Dynamic Data - Make Friendly Column Names?
I've created a Dynamic Data project with an Entity Framework model. It works nicely. But, right now it shows all my database tables with the db column names - which aren't always the most friendly (e.g. address\_line\_1). How can I go about giving these more friendly column titles to display to the end user?
|
You should use Metadata classes to add additional annotations:
```
[MetadataType(typeof(MovieMetaData))]
public partial class Movie
{
}
public class MovieMetaData
{
[Required]
public object Title { get; set; }
[Required]
[StringLength(5)]
public object Director { get; set; }
[DisplayName("Date Released")]
[Required]
public object DateReleased { get; set; }
}
```
<http://www.asp.net/mvc/tutorials/validation-with-the-data-annotation-validators-cs> - find Using Data Annotation Validators with the Entity Framework
Attributes are used not only for setting the display name, but also for validation, toggling visibility, ordering, and controlling how data should be presented. You should look into them if you want to use the Dynamic Data Entities project.
|
Windows Server 2008 R2 - Remove Product Key to Force Reactivation after Deployment
I have a VM configured with internal software that needs to be sent to an outside party for evaluation. As it stands, the user who configured it also activated the software with a company issued MSDN key, something I don't want to be sending outside the organization.
Sysprepping the image does seem to prompt for a new key on startup but can be skipped and, surprisingly, still permits activation with the previously used key.
Is there a way to remove the key completely and force the recipient to activate the OS by their own means? Is there another way to accomplish this outside of a rebuild on an unactivated clean install?
|
Specify the [KMS Client Setup Key](http://technet.microsoft.com/en-us/library/ff793421.aspx) in the sysprep answer file. Here's a list of keys for WS 2008 R2:
>
> Windows Server 2008 R2 HPC Edition - FKJQ8-TMCVP-FRMR7-4WR42-3JCD7
>
> Windows Server 2008 R2 Datacenter - 74YFP-3QFB3-KQT8W-PMXWJ-7M648
>
> Windows Server 2008 R2 Enterprise - 489J6-VHDMP-X63PK-3K798-CPX3Y
>
> Windows Server 2008 R2 for Itanium-Based Systems - GT63C-RJFQ3-4GMB6-BRFB9-CB83V
>
> Windows Server 2008 R2 Standard - YC6KT-GKW9T-YTKYR-T4X34-R7VHC
>
> Windows Web Server 2008 R2 - 6TPJF-RBVHG-WBW2R-86QPH-6RTM4
>
>
>
This will give them a 30 or 60 day grace period (provided you rearm the machine during sysprep) before forcing them to enter their own key (unless they run their own KMS server, in which case, it will just activate for them).
|