Can I define a scope anywhere in Python?
Sometimes I find that I have to use functions with long names such as `os.path.abspath` and `os.path.dirname` a **lot** in just a few lines of code. I don't think it's worth littering the global namespace with such functions, but it would be incredibly helpful to be able to define a scope around the lines where I need those functions. As an example, this would be perfect:
```
import os, sys
closure:
abspath = os.path.abspath
dirname = os.path.dirname
# 15 lines of heavy usage of those functions
# Can't access abspath or dirname here
```
I'd love to know if this is doable somehow
|
Python doesn't have a temporary namespace tool like *[let](http://www.gnu.org/software/emacs/emacs-lisp-intro/html_node/let.html)* in Lisp or Scheme.
The usual technique in Python is to put names in the current namespace and then take them out when you're done with them. This technique is used heavily in the standard library:
```
abspath = os.path.abspath
dirname = os.path.dirname
# 15 lines of heavy usage of those functions
a = abspath(somepath)
d = dirname(somepath)
...
del abspath, dirname
```
An alternative technique to reduce typing effort is to shorten the recurring prefix:
```
>>> import math as m
>>> m.sin(x / 2.0) + m.sin(x * m.pi)
>>> p = os.path
...
>>> a = p.abspath(somepath)
>>> d = p.dirname(somepath)
```
Another technique commonly used in the standard library is to just not worry about contaminating the module namespace and just rely on *\_\_all\_\_* to list which names you intend to make public. The effect of *\_\_all\_\_* is discussed in the [docs for the import statement](http://docs.python.org/reference/simple_stmts.html?highlight=__all__#the-import-statement).
Of course, you can also create your own namespace by storing the names in a dictionary (though this solution isn't common):
```
ns = dict(abspath=os.path.abspath,
          dirname=os.path.dirname)
...
a = ns['abspath'](somepath)
d = ns['dirname'](somepath)
```
Lastly, you can put all the code in a function (which has its own local namespace), but this has a number of disadvantages:
- the setup is awkward (an atypical and mysterious use of functions)
- you need to declare as *global* any assignments you want to do that aren't temporary.
- the code won't run until you call the function
>
>
> ```
> def temp():               # disadvantage 1: awkward setup
>     global a, d           # disadvantage 2: global declarations
>     abspath = os.path.abspath
>     dirname = os.path.dirname
>     # 15 lines of heavy usage of those functions
>     a = abspath(somepath)
>     d = dirname(somepath)
>
> temp()                    # disadvantage 3: invoking the code
>
> ```
>
>
|
Longest acyclic path in a directed unweighted graph
What algorithm can be used to find the longest path in an unweighted directed acyclic graph?
|
[Dynamic programming](http://en.wikipedia.org/wiki/Dynamic_programming). It is also referenced in [Longest path problem](http://en.wikipedia.org/wiki/Longest_path_problem), given that it is a DAG.
The following pseudocode is from Wikipedia (for an unweighted graph, every `weight(G, (v, w))` is simply `1`):
```
algorithm dag-longest-path is
    input:
        Directed acyclic graph G
    output:
        Length of the longest path

    length_to = array with |V(G)| elements of type int with default value 0

    for each vertex v in topOrder(G) do
        for each edge (v, w) in E(G) do
            if length_to[w] <= length_to[v] + weight(G, (v, w)) then
                length_to[w] = length_to[v] + weight(G, (v, w))

    return max(length_to[v] for v in V(G))
```
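For a concrete, runnable version, here is a direct Python translation of that pseudocode (the edge-list encoding and helper name are my own; every edge weight is `1` since the graph is unweighted):

```python
from collections import deque

def longest_path_length(n, edges):
    """Length (in edges) of the longest path in a DAG with vertices 0..n-1."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for v, w in edges:
        adj[v].append(w)
        indeg[w] += 1

    # Kahn's algorithm produces the topological order topOrder(G).
    queue = deque(v for v in range(n) if indeg[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)

    length_to = [0] * n
    for v in order:                  # relax edges in topological order
        for w in adj[v]:             # every edge has weight 1 here
            length_to[w] = max(length_to[w], length_to[v] + 1)
    return max(length_to)

print(longest_path_length(5, [(0, 1), (1, 2), (0, 2), (2, 3), (1, 4)]))  # 3
```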
|
Run EFI files/scripts from [boot, virtual media] ISO
Most vendors now distribute server firmware updates only as sets of `EFI` files ([example](https://www.supermicro.com/Bios/softfiles/4661/X11DPU8_224.zip)). But the servers' built-in IP-KVM (like any external one) can only attach `ISO` images as bootable virtual media, which makes remote firmware flashing over virtual media impossible.
The servers are remote most of the time, and nobody can physically access them to insert a `USB` stick for flashing firmware updates. Firmware updates must be remote-only, also to speed up the whole process. The question is: how can I launch `EFI` files like [these](https://www.supermicro.com/Bios/softfiles/4661/X11DPU8_224.zip) from a bootable `ISO` to flash firmware updates?
Most probably a ready-made solution does not exist, so I would like to build one myself, but I haven't found anything on Google to get started; I need a pointer to a manual or something similar. Maybe there is a way to arrange these files in the `ISO` so that the `EFI` shell will be able to access and start them?
I'd prefer a `Linux`-based solution for building the `ISO`.
|
The solution below was implemented on Linux Mint 19 64-bit in a virtual machine.
**Tools needed**
GParted – a GUI tool for managing partitions, generally available on live Linux CDs/DVDs. If you do not have free space at the end of your disk, you will have to resize an existing partition and create a small FAT partition. If the existing partitions are in use, you may need to run GParted from a live Linux CD/DVD so it can work on your partition.
genisoimage – a command-line tool for creating ISOs. If it is not available, just install it; it was readily available on Linux Mint 19 64-bit:
```
sudo apt-get install genisoimage
```
Step 1 – Using GParted (preferably booted off a live Linux CD/DVD; in this case I used live Linux Mint 19 64-bit), create a small FAT16-formatted partition at the end of your disk. For this particular case, 100 MB is more than enough. Ensure that for such a small size the format is FAT16 rather than FAT32.
[](https://i.stack.imgur.com/5RzEm.png)
Step 2 – Unzip and copy the required files to this FAT-formatted partition. You can simply use the default GUI file explorer in Linux; no separate command-line unzip is needed.
[](https://i.stack.imgur.com/y9Uce.png)
[](https://i.stack.imgur.com/LLK1P.png)
Step 3 – Create an image of this partition. The command is:
```
dd if=/dev/sda3 of=/home/test/efi/fat.img
```
In the above example, `/dev/sda3` is the FAT-formatted partition holding the files and `/home/test/efi/fat.img` is the generated image file.
[](https://i.stack.imgur.com/eHyGd.png)
Step 4 – Now we create an EFI-bootable ISO. The files inside need not themselves be bootable.
```
genisoimage -v -J -r -V "TEST" \
-o /home/test/myiso.iso \
-eltorito-alt-boot \
-e fat.img \
-no-emul-boot \
/home/test/efi
```
[](https://i.stack.imgur.com/FaPpL.png)
[](https://i.stack.imgur.com/uPS8d.png)
In the above command, we omit the BIOS bootloader and keep only the EFI (El Torito alternate) boot entry. `/home/test/myiso.iso` is the new ISO file being created and `/home/test/efi` is the location of the source files; there is nothing in it except the bootloader image.
The ISO is now created on the Linux box. Mount the ISO as a CD drive and boot your UEFI system into the EFI shell. The boot image is mapped to a drive in the EFI shell, and the folder with its contents inside that image is accessible through the EFI shell.
[](https://i.stack.imgur.com/QMiZd.png)
[](https://i.stack.imgur.com/JggPC.png)
|
How can I generate new variable names on the fly in a shell script?
I'm trying to generate dynamic var names in a shell script to process a set of files with distinct names in a loop as follows:
```
#!/bin/bash
SAMPLE1='1-first.with.custom.name'
SAMPLE2='2-second.with.custom.name'
for (( i = 1; i <= 2; i++ ))
do
echo SAMPLE{$i}
done
```
I would expect the output:
```
1-first.with.custom.name
2-second.with.custom.name
```
but I got:
```
SAMPLE{1}
SAMPLE{2}
```
Is it possible to generate variable names on the fly?
|
You need to utilize Variable Indirection:
```
SAMPLE1='1-first.with.custom.name'
SAMPLE2='2-second.with.custom.name'
for (( i = 1; i <= 2; i++ ))
do
    var="SAMPLE$i"
    echo "${!var}"
done
```
From the [Bash man page](https://linux.die.net/man/1/bash), under 'Parameter Expansion':
>
> "If the first character of parameter is an exclamation point (!), a
> level of variable indirection is introduced. Bash uses the value of
> the variable formed from the rest of parameter as the name of the
> variable; this variable is then expanded and that value is used in the
> rest of the substitution, rather than the value of parameter itself.
> This is known as indirect expansion."
>
>
>
|
What does formArray.controls in FormArray mean in Angular 5?
I am new to Angular 5, so I am still grasping the concepts. Using Angular's documentation on reactive forms as an example (<https://angular.io/guide/reactive-forms>), the following code is given:
```
<div formArrayName="secretLairs" class="well well-lg">
  <div *ngFor="let address of secretLairs.controls; let i=index"
       [formGroupName]="i">
    <!-- The repeated address template -->
  </div>
</div>
```
What does `secretLairs.controls` mean, and what is it? According to Angular:
The source of the repeated items is the FormArray.controls, not the FormArray itself. Each control is an address FormGroup, exactly what the previous (now repeated) template HTML expected.
Does `secretLairs.controls` contain any data? Can I replace this portion with, say, an object of type `any` instantiated with data obtained from a web API?
For example, instead of
```
*ngFor="let address of secretLairs.controls
```
I use
```
*ngFor="let address of addresses
```
where `addresses` is of type `any`, with data obtained from the database.
|
First, there are three types of form controls - `FormControl`, `FormGroup`, and `FormArray` - all inheriting from `AbstractControl`.
When you use Reactive Forms for validation and include `formGroupName`, `formControlName`, or `formArrayName` in the component's template, you are declaratively defining a map between the form control tree and the root `FormGroup` model.
For example, given the following template:
```
<div [formGroup]="formGroup">
  <div formGroupName="personalInfo">
    First Name: <input type="text" formControlName="firstName"><br />
    Last Name: <input type="text" formControlName="lastName"><br />
  </div>
  <div formArrayName="cities">
    Top cities: <input *ngFor="let city of cities; index as i" type="text" [formControlName]="i">
  </div>
</div>
```
You are declaratively setting up a form map for collecting information, which will eventually produce a JSON object in a specific format. For example, given the above form model, `formGroup.value` would return:
```
{
  "personalInfo": {
    "firstName": "John",
    "lastName": "Smith"
  },
  "cities": [
    "New York",
    "Winnipeg",
    "Toronto"
  ]
}
```
Once you've declared the structure of your form group in your template, you need to imperatively create the corresponding `formGroup`, `formControl`, and `formArray` instances in your component class. As you set up each control, you have the opportunity to supply additional parameters:
```
1. Initial Form Value
2. Array of synchronous validators
3. Array of asynchronous validators
```
This applies to any of the abstract controls.
This is what the corresponding formGroup model would look like given the above template:
```
export class AppComponent {
  firstName: string;
  lastName: string;
  cities: string[];

  @Input() formGroup: FormGroup;

  constructor(private fb: FormBuilder) {
    // set up initial values
    this.cities = ['New York', 'Winnipeg', 'Toronto'];
    this.firstName = 'John';
    this.lastName = 'Smith';

    // create a formGroup that corresponds to the template
    this.formGroup = fb.group({
      firstName: [this.firstName, Validators.required],
      lastName: [this.lastName, Validators.required],
      cities: fb.array(this.cities.map(t => fb.control(t, Validators.required)))
    });
  }
}
```
To bind to any of the form's validation flags, you can use the root `formGroup` and a path string (which supports dots) to find the particular group, control, or array.
For example:
```
<div *ngIf="formGroup.get('firstName').errors.required">First Name is Required</div>
```
Hopefully, that makes it clear.
[Edit]
Better yet, given that Reactive Forms is all about declaratively setting up a model in the template, and mapping the same model imperatively in the component class, you should consider defining a JSON model and using that instead.
For example, suppose we have a custom model `MyModel` which has a `firstName` property, `lastName` property, and a `cities` property. The component class would look like this:
```
export class AppComponent {
  @Input() model: MyModel;
  @Output() modelChange: EventEmitter<MyModel>;
  @Input() formGroup: FormGroup;

  constructor(private fb: FormBuilder) {
    this.modelChange = new EventEmitter<MyModel>();

    // create a formGroup that corresponds to the template
    this.formGroup = fb.group({
      firstName: [this.model.firstName, Validators.required],
      lastName: [this.model.lastName, Validators.required],
      cities: fb.array(this.model.cities.map(t => fb.control(t, Validators.required)))
    });
  }

  onSubmit() {
    if (this.formGroup.valid) {
      this.model = this.formGroup.value;
      this.modelChange.next(this.model);
    }
  }
}
```
|
Create c# code under linux
Lately I have been using C# a lot on Windows. I am trying to stick with C# and create C# applications under Ubuntu 11.
The problem is that I was so used to the visual editor that I never learned how to create UI elements in code only.
Could anyone point me in a direction where I can find out how to create windows, labels, textboxes, etc. just by using code?
|
You can install Mono under Linux, and compile and run C# programs with it.
Here is some information about **what Mono is**:
<http://www.mono-project.com/What_is_Mono>
To compile and install Mono, you can start [here](http://www.mono-project.com/Compiling_Mono), and [this is about how to use an IDE to develop Mono applications under Linux](http://monodevelop.com/).
Although Mono can help you run .NET/C# applications under Linux, you still need to consider the following when migrating an application:
- Linux is case-sensitive but Windows is not. If code needs to read an ini or XML file, ensure the file name's case is correct in the Linux program.
- The directory structure is different (e.g. C:\Windows vs. /etc/); ensure the code references the correct locations.
- Some XML configuration information needs to be re-tested to ensure compatibility.
- Do a high-coverage test to ensure every function works properly.
- The security environment differs between Linux and Windows, and this may have an impact on the application.
|
In R, use dplyr::across and compute difference between 2 columns, for multiple column prefixes
```
zed <- data.frame(
  aAgg = c(5, 10, 15, 20),
  bAgg = c(8, 16, 24, 32),
  aPg = c(6, 9, 11, 24),
  bPg = c(7, 15, 22, 26)
)

diff_func <- function(col) {
  return(`{col}Agg` - `{colPg}`)
}

zed %>%
  dplyr::mutate(dplyr::across(.cols = c('a', 'b'), .fns = diff_func, .names = "{col}Diff"))

# we want the output that this outputs, without having to have a mutate for each field.
zed <- zed %>%
  dplyr::mutate(aDiff = aAgg - aPg) %>%
  dplyr::mutate(bDiff = bAgg - bPg)
```
We are attempting to use dplyr's `across` function to create multiple columns. For each column prefix (`a` and `b` in this scenario), we'd like to compute the difference between `prefixAgg` - `prefixPg`, and name the new column `prefixDiff`. The last 3 lines of code in the example above generate the desired output. Our `diff_func` is currently not correct, throwing an error.
Is there a function we can pass to `across` that will generate this output?
|
We may need to loop over either the 'Agg' columns or the 'Pg' columns, `get` the corresponding column after replacing the substring in the column name (`cur_column()`), and adjust the `.names`:
```
library(dplyr)
library(stringr)
zed %>%
  mutate(across(ends_with("Agg"),
                ~ .x - get(str_replace(cur_column(), "Agg", "Pg")),
                .names = "{str_replace(.col, 'Agg', 'Diff')}"))
```
-output
```
  aAgg bAgg aPg bPg aDiff bDiff
1    5    8   6   7    -1     1
2   10   16   9  15     1     1
3   15   24  11  22     4     2
4   20   32  24  26    -4     6
```
---
Or use two `across`, get the difference - the resulting column will be a data.frame/tibble, then `unpack` the data.frame column
```
library(tidyr)
zed %>%
  mutate(Diff = across(ends_with("Agg")) - across(ends_with("Pg"))) %>%
  unpack(where(is.data.frame), names_sep = "")

# A tibble: 4 × 6
   aAgg  bAgg   aPg   bPg DiffaAgg DiffbAgg
  <dbl> <dbl> <dbl> <dbl>    <dbl>    <dbl>
1     5     8     6     7       -1        1
2    10    16     9    15        1        1
3    15    24    11    22        4        2
4    20    32    24    26       -4        6
```
NOTE: If needed, can rename the columns
```
zed %>%
  mutate(across(ends_with("Agg"),
                .names = "{str_remove(.col, 'Agg')}Diff") -
           across(ends_with("Pg")))

  aAgg bAgg aPg bPg aDiff bDiff
1    5    8   6   7    -1     1
2   10   16   9  15     1     1
3   15   24  11  22     4     2
4   20   32  24  26    -4     6
```
---
Or may also use `dplyover` with `across2`
```
library(dplyover)
zed %>%
  mutate(across2(ends_with("Agg"), ends_with("Pg"), `-`,
                 .names_fn = ~ str_replace(.x, "Agg_.*", "Diff")))

  aAgg bAgg aPg bPg aDiff bDiff
1    5    8   6   7    -1     1
2   10   16   9  15     1     1
3   15   24  11  22     4     2
4   20   32  24  26    -4     6
```
|
Primefaces chart + jqplot extender - rounded value in the y-axis
**Background**
I have a primefaces line chart (date on x, integer >= 0 on y) extended with jqplot options:
```
function extender() {
    this.cfg.axes = {
        xaxis : {
            renderer : $.jqplot.DateAxisRenderer,
            rendererOptions : {
                tickRenderer : $.jqplot.CanvasAxisTickRenderer
            },
            tickOptions : {
                fontSize : '10pt',
                fontFamily : 'Tahoma',
                angle : -40,
                formatString : '%b-%y'
            },
            tickInterval : '2592000000'
        },
        yaxis : {
            min : 0,
            rendererOptions : {
                tickRenderer : $.jqplot.CanvasAxisTickRenderer
            },
            tickOptions : {
                fontSize : '10pt',
                fontFamily : 'Tahoma',
                angle : 0,
                formatString : '%d'
            }
        }
    };
    this.cfg.axes.xaxis.ticks = this.cfg.categories;
}
```
I'm using the jqplot extender to have custom date interval on the x-axis and this is working fine:

**Problem**
When I use the option `min: 0` in the y-axis the formatting of numbers goes really funky, especially when there are small values:

Note that the `minY` attribute in PrimeFaces doesn't work (probably because the extender overwrites it).
To fix that, I use `formatString: '%d'`. It works, but it creates a problem with the number of ticks:

As you see on the screenshot, there are several times the line for the value 1.
**Question**
How can I make sure I don't get the same value several times on the y-axis?
I can't really use a static number of ticks, because when the data grows large (say, around 100), I do want several values on the y-axis (e.g. 20, 40, etc.)
|
I managed to solve my issue using ideas from Mehasse's post.
Defining the `max` value as suggested by Mehasse didn't remove the unwanted tick lines, but it helped me find the answer.
By default, primefaces/jqplot wants to have `4` y-axis tick lines. Thus, if the max value is below `4`, there will be duplication in the y-axis labels when they are rounded (`formatString: '%d'`).
What I basically want is the tick interval to be `max(y) / 4` when `max(y) > 4`, and `1` otherwise:
```
function actionPlanExtender() {
    var series_max = maxSeries(this.cfg.data);
    var numberOfTicks = 4;
    var tickInterval = Math.max(1, Math.ceil(series_max / numberOfTicks));

    this.cfg.axes = {
        xaxis : {
            renderer : $.jqplot.DateAxisRenderer,
            rendererOptions : {
                tickRenderer : $.jqplot.CanvasAxisTickRenderer
            },
            tickOptions : {
                fontSize : '10pt',
                fontFamily : 'Tahoma',
                angle : -40,
                formatString : '%b-%y'
            },
            tickInterval : '2592000000'
        },
        yaxis : {
            min : 0,
            rendererOptions : {
                tickRenderer : $.jqplot.CanvasAxisTickRenderer
            },
            tickOptions : {
                fontSize : '10pt',
                fontFamily : 'Tahoma',
                angle : 0,
                formatString : '%d'
            },
            tickInterval : tickInterval
        }
    };
    this.cfg.axes.xaxis.ticks = this.cfg.categories;
}
```
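The tick-interval rule can be sanity-checked with a quick calculation (Python is used here just for the arithmetic; it mirrors the JavaScript above):

```python
import math

def tick_interval(series_max, number_of_ticks=4):
    # mirrors Math.max(1, Math.ceil(series_max / numberOfTicks))
    return max(1, math.ceil(series_max / number_of_ticks))

print(tick_interval(3))    # 1  -> ticks 0,1,2,3: integer labels, no duplicates
print(tick_interval(100))  # 25 -> ticks 0,25,50,75,100
```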
To compute the y-max value, I get the plot data from `this.cfg.data`, which is of the form `[series_1, ..., series_n]` with `series_i = [[x_1, y_1], ..., [x_m, y_m]]`.
The `maxSeries` function looks like:
```
function maxSeries(datas) {
    var maxY = null;
    var dataLength = datas.length;
    for (var dataIdx = 0; dataIdx < dataLength; dataIdx++) {
        var data = datas[dataIdx];
        var l = data.length;
        for (var pointIdx = 0; pointIdx < l; pointIdx++) {
            var point = data[pointIdx];
            var y = point[1];
            if (maxY == null || maxY < y) {
                maxY = y;
            }
        }
    }
    return maxY;
}
```
**Note that in my case I know I don't have values below `0`. This code should be updated if that is not the case.**
|
ListView asynchronous image loading strategy
I currently have a ListView with a custom adapter that gets information describing the content of the rows asynchronously. Part of each row is an image URL, that I'm planning to download asynchronously and then display.
My current plan for a strategy to download these images is:
- Keep a cache of soft references to downloaded Bitmap objects.
- When a getView() is called and the bitmap is in the cache, set the bitmap for the ImageView directly.
- If the bitmap isn't in the cache, start loading it in a separate thread, after the download is complete add it to the cache and call notifyDataSetChanged() on the adapter.
I am also planning to kill pending downloads when `onDestroy()` (or possibly even `onPause()`) is called on the Activity owning the ListView, but most importantly I want to kill the download of pending images when a row goes off screen. I might only actually cancel the download after a short delay, so it can be resumed without wasting bandwidth if the row comes back on screen quickly.
I, however, am unsure about a few things:
- What is the best way to detect when a row goes off-screen so I can cancel the download?
- Is calling notifyDataSetChanged() the best thing to do after the download has completed or is there a better way?
Also any comments on the whole strategy would be appreciated.
|
I don't think calling notifyDataSetChanged() is really needed... I would do it like this:
- store the URL as the tag in the view when it is created/updated
- register a listener in the downloader thread (AsyncTask?) for the download, keeping a reference to the view and the URL
- whenever an image has been downloaded asynchronously, check the tag in the view and, if it matches, update the ImageView (important to do this on the UI thread, but with AsyncTask that is given). The image should also be stored on the SD card (and every time you request a URL, first check whether it has already been downloaded).
- every time getView() reuses a view (the passed view is not null), check the tag (old URL), replace it with the new URL, and cancel the download of the old URL.
I think that would be pretty much it (some corner cases might come up)...
|
Using EXCEPT clause in PostgreSQL
I am trying to use the `EXCEPT` clause to retrieve data from a table. I want to get all the rows from `table1` except the ones that exist in `table2`.
As far as I understand, the following would not work:
```
CREATE TABLE table1(pk_id int, fk_id_tbl2 int);
CREATE TABLE table2(pk_id int);
Select fk_id_tbl2
FROM table1
Except
Select pk_id
FROM table2
```
The only way I seem to be able to use `EXCEPT` is to select from the same table, or to select columns that have the same column name from different tables.
Can someone please explain how best to use the `EXCEPT` clause?
|
Your query seems perfectly valid:
```
SELECT fk_id_tbl2 AS some_name
FROM table1
EXCEPT -- you may want to use EXCEPT ALL
SELECT pk_id
FROM table2;
```
Column ***names*** are irrelevant to the query. Only ***data types*** must match. The output column name of your query is `fk_id_tbl2`, just because it's the column name in the first `SELECT`. You can use any alias.
What's often overlooked: the subtle differences between `EXCEPT` (which folds duplicates) and `EXCEPT ALL` - which keeps all individual unmatched rows.
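To see both points at once (mismatched column names are fine, and `EXCEPT` works on the values), here is a small self-contained demo; it uses Python's built-in SQLite driver only for convenience (SQLite implements `EXCEPT`, though not `EXCEPT ALL`; in PostgreSQL the same query works verbatim):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1(pk_id INTEGER, fk_id_tbl2 INTEGER);
    CREATE TABLE table2(pk_id INTEGER);
    INSERT INTO table1(pk_id, fk_id_tbl2) VALUES (1, 10), (2, 20), (3, 30);
    INSERT INTO table2(pk_id) VALUES (20);
""")

# Column names differ on each side; only the count and types of the
# selected columns have to line up.
rows = con.execute("""
    SELECT fk_id_tbl2 FROM table1
    EXCEPT
    SELECT pk_id FROM table2
""").fetchall()
print(sorted(rows))   # [(10,), (30,)]
```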
More explanation and other ways to do the same, some of them much more flexible:
- [Select rows which are not present in other table](https://stackoverflow.com/questions/19363481/select-rows-which-are-not-present-in-other-table/19364694#19364694)
[Details for `EXCEPT` in the manual.](https://www.postgresql.org/docs/current/sql-select.html#SQL-EXCEPT)
|
Reactive form for each table row
I'm using Angular 2 and I want to validate the controls in each row separately, but I haven't found a way to do that. I want it done using reactive forms only, not the template-driven approach. I want `[formGroup]` on each `<tr>`. Any help will be appreciated. Below is the structure of my code:
```
<tbody *ngFor="let single of allTeamDetails"
[ngClass]="{'alternate-row-color': $even}">
<tr>
<td class="td-data first-column">
<input type="text" class="input-text form-control"
[value]="single.first_name">
</td>
<td class="td-data second-column">
<input type="text" class="input-text form-control"
[value]="single.last_name">
</td>
<td class="td-data third-column">
<input type="email" class="input-text form-control"
[value]="single.email">
</td>
<td class="td-data fourth-column">
<select class="selection-dropdown width-80-percent"
[value]="single.user_role">
<option *ngFor="let singleRole of allUserRole"
value="{{singleRole.name}}">
{{setUserRoleAndType(singleRole.name)}}</option>
</select>
</td>
<td class="td-data fifth-column" >
<input type="password" class="input-text form-control">
</td>
<td class="td-data sixth-column" >
<input type="password" class="input-text form-control">
</td>
<td class="td-data save-send-tm-data">
<button class="btn save-user-details save-sub-account-details"
type="button" data-toggle="tooltip" title="Save">
<i class="fa fa-floppy-o" aria-hidden="true"></i>
</button>
</td>
<td class="td-data save-send-tm-data">
<button type="button"
class="btn save-user-details save-sub-account-details"
data-toggle="tooltip" title="Send Message"
(click)="openSendMessageModal(single.email)">
<i class="fa fa-envelope" aria-hidden="true"></i>
</button>
</td>
</tr>
</tbody>
```
|
Use `formArray`. What you will do is create a `formGroup` (master form) that contains multiple smaller `formGroup`s. Each smaller formGroup will be what is repeated in your `*ngFor`.
Your form should look something like this:
```
<!-- This is your master form -->
<form [formGroup]="teamForm">
  <!-- Use formArray to create multiple, smaller forms -->
  <div formArrayName="memberDetails">
    <div *ngFor="let single of allTeamDetails; let $index=index">
      <div [formGroupName]="$index">
        <div>
          <!-- your field properties of every repeated item -->
          <input placeholder="First Name" type="text" formControlName="firstName" />
        </div>
        <div>
          <input placeholder="Last Name" type="text" formControlName="lastName" />
        </div>
      </div>
    </div>
  </div>
</form>
```
In your component, you can use Angular's `FormBuilder` to help build the form.
In your constructor:
```
constructor(private formBuilder: FormBuilder) {
  this.teamForm = this.formBuilder.group({
    memberDetails: this.formBuilder.array([])
  });
}
```
Now you can initialize every property of your repeated models, and customise the validators of each field. Note how the properties in your TypeScript file correspond to the ones in the HTML. I do all this in `ngOnInit` so that the properties are bound to the HTML before it is rendered.
```
ngOnInit() {
  this.teamForm = this.formBuilder.group({
    memberDetails: this.formBuilder.array(
      this.allTeamDetails.map(x => this.formBuilder.group({
        firstName: [x.first_name, [Validators.required, Validators.minLength(2)]],
        lastName: [x.last_name, [Validators.required, Validators.minLength(2)]]
      }))
    )
  });
}
```
After all this, adding validation messages is trivial. The benefits of doing it this way?
1. Because each instance is now a single `formGroup` by itself, you can customize your validation logic down to a very granular level.
2. Following from the point above, you can subscribe to the `valueChanges` of each smaller form, down to each single field. For example, if you want to subscribe to the first team member's first name field changes, you can do this:
```
this.teamForm
  .controls.memberDetails
  .controls[0]            // get the first instance
  .controls.firstName     // get the firstName formControl
  .valueChanges
  .subscribe(x => console.log('value changed!'))
```
3. In the event that you want to validate the master form, you can do so as well.
Have created a [plnkr](http://plnkr.co/edit/kjaxo3FQAmjJnP3yfl0w?p=preview), just for you :)
|
What does ssh-keygen [-o] do?
Gitlab lists their `ssh-keygen` commands with the `-o` flag:
<https://docs.gitlab.com/ee/ssh/>
But I cannot find `-o` in any help page or man page.
What does it do?
|
Quoting OpenSSH 7.7 man page of `ssh-keygen`:
>
> `-o` Causes ssh-keygen to save private keys using the new OpenSSH
> format rather than the more compatible PEM format. The new
> format has increased resistance to brute-force password cracking
> but is not supported by versions of OpenSSH prior to 6.5.
> Ed25519 keys always use the new private key format.
>
>
>
---
The option existed in OpenSSH 6.5 through 7.7. Since OpenSSH 7.8, the `-o` behavior is the default and the option is no longer documented (it is silently ignored when used).
In those older versions, `ssh-keygen` would by default use the PEM format for RSA, DSA and ECDSA keys (but not Ed25519).
In OpenSSH 7.8 and newer, you can get the key in the PEM format by using [`-m PEM` flag](https://man.openbsd.org/ssh-keygen#m).
|
What does calc(.333 \* (100vw - 12em)) mean?
Can someone please explain to me what `calc(.333 * (100vw - 12em))` means in the following line of HTML? And in particular the .333 value; where does that come from?
```
sizes="(min-width: 36em) calc(.333 * (100vw - 12em)),
100vw"
```
Taken from <http://ericportis.com/posts/2014/srcset-sizes/>
>
> A length can be all sorts of things! A length can be absolute (e.g.
> 99px, 16em) or relative (33.3vw, as in our example). You’ll note that,
> unlike our example, there are lots of layouts which combine both
> absolute and relative units. That’s where the surprisingly
> well-supported calc() function comes in. Let’s say we added a 12em
> sidebar to our 3-column layout. We’d adjust our sizes attribute like
> so:
>
>
>
```
sizes="(min-width: 36em) calc(.333 * (100vw - 12em)),
100vw"
```
I understand this much:
- (min-width: 36em) = media query
- calc(.333 \* (100vw - 12em)) = rendered image size
- 100vw = default rendered image length
|
Let's dissect this expression:
```
calc(.333 * (100vw - 12em))
```
`calc` means evaluate as an expression.
`vw` is `1%` of the view width, so `100vw` is `100%` of the view width.
`em` is the element's font size (historically, roughly the width of a capital `M`), so `12em` would be about the width of twelve M's: MMMMMMMMMMMM
`100vw - 12em` is thus the view width minus twelve M's. If this post had the width of the view, then that would be something like:
```
/ from here to here \
```
`MMMMMM------------------------------------------------------------------------------MMMMMM`
`.333` of that is around `1/3`, so that would be one third of the width above. So, this width would look something like:
```
/ from here to here \ / or from here to here \
```
`MMMMMM------------------------------------------------------------------------------MMMMMM`
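Plugging in concrete numbers may help; the viewport width and font size below are arbitrary assumptions, not values from the article:

```python
# Assume a 1200px-wide viewport and a 16px root font size (so 1em = 16px).
viewport_px = 1200            # 100vw
em_px = 16                    # 1em
sidebar_px = 12 * em_px       # 12em = 192px

# calc(.333 * (100vw - 12em)):
image_px = 0.333 * (viewport_px - sidebar_px)
print(round(image_px, 3))     # 335.664 -- about a third of the space
                              # left after subtracting the 12em sidebar
```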
|
Scope binding not working in modal popup angularjs
I am using Angular to bind data to my UI, which works perfectly well. But when a modal popup is opened on a button click, the binding in the modal does not work.

```
<div class="modal-dialog">
  <div class="modal-content">
    <div class="modal-header">
      <button type="button" class="close" data-dismiss="modal" aria-hidden="true">×</button>
      <h4 class="modal-title">{{checkItem}}</h4>
    </div>
    <div class="modal-body">
    </div>
    <div class="modal-footer">
      <button ng-click="saveClient()" class="btn btn-primary pull-right btn-tabkey"><i class="fa fa-save"></i>Save</button>
      <button type="button" class="btn btn-default" data-dismiss="modal" ng-click="focusInput=false"><i class="fa fa-ban"></i>Cancel</button>
    </div>
  </div>
  <!-- /.modal-content -->
</div>
```
Angular:
```
angular.module('myModule').controller('myController', ["$rootScope", "$scope", "$filter", "dataService", function ($rootScope, $scope, $filter, dataService) {
    $scope.checkItem = "";

    $scope.loadEditForm = function () {
        $scope.checkItem = "yes";
        $("#modal-form-edit").modal();
    };
}]);
```
|
It seems like you are opening the modal using a plain jQuery approach. This is not going to work in Angular, because the opened modal is not connected to the Angular application, so Angular doesn't know that the modal has to be handled, its HTML parsed, etc.
Instead you should use directives properly, or in the case of a modal dialog you can simply use existing ones, like those from the Angular UI project, which provides ready-made Bootstrap directives for Angular. In your case you need the [`$modal`](http://angular-ui.github.io/bootstrap/#/modal) service.
The usage would then be very simple:
```
// remember to add ui.bootstrap module dependency
angular.module('myModule', ['ui.bootstrap']);
angular.module('myModule').controller('myController', ["$rootScope", "$scope", "$filter", "$modal", "dataService", function ($rootScope, $scope, $filter, $modal, dataService) {
$scope.checkItem = "";
$scope.loadEditForm = function () {
$scope.checkItem = "yes";
$modal.open({
templateUrl: 'modal.html',
controller: 'modalController',
scope: $scope
});
};
}]);
```
**Demo:** <http://plnkr.co/edit/kQz0fiaXLv7T37N8fzJU?p=preview>
|
mysql - making a mechanism similar to Oracle's seqences
MySQL provides an automatic mechanism to increment record IDs. This is OK for many purposes, but I need to be able to use sequences as offered by ORACLE. Obviously, there is no point in creating a table for that purpose.
The solution SHOULD be simple:
1) Create a table to host all the needed sequences,
2) Create a function that increases the value of a specific sequence and returns the new value,
3) Create a function that returns the current value of a sequence.
In theory, it looks simple... BUT...
When increasing the value of a sequence (much the same as `nextval` in Oracle), you need to prevent other sessions from performing this operation (or even fetching the current value) until the update is completed.
Two theoretical options:
a - Use an UPDATE statement that would return the new value in a single shot, or
b - Lock the table between the UPDATE and SELECT.
Unfortunately, it would appear that MySQL does not allow locking tables within functions/procedures, and when trying to do the whole thing in a single statement (like UPDATE... RETURNING...) you must use @-type variables, which survive the completion of the function/procedure.
Does anyone have an idea/working solution for this?
Thanks.
|
The following is a simple example with a FOR UPDATE [intention lock](https://dev.mysql.com/doc/refman/5.7/en/innodb-locking.html). A row-level lock with the INNODB engine. The sample shows four rows for next available sequences that will not suffer from the well-known INNODB Gap Anomaly (the case where gaps occur after failed usage of an AUTO\_INCREMENT).
**Schema:**
```
-- drop table if exists sequences;
create table sequences
( id int auto_increment primary key,
sectionType varchar(200) not null,
nextSequence int not null,
unique key(sectionType)
) ENGINE=InnoDB;
-- truncate table sequences;
insert sequences (sectionType,nextSequence) values
('Chassis',1),('Engine Block',1),('Brakes',1),('Carburetor',1);
```
**Sample code:**
```
START TRANSACTION; -- Line1
SELECT nextSequence into @mine_to_use from sequences where sectionType='Carburetor' FOR UPDATE; -- Line2
select @mine_to_use; -- Line3
UPDATE sequences set nextSequence=nextSequence+1 where sectionType='Carburetor'; -- Line4
COMMIT; -- Line5
```
Ideally you do not have a `Line3` or bloaty code at all which would delay other clients on a Lock Wait. Meaning, get your next sequence to use, perform the update (the incrementing part), and `COMMIT`, **ASAP**.
**The above in a stored procedure:**
```
DROP PROCEDURE if exists getNextSequence;
DELIMITER $$
CREATE PROCEDURE getNextSequence(p_sectionType varchar(200),OUT p_YoursToUse int)
BEGIN
-- for flexibility, return the sequence number as both an OUT parameter and a single row resultset
START TRANSACTION;
SELECT nextSequence into @mine_to_use from sequences where sectionType=p_sectionType FOR UPDATE;
UPDATE sequences set nextSequence=nextSequence+1 where sectionType=p_sectionType;
COMMIT; -- get it and release INTENTION LOCK ASAP
set p_YoursToUse=@mine_to_use; -- set the OUT parameter
select @mine_to_use as yourSeqNum; -- also return as a 1 column, 1 row resultset
END$$
DELIMITER ;
```
**Test:**
```
set @myNum:= -1;
call getNextSequence('Carburetor',@myNum);
+------------+
| yourSeqNum |
+------------+
| 4 |
+------------+
select @myNum; -- 4
```
Modify the stored procedure accordingly for your needs, such as having only one of the two mechanisms for retrieving the sequence number (either the OUT parameter or the result set). In other words, it is easy to ditch the `OUT` parameter concept.
If you do not adhere to ASAP release of the LOCK (which obviously is not needed after the update), and proceed to perform time consuming code, prior to the release, then the following can occur after a timeout period for other clients awaiting a sequence number:
>
> ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting
> transaction
>
>
>
Hopefully this is never an issue.
```
show variables where variable_name='innodb_lock_wait_timeout';
```
MySQL Manual Page for [innodb\_lock\_wait\_timeout](http://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_lock_wait_timeout).
On my system at the moment it has a value of 50 (seconds). A wait of more than a second or two is probably unbearable in most situations.
Also of interest during TRANSACTIONS is that section of the output from the following command:
```
SHOW ENGINE INNODB STATUS;
```
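Stepping back, the procedure is a plain read-modify-write made atomic by the row lock. A minimal Python sketch of the same pattern, just to show why the SELECT and the UPDATE must share one transaction (a `threading.Lock` stands in for the `FOR UPDATE` row lock; nothing here is MySQL API):

```python
import threading

sequences = {'Carburetor': 1}      # stands in for the sequences table
lock = threading.Lock()            # stands in for the FOR UPDATE row lock

def get_next_sequence(section_type):
    # START TRANSACTION ... COMMIT: read and increment atomically
    with lock:
        mine_to_use = sequences[section_type]       # SELECT ... FOR UPDATE
        sequences[section_type] = mine_to_use + 1   # UPDATE nextSequence+1
    return mine_to_use                              # lock released at COMMIT

threads = [threading.Thread(target=get_next_sequence, args=('Carburetor',))
           for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sequences['Carburetor'])     # 101: no duplicates, no gaps
```

Without the lock, two sessions could read the same `nextSequence` and hand out duplicate numbers — exactly the race the `FOR UPDATE` intention lock prevents.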
|
Navigation.createNavigateOnClickListener from onBindViewHolder?
I want to set an onclicklistener in the `onBindViewHolder` in order to navigate to a different fragment and send along some data to that fragment.
For the life of me, I can't seem to find a way to make it work. Any and all help is greatly appreciated!
The adapter class:
```
class ListAdapter(private val list: List<Workout>): RecyclerView.Adapter<WorkoutViewHolder>() {
override fun getItemCount(): Int{
return list.size
}
override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): WorkoutViewHolder {
val layoutInflater = LayoutInflater.from(parent.context)
return WorkoutViewHolder(layoutInflater, parent)
}
override fun onBindViewHolder(holder: WorkoutViewHolder, position: Int) {
val workout: Workout = list[position]
holder.itemView.setOnClickListener{
Toast.makeText(holder.itemView.context, "TEST", Toast.LENGTH_LONG).show()
val id = workout.workoutId
val bundle = Bundle()
bundle.putInt("workoutId", id)
Navigation.createNavigateOnClickListener(R.id.workoutDetailsFragment)
}
holder.bind(workout)
}
}
```
I can get the toast to pop up, so the onclicklistener seems to be working. However, the navigation part does not work.
If I just set a button inside the fragment that is hosting the RecyclerView and add `button.setOnClickListener(Navigation.createNavigateOnClickListener(R.id.workoutDetailsFragment))`, it can navigate just fine. So the problem seems to be calling the navigate function from inside the OnClickListener inside `onBindViewHolder`.
|
`Navigation.createNavigateOnClickListener()` creates an `OnClickListener`. Creating an `OnClickListener` just to never set it on anything doesn't do anything.
Instead, you'll want to just trigger your `navigate()` call directly, doing the same [one line of code](https://android.googlesource.com/platform/frameworks/support/+/androidx-master-dev/navigation/navigation-runtime/src/main/java/androidx/navigation/Navigation.java#119) that `createNavigateOnClickListener` does internally:
```
override fun onBindViewHolder(holder: WorkoutViewHolder, position: Int) {
val workout: Workout = list[position]
holder.itemView.setOnClickListener{
Toast.makeText(holder.itemView.context, "TEST", Toast.LENGTH_LONG).show()
val id = workout.workoutId
val bundle = Bundle()
bundle.putInt("workoutId", id)
// Using the Kotlin extension in the -ktx artifacts
// Alternatively, use Navigation.findNavController(holder.itemView)
holder.itemView.findNavController().navigate(
R.id.workoutDetailsFragment, bundle)
}
holder.bind(workout)
}
```
|
Writing a C++20 range to standard output
I can take several `int`s from a `vector` putting them to standard output with an iterator:
```
std::vector<int> v{0,1,2,3,4,5};
std::copy_n(v.begin(),
3,
std::ostream_iterator<int>(std::cout, ":"));
```
I can use the new C++20 ranges to take several `int`s from a `vector` putting them to standard output with `|` operator in a `for` loop, one value at a time using `<<`.
```
for(int n : std::views::all(v)
| std::views::take(3))
{
std::cout << n << '/';
}
```
How can I put the results of `std::views::all(v) | std::views::take(3)` to standard output w/o explicitly looping through values?
Something like:
```
std::views::all(v)
| std::views::take(4)
| std::ostream_iterator<int>(std::cout, " ");
```
or
```
std::cout << (std::views::all(v)
| std::views::take(4));
```
|
The specific thing you're looking for is using the new ranges algorithms:
```
std::ranges::copy(v | std::views::take(4),
std::ostream_iterator<int>(std::cout, " "));
```
You don't need to use `views::all` directly, the above is sufficient.
You can also use fmtlib, either directly:
```
// with <fmt/ranges.h>
// this prints {0, 1, 2, 3}
fmt::print("{}\n", v | std::views::take(4));
```
or using `fmt::join` to get more control (this lets you apply a format string to each element in addition to specifying the delimiter):
```
// this prints [00:01:02:03]
fmt::print("[{:02x}]\n", fmt::join(v | std::views::take(4), ":"));
```
|
Inconsistent and erratic mouse wheel in Linux while moving the mouse pointer
I have Manjaro Linux in VirtualBox, and in some applications such as the terminal the mouse wheel seems not to register many of the scroll ticks. Sometimes it scrolls fine, then suddenly it scrolls only half as much or stops scrolling at all. In other applications such as Firefox, the scroll wheel always works.
After some experimentation I found that **I can't scroll at all while I'm moving the mouse pointer**. Yet, when I hold the pointer perfectly still, scrolling works as expected.
There are some related questions on this site, but none have this specific issue caused by moving the mouse pointer while scrolling in Linux.
|
I found [this post](https://forums.virtualbox.org/viewtopic.php?p=401248#p401248) on the VirtualBox forums in a thread that exactly describes my problem. It seems to be unrelated to VirtualBox, as VMWare shows the same behavior.
The solution that worked for me:
1. Check that you're currently using `libinput`.
```
$ grep "Using input" /var/log/Xorg.0.log
[ 0.000] (II) Using input driver 'libinput' for 'Power Button'
[ 0.001] (II) Using input driver 'libinput' for 'Sleep Button'
[ 0.002] (II) Using input driver 'libinput' for 'Video Bus'
[ 0.003] (II) Using input driver 'libinput' for 'VirtualBox mouse integration'
[ 0.004] (II) Using input driver 'libinput' for 'VirtualBox USB Tablet'
[ 0.005] (II) Using input driver 'libinput' for 'AT Translated Set 2 keyboard'
[ 0.006] (II) Using input driver 'libinput' for 'ImExPS/2 Generic Explorer Mouse'
[ 0.007] (II) Using input driver 'libinput' for 'VirtualBox USB Tablet'
```
2. Install the `evdev` input drivers.
On Manjaro this is found in the `xf86-input-evdev` package, which was already installed.
3. Enable the `evdev` drivers by modifying the X11 configuration.
In the directory `/usr/share/X11/xorg.conf.d/` I already had a file `10-evdev.conf` with the `evdev` configuration. It was just overridden by the higher-priority `40-libinput.conf` configuration. So all I had to do was:
```
cd /usr/share/X11/xorg.conf.d/
sudo mv 10-evdev.conf 80-evdev.conf
```
4. Restart.
5. Verify that the `evdev` drivers are now used instead:
```
$ grep "Using input" /var/log/Xorg.0.log
[ 0.000] (II) Using input driver 'evdev' for 'Power Button'
[ 0.001] (II) Using input driver 'evdev' for 'Sleep Button'
[ 0.002] (II) Using input driver 'evdev' for 'Video Bus'
[ 0.003] (II) Using input driver 'evdev' for 'VirtualBox mouse integration'
[ 0.004] (II) Using input driver 'evdev' for 'VirtualBox USB Tablet'
[ 0.005] (II) Using input driver 'evdev' for 'AT Translated Set 2 keyboard'
[ 0.006] (II) Using input driver 'evdev' for 'ImExPS/2 Generic Explorer Mouse'
[ 0.007] (II) Using input driver 'evdev' for 'VirtualBox USB Tablet'
```
This however didn't yet fix my problem. Apparently I needed `imwheel` too.
6. Install `imwheel`.
On Arch, I had to install the [imwheel](https://aur.archlinux.org/packages/imwheel/) AUR package.
7. Run `imwheel` to verify that this fixes the issue.
```
$ imwheel
```
8. All that remained was to make `imwheel` run at startup. I run this command, as it only intercepts the scroll wheel:
```
imwheel -b 45
```
This fixed the issues! Scrolling is now working correctly in all applications.
|
Power BI - Call Azure API with nextLink (next page)
Apologies, I'm new to Power BI. I'm using Power BI to call an Azure API that lists all the VMs in my subscription; however, it will only show the first 50 before returning a nextLink.
Here is the API I'm calling;
```
https://management.azure.com/subscriptions/< subscription >/providers/Microsoft.Compute/virtualMachines?api-version=2017-12-01
```
I've seen other pages and forums with a similar issue (such as [Microsoft API](https://www.linkedin.com/pulse/loading-data-paged-related-from-ms-graph-api-power-bi-rob-reilly/)), but not for Azure API. I messed about with their fix, but could not work out how to apply it to mine.
Their code;
```
let
GetUserInfo = (Path)=>
let
Source = Json.Document(Web.Contents(Path)),
LL= @Source[value],
result = try @LL & @GetUserInfo(Source[#"@odata.nextLink"]) otherwise @LL
in
result,
Fullset = GetUserInfo("https://graph.microsoft.com/beta/users?$select=manager&$expand=manager"),
#"Converted to Table" = Table.FromList(Fullset, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
#"Expanded Column1" = Table.ExpandRecordColumn(#"Converted to Table", "Column1", {"id", "displayName", "manager"}, {"Column1.id", "Column1.displayName", "Column1.manager"}),
#"Expanded Column1.manager" = Table.ExpandRecordColumn(#"Expanded Column1", "Column1.manager", {"id", "displayName"}, {"id", "displayName"}),
#"Renamed Columns" = Table.RenameColumns(#"Expanded Column1.manager",{{"Column1.displayName", "Employee Full Name"}, {"Column1.id", "Employee Id"}, {"id", "Manager Id"}, {"displayName", "Manager Full name"}})
in
#"Renamed Columns"
```
Compared to the start of mine once I've connected the source by the simple web link;
```
let
Source = Json.Document(Web.Contents("https://management.azure.com/subscriptions/< subscription >/providers/Microsoft.Compute/virtualMachines?api-version=2017-12-01")),
#"Converted to Table" = Record.ToTable(Source)
in
#"Converted to Table"
```
If I were to adjust it, I suspected it would look something like this;
```
let
GetUserInfo = (Path)=>
let
Source = Json.Document(Web.Contents(Path)),
LL= @Source[value],
result = try @LL & @GetUserInfo(Source[#"@odata.nextLink"]) otherwise @LL
in
result,
Fullset = GetUserInfo("https://management.azure.com/subscriptions/< subscription >/providers/Microsoft.Compute/virtualMachines?api-version=2017-12-01"),
#"Converted to Table" = Record.ToTable(Source)
in
#"Converted to Table"
```
However I am prompted with the following error once clicking OK;
```
Expression.Error: The name 'Source' wasn't recognized. Make sure it's spelled correctly.
```
Any help on this would be greatly appreciated.
|
For anyone interested, here is what I ended up doing thanks to this link:
<https://datachant.com/2016/06/27/cursor-based-pagination-power-query/>
```
let
iterations = 10,
url =
"https://management.azure.com/subscriptions/< subscription >/providers/Microsoft.Compute/virtualMachines?api-version=2017-12-01",
FnGetOnePage =
(url) as record =>
let
Source = Json.Document(Web.Contents(url)),
data = try Source[value] otherwise null,
next = try Source[nextLink] otherwise null,
res = [Data=data, Next=next]
in
res,
GeneratedList =
List.Generate(
()=>[i=0, res = FnGetOnePage(url)],
each [i]<iterations and [res][Data]<>null,
each [i=[i]+1, res = FnGetOnePage([res][Next])],
each [res][Data])
in
GeneratedList
```
1 whole day of Googling headache :S
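The `List.Generate` recipe above is really just a follow-the-cursor loop: fetch a page, keep its rows, and repeat with the `nextLink` until there is none (or the iteration cap is hit). The same logic in imperative form, as a sketch — `fetch_page` and the fake three-page API are hypothetical stand-ins for the real HTTP call:

```python
def fetch_all(url, fetch_page, max_iterations=10):
    """Collect rows from a paged API returning {'value': [...], 'nextLink': url}."""
    rows = []
    for _ in range(max_iterations):
        if url is None:
            break
        page = fetch_page(url)             # one HTTP GET in the real thing
        rows.extend(page.get('value', []))
        url = page.get('nextLink')         # absent on the last page -> stop
    return rows

# Fake three-page API for illustration:
pages = {
    'p1': {'value': [1, 2], 'nextLink': 'p2'},
    'p2': {'value': [3], 'nextLink': 'p3'},
    'p3': {'value': [4, 5]},               # no nextLink -> loop ends
}
print(fetch_all('p1', pages.__getitem__))  # [1, 2, 3, 4, 5]
```

The `max_iterations` cap plays the same role as `iterations = 10` in the M code: it bounds the loop even if the API keeps returning nextLinks.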
|
Parallelizing a dictionary comprehension
I have the following function and dictionary comprehension:
```
def function(name, params):
results = fits.open(name)
<do something more to results>
return results
dictionary = {name: function(name, params) for name in nameList}
```
and would like to parallelize this. Any simple way to do this?
[Here](https://stackoverflow.com/questions/20190668/python-multiprocessing-a-for-loop) I have seen that the `multiprocessing` module can be used, but I could not understand how to make it pass my results to my dictionary.
*NOTE*: If possible, please give an answer that can be applied to any function that returns a result.
*NOTE 2*: the function mainly manipulates the fits file and assigns the results to a class
**UPDATE**
So here's what worked for me in the end (from @code\_onkel answer):
```
def function(name, params):
results = fits.open(name)
<do something more to results>
return results
def function_wrapper(args):
return function(*args)
params = [...,...,..., etc]
p = multiprocessing.Pool(processes=max([2, multiprocessing.cpu_count() // 10]))
args_generator = ((name, params) for name in names)
dictionary = dict(zip(names, p.map(function_wrapper, args_generator)))
```
using tqdm only worked partially, since I could not use my custom bar: tqdm reverts to a default bar showing only the iterations.
|
The dictionary comprehension itself cannot be parallelized. Here is an example of how to use the `multiprocessing` module with Python 2.7.
```
from __future__ import print_function
import time
import multiprocessing
params = [0.5]
def function(name, params):
print('sleeping for', name)
time.sleep(params[0])
return time.time()
def function_wrapper(args):
return function(*args)
names = list('onecharNAmEs')
p = multiprocessing.Pool(3)
args_generator = ((name, params) for name in names)
dictionary = dict(zip(names, p.map(function_wrapper, args_generator)))
print(dictionary)
p.close()
```
This works with any function, though the [restrictions of the `multiprocessing` module](https://docs.python.org/2.7/library/multiprocessing.html#programming-guidelines) apply. Most importantly, the classes passed as arguments and return values, as well as the function to be parallelized itself, have to be defined at the module level, otherwise the (de)serializer will not find them. The wrapper function is necessary since `function()` takes two arguments, but `Pool.map()` can only handle functions with one argument (like the built-in `map()` function).
With Python 3.3 or later it can be simplified by using the `Pool` as a context manager and the `starmap()` function.
```
from __future__ import print_function
import time
import multiprocessing
params = [0.5]
def function(name, params):
print('sleeping for', name)
time.sleep(params[0])
return time.time()
names = list('onecharnamEs')
with multiprocessing.Pool(3) as p:
args_generator = ((name, params) for name in names)
dictionary = dict(zip(names, p.starmap(function, args_generator)))
print(dictionary)
```
This is a more readable version of the `with` block:
```
with multiprocessing.Pool(3) as p:
args_generator = ((name, params) for name in names)
results = p.starmap(function, args_generator)
name_result_tuples = zip(names, results)
dictionary = dict(name_result_tuples)
```
The `Pool.map()` function is for functions with a single argument, that's why the `Pool.starmap()` function was added in 3.3.
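On Python 3 the same pattern can also be written with `concurrent.futures`; `Executor.map()` has the same one-argument restriction, which `functools.partial` can work around instead of a wrapper function. A sketch with placeholder work (a `ThreadPoolExecutor` is shown because it avoids the module-level pickling constraints; `ProcessPoolExecutor` exposes the identical API for CPU-bound functions):

```python
from concurrent.futures import ThreadPoolExecutor  # same API as ProcessPoolExecutor
from functools import partial

params = [2]

def function(params, name):            # params first so partial can bind it
    # <do something more to results>
    return name * params[0]            # placeholder work

names = ['a', 'b', 'c']
with ThreadPoolExecutor(max_workers=3) as ex:
    # Executor.map preserves input order, so zipping with names is safe
    dictionary = dict(zip(names, ex.map(partial(function, params), names)))
print(dictionary)                      # {'a': 'aa', 'b': 'bb', 'c': 'cc'}
```

The `dict(zip(names, ...))` step is what replaces the dictionary comprehension: names stay paired with results because `map()` preserves order.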
|
Object Factory in golang
I am new to golang. I need to design a function to create objects of differing types based on input. But I failed to figure out how to design the interface. Here is my code:
```
package main
import (
"fmt"
)
type AA struct{
name string
}
func (this *AA) say(){
fmt.Println("==========>AA")
}
type BB struct{
*AA
age int
}
func (this *BB) say(){
fmt.Println("==========>BB")
}
func ObjectFactory(type int) *AA {
if type ==1 {
return new(AA)
}else{
return new(BB)
}
}
func main() {
obj1 := ObjectFactory(0)
obj1.say()
obj2 := ObjectFactory(0)
obj2.say()
}
```
The compiler tells me error no matter I ask ObjectFactory return \*AA or interface{}. How can I make it work?
|
First off, using `type` as a variable name is disallowed in go (see the [spec](http://golang.org/ref/spec#Keywords)). That is your first problem.
The return type of ObjectFactory is \*AA. This means that it can only return values of type \*AA, which is why returning a value of type \*BB fails. As defined in the spec, Go doesn't have type inheritance, just struct embedding.
If you create an interface called sayer, you can use that instead of \*AA in your ObjectFactory function.
```
type sayer interface {
say()
}
```
You probably want to use this interface when trying to get multiple dispatch, as demonstrated in the code below (see it [on play.golang.org](http://play.golang.org/p/NJeSTnzdQl) as well).
Try this code:
```
package main
import (
"fmt"
)
type sayer interface {
say()
}
type AA struct{
name string
}
func (this *AA) say(){
fmt.Println("==========>AA")
}
type BB struct{
*AA
age int
}
func (this *BB) say(){
fmt.Println("==========>BB")
}
func ObjectFactory(typeNum int) sayer {
if typeNum ==1 {
return new(AA)
}else{
return new(BB)
}
}
func main() {
obj1 := ObjectFactory(1)
obj1.say()
obj2 := ObjectFactory(0)
obj2.say()
}
```
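The same "return the interface, not the concrete type" shape can be sketched outside Go as well — here in Python with abstract base classes, purely for illustration (Go interfaces are satisfied implicitly, Python ABCs by inheritance):

```python
from abc import ABC, abstractmethod

class Sayer(ABC):
    @abstractmethod
    def say(self): ...

class AA(Sayer):
    def say(self):
        return '==========>AA'

class BB(AA):                       # subclassing here ~ embedding *AA in Go
    def say(self):
        return '==========>BB'

def object_factory(type_num: int) -> Sayer:
    # the factory's declared return type is the interface, so either works
    return AA() if type_num == 1 else BB()

print(object_factory(1).say())      # ==========>AA
print(object_factory(0).say())      # ==========>BB
```

The point is the same in both languages: callers only depend on the `say()` contract, so the factory is free to hand back any concrete type that satisfies it.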
|
How do I create ColorStateList programmatically?
I am trying to create a `ColorStateList` programmatically using this:
```
ColorStateList stateList = new ColorStateList(states, colors);
```
But I am not sure what are the two parameters.
As per the documentation:
>
>
> ```
> public ColorStateList (int[][] states, int[] colors)
>
> ```
>
> Added in API level 1
>
>
> Creates a ColorStateList that returns the specified mapping from states to colors.
>
>
>
Can somebody please explain me how to create this?
What is the meaning of two-dimensional array for states?
|
See <http://developer.android.com/reference/android/R.attr.html#state_above_anchor> for a list of available states.
If you want to set colors for disabled, unfocused, unchecked states etc. just negate the states:
```
int[][] states = new int[][] {
new int[] { android.R.attr.state_enabled}, // enabled
new int[] {-android.R.attr.state_enabled}, // disabled
new int[] {-android.R.attr.state_checked}, // unchecked
new int[] { android.R.attr.state_pressed} // pressed
};
int[] colors = new int[] {
Color.BLACK,
Color.RED,
Color.GREEN,
Color.BLUE
};
ColorStateList myList = new ColorStateList(states, colors);
```
Kotlin:
```
val states = arrayOf(
intArrayOf(android.R.attr.state_enabled), // enabled
intArrayOf(-android.R.attr.state_enabled), // disabled
intArrayOf(-android.R.attr.state_checked), // unchecked
intArrayOf(android.R.attr.state_pressed) // pressed
)
val colors = intArrayOf(
Color.BLACK,
Color.RED,
Color.GREEN,
Color.BLUE
)
val myList = ColorStateList(states, colors)
```
|
Nodemon and/or Hot Reloading with a Node-React Web App
I'm still pretty new when it comes to configuring a web app with webpack to create an optimal dev experience. I've taken two different Node-React courses: one where we used nodemon for tracking changes and another where we implemented hot reloading.
When it comes to these two dependencies, is it one or the other? Should they be used together, or would that be sort of redundant?
Also, if I'm using an express server with React on the client side, do I use react-hot-loader, webpack-hot-middleware, or both? I've become confused on which approach to take with hot reloading as it seems that are many ways to do it.
Also, when I use nodemon as a wrapper (`nodemon --exec babel-node server.js`) my hot module reloading doesn't work, but I still want a way to easily restart the server.
Thank you.
|
De-sugar the fancy terminology and they're basically doing the same thing — "keep an eye (watch) on your local edits (file-system changes) and update the app for you" — so they're all dev tools intended to facilitate/speed up your dev process. (NOT for production)
`Nodemon` is in charge of your server side (Express) while Webpack (watch mode) handles the client side (React).
Without too much magic, Nodemon simply restarts/reloads your Express server when a file changes; otherwise you'd need to kill & restart it manually.
However, Webpack (with watch mode enabled, typically in a dev cycle) is a bit more complex: it watches your client-side code for changes, with the help of
1. [hot-module-replacement](https://webpack.js.org/concepts/hot-module-replacement/) - recompile changed module without full reload
2. [webpack-dev-middleware](https://github.com/webpack/webpack-dev-middleware) - serve the results through connected server
The recompiling process is pretty fast, and the results can be served from a local dev server by either:
- [webpack-dev-server](https://github.com/webpack/webpack-dev-server)
serving changed modules and live reloading (connect to browser and hard refresh the page)
- [webpack-dev-middleware](https://github.com/webpack/webpack-dev-middleware) + Express/Koa server, can do the same but you get more control like serving static files or creating some api routes.
Even though live reloading is cool, a hard refresh of the page causes the app to lose all client-side state (breaking many dev tools, e.g. the Redux dev tools); [react-hot-loader](https://github.com/gaearon/react-hot-loader) comes to the rescue in this case.
In general, based on your Express + React app, I would set up `Nodemon` for Express. For React, if you want a standalone dev server that works out of the box, choose `webpack-dev-server` + `react-hot-loader`; or, if you want to integrate the dev server into your existing Express server with a bit of customization, use `webpack-dev-middleware` + `react-hot-loader` instead. (HMR needs to be added as a webpack plugin anyway.)
|
Angular2 data binding for custom reusable component
I want to have a template consisting of a label and a textbox. This template can be reused at different places in my page. To do that I have created a MyTextComponent. I want to know how to bind different values to the component's input.
for example:
**MyTextComponent**
```
import {Component, Input} from 'angular2/core';
@Component({
selector: 'myText',
template:`<label>{{someLabel}}</label>
<input id=myId type="text" [(ngModel)]=bindModelData>
`
})
export class MyTextComponent {
@Input() myId : string;
@Input() bindModelData: any;
}
```
**MyPageComponent**
```
import {Component} from 'angular2/core';
import {MyTextComponent} from './MyTextComponent.component'
@Component({
template:`<myText myId="id1" bindModelData="myString1"></myText>
<myText myId="id2" bindModelData="myString2"></myText>
`,
directives:[MyTextComponent]
})
export class MyPageComponent{
myString1: string;
myString2: string;
}
```
How do i bind MyTextComponent input's ngModel to MyPageComponent local variable?
**Edit**:
Tried after adding @Output, but it didn't work :(
The interpolation in MyPageComponent is blank, and the log prints undefined. However, the interpolation works for MyTextComponent. Any ideas?
```
import {Component} from 'angular2/core';
import {MyTextComponent} from './myText.component'
@Component({
template:`<myText myId="id1" [(bindModelData)]="myString1"></myText>
<myText myId="id2" [(bindModelData)]="myString2"></myText>
{{myString1}}
{{myString2}}
<button (click)="clicked()">Click Me</button>
`,
directives:[MyTextComponent],
selector:'myPage'
})
export class MyPageComponent{
myString1: string;
myString2: string;
clicked(){
console.log(this.myString1+'--'+this.myString2);
}
}
import {Component,Input,Output,EventEmitter} from 'angular2/core';
@Component({
selector: 'myText',
template:`<label>{{someLabel}}</label>
<input id=myId type="text" [ngModel]=bindModelData (ngModelChange)="updateData($event)">
modeldata - {{bindModelData}}
`
})
export class MyTextComponent {
@Input() myId : string;
@Input() bindModelData: any;
@Output() bindModelDataChanged: any = new EventEmitter();
updateData(event) {
console.log('some random text');
this.bindModelData = event;
this.bindModelDataChanged.emit(event);
}
}
```
|
Your myText is missing an `@Output()`
```
import {Component, Input, Output} from 'angular2/core';
@Component({
selector: 'myText',
template:`<label>{{someLabel}}</label>
<input id=myId type="text" [ngModel]=bindModelData (ngModelChange)="updateData($event)">
`
})
export class MyTextComponent {
@Input() myId : string;
@Input() bindModelData: any;
// note that this must be named as the input name + "Change"
@Output() bindModelDataChange: any = new EventEmitter();
updateData(event) {
this.bindModelData = event;
this.bindModelDataChange.emit(event);
}
}
```
then you can use it like
```
import {Component} from 'angular2/core';
import {MyTextComponent} from './MyTextComponent.component'
@Component({
template:`<myText myId="id1" [(bindModelData)]="myString1"></myText>
<myText myId="id2" [(bindModelData)]="myString2"></myText>
`,
directives:[MyTextComponent]
})
export class MyPageComponent{
myString1: string;
myString2: string;
}
```
[**Plunker example**](https://plnkr.co/edit/AYUm0KVQctySuxRaHm4N?p=preview)
|
How do I autowire a Spring TaskExecutor created thread?
According to [Spring's documentation](http://static.springsource.org/spring/docs/3.0.4.RELEASE/reference/html/scheduling.html) the way to use the TaskExecutor is as follows:
```
import org.springframework.core.task.TaskExecutor;
public class TaskExecutorExample {
private class MessagePrinterTask implements Runnable {
private String message;
public MessagePrinterTask(String message) {
this.message = message;
}
public void run() {
System.out.println(message);
}
}
private TaskExecutor taskExecutor;
public TaskExecutorExample(TaskExecutor taskExecutor) {
this.taskExecutor = taskExecutor;
}
public void printMessages() {
for(int i = 0; i < 25; i++) {
taskExecutor.execute(new MessagePrinterTask("Message" + i));
}
}
}
```
However, if MessagePrinterTask has autowired dependencies they will not be configured by Spring because we are instantiating our bean outside of Spring's context (at least that's how I understand it) even though Spring will provide the actual thread creation. *If MessagePrinterTask were to have autowired dependencies how do we get Spring to recognize them?* I tried the following modified example to no avail (and yes, autowiring is enabled properly):
```
import org.springframework.core.task.TaskExecutor;
public class TaskExecutorExample {
@Component
private class MessagePrinterTask implements Runnable {
@Autowired
private autoWiredDependency;
public void run() {
autoWiredDependency.doNotThrowNullPointerExceptionPlease();
}
}
private TaskExecutor taskExecutor;
public TaskExecutorExample(TaskExecutor taskExecutor) {
this.taskExecutor = taskExecutor;
}
public void printMessages() {
for(int i = 0; i < 25; i++) {
taskExecutor.execute(new MessagePrinterTask());
}
}
}
```
|
There are two ways I think that you can go about this:
a. Provide the dependencies to the Task - this way:
```
class MessagePrinterTask implements Runnable {
public MessagePrinterTask(ADependency aDependency){
this.aDependency = aDependency;
}
private ADependency aDependency;
public void run() {
aDependency.doNotThrowNullPointerExceptionPlease();
}
}
```
And in your TaskExectorExample which can be the singleton:
```
import org.springframework.core.task.TaskExecutor;
public class TaskExecutorExample {
@Autowired private ADependency aDependency;
@Autowired
public TaskExecutorExample(TaskExecutor taskExecutor) {
this.taskExecutor = taskExecutor;
}
public void printMessages() {
for(int i = 0; i < 25; i++) {
taskExecutor.execute(new MessagePrinterTask(this.aDependency));
}
}
}
```
b. Using the @Configurable annotation on your MessagePrinterTask; this will inject dependencies into MessagePrinterTask even though it is instantiated outside of a Spring container. There are some catches in using @Configurable though (it requires AspectJ):
```
@Configurable
class MessagePrinterTask implements Runnable {
```
|
mvc4 bundle, how it is working?
In MVC4 they use bundles to load all the script and CSS files at once. As far as I know, the ordering of JS and CSS files is important when you include them. If I use bundles, how am I going to know whether the CSS and JS files are in the correct order inside the bundle? And can I customize the ordering?
I am having a problem with my datepicker now; it seems its CSS file/theme is not loading properly, so I want to check how bundles order the CSS/JS files... thanks :)
```
<link href="@System.Web.Optimization.BundleTable.Bundles.ResolveBundleUrl("~/Content/css")" rel="stylesheet" type="text/css" />
<link href="@System.Web.Optimization.BundleTable.Bundles.ResolveBundleUrl("~/Content/themes/base/css")" rel="stylesheet" type="text/css" />
<script src="@System.Web.Optimization.BundleTable.Bundles.ResolveBundleUrl("~/Scripts/js")"></script>
```
|
A late answer to this question, but ASP.NET MVC orders files alphabetically. You can also use the [IBundleOrderer](http://msdn.microsoft.com/en-us/library/system.web.optimization.ibundleorderer%28VS.110%29.aspx) interface to manually order your script files.
For example, use a custom IBundleOrderer implementation like this:
```
Bundle myBundle = new Bundle("~/bundles/SiteScripts", new JsMinify());
myBundle.IncludeDirectory("~/Scripts/SiteScripts", "*.js");
myBundle.Orderer = new MyBundleOrderer();
bundles.Add(myBundle);
```
MyBundleOrderer takes the high-priority scripts from the web.config file:
```
public class MyBundleOrderer : IBundleOrderer
{
    public IEnumerable<System.IO.FileInfo> OrderFiles(BundleContext context, IEnumerable<FileInfo> files)
    {
        if (ConfigurationManager.AppSettings["HighPriorityScripts"] != null)
        {
            string[] highPriorityScripts = ConfigurationManager.AppSettings["HighPriorityScripts"].Split(',');
            List<FileInfo> listFiles = new List<FileInfo>(files);
            List<FileInfo> orderedFiles = new List<FileInfo>();

            // Add high priority files in order :
            foreach (string highPriorityFile in highPriorityScripts)
            {
                FileInfo nextFileInfo = listFiles.Find(delegate(FileInfo arg)
                {
                    return arg.Name == highPriorityFile;
                });

                if (nextFileInfo != null)
                {
                    orderedFiles.Add(nextFileInfo);
                }
            }

            // Add remaining files to bundle :
            foreach (FileInfo lowPriorityFile in listFiles)
            {
                if (!orderedFiles.Contains(lowPriorityFile))
                {
                    orderedFiles.Add(lowPriorityFile);
                }
            }

            return orderedFiles;
        }

        return files;
    }
}
```
|
Extracting Country Name from Author Affiliations
I am currently exploring the possibility of extracting country name from Author Affiliations (PubMed Articles) my sample data looks like:
`Mechanical and Production Engineering Department, National University of Singapore.`
`Cancer Research Campaign Mammalian Cell DNA Repair Group, Department of Zoology, Cambridge, U.K.`
`Cancer Research Campaign Mammalian Cell DNA Repair Group, Department of Zoology, Cambridge, UK.`
`Lilly Research Laboratories, Eli Lilly and Company, Indianapolis, IN 46285.`
Initially I tried to remove punctuation, split the vector into words, and then compare it with a list of country names from Wikipedia, but I was not successful at this.
Can anyone please suggest me a better way of doing it? I would prefer the solution in `R` as I have to do further analysis and generate graphics in `R`.
|
Here is a simple solution that might get you started some of the way. It makes use of a database containing city and country data in the maps package. If you can get hold of a better database, it should be simple to modify the code.
```
library(maps)
library(plyr)
# Load data from package maps
data(world.cities)
# Create test data
aa <- c(
"Mechanical and Production Engineering Department, National University of Singapore.",
"Cancer Research Campaign Mammalian Cell DNA Repair Group, Department of Zoology, Cambridge, U.K.",
"Cancer Research Campaign Mammalian Cell DNA Repair Group, Department of Zoology, Cambridge, UK.",
"Lilly Research Laboratories, Eli Lilly and Company, Indianapolis, IN 46285."
)
# Remove punctuation from data
caa <- gsub("[[:punct:]]", "", aa)
# Split data at word boundaries
saa <- strsplit(caa, " ")
# Match on cities in world.cities
# Assumes that if multiple matches, the last takes precedence, i.e. max()
llply(saa, function(x)x[max(which(x %in% world.cities$name))])
# Match on country in world.countries
llply(saa, function(x)x[which(x %in% world.cities$country.etc)])
```
This is the result for cities:
```
[[1]]
[1] "Singapore"
[[2]]
[1] "Cambridge"
[[3]]
[1] "Cambridge"
[[4]]
[1] "Indianapolis"
```
And the result for countries:
```
[[1]]
[1] "Singapore"
[[2]]
[1] "UK"
[[3]]
[1] "UK"
[[4]]
character(0)
```
With a bit of data cleanup you may be able to do something with this.
|
Stream from string in D
Is there a way to use `string` as `InputStream`?
Assuming I've downloaded text data from the network:
```
string str = to!string(std.net.curl.get("www.someurl.com/data.txt"));
```
And now I want to parse it using `read()`-family functions for scanning different types.
In plain C there is a `sscanf` function. In C++ we have `std::stringstream`.
So how to get similar functionality in D?
|
I think that the two likely candidates are [std.conv.parse](http://dlang.org/phobos/std_conv.html#.parse) and [std.format.formattedRead](http://dlang.org/phobos/std_format.html#.formattedRead).
`parse` will allow you to parse a string as a variety of types by making multiple calls to it. It takes the string by `ref` and consumes as much of the string as it can when converting it to the requested type. It works particularly well when what you want to do is consume the string over a series of calls rather than converting it all at once. e.g.
```
import std.array;
import std.conv;
import std.math;
import std.string;
void main()
{
    auto str = "10 12.22 3.14159 22";

    auto a = parse!int(str);
    assert(a == 10);
    assert(str == " 12.22 3.14159 22");

    str = str.stripLeft();
    assert(str == "12.22 3.14159 22");

    auto b = parse!double(str);
    assert(approxEqual(b, 12.22));
    assert(str == " 3.14159 22");

    str = str.stripLeft();
    assert(str == "3.14159 22");

    auto c = parse!long(str);
    assert(c == 3);
    assert(str == ".14159 22");

    str = str.stripLeft();
    assert(str == ".14159 22");

    auto d = parse!float(str);
    assert(approxEqual(d, 0.14159));
    assert(str == " 22");

    str = str.stripLeft();
    assert(str == "22");

    auto e = parse!int(str);
    assert(e == 22);
    assert(str.empty);
}
```
`formattedRead` on the other hand is closer to `sscanf`. You have to give it a format string, and it'll return how many elements it read. Similar to `parse`, it'll consume the string as it reads it, but it consumes according to the format string rather than trying to consume as much of the string as it can to convert into the one requested type. Unlike `sscanf` however, `formattedRead` is type safe and is aware of the types of the variables being passed to it. So, you can use `%s` with it to convert to the types of the given variables rather than having to give flags specific to the types of the variables used (though you can still use more specific flags if you want to - just like with `writefln`). e.g.
```
import std.array;
import std.format;
import std.math;
import std.string;
void main()
{
    auto str = "10 12.22 3.14159 22";

    int a;
    double b;
    long c;
    auto numRead1 = formattedRead(str, "%s %s %s", &a, &b, &c);
    assert(numRead1 == 3);
    assert(a == 10);
    assert(approxEqual(b, 12.22));
    assert(c == 3);
    assert(str == ".14159 22");

    float d;
    int e;
    auto numRead2 = formattedRead(str, "%s %s", &d, &e);
    assert(numRead2 == 2);
    assert(approxEqual(d, 0.14159));
    assert(e == 22);
    assert(str.empty);
}
```
Other alternatives would be to simply take advantage of the fact that strings are ranges and use the various range-based functions in Phobos to consume the string in whatever manner fits what you're doing. For instance, if you knew that your string was made up purely of integers separated by whitespace, you could convert them to a range of `int`s lazily by doing
```
import std.algorithm;
import std.array;
import std.conv;
import std.string;
void main()
{
    auto str = "42 22 9 77 46 2 1 0 99";
    auto range = std.array.splitter(str).map!(a => to!int(a))();
    assert(equal(range, [42, 22, 9, 77, 46, 2, 1, 0, 99]));
}
```
And if you wanted an array instead of a lazy range, you could simply call `std.array.array` on the range.
You can do a *lot* with the various range-based functions (the main ones being in [std.range](http://dlang.org/phobos/std_range.html) and [std.algorithm](http://dlang.org/phobos/std_algorithm.html)), but if you're converting a string's contents to something else, they'll tend to work better if the contents are uniform, since you can just convert the whole string at once that way, but you could use functions like [`find`](http://dlang.org/phobos/std_algorithm.html#find) and [`until`](http://dlang.org/phobos/std_algorithm.html#until) to pick the string apart and convert it in pieces if you need to convert different parts of the string differently. You could also use `splitter` to split the string on whitespace and then convert each piece according to where it was in the string, but at that point, you might as well just use `parse` or `formattedRead`. You do have quite a lot of options though.
If you're not particularly familiar with ranges, then I'd advise reading <http://ddili.org/ders/d.en/ranges.html> as it's the best tutorial that we have on them at the moment.
|
Using passport.js to connect a local strategy to another (Facebook, Twitter, Google, etc)
I have a node.js application which uses passport (with a modified passport-local for the login strategy) for authentication management. I would like to allow users to connect their social network accounts using passport strategies, but from what I can see the existing authentication (on the request object) gets overwritten when a new one is made.
How should I approach this?
|
I've created a sample for that: see <https://github.com/fpp/drywall>
It allows you to associate one or more social accounts with a local-strategy user, create local-strategy users from social accounts, and add/remove social accounts from these users.
Currently twitter & github are implemented - just add other passport strategies and adjust the code sample for more services.
The sample uses Node.js 0.8x, express 3.x and backbone.js - includes some basic user management from the <http://jedireza.github.com/drywall/> sample project.
P.S. I could not get a second route (for the same social service) with authorize vs. authenticate working in passport as described in the passport documentation. Use
```
passReqToCallback: true
```
in the strategy instead and work on the user (see example for more).
|
python random.randint vs random.choice: different outcomes using same values
I asked my students to write a python program in which the results of 100 rolls of a pair of 6-sided dice were stored in a list and then plotted in a histogram.
I was treating `random.choice([1,2,3,4,5,6])` as inferior to `random.randint(1,6)`, until I noticed that the histograms of the students who used `random.choice` better reflected the expected outcomes. For example, the occurrence of rolls of 12 (6+6) was unnaturally high in nearly all histograms of students who used `random.randint(1,6)`. Does anyone have an idea of what's going on?
|
[From the documentation](https://docs.python.org/3/library/random.html):
>
> Almost all module functions depend on the basic function random(),
> which generates a random float uniformly in the semi-open range [0.0,
> 1.0). Python uses the Mersenne Twister as the core generator. It produces 53-bit precision floats and has a period of 2\*\*19937-1. The
> underlying implementation in C is both fast and threadsafe. The
> Mersenne Twister is one of the most extensively tested random number
> generators in existence. However, being completely deterministic, it
> is not suitable for all purposes, and is completely unsuitable for
> cryptographic purposes.
>
>
>
So there shouldn't be any real difference in results. However, I would disagree that `random.choice()` is inferior to `randint()`, in fact, random choice is actually faster at generating random numbers. When you look at the source code:
```
def randint(self, a, b):
    return self.randrange(a, b+1)

def randrange(self, start, stop=None, step=1, _int=int, _maxwidth=1L<<BPF):
    istart = _int(start)
    if istart != start:
        # not executed
    if stop is None:
        # not executed
    istop = _int(stop)
    if istop != stop:
        # not executed
    width = istop - istart
    if step == 1 and width > 0:
        if width >= _maxwidth:
            # not executed
        return _int(istart + _int(self.random()*width))
```
And for `choice()`:
```
def choice(self, seq):
    return seq[int(self.random() * len(seq))]
```
You can see that `randint()` has the additional overhead of using `randrange()`.
**EDIT** *As @abarnert has noted in the comments, there really is almost no performance difference here, and `randint(1,6)` is a clear and intuitive way of representing a dice roll*
I ran both for 10000 rolls, and didn't see any skewing, so there is a chance your input samples were just too small:
[](https://i.stack.imgur.com/T9TWz.png)
And here is a distribution for rolling one dice twice, it is also very uniform:
[](https://i.stack.imgur.com/ESJvc.png)
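As a quick self-check (my own sketch, not from the linked answers; the seed and sample size are arbitrary), you can draw the same number of samples with both calls and confirm each face comes up roughly equally often:

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the run is reproducible
n = 60000

# Both calls draw from the same underlying Mersenne Twister stream,
# so their per-face distributions should be indistinguishable.
by_randint = Counter(random.randint(1, 6) for _ in range(n))
by_choice = Counter(random.choice([1, 2, 3, 4, 5, 6]) for _ in range(n))

for face in range(1, 7):
    # Each face should land near n/6 = 10000; a tolerance of 500 is
    # far beyond normal sampling noise (the standard deviation is ~91).
    assert abs(by_randint[face] - n / 6) < 500
    assert abs(by_choice[face] - n / 6) < 500
```

With only 100 rolls per student, a handful of extra 12s is well within normal sampling noise, which is likely all the class histograms were showing.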
I borrowed pieces of this from these two helpful answers: [Performance of choice vs randint](https://stackoverflow.com/questions/29574605/performance-of-choice-vs-randint)
[Is Pythons random.randint statistically random?](https://stackoverflow.com/questions/12164280/is-pythons-random-randint-statistically-random), which are helpful for further reading.
|
Python2 - using min/max on None type
I noticed that while the 'max' function does well on the None type:
```
In [1]: max(None, 1)
Out[1]: 1
```
the 'min' function doesn't return anything:
```
In [2]: min(None, 1)
In [3]:
```
Maybe it's because there is no definition for min(None, 1)?
So why, in the max case, does it return the number?
Is None treated like '-infinity'?
|
As jamylak wrote, `None` is simply not printed by Python shells.
This is convenient because *all functions return something*: when no value is specified, they return `None`:
```
>>> def f():
... print "Hello"
...
>>> f()
Hello
>>> print f() # f() returns None!
Hello
None
```
This is why Python shells do not print a *returned* None value. `print None` is different, though, as it explicitly asks Python to print the `None` value.
---
As for comparisons, `None` is *not* considered to be -infinity.
The [general rule](http://docs.python.org/2/library/stdtypes.html#comparisons) for Python 2 is that objects that cannot be compared in any meaningful way don't raise an exception when compared, but instead return some arbitrary result. In the case of CPython, the arbitrary rule is the following:
>
> Objects of different types except numbers are ordered by their type
> names; objects of the same types that don’t support proper comparison
> are ordered by their address.
>
>
>
Python 3 raises an exception for non-meaningful comparisons like `1 > None` and the comparison done through `max(1, None)`.
---
If you do need -infinity, Python offers `float('-inf')`.
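A small sketch of both points (my own addition, shown in Python 3 since that is what you'll run today):

```python
# float('-inf') is an explicit "smaller than everything" sentinel,
# unlike None, which has no defined ordering.
lowest = float('-inf')
assert min(lowest, 1) == lowest
assert max(lowest, 1) == 1

# In Python 3, the arbitrary Python 2 ordering is gone: comparing
# None with a number raises TypeError instead.
try:
    max(None, 1)
    raised = False
except TypeError:
    raised = True
assert raised
```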
|
create mailto hyperlink that will open in outlook with a hyperlink in the body
Here's my scenario:
I'm trying to embed a hyperlink in the body of an email that will be generated using the mailto hyperlink. So I will have something like this:
```
<a href="mailto:[email protected]?subject=Email Subject&body=Click
<a href=%22http://www.google.com%22>Here</a> to go to google.com">Click Here to open
Outlook and with generated email</a>
```
The problem is, this scenario will open Outlook but the email will actually appear as:
```
Click <a href="http://www.google.com">Here</a> to go to google.com
```
the <http://www.google.com> will be clickable in the email but the anchor tag will also show up and it is ugly.
I'm not sure if this is possible, but can I somehow create this hyperlink in a way that Outlook won't catch the URL address and automatically create the anchor tag around it?
|
You can't do this. See [the specification for mailto: URIs](https://www.rfc-editor.org/rfc/rfc6068#section-2), which says:
>
> The special `<hfname>` "body" indicates that the associated `<hfvalue>`
> is the body of the message. The "body" field value is intended to
> contain the content for the first **text/plain body part of the
> message**. The "body" pseudo header field is primarily intended for
> the generation of short text messages for automatic processing (such
> as "subscribe" messages for mailing lists), not for general MIME
> bodies. Except for the encoding of characters based on UTF-8 and
> percent-encoding, no additional encoding (such as e.g., base64 or
> quoted-printable; see [RFC2045]) is used for the "body" field value.
> As a consequence, header fields related to message encoding (e.g.,
> Content-Transfer-Encoding) in a 'mailto' URI are irrelevant and MUST
> be ignored. The "body" pseudo header field name has been registered
> with IANA for this special purpose (see Section 8.2).
>
>
>
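Since the body is a plain-text value, the only encoding available is percent-encoding. A small sketch (my own, with a hypothetical address and text) of building a spec-compliant mailto link in Python:

```python
from urllib.parse import quote

# The body is treated as text/plain: HTML markup in it is shown
# literally, so the best you can do is include the bare URL and let
# the mail client linkify it on its own.
subject = quote("Email Subject")
body = quote("Click here to go to google.com: http://www.google.com")
link = "mailto:someone@example.com?subject={}&body={}".format(subject, body)
print(link)
```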
|
SVG Rotation in 3D
I need to rotate the paths in my SVG document around an arbitrary point in 3D. [It appears that](http://www.w3.org/TR/SVG-Transforms/) there are multiple ways to do this by either using a 4x4 transformation matrix or the rotateX or rotateY transforms. I've tried both of these methods, and neither seem to work. Are these supported anywhere?
For my application, a bitmap is going to be the final output, so I'm not worried about browser support. I am open to any tool--I can run a specific browser through selenium, or use a standalone SVG rasterizer.
This is what I've tried so far (using Google Chrome 31):
I would expect this to be a black rectangle, rotated about the X axis, and appearing like a trapezoid.
```
<svg version="1.1"
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink" width="640px" height="480px">
<rect x="100" y="100" width="440" height="280" fill="#000000"
transform="rotateX(30 580 100)"></rect>
</svg>
```
(omitting `cy` and `cz` from `rotateX` gives the same result).
I've also tried with a 4x4 matrix. I don't see any difference from above. I also doubt my math is correct in finding the right matrix elements.
```
<svg version="1.1"
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink" width="640px" height="480px">
<rect x="100" y="100" width="440" height="280" fill="#000000"
transform="matrix(102400 0 0 0 0 88681.00134752653 -159.99999999999997 1387899.8652473476 0 159.99999999999997 88681.00134752653 -15986.602540378442)"></rect>
</svg>
```
|
I found that there really isn't a way in SVG to do a 3D rotation that is supported in any modern browser (to the best of my knowledge). However, CSS3 does have a similar "transform" property.
The following works for me:
```
<svg version="1.1"
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink" width="640px" height="480px">
<rect x="100" y="100" width="440" height="280" fill="#000000" style="-webkit-transform: rotateX(30deg); -webkit-transform-origin-y: 580px; -webkit-transform-origin-z: 100px"></rect>
</svg>
```
This, obviously, isn't a good cross-browser solution (as it uses prefixed properties), but that isn't something I need in my application.
|
How to properly implement "Confirm Password" in ASP.NET MVC 3?
There's already an [answered question](http://davidhayden.com/blog/dave/archive/2011/01/01/CompareAttributeASPNETMVC3.aspx) about the same subject but as it's from '09 I consider it outdated.
How to properly implement "Confirm Password" in ASP.NET MVC 3?
I'm seeing a lot of options on the Web, most of them using the `CompareAttribute` in the model [like this one](http://davidhayden.com/blog/dave/archive/2011/01/01/CompareAttributeASPNETMVC3.aspx)
The problem is that `ConfirmPassword` definitely shouldn't be in the model, as it shouldn't be persisted.
As the whole unobtrusive client validation in MVC 3 relies on the model, and I don't feel like putting a ConfirmPassword property on my model, what should I do?
Should I inject a custom client validation function? If so.. How?
|
>
> As the whole unobtrusive client validation from MVC 3 relies on the
> model and I don't feel like putting a ConfirmPassword property on my
> model, what should I do?
>
>
>
I completely agree with you. That's why you should use view models. Then on your view model (a class specifically designed for the requirements of the given view) you could use the `[Compare]` attribute:
```
public class RegisterViewModel
{
    [Required]
    public string Username { get; set; }

    [Required]
    public string Password { get; set; }

    [Compare("Password", ErrorMessage = "Confirm password doesn't match, Type again !")]
    public string ConfirmPassword { get; set; }
}
```
and then have your controller action take this view model
```
[HttpPost]
public ActionResult Register(RegisterViewModel model)
{
    if (!ModelState.IsValid)
    {
        return View(model);
    }

    // TODO: Map the view model to a domain model and pass to a repository
    // Personally I use and like AutoMapper very much (http://automapper.codeplex.com)
    return RedirectToAction("Success");
}
```
|
Merge two dataframes by a closest value in R
I have two dataframes that I want to merge by the closest value in one column. The first dataframe (DF1) consists of individuals and their estimated individual risk ("risk"):
```
DF1<- data.frame(ID = c(1, 2, 3), risk = c(22, 40, 20))
```
```
ID risk
1 22
2 40
3 20
```
The second dataframe (DF2) consists of population by age groups ("population_age") and the normal risks within each age group ("population_normal_risk"):
```
DF2<- data.frame(population_age = c("30-34","35-39","40-44"), population_normal_risk = c(15, 30, 45))
```
```
population_age population_normal_risk
30-34 15
35-39 30
40-44 45
```
What I want is to add a new column in the DF1 dataframe showing the population age group ("population_age") with the closest risk value ("population_normal_risk") to the estimated risk of each individual ("risk").
What I expected would be:
```
ID risk population_age_group
 1   22                30-34
 2   40                40-44
 3   20                30-34
```
Thanks in advance!
|
We can use `findInterval`. (Note: `df1` and `df2` below are the question's `DF1` and `DF2`.)
First we need to calculate our break points at the halfway points between the population risk values:
```
breaks <- c(0, df2$population_normal_risk + c(diff(df2$population_normal_risk) / 2, Inf))
```
Then use `findInterval` to detect which bin our risks fall into:
```
matches <- findInterval(df1$risk, breaks)
```
Finally, write the matches in:
```
df1$population_age <- df2$population_age[matches]
```
Giving us:
```
df1
  ID risk population_age
1  1   22          30-34
2  2   40          40-44
3  3   20          30-34
```
|
Could not start Quickbooks. Error message in desktop SDK
Been developing a simple application using the Intuit Quickbooks SDK (Desktop), and everything has been working great for the last 3 months of development. I put development on hold for a couple of weeks, come back, and my app just won't open the Quickbooks (2015 Enterprise) data file. I can open it in Quickbooks, make changes and it works fine (file integrity is perfect - no errors).
When I initiate the QBsessionManager, I get an error: "Could not start Quickbooks". This happens whether Quickbooks is open, closed, multiuser or single user mode (makes no difference). As other forums suggested, I looked to see if there was another phantom version of QB opened in taskmanager - there is not. Makes no difference.
```
sessionManager.BeginSession(qbFilePath, ENOpenMode.omMultiUser)
```
MultiUser, SingleUser, DontCare open mode does not matter either. This code has been working fine for months - suddenly does not work at all.
In the QBSDKLOG.txt file, I see the following:
```
20150827.083322 I 10704 RequestProcessor ========= Started Connection =========
20150827.083322 I 10704 RequestProcessor Request Processor, QBXMLRP2 v13.0
20150827.083322 I 10704 RequestProcessor Connection opened by app named 'QBIntegrator'
20150827.083322 I 10704 CertVerifier The Authenticode signature validated OK.
20150827.083322 I 10704 RequestProcessor OS: Microsoft Professional (build 9200), 64-bit
20150827.083322 I 10704 RequestProcessor Current Process is elevated
20150827.083322 I 10704 RequestProcessor Current User is in AdminGroup
20150827.083322 I 10704 RequestProcessor Current Process is ran as Admin
20150827.083322 I 10704 RequestProcessor Current Process Integrity Level : 3000
20150827.083322 E 10704 RequestProcessor Unknown QBInstanceFinder error. File Name: "Q:\XXXXXXXXXXX.qbw" hr = 80070057
20150827.083322 E 10704 RequestProcessor Could not find or create an instance of QuickBooks using InstanceFinder hr = 80040403
20150827.083323 I 10704 RequestProcessor Connection closed by app named 'QBIntegrator'
20150827.083323 I 10704 RequestProcessor ========== Ended Connection ==========
```
The only thing I can think of (that changed) is that Windows 8.1 had some updates that somehow make Quickbooks 2015 Enterprise incompatible with Windows (Quickbooks has been plagued with bugs like this for years, but Intuit forums do not indicate a new bug cropping up - so I'm at a loss).
I can only guess as to what this new problem could be.
NOTE: I'm developing in Visual Studio 2015 on Windows 8.1
|
There can be quite a few causes, but one that always gets me (and our users) -- if you are running QuickBooks as Windows Administrator, the Integrated Application (your app), must also be run as Windows Administrator and vice-versa. From the log you posted, it looks like your app is running as Administrator.
Like you said, it's always good to make sure there aren't any phantom QBW32.exes running, but if there are (and you're not running QB Enterprise) that usually gives a different error.
If that doesn't do the trick, this site has a list of 7 possible causes, but for me it's usually the first thing I listed above: <http://www.clearify.com/wiki/view/381/error-80040408-quickbooks-doesn-t-start>
|
How to check if Javascript Map has an object key
Looking at a couple of different docs, all I see are examples where the Map (ECMAScript 6) key is a boolean, string, or integer. Is there a way we could use another customized object (created with a `new CustomObject(x,y)` constructor call) as a key?
I am able to add an object as a key, but unable to check if the Map has the said object.
```
var myMap = new Map();
myMap.set( new Tuple(1,1), "foo");
myMap.set('bar', "foo");
myMap.has(?);
myMap.has('bar'); // returns true
```
Is there a way around this?
```
var myMap = new Map();
myMap.set( new Tuple(1,1), "foo");
for(some conditions) {
var localData = new Tuple(1,1); //Use directly if exists in myMap?
map.has(localData) // returns false as this is a different Tuple object. But I need it to return true
}
```
<https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map/has>
|
You just have to save the reference to the object:
```
var myMap = new Map();
var myKey = new Tuple(1,1);
myMap.set( myKey, "foo");
myMap.set('bar', "foo");
myMap.has(myKey); // returns true; myKey === myKey
myMap.has(new Tuple(1,1)); // returns false; new Tuple(1,1) !== myKey
myMap.has('bar'); // returns true; 'bar' === 'bar'
```
Edit: Here is how to use an object to achieve what you want, which is to compare objects by their values rather than by reference:
```
function Tuple(x, y) {
    this.x = x;
    this.y = y;
}

Tuple.prototype.toString = function () {
    return 'Tuple [' + this.x + ',' + this.y + ']';
};

var myObject = {};
myObject[new Tuple(1, 1)] = 'foo';
myObject[new Tuple(1, 2)] = 'bar';

console.log(myObject[new Tuple(1, 1)]); // 'foo'
console.log(myObject[new Tuple(1, 2)]); // 'bar'
```
These operations will run in constant time on average, which is much faster than searching through a Map for a similar object key in linear time.
|
How to implement Text Search in CefSharp
I'm building an application using CefSharp and need to provide text search functionality to user just like Google Chrome has.
Can any one help me with the implementation of text search in CefSharp?
|
I've built this demo application using CefSharp 47.0.3, hopefully this is what you're looking for.
The view:
```
<Window x:Class="CefSharpSearchDemo.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:wpf="clr-namespace:CefSharp.Wpf;assembly=CefSharp.Wpf"
        xmlns:i="http://schemas.microsoft.com/expression/2010/interactivity"
        xmlns:cefSharpSearchDemo="clr-namespace:CefSharpSearchDemo"
        mc:Ignorable="d"
        Title="MainWindow" Height="350" Width="525"
        d:DataContext="{d:DesignInstance {x:Type cefSharpSearchDemo:MainWindowViewModel}}">
    <DockPanel>
        <DockPanel DockPanel.Dock="Top">
            <Button Content="Next" DockPanel.Dock="Right" Command="{Binding ElementName=SearchBehavior, Path=NextCommand}" />
            <Button Content="Previous" DockPanel.Dock="Right" Command="{Binding ElementName=SearchBehavior, Path=PreviousCommand}" />
            <TextBox DockPanel.Dock="Right" Text="{Binding SearchText, UpdateSourceTrigger=PropertyChanged}"></TextBox>
        </DockPanel>
        <wpf:ChromiumWebBrowser x:Name="wb" DockPanel.Dock="Bottom"
                                Address="http://stackoverflow.com">
            <i:Interaction.Behaviors>
                <cefSharpSearchDemo:ChromiumWebBrowserSearchBehavior x:Name="SearchBehavior" SearchText="{Binding SearchText}" />
            </i:Interaction.Behaviors>
        </wpf:ChromiumWebBrowser>
    </DockPanel>
</Window>
```
The code-behind for the view:
```
namespace CefSharpSearchDemo
{
    using System.Windows;

    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
            DataContext = new MainWindowViewModel();
        }
    }
}
```
The view model:
```
namespace CefSharpSearchDemo
{
    using System.ComponentModel;
    using System.Runtime.CompilerServices;

    public class MainWindowViewModel : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        private string _searchText;

        public string SearchText
        {
            get { return _searchText; }
            set
            {
                _searchText = value;
                NotifyPropertyChanged();
            }
        }

        protected virtual void NotifyPropertyChanged([CallerMemberName] string propertyName = null)
        {
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}
```
And now the important part. As you could see in the view there is a behavior attached to the `ChromiumWebBrowser`:
```
namespace CefSharpSearchDemo
{
    using System.Windows;
    using System.Windows.Input;
    using System.Windows.Interactivity;
    using CefSharp;
    using CefSharp.Wpf;

    public class ChromiumWebBrowserSearchBehavior : Behavior<ChromiumWebBrowser>
    {
        private bool _isSearchEnabled;

        public ChromiumWebBrowserSearchBehavior()
        {
            NextCommand = new DelegateCommand(OnNext);
            PreviousCommand = new DelegateCommand(OnPrevious);
        }

        private void OnNext()
        {
            AssociatedObject.Find(identifier: 1, searchText: SearchText, forward: true, matchCase: false, findNext: true);
        }

        private void OnPrevious()
        {
            AssociatedObject.Find(identifier: 1, searchText: SearchText, forward: false, matchCase: false, findNext: true);
        }

        protected override void OnAttached()
        {
            AssociatedObject.FrameLoadEnd += ChromiumWebBrowserOnFrameLoadEnd;
        }

        private void ChromiumWebBrowserOnFrameLoadEnd(object sender, FrameLoadEndEventArgs frameLoadEndEventArgs)
        {
            _isSearchEnabled = frameLoadEndEventArgs.Frame.IsMain;
            Dispatcher.Invoke(() =>
            {
                if (_isSearchEnabled && !string.IsNullOrEmpty(SearchText))
                {
                    AssociatedObject.Find(1, SearchText, true, false, false);
                }
            });
        }

        public static readonly DependencyProperty SearchTextProperty = DependencyProperty.Register(
            "SearchText", typeof(string), typeof(ChromiumWebBrowserSearchBehavior), new PropertyMetadata(default(string), OnSearchTextChanged));

        public string SearchText
        {
            get { return (string)GetValue(SearchTextProperty); }
            set { SetValue(SearchTextProperty, value); }
        }

        public static readonly DependencyProperty NextCommandProperty = DependencyProperty.Register(
            "NextCommand", typeof (ICommand), typeof (ChromiumWebBrowserSearchBehavior), new PropertyMetadata(default(ICommand)));

        public ICommand NextCommand
        {
            get { return (ICommand) GetValue(NextCommandProperty); }
            set { SetValue(NextCommandProperty, value); }
        }

        public static readonly DependencyProperty PreviousCommandProperty = DependencyProperty.Register(
            "PreviousCommand", typeof (ICommand), typeof (ChromiumWebBrowserSearchBehavior), new PropertyMetadata(default(ICommand)));

        public ICommand PreviousCommand
        {
            get { return (ICommand) GetValue(PreviousCommandProperty); }
            set { SetValue(PreviousCommandProperty, value); }
        }

        private static void OnSearchTextChanged(DependencyObject dependencyObject, DependencyPropertyChangedEventArgs dependencyPropertyChangedEventArgs)
        {
            var behavior = dependencyObject as ChromiumWebBrowserSearchBehavior;
            if (behavior != null && behavior._isSearchEnabled)
            {
                var newSearchText = dependencyPropertyChangedEventArgs.NewValue as string;
                if (string.IsNullOrEmpty(newSearchText))
                {
                    behavior.AssociatedObject.StopFinding(true);
                }
                else
                {
                    behavior.AssociatedObject.Find(1, newSearchText, true, false, false);
                }
            }
        }

        protected override void OnDetaching()
        {
            AssociatedObject.FrameLoadEnd -= ChromiumWebBrowserOnFrameLoadEnd;
        }
    }
}
```
And the minor additional code for the `DelegateCommand`:
```
namespace CefSharpSearchDemo
{
using System;
using System.Windows.Input;
public class DelegateCommand : ICommand
{
private readonly Action _action;
public DelegateCommand(Action action)
{
_action = action;
}
public bool CanExecute(object parameter)
{
return true;
}
public void Execute(object parameter)
{
_action();
}
public event EventHandler CanExecuteChanged;
}
}
```
The resulting application has a `TextBox` on the top and two buttons labeled "Previous" and "Next" next to it.
The main area is a CefSharp browser which loads <http://www.stackoverflow.com>.
You can type into the `TextBox` and it will search in the browser (and highlight the scrollbar where the hits are, just like in Chrome). You can then press the Next/Previous buttons to cycle through the hits.
I hope this helps in developing your own solution.
All this said, let me note for next time: when you ask a question, include some code showing what you tried, or ask a more specific question, because this one is probably too broad for this site. Anyway, I'll leave this here; maybe others will find it useful as well.
Important lesson: there are some methods exposed on `ChromiumWebBrowser` that you can use to implement the search functionality (namely: `Find` and `StopFinding`).
|
Why doesn't django take my language file into account?
I followed the whole documentation about text translation, but django doesn't take my `.mo` file into account.
Some facts:
- I created a `conf/locale/` folder at the root of my project
- `django.po` was generated successfully with `django-admin.py makemessages -l fr`
- `django.mo` was generated successfully with `django-admin.py compilemessages`
So my folder structure is:
```
project/
site/
locale/
fr/
LC_MESSAGES/
django.mo
django.po
```
- In `settings.py`, I have set `LANGUAGE_CODE = 'fr'`
- My browser correctly sends `Accept-Language:fr-FR,fr;`
- `request.LANGUAGE_CODE` shows `fr` from my views
But I get nothing translated... How can I make Django take these files into account?
---
## Edit
Adding `/home/www/project/locale` to `settings.LOCALE_PATHS` works. However, Django should find this path by itself, and I don't like using absolute paths. What is happening here?
|
**LOCALE\_PATHS**
Django looks for translation files in 3 locations by default:
- `LOCALE_PATHS/(language)/LC_MESSAGES/django.(po|mo)`
- `$APPPATH/locale/(language)/LC_MESSAGES/django.(po|mo)`
- `$PYTHONPATH/django/conf/locale/(language)/LC_MESSAGES/django.(po|mo)`
`LOCALE_PATHS` only needs to be used if your translation files aren't in your app directories or on the `PYTHONPATH`.
An example of `LOCALE_PATHS` in `settings.py`:
```
LOCALE_PATHS = (
'/home/www/project/conf/locale', # replace with correct path here
)
```
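Regarding the dislike of absolute paths: `LOCALE_PATHS` must ultimately contain concrete paths, but you don't have to hard-code them. A sketch (the `'locale'` segment is an assumption; point it at wherever your `fr/LC_MESSAGES/django.mo` actually lives):

```python
# settings.py -- build LOCALE_PATHS relative to this file instead of
# hard-coding something like '/home/www/project/conf/locale'.
import os

# Directory containing settings.py
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

LOCALE_PATHS = (
    os.path.join(BASE_DIR, 'locale'),  # assumed layout: <settings dir>/locale/
)
```

This keeps the setting valid when the project moves between machines.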
**MIDDLEWARE\_CLASSES**
Add `django.middleware.locale.LocaleMiddleware` to `MIDDLEWARE_CLASSES` in settings.py
**LANGUAGES**
```
LANGUAGES = (
('fr', 'Français'),
)
```
|
Whenever I download an .sh-file gedit opens up
Whenever I download an `.sh` file, gedit opens up and then freezes. Why?
|
If `gedit` opens up after the download is complete, that's because your browser is set up to open downloaded files automatically and to open them in the system's default application.
If `gedit` opens up because you double-clicked on the `.sh` file in `nautilus`, that's because of the default behavior of `nautilus`, which is to open `.sh` files in the system's default text editor rather than to run them.
To change `nautilus`' default behavior:
1. Open Files from the Unity Launcher
2. On the Unity Panel, click on "Edit" > "Preferences"
3. Select the "Behavior" tab and under "Executable Text Files" select "Run executable files when they are opened" and click on the "Close" button.
As for why `gedit` freezes: the `.sh` file might be **very** long.
If the `.sh` file is not **very** long, it might be encoded with some exotic encoding that `gedit` cannot handle.
If the `.sh` file is not encoded with any exotic encoding either, try reinstalling/updating gedit by running this command in a terminal: `sudo apt-get remove gedit && sudo apt-get update && sudo apt-get install gedit`
|
CSS Two Columns of Lists - responsive merge into one column
I have two lists that I'm floating into two columns. I want to make it so on small screens, the items become one column, BUT I'd like to alternate the items.
```
<div>
<ul class="left">
<li>Item A</li>
<li>Item B</li>
<li>Item C</li>
<li>Item D</li>
</ul>
<ul class="right">
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
<li>Item 4</li>
</ul>
</div>
```
So the result should look like this on small screens.
```
Item A
Item 1
Item B
Item 2
Item C
Item 3
Item D
Item 4
```
Here is my starting jsfiddle. Should I instead make one list with `li` `width` set to `50%`? I wanted to see if this was possible while keeping the HTML markup the way it is.
<http://jsfiddle.net/aAhX9/>
|
The only way to do this (outside of some very laborious positioning) is to combine the elements into a single list, giving each `li` a class-name and styling them appropriately:
```
<div>
<ul>
<li class="left">Item A</li>
<li class="right">Item 1</li>
<li class="left">Item B</li>
<li class="right">Item 2</li>
<li class="left">Item C</li>
<li class="right">Item 3</li>
<li class="left">Item D</li>
<li class="right">Item 4</li>
</ul>
</div>
```

And the CSS:

```
li {
list-style-type: none;
width: 50%;
}
li.left {
float: left;
background-color: #0f0;
}
li.right {
float: right;
background-color: #00f;
}
@media only screen and (max-width: 480px) {
.left, .right {
float: none;
width: 100%;
}
}
```
[Updated JS Fiddle demo](http://jsfiddle.net/davidThomas/aAhX9/2/).
As noted by Hashem, in the comments below, it would be possible to use the `:nth-child()` selector, rather than class-names, to style the various `li` elements left, or right:
```
li:nth-child(odd) {
float: left;
background-color: #0f0;
}
li:nth-child(even) {
float: right;
background-color: #00f;
}
@media only screen and (max-width: 480px) {
li {
float: none;
width: 100%;
}
}
```
[Updated JS Fiddle demo](http://jsfiddle.net/davidThomas/aAhX9/9/).
|
Do different versions of Perl require different CPAN module installations?
We have a server farm that we are slowly migrating to a new version of Perl (5.12.1). We are currently running 5.8.5. The OS will be upgraded from RedHat 4 to RedHat 5 as well, but RedHat 5 is still back on Perl 5.8.8. Thus for a while in our source tree we'll be supporting two versions of Perl.
I have been told to install the new version of Perl into our source tree, and also all of the CPAN modules we currently use. I was actually told to 'compile' the modules with the correct version of Perl. I'm confused by this. Do some modules actually configure themselves differently for different versions of Perl? Given this, I assume I should configure a CPAN directory for each version of Perl in our tree?
Any information or 'gotchas' about this scenario?
Edit: As an additional question, will the same cpan directory (pointed to by ~/.cpan) serve for both trees, or should I link in different directories when I'm working in different trees (installing modules)?
|
Any perl modules that use XS (compiled C code, dynamically loaded) will, in general, only work with the same version of perl that they were compiled with. This is for two reasons:
Reason one is that by default they're installed into a directory that includes the perl version number, and any other version of perl won't look into that directory.
Reason two is because the perl API can change between major versions, so even if you were to copy the libraries into the appropriate directory, they might or might not work depending on what features they use, and how different the two versions of perl are. Between 5.8 and 5.12 there are significant differences that are likely to break nearly all code.
This doesn't apply at all to *pure Perl* modules, though; they can be copied around freely with very few exceptions. It's only XS code that's the issue.
|
Smooth mouse-out animation
I have a diamond shaped div that spins 360 degrees around its own axis on hover by using CSS animation.
I can't work out how to ensure **a smooth return to the original state** when the element is no longer hovered.
So far it "jumps" when the diamond is in the middle of its turn. I would like it to be smooth. Is it possible to do it with CSS animations? If not, maybe with JS?
```
.dn-diamond {
display: inline-block;
width: 100px;
height: 100px;
background: #000;
transform: rotate(-45deg);
margin: 50px;
overflow: hidden;
}
.dn-diamond:hover {
animation: spin 3s infinite linear;
}
@keyframes spin {
from { transform: rotateY(0deg) rotate(-45deg); }
to { transform: rotateY(360deg) rotate(-45deg); }
}
```
```
<div class="dn-diamond">
```
Here is [JSFiddle](https://jsfiddle.net/k1wqtd2k/)
I was trying to use the transition but could not keep the original transformed shape of it (it went back to being a square, not a diamond).
|
You should use [transitions](https://developer.mozilla.org/en-US/docs/Web/CSS/transition) for this. They will allow you to keep the transition smooth when the mouse moves out of the element.
Example :
```
.dn-diamond {
display: inline-block;
width: 100px;
height: 100px;
background: #000;
transform: rotateY(0deg) rotateZ(-45deg);
transition: transform 3s linear;
margin: 50px;
overflow: hidden;
}
.dn-diamond:hover {
transform: rotateY(360deg) rotateZ(-45deg);
}
```
```
<div class="dn-diamond">
```
---
You can also **control the speed of the transition** when the cursor moves out of the element by setting the transition property on normal and hover state.
Example :
```
.dn-diamond {
display: inline-block;
width: 100px;
height: 100px;
background: #000;
transform: rotateY(0deg) rotateZ(-45deg);
transition: transform 0.5s linear;
margin: 50px;
overflow: hidden;
}
.dn-diamond:hover {
transform: rotateY(360deg) rotateZ(-45deg);
transition: transform 3s linear;
}
```
```
<div class="dn-diamond">
```
*Note that in the above demos the vendor prefixes aren't included. check [canIuse](http://caniuse.com/#feat=css-transitions) to know which vendor prefixes you need according to the browsers you want to support.*
|
Three.js Reseting Matrix before Transformation
I want to apply the same matrix transformation to a `three.js` mesh multiple times, but I only ever want the transformation as described by the first transform. How can I "reset" the mesh's matrix to its pre-transformation value before I re-apply the transformation?
I have tried the following, but it hasn't worked
```
const identity = model.matrix.identity();
model.matrix = identity;
model.matrixWorld = identity;
model.updateMatrix();
model.updateMatrixWorld();
model.applyMatrix4(newMatrix);
```
|
**EDIT:** *Replacing my entire answer because I was unaware that `Object3D.applyMatrix4` updates the `position`/`quaterion`/`scale` properties in addition to modifying the `matrix`.*
The easiest way to approach this is to back-up the model's original `matrix` property, and copy it back in place before the next update. This can be accomplished using [`Object3D.userData`](https://threejs.org/docs/#api/en/core/Object3D.userData).
```
// before doing any updates, ensure the local matrix is set from postion/quaternion/scale:
model.updateMatrix();
// back-up the matrix:
model.userData.originalMatrix = model.matrix.clone();
// reset the model before each update
model.userData.originalMatrix.decompose( model.position, model.quaternion, model.scale );
model.matrix.copy( model.userData.originalMatrix );
```
|
What is the coordinate system used in metal?
In Metal, what coordinate system is used inside a shader (for inputs and outputs)? When we render to a texture, is it the same? What about the z-buffer? Are there any inconsistencies? Finally, what are the differences between Metal, OpenGL and DirectX?
|
## Metal Coordinate Systems
Metal defines several standard coordinate systems to represent transformed graphics data at
different stages along the rendering pipeline.
---
**1) NDC (Normalized Device Coordinates):** this coordinate system is what developers use to construct their geometries and to transform the geometries in the vertex shader via model and view matrices.
**Point (-1, -1)** in **NDC** is located at the **bottom-left corner (Y up)**.
[](https://i.stack.imgur.com/NKcsk.png)
---
**2) Framebuffer Coordinates (viewport coordinates):** when we write into an attachment, read from an attachment, or copy/blit between attachments, we use framebuffer coordinates to specify the location. The **origin (0, 0)** is located at the **top-left corner (Y down)**.
[](https://i.stack.imgur.com/ZQRhP.png)
---
**3) Texture Coordinates:** when we upload a texture into memory or sample from a texture, we use texture coordinates. The **origin (0, 0)** is located at the **top-left corner (Y down)**.
[](https://i.stack.imgur.com/CE593.png)
---
## D3D12 and Metal
**NDC:** +Y is up. Point(-1, -1) is at the bottom left corner.
**Framebuffer coordinate:** +Y is down. Origin(0, 0) is at the top left corner.
**Texture coordinate:** +Y is down. Origin(0, 0) is at the top left corner.
---
## **OpenGL, OpenGL ES and WebGL**
**NDC:** +Y is up. Point(-1, -1) is at the bottom left corner.
**Framebuffer coordinate:** +Y is up. Origin(0, 0) is at the bottom left corner.
**Texture coordinate:** +Y is up. Origin(0, 0) is at the bottom left corner.
---
## **Vulkan**
**NDC:** +Y is down. Point(-1, -1) is at the top left corner.
**Framebuffer coordinate:** +Y is down. Origin(0, 0) is at the top left corner.
**Texture coordinate:** +Y is down. Origin(0, 0) is at the top left corner.
---
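A practical way to read the tables above is as a single Y flip between API families. The following sketch (plain Python, purely illustrative and not tied to any real graphics API) shows the NDC-to-framebuffer mapping implied by the Metal/D3D convention versus the OpenGL one:

```python
def ndc_to_framebuffer_metal(ndc_x, ndc_y, width, height):
    """Metal/D3D12 style: NDC has +Y up, the framebuffer has +Y down
    with the origin at the top-left corner (coordinates in pixels)."""
    fb_x = (ndc_x + 1.0) * 0.5 * width
    fb_y = (1.0 - (ndc_y + 1.0) * 0.5) * height  # flip Y
    return fb_x, fb_y

def ndc_to_framebuffer_gl(ndc_x, ndc_y, width, height):
    """OpenGL style: both NDC and framebuffer have +Y up with the
    framebuffer origin at the bottom-left corner, so no flip."""
    fb_x = (ndc_x + 1.0) * 0.5 * width
    fb_y = (ndc_y + 1.0) * 0.5 * height
    return fb_x, fb_y

# NDC (-1, -1) is the bottom-left corner in both conventions, but it lands
# on different framebuffer rows because the Y axes point opposite ways.
print(ndc_to_framebuffer_metal(-1.0, -1.0, 800, 600))  # (0.0, 600.0)
print(ndc_to_framebuffer_gl(-1.0, -1.0, 800, 600))     # (0.0, 0.0)
```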
|
Jquery: disable the 'click' event on a checkbox
In my current jQuery code, I have an event handler that checks or unchecks a checkbox when the user clicks on a row in a table. The problem is that if the user clicks the checkbox itself, the checkbox toggles from that click, but then the TR handler fires as well and undoes the new checkbox value.
See an example here: <http://jsfiddle.net/radi8/KYvCB/1/>
I can disable the checkbox but then if the user tries to select the checkbox, the TR event will not trigger.
What I need is a method to disable the 'click' event of the checkbox but still allow the TR event to fire when the checkbox is selected.
```
var charges = {
init: function() {
// get the selected row checkbox
//$('.charges').attr('disabled', true);
$('.rowclick tr').click(function() {
if ($(this).find('td input.charges:checkbox').is(':checked')) {
$(this).find('td input.charges:checkbox').attr("checked", false);
}
else {
$(this).find('td input.charges:checkbox').attr("checked", true);
}
});
}
};
charges.init();
```
|
You need to check whether the `click` event was fired on a `checkbox` or somewhere else. **This needs fewer resources than a second event handler for the checkbox with `e.stopPropagation`.**
```
$('.rowclick tr').click(function(e) {
if($(e.target).closest('input[type="checkbox"]').length > 0){
//Checkbox clicked
}else{
//Clicked somewhere else (-> your code)
if ($(this).find('td input.charges:checkbox').is(':checked')) {
$(this).find('td input.charges:checkbox').attr("checked", false);
}
else {
$(this).find('td input.charges:checkbox').attr("checked", true);
}
}
});
```
Working example: <http://jsfiddle.net/KYvCB/5/>
|
Using live endpoints vs using mocked endpoints in integration tests
Recently my colleagues and I have been writing integration tests for a Java project. Most of these integration tests require at least one SOAP web service call, LDAP query, or something else that relies on endpoints we do not necessarily have control over. Some of these SOAP/LDAP calls use libs that are also still in development.
What this ends up meaning is that our integration tests will sometimes fail during a build when a machine goes down, a lib changed, or an endpoint was altered. After doing some research, I have noticed that it seems fairly common that people use live endpoints in integration testing, but I have also found articles about why using live endpoints can be harmful(<http://martinfowler.com/articles/nonDeterminism.html#RemoteServices>) .
I am wondering what makes more sense when creating integration tests: mocking all endpoints, or using live ones? It seems that using live endpoints, especially when unreliable, makes tests non-deterministic. However mocks seem like they will only take you so far, and you won't be able to test what happens in a production-like environment. Are integration tests made up of pure mocks valuable for verifying anything aside from regressions?
|
When you mock an endpoint, it is very important that you mock it accurately. If you don't, your tests can lead you to falsely believe that you can properly integrate with the service. The fact that you are working with endpoints that are still changing would seem to make this difficult.
Whether you do it at the integration-testing or acceptance-testing level, you should have tests that interact with the real endpoint; otherwise you won't know whether integration actually works.
In your case, for example, if a library changes or an endpoint is altered and your test fails, that is in fact a failure of integration, so it's a good thing to detect. If the machine goes down, you could detect that in your test and report the test as skipped rather than failed.
So in this case, I would use the real service to ensure your software properly integrates with the third-party components.
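To make the "skip rather than fail" idea concrete, here is a minimal sketch using Python's stdlib `unittest` (the host and port are placeholders; the equivalent in JUnit would be `Assume.assumeTrue`). A reachability probe runs in `setUp`, so an unreachable environment reports SKIPPED and a red build always means broken integration:

```python
import socket
import unittest

def endpoint_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

class UserLookupIntegrationTest(unittest.TestCase):
    # Placeholder endpoint -- substitute the real LDAP/SOAP host here.
    HOST, PORT = "ldap.example.com", 389

    def setUp(self):
        # Environment problems become SKIPPED, not FAILED.
        if not endpoint_reachable(self.HOST, self.PORT):
            self.skipTest("endpoint %s:%s unreachable" % (self.HOST, self.PORT))

    def test_lookup_returns_user(self):
        # The real call against the live endpoint goes here.
        pass
```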
|
Pandas Invert Sign for Selected Index
In my data below referring to the index, Revenue is in positive values, while the Costs i.e. Direct Cost, Manpower, Supplies & Material, Other Operating Cost are in negative values following finance convention.
[](https://i.stack.imgur.com/9m1XE.png)
How can I transform the data so that:
- Revenue row is unchanged
- Other rows will have their values inverted i.e. multiply by -1
|
Use [`difference`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.difference.html) to select all the rows that need to be multiplied by `-1` with [`mul`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mul.html):
```
cols = df.index.difference(['Revenue'])
df.loc[cols] = df.loc[cols].mul(-1)
```
Sample:
```
df = pd.DataFrame({'A':[1,2,3],
'B':[-4,5,6],
'C':[7,-8,9],
'D':[1,3,-5],
'E':[5,-3,6],
'F':[-7,4,3]},
index=['Revenue', 'Direct Cost','Manpower'])
print (df)
A B C D E F
Revenue 1 -4 7 1 5 -7
Direct Cost 2 5 -8 3 -3 4
Manpower 3 6 9 -5 6 3
cols = df.index.difference(['Revenue'])
df.loc[cols] = df.loc[cols].mul(-1)
print (df)
A B C D E F
Revenue 1 -4 7 1 5 -7
Direct Cost -2 -5 8 -3 3 -4
Manpower -3 -6 -9 5 -6 -3
```
Another solution is to create a `Series` of multipliers with [`to_series`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.to_series.html) + [`map`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html):
```
s = (df.index.to_series() == 'Revenue').map({True:1, False:-1})
print (s)
Revenue 1
Direct Cost -1
Manpower -1
dtype: int64
df = df.mul(s, axis=0)
print (df)
A B C D E F
Revenue 1 -4 7 1 5 -7
Direct Cost -2 -5 8 -3 3 -4
Manpower -3 -6 -9 5 -6 -3
```
Or, more simply, use [`numpy.where`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html) to build a multiplier array from the mask and apply it with [`mul`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mul.html):
```
df = df.mul(np.where(df.index == 'Revenue', 1, -1), axis=0)
print (df)
A B C D E F
Revenue 1 -4 7 1 5 -7
Direct Cost -2 -5 8 -3 3 -4
Manpower -3 -6 -9 5 -6 -3
```
|
How to make android support FloatingActionButton at bottom right of the screen?
I have added a `FloatingActionButton` to my layout inside a `RelativeLayout` as follow
```
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="fill_parent"
android:layout_height="fill_parent">
<android.support.design.widget.FloatingActionButton
android:id="@+id/submit"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="bottom|right"
android:layout_margin="16dp"
android:clickable="true"
android:src="@drawable/ic_ok" />
</RelativeLayout>
```
As you can see, I have set `layout_gravity` to `bottom|right`, but the position of my `FloatingActionButton` hasn't changed; it stays at the top left.
How can I place my `FloatingActionButton` at the bottom right?
|
Using the CoordinatorLayout:
```
<?xml version="1.0" encoding="utf-8"?>
<android.support.design.widget.CoordinatorLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="match_parent">
<RelativeLayout
android:id="@+id/test"
android:layout_width="match_parent"
android:layout_height="match_parent">
</RelativeLayout>
<android.support.design.widget.FloatingActionButton
android:layout_margin="10dp"
android:id="@+id/myFAB"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:backgroundTint="#FF0000"
app:borderWidth="0dp"
app:elevation="8dp"
app:layout_anchor="@id/test"
app:layout_anchorGravity="bottom|right|end" />
</android.support.design.widget.CoordinatorLayout>
```
Note
```
app:layout_anchor="@id/test"
app:layout_anchorGravity="bottom|right|end"
```
|
warning this call is not awaited, execution of the current method continues
Just got VS2012 and trying to get a handle on `async`.
Let's say I've got a method that fetches some value from a blocking source. I don't want the caller of the method to block. I could write the method to take a callback which is invoked when the value arrives, but since I'm using C# 5, I decide to make the method async so callers don't have to deal with callbacks:
```
// contrived example (edited in response to Servy's comment)
public static Task<string> PromptForStringAsync(string prompt)
{
return Task.Factory.StartNew(() => {
Console.Write(prompt);
return Console.ReadLine();
});
}
```
Here's an example method that calls it. If `PromptForStringAsync` wasn't async, this method would require nesting a callback within a callback. With async, I get to write my method in this very natural way:
```
public static async Task GetNameAsync()
{
string firstname = await PromptForStringAsync("Enter your first name: ");
Console.WriteLine("Welcome {0}.", firstname);
string lastname = await PromptForStringAsync("Enter your last name: ");
Console.WriteLine("Name saved as '{0} {1}'.", firstname, lastname);
}
```
So far so good. The problem is when I *call* GetNameAsync:
```
public static void DoStuff()
{
GetNameAsync();
MainWorkOfApplicationIDontWantBlocked();
}
```
The whole point of `GetNameAsync` is that it's asynchronous. I don't *want* it to block, because I want to get back to the MainWorkOfApplicationIDontWantBlocked ASAP and let GetNameAsync do its thing in the background. However, calling it this way gives me a compiler warning on the `GetNameAsync` line:
```
Warning 1 Because this call is not awaited, execution of the current method continues before the call is completed. Consider applying the 'await' operator to the result of the call.
```
I'm perfectly aware that "execution of the current method continues before the call is completed". That's the *point* of asynchronous code, right?
I prefer my code to compile without warnings, but there's nothing to "fix" here because the code is doing exactly what I intend it to do. I can get rid of the warning by storing the return value of `GetNameAsync`:
```
public static void DoStuff()
{
var result = GetNameAsync(); // supress warning
MainWorkOfApplicationIDontWantBlocked();
}
```
But now I have superfluous code. Visual Studio seems to understand that I was forced to write this unnecessary code, because it suppresses the normal "value never used" warning.
I can also get rid of the warning by wrapping GetNameAsync in a method that's not async:
```
public static Task GetNameWrapper()
{
return GetNameAsync();
}
```
But that's even *more* superfluous code. So I have to write code I don't need or tolerate an unnecessary warning.
Is there something about my use of async that's wrong here?
|
If you really don't need the result, you can simply change `GetNameAsync`'s signature to return `void`:
```
public static async void GetNameAsync()
{
...
}
```
Consider to see answer to a related question:
[What's the difference between returning void and returning a Task?](https://stackoverflow.com/questions/8043296/whats-the-difference-between-returning-void-and-returning-a-task)
**Update**
If you need the result, you can change the `GetNameAsync` to return, say, `Task<string>`:
```
public static async Task<string> GetNameAsync()
{
string firstname = await PromptForStringAsync("Enter your first name: ");
string lastname = await PromptForStringAsync("Enter your last name: ");
return firstname + lastname;
}
```
And use it as follows:
```
public static void DoStuff()
{
Task<string> task = GetNameAsync();
// Set up a continuation BEFORE MainWorkOfApplicationIDontWantBlocked
Task anotherTask = task.ContinueWith(r => {
Console.WriteLine(r.Result);
});
MainWorkOfApplicationIDontWantBlocked();
// OR wait for the result AFTER
string result = task.Result;
}
```
|
Attach to already running JVM
Is there a way to attach to an already running JVM?
For example, in JNI you can use `JNI_CreateJavaVM` to create a VM, run a jar, and inspect all its classes.
However, if the jar is already running, I cannot find a way to attach to its JVM and communicate with its classes or get its `env` pointer.
Another problem: if the jar loads my native library (.dll) and I want to create a JVM inside the .dll, I cannot, nor can I attach to the jar's current JVM without the jar calling my function.
Example on the Java side:
```
class Foo
{
static {loadLibrary("Foo")}
}
```
on the C++ side:
```
void Foo()
{
//CreateJVM
//Attach to the current process..
//Call function from the jar that loaded me.
}
```
This cannot be done without the jar calling `Foo` first.
Any ideas? Is there no way to get the current JVM, or to attach to it or to an external JVM instance?
|
Yes you can.
**1)** Inject a DLL in the process hosting the JVM (eg, `java.exe`, or `javaw.exe`, or `iexplore.exe`). A common injection technique is to use [SetWindowsHookEx](http://msdn.microsoft.com/en-us/library/windows/desktop/ms644990.aspx)
**2)** In the DLL, get the module handle of the `jvm.dll` using `GetModuleHandle`
**3)** Get the address of the `JNI_GetCreatedJavaVMs` function, using `GetProcAddress`
**4)** Call the function and, if successful, attach your thread to the first JVM found, using the `AttachCurrentThread` function pointer from the `JavaVM` structure.
**5)** Done.
Useful link: [The Invocation API](http://docs.oracle.com/javase/7/docs/technotes/guides/jni/spec/invocation.html)
|
What's the meaning of the following statement from file gproc\_lib.erl?
While reading the source file "gproc_lib.erl" from GitHub's gproc project, I came across syntax I don't understand.
Where can I find reference documentation for the syntax used in this statement?
```
check_option_f(ets_options) -> fun check_ets_option/1; %% <---- What's the meaning of this statement?
check_option_f(server_options) -> fun check_server_option/1.
check_ets_option({read_concurrency , B}) -> is_boolean(B);
check_ets_option({write_concurrency, B}) -> is_boolean(B);
check_ets_option(_) -> false.
check_server_option({priority, P}) ->
%% Forbid setting priority to 'low' since that would
%% surely cause problems. Unsure about 'max'...
lists:member(P, [normal, high, max]);
check_server_option(_) ->
%% assume it's a valid spawn option
true.
```
|
`fun module:name/arity` is a function value, equivalent to the following:
```
fun(A1,A2,...,AN) -> module:name(A1,A2,...,AN) end
```
where N is the `arity`. In short it is a useful shorthand to pass normal Erlang functions as arguments to other functions which expect a function as an argument.
Example:
To convert a list `List` to a set:
```
lists:foldl(fun sets:add_element/2, sets:new(), List).
```
Equivalent to:
```
lists:foldl(fun (E, S) -> sets:add_element(E, S) end, sets:new(), L).
```
(The latter is the definition used in OTP's `sets` module for the `from_list` function.)
More info [here](http://www.erlang.org/doc/reference_manual/expressions.html#funs).
|
How to order knockout bindings?
I am using knockout.js. I am stuck in a somewhat strange situation (it's hard to explain, but I'll try; sorry if I'm not clear). I am using a custom binding and the options binding on a single select list:
```
<select data-bind="options : arrayOfOptions, optionsText: 'Name',
optionsValue: 'Name', chosen: { }">
</select>
```

And the custom binding:

```
ko.bindingHandlers.chosen = {
init: function (element, valueAccessor, allBindingAccessor,
viewModel, bindingContext) {
var options = ko.utils.unwrapObservable(valueAccessor() || {});
$(element).chosen(options);
}
};
```
Here, at runtime, the select list is filled with all available options from the `arrayOfOptions` array, and `chosen` is a custom binding in which I apply the [CHOSEN PLUGIN](http://harvesthq.github.com/chosen/) to the select list.
The problem I am facing is that when the chosen plugin is applied in the custom binding, the select list has not yet been filled with the options from the `arrayOfOptions` array. In simple terms, the custom binding executes before the options binding. Can anybody give me a solution so that the custom binding is applied after the options binding?
|
Move your call to `chosen` into the update.
<http://jsfiddle.net/jearles/avSfa/28/>
--
```
ko.bindingHandlers.chosen = {
init: function(element, valueAccessor, allBindingsAccessor, viewModel) {
var allBindings = allBindingsAccessor();
var options = {default: 'Select one...'};
$.extend(options, allBindings.chosen)
$(element).attr('data-placeholder', options.default);
},
update: function(element, valueAccessor, allBindingsAccessor, viewModel) {
$(element).chosen();
}
};
```
--
Alternatively, you can use `setTimeout` to move the call to `chosen` to the bottom of the execution queue. This will give the Knockout options binding time to do its work before `chosen` tries to transform it.
```
ko.bindingHandlers.chosen = {
init: function (element, valueAccessor, allBindingAccessor,
viewModel, bindingContext) {
var options = ko.utils.unwrapObservable(valueAccessor() || {});
setTimeout(function() { $(element).chosen(options); }, 0);
}
};
```
|
Spring Batch + Web Scraping
I need to develop a batch processing that will be executed every day. These are the steps:
1. Read each line of a database table that contains a URL (SQLite)
2. Extract some data, say `User`s, from that website by scraping it. Each website may contain 1..n users.
3. Persist each `User` in a local NoSQL database.
4. Send each `User` (one by one) through a third-party REST API.
I'm going to implement this process using Spring Batch and I have thought this design:
- Item Reader: Read each URL of SQLite database using JdbcCursorItemReader.
- Item Processor: Scrap and deserialize users from website. `(Url -> List<User>)`
- Item Writer: for each `User`, persist it in database and send it through REST API.
Is this approach right? Should I change any step? I have never worked with Spring Batch so I'm willing to change the technology if needed. I need some advice before start developing since I need this process to be very robust.
|
This is generally a good application for Spring Batch, and you seem to understand the logical separation of Reader, Processor and Writer fairly well.
There are certain things that you should consider and think about when it comes to an application like this. Spring Batch gives you the concept of chunking, where rather than reading/processing/writing each record one at a time, you can read in several items as a chunk, process them in a single transaction, and write them in a single transaction. What isn't clear from your question is what your domain model will look like and how chunking applies to it. It sounds as if there is a one-to-many relationship from URL to Users. You would likely read in a single URL and build a collection of User objects that are ready to be processed and written out as a single transaction.
The second thing I would consider in your design, and generally a good practice when designing software, is to document your system constraints.
- Are there alternative means to retrieve the required data about a User, apart from screen scraping? If not, document that constraint.
- Which software system or component requires the User data to be provided by your software (the REST API)? Does this third-party software have the ability to take a batch file for input as opposed to the REST API? Are there other potential interfaces that might be more reliable?
Also good to document Risks:
- Screen scraping presents tight coupling between the web design and application and your batch job
In light of this information I would design like such:
## Reader
- Retrieve the URL from database
- Screen scrape for user data
- Create a `List<User>` objects for the Processor step
## Processor
- Integration of data from multiple Readers if applicable?
- Special processing rules or calculation of derived data?
- Preparation of User object for Writers
## Writer
- One unique writer for persisting to your database
- Second unique writer for POST to REST API
Each chunk will be composed of users in a single URL. Each chunk in process should be transacted so that in the event of an exception or failure, any persisted changes can be rolled back. In the event of an exception, is it possible to define custom rollback behavior for the REST API?
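To make the chunk-per-URL flow concrete, here is a minimal language-agnostic sketch (written in Python for brevity; a real implementation would use Spring Batch's `ItemReader`/`ItemProcessor`/`ItemWriter` interfaces in Java). The function names `scrape`, `persist`, and `post_to_api` are placeholders for your own components, not real APIs:

```python
def run_job(urls, scrape, persist, post_to_api):
    """Process each URL as one chunk: scrape its users, then run both
    writers; a chunk that raises is recorded so a restart can retry it."""
    failed = []
    for url in urls:                  # Reader: one URL (one chunk) at a time
        users = scrape(url)           # Url -> list of User objects
        try:
            persist(users)            # Writer 1: local NoSQL database
            post_to_api(users)        # Writer 2: third-party REST API
        except Exception:
            failed.append(url)        # record the chunk for a later retry
    return failed
```

In the real job, the try/except and `failed` list are replaced by Spring Batch's transaction rollback and job repository, so a restarted job instance resumes at the first failed chunk.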
Your final considerations should be the supportability and maintainability of the batch job. You might want to consider Spring Batch Admin for this. Any time where your business process depends on URL resources for internal or external network, screen scraping, and the availability and proper functioning of a REST API I would consider this sufficiently high risk. There are many potential points of failure in this job so not only are Transactions and good exception handling a must, you will also want the ability to administer this easily and with minimal manual intervention.
Spring Batch Admin maintains a database of historical jobs as well as currently running, paused, and failed jobs. You can configure a Spring Batch job managed with Spring Batch Admin to pick up where the failed job left off. Perhaps your job got through 350 URLs of 400 to scan. There is no need to clean up and start over if you can restart the failed job instance; it will pick up at record 351 and try again. You may even be able to have it wait a few minutes and try several times before sending notifications out.
I hope this gives you things to consider.
|
What does mnemonic mean in Sublime Text?
What does the **mnemonic** attribute mean in Sublime Text?
I found it in some **Main.sublime-menu** files, written like this:
```
"mnemonic": "O" // or any other letter instead of O
```
It is used many times in Sublime Text packages, but now I have decided it is time to find out what it means and why it is there.
|
From [Google](https://www.google.com/search?q=mnemonic), ***mnemonic*** means
>
> a device such as a pattern of letters, ideas, or associations that assists in remembering something.
>
>
>
and from [Techopedia](https://www.techopedia.com/definition/28287/mnemonic):
>
> Mnemonics are used in computing to provide users with a means to quickly access a function, service or process, bypassing the actual more lengthy method used to perform or achieve it.
>
>
>
In Sublime (and most software), the mnemonic is the underlined letter in a menu item that allows you to choose that item using the keyboard. For example, in the following `.sublime-menu` entry:
```
[
    {
        "caption": "File",
        "mnemonic": "F",
        "id": "file",
        "children":
        [
            { "command": "new_file", "caption": "New File", "mnemonic": "N" },
```
the **`F`** in **`File`** will be underlined, and within that menu, the **`N`** in **`New File`** will be as well. So, on Windows or Linux, for example, you can use `Alt`+`F` to open the **`File`** menu, then hit `N` to choose **`New File`**.
|
HTML: What exactly is `<span>`'s purpose?
I've read many explanations of what the actual purpose of the `<span>` tag is, and I've tried to incorporate those explanations into real applications, but have failed every time.
One person told me that it was to apply classes to the sub-tags below it, which does kind of work, except it doesn't apply dimensions to elements unless you mess around with the display and/or inline settings, which can totally screw up a layout.
Then someone else told me that it's used as a substitute for the `<div>` tag, which doesn't work because floats or "margin: auto"-type attributes don't work unless contained inside certain types of elements.
Then someone else told me that it's used as a text container, which doesn't work because the "text-align" attribute doesn't work, again, unless contained inside certain types of elements. A `<p>` tag with its default attributes cleared is much better suited, in my experience.
So what exactly is the point of them? Why are so many people using them when `<div>` seems to do everything that they're apparently capable of and more?
|
[From Official Docs:](http://www.w3.org/TR/html4/struct/global.html#edef-SPAN)
>
> The DIV and SPAN elements, in conjunction with the id and class
> attributes, offer a generic mechanism for adding **structure** to
> documents. These elements define content to be **inline (SPAN)** or
> block-level (DIV) but impose no other presentational idioms on the
> content. Thus, authors may use these elements in conjunction with
> **style sheets, the lang attribute, etc**., to tailor HTML to their own
> needs and tastes.
>
>
>
As it says, you can use the `<span>` tag to *structure* (inline) sections of the page, along with *styling* which you may optionally pass via an id, class, or stylesheet.
Characteristics of the `<span>` tag:
- Its display is `inline` by default, which means:
  - you cannot apply `width` to it
  - you cannot apply `height` to it
- you can make it block-level too by using `display:block` (`div` serves the same purpose)
The `<div>` tag is the opposite: it is block-level by default, and the above restrictions do not apply to it.
|
Is it a must to reverse byte order when sending numbers across the network?
In most of the examples on the web, authors usually change the byte order before sending a number from host byte order to network byte order. Then at the receiving end, authors usually restore the order back from network byte order to host byte order.
Q1: Considering that the architecture of the two systems is unknown, wouldn't it be more efficient if the authors simply checked the endianness of the machines before reversing the byte order?
Q2: Is it really necessary to reverse the byte order of numbers even if they are passed to and received by the same machine architecture?
|
In general, you can't know the architecture of the remote system. If everyone uses a specific byte order - [network byte order](http://en.wikipedia.org/wiki/Endianness#Endianness_in_networking), then there is no confusion. There is some cost to all the reversing, but the cost of re-engineering ALL network devices would be far greater.
A1: Suppose that we were going to try to establish the byte order of the remote systems. We need to establish communication between systems and determine what byte order the remote system has. How do we communicate without knowing the byte order?
A2: If you know that both systems have the same architecture, then no, you don't need to reverse bytes at each end. But you don't, in general, know. And if you do design a system like that, then you have made a network-architecture decision that excludes different CPU architectures in the future. Consider Apple switching from 68k to PPC to x86.
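As an illustration of why a fixed wire order works, here is a short Python sketch using the standard `struct` module; `'!'` means network (big-endian) order and `'='` means the host's native order:

```python
import struct
import sys

value = 0x01020304

# '!' forces network (big-endian) byte order regardless of the host CPU,
# so the bytes on the wire are the same from every sender.
network_bytes = struct.pack('!I', value)
assert network_bytes == b'\x01\x02\x03\x04'

# '=' uses the host's native order; on a little-endian machine the
# very same number serializes with its bytes reversed.
native_bytes = struct.pack('=I', value)
if sys.byteorder == 'little':
    assert native_bytes == network_bytes[::-1]

# The receiver always unpacks with '!' and recovers the original value,
# without ever needing to know the sender's architecture.
(decoded,) = struct.unpack('!I', network_bytes)
assert decoded == value
```

This is exactly the convention `htonl`/`ntohl` implement in C: both ends agree on the wire format, so neither needs to probe the other's endianness.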
|
Searching element in sorted 3d array
>
> A 3d matrix sorted in all three dimensions is given and we have to
> find a given number in it.
>
>
>
For the above problem, I have been thinking this: the 3D array `arr[m][n][r]` would be like a stack of rectangles, where each rectangle (consider `arr[m][n][0]`) would have its largest element as the lowest right-most element (`arr[m-1][n-1][0]`). We can search inside each rectangle in `O(m+n)`:
```
int row = 0;
int col = N-1;
while (row < M && col >= 0)
{
    if (mat[row][col] == elem)
    {
        return true;
    }
    else if (mat[row][col] > elem)
    {
        col--;
    }
    else
    {
        row++;
    }
}
```
I was thinking it could be similarly extended to the 3rd dimension, hence making it a linear-complexity solution (`O(m+n+r)`). Am I right?
Does anyone have any other ideas? What would be the complexity?
|
You cannot extend a linear complexity 2D solution to the 3rd dimension, making an O(m+n+r) solution out of it. A 3D array, sorted independently in each direction, contains groups of O(N²) elements which are not ordered relative to each other. For example, the sub-array `arr[i][j][k]` where `i+j+k = (m+n+r)/2` is completely unsorted. So you have to inspect each element of such a sub-array to find a given number. This proves that you cannot invent an algorithm with complexity better than O(N²) (at least when m, n, and r are not very different from each other). This is just an extension of the proof from [this answer](https://stackoverflow.com/a/10597806/1009831).
Here is an example:
```
k=0: |1 x|     k=1: |z 3|
     |y 3|          |3 4|
```
This array is sorted in all 3 dimensions. But this does not determine any sorting order for the elements x, y, z. You can assign any values in the range (1, 3) to these elements. And while searching for an element with value '2', you have to inspect all these 'unsorted' values (x and y and z). If you increase the size of the array, you see that the number of 'unsorted' values may increase quadratically, which means the worst-case time complexity of the search algorithm must also increase quadratically.
You can find the smallest array dimension (let it be `r`), and for each matrix `arr[*][*][k]` search for the given number in `O(m+n)` time, which gives `O((m+n)*r)` total time complexity.
Or, if one of the array sizes is much larger than the others (`r >> m*n`), you can use binary search in `arr[i][j][*]` (for each i, j), which gives `O(m*n*log(r))` time complexity.
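For illustration, here is a sketch of the `O((m+n)*r)` approach in Python: the 2-D staircase search from the question, run once per layer `k`:

```python
def search_3d(cube, target):
    """Search cube[m][n][r], sorted along every axis, in O((m+n)*r) time
    by running the 2-D 'staircase' search on each of the r layers."""
    if not cube or not cube[0] or not cube[0][0]:
        return False
    m, n, r = len(cube), len(cube[0]), len(cube[0][0])
    for k in range(r):
        row, col = 0, n - 1           # start at the top-right of layer k
        while row < m and col >= 0:
            v = cube[row][col][k]
            if v == target:
                return True
            elif v > target:
                col -= 1              # everything below v in this column is even larger
            else:
                row += 1              # everything left of v in this row is even smaller
    return False
```

Example: `cube[i][j][k] = i + j + k` is sorted along all three axes, so `search_3d(cube, 4)` finds a match while values outside the stored range return `False`.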
|
Download file from Amazon S3 using REST API
I have my own `REST API` to call in order to **download** a file. *(In the end, the file could be stored on different kinds of servers... Amazon S3, locally, etc.)*
To get a file from s3, I should use this method:
```
var url = s3.getSignedUrl('getObject', params);
```
This will give me a downloadable link to call.
Now, my question is: how can I use my own REST API to download a file when it comes from that link? Is there a way to redirect the call?
I'm using `Hapi` for my **REST** server.
```
{
    method: "GET", path: "/downloadFile",
    config: {auth: false},
    handler: function (request, reply) {
        // TODO
        reply({})
    }
},
```
|
Instead of using a redirect to download the desired file, just return an unbuffered stream from S3. An [`unbufferedStream`](http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/HttpResponse.html#createUnbufferedStream-property) can be obtained from the `HttpResponse` within the `AWS-SDK`. This means there is no need to download the file from S3 to your server, read it in, and then have the requester download it again.
FYI, I use this `getObject()` approach with Express and have never used Hapi; however, I think the route definition is pretty close, and hopefully it captures the essence of what I'm trying to achieve.
### Hapi.js route
```
const getObject = require('./getObject');
{
    method: "GET", path: "/downloadFile",
    config: {auth: false},
    handler: function (request, reply) {
        let key = '';    // get key from request
        let bucket = ''; // get bucket from request
        return getObject(bucket, key)
            .then((response) => {
                // reply() returns a Hapi response object that can be
                // decorated with the status code and headers from S3
                const res = reply(response.readStream).code(response.statusCode);
                Object.keys(response.headers).forEach((header) => {
                    res.header(header, response.headers[header]);
                });
                return res;
            })
            .catch((err) => {
                // handle err
                return reply('error').code(500);
            });
    }
},
```
### getObject.js
```
const AWS = require('aws-sdk');
const S3 = new AWS.S3(<your-S3-config>);

module.exports = function getObject(bucket, key) {
    return new Promise((resolve, reject) => {
        // Get the file from the bucket
        S3.getObject({
            Bucket: bucket,
            Key: key
        })
        .on('error', (err) => {
            return reject(err);
        })
        .on('httpHeaders', (statusCode, headers, response) => {
            // If the Key was found inside Bucket, prepare a response object
            if (statusCode === 200) {
                let responseObject = {
                    statusCode: statusCode,
                    headers: {
                        'Content-Disposition': 'attachment; filename=' + key
                    }
                };
                if (headers['content-type'])
                    responseObject.headers['Content-Type'] = headers['content-type'];
                if (headers['content-length'])
                    responseObject.headers['Content-Length'] = headers['content-length'];
                responseObject.readStream = response.httpResponse.createUnbufferedStream();
                return resolve(responseObject);
            }
        })
        .send();
    });
}
```
|
How to extract all regex matches in a file using Vim?
Consider the following example:
```
case Foo:
    ...
    break;
case Bar:
    ...
    break;
case More: case Complex:
    ...
    break;
...
```
Say, we would like to retrieve all matches of the regex `case \([^:]*\):` (the whole matching text or, even better, the part between `\(` and `\)`), which should give us (preferably in a new buffer) something like this:
```
Foo
Bar
More
Complex
...
```
Another example of a use case would be extraction of some fragments of an HTML file, for instance, image URLs.
Is there a simple way to collect all regex matches and take them out to a separate buffer in Vim?
**Note:** It’s similar to the question “[How to extract text matching a regex using Vim?](https://stackoverflow.com/questions/6564828/extract-text-using-vim)”. However, unlike the setting in that question, I’m also interested in removing the lines that don’t match, preferably without a hugely complicated regex.
|
There is a general way of collecting pattern matches throughout
a piece of text. The technique takes advantage of the substitute
with an expression feature of the `:substitute` command (see
`:help sub-replace-\=`). The key idea is to use a substitution
enumerating all of the pattern matches to evaluate an expression
storing them without replacement.
First, let us consider saving the matches. In order to keep a sequence
of matching text fragments, it is convenient to use a list (see
`:help List`). However, it is not possible to modify a list
straightforwardly, using the `:let` command, since there is no way to
run Ex commands in expressions (including `\=` substitute expressions).
Yet, we can call one of the functions that modify a list in place, for
example, the `add()` function that appends a given item to a list
(see `:help add()`).
Another problem is how to avoid text modifications while running
a substitution. One approach is to make the pattern always have
a zero-width match by prepending `\ze` or by appending `\zs` atoms
to it (see `:help /\zs`, `:help /\ze`). The pattern modified in this
way captures an empty string preceding or succeeding an occurrence
of the original pattern in text (such matches are called *zero-width*
matches in Vim; see `:help /zero-width`). Then, if the replacement
text is also empty, substitution effectively changes nothing: it just
replaces a zero-width match with an empty string.
Since the `add()` function, like most of the list modifying functions,
returns the reference to the changed list, for our technique to work
we need to somehow get an empty string from it. The simplest way
is to extract a sublist of zero length from it by specifying a range
of indices such that a starting index is greater than an ending one.
Combining the aforementioned ideas, we obtain the following Ex command:
```
:let m=[] | %s/\<case\s\+\(\w\+\):\zs/\=add(m,submatch(1))[1:0]/g
```
After its execution, all matches of the first subgroup are accumulated
in the list referenced by the variable `m`, and can be used as is
or processed in some way. For instance, to paste the contents of the
list one by one on separate lines in Insert mode, type
>
> `Ctrl`+`R` `=m` `Enter`
>
>
>
To do the same in Normal mode, simply use the `:put` command:
```
:put=m
```
---
Starting with version 7.4 (see `:helpg Patch 7.3.627`), Vim evaluates
a `\=` expression in the replacement string of a substitution command
for every match of the pattern, even when the `n` flag is given
(which instructs it to simply count the number of matches without
substituting—see `:help :s_n`). What the expression evaluates to
does not matter in that case, because the resulting value is being
discarded anyway, as no substitution takes place during counting.
This allows us to take advantage of the side effects of an expression
without worrying about leaving the contents of the buffer intact in
the process, so all the trickery with zero-width matching and
empty-sublist indexing can be elided:
```
:let m=[] | %s/\<case\s\+\(\w\+\):/\=add(m,submatch(1))/gn
```
Conveniently, the buffer does not even get marked as modified after
running this command.
|
Unable to clone Python venv to another PC
I want to clone my existing venv to another PC, but simply copying and pasting does not work. When I copy the venv, paste it onto the second machine, and run
>
> pip list
>
>
>
It only lists `pip` and `setuptools` as installed packages.
I tried another way to clone the packages.
I created a new venv on the second machine and copied all the files of the first venv into it, skipping the files that already existed with the same name in the new venv. Now, when I run
>
> pip list
>
>
>
It shows all the dependencies, but when I try to launch Jupyter Notebook with
>
> jupyter notebook
>
>
>
It gives the following error.
>
> Fatal error in launcher: Unable to create process using '"f:\path\to\first\_venv\on\_first\_machine\scripts\python.exe"
> "C:\path\to\new\_venv\on\_the\_second\_machine\Scripts\jupyter.exe" notebook': The system cannot find the file specified.
>
>
>
**I don't know to make things working. Please help!**
**Edit**
The problem is that I don't have an internet connection on the second machine. It's a remote machine with some security protocols applied, and having no internet connection is part of the security! My bad :'(
|
You can't copy-paste venvs from one machine to another since scripts in them may refer to system locations. (The same stands for attempting to move venvs within a machine.)
Instead, recreate the environment on the new machine:
1. On the old machine, run `pip freeze -l > packages.txt` in the virtualenv.
2. Move `packages.txt` over to the new machine.
3. Create a new virtualenv on the new machine and enter it.
4. Install the packages from the txt file: `pip install -r packages.txt`.
**EDIT:** If you don't have internet access on the second machine, you can continue from step 2 with:
3. Run `pip wheel -w wheels -r packages.txt` in the venv on the first machine. This will download and build `*.whl` packages for all the packages you require. Note that this assumes both machines are similar in OS and architecture!
4. Copy the wheel files over to the new machine.
5. Create a new virtualenv on the new machine and enter it.
6. Install the packages from wheels in the new virtualenv: `pip install *.whl`.
|
Nested Configuration Section app.config
I can't find any examples of how to access a nested configuration section like this in an app.config:
```
<my.configuration>
  <emailNotification>
    <to value="[email protected]" />
    <from value="[email protected]" />
    <subject value="Subject" />
    <smtpHost value="smtp.you.com" />
    <triggers>
      <add name="1" varAlias="Var1" lower="-200" upper="-150"/>
    </triggers>
  </emailNotification>
</my.configuration>
```
I used ConfigurationElementCollection and ConfigurationElement before. But I don't know how to do the above?
|
You need to:
Define `my.configuration` as section group and `emailNotification` as a section within the group. Add following to the configuration file:
```
<configSections>
  <sectionGroup name="my.configuration"
                type="SectionGroupRetrieval.MyConfigurationGroup, SectionGroupRetrieval">
    <section name="emailNotification"
             type="SectionGroupRetrieval.EmailNotificationSection, SectionGroupRetrieval" />
  </sectionGroup>
</configSections>
```
Implement the configuration section group (`my.configuration`).
```
public class MyConfigurationGroup : ConfigurationSectionGroup
{
    [ConfigurationProperty( "emailNotification" )]
    public EmailNotificationSection EmailNotification
    {
        get { return (EmailNotificationSection)base.Sections[ "emailNotification" ]; }
    }
}
```
Implement the configuration section (`emailNotification`).
```
public class EmailNotificationSection : ConfigurationSection
{
    [ConfigurationProperty( "to" )]
    public ValueElement To
    {
        get { return (ValueElement)base[ "to" ]; }
    }

    [ConfigurationProperty( "from" )]
    public ValueElement From
    {
        get { return (ValueElement)base[ "from" ]; }
    }

    [ConfigurationProperty( "subject" )]
    public ValueElement Subject
    {
        get { return (ValueElement)base[ "subject" ]; }
    }

    [ConfigurationProperty( "smtpHost" )]
    public ValueElement SmtpHost
    {
        get { return (ValueElement)base[ "smtpHost" ]; }
    }

    [ConfigurationProperty( "triggers" )]
    public TriggerElementCollection Triggers
    {
        get { return (TriggerElementCollection)base[ "triggers" ]; }
    }
}
```
Implement necessary configuration elements and configuration element collection.
```
public class ValueElement : ConfigurationElement
{
    [ConfigurationProperty( "value" )]
    public string Value
    {
        get { return (string)base[ "value" ]; }
        set { base[ "value" ] = value; }
    }
}

public class TriggerElement : ConfigurationElement
{
    [ConfigurationProperty( "name" )]
    public string Name
    {
        get { return (string)base[ "name" ]; }
        set { base[ "name" ] = value; }
    }

    [ConfigurationProperty( "varAlias" )]
    public string VarAlias
    {
        get { return (string)base[ "varAlias" ]; }
        set { base[ "varAlias" ] = value; }
    }

    [ConfigurationProperty( "lower" )]
    public int Lower
    {
        get { return (int)base[ "lower" ]; }
        set { base[ "lower" ] = value; }
    }

    [ConfigurationProperty( "upper" )]
    public int Upper
    {
        get { return (int)base[ "upper" ]; }
        set { base[ "upper" ] = value; }
    }
}

[ConfigurationCollection( typeof( TriggerElement ) )]
public class TriggerElementCollection : ConfigurationElementCollection
{
    public TriggerElement this[ string name ]
    {
        get { return (TriggerElement)base.BaseGet( name ); }
    }

    public TriggerElement this[ int index ]
    {
        get { return (TriggerElement)base.BaseGet( index ); }
    }

    protected override ConfigurationElement CreateNewElement()
    {
        return new TriggerElement();
    }

    protected override object GetElementKey( ConfigurationElement element )
    {
        return ( (TriggerElement)element ).Name;
    }
}
```
After updating the configuration file and implementing necessary configuration bits, you can access you section as follows:
```
Configuration config = ConfigurationManager.OpenExeConfiguration( ConfigurationUserLevel.None );
MyConfigurationGroup myConfiguration = (MyConfigurationGroup)config.GetSectionGroup( "my.configuration" );
EmailNotificationSection section = myConfiguration.EmailNotification;
```
|
Validates acceptance always failing
I can't see what I'm missing, but something is obviously not right.
In model:
```
validates :terms, :acceptance => true, :on => :update
```
Trying a few options:
```
>> a = Factory(:blog_agreement)
=> #<BlogAgreement id: 54, terms: false, created_at: "2011-01-20 11:33:03", updated_at: "2011-01-20 11:33:03", accept_code: "fa27698206bb15a6fba41857f12841c363c0e291", user_id: 874>
>> a.terms
=> false
>> a.terms = true
=> true
>> a.save
=> false
>> a.terms = "1"
=> "1"
>> a.save
=> false
>> a.terms = 1
=> 1
>> a.save
=> false
>> a.errors.full_messages
=> ["Terms must be accepted"]
```
|
### Updated answer..
So it turns out that the problem was having `terms` as an actual column in the table. In general, `validates_acceptance_of` is used without such a column, in which case it defines an attribute accessor and uses that for its validation.
In order for `validates_acceptance_of` to work when it maps to a real table column, it is necessary to pass the `:accept` option, like:
```
validates :terms, :acceptance => {:accept => true}
```
The reason for this has to do with typecasting in Active Record. When the named attribute actually exists, AR performs typecasting based on the database column type. In most cases the acceptance column will be defined as a boolean and so `model_object.terms` will return true or false.
When there's no such column `attr_accessor :terms` simply returns the value passed in to the model object from the params hash which will normally be `"1"` from a checkbox field.
|
How am I supposed to use ReturnUrl = ViewBag.ReturnUrl in MVC 4
I'm working on an ASP.NET MVC 4 application. I'm using/learning SimpleMembershipProvider and trying to stick to the default logic created by VS2012 with the Internet template (if I'm not mistaken, the one with SimpleMembershipProvider out of the box).
I'm stuck at the `AccountController`, where I just can't figure out how exactly I can use this method:
```
private ActionResult RedirectToLocal(string returnUrl)
{
    if (Url.IsLocalUrl(returnUrl))
    {
        return Redirect(returnUrl);
    }
    else
    {
        return RedirectToAction("Index", "Home");
    }
}
```
From what I understand, the whole idea is to get redirected back to the location from which you decided to log in (exactly what I want to accomplish). I took a look at how it's used in the view:
```
@using (Html.BeginForm(new { ReturnUrl = ViewBag.ReturnUrl }))
```
I looked for a place where `ViewBag.ReturnUrl` is actually set with some value, and I only found this method:
```
[AllowAnonymous]
public ActionResult Login(string returnUrl)
{
    ViewBag.ReturnUrl = returnUrl;
    return View();
}
```
and I'm getting pretty confused about how exactly I'm supposed to get the location/URL. I set some breakpoints and I have never seen `returnUrl` be anything other than `null`, which in this scenario seems pretty logical to me since it doesn't get a value anywhere (unless I'm missing something, of course).
So I really can't figure out how this works. I post the above just to show that I tried to do my homework; I investigated as much as I could but didn't find an answer, so I'm asking here. Could you provide an explanation/example of how this actually works?
|
When using forms authentication and the user is not authenticated or authorized the ASP.NET security pipeline will redirect to the login page and pass as a parameter in the query string the *returnUrl* equal to the page that redirected to the login page. The login action grabs the value of this parameter and puts it in the ViewBag so it can be passed to the View.
```
[AllowAnonymous]
public ActionResult Login(string returnUrl)
{
    ViewBag.ReturnUrl = returnUrl;
    return View();
}
```
The View then stores this value in the form as shown by this line of code in the View.
```
@using (Html.BeginForm(new { ReturnUrl = ViewBag.ReturnUrl }))
```
The reason it is stored in the View is so that when the user does a Submit after entering their user name and password, the controller action that handles the post back will have access to this value.
```
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public ActionResult Login(LoginModel model, string returnUrl)
{
    if (ModelState.IsValid && WebSecurity.Login(model.UserName, model.Password, persistCookie: model.RememberMe))
    {
        return RedirectToLocal(returnUrl);
    }

    // If we got this far, something failed, redisplay form
    ModelState.AddModelError("", "The user name or password provided is incorrect.");
    return View(model);
}
```
If the model state is valid and they are authenticated by calling the *WebSecurity.Login* method then it calls the method *RedirectToLocal* with the value of *returnUrl* which came from the View, which originally came form the login action that created the View.
The *returnUrl* value will be null if the user is not redirected to the login page as is the case when they just click on the login link at the top of the page in the default layout. In this case the user will be redirected to the home page after successful login. The whole purpose of the *returnUrl* is to automatically send the user back to the page they were trying to access before they were authenticated/authorized.
|
Call a method at runtime
I am wondering if it is possible to load a .NET DLL at runtime, view the methods available, and execute one of them.
If this is possible, could you point me in the right direction?
|
Generally, you use System.Reflection classes to do this task.
Specifically, you'd load the DLL via [Assembly.Load](http://msdn.microsoft.com/en-us/library/system.reflection.assembly.load.aspx) (or [Assembly.LoadFrom](http://msdn.microsoft.com/en-us/library/system.reflection.assembly.loadfrom.aspx)) and then call [Assembly.GetTypes](http://msdn.microsoft.com/en-us/library/system.reflection.assembly.gettypes.aspx) and then for each type call [Type.GetMethods](http://msdn.microsoft.com/en-us/library/td205ybf.aspx). When you have a [MethodInfo](http://msdn.microsoft.com/en-us/library/system.reflection.methodinfo.aspx), you can call [MethodInfo.Invoke](http://msdn.microsoft.com/en-us/library/system.reflection.methodinfo.invoke.aspx) on it.
|
Undoing temporary changes with Git
Say, I'm on 'master' and have a blob:
```
DEBUG = FALSE
CACHE_SIZE = 100
code
code
code
```
Now I start debugging in a new branch...
```
DEBUG = TRUE # Don't forget to turn off!
CACHE_SIZE = 0 # Don't forget to set back to 100!
```
... fix some bugs, change some code... and merge my fixes back into 'master'. But unfortunately I have forgotten to return these "don't forget" values to their originals.
How can I automate the process of returning some lines to their original values? Or at least shorten it to a single command.
Maybe, some temporary commit, or stash, or some other technique?
|
Cameron has some good ideas for shorter term debug changes. I wanted to add a common one that works even for larger or more permanent sets of local debug changes, like if you commonly make the same "don't forget" changes every time you add a feature. I've heard it called a loom, quilt, stacked branches, and a pipeline. You can find plugins with those names to help maintain this kind of workflow, but there are subtle differences between them I've never really grasped, and the technique is not too difficult to do manually.
The basic idea is you add another branch between *master* and *feature*, let's call it *debug*. You make all your "don't forget" changes in that branch, then branch off again from *debug* to make *feature*, which contains all your changes that will go into production as normal. Then, to remove all your "don't forget" changes in *feature*, do:
```
git rebase --onto master debug feature
```
That makes it look like you branched straight from *master* and never added the changes in the *debug* branch. Then you merge into *master* like normal. The next time you want to add a feature, you just merge *master* into *debug* and your "don't forget" changes are automatically reapplied to the latest upstream code. Then just create a new feature branch from *debug* and the cycle starts again.
Obviously, you still have to remember to do the rebase before merging into *master*. Cameron's hook idea can be used to prevent merges if you forget.
|
method must call super() error in Netbeans
Recently I've made a Netbeans project and I am using SVN along with it. I am seeing a duplicate class error, and the console says
`java.lang.VerifyError: (class: pie/chart/explorer/PieChartExplorer, method: <init> signature: ()V) Constructor must call super() or this()
Could not find the main class: pie.chart.explorer.PieChartExplorer. Program will exit.
Exception in thread "main" Java Result: 1`
Here is PieChartExplorer.java:
```
package pie.chart.explorer;
import java.awt.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.*;
public class PieChartExplorer extends JFrame implements ActionListener {
JTextField one = new JTextField(10);
JTextField two = new JTextField(10);
JTextField three = new JTextField(10);
JButton sub = new JButton("Click to be amazed");
public PieChartExplorer() {
super("Pie Chart Explorer");
setSize(300,100);
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
FlowLayout flo = new FlowLayout();
setLayout(flo);
setVisible(true);
add(one);
add(two);
add(three);
sub.addActionListener(this);;
add(sub);
}
public static void main(String[] args) {
PieChartExplorer app = new PieChartExplorer();
}
@Override
public void actionPerformed(ActionEvent e) {
Object source = e.getSource();
if(source == sub) {
try {
Pie show = new Pie(Float.parseFloat(one.getText()),Float.parseFloat(two.getText()),Float.parseFloat(three.getText()));
} catch(Exception ex) {
JOptionPane.showMessageDialog(this, "Please check entered data");
}
}
}
}
```
I have tried:
1. Clean and Rebuild project
2. Making sure that I have called super in all constructors
How can this be fixed? [Code for download](http://www.mediafire.com/?kby3dfjqz99jui8).
|
I saw these symptoms just the other day.
I had a file I had been editing and decided I wanted to split my changes into 2 commits. I went to the directory containing my file "x/y/Z.java", made a directory in "x/y" named "backup", moved "Z.java" there, and pulled a fresh copy from version control. *Note all of this was done outside the IDE.*
Back in the IDE I merged in the changes for the first commit and when I built I got the duplicate class message for "Z.java".
When I copied the source to "backup" I did it outside the IDE and it still had the original package "x.y" as did my newly edited "Z.java". NB would not compile the new "Z.java" because it could see it had already created "x.y.Z.class" (from "x/y/backup/Z.java").
There are 2 ways to fix this:
1. Rename "x/y/backup/Z.java" to "x/y/backup/Z.java.backup". (Prevent the backup copy from being compiled.)
2. Change the package in "x/y/backup/Z.java" from "x.y" to "x.y.backup". (Make the backup create a different class file.)
After making either of these changes, perform a "clean and build". **Note:** simply building will not fix the problem, you need to perform a clean to remove the rogue class file.
**Note:** #1 was done by renaming Z.java from the command line, not within NB. NB will not let you change the file extension.
|
How to delete temp files created by Google Chrome in Linux?
I want to clean up the cache files or raw data that Google Chrome creates while playing videos from YouTube and performing other site tasks.
During cleanup I want to protect my history and cookies.
Where does Google Chrome store cache data in Linux?
Is it possible to manually delete Google Chrome's cache files?
While deleting those files, I want to at least keep my browser history.
Note: I am using a standard (non-administrator) account.
|
Normally Google Chrome saves these temporary files inside:
```
~/.cache/google-chrome/[profile]/Cache/
```
and
```
~/.config/google-chrome/[profile]/Application Cache/Cache/
```
And yes, you can delete these manually; you don't need any special permission.
So just take a look at these directories and choose what to delete.
Another interesting directory is
```
~/.config/google-chrome/Default
```
Here you find your history , bookmarks , extensions, ....
so choose carefully what you want to purge.
---
UPDATE:
The bookmarks are not stored in a directory; they are a single file. The path to this file is:
```
~/.config/google-chrome/Default/Bookmarks
```
The history is likewise a single file, an SQLite3 database, under this path:
```
~/.config/google-chrome/Default/History
```
A simple tip:
```
nautilus ~/.config/google-chrome/Default
```
will open the directory; take some time to check the files found there, and you can easily decide what to keep and what to delete.
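A small shell sketch of a cleanup along these lines. To keep it side-effect free, it operates on a throwaway mock of the directory layout (drop the mock setup and replace `$CHROME` with `$HOME` to target your real home directory); the cleanup itself only touches the `Cache` directories, never the `~/.config` tree where history and bookmarks live:

```shell
# Build a throwaway mock of Chrome's directory layout (demonstration only)
CHROME="$(mktemp -d)"
mkdir -p "$CHROME/.cache/google-chrome/Default/Cache"
mkdir -p "$CHROME/.config/google-chrome/Default"
touch "$CHROME/.cache/google-chrome/Default/Cache/f_000001"
touch "$CHROME/.config/google-chrome/Default/History"

# The actual cleanup: wipe cache contents for every profile
rm -rf "$CHROME"/.cache/google-chrome/*/Cache/*

ls "$CHROME/.cache/google-chrome/Default/Cache" | wc -l   # 0: cache is empty
ls "$CHROME/.config/google-chrome/Default"                # History survives
```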
|
what is the best html parser for java?
Assuming we have to use java, what is the best html parser that is flexible to parse lots of different html content, and also requires not a whole lot of code to do complex types of parses?
|
I would recommend [Jsoup](http://jsoup.org) for this. It has a very nice API with support for [jQuery like CSS selectors and non-verbose element iteration](http://jsoup.org/cookbook/extracting-data/selector-syntax). To take a copy of [this answer](https://stackoverflow.com/questions/2835505/how-to-scan-a-website-or-page-for-info-and-bring-it-into-my-program/2835555#2835555) as an example, this prints your own question and the name of all answerers here:
```
URL url = new URL("https://stackoverflow.com/questions/3121136");
Document document = Jsoup.parse(url, 3000);
String question = document.select("#question .post-text").text();
System.out.println("Question: " + question);
Elements answerers = document.select("#answers .user-details a");
for (Element answerer : answerers) {
System.out.println("Answerer: " + answerer.text());
}
```
An alternative would be [XPath](http://www.w3schools.com/XPath/xpath_syntax.asp), but Jsoup is more useful for web developers who already have a good grasp of CSS selectors.
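Assuming a Maven build, Jsoup can be pulled in with a single dependency (the version below is illustrative; check Maven Central for the current release):

```xml
<dependency>
    <groupId>org.jsoup</groupId>
    <artifactId>jsoup</artifactId>
    <!-- version is illustrative; use the latest release -->
    <version>1.15.3</version>
</dependency>
```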
|
most pythonic way to pass a nested list into formatting
What's the best way to pass this:
```
board = [['X','o','o','o','o','X'],
['X','o','o','_','_','X'],
['X','o','o','_','_','X'],
['X','o','o','o','X','X'],
['X','o','o','o','X','X'],
['X','o','o','_','X','X'],
['_','o','o','o','X','X']]
```
into this:
```
"""
╔═══╦═══╦═══╦═══╦═══╦═══╦═══╗
║ {} ║ {} ║ {} ║ {} ║ {} ║ {} ║ {} ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ {} ║ {} ║ {} ║ {} ║ {} ║ {} ║ {} ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ {} ║ {} ║ {} ║ {} ║ {} ║ {} ║ {} ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ {} ║ {} ║ {} ║ {} ║ {} ║ {} ║ {} ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ {} ║ {} ║ {} ║ {} ║ {} ║ {} ║ {} ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ {} ║ {} ║ {} ║ {} ║ {} ║ {} ║ {} ║
╚═══╩═══╩═══╩═══╩═══╩═══╩═══╝
"""
```
With formatting? Note: the lists are vertical based in the board definition, so a simple for loop won't work.
|
Assuming `board` is the board and `fmt` is that format string, unpack it with `*` after you flatten it using any method you like, such as a generator expression:
```
>>> print(fmt.format(*(item for row in board for item in row)))
╔═══╦═══╦═══╦═══╦═══╦═══╦═══╗
║ X ║ o ║ o ║ o ║ o ║ X ║ X ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ o ║ o ║ _ ║ _ ║ X ║ X ║ o ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ o ║ _ ║ _ ║ X ║ X ║ o ║ o ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ o ║ X ║ X ║ X ║ o ║ o ║ o ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ X ║ X ║ X ║ o ║ o ║ _ ║ X ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ X ║ _ ║ o ║ o ║ o ║ X ║ X ║
╚═══╩═══╩═══╩═══╩═══╩═══╩═══╝
```
Or `itertools.chain.from_iterable`:
```
>>> import itertools
>>> print(fmt.format(*itertools.chain.from_iterable(board)))
╔═══╦═══╦═══╦═══╦═══╦═══╦═══╗
║ X ║ o ║ o ║ o ║ o ║ X ║ X ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ o ║ o ║ _ ║ _ ║ X ║ X ║ o ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ o ║ _ ║ _ ║ X ║ X ║ o ║ o ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ o ║ X ║ X ║ X ║ o ║ o ║ o ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ X ║ X ║ X ║ o ║ o ║ _ ║ X ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ X ║ _ ║ o ║ o ║ o ║ X ║ X ║
╚═══╩═══╩═══╩═══╩═══╩═══╩═══╝
```
Or you can use a plain `itertools.chain` and unpack the `board` with `*` first:
```
>>> import itertools
>>> print(fmt.format(*itertools.chain(*board)))
╔═══╦═══╦═══╦═══╦═══╦═══╦═══╗
║ X ║ o ║ o ║ o ║ o ║ X ║ X ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ o ║ o ║ _ ║ _ ║ X ║ X ║ o ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ o ║ _ ║ _ ║ X ║ X ║ o ║ o ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ o ║ X ║ X ║ X ║ o ║ o ║ o ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ X ║ X ║ X ║ o ║ o ║ _ ║ X ║
╠═══╬═══╬═══╬═══╬═══╬═══╬═══╣
║ X ║ _ ║ o ║ o ║ o ║ X ║ X ║
╚═══╩═══╩═══╩═══╩═══╩═══╩═══╝
```
|
How to handle asynchronous callbacks in a synchronous way in Java?
I have an architecture-related question. This is a language-independent question, but as I come from a Java background, it will be easier for me if someone guides me in the Java way.
Basically, the middleware I'm writing communicates with a SOAP based third party service. The calls are async - in a way that, when a service is called, it returns with a response 01 - processing; meaning that the third party has successfully received the request. In the original SOAP request, one callback URL has to be submitted each time, where third party actually sends the result. So, calling a particular service doesn't actually return the result immediately; the result is received in a separate HTTP endpoint in the middleware.
Now in our frontend, we don't want to complicate the user experience. We want our users to call a middleware function (via menu items/buttons), and get the result immediately; and leave the dirty work to the middleware.
Please note that the middleware function (let's say `X()`) which was invoked from the front end and the middleware endpoint URL (let's call it *Y*) where the third party pushes the result are completely separate from each other. `X()` somehow has to wait, then fetch the result received at *Y*, and then return the result to the frontend.
[](https://i.stack.imgur.com/bXL4E.png)
How can I build a robust solution to achieve the above mentioned behavior?
The picture depicts my case perfectly. Any suggestions will be highly appreciated.
|
This question could be more about integration patterns than it is about multi-threading. But requests in the same application/JVM can be orchestrated using a combination of asynchronous invocation and the observer pattern:
This is better done using an example (exploiting your Java knowledge). Check the following simplistic components that try to replicate your scenario:
**The third-party service:** it exposes an operation that returns a correlation ID and starts the long-running execution
```
class ExternalService {
public String send() {
return UUID.randomUUID().toString();
}
}
```
**Your client-facing service:** It receives a request, calls the third-party service and then **waits** for the response after registering with the result receiver:
```
class RequestProcessor {
public Object submitRequest() {
String correlationId = new ExternalService().send();
return new ResultReceiver().register(correlationId).join();
}
}
```
**The result receiver:** It exposes an operation to the third-party service, and maintains an internal correlation registry:
```
class ResultReceiver {
Map<String, CompletableFuture<Object>> subscribers;
CompletableFuture<Object> register(String responseId) {
CompletableFuture<Object> future = new CompletableFuture<Object>();
this.subscribers.put(responseId, future);
return future;
}
public void externalResponse(String responseId, Object result) {
this.subscribers.get(responseId).complete(result);
}
}
```
Futures, promises, and callbacks are handy in this case. Synchronization is done by the initial request processor in order to force the execution to block for the client.
Now this can raise a number of issues that are not addressed in this simplistic class set. Some of these problems may be:
- race condition between `new ExternalService().send()` and `new ResultReceiver().register(correlationId)`. This is something that can be solved in `ResultReceiver` if it understands that some responses can arrive very fast (a 2-way wait, so to speak)
- Never-coming results: results can take too long or simply run into errors. These future APIs typically offer timeouts to force cancellation of the request. For example:
```
new ResultReceiver().register(correlationId)
.get(10000, TimeUnit.SECONDS);
```
|
Scraping the full content from a lazy-loading webpage
I've written a script in Python in combination with Selenium which is able to scrape 1000 links from a webpage that uses lazy loading: it displays its content 20 links at a time, and the full content can only be seen by scrolling to the very bottom. My script can scroll the webpage to the end. After collecting the 1000 links from the main page, it visits each individual link to scrape the name of the CEO and the web address of that organization. It is working great now. I tried to structure the whole thing accordingly. Here is the full code:
```
from selenium import webdriver
import time
def get_links(driver):
driver.get('http://fortune.com/fortune500/list/')
while True:
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
time.sleep(3)
links = [posts.get_attribute("href") for posts in driver.find_elements_by_xpath("//li[contains(concat(' ', @class, ' '), ' small-12 ')]//a")]
if (len(links) == 1000):
break
for link in links:
process_links(driver, link)
def process_links(driver, sub_links):
driver.get(sub_links)
for items in driver.find_elements_by_xpath('//div[contains(@class,"company-info-card-table")]'):
Name = items.find_element_by_xpath('.//div[contains(@class,"small-7")]/p[@class="remove-bottom-margin"]')
Web = items.find_element_by_xpath('.//div[contains(@class,"small-9")]/a')
print(Name.text, Web.get_attribute("href"))
if __name__ == '__main__':
driver = webdriver.Chrome()
try:
get_links(driver)
finally:
driver.quit()
```
|
### Code Flow:
- you are redefining `links` on every iterations of the `while` loop - you basically need to do it once
- as a `while` loop exit condition, we can use the fact that there are line numbers in the company list grid - we can simply wait for the number `1000` to show up while scrolling
- I would also create a class to have the `driver` and `WebDriverWait` instance shared across the class instance methods
- instead of a hardcoded 3 second delay, use an [Explicit Wait](http://selenium-python.readthedocs.io/waits.html#explicit-waits) with a condition of the last line number to change - this would be much faster and reliable overall
### Code Style:
- `posts` variable name does not actually correspond to what it is - name it `company_link` instead
- `Name` and `Web` violate [PEP8 Python naming guidelines](https://www.python.org/dev/peps/pep-0008/#naming-conventions)
- `process_links` should be `process_link` - since you are processing a single link at a time. And, actually, we can name it `get_company_data` and let it return the data instead of printing it
### Locating Elements:
- don't use XPaths to locate elements - they are generally the slowest and the least readable
- for the company links, I'd better use a more readable and concise `ul.company-list > li > a` CSS selector
- in the `process_links` method you don't actually need a loop since there is a single company being processed. And, I think, you can generalize and return a dictionary generated from a company web page data dynamically - data labels to data values
Here is the modified working code:
```
from pprint import pprint
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
class Fortune500Scraper:
def __init__(self):
self.driver = webdriver.Chrome()
self.wait = WebDriverWait(self.driver, 10)
def get_last_line_number(self):
"""Get the line number of last company loaded into the list of companies."""
return int(self.driver.find_element_by_css_selector("ul.company-list > li:last-child > a > span:first-child").text)
def get_links(self, max_company_count=1000):
"""Extracts and returns company links (maximum number of company links for return is provided)."""
self.driver.get('http://fortune.com/fortune500/list/')
self.wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "ul.company-list")))
last_line_number = 0
while last_line_number < max_company_count:
self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
self.wait.until(lambda driver: self.get_last_line_number() != last_line_number)
last_line_number = self.get_last_line_number()
return [company_link.get_attribute("href")
for company_link in self.driver.find_elements_by_css_selector("ul.company-list > li > a")]
def get_company_data(self, company_link):
"""Extracts and prints out company specific information."""
self.driver.get(company_link)
return {
row.find_element_by_css_selector(".company-info-card-label").text: row.find_element_by_css_selector(".company-info-card-data").text
for row in self.driver.find_elements_by_css_selector('.company-info-card-table > .columns > .row')
}
if __name__ == '__main__':
scraper = Fortune500Scraper()
company_links = scraper.get_links(max_company_count=100)
for company_link in company_links:
company_data = scraper.get_company_data(company_link)
pprint(company_data)
print("------")
```
Prints:
```
{'CEO': 'C. Douglas McMillon',
'CEO Title': 'President, Chief Executive Officer & Director',
'Employees': '2,300,000',
'HQ Location': 'Bentonville, AR',
'Industry': 'General Merchandisers',
'Sector': 'Retailing',
'Website': 'www.walmart.com',
'Years on Fortune 500 List': '23'}
------
{'CEO': 'Warren E. Buffett',
'CEO Title': 'Chairman & Chief Executive Officer',
'Employees': '367,700',
'HQ Location': 'Omaha, NE',
'Industry': 'Insurance: Property and Casualty (Stock)',
'Sector': 'Financials',
'Website': 'www.berkshirehathaway.com',
'Years on Fortune 500 List': '23'}
------
...
```
|
mergesort C implementation
I wrote this code in C with Xcode, following the mergesort algorithm.
The problem is that sometimes I get EXC\_BAD\_ACCESS and I can't figure out where the error is!
The merge algorithm should work (I tried it outside the mergesort function and it works!). Thank you for your help and patience!
```
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define DIM 6
void mymerge (int v[], int i1,int i2, int last); //mergesort core: merge two ordinated arrays in one bigger ordinated array
void mymergesort (int v[], int lower, int upper);//mergesort
void printv (int v[],int lower, int upper);
int main () {
int i;
srand((unsigned int)time(NULL));
int v[DIM];
for (i=0; i<DIM; i++)
v[i]=rand()%15;
printv(v, 0, DIM-1);
getc(stdin);
mymergesort(v, 0, DIM-1);
printv(v, 0, DIM-1);
}
void printv (int v[],int lower, int upper){
int i;
for (i=lower; i<=upper; i++)
printf("%d\t",v[i]);
}
void mymergesort (int v[], int lower, int upper){
int mid=(upper+lower)/2;
if (upper<lower) {
mymergesort(v, lower, mid);
mymergesort(v, mid+1, upper);
mymerge(v,lower,mid+1,upper);
}
}
void mymerge (int v[], int i1,int i2, int last){
int i=i1,j=i2,k=i1,*vout;
vout=(int*)malloc((last-i1+1)*sizeof(int));
while (i<i2 && j<=last) {
if (v[i]<=v[j]) {
vout[k++]=v[i++];
}else {
vout[k++]=v[j++];
}
}
for (;i<i2;i++) vout[k++]=v[i];
for (;j<=last;j++) vout[k++]=v[j];
for (k=i1; k<=last; k++) v[k]=vout[k];
free(vout);
}
```
EDIT:
Thank you very much! But I think there is another problem: when I try to sort a bigger array (200 elements), the program doesn't work (I get a malloc error: incorrect checksum for freed object - object was probably modified after being freed). But if I run it from the Xcode debugger, everything works fine.
|
This: `vout=(int*)malloc((last-i1)*sizeof(int));` is wrong.
First, the number of elements you want is `last-i1+1`, not `last-i1` - a classic off-by-one. This kind of error is one of the reasons why the convention in C code is to make lower bounds inclusive and upper bounds exclusive - the fewer `+1`s and `-1`s you need to do, the less opportunity to screw up.
The more serious error is that you index `vout` starting from `i1`. If you do it this way, you need to allocate `last+1` element for `vout`, and you never use the first `i1` (index 0 .. `i1-1`).
Fix: First, allocate `last-i1+1` elements. Second, initialize `k` to 0 at the beginning, not `i1`. Third, change the final copy to be
```
for (k=i1; k<=last; k++) v[k] = vout[k-i1];
```
|
Hg Merge specific commit from another branch
I have two branches Dev and Feature1. I was working on Feature1, creating said feature, and committed it. I then wrote the code for Feature2 but committed it under Feature1 branch instead of a new branch (Feature2). So now i have two features in Feature1 branch as two separate commits, but I only want to include the second feature back into Dev.
What is the mercurial way to do this?
|
1. Suppose you have not yet published your commits:
If you want to merge the *Feature2* commit independently of *Feature1*, you should
[move it onto its own branch](https://stackoverflow.com/questions/4665549/mercurial-move-changes-to-a-new-branch/4673148#4673148).
2. If you have already published:
Use the [transplant](https://www.mercurial-scm.org/wiki/TransplantExtension) extension to "duplicate" the *Feature2* commit and put it on
its own branch. Then [backout](https://www.mercurial-scm.org/wiki/Backout) the *Feature2* commit on the *Feature1* branch. Now you
can merge *Feature2* independently of *Feature1* too.
In any case, instead of putting *Feature2* on its own branch, you could also put it
directly onto your *Dev* branch if this is your actual intention.
|
multiple queries with mysql\_query in a c++ project
So, this will not work with mysql\_query.
I am strictly working with C++ and I am not using PHP.
I want this double query to be executed so that I will always have a unique ID in a transaction system with concurrent users creating IDs.
```
mysql_query(connection, \
"INSERT INTO User() VALUES (); SELECT LAST_INSERT_ID(); ");
```
It works perfectly when run directly against the MySQL database, but I need to make it work from my Eclipse project (I am using Ubuntu 12.04 LTS).
My application is quite big and I would not like to change to mysqli if possible, but if there is no other way, it will be OK.
Can you help me with this? Thanks in advance.
|
According to the MySQL C API documentation:
>
> MySQL 5.6 also supports the execution of a string containing multiple
> statements separated by semicolon (“;”) characters. This capability is
> enabled by special options that are specified either when you connect
> to the server with mysql\_real\_connect() or after connecting by
> calling `mysql_set_server_option()`.
>
>
>
And:
>
> CLIENT\_MULTI\_STATEMENTS enables mysql\_query() and mysql\_real\_query()
> to execute statement strings containing multiple statements separated
> by semicolons. This option also enables CLIENT\_MULTI\_RESULTS
> implicitly, so a flags argument of CLIENT\_MULTI\_STATEMENTS to
> mysql\_real\_connect() is equivalent to an argument of
> CLIENT\_MULTI\_STATEMENTS | CLIENT\_MULTI\_RESULTS. That is,
> CLIENT\_MULTI\_STATEMENTS is sufficient to enable multiple-statement
> execution and all multiple-result processing.
>
>
>
So, you can supply several statements in a single mysql\_query() call, separated by a semicolon, assuming you set up your mysql connection a bit differently, using mysql\_real\_connect.
You need to pass the following flag as the last argument: CLIENT\_MULTI\_STATEMENTS, whose documentation says:
>
> Tell the server that the client may send multiple statements in a
> single string (separated by “;”). If this flag is not set,
> multiple-statement execution is disabled. See the note following this
> table for more information about this flag.
>
>
>
See [C API Support for Multiple Statement Execution](http://dev.mysql.com/doc/refman/5.6/en/c-api-multiple-queries.html) and [22.8.7.53. mysql\_real\_connect()](http://dev.mysql.com/doc/refman/5.6/en/mysql-real-connect.html) for more details.
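For completeness, here is a hedged C sketch of the connection setup and the multi-result loop (host, user, password, and database names are placeholders; error handling is trimmed, and it obviously needs a live MySQL server to run). Also worth noting: if all you need is the generated id, the C API provides `mysql_insert_id(conn)`, which returns the `AUTO_INCREMENT` value of the last `INSERT` on that connection without a second statement at all.

```c
#include <stdio.h>
#include <mysql.h>

int main(void) {
    MYSQL *conn = mysql_init(NULL);
    /* CLIENT_MULTI_STATEMENTS is the crucial last argument */
    if (!mysql_real_connect(conn, "localhost", "user", "password", "mydb",
                            0, NULL, CLIENT_MULTI_STATEMENTS)) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
    }

    if (mysql_query(conn,
            "INSERT INTO User() VALUES (); SELECT LAST_INSERT_ID();") == 0) {
        /* walk every result set produced by the statement string */
        do {
            MYSQL_RES *res = mysql_store_result(conn);
            if (res) {
                MYSQL_ROW row = mysql_fetch_row(res);
                if (row)
                    printf("LAST_INSERT_ID: %s\n", row[0]);
                mysql_free_result(res);
            }
        } while (mysql_next_result(conn) == 0);
    }
    mysql_close(conn);
    return 0;
}
```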
|
how to validate Zipcode for US or Canada in iOS?
I want to know whether there is any way to validate a US zip code or a Canadian postal code. I have tried to use a regex.
Like for US
```
- (BOOL)validateZip:(NSString *)candidate {
NSString *emailRegex = @"(^{5}(-{4})?$)|(^[ABCEGHJKLMNPRSTVXY][A-Z][- ]*[A-Z]$)";
NSPredicate *emailTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", emailRegex];
return [emailTest evaluateWithObject:candidate];
}
```
but it's not working. Does anybody have any idea regarding this validation? Is there a REST API for the validation? Please guide me if possible.
|
For the US, you have the quantifiers (`{5}`, `{4}`, `?`) correct but forgot to specify exactly what you're quantifying. You want:
```
(^[0-9]{5}(-[0-9]{4})?$)
```
For Canada, according to Wikipedia, the format is `A0A 0A0`, so I would do:
```
(^[a-zA-Z][0-9][a-zA-Z][- ]*[0-9][a-zA-Z][0-9]$)
```
Now, I'd write the complete expression like this, with case insensitivity enabled:
```
@"^(\\d{5}(-\\d{4})?|[a-z]\\d[a-z][- ]*\\d[a-z]\\d)$"
```
Frankly, I'm not actually familiar with Objective C or iOS, and sadly I haven't tested the above. However, previously I've seen such posts mention `NSRegularExpression`, which is missing in your code, but perhaps isn't necessary. Take a look at others' examples to see what other simple errors you might be making. Good luck.
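The combined pattern can at least be sanity-checked outside Objective-C; here is a quick sketch in Python (the double-escaped `\\d` in the Objective-C literal becomes `\d` in a raw string):

```python
import re

# Case-insensitive so lowercase Canadian postal codes are accepted too
zip_re = re.compile(r"^(\d{5}(-\d{4})?|[a-z]\d[a-z][- ]*\d[a-z]\d)$",
                    re.IGNORECASE)

for good in ["90210", "90210-1234", "K1A 0B1", "k1a0b1"]:
    print(good, bool(zip_re.match(good)))    # all True
for bad in ["9021", "90210-12", "K1A 0B"]:
    print(bad, bool(zip_re.match(bad)))      # all False
```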
|
Pandas df.resample(): Specify NaN threshold for calculation of mean
I want to resample a pandas dataframe from hourly to annual/daily frequency with the `how=mean` method. However, of course some hourly data are missing during the year.
How can I set a threshold for the ratio of allowed NaNs before the mean is set to NaN, too? I couldn't find anything considering that in the docs...
Thanks in advance!
|
Here is a simple solution using `groupby`.
```
# Test data
import numpy as np
import pandas as pd
from pandas.tseries.offsets import Hour

start_date = pd.to_datetime('2015-01-01')
number = 365 * 24
df = pd.DataFrame(np.random.randint(1, 10, number),
                  index=pd.date_range(start=start_date, periods=number, freq='H'),
                  columns=['values'])

# Generating some NaN to simulate missing values on the first day
na_range = pd.date_range(start=start_date, end=start_date + 3 * Hour(), freq='H')
df.loc[na_range, 'values'] = np.nan

# Grouping by day, computing the mean and the count
df = df.groupby(df.index.date).agg(['mean', 'count'])
df.columns = df.columns.droplevel()

# Populating the mean only where the count exceeds the threshold (here 20)
df['values'] = np.nan
df.loc[df['count'] > 20, 'values'] = df['mean']

print(df.head())
# Result
mean count values
2015-01-01 4.947368 20 NaN
2015-01-02 5.125000 24 5.125
2015-01-03 4.875000 24 4.875
2015-01-04 5.750000 24 5.750
2015-01-05 4.875000 24 4.875
```
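The same thresholding can also be sketched with `resample()` itself, by resampling twice: once for the mean and once for the count of non-NaN values, then masking the days that fall below the threshold (data and threshold here are illustrative):

```python
import numpy as np
import pandas as pd

# Two days of hourly data; the first day is missing 10 of its 24 values
idx = pd.date_range('2015-01-01', periods=48, freq='h')
s = pd.Series(np.arange(48.0), index=idx)
s.iloc[:10] = np.nan

daily = s.resample('D').mean()     # mean ignores NaNs
counts = s.resample('D').count()   # counts only non-NaN values
daily[counts < 20] = np.nan        # require at least 20 hourly values per day

print(daily)   # first day becomes NaN, second day keeps its mean
```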
|
Universal linking iOS - where is the .xcodeproj file?
I've been trying for days to get Universal Linking working on iOS for React Native.
The docs [here](https://reactnative.dev/docs/linking#enabling-deep-links) say "On iOS, you'll need to link RCTLinking to your project". I'm then told to go to `node_modules/react-native/Libraries/LinkingIOS` and drag a `.xcodeproj` file into my project.
None of the libraries in the `Libraries` folder have any `.xcodeproj` files in them.
Is there meant to be? Is my `react-native` instance missing something? Googling this issue pulls up nothing at all - does everyone else have `.xcodeproj` files in their `Libraries` folder??
|
Starting with version 0.60 React Native provides auto-linking.
To have the library available in RN versions newer than 0.60 you have to:
**1) Add reference to `LinkingIOS` folder into header search paths**
in
`Xcode -> project target -> projectName ->Build Settings -> header search paths:`
[](https://i.stack.imgur.com/PIe7M.png)
`$(SRCROOT)/../node_modules/react-native/Libraries/LinkingIOS`
**2) Import it in `AppDelegate.m`**
`#import "RCTLinkingManager.h"`
About the other steps that you need to do:
- **upload an apple app site association file to your web server**,
- **preparing the app for universal linking in developer.apple.com** and
- **setting up universal links in Xcode and RN code**
you can check this link:
<https://www.ekreative.com/blog/universal-linking-in-react-native-for-ios/>
Just ignore the manual linking paragraph on that page.
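For the universal-links handler itself, the usual pattern (a sketch; verify the exact signature against the docs for your React Native version) is to forward the incoming `NSUserActivity` to `RCTLinkingManager` in `AppDelegate.m`:

```objectivec
- (BOOL)application:(UIApplication *)application
continueUserActivity:(NSUserActivity *)userActivity
 restorationHandler:(void (^)(NSArray * _Nullable))restorationHandler
{
  return [RCTLinkingManager application:application
                  continueUserActivity:userActivity
                    restorationHandler:restorationHandler];
}
```

With this in place, `Linking.getInitialURL()` and the `url` event listener on the JS side can receive the universal link.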
|
Azure WebJob and wiring up IServiceCollecton from Microsoft.Extensions.DependencyInjection
I'm trying to figure out how to do dependency injection in an Azure WebJob using a `ServiceCollection` from `Microsoft.Extensions.DependencyInjection`
E.g.:
```
services.AddTransient<IAdminUserLogsService, AdminUserLogsService>();
```
I can't quite figure out how to wire up this service collection into something that the WebJobs `JobHostConfiguration.JobActivator` can understand
My intention is to re-use the default service wiring I've setup with this method as per the default AspNet core `Startup.cs` way.
|
Still wasn't able to find much after searching around last night.
But after a bit of fiddling, I managed to get something working with the following:
EDIT: I've added a more complete solution with Entity Framework.
I should note that my ASP.Net Core webapp is built upon 4.6.2 instead of pure core.
```
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Azure.WebJobs.ServiceBus;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.EntityFrameworkCore;
namespace Settlements.WebJob
{
public class ServiceJobActivator : IJobActivator
{
IServiceProvider _serviceProvider;
public ServiceJobActivator(IServiceCollection serviceCollection) : base()
{
_serviceProvider = serviceCollection.BuildServiceProvider();
}
public T CreateInstance<T>()
{
return _serviceProvider.GetRequiredService<T>();
}
}
class Program
{
static void Main()
{
var config = new JobHostConfiguration();
var dbConnectionString = Properties.Settings.Default.DefaultConnection;
var serviceCollection = new ServiceCollection();
// wire up your services
serviceCollection.AddTransient<IThing, Thing>();
// important! wire up your actual jobs, too
serviceCollection.AddTransient<ServiceBusJobListener>();
// added example to connect EF
serviceCollection.AddDbContext<DbContext>(options =>
options.UseSqlServer(dbConnectionString ));
// add it to a JobHostConfiguration
config.JobActivator = new ServiceJobActivator(serviceCollection);
var host = new JobHost(config);
host.RunAndBlock();
}
        }
    }
}
```
|
Qt : How to add a widget to right of QStatusBar
When I add a new widget to the status bar using the `addWidget` function of the `QStatusBar` class, the new widget is added to the left of the status bar, but I want to add it to the right. Is this possible without changing the direction of the main window?
|
You'll need to use [QStatusBar](http://qt-project.org/doc/qt-4.8/qstatusbar.html#addPermanentWidget).addPermanentWidget() to that effect. This is the documentation of that method:
`void QStatusBar::addPermanentWidget ( QWidget * widget, int stretch = 0 )`
>
> Adds the given widget permanently to this status bar, reparenting the
> widget if it isn't already a child of this QStatusBar object. The
> stretch parameter is used to compute a suitable size for the given
> widget as the status bar grows and shrinks. The default stretch factor
> is 0, i.e giving the widget a minimum of space. Permanently means that
> the widget may not be obscured by temporary messages. **It is located
> at the far right of the status bar**.
>
>
>
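A minimal sketch inside a `QMainWindow` subclass (the widget and label contents are illustrative):

```cpp
// Permanent widgets are pinned to the far right and are never obscured
// by temporary messages, which appear on the left.
QLabel *connectionState = new QLabel(tr("Connected"));
statusBar()->addPermanentWidget(connectionState);

statusBar()->showMessage(tr("Loading..."), 2000); // temporary, left side
```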
|
Querying XML data from a SQL Server table
I've tried every suggestion on the SO threads related to this topic, but still not getting what I need. I have an XML column named `SC`. This is on SQL Server 2014 (in case it matters). There is only one row in the table, and the XML data in column `SC` contains the following snippet...
```
<SC_ROOT>
<COMPONENTS>
<COMPONENT>
<NAME>Status A Detection</NAME>
<PROPERTIES>
<COMP_ENABLED>True</COMP_ENABLED>
</PROPERTIES>
</COMPONENT>
...
</COMPONENTS>
</SC_ROOT>
```
I would like to return a table with `<NAME>` and `<COMP_ENABLED>` for only those items where the `<NAME>` contains "Detection". If there is already a good example of this please point me to it? Thanks in advance!
|
Try this:
```
SELECT
Name = xc.value('(NAME)[1]', 'varchar(50)'),
CompEnabled = xc.value('(PROPERTIES/COMP_ENABLED)[1]', 'varchar(10)')
FROM
dbo.YourTable
CROSS APPLY
SC.nodes('/SC_ROOT/COMPONENTS/COMPONENT') AS XT(XC)
WHERE
xc.value('(NAME)[1]', 'varchar(50)') LIKE '%Detection'
```
The `.nodes()` call basically returns a "virtual table" with a table alias `XT` which has one column (alias `XC`) that contains the XML fragment that corresponds to the XPath expression - basically the `<COMPONENT>` XML fragment. You reach into that to extract the detailed info you need.
**Update:** if your XML looks like this:
```
<COMPONENT>
<NAME>Status A Detection</NAME>
<PROPERTIES NAME="COMP_ENABLED" VALUE="True" />
</COMPONENT>
```
then use this code to get the results:
```
SELECT
Name = xc.value('(NAME)[1]', 'varchar(50)'),
CompEnabled = xc.value('(PROPERTIES[@NAME="COMP_ENABLED"]/@VALUE)[1]', 'varchar(10)')
FROM
dbo.YourTable
CROSS APPLY
SC.nodes('/SC_ROOT/COMPONENTS/COMPONENT') AS XT(XC)
WHERE
xc.value('(NAME)[1]', 'varchar(50)') LIKE '%Detection'
```
|
Why do we have to do [MyClass class] in Objective-C?
In Objective-C, you can invoke class methods with:
```
[MyClass aClassMethod];
```
And you can query an instance's kind with:
```
[someInstance isKindOfClass:[MyClass class]];
```
But, why do we need to do `[MyClass class]`, and not simply provide `MyClass` like this:
```
[someInstance isKindOfClass:MyClass];
```
Is there a reason that the compiler is fine with encountering `MyClass` as a receiver (a pointer type) but not as an argument? Is it a limitation of parsing the language? Or perhaps a limitation of the compiler?
|
Ooooh... fun question. The answer is a c-ism.
Consider:
```
@interface MyClass : NSObject
@end
@implementation MyClass
@end
```
Now, say you have:
```
...
MyClass *m = nil;
...
```
In that context, the compiler sees `MyClass` as a type definition. The `*` says that the variable `m` is a `pointer to a hunk o' memory that contains one (or many -- don't forget your C pointer-fu) MyClass instances`.
In other words, `MyClass` is a type.
But, in the context of something like:
```
[someInstance isKindOfClass: x ];
```
`x` must be an rvalue or, in human terms, *the value of an expression*. A type, however, cannot be used as an rvalue.
That `[MyClass class]` works is actually a bit of a hack, both in the language and the compiler in that the grammar specifically allows a type name to be the message receiver (to be the target of a method call).
And, as a matter of fact, you can do:
```
typedef MyClass Foo;
....
[MyClass class];
[Foo class];
```
It'll all work. However, you can't do the following *but* the error message is illuminating:
```
[NSUInteger class];
```
**error: ‘NSUInteger’ is not an Objective-C class name or alias**
---
Now, why not special case it everywhere as a bare name?
That conflates type names and rvalues, and you quickly end up having to swallow something like `[foo isKindOfClass: (MyClass)];` while barfing on `[foo isKindOfClass: (MyClass *)];`, which then encroaches upon typecasting territory in a rather uncomfortable fashion.
|
Compute distance in Cartesian Coordinate System in Mathematica
Analyzing eye movements on a screen, I set my origin to the bottom left corner of it
(hard to modify at that point).
To compute the distance between some points and the center of the screen, I use the simple formula displayed below.
The problem is that using this in a conditional statement gets ugly.
```
Sqrt[
(
(fixationX - centerX)^2 + (fixationY - centerY)^2
)
]
```
Is there a way to customize Norm to compute distance between points and not between a point and the origin ?
Or in my case, set the origin to be at the "center" of the current coordinate system ?
|
A slight variation of Simon's method is to use a default value in the function, rather than a global variable (`$Center`).
Suppose your default origin is (5, 5), then:
```
myNorm[pos:{_, _}, center_:{5, 5}] := EuclideanDistance[pos, center]
```
Notice the use of `_:{5, 5}` to define the default value.
Now you can do:
```
myNorm[{5, 7}]
(* Out[]= 2 *)
```
Or temporarily use a different the center with:
```
myNorm[{5, 7}, {8, 8}]
(* Out[]= Sqrt[10] *)
```
For this simple function, you could use `EuclideanDistance` in the second case instead, but I hope you can see the utility of this method were the definition of `myNorm` more complex.
The downside to this method is that you cannot easily change the default center.
---
Another variation that does allow one to easily change the default center, but is more verbose, is to use `Options`:
```
Options[myNorm2] = {Center -> {5, 5}};
myNorm2[pos : {_, _}, OptionsPattern[]] :=
EuclideanDistance[pos, OptionValue[Center]]
```
Syntax is:
```
myNorm2[{5, 7}]
myNorm2[{5, 7}, Center -> {8, 8}]
```
```
**2**
```
```
**Sqrt[10]**
```
Changing the default center:
```
SetOptions[myNorm2, Center -> {8, 8}];
myNorm2[{5, 7}]
```
```
**Sqrt[10]**
```
|
Make browsers like chrome and firefox go through VPN
I run a VPN (PPTP and OpenVPN; neither makes any difference) on my Ubuntu machine. In my country some sites like Telegram, YouTube, and Twitter are blocked, and I have to run a VPN to access them.
After I started the VPN, the Telegram messenger worked well, but the browsers didn't: blocked sites like YouTube still couldn't be accessed, while normal sites could. In other words, the VPN had no effect on blocked sites when I used a browser.
I also tried pinging blocked sites, but that didn't work either.
The funny thing is that sometimes when I power on my laptop and start the VPN immediately after connecting to the AP (in the very first seconds), all browsers work and I can access any blocked site.
Here is the output of my `ip route` command:
```
default dev ppp0 proto static scope link metric 50
default via 192.168.1.1 dev wlp2s0 proto dhcp metric 600
10.10.0.0 dev ppp0 proto kernel scope link src 10.10.11.239 metric 50
169.254.0.0/16 dev ppp0 scope link metric 1000
172.16.77.0/24 dev vmnet1 proto kernel scope link src 172.16.77.1
172.16.149.0/24 dev vmnet8 proto kernel scope link src 172.16.149.1
185.180.15.243 via 192.168.1.1 dev wlp2s0 src 192.168.1.120
185.180.15.243 via 192.168.1.1 dev wlp2s0 proto static metric 600
192.168.1.0/24 dev wlp2s0 proto kernel scope link src 192.168.1.120
metric 600
192.168.1.1 dev wlp2s0 proto static scope link metric 600
```
ppp0 is my vpn and wlp2s0 is my wireless card
|
I solved the problem!
The problem was DNS, so you should first install `resolvconf`. Why? Because with it, your system first reads the DNS servers from its configuration and then writes them to `/etc/resolv.conf`. To install it, simply run this command:
```
sudo apt-get install resolvconf
```
After doing that, you should edit the `head` file at `/etc/resolvconf/resolv.conf.d/head` and put the DNS servers in it, like this:
```
nameserver 1.1.1.1
nameserver 8.8.8.8
nameserver 1.0.0.1
nameserver 2606:4700:4700::1111
nameserver 2606:4700:4700::1001
```
**And to be extra sure, you can go to your VPN connection settings and set the DNS there too.**
|
Dynamically creating JQuery Mobile page from object
I know that it's possible to create a JQuery Mobile page dynamically by writing out a long string and appending it to the page container. However I want to create the page as an object and append it to the page container.
This is how I used to create the pages:
<http://jsfiddle.net/pjUcn/>
```
var page = $("<div data-role='page' id='page'><div data-role=header>
<a data-iconpos='left' data-icon='back' href='#' data-role='button'
data-rel='back'>Back</a><h1>Dynamic Page</h1></div>
<div data-role=content>Stuff here</div></div>");
page.appendTo($.mobile.pageContainer);
$.mobile.changePage('#page');
```
This is how I want to create them: <http://jsfiddle.net/8NZMw/2/>
```
var page = $('<div/>'),
header = $('<div/>'),
back = $('<a/>'),
title = $('<h1/>'),
content = $('<div/>');
page.data('role', 'page');
page.attr('id', 'page');
header.data('role', 'header');
back.data('iconpos', 'left');
back.data('icon', 'back');
back.data('role', 'button');
back.data('rel', 'back');
back.attr('href', '#');
back.text('Back');
title.text('Dynamic Page');
header.append(back);
header.append(title);
page.append(header);
content.data('role', 'content');
content.text('stuff here');
page.append(content);
page.appendTo($.mobile.pageContainer);
$.mobile.changePage('#page');
```
I have no problem creating other simple JQuery Mobile elements, like [this](http://jsfiddle.net/b2zBq/), I just can't figure out how to create a whole page this way. I even tried calling a .trigger("create") on the page object, but that still didn't do the trick.
Any help would be greatly appreciated, thanks!
|
Here ya go, this works for me (notice where I commented out your code to make it work) -
```
var page = $('<div/>'),
header = $('<div/>'),
back = $('<a/>'),
title = $('<h1/>'),
content = $('<div/>');
//page.data('role', 'page');
page.attr('data-role', 'page');
page.attr('id', 'page');
//header.data('role', 'header');
header.attr('data-role', 'header');
back.data('iconpos', 'left');
back.data('icon', 'back');
back.data('role', 'button');
back.data('rel', 'back');
back.attr('href', '#');
back.text('Back');
title.text('Dynamic Page');
header.append(back);
header.append(title);
page.append(header);
//content.data('role', 'content');
content.attr('data-role', 'content');
content.text('stuff here');
page.append(content);
page.appendTo($.mobile.pageContainer);
$.mobile.changePage('#page');
```
[Here is a jsFiddle demo](http://jsfiddle.net/rossmartin/efkRT/)
|
Mask all values that occur before the max in a pandas DataFrame
I want to take the result from [`pd.DataFrame.idxmax`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.idxmax.html) and use it to change the values before the index with the maximum value.
If I have `df`:
```
Mule Creek Saddle Mtn. Calvert Creek
Date
2011-05-01 23.400000 35.599998 8.6
2011-05-02 23.400000 35.599998 8.0
2011-05-03 23.400000 35.700001 7.6
2011-05-04 23.400000 50.000000 7.1
2011-05-05 23.100000 35.799999 6.4
2011-05-06 23.000000 35.799999 5.7
2011-05-07 40.000000 35.900002 4.7
2011-05-08 23.100000 36.500000 12.0
2011-05-09 23.299999 37.500000 4.4
2011-05-10 23.200001 37.500000 3.6
```
and I find where the maximum of each column occurs with:
```
max = df.idxmax()
```
I want to set all values before the identified maximums in `max` to `np.nan`.
Desired result:
```
Mule Creek Saddle Mtn. Calvert Creek
Date
2011-05-01 NaN NaN NaN
2011-05-02 NaN NaN NaN
2011-05-03 NaN NaN NaN
2011-05-04 NaN 50.000000 NaN
2011-05-05 NaN 35.799999 NaN
2011-05-06 NaN 35.799999 NaN
2011-05-07 40.000000 35.900002 NaN
2011-05-08 23.100000 36.500000 12.0
2011-05-09 23.299999 37.500000 4.4
2011-05-10 23.200001 37.500000 3.6
```
|
You can check where the cumulative max is the same as the max:
```
df.where(df.cummax() == df.max())
Mule Creek Saddle Mtn. Calvert Creek
Date
2011-05-01 NaN NaN NaN
2011-05-02 NaN NaN NaN
2011-05-03 NaN NaN NaN
2011-05-04 NaN 50.000000 NaN
2011-05-05 NaN 35.799999 NaN
2011-05-06 NaN 35.799999 NaN
2011-05-07 40.000000 35.900002 NaN
2011-05-08 23.100000 36.500000 12.0
2011-05-09 23.299999 37.500000 4.4
2011-05-10 23.200001 37.500000 3.6
```
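If you specifically want to build on the `idxmax()` result from the question, a per-column mask on the index gives the same output. A minimal sketch with a small synthetic frame (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame(
    {"A": [1, 5, 3, 2], "B": [4, 2, 6, 1]},
    index=pd.date_range("2011-05-01", periods=4, name="Date"),
)

# For each column, keep only values at or after that column's max.
masked = df.apply(lambda s: s.where(s.index >= s.idxmax()))
print(masked)
```

`s.where(cond)` replaces values where `cond` is `False` with `NaN`, so everything strictly before each column's maximum is masked.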
|
RouteObserver for GoRouter to catch go/goNamed transitions
I have been analyzing different navigator observers by 3rd party integrations (e.g., [`SentryNavigatorObserver`](https://github.com/getsentry/sentry-dart/blob/main/flutter/lib/src/navigation/sentry_navigator_observer.dart), [`FirebaseAnalyticsObserver`](https://github.com/firebase/flutterfire/blob/master/packages/firebase_analytics/firebase_analytics/lib/observer.dart)) to build identical ones for tools that don't have them by default. But I noticed that neither of these really catches all route transitions. I see it when they display data in their corresponding consoles. Yes, it works with `push`/`pop`, but any transitions within the same level with `go` are not being tracked, which is very annoying. Does anybody know a way to solve this?
---
For example,
The transition from `/page` to `/page/detail` is observed.
The transition from `/pageA` to `/pageB` is not.
|
Found the issue. Some of the root routes were built using `pageBuilder` and `NoTransitionPage` to avoid animation, since they work as tabs.
But besides setting the `NoTransitionPage.key` initializer argument, you also need to set `NoTransitionPage.name` for the routes to be recognized.
Because I did not do that, the navigator saw all these routes as `null`, so it simply didn't see them changing.
---
Example how it should be:
```
GoRoute(
name: Routes.home,
path: Routes.getPathNamed(Routes.home),
pageBuilder: (context, state) => NoTransitionPage<void>(
key: state.pageKey, // <--- note this
name: state.name, // <--- note this
child: ScaffoldWithBottomBarNav(body: Routes.getPageNamed(Routes.home)),
),
),
```
|
Method to extract details from YouTube Clip
I've been looking for a method to extract details from a YouTube clip when only knowing the clip URL within a python script. I specifically need the original channel name.
The YouTube API does not seem to include any method to do so, only to extract details from a known *video* ID.
For example, this clip: <https://www.youtube.com/clip/UgkxnEqNDtOHMOOoS5TyJFr2QOjdKbaTOTlW>, with the ID `UgkxnEqNDtOHMOOoS5TyJFr2QOjdKbaTOTlW` is from this video <https://youtu.be/z-e2bDx7tUA>, with the ID `z-e2bDx7tUA`.
`z-e2bDx7tUA` *is* searchable with the API, but `UgkxnEqNDtOHMOOoS5TyJFr2QOjdKbaTOTlW` is not.
Even a more roundabout method would work, such as getting the clip ID, then obtaining the video ID from it, and then using that within the API. But I've not found an efficient way of doing so, including web scraping.
Any help would be greatly appreciated.
|
Once again, YouTube Data API v3 doesn't provide a basic feature.
I recommend using my [open-source](https://github.com/Benjamin-Loison/YouTube-operational-API/blob/9ea46f8a389ba3ca9b667d02b08ea26558d3d9c5/videos.php#L149) [YouTube operational API](https://yt.lemnoslife.com). Indeed, by using <https://yt.lemnoslife.com/videos?part=id&clipId=CLIP_ID> you'll get in `item['videoId']` the video id associated with the clip.
For example the `CLIP_ID` [`UgkxfiAGoXJYA02_JdIzA3k3pvqpLNm90DBx`](https://www.youtube.com/clip/UgkxfiAGoXJYA02_JdIzA3k3pvqpLNm90DBx) is extracted from the `videoId` [`9bZkp7q19f0`](https://www.youtube.com/watch?v=9bZkp7q19f0).
|
Possibly learning old C++ standard
I am interested in learning C++, and I found a book that seems to be the right fit for me. However, even though the book is quite new, some readers criticized that an earlier version of the book did not really dive into the new C++ standards (such as C++11, C++14, C++17) and rather used the "old C++ standard" (i.e., pre C++11).
While I checked whether the newer version covers the new aspects of C++11 etc. (and it seemed like it covers quite a few), I am uncertain whether learning a pre-C++11 standard would be inefficient. It seems like the new version does cover the new C++ standards (C++11 and upwards), but let's assume it does not. Would the knowledge I gain be outdated?
Again, I am new to C++, so I am not sure if the newer C++ standards completely changed the language, possibly removing/changing (integral) parts of the language which I would learn with this (possibly outdated) book. I just want to avoid learning (integral) parts of the language which are not used anymore or which are used differently nowadays (with the newer standards). However, if all the new C++ standards did was adding new features (but not removing/changing old features), I don't think learning from the book would be an issue since I could simply learn the newer features afterwards.
So in what ways did the newer C++ standards change the language, and would you say learning from a possibly outdated book is an issue?
|
Some coding idioms changed in C++11; the main changes, IMO, are:
- way to "return" big object:
- Pre-C++11, it has been recommended to use output parameter to avoid copy:
```
void MakeVector(std::vector<int>& v) {
v.clear();
// fill v;
}
```
- Since C++11, no longer required for movable types, and return by value is the way to go:
```
std::vector<int> MakeVector() {
std::vector<int> v;
// fill v;
return v;
}
```
- way to iterate container:
- Pre-C++11, it would be a mix between index and iterator way:
```
void Foo(std::vector<int>& v) {
for (std::vector<int>::iterator it = v.begin(); it != v.end(); ++it) { /* *it ..*/ }
for (std::size_t i = 0; i != v.size(); ++i) { /* v[i] ..*/ }
}
```
- Since C++11, it is shorter (and optimal (`end()` is computed only once)):
```
void Foo(std::vector<int>& v) {
for (int& e : v) { /* ..*/ }
}
```
- way to write types:
- Pre-C++11, type should be explicitly written:
```
void Foo(std::vector<int>& v) {
std::vector<int>::iterator it = std::find(v.begin(), v.end(), 42);
// ...
}
```
- Since C++11, type can be deduced:
```
void Foo(std::vector<int>& v) {
auto it = std::find(v.begin(), v.end(), 42);
// ...
}
```
- way to create predicate:
- Pre-C++11, predicate should be done outside of the function as function or class:
```
bool less_than_42(int i) { return i < 42; }
struct less_than_x {
less_than_x(int x) : x(x) {}
bool operator()(int i) const { return i < x; }
int x;
};
void Foo(std::vector<int>& v, int x) {
std::vector<int>::iterator it1 = std::find_if(v.begin(), v.end(), less_than_42);
std::vector<int>::iterator it2 = std::find_if(v.begin(), v.end(), less_than_x(x));
// ...
}
```
- Since C++11, lambda simplify stuff (dedicated function might still be useful to avoid to duplicate lambda though):
```
void Foo(std::vector<int>& v, int x) {
auto it1 = std::find_if(v.begin(), v.end(), [](int e){ return e < 42; });
auto it2 = std::find_if(v.begin(), v.end(), [x](int e){ return e < x; });
// ...
}
```
There are other changes, but they invalidate the C++03 way of coding to a lesser extent.
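Putting these C++11 idioms together, here is a small self-contained sketch (function names are illustrative) showing return by value, range-based `for`, `auto`, and a capturing lambda:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Return by value: since C++11 the vector is moved (or elided), not copied.
std::vector<int> MakeVector() {
    std::vector<int> v = {41, 42, 43, 7};
    return v;
}

// Range-based for loop, with an equivalent algorithm + lambda version.
int CountLessThan(const std::vector<int>& v, int x) {
    int n = 0;
    for (int e : v) {
        if (e < x) ++n;
    }
    auto n2 = std::count_if(v.begin(), v.end(), [x](int e) { return e < x; });
    assert(n == n2);  // both ways agree
    return n;
}
```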
|
C++ Using a Class Within a Class
I have a basic question that has bothered me for some time.
When using a class within a class, I can declare the class I want to use in the header file. I have seen two ways of doing this and would like to know the difference between the two methods.
ex1
```
#include "ClassA.h"
class ClassB {
public:
ClassB();
~ClassB();
ClassA* a;
};
#endif
```
ex2 Here is the other way of doing it. The ClassA Header would be defined in ClassB source file.
```
class ClassA;
class ClassB {
public:
ClassB();
~ClassB();
ClassA* a;
};
#endif
```
What are the differences with these two methods?
|
The complete layout of `ClassA` is known to the compiler when you include the class definition.
The second syntax is called a **forward declaration**, and now `ClassA` is an **incomplete type** for the compiler.
For an Incomplete type,
You can:
- Declare a member to be a pointer or a reference to the incomplete type.
- Declare functions or methods which accepts/return incomplete types.
- Define functions or methods which accepts/return pointers/references to the incomplete type (but without using its members)
But You cannot:
- Use it as a base class.
- Use it to declare a member.
- Define functions or methods using this type.
- Use its methods or fields, in fact trying to dereference a variable with incomplete type.
So forward declaring the class may compile faster, because the compiler does not have to process the entire code in that header file, but it restricts how you can use the type, since it becomes an incomplete type.
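A minimal sketch of the two stages (names are hypothetical): the declarations in `ClassB` only need the incomplete `ClassA`, while any code that dereferences the pointer must come after the full definition:

```cpp
#include <cassert>

class ClassA;  // forward declaration: ClassA is an incomplete type from here on

class ClassB {
public:
    explicit ClassB(ClassA* a) : a_(a) {}
    int ValueOfA() const;  // declared now, defined once ClassA is complete
private:
    ClassA* a_;            // OK: a pointer to an incomplete type
};

class ClassA {             // full definition: ClassA is now complete
public:
    int value = 42;
};

// Only here, with ClassA complete, may ClassB dereference its pointer.
int ClassB::ValueOfA() const { return a_->value; }

// Small driver using both classes together.
int Demo() {
    ClassA a;
    ClassB b(&a);
    return b.ValueOfA();
}
```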
|
How to set control's Width or Height property using a ControlTemplate?
How can I set a control's width or height using `ControlTemplate`?
For instance, I have this template:
```
<ControlTemplate x:Key="GridSplitterTemplate" TargetType="{x:Type GridSplitter}">
<!-- What goes here? -->
</ControlTemplate>
```
This is the template usage:
```
<GridSplitter Template="{StaticResource GridSplitterTemplate}"
Grid.Column="1" ResizeDirection="Columns" VerticalAlignment="Stretch" HorizontalAlignment="Center"/>
```
And I also have several `GridSplitter`s, all of which should have `GridSplitter.Width="10"`. How can I set it once using a `ControlTemplate`?
|
If you want to "hard code" the width/height using the ControlTemplate, then just set the root element's width/height:
```
<ControlTemplate x:Key="GridSplitterTemplate" TargetType="{x:Type GridSplitter}">
<!-- What goes here? -->
<Grid Width="10" Height="10">
</Grid>
</ControlTemplate>
```
---
I think @lll has a point though -- this is typically something you would do with a Style. What happens if you want an instance of your GridSplitter with a different size? And you don't lose anything by using this approach:
```
<Style x:Key="GridSplitterStyle" TargetType="{x:Type GridSplitter}">
<Setter Property="Width" Value="10" />
<Setter Property="Height" Value="10" />
<Setter Property="Template">
<Setter.Value>
      <ControlTemplate TargetType="{x:Type GridSplitter}">
<Grid Width="{TemplateBinding Width}" Height="{TemplateBinding Height}">
</Grid>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
```
Now, just have your instances reference the style instead of the control template:
```
<GridSplitter Style="{StaticResource GridSplitterStyle}"
```
This allows you to still have a default height/width of 10, but instances can override that if they need to:
```
<GridSplitter Style="{StaticResource GridSplitterStyle}" Width="20"
```
|
One way SSL is one way encryption?
If one-way SSL is used (server certificate authentication), then data sent from the client gets encrypted using the public key of the server certificate, so privacy protection is available for data sent from the client. My questions are:
1. Does this mean that in one-way SSL, data sent from the server to the client is not encrypted and is sent as plain text?
2. For both server-to-client and client-to-server communications, the data/message is not signed, so tamper protection or data integrity is not assured. Are there any other means to achieve data integrity while using SSL-based transport security and not message security options?
|
One way SSL just means that the server does not validate the identity of the client. It has no effect on any of the other security properties of SSL.
While the SSL protocol is a bit complex, the basic gist of what happens is this: The client generates a random key, encrypts it so that only the server can decrypt it, and sends it to the server. The server and client now have a shared secret that can be used to encrypt and validate the communications in both directions.
The server has no idea of the client's identity, but otherwise, the encryption and message validation is two way.
Update:
1) Yes, encryption both ways is symmetric and uses a shared secret generated during session setup.
2) With a shared secret, message integrity is trivial to assure. You just ensure the message has a particular form. For example, I can prefix every message I send with a sequence number and append a checksum onto it before encryption. You decrypt it with the shared secret and validate the sequence number and checksum. How can an attacker substitute or modify the message without knowing the shared secret and still keep the sequence number and checksum intact?
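To make point 2 concrete, here is a hedged Python sketch of the idea (not the actual TLS record format; just a shared-secret MAC over a sequence number and message, so tampering or reordering is detected):

```python
import hashlib
import hmac

SHARED_SECRET = b"negotiated-during-handshake"  # illustrative value only

def protect(seq: int, message: bytes) -> bytes:
    # The tag covers both the sequence number and the message, so neither
    # can be modified or replayed without knowing the shared secret.
    data = seq.to_bytes(8, "big") + message
    tag = hmac.new(SHARED_SECRET, data, hashlib.sha256).digest()
    return data + tag

def verify(blob: bytes) -> bytes:
    data, tag = blob[:-32], blob[-32:]
    expected = hmac.new(SHARED_SECRET, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message was tampered with")
    return data[8:]  # strip the sequence number

assert verify(protect(1, b"hello")) == b"hello"
```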
|
Not last child mixin SASS
Is it possible to turn this:
```
.redstripe p:not(last-child) {
border-bottom:1px solid red;
}
```
Into a mixin so that I can apply it to any element and assign a child tag to it like:
```
@mixin redstripe (this.$children):not(last-child) {
border-bottom:1px solid red;
}
```
And then apply:
```
div {
@include redstripe(p);
}
```
What is the correct way to implement this?
|
Here's a general purpose mixin like you've described.
[**DEMO**](http://sassmeister.com/gist/218a068f6ba46d05e160)
```
@mixin not-last-child($selector) {
& #{$selector}:not(:last-child) {
@content;
}
}
```
We can pass it a selector string to use.
SCSS:
```
.thing {
@include not-last-child('p') {
color: red;
}
}
```
CSS:
```
.thing p:not(:last-child) {
color: red;
}
```
[Sass Documentation](http://sass-lang.com/documentation/file.SASS_REFERENCE.html)
|
How does stackoverflow make its Tag input field?
How does StackOverflow create its tag system. How can it format the html in a text area?
I just don't know where to begin. If someone could guide me, then I can know what direction to take so I can write some code that someone can check.
**Like this:**

**EDIT:
Basically, when I press space, how do i add a new element/div to the inside of the div?**
|
What I would do is:
- Create a master DIV with a border (like here, border: 1px solid #000)
- Then, inside this DIV, create another DIV (float: left), another DIV (float: right), and an input inside the right DIV.
When you write inside the input and, let's say, select HTML, it creates a span in the left DIV and reduces the width of the right DIV to match the remaining size. Of course, inside your span there's the text HTML with the delete sign.
You could do it easily using jQuery.
Example:
```
<div id="master_div">
<div id="categories">
</div>
<div id="input">
<input type="text" value="" />
</div>
</div>
```
And you write some jQuery / JS
|
Can you reproduce or explain this Visual C++ bug with ctime?
This [code example](http://rextester.com/DBG7157) will output `time: 0` regardless of the value of `N` when compiled with Visual Studio Professional 2013 Update 3 in release mode, both 32 and 64-bit option:
```
#include <iostream>
#include <functional>
#include <ctime>
using namespace std;
void bar(int i, int& x, int& y) {x = i%13; y = i%23;}
int g(int N = 1E9) {
int x, y;
int r = 0;
for (int i = 1; i <= N; ++i) {
bar(i, x, y);
r += x+y;
}
return r;
}
int main()
{
auto t0 = clock();
auto r = g();
auto t1 = clock();
cout << r << " time: " << t1-t0 << endl;
return 0;
}
```
When tested with gcc, clang, and other versions of VC++ on rextester.com, it behaves correctly and outputs `time` greater than zero. Any clues what is going on here?
I noticed that inlining the `g()` function restores correct behaviour, but changing declaration and initialization order of `t0`, `r` and `t1` does not.
|
If you look at the disassembly winddow using the debugger you can see the generated code. For VS2012 express in release mode you get this:
```
00AF1310 push edi
auto t0 = clock();
00AF1311 call dword ptr ds:[0AF30E0h]
00AF1317 mov edi,eax
auto r = g();
auto t1 = clock();
00AF1319 call dword ptr ds:[0AF30E0h]
cout << r << " time: " << t1-t0 << endl;
00AF131F push dword ptr ds:[0AF3040h]
00AF1325 sub eax,edi
00AF1327 push eax
00AF1328 call g (0AF1270h)
00AF132D mov ecx,dword ptr ds:[0AF3058h]
00AF1333 push eax
00AF1334 call dword ptr ds:[0AF3030h]
00AF133A mov ecx,eax
00AF133C call std::operator<<<std::char_traits<char> > (0AF17F0h)
00AF1341 mov ecx,eax
00AF1343 call dword ptr ds:[0AF302Ch]
00AF1349 mov ecx,eax
00AF134B call dword ptr ds:[0AF3034h]
```
From the first 4 lines of assembly you can see that the two calls to `clock` (`ds:[0AF30E0h]`) happen before the call to `g`. So in this case it doesn't matter how long `g` takes, the result will only show the time taken between those two sequential calls.
It seems VS has determined that `g` doesn't have any side effects that would affect `clock` so it is safe to move the calls around.
As Michael Petch points out in the comments, adding `volatile` to the declaration of `r` will stop the compiler from moving the call.
|
Pandas - sort and head inside groupby
I have following dataframe:
```
uniq_id value
2016-12-26 11:03:10 001 342
2016-12-26 11:03:13 004 5
2016-12-26 12:03:13 005 14
2016-12-26 12:03:13 008 114
2016-12-27 11:03:10 009 343
2016-12-27 11:03:13 013 5
2016-12-27 12:03:13 016 124
2016-12-27 12:03:13 018 114
```
And i need get top N records for each day sorted by value.
Something like this (for N=2):
```
2016-12-26 001 342
008 114
2016-12-27 009 343
016 124
```
Please suggest the right way to do that in pandas 0.19.x.
|
Unfortunately there is not yet a method such as `DataFrameGroupBy.nlargest()`, which would allow us to do the following:
```
df.groupby(...).nlargest(2, columns=['value'])
```
So here is a bit ugly, but working solution:
```
In [73]: df.set_index(df.index.normalize()).reset_index().sort_values(['index','value'], ascending=[1,0]).groupby('index').head(2)
Out[73]:
index uniq_id value
0 2016-12-26 1 342
3 2016-12-26 8 114
4 2016-12-27 9 343
6 2016-12-27 16 124
```
PS i think there must be a better one...
**UPDATE:** if your DF had no duplicated index values, the following solution would work as well:
```
In [117]: df
Out[117]:
uniq_id value
2016-12-26 11:03:10 1 342
2016-12-26 11:03:13 4 5
2016-12-26 12:03:13 5 14
2016-12-26 12:33:13 8 114 # <-- i've intentionally changed this index value
2016-12-27 11:03:10 9 343
2016-12-27 11:03:13 13 5
2016-12-27 12:03:13 16 124
2016-12-27 12:33:13 18 114 # <-- i've intentionally changed this index value
In [118]: df.groupby(pd.TimeGrouper('D')).apply(lambda x: x.nlargest(2, 'value')).reset_index(level=1, drop=1)
Out[118]:
uniq_id value
2016-12-26 1 342
2016-12-26 8 114
2016-12-27 9 343
2016-12-27 16 124
```
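On newer pandas versions the same result can be obtained a bit more directly. A sketch with synthetic data resembling the question's frame (`group_keys=False` keeps the original index):

```python
import pandas as pd

idx = pd.to_datetime([
    "2016-12-26 11:03:10", "2016-12-26 11:03:13",
    "2016-12-26 12:03:13", "2016-12-26 12:03:14",
    "2016-12-27 11:03:10", "2016-12-27 12:03:13",
])
df = pd.DataFrame({"uniq_id": [1, 4, 8, 5, 9, 16],
                   "value": [342, 5, 114, 14, 343, 124]}, index=idx)

# Top-2 rows per calendar day, sorted by value.
top2 = (df.groupby(df.index.normalize(), group_keys=False)
          .apply(lambda g: g.nlargest(2, "value")))
print(top2)
```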
|
Can jQuery.getJSON put a domain's cookies in the header of the request it makes?
*(Note: See also the related question [Can browsers react to Set-Cookie specified in headers in an XSS jquery.getJSON() request?](https://stackoverflow.com/questions/4263392/can-browsers-react-to-set-cookie-specified-in-headers-in-an-xss-jquery-getjson))*
I can't seem to set a cookie (whose name is [mwLastWriteTime](http://mql.freebaseapps.com/ch06.html#id2972569)) in the request header of a JSON operation. The request itself is a simple one from the Freebase MQL tutorials, and it is working fine otherwise:
```
// Invoke mqlread and call the function below when it is done.
// Adding callback=? to the URL makes jQuery do JSONP instead of XHR.
jQuery.getJSON("http://api.sandbox-freebase.com/api/service/mqlread?callback=?",
{query: JSON.stringify(envelope)}, // URL parameters
displayResults); // Callback function
```
I'd hoped that I could set this cookie with something along the lines of:
```
$.cookie('mwLastWriteTime', value, {domain: ".sandbox-freebase.com"});
```
Unfortunately, looking in FireBug at the outgoing request header I see only:
```
Host api.sandbox-freebase.com
User-Agent [...]
Accept */*
Accept-Language en-us,en;q=0.5
Accept-Encoding gzip,deflate
Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive 115
Connection keep-alive
Referer [...]
```
But if I don't specify the domain (or if I explicitly specify the domain of the requesting site) I can get `mwLastWriteTime` to show up in the headers for local requests. Since the `.sandbox-freebase.com` domain owns these cookies, shouldn't they be traveling along with the GET? Or does one need a workaround of some sort?
My code is all JavaScript, and I would like to set this cookie and then call the getJSON immediately afterward.
|
You [cannot set a cross-domain cookie](http://www.quirksmode.org/js/cookies.html), because that would open the browser (and therefore the user) to XSS attacks.
To quote from the [QuirksMode.org article](http://www.quirksmode.org/js/cookies.html) that I reference above:
>
> Please note that the purpose of the
> domain is to allow cookies to cross
> sub-domains. My cookie will not be
> read by search.quirksmode.org because
> its domain is www.quirksmode.org .
> When I set the domain to
> quirksmode.org, the search sub-domain
> may also read the cookie. I cannot set
> the cookie domain to a domain I'm not
> in, I cannot make the domain
> www.microsoft.com . Only
> quirksmode.org is allowed, in this
> case.
>
>
>
If you want to make cross-site request with cookie values you will need to set up a special proxy on a server you control that will let you pass in values to be sent as cookie values (probably via POST parameters). You'll also want to make sure that you properly secure it, lest your proxy become the means by which someone else's private information is "liberated".
|
How to use application classpath in Gradle task
The idea is rather simple: I want to access project classes in my build script directly.
I first tried:
```
apply plugin: 'java'
tasks.create(name: 'randomTask', type: SourceTask) {
source.each {
println it
}
}
```
and sadly got exactly zero files.
Then I decided to give `compileJava` a try:
```
apply plugin: 'java'
compileJava {
source.each {
println it
}
}
```
and while I did indeed get a list of the files that will be compiled, it's not really what I need. I want to get the `Class` objects of the files.
So for example if I had `com.example.SomeClass`, I'd expect to get `Class<com.example.SomeClass>` so that I could use reflections on those classes.
Here's a non-working example of what I'd like:
```
apply plugin: 'java'
tasks.create(name: 'randomTask', type: SomeMagicalType) {
UMLBuilder builder = UMLBuilder.builder()
classes.each { clazz ->
clazz.methods.each { method ->
builder.addMethod(method)
}
}
builder.build().writeToFile(file('out/application.uml'))
}
```
P.S.: I'm using Gradle 2.5 and Java 8
|
I had a little play trying to get the classes in a project and instantiate them. This is what I came up with; it's not pretty, but it does the raw job of getting the `Class` objects:
```
task allMethods(dependsOn: classes) << {
def ncl = new GroovyClassLoader()
ncl.addClasspath("${sourceSets.main.output.classesDir}")
configurations.compile.each { ncl.addClasspath(it.path) }
def cTree = fileTree(dir: sourceSets.main.output.classesDir)
def cFiles = cTree.matching {
include '**/*.class'
exclude '**/*$*.class'
}
cFiles.each { f ->
def c = f.path - (sourceSets.main.output.classesDir.path + "/")
def cname = c.replaceAll('/', '\\.') - ".class"
def cz = Class.forName(cname, false, ncl)
cz.methods.each { println it }
}
}
```
It creates a new class loader, adds the default output classes dir plus all dependency classes from the `compile` configuration, builds a list of `.class` files from the classes dir, converts each file path into a name in the format `a.b.YourClass`, and then loads the class and dumps out its methods.
You can add your own logic for what attributes from the methods you need, but this should get you going.
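The two non-obvious steps in that task — turning a `.class` file path into a binary class name, and loading it without initialization — look like this in plain Java. The `ClassInspector`/`toClassName` names are illustrative only; here the class is loaded from the JDK rather than a Gradle output dir:

```java
import java.lang.reflect.Method;

public class ClassInspector {

    // "com/example/Foo.class" -> "com.example.Foo"
    static String toClassName(String relativePath) {
        return relativePath
                .replace('/', '.')
                .replaceAll("\\.class$", "");
    }

    public static void main(String[] args) throws Exception {
        String name = toClassName("java/lang/String.class");

        // initialize=false: load the class without running its
        // static initializers, just as the Gradle snippet does.
        Class<?> cz = Class.forName(name, false,
                ClassInspector.class.getClassLoader());

        for (Method m : cz.getMethods()) {
            System.out.println(m);
        }
    }
}
```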
|
What are "terse ranged-based for loops"?
clang has begun implementing [terse ranged-based for loops](http://clang.llvm.org/cxx_status.html) from [n3994](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3994.htm). Often when introducing ranged-based for loops, we see code in the form of `for (auto & v : vector)` to avoid unnecessary copying. It seems that n3994 is proposing that `for (auto && v : vector)` is superior in every way. I have a few questions:
1. What advantages does the latter form offer over the former? Why do we usually go with `auto &` instead of `auto &&` if the latter is clearly advantageous?
2. Is making the new ranged-for based loop equivalent to `auto &&` going to break existing code? Will it make an actual impact on new code?
3. Wouldn't this introduce a gotcha to beginners that their code is actually equivalent to `auto &&`?
|
>
> What advantages does the latter form offer over the former?
>
>
>
In the form `for(auto& v : vector)`, the type of `v` is deduced as an lvalue reference to the type obtained by dereferencing the container's iterator type. This means that if the result of dereferencing the iterator is an rvalue (think `std::vector<bool>` which returns a proxy type representing a reference to a single `bool` value), then the code will fail to compile, because an lvalue reference cannot bind to an rvalue.
When you write `for(auto&& v : vector)`, this is a *universal reference*, meaning the type of `v` will either be deduced as an rvalue reference in the case described above; or as an lvalue reference in the usual case where dereferencing the iterator returns a reference to the container's element. So it works in the `vector<bool>` case as well. That's why that form should be preferred if you plan on modifying the elements you're iterating over within the loop.
>
> Why do we usually go with `auto&` instead of `auto&&` if the latter is clearly advantageous?
>
>
>
You shouldn't. The only downside to `auto&&` I can think of is that it doesn't guarantee that changes you make to the elements will propagate back to the container, but that would be indicative of a broken design, and I don't think it's worth protecting against.
>
> Is making the new ranged-for based loop equivalent to `auto &&` going to break existing code?
>
>
>
I don't see how it can break existing code because the old syntax will continue to function as it does today. But if you mean replacing existing code with the new syntax, then it might have an effect if the one you're replacing is the `auto const&` form. See [this example](http://coliru.stacked-crooked.com/a/62b401913a1cfc72). Notice how the `auto const&` version calls the `const` member function, while the other two call the non-`const` version? Replacing the first one with the terse version will change the member function being called.
>
> Will it make an actual impact on new code?
>
>
>
Again, it's no different from old code that uses `auto&&` today, so that'll see no difference. If you use it in places where you don't intend to modify the elements, then the compiler will no longer stop you from doing that accidentally, and you might call a different overload, as shown in the example above.
>
> Wouldn't this introduce a gotcha to beginners that their code is actually equivalent to `auto &&`?
>
>
>
I'm not sure I understand what you mean by this, but if you're asking whether beginners will write code without knowing, or understanding the intricacies of reference collapsing, then yes, it's possible they will. But that is one of the stated goals of the paper you've linked to. The argument is that you can stay away from teaching those difficult concepts right at the get go, but introduce beginners to a single syntax for range-based `for` loops that works with everything. As far as that is concerned, I think the syntax has merit, but looking at it from a `const` correctness perspective, I frown upon it because I'd rather use `auto const&` if I want read-only access to the elements, and then the terse syntax seems asymmetrical.
|
get random value from a PHP array, but make it unique
I want to select a random value from an array, but keep it unique as long as possible.
For example, if I'm selecting a value 4 times from an array of 4 elements, the selected value should be random, but different every time.
If I'm selecting it 10 times from the same array of 4 elements, then obviously some values will be duplicated.
I have this right now, but I still get duplicate values, even if the loop is running 4 times:
```
$arr = $arr_history = array('abc', 'def', 'xyz', 'qqq');
for($i = 1; $i < 5; $i++){
if(empty($arr_history)) $arr_history = $arr;
$selected = $arr_history[array_rand($arr_history, 1)];
unset($arr_history[$selected]);
// do something with $selected here...
}
```
|
You almost have it right. The problem was the `unset($arr_history[$selected]);` line: `$selected` holds a value, not a key, so the `unset()` never removed anything from the array.
To keep it the same as what you have up there:
```
<?php
$arr = $arr_history = array('abc', 'def', 'xyz', 'qqq');
for ( $i = 1; $i < 10; $i++ )
{
// If the history array is empty, re-populate it.
if ( empty($arr_history) )
$arr_history = $arr;
// Select a random key.
$key = array_rand($arr_history, 1);
// Save the record in $selected.
$selected = $arr_history[$key];
// Remove the key/pair from the array.
unset($arr_history[$key]);
// Echo the selected value.
echo $selected . PHP_EOL;
}
```
Or an example with a few fewer lines:
```
<?php
$arr = $arr_history = array('abc', 'def', 'xyz', 'qqq');
for ( $i = 1; $i < 10; $i++ )
{
// If the history array is empty, re-populate it.
if ( empty($arr_history) )
$arr_history = $arr;
// Shuffle the array into a random order.
shuffle($arr_history);
// Pop the last value off the now-shuffled array.
$selected = array_pop($arr_history);
// Echo the selected value.
echo $selected . PHP_EOL;
}
```
|