Writing jpg or jpeg image with ImageIO.write does not create image file
I recreated the issue in a minimal form:
**MAIN:**
```
public class ImageIOMain extends Application {

    @Override
    public void start(Stage stage) throws Exception {
        Scene scene = new Scene(new StackPane(), 800.0, 600.0);
        stage.setScene(scene);
        stage.show();

        WritableImage img = scene.snapshot(null);

        //Image file created on desktop
        ImageIO.write(SwingFXUtils.fromFXImage(img, null), "png", new File("D:\\Desktop\\test.png"));
        //Image file NOT created on desktop
        ImageIO.write(SwingFXUtils.fromFXImage(img, null), "jpeg", new File("D:\\Desktop\\test.jpeg"));
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```
***Important:*** I'm using JavaFX 12.
If I export an image as `png` the image shows up on my desktop. If the file format is for example `jpeg` then the image is not created.
In the past, when I used JavaFX 8, saving as `jpeg` with the above code caused the image to become pink-toned. Now, in version 12, nothing happens at all.
So how can I create an image whose format is not `png` but something else like `jpeg`?
I also checked ImageIO supported formats which returns these:
```
JPG, jpg, tiff, bmp, BMP, gif, GIF, WBMP, png, PNG, JPEG, tif, TIF, TIFF, wbmp, jpeg
```
|
First of all, I can confirm this behaviour for JavaFX 13 ea build 13. It is probably the result of a very simplistic attempt to fix an old bug, which the OP has already mentioned (the image turning pink) and which I reported a long time ago. The problem is that JPEGs cannot store alpha information, and in the past the output was simply garbled when an image with an alpha channel was written out as a JPEG. The fix now just refuses to write out the image at all instead of ignoring the alpha channel.
A workaround is to make a copy of the image where you explicitly specify a color model without alpha channel.
Here is the original bug report which also contains the workaround: <https://bugs.openjdk.java.net/browse/JDK-8119048>
Here is some more info to simplify the conversion:
If you add this line to your code
```
BufferedImage awtImage = new BufferedImage((int)img.getWidth(), (int)img.getHeight(), BufferedImage.TYPE_INT_RGB);
```
and then call `SwingFXUtils.fromFXImage(img, awtImage)` with this as the second parameter instead of `null`, then the required conversion will be done automatically and the JPEG is written as expected.
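Putting both pieces together, a minimal sketch of the saving code with the workaround applied (same paths as in the question):
```
WritableImage img = scene.snapshot(null);

// Copy the snapshot into an image without an alpha channel so the JPEG writer accepts it
BufferedImage awtImage = new BufferedImage((int) img.getWidth(), (int) img.getHeight(), BufferedImage.TYPE_INT_RGB);
SwingFXUtils.fromFXImage(img, awtImage);

ImageIO.write(awtImage, "jpeg", new File("D:\\Desktop\\test.jpeg"));
```
The PNG case can keep passing `null` as the second argument, since PNG stores the alpha channel just fine.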
|
Java overloading and inheritance rules
I've been studying because I have an exam. I don't have many problems with most of Java, but I stumbled upon a rule I can't explain. Here's a code fragment:
```
public class A {
    public int method(Object o) {
        return 1;
    }

    public int method(A a) {
        return 2;
    }
}

public class AX extends A {
    public int method(A a) {
        return 3;
    }

    public int method(AX ax) {
        return 4;
    }
}

public static void main(String[] args) {
    Object o = new A();
    A a1 = new A();
    A a2 = new AX();
    AX ax = new AX();

    System.out.println(a1.method(o));
    System.out.println(a2.method(a1));
    System.out.println(a2.method(o));
    System.out.println(a2.method(ax));
}
```
This returns:
1
3
1
3
While I would expect it to return:
1
3
1
4
Why is it that the type of a2 determines which method is called in AX?
I've been reading on overloading rules and inheritance but this seems obscure enough that I haven't been able to find the exact rule. Any help would be greatly appreciated.
|
The behavior of these method calls is dictated and described by the [Java Language Specification](https://docs.oracle.com/javase/specs/jls/se10/html/jls-8.html#jls-8.4.9) (reference section 8.4.9).
>
> When a method is invoked (§15.12), the number of actual arguments (and
> any explicit type arguments) and the compile-time types of the
> arguments are used, at compile time, to determine the signature of the
> method that will be invoked (§15.12.2). If the method that is to be
> invoked is an instance method, the actual method to be invoked will be
> determined at run time, using dynamic method lookup (§15.12.4).
>
>
>
In your example, the Java compiler determines the closest match using the compile-time types of the instance you are invoking the method on and of its arguments. In this case the call resolves against:
```
A.method(AX)
```
The closest method is from type A, with signature `A.method(A)`. At runtime, dynamic dispatch is performed on the *actual* type of `a2` (which is an instance of AX), and hence this is the method that is actually called:
```
AX.method(A)
```
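To see the compile-time side of the rule in isolation, change only the declared type of the reference; a small sketch (expected output in comments):
```
AX axRef = new AX();
A aRef = axRef; // same object, different compile-time type

System.out.println(axRef.method(axRef)); // 4: type AX offers method(AX), the most specific match
System.out.println(aRef.method(axRef));  // 3: type A only offers method(Object)/method(A), so the
                                         //    signature method(A) is chosen at compile time and
                                         //    dispatched to AX's override at run time
```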
|
how to move an std::unique\_ptr<> from one STL container to another?
### Problem
I have a template container `MyContainer<std::unique_ptr<Foo>>` which has a `std::deque<T>` and a `std::vector<T>` member.
Inside method, `send_to_purgatory_if( predicate )`, I would like to look at all items in `m_taskdq` and move items from `m_taskdq` to `m_purgatory`, if the predicate evaluates to true.
### Issues
I have two issues that I'm struggling with:
- my iterator **`it`** gets trashed if I remove items from m\_taskdq from inside the loop
- I am worried about the state of the `std::unique_ptr<>` if I do the move in two steps (problem lines 1 and 2 - by line 2, I think the `std::unique_ptr<>` pointed to by **`it`** is undefined?)
**How should I fix this code?**
```
template <typename T>
class MyContainer
{
    typedef std::function<bool(T&)> PREDICATE;

    void send_to_purgatory_if( PREDICATE p )
    {
        // bad code -------------------------------------
        for( auto it = m_taskdq.begin(); it != m_taskdq.end(); ++it )
        {
            if ( p( *it ) )
            {
                m_purgatory.emplace_back( move( *it ));  // problem line 1
                m_taskdq.erase( it );                    // problem line 2
            }
        }
        // end bad code ---------------------------------
    }

    std::deque< T >  m_taskdq;
    std::vector< T > m_purgatory;
};
```
|
This is really a C++98 question, with a red herring concerning move semantics. The first thing to ask is how to do this in C++98:
`std::deque::erase(iterator)` returns an `iterator` that refers to the element after the one erased. So get that working first:
```
void send_to_purgatory_if( PREDICATE p )
{
    for( auto it = m_taskdq.begin(); it != m_taskdq.end(); )
    {
        if ( p( *it ) )
        {
            m_purgatory.emplace_back( *it );
            it = m_taskdq.erase( it );
        }
        else
            ++it;
    }
}
```
And now it is easy to make it work with C++11 move semantics:
```
void send_to_purgatory_if( PREDICATE p )
{
    for( auto it = m_taskdq.begin(); it != m_taskdq.end(); )
    {
        if ( p( *it ) )
        {
            m_purgatory.emplace_back( std::move( *it ));
            it = m_taskdq.erase( it );
        }
        else
            ++it;
    }
}
```
The `unique_ptr` moved from in `taskdq` becomes a null `unique_ptr` after the `emplace_back`, and then it gets erased in the next line. No harm, no foul.
When there is an `erase`, the return from the `erase` does a good job at incrementing the iterator. And when there is no `erase`, a normal iterator increment is in order.
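If you prefer to avoid the hand-written loop, the same move can be expressed with standard algorithms; a sketch under the same class definition, assuming C++11 with `<algorithm>` and `<iterator>` included:
```
void send_to_purgatory_if( PREDICATE p )
{
    // keep the survivors at the front, gather the doomed items at the back
    auto doomed = std::stable_partition( m_taskdq.begin(), m_taskdq.end(),
                                         [&p]( T& t ) { return !p( t ); } );

    // move the doomed tail into purgatory, then erase the moved-from shells
    std::move( doomed, m_taskdq.end(), std::back_inserter( m_purgatory ) );
    m_taskdq.erase( doomed, m_taskdq.end() );
}
```
This also erases in one call rather than once per matching element, which can matter for a large `deque`.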
|
Show callout when tapping overlay
I have an MKMapView with several overlays. It all works just fine, and it's incredible how simply it works. There is, however, one thing I can't get to work. The idea is simple: when a user taps within the area covered by an overlay, a callout with some information about that overlay has to come up. The overlays are all `MKPolygon`s, which follow the MKOverlay protocol and therefore the MKAnnotation protocol.
>
> The MKOverlay protocol conforms to the
> MKAnnotation protocol. As a result,
> all overlay objects are also
> annotation objects and can be
> treated as one or both in your code.
> If you opt to treat an overlay object
> as both, you are responsible for
> managing that object in two places. If
> you want to display both an overlay
> view and annotation view for it, you
> must implement both the
> mapView:viewForOverlay: and
> mapView:viewForAnnotation: methods in
> your application delegate. It also
> means that you must add and remove the
> object from both the overlays and
> annotations arrays of your map.
>
>
>
This comes from the Apple docs. I tried something like this:
```
- (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>)annotation {
    MKAnnotationView *aView = [[MKAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:nil];
    aView.canShowCallout = YES;
    return aView;
}
```
But that doesn't seem to work. I've tried using gesture recognizers, but I have no idea how to show a callout other than by using the `canShowCallOut` property...
|
I suppose you have to add the MKOverlays as annotations also
```
[self.mapView addAnnotations:myOverlays];
```
Then return an MKAnnotationView in `mapView:viewForAnnotation:` that's not hidden: either a graphic (tappable) or a zero-alpha view. Next, add a UITapGestureRecognizer for each MKOverlayView, and make sure it works with the map's own gestures (implement UIGestureRecognizerDelegate for simultaneous recognition). Finally, when your gesture recognizer fires, do this:
```
[self.mapView setSelectedAnnotations:[NSArray arrayWithObject:myOverlayView.overlay]];
```
I'm not certain that this actually triggers the callOut showing though.
Also make sure you return a title and/or subtitle from your overlay object.
|
Git clone hangs - is there a way to continue cloning?
Sometimes my `git clone` command hangs:
```
git clone -v [email protected]:user/repo.git
Cloning into repo...
remote: Counting objects: 105350, done.
remote: Compressing objects: 100% (28919/28919), done.
Receiving objects: 13% (14481/105350), 6.84 MiB | 46 KiB/s
```
There is no progress for ten minutes.
Is there any way to continue cloning using a partially cloned repository if I stop the current process?
|
As of now (git version 1.7.10.4) this is not supported yet.
You can [read why](http://git.661346.n2.nabble.com/Continue-git-clone-after-interruption-td3458217.html) the developers disagreed on how to implement it. There were debates in 2009 and 2011, but no implementation so far, as this seems to be tough to get right.
It could be so easy (but it unfortunately is not):
```
git clone --continue
```
As one knows, questions or comments for the Git community can be sent to the mailing list using the email address [email protected], and bug reports should be sent there as well. Just go ahead and ask there again :)
>
> Git does not support resumable clones. That feature, it turns out, is
> pretty tricky to implement properly. One workaround is to download a
> bundle over http and then fetch the remaining bits and pieces with
> git. But many repository admins do not provide bundles for you to
> download. This service aims to fill that gap: give us the URL to a
> repository and we'll create a bundle which you can download through
> http. [ bundler.caurea.org ]
>
>
>
I tried this for qtmoko.git and looks works [quite well](http://bundler.caurea.org/bundle/51c066fe9a23a56a7800002f). Another option is to ask upstream/github to implement "git bundle", there are howtos ([How to use git-bundle for keeping development in sync?](https://stackoverflow.com/questions/3635952/how-to-use-git-bundle-for-keeping-development-in-sync)) for this as well.
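If the connection is flaky rather than dead, one more workaround worth trying is a shallow clone followed by un-shallowing, so each transfer is smaller and more likely to complete (`--unshallow` needs a reasonably recent Git, 1.8.3 or later):
```
git clone --depth 1 [email protected]:user/repo.git
cd repo
git fetch --unshallow
```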
|
Cannot find function name
I'm trying to do my first TypeScript/React project and I'm running into issues.
Using [this answer](https://stackoverflow.com/a/27852403), I have managed to read and play sound from my mic and also display some sample analysis data in the console. Now I'm trying to translate it into TS. Going step by step, I've arrived at this:
```
export class Processor {
    readonly BUFFER_SIZE = 16384;

    audioContext: AudioContext;
    gainNode: GainNode;
    microphoneStream: MediaElementAudioSourceNode;

    constructor() {
        this.audioContext = new AudioContext();
        console.log('audio is starting up ...');

        if (navigator.getUserMedia) {
            navigator.getUserMedia(
                { audio: true },
                function (stream) {
                    startMicrophone(stream);
                },
                function (e) {
                    alert('Error capturing audio.');
                });
        } else {
            alert('Seems like this browser might not be supported.');
        }
    }

    private startMicrophone(stream: MediaStream) {
        this.gainNode = this.audioContext.createGain();
        this.gainNode.connect(this.audioContext.destination);
        this.microphoneStream =
            this.audioContext.createMediaStreamSource(stream);
    }
}
```
Except the call to startMicrophone gives me
```
'Cannot find name 'startMicrophone'.'
```
I also tried to refer to it with `this`, which results in a different error:
```
''this' implicitly has type 'any' because it does not have a type annotation.'
```
I don't know what I'm doing wrong and could really use a bit of guidance.
|
**Recommended:** You have to use an arrow function if you want to use `this` here, because inside a plain `function` block `this` refers to that function's own context, not to the enclosing class instance.
```
export class Processor {
    readonly BUFFER_SIZE = 16384;

    audioContext: AudioContext;
    gainNode: GainNode;
    microphoneStream: MediaElementAudioSourceNode;

    constructor() {
        this.audioContext = new AudioContext();
        console.log('audio is starting up ...');

        if (navigator.getUserMedia) {
            navigator.getUserMedia({
                audio: true
            },
            (stream) => {
                this.startMicrophone(stream);
            },
            (e) => {
                alert('Error capturing audio.');
            });
        } else {
            alert('Seems like this browser might not be supported.');
        }
    }

    private startMicrophone(stream: MediaStream) {
        this.gainNode = this.audioContext.createGain();
        this.gainNode.connect(this.audioContext.destination);
        this.microphoneStream =
            this.audioContext.createMediaStreamSource(stream);
    }
}
```
Another way is to assign `this` to some other variable, e.g. `const self = this;`, and then use `self` inside the function.
```
constructor() {
    const self = this;
    this.audioContext = new AudioContext();

    if (navigator.getUserMedia) {
        navigator.getUserMedia({
            audio: true
        },
        function (stream) {
            self.startMicrophone(stream);
        },
        function (e) {
            alert('Error capturing audio.');
        });
    } else {
        alert('Seems like this browser might not be supported.');
    }
}
```
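A third option is `Function.prototype.bind`, which fixes `this` without rewriting the callback; a sketch of just the relevant call:
```
navigator.getUserMedia(
    { audio: true },
    this.startMicrophone.bind(this),
    (e) => {
        alert('Error capturing audio.');
    });
```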
|
Determine Number of Decimal Place using BigDecimal
I was interested to have the following `getNumberOfDecimalPlace` function:
```
System.out.println("0 = " + Utils.getNumberOfDecimalPlace(0)); // 0
System.out.println("1.0 = " + Utils.getNumberOfDecimalPlace(1.0)); // 0
System.out.println("1.01 = " + Utils.getNumberOfDecimalPlace(1.01)); // 2
System.out.println("1.012 = " + Utils.getNumberOfDecimalPlace(1.012)); // 3
System.out.println("0.01 = " + Utils.getNumberOfDecimalPlace(0.01)); // 2
System.out.println("0.012 = " + Utils.getNumberOfDecimalPlace(0.012)); // 3
```
May I know how can I implement `getNumberOfDecimalPlace`, by using `BigDecimal`?
The following code doesn't work as expected:
```
public static int getNumberOfDecimalPlace(double value) {
    final BigDecimal bigDecimal = new BigDecimal("" + value);
    final String s = bigDecimal.toPlainString();
    System.out.println(s);
    final int index = s.indexOf('.');
    if (index < 0) {
        return 0;
    }
    return s.length() - 1 - index;
}
```
The following get printed :
```
0.0
0 = 1
1.0
1.0 = 1
1.01
1.01 = 2
1.012
1.012 = 3
0.01
0.01 = 2
0.012
0.012 = 3
```
However, for the inputs 0 and 1.0 it doesn't work well. I expect "0" as the result, but the plain strings turn out to be "0.0" and "1.0", so the method returns "1".
|
This code:
```
int getNumberOfDecimalPlaces(BigDecimal bigDecimal) {
    String string = bigDecimal.stripTrailingZeros().toPlainString();
    int index = string.indexOf(".");
    return index < 0 ? 0 : string.length() - index - 1;
}
```
... passes these tests:
```
assertThat(getNumberOfDecimalPlaces(new BigDecimal("0.001")), equalTo(3));
assertThat(getNumberOfDecimalPlaces(new BigDecimal("0.01")), equalTo(2));
assertThat(getNumberOfDecimalPlaces(new BigDecimal("0.1")), equalTo(1));
assertThat(getNumberOfDecimalPlaces(new BigDecimal("1.000")), equalTo(0));
assertThat(getNumberOfDecimalPlaces(new BigDecimal("1.00")), equalTo(0));
assertThat(getNumberOfDecimalPlaces(new BigDecimal("1.0")), equalTo(0));
assertThat(getNumberOfDecimalPlaces(new BigDecimal("1")), equalTo(0));
assertThat(getNumberOfDecimalPlaces(new BigDecimal("10")), equalTo(0));
assertThat(getNumberOfDecimalPlaces(new BigDecimal("10.1")), equalTo(1));
assertThat(getNumberOfDecimalPlaces(new BigDecimal("10.01")), equalTo(2));
assertThat(getNumberOfDecimalPlaces(new BigDecimal("10.001")), equalTo(3));
```
... if that is indeed what you want. The other replies are correct, you have to use BigDecimal all the way through for this rather than double/float.
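If you would rather avoid the string round trip, the same result can be computed from the scale; a sketch, where `Math.max` clamps the negative scales that `stripTrailingZeros` produces for values like `10` (which becomes `1E+1`, scale -1):
```
int getNumberOfDecimalPlaces(BigDecimal bigDecimal) {
    return Math.max(0, bigDecimal.stripTrailingZeros().scale());
}
```
This variant should pass the same set of tests as the string-based version above.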
|
Angular/Jasmine - Do Spies work if invoked on ngOnInit?
I have a reactive form which I have split into smaller components to be able to better manage each form control individually. I'm relying on event emitters to be able to communicate the state of each control to the "parent" component that manages the state of the whole form.
My ngOnInit method for a given component looks like this:
```
@Output() formReady: EventEmitter<FormControl> = new EventEmitter();

ngOnInit() {
    // (some other unrelated logic here...)
    this.formReady.emit(this.userIdFormControl);
}
```
The test I'm trying to write for this component is pretty straightforward
```
it('should emit formReady event when component is initialised', () => {
    spyOn(component.formReady, 'emit');
    expect(component.formReady.emit).toHaveBeenCalled();
});
```
However, this test is failing because the spy never gets invoked (although if I add a console.log statement to ngOnInit, I can see it printed as many times as expected).
My question is: are Spies able to be invoked on ngOnInit? I can't see why they wouldn't work but you never know!
Thanks in advance,
Tiago
|
The problem is that `ngOnInit` is getting called before the spy has been created.
That is because you are probably calling `fixture.detectChanges()` in the `beforeEach` block. Just remove it there and call it in your spec.
```
it('should emit formReady event when component is initialised', () => {
    spyOn(component.formReady, 'emit');
    fixture.detectChanges();
    expect(component.formReady.emit).toHaveBeenCalled();
});
```
Or alternatively you can call `ngOnInit()` method again in your spec to see if it is working.
```
it('should emit formReady event when component is initialised', () => {
    spyOn(component.formReady, 'emit');
    component.ngOnInit();
    expect(component.formReady.emit).toHaveBeenCalled();
});
```
|
xlsx file created in my java code is not opening in ms excel, says the file format or file extension is not valid
I have Java code to create an Excel file, and it runs without errors for both the .xls and .xlsx extensions. I am able to open the .xls file, but the .xlsx file will not open in MS Excel, which says ***Excel cannot open the 'sample.xlsx' file because the file format or file extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file.***
Below is my java code I used.
```
HSSFWorkbook workBook = new HSSFWorkbook();
String file = "D:/sample.xlsx";
FileOutputStream fos = new FileOutputStream(file);
workBook.write(fos);
fos.flush();
```
I have gone through the link below, which looked like a similar issue, but I didn't find the answer there, so I am raising it as a new question.
[File Excel From Apache POI Cant Open by Ms Excel (corrupt)](https://stackoverflow.com/questions/30397347/file-excel-from-apache-poi-cant-open-by-ms-excel-corrupt)
|
If you're attempting to create an Excel file with the .xlsx extension, you need to use the XSSFWorkbook. I suggest reading [this](https://poi.apache.org/components/spreadsheet/index.html) to understand the difference between the XSSFWorkbook and the HSSFWorkbook objects.
Further, you can use the documentation [here](https://poi.apache.org/components/spreadsheet/quick-guide.html#NewWorkbook) to figure out how to make a new Excel file using Apache POI.
Essentially, you'll be doing this:
```
import java.io.FileOutputStream;

import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

Workbook wb = new XSSFWorkbook();
Sheet sheet1 = wb.createSheet("new sheet");
FileOutputStream fileOut = new FileOutputStream("workbook.xlsx");
wb.write(fileOut);
fileOut.close();
```
|
How to find a Java thread running on Linux with ps -axl?
I have a running JVM with two threads. Is it possible to see these running threads on my Linux OS with ps -axl ? I am trying to find out what priority the OS is giving to my threads. More info about this other issue [here](https://stackoverflow.com/questions/9916876/is-it-possible-to-create-a-high-priority-thread-with-a-regular-jvm-one-that-wil/).
|
Use
```
jps -v
```
for finding your java process. Sample Output:
```
3825 RemoteMavenServer -Djava.awt.headless=true -Xmx512m -Dfile.encoding=MacRoman
6172 AppMain -Didea.launcher.port=7533 -Didea.launcher.bin.path=/Applications/IntelliJ IDEA 10.app/bin -Dfile.encoding=UTF-8
6175 Jps -Dapplication.home=/Library/Java/JavaVirtualMachines/1.6.0_31-b04-411.jdk/Contents/Home -Xms8m
```
Then use
```
jstack 6172
```
(6172 is id of your process) to get stack of threads inside jvm. Thread priority could be found from it. Sample output:
```
.....
"main" **prio=5** tid=7ff255800800 nid=0x104bec000 waiting on condition [104beb000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at au.com.byr.Sample.main(Sample.java:11)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
.....
```
Enjoy!
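To tie this back to `ps`: on Linux, the `nid` that jstack prints is the kernel thread id (the `LWP` column) in hex, so you can cross-reference JVM threads with what the OS reports. A sketch (the nid value here is made up):
```
# list every thread with its kernel priority and nice value
ps -eLo pid,lwp,pri,ni,cmd | grep java

# convert a jstack nid to the decimal LWP number
printf '%d\n' 0x1a2b
```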
**EDIT:** If application running under different user than yourself (typical case on production and other non-local environments) then jps/jstack should be run via sudo. Examples:
```
sudo jps -v
sudo jstack 6172
```
|
PHP create xls from array
I am trying to create an xls file from an array and download it with the browser, using this code:
```
$sheet = array(
    array(
        'a1 data',
        'b1 data',
        'c1 data',
        'd1 data',
    )
);

$doc = new PHPExcel();
$doc->getActiveSheet()->fromArray($sheet, null, 'A1');

header('Content-Type: application/vnd.ms-excel');
header('Content-Disposition: attachment;filename="your_name.xls"');
header('Cache-Control: max-age=0');

// Do your stuff here
$writer = PHPExcel_IOFactory::createWriter($doc, 'Excel5');
```
The problem is that I get an empty file. Any idea what the issue could be?
|
Please try the code below. [As per the official documentation](https://phpexcel.codeplex.com/wikipage?title=Examples), you first need to save the file with the object writer.
Please let me know if this is what you wanted.
```
<?php
date_default_timezone_set('America/Los_Angeles');
require_once('PHPExcel.php');

$sheet = array(
    array(
        'a1 data',
        'b1 data',
        'c1 data',
        'd1 data',
    )
);

$doc = new PHPExcel();
$doc->setActiveSheetIndex(0);
$doc->getActiveSheet()->fromArray($sheet, null, 'A1');

// The Excel5 writer produces a BIFF .xls file, so use the matching MIME type
header('Content-Type: application/vnd.ms-excel');
header('Content-Disposition: attachment;filename="your_name.xls"');
header('Cache-Control: max-age=0');

$writer = PHPExcel_IOFactory::createWriter($doc, 'Excel5');
$writer->save('php://output');
?>
```
|
Why does (int 10) produce a Long instance?
Why does `(int 10)` not produce an instance of type `java.lang.Integer`?
```
; why Long here?
=> (type (int 10))
; java.lang.Long
; this one is also Long, why not java.lang.Number?
=> (type (num 10))
; java.lang.Long
=> (type (double 10))
; java.lang.Double
=> (type (long 10))
; java.lang.Long
=> (type (float 10))
; java.lang.Float
=> (type (short 10))
; java.lang.Short
=> (type (bigint 10))
; clojure.lang.BigInt
=> (type (bigdec 10))
; java.math.BigDecimal
=> (type (boolean 10))
; java.lang.Boolean
=> (type (char 10))
; java.lang.Character
=> (type (byte 10))
; java.lang.Byte
```
|
Clojure deals only with `long` integers internally. `(int)` is used to cast a `long` to an `int` for calling Java methods that expect an `int` argument.
In this case `(int 10)` does indeed return a Java `int`, but Clojure then promotes the `int` back to a `long`. `(type)` uses `(class)` to find out the type of its argument (in this case), and therefore the `long` gets boxed into a `java.lang.Long`.
You can produce `java.lang.Integer` by using one of the `java.lang.Integer` constructors or factory methods:
```
user> (type (Integer. 10))
java.lang.Integer
user> (type (Integer/valueOf 10))
java.lang.Integer
user> (type (Integer/decode "10"))
java.lang.Integer
...
```
`(num)` will upcast its argument to the abstract class `java.lang.Number`, but `(type)` will return the actual type of its argument, i.e. `java.lang.Long` again.
|
Ruby: Cannot allocate memory
I am in the process of developing a Ruby on Rails application. I am a newbie to Ruby/Rails.
I use Ruby 2.2.0 and Rails 4.2. When I run a command like:
```
rails g migration SomeMigrationName
```
it fails with the
```
Cannot allocate memory - fork(2) (Errno::ENOMEM)
```
I use Macbook Pro mid 2014 with OS X 10.10 on board and Vagrant/Virtualbox to run a virtual machine (Ubuntu 14.04) for Rails development.
Here is my Vagrant file:
```
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "forwarded_port", guest: 3000, host: 3000
  config.vm.synced_folder "dev", "/home/vagrant/dev"
  config.vm.synced_folder "opt", "/opt"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "512"
  end
end
```
I've read that such an error occurs when RAM runs out, but I use the same config (Vagrantfile) for another dev environment which runs several Python/Tornado apps, MongoDB and Redis, and it all works fine.
Do I need to increase vb.memory value or it's a Ruby bug?
|
When Ruby calls `fork`, the OS will make a copy of the entire parent process's address space, even if fork is only being called to `exec` another small process like `ls`. Momentarily, your system needs to be able to allocate a chunk of memory at least the size of the Ruby parent process before collapsing it down to what the child process actually needs.
So Rails is generally quite memory hungry, and if something then uses `fork`, you momentarily need twice as much memory.
**TL;DR** Use [posix-spawn](https://github.com/rtomayko/posix-spawn) instead of fork if you are in control of the code. Otherwise give your VM 1024MB or a bit of extra swap space to take up the slack for the `fork` call
**Example Ruby Memory Usage with`fork`**
Take a random VM, this one has swap space disabled:
```
$ free -m
total used free shared buffers cached
Mem: 1009 571 438 0 1 35
-/+ buffers/cache: 534 475
Swap: 0 0 0
```
Look at the `Mem:` row and `free` column. This is roughly your size limit for a new process, in my case `438` MiB.
My `buffers/cached` have already been [flushed](https://unix.stackexchange.com/a/87909/22470) for this test so that my `free` memory is at its limit. You may need to take the `buffers/cache` values into account if they are large. Linux can evict stale cache when memory is needed by a process.
**Use up some memory**
Create a ruby process with a string around the size of your free memory. There is some overhead for the `ruby` process so it's not going to exactly match `free`.
```
$ ruby -e 'mb = 380; a="z"*mb*2**20; puts "=)"'
=)
```
Then make the string slightly larger:
```
$ ruby -e 'mb = 385; a="z"*mb*2**20; puts "=)"'
-e:1:in `*': failed to allocate memory (NoMemoryError)
from -e:1:in `<main>'
```
Add a `fork` to the ruby process, reducing `mb` until it runs.
```
$ ruby -e 'mb = 195; a="z"*mb*2**20; fork; puts "=)"'
=)
```
A slightly larger fork process will produce the `ENOMEM` error:
```
$ ruby -e 'mb = 200; a="z"*mb*2**20; fork; puts "=)"'
-e:1:in `fork': Cannot allocate memory - fork(2) (Errno::ENOMEM)
from -e:1:in `<main>'
```
Running a command with backticks launches that process with a `fork` so has the same outcome:
```
$ ruby -e 'mb = 200; a="z"*mb*2**20; `ls`'
-e:1:in ``': Cannot allocate memory - ls (Errno::ENOMEM)
from -e:1:in `<main>'
```
So there you go: you need about twice the parent process's memory available on the system to fork a new process. MRI Ruby relies heavily on `fork` for its multi-process model, due to the design of Ruby, which uses a [global interpreter lock (GIL)](http://www.jstorimer.com/blogs/workingwithcode/8085491-nobody-understands-the-gil) that only allows one thread to execute at a time per Ruby process.
I believe Python has a lot less use of `fork` internally. When you do use `os.fork` in Python, the same occurs though:
```
python -c 'a="c"*420*2**20;'
python -c 'import os; a="c"*200*2**20; os.fork()'
```
Oracle have a [detailed article on the problem](http://www.oracle.com/technetwork/server-storage/solaris10/subprocess-136439.html) and talk about using the alternative of `posix_spawn()`. The article is directed at Solaris but this is a general POSIX Unix issue so applies to Linux (if not most Unices).
There is also a Ruby implementation of [`posix-spawn`](https://github.com/rtomayko/posix-spawn) which you could use if you are in control of the code. This module doesn't replace anything in Rails, so it won't help you here unless you replaced the calls to `fork` yourself.
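For completeness, a minimal sketch of what using the gem looks like (assuming `gem install posix-spawn`):
```
require 'posix/spawn'

# spawns the child directly instead of fork+exec,
# so no copy of the parent's address space is needed
child = POSIX::Spawn::Child.new('ls', '-l')
puts child.out
```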
|
How detect when a vcl style is changed?
I use several WinAPI functions which need the Handle of the form in order to work. Because the handle of the form is recreated when the VCL style is changed, many of the calls to these functions stop working. So I need a way to detect when the current VCL style is modified (changed), in order to update the calls to these functions. The question is: `How to detect when a vcl style is changed?`
|
When a VCL style is changed via the [`TStyleManager.SetStyle`](http://docwiki.embarcadero.com/Libraries/en/Vcl.Themes.TStyleManager.SetStyle) method, a [`CM_CUSTOMSTYLECHANGED`](http://docwiki.embarcadero.com/Libraries/XE2/en/Vcl.Controls) message is sent to all the forms of the application. That message is processed in the `WndProc` method of the form, and then a `CM_STYLECHANGED` message is sent to announce that the VCL style has changed. So you can listen for the [`CM_STYLECHANGED`](http://docwiki.embarcadero.com/Libraries/en/Vcl.Controls) message to detect when a VCL style has changed.
Try this sample Code.
```
type
  TForm17 = class(TForm)
    Button1: TButton;
    procedure Button1Click(Sender: TObject);
  private
    { Private declarations }
    procedure CMStyleChanged(var Message: TMessage); message CM_STYLECHANGED;
  public
    { Public declarations }
  end;

var
  Form17: TForm17;

implementation

uses
  Vcl.Themes;

{$R *.dfm}

procedure TForm17.Button1Click(Sender: TObject);
begin
  TStyleManager.SetStyle('Carbon');
end;

procedure TForm17.CMStyleChanged(var Message: TMessage);
begin
  ShowMessage('The vcl style has changed');
end;

end.
```
|
Why does my code only select year that is equal 10?
I'm a beginner in programming and have problems with this `if` statement:
```
if (f.year == (10 || 20 || 30 || 40 || 50 || 60 || 70 || 80 || 90 || 100 || 110 || 120)) && (f.rund != true)
```
The first problem is that this code is very complicated. Actually I only want to check if the `f.year` is a round two-digit number.
Next, my code does not work correctly. Somehow it only selects records whose `f.year` equals `10`.
How can I solve these problems?
|
It's because
```
(10 || 20 || 30 || 40 || 50 || 60 || 70 || 80 || 90 || 100 || 110 || 120)
```
expression always evaluates to `10`.
You can solve the problem with, for example:
```
(1..12).map { |el| el * 10 }.include?(f.year)
```
or, as suggested by @AurpRakshit:
```
(1..12).map(&10.method(:*)).include?(f.year)
```
[Here](https://stackoverflow.com/questions/18059294/how-to-create-simple-array-in-ruby) you have more examples of generating this kind of array.
Or, if you really want to check whether `f.year` is a round two-digit number, you can:
```
(10...100).include?(f.year) && f.year % 10 == 0
```
|
Who owns CWinThread after it was created by AfxBeginThread?
I am starting a thread by using `AfxBeginThread`. This returns a pointer to a new `CWinThread` object.
MSDN states that this pointer is NULL and that everything is deallocated if thread creation fails. However, once the thread has started regularly and then exits, the `CWinThread` object still exists. I am not sure if I should delete the `CWinThread` object or if this is done by MFC itself (though it does not seem so).
FYI, it is not likely that the thread exits, since it should run until the application ends. However, since I use it as part of a thread pool, I do not want the `CWinThread` to hang around forever.
|
The responsibility for cleaning up the CWinThread object depends on its `m_bAutoDelete` value. The default is to delete itself. For fire-and-forget, short-running threads this is fine: it will clean up after itself.
If your thread is long running and needs to be told it's time to quit or otherwise interacted with, you will want that CWinThread handle to remain valid and not point to a self-deleted object.
If you set m\_bAutoDelete to FALSE, you claim responsibility for deleting it. To play it safe with the returned pointer you should create suspended and set it to FALSE before resuming as suggested by Joseph Newcomer in his [Using Worker Threads](http://www.flounder.com/workerthreads.htm#Shutting%20down%20a%20thread%20from%20a%20view%20or%20main%20frame) article.
```
thread = AfxBeginThread(proc, this,
                        THREAD_PRIORITY_NORMAL, // default: use it
                        0,                      // default stack size
                        CREATE_SUSPENDED);      // let us set auto delete
if (thread) { // protect against that rare NULL return
    thread->m_bAutoDelete = FALSE;
    thread->ResumeThread();
}
```
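With auto-delete turned off, shutting the thread down later is then your job; a sketch, where the mechanism for asking the thread to finish (here a plain flag the thread proc polls) is just an assumption:
```
stopRequested = TRUE;                             // hypothetical flag checked by the thread proc
WaitForSingleObject(thread->m_hThread, INFINITE); // safe: the object is not auto-deleted
delete thread;                                    // our responsibility, since m_bAutoDelete == FALSE
thread = NULL;
```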
|
how to return value to parent function from nested anonymous function
I have a javascript function which should return a geocoding for a string:
```
function codeAddress(address) {
    var result = (new google.maps.Geocoder()).geocode({ 'address': address }, function (results, status) {
        if (status == google.maps.GeocoderStatus.OK) {
            return String(results[0].geometry.location.Ya) + ',' + String(results[0].geometry.location.Za);
        } else {
            return status;
        }
    });
    console.log(result);
    return result;
}
```
However, it returns "undefined". I understand the bug here: since the geocoder is asynchronous, the function `codeAddress` returns before `function(results, status)` has executed. But I need to know what the solution is, and the best practice.
|
Since it's asynchronous, you should pass a callback which handles the function:
```
function codeAddress(address, callback) {
    (new google.maps.Geocoder()).geocode({
        'address' : address
    }, function (results, status) {
        if (status == google.maps.GeocoderStatus.OK) {
            callback(String(results[0].geometry.location.Ya) + ','
                + String(results[0].geometry.location.Za));
        } else {
            callback(status);
        }
    });
}

codeAddress("test", function (result) {
    // do stuff with result
});
```
If you're using jQuery, you could also use deferred:
```
function codeAddress(address) {
    var dfd = new jQuery.Deferred();

    (new google.maps.Geocoder()).geocode({
        'address' : address
    }, function (results, status) {
        if (status == google.maps.GeocoderStatus.OK) {
            // trigger success
            dfd.resolve(String(results[0].geometry.location.Ya) + ','
                + String(results[0].geometry.location.Za));
        } else {
            // trigger failure
            dfd.reject(status);
        }
    });

    return dfd;
}

codeAddress("some address").then(
    // success
    function (result) {
        // do stuff with result
    },
    // failure
    function (statusCode) {
        // handle failure
    }
);
```
|
How to monitor a UDP multicast stream on a cisco network, hopefully with SNMP
We have a LAN with 2x Cisco 4500's as gateways running HSRP.
We're using Exterity HD IP Encoders to take HD video and put it onto the network as a multicast UDP stream (playable in VLC).
I have a fairely extensive Nagios setup on Linux and would like to find some way to check that:
1. Multicast stream is on the network.
2. Multicast stream isn't frozen, so check for audio or ...
3. Confirm the source IP of the stream matches what we expect from the multicast address.
1 and 3 could be combined maybe.
My approach thus far:
Using SNMP against the Cisco HSRP gateway IP, Nagios sends two arguments: the IP of the host (which should be the source of the multicast), e.g. 172.18.25.101, and the IP of the stream ($mroute), e.g. 239.101.0.1.
```
snmpwalk -v 2c -c changed 172.30.0.1 1.3.6.1.3.59.1.1.2.1.4 | grep $mroute | sed -e 's/.*IpAddress: //'
```
A few if's later, I know whether the stream is on the network and whether the multicast source I asked for matches the host IP (and if not, where it is coming from), and the check exits correctly for Nagios.
Or so I thought. Generally it is working as expected, but randomly with some hosts the source IP is not what is expected, and when checking manually it is clearly not correct. I think maybe a topology change or something (we have quite a large network), and it's seen from the other gateway or... I'm not great on multicast, sorry.
I'm pretty much stuck with the above part.
I then wanted to check that the video/audio was not frozen. I thought another check could be to use mplayer to dump the stream for 2 seconds to a file and do a check based on the size of the file: if it is very small, then it's probably frozen. But the stream will still send an image, so go with an audio check over a longer, say 10-second, period. The more I thought about this, the more I thought "there must be a better way"...
IPTV is pretty big these days; how are people monitoring multicast streams?
Thanks very much.
|
Have you considered using the IP-MROUTE-STD-MIB rather than the IGMP MIB? You can get statistics on a per-mroute basis, which will give you a much better view of the source in particular. There's also a set of Cisco extensions to this MIB that can provide more in the way of platform-specific info. One item you can potentially look for is a substantial difference in counters on your various routers through the path of the mroute. Some delta is to be expected, but this would be a good place to track thresholds.
For tracking streams freezing there's a pretty easy answer: ip multicast heartbeat (<http://www.cisco.com/en/US/docs/ios/12_2/ip/configuration/guide/1cfmulti.html#wp1003131>). You can configure a given router to throw an SNMP trap if no packets are seen on a configured multicast group for 10 seconds.
There is also a feature called mrm (multicast route monitor) that can be called from the Cisco CLI to set up and track synthetic multicast groups. You'd likely want to use EEM or similar to call it periodically and then throw a trap or syslog if it doesn't behave normally. This is also a good troubleshooting tool.
Also - just as you (should) monitor for changes in IGP adjacency, so too should you track on PIM. Events like neighbor state changes, elections, etc can indicate instability in the tree. It's not -necessarily- a big deal in all cases but should generally be quiet on a stable network.
I'm not sure which supervisor you're running in your 4500's, but some of the more recent models support netflow for multicast. This would give you a much more granular and global view of multicast performance and would naturally lend itself to statistical trending, storage, etc.. definitely a good way to go.
I hope this helps-
|
Laravel : Error when using first() back to back on a query builder
I have the following code in my laravel project
```
$config = DB::table('custom_config')->where('item_id', 5);

$cost = [
    'car_service_fee'  => $config->where('managed_by', 1)->first()->service_fee,
    'bike_service_fee' => $config->where('managed_by', 2)->first()->service_fee
];
```
My `custom_config` table is as of below.
```
+---------+------------+-------------+
| item_id | managed_by | service_fee |
|---------+------------+-------------|
| 5 | 1 | 8.5 |
|---------+------------+-------------|
| 5 | 2 | 2.0 |
+---------+------------+-------------+
```
my `car_service_fee` is fetching the result of `8.5`
but my `bike_service_fee` is returning `null` on `first()`
The same code works if it is just like given below,
```
$cost = [
'car_service_fee' => DB::table('custom_config')->where('item_id', 5)->where('managed_by', 1)->first()->service_fee,
'bike_service_fee' => DB::table('custom_config')->where('item_id', 5)->where('managed_by', 2)->first()->service_fee
];
```
Is there a problem with calling `first()` back to back on a query builder that is stored in a variable, or something like that in Laravel?
Thank you
|
`$config` is a Query Builder object. The majority of calls you are making on this object are 'building' a query. The object keeps all these `where` conditions internally. When a method to execute the query is called it will compile the query, execute it and return a result [calling `first` or `get` or ...]. The builder itself still exists as a builder and can continue to be built upon or the query can be executed again, etc.
In your case you are adding more `where` conditions to this single query object, `$config`, every time you call `where` on it.
You can see this behavior at any time by calling `toSql` on a builder to see what the generated query would look like.
You can avoid this by creating a new builder object or cloning `$config` so you can have 2 separate queries being built.
Example:
```
$config = DB::table('custom_config')->where('item_id', 5);
$config2 = clone $config;

$cost = [
    'car_service_fee'  => $config->where('managed_by', 1)->first()->service_fee,
    'bike_service_fee' => $config2->where('managed_by', 2)->first()->service_fee
];
```
`$config` and `$config2` both have the first `where` condition.
You could just clone them inline as well, if you don't need these builders after the fact:
```
'car_service_fee' => (clone $config)->where(...)->first()->...,
'bike_service_fee' => (clone $config)->where(...)->first()->...,
```
|
Understanding ldd output
How does `ldd` knows it's depending on `libc.so.6` ,not `libc.so.5` or `libc.so.7`?
```
libc.so.6 => /lib64/libc.so.6 (0x00000034f4000000)
/lib64/ld-linux-x86-64.so.2 (0x00000034f3c00000)
```
|
It is recorded inside the application binary itself (specified at compile time, more exactly at the link step, done with `ld`):
```
$ readelf -d /bin/echo
Dynamic section at offset 0x5f1c contains 21 entries:
Tag Type Name/Value
0x00000001 (NEEDED) Shared library: [libc.so.6]
...
```
(There are some additional columns showing how ELF stores information in the dynamic section, but you can see that libc.so.6 is hardcoded with the `.6` suffix because of [SONAME](http://en.wikipedia.org/wiki/Soname).)
or even without any knowledge of ELF file format:
```
$ strings /bin/echo |grep libc.so
libc.so.6
```
To see how the linker finds a library (this is done at the final step of compilation), use the `gcc` option `-Wl,--verbose` (this asks gcc to pass the option `--verbose` to `ld`):
```
$ gcc a.c -Wl,--verbose
...
attempt to open /usr/lib/gcc/i686-pc-linux-gnu/4.4.4/libc.so failed
attempt to open /usr/lib/gcc/i686-pc-linux-gnu/4.4.4/libc.a failed
attempt to open /usr/lib/gcc/i686-pc-linux-gnu/4.4.4/libc.so failed
attempt to open /usr/lib/gcc/i686-pc-linux-gnu/4.4.4/libc.a failed
attempt to open /usr/lib/gcc/i686-pc-linux-gnu/4.4.4/../../../libc.so succeeded
opened script file /usr/lib/gcc/i686-pc-linux-gnu/4.4.4/../../../libc.so
opened script file /usr/lib/gcc/i686-pc-linux-gnu/4.4.4/../../../libc.so
attempt to open /lib/libc.so.6 succeeded
/lib/libc.so.6
```
The linker doesn't know anything about the `.digit` suffix; it just iterates over all library search directories, trying to open `libLIBNAME.so` and `libLIBNAME.a`, where LIBNAME is the string after the `-l` option (the `-lc` option is added by default).
The first success is `/usr/lib/libc.so`, which itself is not a library but a linker script (a text file). Here is the content of a typical `libc.so` script:
```
$ cat /usr/lib/libc.so
/* GNU ld script
Use the shared library, but some functions are only in
the static library, so try that secondarily. */
OUTPUT_FORMAT(elf32-i386)
GROUP ( /lib/libc.so.6 /usr/lib/libc_nonshared.a AS_NEEDED ( /lib/ld-linux.so.2 ) )
```
So the script `/usr/lib/libc.so` is found earlier than the actual library, and this script says which file will be linked: `libc.so.6` in this case.
In the more common case, `lib___.so` is a symlink to some version like `lib___.so.3.4.5`, and the SONAME field inside `lib___.so.3.4.5` tells `ld` to link not against `lib___.so` but against `lib___.so.3.4`, which is another symlink to `lib___.so.3.4.5`. The `.3.4` name will be recorded in the NEEDED field of the binary.
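If `readelf` is not at hand, `objdump` shows the same recorded dependency:
```
$ objdump -p /bin/echo | grep NEEDED
  NEEDED               libc.so.6
```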
|
Red color palette for gnuplot
I am looking for a palette in gnuplot for red color from very-light-red to deep-red. I found a script for black here <http://gnuplot.sourceforge.net/demo_5.3/pm3dcolors.16.gnu>
I tried changing the script to use `set palette functions red, red, red`, but it is not working.
|
By "legend" do you really mean "palette"? The legend is the list of plot titles and samples; entries there will normally appear in whatever color the corresponding plot used. The palette is the color gradient used for pm3d plots. To define a gradient from white to deep red:
```
set palette defined (0 "white", 1 "dark-red")
test palette
```
If you want more precise control over how light/dark the two extremes are you can provide instead hexadecimal descriptions of the RGB components. E.g. the above command is equivalent to
```
set palette defined (0 "0xffffff", 1 "0x8b0000")
```
[](https://i.stack.imgur.com/q2zyO.png)
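If the two-stop gradient is too crude, you can list intermediate stops and gnuplot will interpolate linearly between them; a sketch:
```
set palette defined (0 "#ffecec", 0.5 "red", 1 "#500000")
test palette
```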
|
Terraform - Use nested loops with count
I am trying to use a nested loop in terraform. I have two list variables `list_of_allowed_accounts` and `list_of_images`, and looking to iterate over list `list_of_images` and then iterate over list `list_of_allowed_accounts`.
Here is my terraform code.
```
variable "list_of_allowed_accounts" {
type = "list"
default = ["111111111", "2222222"]
}
variable "list_of_images" {
type = "list"
default = ["alpine", "java", "jenkins"]
}
data "template_file" "ecr_policy_allowed_accounts" {
template = "${file("${path.module}/ecr_policy.tpl")}"
vars {
count = "${length(var.list_of_allowed_accounts)}"
account_id = "${element(var.list_of_allowed_accounts, count.index)}"
}
}
resource "aws_ecr_repository_policy" "repo_policy_allowed_accounts" {
count = "${length(var.list_of_images)}"
repository = "${element(aws_ecr_repository.images.*.id, count.index)}"
count = "${length(var.list_of_allowed_accounts)}"
policy = "${data.template_file.ecr_policy_allowed_accounts.rendered}"
}
```
This is a bash equivalent of what I am trying to do.
```
for image in alpine java jenkins
do
for account_id in 111111111 2222222
do
// call template here using variable 'account_id' and 'image'
done
done
```
|
Terraform doesn't have direct support for this sort of nested iteration, but we can fake it with some arithmetic.
```
variable "list_of_allowed_accounts" {
type = "list"
default = ["1111", "2222"]
}
variable "list_of_images" {
type = "list"
default = ["alpine", "java", "jenkins"]
}
data "template_file" "ecr_policy_allowed_accounts" {
count = "${length(var.list_of_allowed_accounts) * length(var.list_of_images)}"
template = "${file("${path.module}/ecr_policy.tpl")}"
vars {
account_id = "${var.list_of_allowed_accounts[count.index / length(var.list_of_images)]}"
image = "${var.list_of_images[count.index % length(var.list_of_images)]}"
}
}
resource "aws_ecr_repository_policy" "repo_policy_allowed_accounts" {
count = "${data.template_file.ecr_policy_allowed_accounts.count}"
repository = "${var.list_of_images[count.index % length(var.list_of_images)]}"
policy = "${data.template_file.ecr_policy_allowed_accounts.*.rendered[count.index]}"
}
```
Since we want to create a policy template for every combination of account and image, the `count` on the `template_file` data block is the two multiplied together. We can then use the division and modulo operations to get back from `count.index` to the separate indices into each list.
Since I didn't have a copy of your policy template I just used a placeholder one; this configuration thus gave the following plan:
```
+ aws_ecr_repository_policy.repo_policy_allowed_accounts.0
    policy:     "policy allowing 1111 to access alpine"
    repository: "alpine"
+ aws_ecr_repository_policy.repo_policy_allowed_accounts.1
    policy:     "policy allowing 1111 to access java"
    repository: "java"
+ aws_ecr_repository_policy.repo_policy_allowed_accounts.2
    policy:     "policy allowing 1111 to access jenkins"
    repository: "jenkins"
+ aws_ecr_repository_policy.repo_policy_allowed_accounts.3
    policy:     "policy allowing 2222 to access alpine"
    repository: "alpine"
+ aws_ecr_repository_policy.repo_policy_allowed_accounts.4
    policy:     "policy allowing 2222 to access java"
    repository: "java"
+ aws_ecr_repository_policy.repo_policy_allowed_accounts.5
    policy:     "policy allowing 2222 to access jenkins"
    repository: "jenkins"
```
Each policy instance applies to a different pair of account id and image, covering all combinations.
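For readers on newer Terraform (0.12.6+), the same cross product can be expressed directly with `setproduct` and `for_each`, with no index arithmetic; a sketch, assuming the template takes the same `account_id` and `image` variables:
```
resource "aws_ecr_repository_policy" "repo_policy_allowed_accounts" {
  for_each = {
    for pair in setproduct(var.list_of_allowed_accounts, var.list_of_images) :
    "${pair[0]}-${pair[1]}" => { account_id = pair[0], image = pair[1] }
  }

  repository = each.value.image
  policy = templatefile("${path.module}/ecr_policy.tpl", {
    account_id = each.value.account_id
    image      = each.value.image
  })
}
```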
|
How do I set the log directory of glog from code
Does anyone know how to set/modify the log dir in the golang source code?
I want to set the log dir in the source code, instead of via `-log_dir=` on the command line.
|
This is a hack I have seen lying around: set the flags in code.
Also very good for setting log levels from code.
```
package main

import (
    "flag"

    "github.com/golang/glog"
)

func main() {
    flag.Parse()

    glog.Info("hi_a")
    flag.Lookup("logtostderr").Value.Set("true")
    glog.Info("hi_b")

    flag.Lookup("log_dir").Value.Set("/path/to/log/dir")

    glog.V(4).Info("v4a")
    flag.Lookup("v").Value.Set("10")
    glog.V(4).Info("v4b")
    //etc.
}
>>> hi_b
>>> v4b
```
|
Zxing Camera in Portrait mode on Android
I want to show `portrait` orientation on `Zxing`'s camera.
How can this be done?
|
Here's how it works.
**Step 1: Add the following lines to rotate the data before `buildLuminanceSource(..)` in *decode(byte[] data, int width, int height)***
**DecodeHandler.java:**
```
byte[] rotatedData = new byte[data.length];
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++)
        rotatedData[x * height + height - y - 1] = data[x + y * width];
}
int tmp = width;
width = height;
height = tmp;

PlanarYUVLuminanceSource source = activity.getCameraManager().buildLuminanceSource(rotatedData, width, height);
```
---
**Step 2: Modify `getFramingRectInPreview()`.**
**CameraManager.java**
```
rect.left = rect.left * cameraResolution.y / screenResolution.x;
rect.right = rect.right * cameraResolution.y / screenResolution.x;
rect.top = rect.top * cameraResolution.x / screenResolution.y;
rect.bottom = rect.bottom * cameraResolution.x / screenResolution.y;
```
---
**Step 3: Disable the check for Landscape Mode in `initFromCameraParameters(...)`**
**CameraConfigurationManager.java**
```
// remove the following
if (width < height) {
    Log.i(TAG, "Display reports portrait orientation; assuming this is incorrect");
    int temp = width;
    width = height;
    height = temp;
}
```
---
**Step 4: Add the following line to rotate the camera in `setDesiredCameraParameters(...)`**
**CameraConfigurationManager.java**
```
camera.setDisplayOrientation(90);
```
---
**Step 5: Do not forget to set the orientation of the activity to portrait in the manifest.**
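The manifest entry would look something like this (the activity name depends on how you embed the scanner; shown here with ZXing's stock capture activity):
```
<activity
    android:name="com.google.zxing.client.android.CaptureActivity"
    android:screenOrientation="portrait" />
```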
|
Xcode: What is a target and scheme in plain language?
Yeah, the title says it :-) What do they mean in plain English? I really don't understand the explanation on Apple's website, and I need to rename my target and I'm afraid that nothing will work after that.
|
I've added in Workspace and Project too!
- **Workspace** - Contains one or more *projects*. These projects usually relate to one another
- **Project** - Contains code and resources, etc. (You'll be used to these!)
- **Target** - Each project has one or more targets.
- Each target defines a list of build settings for that project
- Each target also defines a list of classes, resources, custom scripts etc to include/ use when building.
- Targets are usually used for different distributions of the same project.
- For example, my project has two targets, a "normal" build and an "office" build that has extra testing features and may contain several background music tracks and a button to change the track (as it currently does).
- You'll be used to adding classes and resources to your default target as you add them.
- You can pick and choose which classes / resources are added to which target.
- In my example, I have a "DebugHandler" class that is added to my office build
- If you add tests, this also adds a new target.
- **Scheme** - A scheme defines what happens when you press "Build", "Test", "Profile", etc.
- Usually, each target has at least one scheme
- You can autocreate schemes for your targets by going to Scheme > Manage Schemes and pressing "Autocreate Schemes Now"
|
Python equivalent of curl for posting data from file
I am looking for a Python equivalent for the curl command below.
`curl http://localhost/x/y/update -H 'Content-type: text/xml; charset=utf-8' --data-binary @filename.xml`
Incidentally, I normally post data as strings using the code below.
`curl http://localhost/x/y/update --data '<data>the data is here</data>' -H 'Content-type:text/xml; charset=utf-8'`
```
baseurl = "http://localhost/x/y"
thedata = '<data>the data is here</data>'
headers = {"Content-type": "text/xml", "charset": "utf-8"}
thequery = urlparse.urljoin(baseurl, thedata, querycontext)
therequest = urllib2.Request(thequery, headers)
theresponse = urllib2.urlopen(therequest)
```
|
Python [requests](http://docs.python-requests.org/en/latest/) is a great library for this kind of stuff. What you have there can be done simply like this:
```
import requests

headers = {'content-type': 'text/xml; charset=utf-8'}

# equivalent of --data '<data>the data is here</data>'
response = requests.post(url, data="<data>the data is here</data>", headers=headers)

# equivalent of --data-binary @filename.xml: post the file's contents as the body
with open("filename.xml", "rb") as fd:
    response = requests.post(url, data=fd, headers=headers)
```
The problem with pycurl and some other URL and HTTP client libraries for Python is that they require more effort than you should need for something relatively simple. [requests](http://docs.python-requests.org/en/latest/) is way more user friendly, and I think it's what you are looking for here.
Hope this helps
|
How to change height of UITableViewCell when constraint changes?
My question is related to this one: [How to change height Constraint of UIView in UitableviewCell when using UITableViewAutomaticDimension](https://stackoverflow.com/questions/48401654/how-to-change-height-constraint-of-uiview-in-uitableviewcell-when-using-uitablev)
The solution there does not seem to be working for me.
[](https://i.stack.imgur.com/7eR5O.png)
In the image above i have a simple cell.
On tap of the cell, I want to change the constraint of the redView to make it larger. This should then automatically change the height of the cell.
I already have the height constraint wired to an @IBOutlet, and I think I am correctly changing the size of the cell, but it is not working.
Here is my sample app that is not working. Any help? [SampleApp - for Xcode 9.3](https://github.com/patchthecode/JTAppleCalendar/files/2193541/test.zip)
|
You need to set a bottom constraint on the red view so Auto Layout can stretch the cell after you set the constant value:
```
extension ViewController: UITableViewDataSource, UITableViewDelegate {
    func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return 1
    }

    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "c", for: indexPath) as! customcell
        configure(cell: cell, indexPath: indexPath)
        cell.redview.backgroundColor = .red
        cell.selectionStyle = .none
        return cell
    }

    func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
        let cell = tableView.cellForRow(at: indexPath) as! customcell
        cell.constraint.constant = data[indexPath.row] == "contracted" ? 30 : 200
        data[indexPath.row] = data[indexPath.row] == "contracted" ? "expanded" : "contracted"
        tableView.reloadData()
    }

    func configure(cell: customcell, indexPath: IndexPath) {
        let data = self.data[indexPath.row]
        if data == "expanded" {
            cell.constraint.constant = 200
        } else {
            cell.constraint.constant = 30
        }
        cell.layoutIfNeeded()
    }
}
```
|
Quartz.Net in ASP.Net Application
Just downloaded Quartz.Net, read the documentation (which is out of date) and have ended up with the code below, which I believe is correct. (Let me know if it isn't.)
I put this in the Application\_Start of my ASP.Net application and the code gets executed, but the job does not run. I think I read somewhere about setting Quartz up as a singleton, but I am not sure if I've done that here.
I want to set this up to run daily at 9.00, but for now have used StartNow to check that it works.
Please advise what I have to do?
```
private void StartScheduler()
{
    ISchedulerFactory schedulerFactory = new StdSchedulerFactory();
    IScheduler scheduler = schedulerFactory.GetScheduler();
    scheduler.Start();

    IJobDetail jobDetail = JobBuilder
        .Create()
        .OfType(typeof(DBCleanUpJob))
        .WithIdentity(new JobKey("test", "1"))
        .Build();

    var trigger = Quartz.TriggerBuilder.Create()
        .ForJob(jobDetail)
        .WithIdentity(new TriggerKey("test", "1"))
        .WithSimpleSchedule()
        .StartNow()
        .Build();
    //.WithDailyTimeIntervalSchedule(x => x.StartingDailyAt(new TimeOfDay(09, 00)));

    scheduler.ScheduleJob(jobDetail, trigger);
}

public class DBCleanUpJob : IJob
{
    private IDocumentSession DocumentSession;

    public DBCleanUpJob(IDocumentSession DocSession)
    {
        DocumentSession = DocSession;
    }

    #region IJob Members

    public void Execute(IJobExecutionContext context)
    {
        throw new NotImplementedException();
    }

    #endregion
}
```
|
As you said, the scheduler should be a singleton. With the code above, the scheduler is not a singleton, and it only exists in the scope of the application starting, not the application running.
```
public static IScheduler Scheduler { get; private set; }
private void StartScheduler()
{
Scheduler = new StdSchedulerFactory().GetScheduler();
Scheduler.Start();
var jobDetail = JobBuilder
.Create()
.OfType(typeof(DBCleanUpJob))
.WithIdentity(new JobKey("test", "1"))
.Build();
var trigger = Quartz.TriggerBuilder.Create()
.ForJob(jobDetail)
.WithIdentity(new TriggerKey("test", "1"))
.WithSimpleSchedule()
.StartNow()
.Build();
//.WithDailyTimeIntervalSchedule(x=>x.StartingDailyAt(new TimeOfDay(09,00)));
Scheduler.ScheduleJob(jobDetail, trigger);
}
```
And as Jehof pointed out, IIS will shut down a website/application if there is no activity for a certain period of time.
Also note that your jobs will not have access to the ASP.NET pipeline. The jobs do not execute within the context of a request, so session, request, response, and cookies are not available to the job.
Finally, if you want the scheduler to always run, it will need to be independent of the website. Windows services are a good candidate: create a Windows service project and have the scheduler start when the service starts. You could then set up Quartz on the website to proxy jobs to the Windows service, allowing the site to schedule jobs while the actual storage and execution is performed by the Windows service scheduler.
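If you then want the daily run at 9:00 rather than `StartNow`, a cron-based trigger is one way to express it (a sketch; the key names here are arbitrary):
```
var dailyTrigger = Quartz.TriggerBuilder.Create()
    .ForJob(jobDetail)
    .WithIdentity(new TriggerKey("dailyCleanup", "1"))
    // Quartz cron format: sec min hour day-of-month month day-of-week
    .WithCronSchedule("0 0 9 * * ?")
    .Build();
Scheduler.ScheduleJob(jobDetail, dailyTrigger);
```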
|
How to make Python's findall regular expression method inclusive
I have a string that looks as follows.
`s = 'string with %%substring1%% and %%substring2%%'`
I want to extract the text in the substrings *including the `%%`* and I cannot figure out how to make a regular expression inclusive.
For example, `re.findall('%%(.*?)%%', s, re.DOTALL)` will output `['substring1', 'substring2']`, but what I really want is for it to return `['%%substring1%%', '%%substring2%%']`.
Any suggestions?
|
You were quite close. Make the group match the entire required portion rather than only the string in between:
```
>>> s = 'string with %%substring1%% and %%substring2%%'
>>> import re
>>> re.findall('(%%.*?%%)', s, re.DOTALL)
['%%substring1%%', '%%substring2%%']
```
You actually do not need the parens at all!
```
>>> re.findall('%%.*?%%', s, re.DOTALL) # Even this works !!!
['%%substring1%%', '%%substring2%%']
```
And for some visualization, check this out

[Debuggex Demo](https://www.debuggex.com/r/HFb9DVAVtl2UTXz_)
And check the explanation [here](https://regex101.com/r/bQ6pA5/1#python)
|
How can I search for substring in a buffer that contains null?
Using C, I need to find a substring inside a buffer that may contain nulls.
```
haystack = "Some text\0\0\0\0 that has embedded nulls".
needle = "has embedded"r
```
I need to return the start of the substring, or null, similar to strstr():
```
request_segment_end = mystrstr(request_segment_start, boundary);
```
Are there any existing implementations that you know of?
**Update**
I found an implementation of memmem on Google Code Search, which I've copied here verbatim, untested:
```
/*
* memmem.c
*
* Find a byte string inside a longer byte string
*
* This uses the "Not So Naive" algorithm, a very simple but
* usually effective algorithm, see:
*
* http://www-igm.univ-mlv.fr/~lecroq/string/
*/
#include <string.h>
void *memmem(const void *haystack, size_t n, const void *needle, size_t m)
{
const unsigned char *y = (const unsigned char *)haystack;
const unsigned char *x = (const unsigned char *)needle;
size_t j, k, l;
if (m > n || !m || !n)
return NULL;
if (1 != m) {
if (x[0] == x[1]) {
k = 2;
l = 1;
} else {
k = 1;
l = 2;
}
j = 0;
while (j <= n - m) {
if (x[1] != y[j + 1]) {
j += k;
} else {
if (!memcmp(x + 2, y + j + 2, m - 2)
&& x[0] == y[j])
return (void *)&y[j];
j += l;
}
}
} else
do {
if (*y == *x)
return (void *)y;
y++;
} while (--n);
return NULL;
}
```
|
It doesn't make sense to me for a "string" to contain null characters. Strings are null-terminated, so the first occurrence marks the end of the string. Besides, what's to say there aren't more characters after the null terminator that follows the word `"nulls"`?
If you mean to search in a *buffer*, then that would make more sense to me. You'd just have to search the buffer ignoring null characters and just relying on the lengths. I don't know of any existing implementations but it should be easy to whip up a simple naive implementation. Of course use a better search algorithm here as needed.
```
char *search_buffer(char *haystack, size_t haystacklen, char *needle, size_t needlelen)
{ /* warning: O(n^2) */
int searchlen = haystacklen - needlelen + 1;
for ( ; searchlen-- > 0; haystack++)
if (!memcmp(haystack, needle, needlelen))
return haystack;
return NULL;
}
char haystack[] = "Some text\0\0\0\0 that has embedded nulls";
size_t haylen = sizeof(haystack)-1; /* exclude null terminator from length */
char needle[] = "has embedded";
size_t needlen = sizeof(needle)-1; /* exclude null terminator from length */
char *res = search_buffer(haystack, haylen, needle, needlen);
```
|
Can't pass environment variables with the docker-compose run -e option
I'm trying to get the variable from the command line using:
```
sudo docker-compose -f docker-compose-fooname.yml run -e BLABLA=hello someservicename
```
My file looks like this:
```
version: '3'
services:
someservicename:
environment:
- BLABLA
image: docker.websitename.com/image-name:latest
volumes:
- /var/www/image-name
command: ["npm", "run", BLABLA]
```
All of this is so that I can run a script whose name is given by BLABLA on the command line; I've tried following the [official documentation](https://docs.docker.com/compose/environment-variables/#set-environment-variables-in-containers).
Tried several options including:
```
sudo COMPOSE_OPTIONS="-e BLABLA=hello" docker-compose -f docker-compose-fooname.yml run someservicename
```
UPDATE:
I have to mention that as it is, I always get:
```
WARNING: The FAKE_SERVER_MODE variable is not set. Defaulting to a blank string.
```
Even when I just run the following command (be it remove, stop..):
```
sudo docker-compose -f docker-compose-fooname.yml stop someservicename
```
For the record: I'm pulling the image first; I never build it myself, my CI/CD tool (GitLab) does. Does this affect anything?
I'm using docker-compose version 1.18, docker version 18.06.1-ce, Ubuntu 16.04
|
That `docker-compose.yml` syntax doesn't work the way you expect. If you write:
```
command: ["npm", "run", BLABLA]
```
A YAML parser will turn that into a list of three strings `npm`, `run`, and `BLABLA`, and when Docker Compose sees that list it will try to run literally that exact command, without running a shell to try to interpret anything.
If you set it to a string, Docker will run a shell over it, and that shell will expand the environment variable; try
```
command: "npm run $BLABLA"
```
That having been said, this is a slightly odd use of Docker Compose. As the `services:` key implies, the more usual use case is to launch some set of long-running services with `docker-compose up`; you might run `npm run start` or some such as a service, but you wouldn't typically have a totally parametrizable block with no default.
I might make the `docker-compose.yml` just say
```
version: '3'
services:
someservicename:
image: docker.websitename.com/image-name:latest
command: ["npm", "run", "start"]
```
and if I did actually need to run something else, run
```
docker-compose run --rm someservicename npm run somethingelse
```
(or just use my local `./node_modules/.bin/somethingelse` and not involve Docker at all)
|
Why is the pricing of my GCP instance going down every week and comes back up at the beginning of each month?
I'm running on Google Cloud. I was looking at the costs and I noticed something weird - every week the daily cost of running machines is going down, and then on the 1st day of the month, it goes back up.
It's "jumping" between $1/day up to $2.5/day, so it's very significant.
This is a screenshot of the cost breakdown of running one machine with 2 vCPUs, 7.5 GB RAM in europe-west.
![](https://i.stack.imgur.com/avEPy.png)
Why is it going up and down? Is there any way to keep it at $1/day?
|
The prices don't change, but an increasing discount is applied the longer your instances run.
Google Compute Engine offers a **sustained use discount** for running specific Compute Engine resources for a significant portion of the billing month. The discounts come in a couple of tiers: they start to apply after running for the first 25% of a month, then increase after running 50% and 75% of the month. See <https://cloud.google.com/compute/docs/sustained-use-discounts>
![](https://i.stack.imgur.com/HKPKE.png)
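As a rough sketch of the arithmetic behind the tiers (the discount fractions below are illustrative placeholders; check the linked documentation for your machine type's actual rates):
```
# Hypothetical sustained-use tiers: each successive quarter of the month
# is billed at a smaller fraction of the base rate (illustrative values).
tiers = [1.0, 0.8, 0.6, 0.4]   # fraction of base rate per quarter of the month
base_rate_per_day = 2.5        # $/day at full price (illustrative)

days = 30.0
effective = sum(f * base_rate_per_day * (days / 4) for f in tiers) / days
print("effective rate: $%.2f/day" % effective)  # ~70% of base with these tiers
```
This is why the daily cost falls as the month progresses and resets to the full rate on the 1st.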
An alternative to the automatically applied sustained use discount is the **committed use discount**, where signing up for one or three years is awarded with a discount: <https://cloud.google.com/compute/docs/instances/signing-up-committed-use-discounts>
|
How to embed V8 in a Java application?
I'm looking for a solution for embedding the Google JavaScript engine V8 in my Java application.
Have you got some solutions?
|
You can use J2V8 <https://github.com/eclipsesource/J2V8>. It's even available in [Maven Central](http://eclipsesource.com/blogs/2015/02/25/announcing-j2v8-2-0/).
Below is a Hello, World! program using J2V8.
```
package com.example;
import com.eclipsesource.v8.V8;
public class EclipseCon_snippet5 {
public static class Printer {
public void print(String string) {
System.out.println(string);
}
}
public static void main(String[] args) {
V8 v8 = V8.createV8Runtime();
v8.registerJavaMethod(new Printer(), "print", "print", new Class<?>[]{String.class});
v8.executeVoidScript( "print('Hello, World!');" );
v8.release(true);
}
}
```
You will need to specify your platform in your pom.xml. J2V8 currently supports win32\_x86, macosx\_x86\_64, android\_x86 and android\_armv7l. The reason they are different is the native bindings and the pre-built version of V8 that is bundled.
For example, on MacOS you can use.
```
<dependencies>
<dependency>
<groupId>com.eclipsesource.j2v8</groupId>
<artifactId>j2v8_macosx_x86_64</artifactId>
<version>2.0</version>
<scope>compile</scope>
</dependency>
</dependencies>
```
|
How to Create a Single Dummy Variable with conditions in multiple columns?
I am trying to efficiently create a binary dummy variable (1/0) in my data set based on whether or not one or more of 7 variables (col9-15) in the data set take on a specific value (35); I don't want to test all columns.
While `as.numeric` is usually ideal, I can only get it to work with one column at a time:
```
data$indicator <- as.numeric(data$col1 == 35)
```
Any idea how I can modify the above code so that if any of `data$col9` - `data$col15` are "35" then my indicator variable takes on a 1?
Thanks!!!
|
You can use `rowSums` (a vectorized solution) like this:
```
set.seed(123)
dat <- matrix(sample(c(35,1:100),size=15*20,rep=T),ncol=15,byrow=T)
cbind(dat,rowSums(dat[,9:15] == 35) > 0)
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14] [,15] [,16]
[1,] 29 79 41 89 94 4 53 90 55 46 96 45 68 57 10 0
[2,] 90 24 4 33 96 89 69 64 100 66 71 54 60 29 14 0
[3,] 97 91 69 80 2 48 76 21 32 23 14 41 41 37 15 0
[4,] 14 23 47 26 86 4 44 80 12 56 20 12 76 90 37 0
[5,] 67 9 38 27 82 45 81 82 80 44 76 63 71 35 48 1
[6,] 22 38 61 35 11 24 67 42 79 10 43 99 90 89 17 0
[7,] 13 65 34 66 32 18 79 9 47 51 60 33 49 96 48 0
[8,] 89 92 61 41 14 94 30 6 95 72 14 55 96 59 40 0
[9,] 65 32 31 22 37 99 15 9 14 69 62 90 67 74 52 0
[10,] 66 83 79 98 44 31 41 1 18 85 23 24 7 24 73 0
[11,] 85 50 39 24 11 39 57 21 44 22 50 35 65 37 35 1
[12,] 53 74 22 41 26 63 18 87 75 67 62 37 53 88 58 0
[13,] 84 31 71 26 60 48 26 57 92 91 27 32 99 62 94 0
[14,] 47 41 66 15 57 24 97 60 52 40 88 36 29 17 17 0
[15,] 48 25 21 68 4 70 35 41 82 92 28 97 73 69 5 0
[16,] 39 48 56 70 92 62 43 54 5 26 40 19 84 15 81 0
[17,] 55 66 17 63 31 73 40 97 97 73 25 22 59 27 53 0
[18,] 79 16 40 47 87 93 89 68 95 52 58 33 35 2 50 1
[19,] 87 35 7 16 77 74 98 47 7 65 76 13 40 22 5 0
[20,] 39 6 22 5 67 30 10 7 88 76 82 99 10 10 80 0
```
**EDIT**
I replaced the `cbind` with `transform`. Since the column will be boolean, I coerce it to get 0/1.
```
transform(dat,x=as.numeric((rowSums(dat[,9:15] == 35) > 0)))
```
The result is a data.frame (coerced from a matrix by `transform`).
**EDIT2** (as suggested by @flodel)
```
data$indicator <- as.integer(rowSums(data[paste0("col", 9:15)] == 35) > 0)
```
where `data` is the OP's data.frame.
|
What is the width of fixed-width integers in bash?
[Shell Arithmetic](http://www.gnu.org/software/bash/manual/bashref.html#Shell-Arithmetic) says:
>
> Evaluation is done in fixed-width integers with no check for overflow,
> though division by 0 is trapped and flagged as an error.
>
>
>
Example:
```
$ echo $(( 1 << 32 ))
4294967296
$ echo $(( (1 << 64) - 1 ))
0
```
What are integer limits in shell arithmetic in bash?
[@rici pointed out](https://stackoverflow.com/a/26123955/4279) that [POSIX shell](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_06_04) guarantees **signed long** integer range (as defined by ISO C):
```
-2**31+1 to +2**31-1
```
[@John Zwinck pointed out](https://stackoverflow.com/a/26121803/4279) that [`bash` source code indicates that `intmax_t` is used](http://git.savannah.gnu.org/cgit/bash.git/tree/expr.c#n22):
>
> All arithmetic is done as `intmax_t` integers with no checking for overflow
>
>
>
Does bash guarantee **in its documentation** that it uses `intmax_t` or some other C type for integers?
|
**Bash does not document the precise size of integers, and the size may vary from platform to platform.**
However, it does make an attempt to conform to Posix, which specifies that arithmetic expansion uses signed long arithmetic, which must be at least 32 bits including the sign bit.
Posix does not require integer arithmetic to be modulo 2^k for any value of `k` [but see Note 1], although `bash` on common platforms will do so, and it particularly does not guarantee that arithmetic operators will behave exactly as though the values were signed longs. Posix even allows the simulation of integer arithmetic with floating point, provided that the floating point values have sufficient precision:
>
> As an extension, the shell may recognize arithmetic expressions beyond those listed. The shell may use a signed integer type with a rank larger than the rank of signed long. The shell may use a real-floating type instead of signed long as long as it does not affect the results in cases where there is no overflow. (XSH §2.6.4)
>
>
>
That would permit the use of IEEE-754 floating point doubles (53 bits of precision) on a platform where `long` was only 32 bits, for example. While `bash` does not do so -- as documented, `bash` uses a fixed-width integer datatype -- other shell implementations might, and portable code should not make assumptions.
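For example, on a typical 64-bit build of `bash`, where the fixed-width type is a 64-bit `intmax_t`, the unchecked overflow is easy to observe (illustrative; exact values depend on the platform):
```
$ echo $(( 9223372036854775807 ))      # 2**63 - 1, the largest representable value
9223372036854775807
$ echo $(( 9223372036854775807 + 1 ))  # overflows silently and wraps negative
-9223372036854775808
```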
---
**Notes:**
1. Posix generally defers to the ISO C standard, but there are a number of places where Posix adds an additional constraint, some of which are marked as extensions (`CX`):
>
> POSIX.1-2008 acts in part as a profile of the ISO C standard, and it may choose to further constrain behaviors allowed to vary by the ISO C standard. Such limitations and other compatible differences are not considered conflicts, even if a CX mark is missing. The markings are for information only.
>
>
>
One of these additional constraints is the existence of exact-width integer types. Standard C requires the types `int_{least,fast}{8,16,32,64}_t` and their unsigned analogues. It does not require the exact-width types, such as `int32_t`, unless some integer type qualifies. An exact-width type must have exactly the number of bits indicated in its name (i.e. no padding bits) and must have 2's-complement representation. So `INT32_MIN`, if it is defined, must be exactly -2^31 (§7.20.2.1).
However, Posix *does* require the exact-width types `int{8,16,32}_t` (as well as the unsigned analogues), and also `int64_t` if such a type is provided by the implementation. In particular, `int64_t` is required if the "implementation supports the `_POSIX_V7_LP64_OFF64` programming environment and the application is being built in the `_POSIX_V7_LP64_OFF64` programming environment." (XBD, §13, `stdint.h`) (These requirements are marked as `CX`.)
Despite the fact that `int32_t` must exist, and therefore there must be some 2's complement type available, there is still no guarantee that `signed long` is 2's-complement, and even if it is, there is no guarantee that integer overflow wraps around rather than, for example, trapping.
Most relevant to the original question, though, is the fact that even if `signed long` is the same type as `int64_t` and even if signed integer overflow wraps around, the shell is not under any obligation to actually use `signed long` for arithmetic expansion. It could use any datatype "as long as it does not affect the results in cases where there is no overflow." (XSH, §2.6.4)
|
Convert a string containing a number in scientific notation to a double in PHP
I need help converting a string that contains a number in scientific notation to a double.
Example strings:
"1.8281e-009"
"2.3562e-007"
"0.911348"
I was thinking about just breaking the string into the number on the left and the exponent and then doing the math to generate the number; but is there a better/standard way to do this?
|
PHP is ~~typeless~~ dynamically typed, meaning it has to parse values to determine their types (recent versions of PHP have [type declarations](http://php.net/manual/en/functions.arguments.php#functions.arguments.type-declaration)).
In your case, you may simply perform a numerical operation to force PHP to consider the values as numbers (and it understands the scientific notation `x.yE-z`).
Try for instance
```
foreach (array("1.8281e-009","2.3562e-007","0.911348") as $a)
{
echo "String $a: Number: " . ($a + 1) . "\n";
}
```
just adding 1 (you could also subtract zero) will make the strings become numbers, with the right amount of decimals.
Result:
```
String 1.8281e-009: Number: 1.0000000018281
String 2.3562e-007: Number: 1.00000023562
String 0.911348: Number: 1.911348
```
You might also cast the result using `(float)`
```
$real = (float) "3.141592e-007";
```
|
Javascript regex & Japanese symbols
I use the `search()` method of the string object to find a match between a regular expression and a string.
It works fine for English words:
```
"google".search(/\bg/g) // return 0
```
But this code doesn't work for Japanese strings:
```
"アイスランド語".search(/\bア/g) // return -1
```
How can I change the regex to find a match between Japanese strings and a regular expression?
|
Sadly JavaScript's regex engine is "ASCII only". Unicode is not supported (I mean that non-ASCII Unicode characters aren't "divided into classes", so `\d` is only 0-9, and `\b` only recognizes boundaries of ASCII `\w` characters, for example). If you need advanced (Unicode-aware) regexes in JavaScript, you can try <http://xregexp.com/>
And we won't even delve into the problem of surrogate pairs. A character in JavaScript is a UTF-16 code unit, so it isn't always a "full" Unicode character. Fortunately Japanese should be entirely in the BMP (but note that some of the rarer unified Han ideographs are in Plane 2, so each of those characters takes two UTF-16 code units).
If you want to read something about Unicode, you could start from the Wiki [Mapping of Unicode characters](http://en.wikipedia.org/wiki/Mapping_of_Unicode_characters) for example.
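For this particular case, note that the match fails only because of `\b` (defined in terms of the ASCII-only `\w`); dropping it, or hand-rolling a boundary, already works (a sketch):
```
"アイスランド語".search(/ア/g)       // 0, works without \b
/(?:^|\s)ア/.test("アイスランド語")  // true, a manual "boundary": start of string or whitespace
```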
|
Selenium Expected Conditions - possible to use 'or'?
I'm using Selenium 2 / WebDriver with the Python API, as follows:
```
from selenium.webdriver.support import expected_conditions as EC
# code that causes an ajax query to be run
WebDriverWait(driver, 10).until( EC.presence_of_element_located( \
(By.CSS_SELECTOR, "div.some_result")));
```
I want to wait for **either** a result to be returned (`div.some_result`) **or** a "Not found" string. Is that possible? Kind of:
```
WebDriverWait(driver, 10).until( \
EC.presence_of_element_located( \
(By.CSS_SELECTOR, "div.some_result")) \
or
EC.presence_of_element_located( \
(By.CSS_SELECTOR, "div.no_result")) \
);
```
I realise I could do this with a CSS selector (`div.no_result, div.some_result`), but is there a way to do it using the Selenium expected conditions method?
|
I did it like this:
```
class AnyEc:
""" Use with WebDriverWait to combine expected_conditions
in an OR.
"""
def __init__(self, *args):
self.ecs = args
def __call__(self, driver):
for fn in self.ecs:
try:
res = fn(driver)
if res:
return True
# Or return res if you need the element found
except:
pass
```
Then call it like...
```
from selenium.webdriver.support import expected_conditions as EC
# ...
WebDriverWait(driver, 10).until( AnyEc(
EC.presence_of_element_located(
(By.CSS_SELECTOR, "div.some_result")),
EC.presence_of_element_located(
(By.CSS_SELECTOR, "div.no_result")) ))
```
Obviously it would be trivial to also implement an `AllEc` class likewise.
N.b. the `try:` block is odd. I was confused because some ECs return true/false while others throw `NoSuchElementException` for false. Exceptions are normally caught by WebDriverWait, so my AnyEc was producing odd results at first: the first EC to throw an exception meant AnyEc didn't proceed to the next test.
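For reference, the `AllEc` mentioned above could be sketched like this (a sketch, not from the original answer); it returns False on exception rather than propagating, mirroring AnyEc's swallow-and-continue approach:
```
class AllEc:
    """ Use with WebDriverWait to combine expected_conditions
    in an AND.
    """
    def __init__(self, *args):
        self.ecs = args
    def __call__(self, driver):
        try:
            # Every EC must return something truthy without raising
            return all(fn(driver) for fn in self.ecs)
        except Exception:
            return False
```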
|
Immersive Mode Android Studio
I want the game that I'm making to run in immersive mode, but Android Studio doesn't recognize the immersive mode flags because I set my minimum API to 16, and I know immersive mode was only added in KitKat, which is later. Is there any way to have my app run in immersive mode without changing my minimum API?
|
Yes, it is possible, though of course immersive mode will only work on devices with KitKat and higher. What is strange about your situation is that, based on your description, you cannot even reference these flags:
```
View.SYSTEM_UI_FLAG_HIDE_NAVIGATION | View.SYSTEM_UI_FLAG_FULLSCREEN | View.SYSTEM_UI_FLAG_LAYOUT_STABLE | View.SYSTEM_UI_FLAG_IMMERSIVE_STICKY | View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION;
```
(or some of them). If that is the case, it looks like your compileSdkVersion is lower than it should be. To start, I would advise you to update compileSdkVersion to 22 (and also make targetSdkVersion 22); you will find both in build.gradle.
Once you have done this, wherever you want to use immersive mode, wrap the flags in a version check that looks like this:
```
if (Build.VERSION.SDK_INT >= VERSION_CODES.KITKAT) {
int UI_OPTIONS = View.SYSTEM_UI_FLAG_HIDE_NAVIGATION | View.SYSTEM_UI_FLAG_FULLSCREEN | View.SYSTEM_UI_FLAG_LAYOUT_STABLE | View.SYSTEM_UI_FLAG_IMMERSIVE_STICKY | View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION;
getWindow().getDecorView().setSystemUiVisibility(UI_OPTIONS);
}
```
Then it will not cause problems on older OS versions.
(**UPDATE**: 2nd block of code was updated)
|
Array.Sort() sorts original array and not just copy
This code snippet is from C# 2010 for Dummies. What confuses me is that when using the Array.Sort() method, both my copy of the array (sortedNames) and the original array (planets) get sorted, even though it only calls the Sort method on sortedNames.
It doesn't matter which array the second foreach loop references, the output is the same.
```
static void Main(string[] args)
{
Console.WriteLine("The 5 planets closest to the sun, in order: ");
string[] planets = new string[] { "Mercury","Venus", "Earth", "Mars", "Jupiter"};
foreach (string planet in planets)
{
Console.WriteLine("\t" + planet);
}
Console.WriteLine("\nNow listed alphabetically: ");
string[] sortedNames = planets;
Array.Sort(sortedNames);
foreach (string planet in planets)
{
Console.WriteLine("\t" + planet);
}
}
```
|
Both `sortedNames` and `planets` refer to the same array. Basically both variables point to the same location in memory, so when you call `Array.Sort` on either variable, the changes to the array are reflected by both variables.
Since arrays in C# are *reference types*, both `sortedNames` and `planets` "point" to the same location in memory.
Contrast this with *value types*, which hold data within their own memory allocation, instead of pointing to another location in memory.
If you wanted to keep `planets` intact, you could create a brand new array, then use `Array.Copy` to fill the new array with the contents of `planets`:
```
/* Create a new array that's the same length as the one "planets" points to */
string[] sortedNames = new string[planets.Length];
/* Copy the elements of `planets` into `sortedNames` */
Array.Copy(planets, sortedNames, planets.Length);
/* Sort the new array instead of `planets` */
Array.Sort(sortedNames);
```
Or, using LINQ you could use `OrderBy` and `ToArray` to create a new, ordered array:
```
string[] sortedNames = planets.OrderBy(planet => planet).ToArray();
```
Some resources that might help with *value types* and *reference types*:
- [Value types and Reference Types (MSDN)](http://msdn.microsoft.com/en-us/library/4d43ts61%28v=vs.90%29.aspx)
- [What is the difference between a reference type and value type in c#?](https://stackoverflow.com/questions/5057267/what-is-the-difference-between-a-reference-type-and-value-type-in-c)
|
Rails form only show future dates in datetime select field
I have a form that I've made using simple\_form; one of the fields in the form is a datetime. I'd like to limit it so that users can only select dates after the current datetime, so they can't create records in the past.
I've experimented with doing this via validations, but I'd rather just limit the form options instead. I limited the hour range to what I wanted but can't find something similar for days.
The form for the event
```
<%= simple_form_for @event do |f| %>
<%-# Form input fields -%>
<%= f.input :name, label: 'Title' %>
<%= f.input :event_date, minute_step: 15, start_hour: 7, end_hour: 21 %>
<%= f.input :body %>
<%= f.submit 'Submit' , :class => "btn btn-primary" %>
<% end %>
```
|
**For Rails 4:**
For Rails 4, there is the `date_field` helper, which can accept both `min` and `max` values.
You can use this to set the desired value range, so you will have something like:
```
<%= f.date_field :event_date, min: Date.today %>
```
You can get more information about this from [the documentation here](http://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormHelper.html#method-i-date_field)
**Old answer:**
You can specify the `:start_year` option, as seen on [the documentation](http://api.rubyonrails.org/classes/ActionView/Helpers/DateHelper.html#M001698) here:
>
> :start\_year - Set the start year for the year select. Default is Date.today.year - 5 if you are creating new record. While editing existing record, :start\_year defaults to the current selected year minus 5.
>
>
>
So, in this case, what you will have to do is to specify the `:start_year` as the current year.
Hope this answers your question.
|
Spring Server/Client Sessions and Hibernate Sessions
I use Spring 3 and Hibernate 3.6 to develop a web application. I'm new to this and I'm wondering if I really understand how sessions work.
Is it correct, that the Sessions between Server and Client, identified by a session id, are different from hibernate sessions?
The session between Server and Client is always a HttpSession.(?) When is it created? When a User logs in or also when an anonymous user requests a page (which is not secured)?
Is there any connection between httpsession and hibernate-sessions? Are Hibernate Sessions created by a sessionfactory with no connection to a httpsession? I'm not sure to which session hibernate is referring with a command like this:
```
this.sessionFactory.getCurrentSession().save(object);
```
This `getCurrentSession()`: for how long is this hibernate session active? For the whole time a user is logged in? Or for just one transaction (which can include multiple data operations)?
I'm sorry for this question, which is maybe easy to answer, but most documentation is in English, and if English is not one's mother tongue, understanding is sometimes difficult (mainly because the word "session" is used so often).
thanks for helping me to understand this topic! :-)
|
>
> Is it correct, that the Sessions
> between Server and Client, identified
> by a session id, are different from
> hibernate sessions?
>
>
>
Yes, completely different.
**Reference:** (javax.servlet) [`HttpSession`](http://download.oracle.com/javaee/6/api/javax/servlet/http/HttpSession.html), (Hibernate) [`Session`](http://docs.jboss.org/hibernate/core/3.5/api/org/hibernate/Session.html)
>
> The session between Server and Client
> is always a HttpSession.(?) When is it
> created? When a User logs in or also
> when an anonymous user requests a page
> (which is not secured)?
>
>
>
**See** [Java EE Tutorial > Maintaining Client State](http://download.oracle.com/javaee/6/tutorial/doc/bnagm.html)
>
> is there any connection between
> httpsession and hibernate-sessions?
>
>
>
No, although an [`OpenSessionInViewFilter`](http://static.springsource.org/spring/docs/3.0.x/javadoc-api/index.html?org/springframework/orm/hibernate3/support/OpenSessionInViewFilter.html) can make sure there is a Hibernate `Session` available for every HTTP Request (One Hibernate Session per Request, not per Web Session).
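For reference, registering that filter typically looks something like this in `web.xml` (an illustrative sketch):
```
<filter>
    <filter-name>openSessionInView</filter-name>
    <filter-class>
        org.springframework.orm.hibernate3.support.OpenSessionInViewFilter
    </filter-class>
</filter>
<filter-mapping>
    <filter-name>openSessionInView</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```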
>
> Are Hibernate Sessions created by a
> sessionfactory with no connection to a
> httpsession?
>
>
>
Yes, usually.
>
> I'm not sure to which
> session hibernate is referring with a
> command like this:
> "this.sessionFactory.getCurrentSession().save(object);"
>
>
>
Hibernate `Session`
>
> this "getCurrentSession()": for how
> long is this hibernate session active?
> for the whole time a user is logged
> in? or for just one transaction (which
> can include multiple data-operations?)
>
>
>
**See** [Hibernate Reference > Session and Transaction Scopes](http://docs.jboss.org/hibernate/core/3.6/reference/en-US/html/transactions.html#transactions-basics)
|
Why can't a string be nil in Go?
The program [available on The Go Playground](https://play.golang.org/p/2gLydmLeHf) reads
```
package main
import "fmt"
func main() {
var name string = nil
fmt.Println(name)
}
```
and yields an error
```
prog.go:6: cannot use nil as type string in assignment
```
I understand [`""` is the "zero value" for strings](https://tour.golang.org/basics/12). I don't understand why I cannot assign `nil` to my `string`.
|
**The simple answer is that `nil` is not defined to be a valid value for type `string` in the [language specification](https://golang.org/ref/spec).**
...but maybe you want a longer answer?
`nil` is the zero value for pointers, interfaces, channels, slices, maps and function types, and it represents an *uninitialized* state.
Consider the following variable declarations:
```
var a *SomeType
var b interface{}
var c func()
```
It seems natural that all these variables would have a value that represents uninitialized state. `a` has been declared as a pointer, but what would it point to, when we haven't yet pointed it at anything? `nil` is an obvious zero value for these types.
As for channels, slices and maps, their zero value is `nil` for the simple reason that their *implementation* is such that they must be explicitly initialized before they can be used. This is mostly for performance reasons, these types are all represented internally as more or less complex data structures, and initializing them is not free.
However, a `string` doesn't require initialization, and it seems natural that the default, zero value for a new string variable would be an empty string, `""`. Therefore there's simply no reason for `nil` to be a valid string value, and adding it to the specification would only make the language more complicated and much more cumbersome to work with.
Furthermore, what would `nil` of type `string` *represent*? An empty string? We already have that. An uninitialized string? There's no such thing.
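If you genuinely need to distinguish "no string" from "empty string" (for instance when mapping nullable database columns), the usual idiom is a pointer to string, or a wrapper type such as `database/sql.NullString`. A minimal sketch:
```
package main

import "fmt"

func main() {
    var s string  // zero value is "", never nil
    var p *string // zero value is nil: a pointer can represent "no string"
    fmt.Println(s == "", p == nil) // prints: true true

    s = "hello"
    p = &s
    fmt.Println(*p) // prints: hello
}
```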
|
Using a context bound in a class type parameter
I was under the impression that context bounds would work only on methods:
```
trait Target[T]
class Post {
def pinTo[T : Target](t:T)
}
```
apparently context bounds can be used in `class` too (but not in `trait`):
```
trait Target[T]
class Post[T: Target] {
def pintTo[T](t:T)
}
```
Now I'm confused as to how the evidence can be provided to `Post`?
```
class Business
implicit object ev extends Target[Business] // is implicit necessary here ?
val p = new Post[Business] // ?? how do I provide ev ?
```
related to [Modeling a binary relationship between two types](https://stackoverflow.com/q/20438322/369489)
|
The `A: Foo` notation for context bounds is only a shortcut for asking for an implicit value parameter of type `Foo[A]`. Since traits do not have constructor value parameters, you *can not* use this with a trait:
```
trait Foo[A]
trait Bar[A: Foo] // "error: traits cannot have type parameters with context bounds..."
```
Whereas in classes it's possible:
```
class Bar[A: Foo] {
def foo: Foo[A] = implicitly[Foo[A]]
}
```
Which is just a different way of writing
```
class Bar[A](implicit foo: Foo[A])
```
You provide the evidence like you do in any other normal method call:
```
new Bar[Int]()(new Foo[Int] {}) // explicitly
```
Or:
```
implicit val iFoo = new Foo[Int] {}
new Bar[Int] // implicitly
```
|
What is the ratio distribution of a spacing and the sample mean?
Let $X\_1,\dots,X\_n$ be a sample of iid exponential random variables with mean $\beta$, and let $X\_{(1)},\dots,X\_{(n)}$ be the order statistics from this sample. Let $\bar X = \frac{1}{n}\sum\_{i=1}^n X\_i$.
Define spacings $$W\_i=X\_{(i+1)}-X\_{(i)}\ \forall\ 1 \leq i \leq n-1\,.$$ [It can be shown](http://www.stat.purdue.edu/~dasgupta/orderstats.pdf) that each $W\_i$ is also exponential, with mean $\beta\_i=\frac{\beta}{n-i}$.
**Question:** How would I go about finding $\mathbb{P}\left( \frac{W\_i}{\bar X} > t \right)$, where $t$ is known and non-negative?
**Attempt:** I know that this is equal to $1 - F\_{W\_i}\left(t \bar X\right)$. So I used the law of total probability like so:
$$
\mathbb{P}\left( W\_i > t \bar X \right) = 1 - F\_{W\_i}\left( t \bar X \right) = 1 - \int\_0^\infty F\_{W\_i}(ts)f\_{\bar X}(s) \mathrm{d}s \,,
$$
which turns into a messy but I think tractable integral.
Am I on the right track here? Is this a valid use of the Law of Total Probability?
Another approach might be to look at the difference distribution:
$$
\mathbb{P}\left( W\_i - t \bar X > 0\right)
$$
Or even break apart the sums:
$$
\mathbb{P}\left( W\_i - t \bar X > 0 \right) = \mathbb{P} \left( \left(X\_{(i+1)} - X\_{(i)}\right) - \frac{t}{n}\left(X\_{(1)} + \dots + X\_{(n)} \right) > 0 \right)
$$
A solution to the exponential case would be great, but even better would be some kind of general constraints on the distribution. Or at the very least, its moments, which would be enough to give me Chebyshev and Markov inequalities.
---
**Update:** here's the integral from the first method:
$$\begin{align}
1 - \int\_0^\infty \left( 1 - \exp \left( -\frac{ts}{\beta\_i} \right) \right) \left( \frac{1}{\Gamma(n)\beta^n} s^{n-1} \exp \left( -\frac{s}{\beta} \right) \right) \mathrm{d}s \\
1 - \int\_0^\infty \left( 1 - \exp \left( -\frac{(n-i)ts}{\beta} \right) \right) \left( \frac{1}{\Gamma(n)\beta^n} s^{n-1} \exp \left( -\frac{s}{\beta} \right) \right) \mathrm{d}s
\end{align}$$
I've been playing around with it for a little while and I'm not sure where to go with it.
|
The difficulty you have here is that you have an event relating non-independent random variables. The problem can be simplified and solved by manipulating the event so that it compares independent increments. To do this, we first note that for $X\_1, ..., X\_n \sim \text{IID Exp}(\beta)$, each of the order statistics can be written as:
$$X\_{(k)} = \beta \sum\_{i=1}^{k} \frac{Z\_i}{n-i+1},$$
where $Z\_1, Z\_2, ..., Z\_n \sim \text{IID Exp} (1)$ (see e.g., Renyi 1953, David and Nagaraja 2003). This allows us to write $W\_k = \beta Z\_{k+1} / (n-k)$ and we can write the sample mean as:
$$\begin{equation} \begin{aligned}
\bar{X} \equiv \frac{1}{n} \sum\_{k=1}^n X\_{(k)} &= \frac{\beta }{n} \sum\_{k=1}^n \sum\_{i=1}^k \frac{Z\_i}{n-i+1} \\
&= \frac{\beta }{n} \sum\_{i=1}^n \sum\_{k=i}^n \frac{Z\_i}{n-i+1} \\
&= \frac{\beta }{n} \sum\_{i=1}^n Z\_i.
\end{aligned} \end{equation}$$
To facilitate our analysis we define the quantity:
$$a \equiv \frac{t(n-k)}{n-t(n-k)}.$$
For $a > 0$ we then have:
$$\begin{equation} \begin{aligned}
\mathbb{P}(W\_k \geqslant t \bar{X})
&= \mathbb{P} \left( \frac{Z\_{k+1}}{n-k} \geqslant \frac{t}{n} \sum\_{i=1}^n Z\_i \right) \\
&= \mathbb{P} \left( \frac{n}{n-k} \cdot Z\_{k+1} \geqslant t \sum\_{i = 1}^n Z\_i \right) \\
&= \mathbb{P} \left( \left( \frac{n}{n-k} - t \right) Z\_{k+1} \geqslant t \sum\_{i \neq k+1} Z\_i \right) \\
&= \mathbb{P} \left( \left( \frac{n}{n-k} - t \right) Z \geqslant t G \right) = \mathbb{P} \left( Z \geqslant a G \right),
\end{aligned} \end{equation}$$
where $Z \sim \text{Exp} (1)$ and $G \sim \text{Ga}(n-1, 1)$ are independent random variables. For the trivial case where $t \geqslant n / (n-k)$ we have $\mathbb{P}(W\_k \geqslant t \bar{X}) = 0$. For the non-trivial case where $t < n / (n-k)$ we have $a>0$, and the probability of interest is:
$$\begin{equation} \begin{aligned}
\mathbb{P}(W\_k \geqslant t \bar{X}) &= \int\limits\_0^\infty \text{Ga} (g| n-1, 1 ) \int\limits\_{ag}^\infty \text{Exp}(z|1) dz dg \\
&= \int\limits\_0^\infty \frac{1}{\Gamma{(n-1)}} g^{n-2} \exp{(-g)} \int\limits\_{ag}^\infty \exp{(-z)} dz dg \\
&= \int\limits\_0^\infty \frac{1}{\Gamma{(n-1)}} g^{n-2} \exp{(-g)} \exp{(-ag)} dg \\
&= \int\limits\_0^\infty \frac{1}{\Gamma{(n-1)}} g^{n-2} \exp{(-(a+1)g)} dg \\
&= (a+1)^{-(n-1)} \\
&= \left( 1- \frac{n-k}{n} \cdot t \right)^{n-1}.
\end{aligned} \end{equation}$$
This answer is intuitively reasonable. This probability is strictly decreasing in $t$, with unit probability when $t=0$ and zero probability when $t = \frac{n}{n-k}$.
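As a quick sanity check on this closed form, a small Monte Carlo simulation (a sketch in Python/NumPy, not part of the original derivation) reproduces the probability:
```
import numpy as np

rng = np.random.default_rng(0)
n, k, t, beta = 10, 3, 0.5, 2.0
reps = 200_000

x = rng.exponential(beta, size=(reps, n))
x.sort(axis=1)
w_k = x[:, k] - x[:, k - 1]  # W_k = X_(k+1) - X_(k) with 0-based columns
empirical = np.mean(w_k >= t * x.mean(axis=1))
theoretical = (1 - (n - k) / n * t) ** (n - 1)
print(empirical, theoretical)  # both should be approximately 0.0207
```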
|
How do you parse large files in Groovy without exceeding heap size?
When parsing a large file I get the following error `Caught: java.lang.OutOfMemoryError: Java heap space`
How do you parse large files in Groovy without exceeding heap size?
example code that fails with large files...
```
import java.io.File
def inputFile = new File("c:/dev/test.txt")
inputFile.getText().eachLine{ it, i ->
... do something with each line
}
```
|
Ensure that you're iterating over the file in a way that doesn't load the whole file into memory...
- In this case specifically turn **inputFile.getText().eachLine** into **inputFile.eachLine**
- Don't use **.readLines()** as it will try to load the whole file into memory; **.eachLine{..}** should be used instead
- You can also extend the heap size with a JVM flag, e.g. to 1GB by using `groovy -Xmx1024M myscript.groovy`. See also the [answer here](https://stackoverflow.com/questions/5459317/how-do-i-increase-groovys-jvm-heap-size)
See this [page](http://groovy.329449.n5.nabble.com/Reading-the-contents-of-a-File-without-impacting-heap-space-Reading-the-file-into-memory-td4582259.html) on the groovy mailing list for more info and further discussion
Code that works without a heap space error...
```
def inputFile = new File("c:/dev/test.txt")
inputFile.eachLine{ it, i ->
... do something with each line
}
```
|
Shell can't find nginx
I try to run the command `nginx -V`, but the shell returns `fish: Unknown command “nginx”`.
If I try `sudo service nginx status`
I see `Usage: /etc/init.d/nginx {start|stop|restart|reload|force-reload}`
Why does `nginx -V` not work?
|
`/etc/init.d/nginx` is a shell script to `start/stop/...` the nginx service
```
$ file /etc/init.d/nginx
/etc/init.d/nginx: POSIX shell script, ASCII text executable
```
The `nginx` binary is different and is usually present in `/usr/sbin/`. Add that directory to your `PATH`.
```
$ file /usr/sbin/nginx
/usr/sbin/nginx: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=c14e988f9696733869b790f49e27a488b0aff4c2, stripped
```
Or view the file `/etc/init.d/nginx` to find out where nginx is installed.
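Since the error message (`fish: Unknown command`) suggests the fish shell, adding the directory could look like this (an illustrative sketch; paths may vary by distro):
```
# fish: persistently add /usr/sbin to the search path
set -U fish_user_paths /usr/sbin $fish_user_paths

# bash/zsh equivalent, e.g. in ~/.bashrc
export PATH="$PATH:/usr/sbin"
```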
|
UDP sockets in D Programming Language
I'm attempting to convert a python program to D; the program is for sending [Art-Net](http://en.wikipedia.org/wiki/Art-Net) DMX packets.
Python:
```
import sys, socket, math, time
from ctypes import *
class ArtNetDMXOut(LittleEndianStructure):
PORT = 0x1936
_fields_ = [("id", c_char * 8),
("opcode", c_ushort),
("protverh", c_ubyte),
("protver", c_ubyte),
("sequence", c_ubyte),
("physical", c_ubyte),
("universe", c_ushort),
("lengthhi", c_ubyte),
("length", c_ubyte),
("payload", c_ubyte * 512)]
def __init__(self):
self.id = b"Art-Net"
self.opcode = 0x5000
self.protver = 14
self.universe = 0
self.lengthhi = 2
def main():
hostIP = "localhost"
S = socket.socket(socket.AF_INET,socket.SOCK_DGRAM)
packet = ArtNetDMXOut()
packet.payload[0] = 255
S.sendto(packet, (hostIP, ArtNetDMXOut.PORT))
if __name__ == "__main__":
main()
```
D:
```
import std.stdio;
import std.socket;
class ArtNetDMX{
char id[8];
ushort opCode;
ubyte verH;
ubyte ver;
ubyte sequence;
ubyte physical;
ushort universe;
ubyte lengthHi;
ubyte length;
ubyte data[511];
this(){
this.id = "ART-NET0";
this.opCode = 0x5000;
this.ver = 14;
this.universe = 0;
this.lengthHi = 2;
}
}
void main() {
auto s = new UdpSocket();
auto addr = new InternetAddress("localhost", 6454);
s.bind(addr);
ArtNetDMX packet = new ArtNetDMX();
packet.data[0] = 255;
s.send(packet);
};
```
My Python code works exactly as intended, but in D I get the error `function std.socket.Socket.send (const(void)[] buf, SocketFlags flags) is not callable using argument types (ArtNetDMX)` at the `s.send(packet);` line.
Am I approaching this correctly? What am I doing wrong?
|
For starters, you'll want `ArtNetDMX` to be a `struct`, not a `class` - since classes in D are reference types and there are no guarantees about class field layout; and since you're sending it over the wire, specify the appropriate alignment (usually 1 if every field is to be packed together):
```
struct ArtNetDMX {
align(1):
....
}
```
In your `main`, you can now allocate an instance of it on the stack:
```
ArtNetDMX packet; // no `new` required
packet.id = "ART-NET0"; // initialize fields
packet.opCode = 0x5000;
packet.ver = 14;
packet.universe = 0;
packet.lengthHi = 2;
```
Or if you perform the same initialization a lot, move it into a function:
```
ArtNetDMX createArtNetDMX()
{
ArtNetDMX packet;
packet.id = "ART-NET0";
packet.opCode = 0x5000;
packet.ver = 14;
packet.universe = 0;
packet.lengthHi = 2;
return packet;
}
```
Finally, `Socket.send` requires its parameter to be a slice (an array or part of). If you end up sending multiple packets at a time, you'd put your packets into an array, then just send the array.
Since you're only sending one packet, you can replace
```
s.send(packet);
```
with
```
s.send((&packet)[0..1]);
```
which is just a safe way to convert an object to a 1-element slice.
|
Have multiple 'open with' applications in context menu
I work with a lot of csv files and I open them with either gedit or libreoffice at different times. I would like to have them both in the context menu to save time.
In the past, there was a sub-menu where I could select "open with" and it listed other applications without opening a second menu. The new behaviour wastes several useful seconds.
Is there an option to bring this context menu entry back? Or any hacks to get a similar behaviour?
|
I don't think you can bring the old behaviour back without adapting source code. However, clicking-wise, the current behaviour is not that bad. As before, you need three clicks to launch a file/document with another application. The only difference is that the last step is a double-click rather than a single click. That is, rather than clicking the application and then the "Select" button, you can double-click the application.
The first time, the "Recommended Applications" dialog will be empty. However, that list gets populated with the applications you designate. So in your case, "Text Editor" (gedit) and "Libreoffice Writer" will make it to that list. You select the program with a double click, whereas with the previous approach of the submenu, it would have been a single click.
Personally, I like the interface better, because the program icons are presented in a larger size and the dialog is not prone to closing when you hover the mouse a bit in the wrong direction.
Alternatively, you may work around this with a [nautilus script](https://askubuntu.com/questions/281062/how-to-get-nautilus-scripts-working-nautilus), but this is not sensitive to the file that you selected.
As a third option, you can configure fully context sensitive right-click menu items with the third party application [nautilus-actions](https://askubuntu.com/questions/1030940/nautilus-actions-in-18-04). Installation may nowadays be less straightforward, and if you get it working properly, you will face some learning curve. It is powerful but also a bit complex.
|
Can I use Selenium WebDriver with Google Cloud Functions?
I am trying to build a solution with Selenium. Can I use Firebase Functions to initialize and load webpages with Selenium?
I have found some resources that state no; however, they don't give any source and they are 4 years old:
- [Using Selenium from within Cloud Functions for Firebase](https://stackoverflow.com/questions/45968756/using-selenium-from-within-cloud-functions-for-firebase)
- [Google Firebase Functions: webdriver.io get source code of a html-website](https://stackoverflow.com/questions/44130594/google-firebase-functions-webdriver-io-get-source-code-of-a-html-website)
How or where can I check if this is still the case?
|
You can't currently use Python to run Selenium scripts in Google Cloud Functions. There's a currently open Feature Request in the Public Issue Tracker, which can be found [here](https://issuetracker.google.com/129757153).
For the Node.js runtime for your Cloud Functions, you could use [puppeteer](https://www.npmjs.com/package/puppeteer) which includes headless Chrome. I found this [blog](https://rominirani.com/using-puppeteer-in-google-cloud-functions-809a14856e14) post that details a use-case.
Or, if you are ready to trade Cloud Functions for another service, you can get it working in Python on App Engine Flex or Cloud Run:
- [Python Headless Browser for GAE](https://stackoverflow.com/a/51427118/15803365)
- [Python headless Chrome with Cloud Run](https://dev.to/googlecloud/using-headless-chrome-with-cloud-run-3fdp)
I also found this [GitHub link](https://github.com/ccorcos/gcloud-functions-selenium-boilerplate), which claims to run Selenium on Google Cloud Functions. If you're fine with a JavaScript/TypeScript example instead of Python, I recommend that you try this template. However, I have not tested it.
|
Copying many files without stopping on errors on OSX
I need to copy several GB from an external HD to my main HD, and some files will cause errors. If I do this with the Finder, it will stop on the first error.
Is there a way to copy everything regardless of the errors? Something like [Teracopy](https://www.codesector.com/teracopy) on Windows?
|
In terminal, type in
```
cp -Rp /Volumes/<source-volume>/<source-user>/<source-folder>/ /Volumes/<destination-volume>/<destination-folder>/
```
Destination folder should be a new folder you are creating.
If you get info on the new folder after running this you can see the folder size increase.
Example
```
cp -Rp /Volumes/Macintosh HD/User/Bob/Desktop/stufftocopy/ /Volumes/external/newfolder
```
It will copy and display errors for anything it can't copy but without stopping.
\*If your directory names contain spaces, put quotes around the path
Example
```
cp -Rp "/Volumes/Macintosh HD/User/Bob/Desktop/stufftocopy/" /Volumes/external/newfolder
```
|
How to correctly call queryStringParameters for AWS Lambda + API Gateway?
I'm following a tutorial on setting up AWS API Gateway with a Lambda Function to create a restful API. I have the following code:
```
import json
def lambda_handler(event, context):
# 1. Parse query string parameters
transactionId = event['queryStringParameters']['transactionid']
transactionType = event['queryStringParameters']['type']
transactionAmounts = event['queryStringParameters']['amount']
# 2. Construct the body of the response object
transactionResponse = {}
# returning values originally passed in then add separate field at the bottom
transactionResponse['transactionid'] = transactionId
transactionResponse['type'] = transactionType
transactionResponse['amount'] = transactionAmounts
transactionResponse['message'] = 'hello from lambda land'
# 3. Construct http response object
responseObject = {}
responseObject['StatusCode'] = 200
responseObject['headers'] = {}
responseObject['headers']['Content-Type'] = 'application/json'
responseObject['body'] = json.dumps(transactionResponse)
# 4. Return the response object
return responseObject
```
When I link the API Gateway to this function and try to call it using query parameters I get the error:
```
{
"message":"Internal server error"
}
```
When I test the lambda function it returns the error:
```
{
"errorMessage": "'transactionid'",
"errorType": "KeyError",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 5, in lambda_handler\n transactionId = event['queryStringParameters']['transactionid']\n"
  ]
}
```
Does anybody have any idea of what's going on here/how to get it to work?
|
I recommend adding a couple of diagnostics, as follows:
```
import json
def lambda_handler(event, context):
print('event:', json.dumps(event))
print('queryStringParameters:', json.dumps(event['queryStringParameters']))
transactionId = event['queryStringParameters']['transactionid']
transactionType = event['queryStringParameters']['type']
transactionAmounts = event['queryStringParameters']['amount']
    # remainder of code ...
```
That way you can see what is in `event` and `event['queryStringParameters']` to be sure that it matches what you expected to see. These will be logged in CloudWatch Logs (and you can see them in the AWS Lambda console if you are testing events using the console).
In your case, it turns out that your test event included `transactionId` when your code expected to see `transactionid` (different spelling). Hence the KeyError exception.
|
how to get the line number of an error from exec or execfile in Python
Let's say I have the following multi-line string:
```
cmd = """
a = 1 + 1
b = [
2 + 2,
4 + 4,
]
bork bork bork
"""
```
and I want to execute it in a particular scope:
```
scope = {}
exec( cmd, scope )
print scope[ 'b' ]
```
There's a `SyntaxError` at line 7 of the command (the `bork bork bork` line), and I want to be able to report that to the user. How do I get the line number? I've tried this:
```
try:
exec( cmd, scope ) # <-- let's say this is on line 123 of the source file
except Exception, err:
a, b, c = sys.exc_info()
line_number = c.tb_lineno # <-- this gets me 123, not 6
print "%s at line %d (%s)" % ( a, line_number, b.message )
```
...but I get the line number of the `exec` statement, not the line number within the multi-line command.
Update: it turns out the handling of the type of exception that I arbitrarily chose for this example, the `SyntaxError`, is different from the handling of any other type. To clarify, I'm looking a solution that copes with any kind of exception.
|
For syntax errors, the source line number is available as the `lineno` attribute on the exception object itself, in your case stored in `err`. This is specific to syntax errors, where the line number is an integral part of the error:
```
>>> cmd = """
... 1 \ +
... 2 * "
... """
>>> try:
... exec cmd
... except SyntaxError as err:
... print err.lineno
...
2
```
If you want to also handle other errors, add a new `except` block `except Exception, err`, and use the `traceback` module to compute the line number for the runtime error.
```
import sys
import traceback
class InterpreterError(Exception): pass
def my_exec(cmd, globals=None, locals=None, description='source string'):
try:
exec(cmd, globals, locals)
except SyntaxError as err:
error_class = err.__class__.__name__
detail = err.args[0]
line_number = err.lineno
except Exception as err:
error_class = err.__class__.__name__
detail = err.args[0]
cl, exc, tb = sys.exc_info()
line_number = traceback.extract_tb(tb)[-1][1]
else:
return
raise InterpreterError("%s at line %d of %s: %s" % (error_class, line_number, description, detail))
```
Examples:
```
>>> my_exec("1+1") # no exception
>>>
>>> my_exec("1+1\nbork")
...
InterpreterError: NameError at line 2 of source string: name 'bork' is not defined
>>>
>>> my_exec("1+1\nbork bork bork")
...
InterpreterError: SyntaxError at line 2 of source string: invalid syntax
>>>
>>> my_exec("1+1\n'''")
...
InterpreterError: SyntaxError at line 2 of source string: EOF while scanning triple-quoted string
```
|
Turning live() into on() in jQuery
My application has dynamically added Dropdowns. The user can add as many as they need to.
I was traditionally using jQuery's `live()` method to detect when one of these Dropdowns was `change()`ed:
```
$('select[name^="income_type_"]').live('change', function() {
alert($(this).val());
});
```
As of jQuery 1.7, I've updated this to:
```
$('select[name^="income_type_"]').on('change', function() {
alert($(this).val());
});
```
Looking at the Docs, that should be perfectly valid (right?) - but the event handler never fires. Of course, I've confirmed jQuery 1.7 is loaded and running, etc. There are no errors in the error log.
What am I doing wrong? Thanks!
|
The [`on` documentation](http://api.jquery.com/on/) states (in bold ;)):
>
> Event handlers are bound only to the currently selected elements; they must exist on the page at the time your code makes the call to `.on()`.
>
>
>
Equivalent to `.live()` would be something like
```
$(document.body).on('change', 'select[name^="income_type_"]', function() {
alert($(this).val());
});
```
Although it is better if you bind the event handler as close as possible to the elements, that is, to an element closer in the hierarchy.
**Update:** While answering another question, I found out that this is also mentioned in the [`.live` documentation](http://api.jquery.com/live/):
>
> Rewriting the `.live()` method in terms of its successors is straightforward; these are templates for equivalent calls for all three event attachment methods:
>
>
>
> ```
> $(selector).live(events, data, handler); // jQuery 1.3+
> $(document).delegate(selector, events, data, handler); // jQuery 1.4.3+
> $(document).on(events, selector, data, handler); // jQuery 1.7+
>
> ```
>
>
|
How to get step name on checkout page in magento 2?
I added a custom block to "Order Summary" on the checkout page. It displays on both steps: shipping and payment. But I need to know the current step. How can I get it in this block?
|
You can use the `Magento_Checkout/js/view/progress-bar` component or the `Magento_Checkout/js/model/step-navigator` component (used by progress-bar) inside your custom block template/component.
Using progress-bar:
```
var registry = require('uiRegistry'),
progressBar = registry.get('index = progressBar'),
firstStep = progressBar.steps()[0];
progressBar.isProcessed(firstStep); // returns bool, shipping step by default
```
![](https://i.stack.imgur.com/lFBfx.png)
Using step-navigator:
```
var stepnav = require('Magento_Checkout/js/model/step-navigator');
stepnav.getActiveItemIndex() // returns int - 0 for shipping & 1 for payment by default
```
or:
```
var registry = require('Magento_Checkout/js/model/step-navigator');
registry.isProcessed('shipping'); // returns bool, you are on shipping step
```
Note: the second step name is *payment* by default
**PS:** inside your own component you can add the `Magento_Checkout/js/view/progress-bar` or `Magento_Checkout/js/model/step-navigator` to the define list:
```
define(
[
'Magento_Checkout/js/model/step-navigator'
],
function (
stepNavigator
) {
// do something with stepNavigator
}
);
```
|
Why not use ResourceBundle instead of Properties?
This is an easy question to which I can't find a concluding answer.
I can load string properties (e.g.: a query for a prepared statement) from a `config.properties` file. Let's say I want to take the database connection to which to connect.
If I want to take this information from the file, I could do just the following in a class:
```
private static final ResourceBundle BUNDLE = ResourceBundle.getBundle("scheduler");
private static final String DRIVER = BUNDLE.getString("bd.driver");
private static final String CONNECTIONURL =BUNDLE.getString("bd.url");
....
```
But instead I've seen that many people recommend using `Properties`. Then I would have to do the same with something like this (if I want to keep the class static and not have a proper constructor):
```
static {
    prop = new Properties();
    try {
        prop.load(ReportsDB.class.getClassLoader().getResourceAsStream("config.properties"));
    } catch (IOException ex) {
        Logger.getLogger(ReportsDB.class.getName()).log(Level.SEVERE, null, ex);
        throw new RuntimeException(ex);
    }
}
private static final String DRIVER = prop.getProperty("bd.driver");
private static final String CONNECTIONURL = prop.getProperty("bd.url");
```
So, why shouldn’t I use the `ResourceBundle` instead of `Properties` when the second one is more verbose?
|
>
> So, why shouldn’t I use the ResourceBundle instead of Properties when the second one is more verbose?
>
>
>
Because that's not what `ResourceBundle` is for. The [description of the class](http://docs.oracle.com/javase/7/docs/api/java/util/ResourceBundle.html) starts with:
>
> Resource bundles contain locale-specific objects. When your program needs a locale-specific resource, a String for example, your program can load it from the resource bundle that is appropriate for the current user's locale. In this way, you can write program code that is largely independent of the user's locale isolating most, if not all, of the locale-specific information in resource bundles.
>
>
>
Does any of this sound like your use case? I don't think so.
It sounds like the problem is *purely* the verbosity of loading a properties file: so write a utility method to do that. Then your code can be simply:
```
private static final Properties CONFIGURATION = PropertyUtil.load("scheduler.properties");
private static final String DRIVER = CONFIGURATION.getProperty("bd.driver");
private static final String CONNECTIONURL = CONFIGURATION.getProperty("bd.url");
```
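A minimal sketch of such a utility (the `PropertyUtil` name is just the hypothetical one used above; adapt the error handling to your needs):

```
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class PropertyUtil {
    private PropertyUtil() {}

    /** Loads a properties file from the classpath, wrapping checked exceptions. */
    public static Properties load(String resourceName) {
        Properties props = new Properties();
        try (InputStream in = PropertyUtil.class.getClassLoader()
                                  .getResourceAsStream(resourceName)) {
            if (in == null) {
                throw new IllegalArgumentException("Resource not found: " + resourceName);
            }
            props.load(in);
            return props;
        } catch (IOException ex) {
            throw new RuntimeException(ex);
        }
    }
}
```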
Admittedly I'm not keen on having static field initializers in an order-dependent way like that... I'd be tempted to encapsulate all of the configuration in a separate class, so you could write:
```
private static final SchedulerConfiguration CONFIG =
SchedulerConfiguration.load("scheduler.properties");
```
then use `CONFIG.getDriver()` etc which could fetch from the properties each time, or use a field, or whatever.
|
How to capture Chef::Log.info in kitchen test
When I run
```
kitchen test
```
or
```
kitchen test --log-level info
```
No logs that I have in my recipe under test, i.e.
```
Chef::Log.info("How to make appear in kitchen output?")
```
are displayed in kitchen's output to the console. Any way to make this happen?
driver: vagrant
provisioner: chef-solo
Thanks,
|
Update: Martin's answer is no longer true as of version 1.7.0 of Test Kitchen (See [pull request #950](https://github.com/test-kitchen/test-kitchen/pull/950)).
According to the [Dynamic Configuration doc](http://kitchen.ci/docs/getting-started/dynamic-configuration), "Since Kitchen 1.7.0 the log level for the provisioner is no longer related to the Kitchen log level."
It gives the following example of setting the `log_level` in *.kitchen.yml*:
```
provisioner:
  name: chef-zero
  log_level: <%= ENV['CHEF_LOG_LEVEL'] || auto %>
```
My tests confirm that:
- `Chef::Log.debug` calls *aren't* logged when simply running `kitchen converge -l debug`.
- `Chef::Log.debug` calls *are* logged after setting `log_level: debug` in *.kitchen.yml*.
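Given that ERB snippet in *.kitchen.yml*, the provisioner log level can then be driven per run from the shell:

```
CHEF_LOG_LEVEL=debug kitchen converge
```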
|
"Element not closed" error after upgrading from MVC3 to MVC4
Razor 2 (which ships with MVC4) doesn't seem to be fully backwards compatible with Razor 1 (from MVC3).
Since upgrading, I found an error:
>
> The "[email protected](count" element was not closed. All elements must be either self-closing or have a matching end tag.
>
>
>
The code that caused this was:
```
<td@Html.Raw(count == null ? null : " class='has-item'")>
```
What is the solution to this?
|
The Razor parser was re-written for MVC 4, [presumably because](http://www.sebnilsson.com/blog/2012/5/21/aspnet-mvc-4-razor-v2-new-features.html):
>
> The HTML5-specs clearly states that unclosed HTML-tags are supported, but Razor v1 didn't have an advanced enough parser to support this. Razor v2 now supports this with the elements listed in W3C's spec.
>
>
>
The simplest solution here is to add a single space before the `@` symbol:
```
<td @Html.Raw(count == null ? null : " class='has-item'")>
```
However, conditional attributes in Razor with MVC 4 have a more elegant syntax.
```
<td class="@(count == null ? null : "has-item")">
```
When an attribute value resolves to `null`, the attribute is omitted from the element. So the output of this markup is either:
```
<td>
```
...or...
```
<td class="has-item">
```
|
Asp.net mvc 301 redirect from www.domain.com to domain.com
We have a website at domain.com, which is also accessible via a CNAME entry for www.domain.com that points back to domain.com. We'd like all visitors to www.domain.com to be redirected to domain.com using a 301 redirect. What's the best way to implement this in asp.net mvc? In global.asax?
|
I accept that doing this at application level is non-desirable as per the comments to the question.
**Installing the HTTP Redirect feature in IIS7 is the best way to do this.**
In our case, other constraints force us to do this at application level.
Here is the code that we use in global.asax to perform the redirect:
```
private static readonly Regex wwwRegex =
new Regex(@"www\.(?<mainDomain>.*)",
RegexOptions.Compiled
| RegexOptions.IgnoreCase
| RegexOptions.Singleline);
protected void Application_BeginRequest(Object sender, EventArgs e)
{
string hostName = Request.Headers["x-forwarded-host"];
hostName = string.IsNullOrEmpty(hostName) ? Request.Url.Host : hostName;
Match match = wwwRegex.Match(hostName);
if (match.Success)
{
string mainDomain = match.Groups["mainDomain"].Value;
var builder=new UriBuilder(Request.Url)
{
Host = mainDomain
};
string redirectUrl = builder.Uri.ToString();
Response.Clear();
Response.StatusCode = 301;
Response.StatusDescription = "Moved Permanently";
Response.AddHeader("Location", redirectUrl);
Response.End();
}
}
```
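For reference, the IIS-level equivalent is a rule like the following in *web.config* (a sketch, assuming the IIS URL Rewrite module is installed):

```
<system.webServer>
  <rewrite>
    <rules>
      <!-- 301 any www.* host to the bare domain, keeping path and query -->
      <rule name="Strip www" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^www\.(.+)$" />
        </conditions>
        <action type="Redirect" url="http://{C:1}/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```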
|
Is Rust using region-based memory management?
For instance, if we forget about return value optimization, and have the following code:
```
fn func() -> Box<String> {
Box::new(String::new())
}
fn main() {
let boxed_string = func();
}
```
will the compiler create regions to manage the memory or will it use normal heap allocation?
I know this simple code might be optimized, but in a more complex code, it might not be the case.
|
While lifetimes can arguably be called "regions" in the same sense as in region-based memory management, Rust does *not* automatically manage memory based on them. Lifetimes are only used for static analysis. Memory is allocated in the normal ways: registers, stack, heap (some C-style `malloc` function), and possibly other abstractions like memory pools, if implemented manually (as in the `typed-arena` crate). This perfectly ordinary memory management is then analyzed by the compiler, using the concept of regions, but that doesn't affect the run time behavior at all. In fact, lifetimes are erased from the program before the machine code is even generated.
However, there might be additional *moves* in your code. `Box::new` is an ordinary function into which the argument is moved, and likewise `String::new`'s return might involve a move.
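To illustrate the manual-pool option mentioned above, here is a tiny sketch using the `typed-arena` crate (assumed as a dependency in Cargo.toml); the arena's memory is freed by ordinary ownership when the arena value drops, not by any compiler-managed region:

```
use typed_arena::Arena;

fn main() {
    let arena: Arena<String> = Arena::new();
    // Allocations borrow from the arena and live exactly as long as it does.
    let s: &String = arena.alloc(String::from("hello"));
    println!("{}", s);
} // `arena` drops here, freeing every allocation at once
```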
|
Find which lines in a file contain certain characters
Is there a way to find out if a string contains any one of the characters in a set with python?
It's straightforward to do it with a single character, but I need to check and see if a string contains any one of a set of bad characters.
Specifically, suppose I have a string:
```
s = 'amanaplanacanalpanama~012345'
```
and I want to see if the string contains any vowels:
```
bad_chars = 'aeiou'
```
and do this in a for loop for each line in a file:
```
if [any one or more of the bad_chars] in s:
do something
```
I am scanning a large file so if there is a faster method to this, that would be ideal. Also, not every bad character has to be checked---so long as one is encountered that is enough to end the search.
I'm not sure if there is a builtin function or easy way to implement this, but I haven't come across anything yet. Any pointers would be much appreciated!
|
```
any((c in badChars) for c in yourString)
```
or
```
any((c in yourString) for c in badChars) # extensionally equivalent, slower
```
or
```
set(yourString) & set(badChars) # extensionally equivalent, slower
```
"so long as one is encountered that is enough to end the search." - This will be true if you use the first method.
You say you are concerned with performance: performance should not be an issue unless you are dealing with a huge amount of data. If you encounter issues, you can try:
---
*Regexes*
**edit** Previously I had written a section here on using regexes, via the `re` module, programatically generating a regex that consisted of a single character-class `[...]` and using `.finditer`, with the caveat that putting a simple backslash before everything might not work correctly. Indeed, after testing it, that is the case, and I would definitely not recommend this method. Using this would require reverse engineering the entire (slightly complex) sub-grammar of regex character classes (e.g. you might have characters like `\` followed by `w`, like `]` or `[`, or like `-`, and merely escaping some like `\w` may give it a new meaning).
---
*Sets*
Depending on whether the `str.__contains__` operation is O(1) or O(N), it may be justifiable to first convert your text/lines into a set to ensure the `in` operation is O(1), if you have many badChars:
```
badCharSet = set(badChars)
any((c in badCharSet) for c in yourString)
```
(it may be possible to make that a one-liner `any((c in set(yourString)) for c in badChars)`, depending on how smart the python compiler is)
---
*Do you really need to do this line-by-line?*
It may be faster to do this once for the entire file O(#badchars), than once for every line in the file O(#lines\*#badchars), though the asymptotic constants may be such that it won't matter.
|
Array of Object Array (2d arrays) JNI
I'm struggling with creating a 2d array of my custom object type ShareStruct:
```
jobjectArray ret ;
jobjectArray ins ;
jobjectArray outs;
jclass myClass = (*env)->FindClass(env,"org/apache/s4/core/ShareStruct");
if (myClass==NULL) fprintf(stderr, "Class ShareStruct not found");
jclass myClassArray = (*env)->FindClass(env, "[Lorg/apache/s4/core/ShareStruct");
if (myClassArray==NULL) fprintf(stderr, "Class ShareStruct[] not found");
ins = (*env)->NewObjectArray(env, in, myClass, NULL);
outs = (*env)->NewObjectArray(env, out, myClass, NULL);
ret = (*env)->NewObjectArray(env, 2, myClassArray, NULL);
```
The first class loading works (the ShareStruct is fine), but the other one (trying to load a ShareStruct[] class) doesn't. I've tried both with and without the L but no luck. Any ideas? I'm new with JNI.
Thanks!
|
This `jclass myClassArray = (*env)->FindClass(env, "[Lorg/apache/s4/core/ShareStruct");` is wrong: an array class descriptor needs a trailing semicolon, i.e. `"[Lorg/apache/s4/core/ShareStruct;"`. You don't actually need the array class to create the array, though.
To create the array do something like
```
ret = (*env)->NewObjectArray(env,sizeOfArray,myClass,NULL);
(*env)->SetObjectArrayElement( env, ret,index, sharedStructObj);
```
Here sharedStructObj will have to be created by newObject.
Section 3.3.5 of [JNI programmer's guide](http://java.sun.com/docs/books/jni/download/jni.pdf) has a good related example
This is also nice [Create, populate and return 2D String array from native code (JNI/NDK)](https://stackoverflow.com/questions/6070679/create-populate-and-return-2d-string-array-from-native-code-jni-ndk)
**EDIT based on comment**
```
in = (*env)->NewObjectArray(env,sizeOfArray,myClass,NULL);
out = (*env)->NewObjectArray(env,sizeOfArray,myClass,NULL);
ret= (*env)->NewObjectArray(env,sizeOfArray,myClass,NULL);
(*env)->SetObjectArrayElement( env, ret,0, in);
(*env)->SetObjectArrayElement( env, ret,1, out);
```
|
AS3 Casting one type to another
I have a base class called `Room` and a subclass called `Attic`, and another called `Basement`.
I have a controller class that has an attribute called `CurrentLocation` which is type `Room`. The idea is I want to be able to put `Attic` or `Basement` in that property and get it back, then cast that to whatever type it is.
So if on the controller the content is of type `Attic`, I'm trying to figure out how to explicitly cast it. I thought I knew but its not working... Here's what I thought it would be, borrowing from Java:
```
var myAttic:Attic = (Attic) Controller.CurrentLocation;
```
This gives me a syntax error:
>
> 1086: Syntax error: expecting semicolon before instance.
>
>
>
So how do you cast implicitly? Or can you? I could swear I've done this before as as3.
|
Here are your options for casting in ActionScript 3:
1. Use [`as`](http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/operators.html#as).
```
var myAttic:Attic = Controller.CurrentLocation as Attic; // Assignment.
(Controller.CurrentLocation as Attic).propertyOrMethod(); // In-line use.
```
This will assign `null` to `myAttic` if the cast fails.
2. Wrap in `Type()`.
```
var myAttic:Attic = Attic(Controller.CurrentLocation); // Assignment.
Attic(Controller.CurrentLocation).propertyOrMethod(); // In-line use.
```
This throws a [`TypeError`](http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/TypeError.html) if the cast fails.
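If you are not sure of the runtime type, you can also guard the cast with the `is` operator first:

```
if (Controller.CurrentLocation is Attic) {
    var myAttic:Attic = Controller.CurrentLocation as Attic;
    // safe to use myAttic here
}
```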
|
Bad base path when browser refreshes, ngView breaks
When I access my page from the index and start browsing, everything works fine; but when I am on a route other than `/`, for example `/details/123`, and I refresh the page (I have URL rewriting configured), the route is not properly set.
That is, when I check the location path while browsing normally from the index, being on `/details/123` gives a location path of `/details/123` as expected; but when I refresh the page while still on `/details/123`, the location path changes to `/123`, causing ngView to display the wrong view.
I am using html5 mode and Angular v.1.1.5
**UPDATE:** I created a simple example [here](http://routing-test.herokuapp.com/) to illustrate the problem.
I don't have any special setup, and I don't think it is a server issue. I have the same problem with a different app in Python where the redirection is done inside the application.
The `.htaccess`:
```
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*) /index.php
</IfModule>
```
|
This may be related to a nasty issue that showed up during Angular 1.1.5 and is a bug in the core library. A solution that has worked for many is to add the following tag to your index.html head.
If your application is running at the root of your domain.
```
<head>
...
<base href="/"></base>
...
</head>
```
Or if your application is running in a subdirectory, specify that subdirectory (e.g. 'myapp'):
```
<head>
...
<base href="/myapp/"></base>
...
</head>
```
Additionally, you may also try a new set of rewrite rules. This configuration has worked for many ui-router users.
```
<IfModule mod_rewrite.c>
RewriteEngine on
# Don't rewrite files or directories
RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^ - [L]
# Rewrite everything else to index.html to allow html5 state links
RewriteRule ^ index.html [L]
</IfModule>
```
|
Is it safe to return a const char \* from a function that use a static std::string internally?
I am looking at the following code (simplified) and ask myself how safe it is to use this `returnMsg` function:
```
#include <iostream>
using namespace std;
const char *returnMsg(const char *msg)
{
static std::string message;
message = msg;
return message.c_str();
}
int main(int argc, char *argv[])
{
const char *msg1 = returnMsg("Hello world");
printf("msg1 = %p\n", msg1);
cout << msg1 << endl;
const char *msg2 = returnMsg("Good bye");
printf("msg2 = %p\n", msg2);
cout << msg2 << endl;
cout << msg1 << endl;
return 0;
}
```
the output is :
```
msg1 = 0x23a6028
Hello world
msg2 = 0x23a6028
Good bye
Good bye
```
`msg2` is written twice and it's what I expected, since the static message variable remains in memory during the lifetime of the program and there is no memory reallocation, so what is written at `msg1`'s address is replaced by the new content of `msg2`.
However, if the size of `msg2` is bigger, there is an internal reallocation within the `std::string message` variable, and the output is:
```
msg1 = 0x1cc6028
Hello world
msg2 = 0x1cc6058
Good bye looooooooooooooooooooooooooooooooooooooooong
Hello world
```
but I guess that there is no guarantee that `msg1` address will not be reused in the future, and so a new access to `msg1` content might eventually display something different and not coherent.
Does this function need to be written differently to make it possible to use it without the limitations shown above?
|
>
> Is it safe to return a const char \* from a function that use a static std::string internally?
>
>
>
Yes, that is safe.
But it is not safe to use that pointer after it has been invalidated, which is what the shown program does. The pointer will be invalidated by an assignment in a successive call to that function, if it reallocates. So, the pointer is safe only until the next call to the function (that would cause reallocation).
>
> Does this function need to be written differently to make it possible to use it without the limitations shown above?
>
>
>
The function has the described limitations, so of course it has to be written differently to not have those limitations.
The core problem of your approach is that you only have one static string, but want to store multiple strings while not throwing away any of the earlier ones. So, it seems that you need a whole bunch of static strings:
```
const char *returnMsg(const char *msg)
{
static std::forward_list<std::string> messages;
messages.emplace_front(msg);
return messages.front().c_str();
}
```
While this works as you would expect, it is silly. Consider whether you really want to store all strings for the rest of the execution. If not, then static storage is not a solution.
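In that case, the usual fix is to drop the `const char *` return type altogether and return a `std::string` by value, so each caller owns its own copy:

```
#include <string>

std::string returnMsg(const char *msg)
{
    return std::string(msg);
}
```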
|
How to use get\_operation\_by\_name() in tensorflow, from a graph built from a different function?
I'd like to build a tensorflow graph in a separate function `get_graph()`, and to print out a simple op `a` in the main function. It turns out that I can print out the value of `a` if I return `a` from `get_graph()`. However, if I use `get_operation_by_name()` to retrieve `a`, it prints out None. I wonder what I did wrong here? Any suggestion to fix it? Thank you!
```
import tensorflow as tf
def get_graph():
graph = tf.Graph()
with graph.as_default():
a = tf.constant(5.0, name='a')
return graph, a
if __name__ == '__main__':
graph, a = get_graph()
with tf.Session(graph=graph) as sess:
print(sess.run(a))
a = sess.graph.get_operation_by_name('a')
print(sess.run(a))
```
it prints out
```
5.0
None
```
p.s. I'm using python 3.4 and tensorflow 1.2.
|
Naming conventions in tensorflow are subtle and a bit off-putting at first.
The thing is, when you write
```
a = tf.constant(5.0, name='a')
```
`a` is *not* the `constant` op, but its *output*. [Names of op outputs derive from the op name by appending a number corresponding to the output's index](https://stackoverflow.com/questions/36150834/how-does-tensorflow-name-tensors). Here, `constant` has only one output, so its name is
```
print(a.name)
# `a:0`
```
When you run `sess.graph.get_operation_by_name('a')` you *do* get the `constant` op. But what you actually wanted is to get `'a:0'`, the tensor that is the output of this operation, and whose evaluation returns an array.
```
a = sess.graph.get_tensor_by_name('a:0')
print(sess.run(a))
# 5.0
```
|
How do I add several traces to a single plot?
I would like to add y-axis traces to a plotly object using `map` from the purrr package. But instead of adding new traces to the existing plot, it creates individual plots for each trace. See:
```
library(purrr)
library(plotly)
data("iris")
p = plot_ly(iris, type = "bar")
xaxis = ~Species
map(c(~Sepal.Length, ~Sepal.Width, ~Petal.Length, ~Petal.Width), ~add_trace(p, x = xaxis, y = .x))
```
Is there a way to remedy this?
|
`map` is the wrong function for this because it iteratively applies the same function to different arguments in turn, by definition.
What you want to do is different: you want to *aggregate* a result over different arguments, by applying a given operation on the current aggregate and the next argument. In functional programming terminology, this is known as a *reduction*, and purrr provides the function [`reduce`](https://purrr.tidyverse.org/reference/reduce.html) for it.
```
trace_vars = c(~Sepal.Length, ~Sepal.Width, ~Petal.Length, ~Petal.Width)
result = reduce(trace_vars, ~ add_trace(.x, x = xaxis, y = .y), .init = p)
```
This is effectively the same as
```
add_trace(
add_trace(
…,
x = xaxis, y = ~Sepal.Width
),
x = xaxis, y = ~Sepal.Length
)
```
You can invert the direction via `reduce`’s `.dir` argument.
To include names, use `reduce2` with an adapted formula. Unfortunately extracting the names from the formulas is a bit annoying:
```
trace_names = map_chr(trace_vars, ~ as.character(.x[[2L]]))
result = reduce2(
trace_vars, trace_names,
~ add_trace(..1, x = xaxis, y = ..2, name = ..3),
.init = p
)
```
|
Unsure of whether to use an unpaired or paired t-test for two different samples
I'm clear on the following:
If there are two different samples, and you wish to test whether they come from the same population we go with an unpaired t-test.
If it is a single sample and we are measuring a scenario of before/after and wish to compare the significance of the results we go with a paired t-test.
However,
I'm faced with a situation where I have TWO different samples of equal size, but they are paired based on a common quality. One undergoes treatment and the other sample doesn't. My intuition says I should go for a paired t-test, since there is a 1-1 pairing within the samples.
In most examples I've looked up, the paired t-test is generally used when the sample is the same.
EDIT: By pairing by a common quality I mean the candidates in the two groups are paired by a common attribute, like say equal height, or weight etc.
|
It's completely reasonable to use a paired t-test when the two samples are not the same individuals, as long as they are meaningfully paired in some way. Conducting an independent samples t-test and a paired t-test asks very different questions, though.
## An example, to illustrate
Let's say you want to test whether teenagers differ from their parents in political orientation, assuming a simplified left-right continuous political scale where 0 means far right and 10 means far left. In general, parents and their children will probably be relatively close to each other on the scale (i.e. conservative parents will be more likely to have conservative kids, and liberal parents will be more likely to have liberal kids). But perhaps teens tend to be more left-leaning than their parents, so the child of a conservative parent may be a little less conservative, and the child of a liberal parent may be even a little more liberal.
If you conduct an independent samples t-test, it will answer the question "Do parents, overall, differ in political orientation from teens, overall?" It will test whether the mean political orientation in parents is different from the mean political orientation in teens. A paired t-test will answer the question "Do teens differ in political orientation from their parents?" It will test whether the mean *difference* in political orientation for all of the parent-teen pairs is different from zero.
## Your data
It's not clear from your description whether you want to look for overall differences between the means of the two samples, or whether you want to know about the difference scores for each matched pair. It is completely reasonable to conduct either the independent or paired analysis --- you should select whichever one will best answer your research question.
Another option which might feel more intuitive for you, depending on how this "matching" process worked, is an [ANCOVA](https://en.wikipedia.org/wiki/Analysis_of_covariance). You can control for the matching variable (height, weight, whatever), and look for differences between the groups after partialing out that variable.
|
Add update or remove WooCommerce shipping order items
I have added shipping costs for the orders that are synced from Amazon. For some reason I had to set a custom shipping flat price in the Woo orders created for each Amazon order. It is done as follows:
```
$OrderOBJ = wc_get_order(2343);
$item = new WC_Order_Item_Shipping();
$new_ship_price = 10;
$shippingItem = $OrderOBJ->get_items('shipping');
$item->set_method_title( "Amazon shipping rate" );
$item->set_method_id( "amazon_flat_rate:17" );
$item->set_total( $new_ship_price );
$OrderOBJ->update_item( $item );
$OrderOBJ->calculate_totals();
$OrderOBJ->save();
```
The problem is, I have to update orders each time the status is changed in Amazon; there is no problem doing that, but I also have to update the shipping cost if it has changed. I have not found any way to do so. Can anyone tell me how to update the shipping items of orders set in this way? Or is it the case that, once a shipping item is set, it cannot be updated or deleted?
|
To add or update shipping items use the following:
```
$order_id = 2343;
$order = wc_get_order($order_id);
$cost = 10;
$items = (array) $order->get_items('shipping');
$country = $order->get_shipping_country();
// Set the array for tax calculations
$calculate_tax_for = array(
'country' => $country,
'state' => '', // Can be set (optional)
'postcode' => '', // Can be set (optional)
'city' => '', // Can be set (optional)
);
if ( sizeof( $items ) == 0 ) {
$item = new WC_Order_Item_Shipping();
$items = array($item);
$new_item = true;
}
// Loop through shipping items
foreach ( $items as $item ) {
$item->set_method_title( __("Amazon shipping rate") );
$item->set_method_id( "amazon_flat_rate:17" ); // set an existing Shipping method rate ID
$item->set_total( $cost ); // (optional)
$item->calculate_taxes( $calculate_tax_for ); // Calculate taxes
if( isset($new_item) && $new_item ) {
$order->add_item( $item );
} else {
$item->save();
}
}
$order->calculate_totals();
```
It should work better…
---
To remove shipping items use the following:
```
$order_id = 2343;
$order = wc_get_order($order_id);
$items = (array) $order->get_items('shipping');
if ( sizeof( $items ) > 0 ) {
// Loop through shipping items
foreach ( $items as $item_id => $item ) {
$order->remove_item( $item_id );
}
$order->calculate_totals();
}
```
---
Related: [Add a shipping to an order programmatically in Woocommerce 3](https://stackoverflow.com/questions/53671945/add-a-shipping-to-an-order-programmatically-in-woocommerce-3/53673366#53673366)
|
Assembly not found at runtime when running a Roslyn script
I am embedding a script engine based on Roslyn into my application and started facing some issues. When loading a reference into my engine, I do the following
```
var engine = new Roslyn.Scripting.CSharp.ScriptEngine();
engine.AddReference("MyAssemblyLocation");
```
Questions:
1. If `MyAssemblyLocation` assembly is referencing other assemblies, how do I tell Roslyn to load them without doing an `engine.AddReference("MyAssemblyLocation");`
2. The host application in which the script is supposed to be running has already loaded some assemblies I would like to use in the script, is there a way for Roslyn to hook itself into the already loaded dlls in memory and not load new instances?
3. Finally, is there a way to tell Roslyn to recursively load all dependencies of `MyAssemblyLocation`
Thank you.
|
1) Short answer: you need to AddReference on all assemblies whose types are you going to use in the script.
Details:
The C# compiler doesn't guess where the dependent files are. An assembly file doesn't specify full paths to all of its dependencies. It only contains their names and that's generally not enough for the compiler to be able to locate the assembly file.
More details:
There are two kinds of dependencies. I'll call them compile time dependencies (a) and runtime dependencies (b).
(a) Suppose you have "public class A : B {}" in assembly A.dll, "public class B { }" in assembly B.dll, and your script is using A, say you create a new instance: "new A()". The compiler requires you to add references to both A.dll and B.dll in this case, since A derives from B and the compiler needs to analyze the inheritance hierarchy of each type you use. So B.dll is a compile-time dependency of your script - the compiler needs it in order to analyze your script properly.
(b) Consider another example: "public class A { public object M() { return new B(); } }" in A.dll and the same B.dll as above. Now when compiling "new A().M()" the compiler doesn't need to know about B, since the reference to B only occurs in the body of method M and the compiler doesn't analyze bodies of imported methods. In this case it suffices to add a reference to A.dll in order to compile the script. When the script executes it calls method M. At that point the CLR loads assembly B. This gets a bit complex, so I'll skip details but in common scenarios we'll be able to locate the assembly for you, so you don't need to add the reference explicitly.
2) I'd recommend using the overload of AddReference that takes an Assembly object, something like the following to load assembly Foo that contains type SomeTypeInAssemblyFoo:

```
engine.AddReference(typeof(SomeTypeInAssemblyFoo).Assembly);
```
3) Not a straightforward one. You can enumerate all references using Reflection or Roslyn APIs and load each one of them.
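A rough sketch of that with Reflection (a hypothetical helper; `using System.Collections.Generic;` assumed, and note that `Assembly.Load` resolves by name from the usual probing paths, so it may fail for assemblies outside them):

```
void AddReferencesRecursively(Roslyn.Scripting.CSharp.ScriptEngine engine,
                              System.Reflection.Assembly assembly,
                              HashSet<string> seen)
{
    if (!seen.Add(assembly.FullName))
        return; // already added

    engine.AddReference(assembly); // the overload taking an Assembly object

    foreach (var name in assembly.GetReferencedAssemblies())
    {
        AddReferencesRecursively(engine, System.Reflection.Assembly.Load(name), seen);
    }
}
```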
Feel free to ask further questions if the explanation is not clear or you wish to know more details.
|
Joining strings to form a URL
I have to construct a URL string from smaller substrings. I'll usually have a `resource url`, an `endpoint`, a `query`, but you can imagine even more parts in creating a full URL.
If, for example, I have this resource URL: `http://foo.com`, this endpoint: `bar/v1`, and this query: `q?var1=33,var2='abc'`, I expect the final URL to be:
```
http://foo.com/bar/v1/q?var1=33,var2='abc'
```
If the strings are formatted following a simple convention (for example, only have leading slashes on a substring) I could simply concatenate them together. The problem however is that these strings will be given as arguments to some library function calls, and hence it is very probable that some library users will miss the convention. It's so easy to add a trailing '/' or omit a leading '/'. So I do not want to impose a convention on this. Instead, I prefer to check and sanitise the arguments. I thought that `urllib.parse.urljoin()` would serve my purpose, but it does not. So I wrote a simple little method:
```
def slash_join(*args):
'''
Joins a set of strings with a slash (/) between them. Useful for creating URLs.
If the strings already have a trailing or leading slash, it is ignored.
Note that the python's urllib.parse.urljoin() does not offer this functionality.
'''
stripped_strings = []
# strip any leading or trailing slashes
for a in args:
if a[0] == '/': start = 1
else: start = 0
if a[-1] =='/':
stripped_strings.append(a[start:-1])
else:
stripped_strings.append(a[start:])
return '/'.join(stripped_strings)
```
### Usage
```
>>> slash_join('http://foo.bar/', '/path/', '/query')
'http://foo.bar/path/query'
>>> slash_join('http://foo.bar', 'path', 'query')
'http://foo.bar/path/query'
>>> slash_join('http://foo.bar', 'path/', 'query/')
'http://foo.bar/path/query'
```
It works fine, but I was wondering if there is a more pythonic way of expressing this, or if I indeed missed a standard library method call that could have helped me.
|
You can simplify the solution if, instead of checking for the presence of the slash, you `str.strip()` the slashes from both sides of each argument and then `str.join()` the arguments:
```
def slash_join(*args):
return "/".join(arg.strip("/") for arg in args)
```
`urljoin()` unfortunately does not allow to join more than 2 url parts - it can only do a base and one more part. There is though [this clever `functools.reduce()` usage](https://stackoverflow.com/a/36189595/771848) that may help to adapt it here:
```
from urllib.parse import urljoin
from functools import reduce
def slash_join(*args):
return reduce(urljoin, args).rstrip("/")
```
Note that I'm explicitly stripping a possible right slash but not sure if it is actually needed since the url with a trailing slash would still be valid - e.g. `http://foo.com/test/`.
|
LINQ, can't join to string
I have a list of users, and each user has a list of questions. In my model the list of questions should be a single comma-separated string. I try:
```
public List<ITW2012Mobile.ViewModels.AdminSurveyReportModel> SurveyReportList()
{
var q = from i in _dbContext.Users
where i.UserId != null
select new ITW2012Mobile.ViewModels.AdminSurveyReportModel()
{
FirstName = i.FirstName,
LastName = i.LastName,
Question4 = String.Join(", " , (from a in _dbContext.MultipleQuestions where a.MultipleQuestionType.KEY == MultipleQuestionKeys.BENEFITS select a.Question).ToArray())
};
return q.ToList();
}
public class AdminSurveyReportModel
{
public string FirstName { get; set; }
public string LastName { get; set; }
public string Question4 { get; set; }
}
```
of course, I get error:
>
> LINQ to Entities does not recognize the method 'System.String
> Join(System.String, System.String[])' method, and this method cannot
> be translated into a store expression.
>
>
>
How to get it correctly?
|
I would suggest doing the `string.Join` operation locally instead, using `AsEnumerable`:
```
var q = from i in _dbContext.Users
where i.UserId != null
select new
{
FirstName = i.FirstName,
LastName = i.LastName,
Question4Parts = _dbContext.MultipleQuestions
.Where(a => a.MultipleQuestionType.KEY ==
MultipleQuestionKeys.BENEFITS)
.Select(a => a.Question)
};
return q.AsEnumerable()
.Select(x => new ITW2012Mobile.ViewModels.AdminSurveyReportModel
{
FirstName = x.FirstName,
LastName = x.LastName,
Question4 = string.Join(", ", x.Question4Parts)
})
.ToList();
```
|
The -nodeprecated option in javadoc doesn't seem to work. What can I do?
I have a deprecated method in my class:
```
@Deprecated
public void deprecatedMethod() {
//do bad things
}
```
I don't want that method to appear in the javadoc.
I know there's an option called [-nodeprecated](http://java.sun.com/j2se/1.5.0/docs/tooldocs/windows/javadoc.html#nodeprecated) which:
>
> "Prevents the generation of any
> deprecated API at all in the
> documentation."
>
>
>
So I'm using this option and it doesn't exclude the method from javadoc. Is it a bug in javadoc or am I using it wrong? What else can I do?
(I'm using eclipse 3.4.2 to produce javadoc)
|
You have to include the "-nodeprecated" option in the Export to javadoc wizard.
Warning: it is a javadoc option, not a VM option.
I've tested it in Eclipse 3.4 and it worked.
**Edit:** If you only include the `@Deprecated` annotation it doesn't work. You have to include the `@deprecated` tag inside the method's javadoc as well.
I don't know if there's a way to tell javadoc to use the @Deprecated annotation (which curiously doesn't have a message parameter to document why it is deprecated and what else to use).
**Edit:** the pre-1.5 way of deprecating methods
You have to include a @deprecated tag (or indicator or whatever) with the message you want to display to the user in the javadoc after the "**deprecated**".
```
/**
This method sets the property A.
@see getA
@author helios
@deprecated This method is not sync safe, use setAOk instead
*/
public void setA(String value) ...
```
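From the command line (outside Eclipse), the equivalent is to pass the flag to the `javadoc` tool directly; the paths and package name here are placeholders:

```
javadoc -nodeprecated -d docs -sourcepath src com.example.mypackage
```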
|
Changing packaging based on active profile in pom
I have a project which I compile with maven. I have different profiles declared in pom.xml.
For some of these profiles, I prefer building a war, and for other profiles I prefer a jar. I used to manually edit the pom.xml file and change the packaging variable to either
```
<packaging>war</packaging>
```
or
```
<packaging>jar</packaging>
```
before doing a
```
$ mvn clean package -Pchosenprofile
```
How can I tell mvn the packaging corresponding to each profile so I don't need to edit pom.xml?
|
If you want to use profile you can use something like:
```
<?xml version="1.0" encoding="UTF-8"?>
<project>
<modelVersion>4.0.0</modelVersion>
..
<packaging>${packaging.type}</packaging>
<profiles>
<profile>
<id>webapp</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<properties>
<packaging.type>war</packaging.type>
</properties>
</profile>
<profile>
<id>batch</id>
<properties>
<packaging.type>jar</packaging.type>
</properties>
</profile>
</profiles>
</project>
```
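With that in place, the packaging follows the chosen profile and no manual pom edits are needed:

```
mvn clean package           # 'webapp' profile active by default, builds a war
mvn clean package -Pbatch   # 'batch' profile, builds a jar
```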
|
How to set color to secondary drawable in progress bar
I have this code
```
pb = (ProgressBar) findViewById(R.id.progressBar1);
final float[] roundedCorners = new float[] { 5, 5, 5, 5, 5, 5, 5, 5 };
ShapeDrawable pgDrawable = new ShapeDrawable(new RoundRectShape(roundedCorners, null, null));
String MyColor = "#00FF00";
pgDrawable.getPaint().setColor(Color.parseColor(MyColor));
ClipDrawable progress = new ClipDrawable(pgDrawable, Gravity.LEFT, ClipDrawable.HORIZONTAL);
pb.setProgressDrawable(progress);
pb.setBackgroundDrawable(getResources().getDrawable(android.R.drawable.progress_horizontal));
```
The problem in this code is that for the progress drawable and for the secondary progress drawable I have the same color.
How to set the secondary progress color?
|
Specify a `progressDrawable` like in the example below:
This goes to a layout:
```
<ProgressBar
android:id="@+id/gauge"
style="?android:attr/progressBarStyleHorizontal"
android:layout_width="wrap_content"
android:layout_height="4dp"
android:layout_alignParentLeft="true"
android:layout_alignParentRight="true"
android:layout_below="@+id/section"
android:progress="50"
android:progressDrawable="@drawable/progressbar"/>
```
This goes to `drawable/progressbar.xml`, and here you can specify background and colors for both progress bars.
```
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
<item android:id="@android:id/background">
<shape>
<gradient
android:startColor="@color/basecolor"
android:endColor="@color/basecolor"
android:angle="270"
/>
</shape>
</item>
<item android:id="@android:id/secondaryProgress">
<clip>
<shape>
<gradient
android:startColor="#808080"
android:endColor="#808080"
android:angle="270"
/>
</shape>
</clip>
</item>
<item android:id="@android:id/progress">
<clip>
<shape>
<gradient
android:startColor="#ffcc33"
android:endColor="#ffcc33"
android:angle="270" />
</shape>
</clip>
</item>
</layer-list>
```
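With the drawable in place, both bars are driven from code as usual:

```
ProgressBar pb = (ProgressBar) findViewById(R.id.gauge);
pb.setProgress(50);          // drawn with the 'progress' layer color
pb.setSecondaryProgress(75); // drawn with the 'secondaryProgress' layer color
```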
|
How should I be implementing the HTTP POST Protocol Binding for SAML WebSSO Profile?
I've implemented my Service Provider and Identity Provider following the SAML Profile for Web SSO using HTTP POST Protocol Binding. However, I am a bit confused as to how the Identity Provider will provide an `<AuthnStatement>` if the HTTP POST coming from the Service Provider is not tied to a session on the Identity Provider.
Could someone enlighten me how one would be able to do this?
The other approach I could use is the HTTP Redirect Binding, but that requires User-Agent intervention (i.e., the browser), often using the User-Agent simply as a pass-thru intermediary to faciliate the Request-Response message exchange. I'd rather use HTTP POST for this reason, because the message exchange occurs server-side, so the user sees nothing happening on their screen.
However, using HTTP Redirect makes more sense to me with respect to how I'd be able to tie a session to a request. Since the HTTP Redirect is facilitated via a User-Agent, the request to the IdP will have a session (if previously authenticated). What I don't get though is how to send an `<AuthnRequest>` on a HTTP Redirect. ***Answered by JST***
So I'm a bit confused and would love to hear what other people are doing. Here are my questions again:
1. Using the HTTP POST Protocol Binding with the `IsPassive` option on the `<AuthnRequest>`, how do I tie a request made by the Service Provider to a session on the Identity Provider? In other words, how does the Identity Provider know who is making the request if the POST is coming from the Service Provider, which is technically an anonymous session?
2. Using the HTTP Redirect Protocol Binding, how do I send an `<AuthnRequest>` to the Identity Provider if I am using a HTTP Redirect? ***Answered by JST***
---
### UPDATE
Sorry for the confusion if I was unclear in my explanation above. I am implementing both the IdP and SP (via a plugin). The IdP is an existing application which I want the SP (a third-party system) to use for authentication (i.e., Web SSO). I am developing a simple PoC at the moment. The SP is actually a third-party Spring application for which I am developing a plugin to perform the SAML operations.
I should have mentioned that I am trying to do this using the `IsPassive` option, meaning the User-Agent doesn't come into play during the message exchange. It is simply the catalyst that gets the SAML party started. Right? With that in mind, given that the user is anonymous at Step 1, what does the SP send to the IdP to allow the IdP to figure out whether the user is already authenticated? Because of IsPassive, the HTTP POST isn't sent via the User-Agent.
---
### UPDATE
*Question 1 Revised*: How does the IdP resolve the Principal when the `AuthnRequest` is sent with the `IsPassive` option on?
Straight from the SAML 2.0 Profiles document, page 15, lines 417 to 419:
>
> In step 4, the principal is identified
> by the identity provider by some means
> outside the scope of this profile.
>
>
>
What I'm really after is an explanation how to implement `some means`.
|
The thing to keep in mind is that there's no connection between a session on the IdP and a session on the SP. They don't know about each other, and communicate only through the SAML messages. The general steps for SP-initiated SAML SSO are:
1. Anonymous user visits resource (page) at SP.
2. SP identifies that user needs to be authenticated at IdP.
3. SP constructs AuthnRequest and sends to IdP.
4. IdP does some sort of authentication, constructs SAML Response and sends to SP.
5. SP validates Response and, if valid, does whatever is necessary to identify user at SP and get them to originally requested resource.
Yes, there does need to be some way to connect the SP's AuthnRequest to the IdP's Response. That's covered by the SAML spec: the SP's AuthnRequest includes an ID value, and the corresponding response from the IdP MUST include an InResponseTo attribute (on its SubjectConfirmationData element) with that ID value. The Authentication Request Protocol also allows the SP to pass a RelayState parameter to the IdP, which the IdP is then REQUIRED to pass along unchanged with the SAML Response. You (in the SP role) can use that RelayState value to capture state information allowing the user to be relayed to the originally requested resource.
That implies that when you implement an SP, you'll need some mechanism for recording ID and RelayState values, and your Response processing needs to validate InResponseTo and RelayState values it receives. How you choose to create and interpret RelayState values is up to you, but keep in mind that there is a length limit. (We use random GUID values corresponding to locally saved state data, which has the extra advantage of not giving any hint of meaning to the RelayState values.)
How does the IdP know who is making the request? The AuthnRequest must include an Issuer element that identifies the SP. It might also contain an AssertionConsumerServiceURL (the URL to which the Response is to be sent), or the IdP may have a local mapping of the Issuer to the proper URL.
How do you send an AuthnRequest using HTTP Redirect? The only difference between AuthnRequest sent using POST vs. Redirect, besides using GET rather than POST, is that the AuthnRequest XML has to get compressed (using the DEFLATE encoding).
Hope that answers most of your questions.
|
How to change bash completion result coloring
How to change completion coloring when auto-completing subcommands in bash?[](https://i.stack.imgur.com/Zj7zF.png)
For some unknown reason, it looks like the completion results are treated by bash as broken symlinks, and it looks really disturbing.
System info:
- Bash version: 4.4.23(1)-release
- OS: Manjaro Linux
---
Edit: I understand that by setting off the `colored-stats` GNU Readline option in `~/.inputrc` the coloring will be turned off altogether:
```
set colored-stats off
```
But that would also disable other coloring, such as for directory, when auto-completing.
I think this is weird because the broken-symlink-like coloring happens on my Manjaro Linux box at home, but not on my Arch Linux box at work. Both use the same `bashrc` and `inputrc`, but I haven't checked for other potential differences yet.
---
Edit again: Pasting my `~/.inputrc` for reference:
```
$include /etc/inputrc
$if mode=emacs
# cycle through possible completions
TAB: menu-complete
# complete until the end of common prefix before cycling through possible completions
set menu-complete-display-prefix on
# show possible completions if more than one completions are possible
set show-all-if-ambiguous on
set show-all-if-unmodified on
# do not duplicate characters after the cursor if they consist partially of possbile completion
set skip-completed-text on
# display colors when doing completion as `ls` does
set colored-stats on
# color tab-completion matched prefix part
set colored-completion-prefix on
# fuck off stty key bindings. (stty -a)
set bind-tty-special-chars off
"\C-w": unix-word-rubout
"\eh": kill-region
"\eH": copy-region-as-kill
"\C-x\'": "\'\'\C-b"
"\C-x`": "``\C-b"
"\C-x\"": "\"\"\C-b"
"\C-x{": "{}\C-b"
"\C-x[": "[]\C-b"
"\C-x(": "()\C-b"
"\C-x0": vi-eWord
"\eF": "\C-x0\C-f"
"\eB": vi-bWord
"\eD": "\e \eF\eh"
$endif
```
|
Completion coloring for GNU Readline is enabled with `colored-stats` in `.inputrc`, like you've mentioned.
The colors are determined by `$LS_COLORS`, which can be generated with `dircolors`.
The completion suggestions seem to inherit their colors from the `MISSING` attribute in `~/.dircolors`. Commenting it out (or changing to something less disturbing, e.g. removing `05;` to disable blinking) prints the suggestions in default colors. The corresponding entry in `$LS_COLORS` is `...:mi=03;31:...` (an example with ANSI colors `03` and `31`).
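For example, assuming you keep a `~/.dircolors` file, the workflow looks like this:

```
# Dump the defaults once, if you don't already have a ~/.dircolors:
dircolors -p > ~/.dircolors

# In that file, edit the MISSING (and typically ORPHAN) entries,
# e.g. drop the 05 (blink) attribute, then reload LS_COLORS in your
# shell startup:
eval "$(dircolors ~/.dircolors)"
```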
Similar discussion:
<https://bugzilla.redhat.com/show_bug.cgi?id=1648170>
References:
<https://wiki.archlinux.org/index.php/Readline#Colorized_completion>
<https://wiki.archlinux.org/index.php/Color_output_in_console#ls>
[dircolors: modify color settings globaly](https://unix.stackexchange.com/questions/94299/dircolors-modify-color-settings-globaly/94306#94306)
<https://askubuntu.com/questions/466198/how-do-i-change-the-color-for-directories-with-ls-in-the-console>
|
Looping/iterate over the second level nested JSON in go lang
Consider the following code:
```
package main
import (
"encoding/json"
"fmt"
"reflect"
)
func main() {
//Creating the maps for JSON
m := map[string]interface{}{}
//Parsing/Unmarshalling JSON encoding/json
err := json.Unmarshal([]byte(input), &m)
fmt.Println("\nReflect type of Parsing/Unmarshalling Error Object:\n",reflect.TypeOf(err))
fmt.Println("\nParsing/Unmarshalling Error Object:\n",err)
if err != nil {
panic(err)
}
fmt.Println("\nParsed JSON is as follows:\n",m)
fmt.Println("\nReflect type of parsed json object:\n", reflect.TypeOf(m))
for firstLvlkey, firstLvlValue := range m {
fmt.Println("First Level Key:", firstLvlkey)
fmt.Println("First Level Key reflect type of :", reflect.TypeOf(firstLvlkey))
fmt.Println("First Level Value:", firstLvlValue)
fmt.Println("First Level Value reflect type of :", reflect.TypeOf(firstLvlValue))
// <===============================>
//Here I want to iterate/loop over innerJSON1, InnerJSON2 then reach to level InnerInnerJSONArray - fld1 and fld2
// <===============================>
}
}
const input = `
{
"outterJSON":{
"innerJSON1":{
"value1":10,
"value2":22
,
"InnerInnerArray": [ "test1" , "test2"],
"InnerInnerJSONArray": [ {"fld1" : "val1"} , {"fld2" : "val2"} ]
},
"InnerJSON2":"NoneValue"
}
}
`
```
I have some requirement like I want to read/get all the keys and values as `String` type for some processing, and I can't define the `struct` because I will be getting dynamic JSON input (e.g. `InnerInnerArray` as a string, then a second-level loop will give me the index of the array and process each JSON object having key `fld1` and value `val1`).
I wish to iterate over every key/value pair contained within it; what is the most efficient way of going through the map?
Note: I am Newbie for Go-lang, your suggestion/improvement on question is also most welcome.
|
See [this](https://blog.golang.org/json-and-go) blog entry which thoroughly covers this subject, specifically the section **Decoding arbitrary data**. Using that you can do something like this:
([playground example](https://play.golang.org/p/5s8tVL7BgM))
```
package main
import (
"encoding/json"
"fmt"
)
func main() {
// Creating the maps for JSON
m := map[string]interface{}{}
// Parsing/Unmarshalling JSON encoding/json
err := json.Unmarshal([]byte(input), &m)
if err != nil {
panic(err)
}
parseMap(m)
}
func parseMap(aMap map[string]interface{}) {
for key, val := range aMap {
switch concreteVal := val.(type) {
case map[string]interface{}:
fmt.Println(key)
parseMap(val.(map[string]interface{}))
case []interface{}:
fmt.Println(key)
parseArray(val.([]interface{}))
default:
fmt.Println(key, ":", concreteVal)
}
}
}
func parseArray(anArray []interface{}) {
for i, val := range anArray {
switch concreteVal := val.(type) {
case map[string]interface{}:
fmt.Println("Index:", i)
parseMap(val.(map[string]interface{}))
case []interface{}:
fmt.Println("Index:", i)
parseArray(val.([]interface{}))
default:
fmt.Println("Index", i, ":", concreteVal)
}
}
}
const input = `
{
"outterJSON": {
"innerJSON1": {
"value1": 10,
"value2": 22,
"InnerInnerArray": [ "test1" , "test2"],
"InnerInnerJSONArray": [{"fld1" : "val1"} , {"fld2" : "val2"}]
},
"InnerJSON2":"NoneValue"
}
}
`
```
This will print:
```
//outterJSON
//innerJSON1
//InnerInnerJSONArray
//Index: 0
//fld1 : val1
//Index: 1
//fld2 : val2
//value1 : 10
//value2 : 22
//InnerInnerArray
//Index 0 : test1
//Index 1 : test2
//InnerJSON2 : NoneValue
```
The key thing is that you have to use type assertion when working with interface types. The type switch makes it easy to determine the type as needed. The code will recursively range through any nested array or map so you can add as many levels as you wish and get all your values.
|
How are read/write locks implemented in pthread?
How are they implemented, especially in the case of pthreads? What `pthread` synchronization APIs do they use under the hood? A little bit of pseudocode would be appreciated.
|
I haven't done any pthreads programming for a while, but when I did, I never used POSIX read/write locks. The problem is that most of the time a mutex will suffice: ie. your critical section is small, and the region isn't so performance critical that the double barrier is worth worrying about.
In those cases where performance is an issue, normally using atomic operations (generally available as a compiler extension) are a better option (ie. the extra barrier is the problem, not the size of the critical section).
By the time you eliminate all these cases, you are left with cases where you have specific performance/fairness/rw-bias requirements that require a true rw-lock; and that is when you discover that all the relevant performance/fairness parameters of POSIX rw-lock are undefined and implementation specific. At this point you are generally better off implementing your own so you can ensure the appropriate fairness/rw-bias requirements are met.
The basic algorithm is to keep a count of how many of each are in the critical section, and if a thread isn't allowed access yet, to shunt it off to an appropriate queue to wait. Most of your effort will be in implementing the appropriate fairness/bias between servicing the two queues.
The following C-like pthreads-like pseudo-code illustrates what I'm trying to say.
```
struct rwlock {
mutex admin; // used to serialize access to other admin fields, NOT the critical section.
int count; // threads in critical section +ve for readers, -ve for writers.
fifoDequeue dequeue; // acts like a cond_var with fifo behaviour and both append and prepend operations.
void *data; // represents the data covered by the critical section.
}
void read(struct rwlock *rw, void (*readAction)(void *)) {
lock(rw->admin);
if (rw->count < 0) {
append(rw->dequeue, rw->admin);
}
while (rw->count < 0) {
prepend(rw->dequeue, rw->admin); // Used to avoid starvation.
}
rw->count++;
// Wake the new head of the dequeue, which may be a reader.
// If it is a writer it will put itself back on the head of the queue and wait for us to exit.
signal(rw->dequeue);
unlock(rw->admin);
readAction(rw->data);
lock(rw->admin);
rw->count--;
signal(rw->dequeue); // Wake the new head of the dequeue, which is probably a writer.
unlock(rw->admin);
}
void write(struct rwlock *rw, void *(*writeAction)(void *)) {
lock(rw->admin);
if (rw->count != 0) {
append(rw->dequeue, rw->admin);
}
while (rw->count != 0) {
prepend(rw->dequeue, rw->admin);
}
rw->count--;
// As we only allow one writer in at a time, we don't bother signaling here.
unlock(rw->admin);
// NOTE: This is the critical section, but it is not covered by the mutex!
// The critical section is rather, covered by the rw-lock itself.
rw->data = writeAction(rw->data);
lock(rw->admin);
rw->count++;
signal(rw->dequeue);
unlock(rw->admin);
}
```
Something like the above code is a starting point for any rwlock implementation. Give some thought to the nature of your problem and replace the dequeue with the appropriate logic that determines which class of thread should be woken up next. It is common to allow a limited number/period of readers to leapfrog writers or visa versa depending on the application.
Of course my general preference is to avoid rw-locks altogether; generally by using some combination of atomic operations, mutexes, STM, message-passing, and persistent data-structures. However there are times when what you really need is a rw-lock, and when you do it is useful to know how they work, so I hope this helped.
EDIT - In response to the (very reasonable) question, where do I wait in the pseudo-code above:
I have assumed that the dequeue implementation contains the wait, so that somewhere within `append(dequeue, mutex)` or `prepend(dequeue, mutex)` there is a block of code along the lines of:
```
while(!readyToLeaveQueue()) {
wait(dequeue->cond_var, mutex);
}
```
which was why I passed in the relevant mutex to the queue operations.
|
USE\_FINGERPRINT is deprecated in API level 28
Constant `USE_FINGERPRINT` was [deprecated in API level 28](https://developer.android.com/reference/android/Manifest.permission#USE_FINGERPRINT) and we should use more generic `USE_BIOMETRIC` which has been added in same API level.
I swapped these constants in my Manifest and I'm getting an error when calling `FingerprintManagerCompat.from(context).isHardwareDetected()`.
**Error is:**
>
> Missing required permission - USE\_FINGERPRINT
>
>
>
This happens because of `@RequiresPermission("android.permission.USE_FINGERPRINT")` annotation in `FingerprintManagerCompat` in 28.0.0-rc3 support v4 lib.
Is this something I can ignore and continue using the new permission?
|
I've faced the same problem, imho the short answer is to ignore the deprecation, as long as you **only** want to support fingerprint authentication in your app.
As stated in the [google dev blog](https://android-developers.googleblog.com/2018/06/better-biometrics-in-android-p.html), since API 28 Google ships the new [biometrics API](https://source.android.com/security/biometric), which simplifies the whole process of biometric authentication. They provide a simple builder for the auth dialog. Additionally, they support face and iris detection too; imho it is just a matter of time before you want to support those, so it is probably worth upgrading.
The only disadvantage I've discovered so far is that if you want to check if e.g. fingerprint hardware is available, you'll have to start the authentication process to check this out and wait for the [error callback](https://developer.android.com/reference/android/hardware/biometrics/BiometricPrompt.html#BIOMETRIC_ERROR_HW_UNAVAILABLE). The deprecated fingerprint API instead provides methods like [`isHardwareDetected()`](https://developer.android.com/reference/android/support/v4/hardware/fingerprint/FingerprintManagerCompat.html#isHardwareDetected()) or [`hasEnrolledFingerprints()`](https://developer.android.com/reference/android/support/v4/hardware/fingerprint/FingerprintManagerCompat.html#isHardwareDetected()) for this purpose. In this case, you would probably have to re-design your application, if you rely on this information. The reason for the deprecation of those methods is probably, that it only supports fingerprints, therefore it is not a bad idea to upgrade it.
Google has also provided the *compat* version `'androidx.biometric:biometric:1.0.0-alpha02'` for devices below API 28; it seems that by importing this dependency you can simply switch to the `USE_BIOMETRIC` permission without modifying anything else in your app - you won't be bothered by the warning anymore. Since it is only in alpha stage, I would use it with care. Therefore, as long as you don't use anything from the biometrics API, you can also simply ignore the problem and face it again when you want to support additional biometric authentication methods.
**EDIT**: Now, the beta version of compat library is released, `'androidx.biometric:biometric:1.0.0-beta01'`. For more info on this, [check here](https://developer.android.com/jetpack/androidx/releases/biometric).
Now, the stable version of compat library is released on December 18, 2019, 'androidx.biometric:biometric:1.0.1'. For more info on this [Click here](https://developer.android.com/jetpack/androidx/releases/biometric).
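For reference, here is a minimal usage sketch of the androidx.biometric API mentioned above; the title and button text are placeholders, and this assumes you are inside a `FragmentActivity`:

```java
import java.util.concurrent.Executor;
import androidx.biometric.BiometricPrompt;
import androidx.core.content.ContextCompat;

// inside a FragmentActivity:
Executor executor = ContextCompat.getMainExecutor(this);
BiometricPrompt prompt = new BiometricPrompt(this, executor,
        new BiometricPrompt.AuthenticationCallback() {
            @Override
            public void onAuthenticationError(int errorCode, CharSequence errString) {
                // errors such as BiometricPrompt.ERROR_HW_UNAVAILABLE arrive here,
                // replacing the old isHardwareDetected()-style checks
            }

            @Override
            public void onAuthenticationSucceeded(BiometricPrompt.AuthenticationResult result) {
                // the user authenticated successfully
            }
        });

BiometricPrompt.PromptInfo promptInfo = new BiometricPrompt.PromptInfo.Builder()
        .setTitle("Sign in")
        .setNegativeButtonText("Cancel")
        .build();
prompt.authenticate(promptInfo);
```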
|
Regular expression for just 7 digits and 9 digits
I was searching for a regular expression for exactly 7 digits and another for exactly 9 digits; all I found was for [0-7] and [0-9], not exactly 7 and 9 digits - no shorter, no longer.
How can I find those ?
|
Matching seven digits:
```
^\d{7}$
```
Matching nine digits:
```
^\d{9}$
```
If you want to match either seven or nine digits, use this:
```
^(\d{7}|\d{9})$
```
or just this:
```
^\d{7}(\d{2})?$
```
**Quantifier**: The number in curly braces is what we call the *quantifier*, it determines how many repetitions of the preceding pattern (character or group in parentheses) are matched.
**Beginning** and **end** of a string (or line) are denoted with the caret `^` and dollar sign `$` respectively.
The **pipe character** `|` is used to provide two alternative patterns. It is important to know that its precedence is *lowest* (thanks raina for reminding me), i.e. it will either match *everything* to its left or to its right, unless constrained with parentheses.
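If you need to test this in code, here is how it might look in Java, for example (`String#matches` implicitly anchors the pattern against the whole string, so the `^` and `$` can be omitted):

```java
boolean sevenDigits = "1234567".matches("\\d{7}");            // true
boolean sevenOrNine = "123456789".matches("\\d{7}(\\d{2})?"); // true
boolean eightDigits = "12345678".matches("\\d{7}(\\d{2})?");  // false
```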
|
How to use GORM for Mongodb in Golang?
I'm new to `go` and `MongoDB`. I want to connect to MongoDB with GORM in `go-lang`. After a lot of searching, I still can't do it.
|
In short: you can't. [GORM](http://gorm.io/) is created for relational databases, and MongoDB is not a relational but a NoSQL database.
And you can't even use GORM with all SQL databases, the [officially supported list](http://gorm.io/docs/connecting_to_the_database.html#Supported-Databases) at the moment is: MySQL, PostgreSQL, SQLite3 and SQL Server, although you can "easily" add support for other SQL servers by [writing GORM dialects](http://gorm.io/docs/dialects.html) for them. But that's the end of it. Adding support for MongoDB would require more work than what you would gain.
Consider using the [official MongoDB driver](https://github.com/mongodb/mongo-go-driver) which is quite mature now. Or if using GORM is a must for you, you must choose another (not MongoDB, preferably one of the above listed supported) database.
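For illustration, here is a minimal connection sketch with the official driver; the URI, database and collection names are placeholders:

```go
package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// connect to a local MongoDB instance
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	coll := client.Database("testdb").Collection("users")
	if _, err := coll.InsertOne(ctx, bson.M{"name": "alice"}); err != nil {
		log.Fatal(err)
	}
}
```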
|
Multiple action listeners with a single command component in JSF
Is it possible to invoke more than one listener method using a single command component? For example,
A view scoped bean:
```
@ManagedBean
@ViewScoped
public final class ViewScopedBean implements Serializable
{
@ManagedProperty(value = "#{sessionScopedBean}")
private SessionScopedBean sessionScopedBean; //Getter/Setter.
private static final long serialVersionUID = 1L;
public ViewScopedBean() {}
public void action()
{
//Do something.
sessionScopedBean.action();
}
}
```
A session scoped bean:
```
@ManagedBean
@SessionScoped
public final class SessionScopedBean implements Serializable
{
private static final long serialVersionUID = 1L;
public SessionScopedBean () {}
public void action() {
//Do something.
}
}
```
A command button like the one given below,
```
<h:commandButton value="Action" actionListener="#{viewScopedBean.action}"/>
```
invokes the method `action()` in `ViewScopedBean` which in turn invokes the `action()` method in `SessionScopedBean` by injecting an instance of that bean.
Is it somehow possible to do the same in XHTML, so that the need to inject a bean just to invoke a method can be eliminated?
|
Use [`<f:actionListener binding>`](http://docs.oracle.com/javaee/6/javaserverfaces/2.1/docs/vdldocs/facelets/f/actionListener.html):
```
<h:commandButton value="Action">
<f:actionListener binding="#{viewScopedBean.action()}"/>
<f:actionListener binding="#{sessionScopedBean.action()}"/>
</h:commandButton>
```
Note the importance of the parentheses in EL. Omitting them would in this particular example otherwise throw a confusing `javax.el.PropertyNotFoundException: Property 'action' not found on type com.example.ViewScopedBean`, because it's by default interpreted as a value expression. Adding parentheses makes it a method expression. See also [Why am I able to bind <f:actionListener> to an arbitrary method if it's not supported by JSF?](https://stackoverflow.com/questions/30744949/why-am-i-able-to-bind-factionlistener-to-an-arbitrary-method-if-its-not-supp)
You could even add an `actionListener` and/or an `action` method to the component in the usual way, which will be invoked afterwards. What has to be unique is the `action` method, which decides the **outcome** of the processing.
Anyway, keep in mind the listeners are always executed before the action and are considered a "warming-up" for it. Your best bet is to perform the whole logic in the action method, even if you need to do bean injections.
**See also:**
- [Call multiple backing bean methods at the same time](https://stackoverflow.com/questions/5020492/call-multiple-backing-bean-methods-at-the-same-time)
- [Differences between action and actionListener](https://stackoverflow.com/questions/3909267/differences-between-action-and-actionlistener/)
|
ScrollView Ignore OnTouch while Scrolling?
I have a ScrollView -> Table -> TableRow that I dynamically add rows to; each row has a LinearLayout inside of it that I attach an OnTouchListener to, and when it's touched I do something. At least that was the plan. The problem I am having is that these events fire even while you are scrolling the ScrollView. This type of behavior does not occur for the other controls I have in the ScrollView, such as Buttons, ImageButtons and EditTexts.
My question is how do I get the LinearLayout to ignore these OnTouch events while the ScrollView is scrolling like the Button and EditText fields do?
|
Inside your onTouch event callback, add an if statement or case block that checks for
`(me.getAction() == MotionEvent.ACTION_CANCEL)`, similar to this:
```
else if (me.getAction() == MotionEvent.ACTION_CANCEL){
Log.i(myTag, "Action Cancel");
//This means we are scrolling on the list, not trying to press
}
```
It has been a long time since I worked on it, but I know I had to solve this problem at one point, and upon a quick glance just now I think this is what I had to do in order to get it working. It is going to keep receiving callbacks while the list is scrolling, but the action on them should be cancel. So if you set up some sort of if, or switch/case that checks for action\_cancel and does nothing when it is true, from the users perspective the onTouch will "ignore" the events that happen while scrolling.
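Putting it together, here is a sketch of such a listener on the row's LinearLayout; `row`, `myTag` and `handleRowPressed` are assumed names for illustration (imports: `android.view.View`, `android.view.MotionEvent`, `android.util.Log`):

```java
row.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent me) {
        switch (me.getAction()) {
            case MotionEvent.ACTION_CANCEL:
                // the ScrollView took over the gesture: we are scrolling, do nothing
                Log.i(myTag, "Action Cancel");
                return false;
            case MotionEvent.ACTION_UP:
                // a real press: the gesture finished without being intercepted
                handleRowPressed(v);
                return true;
            default:
                return true; // keep receiving the rest of this gesture
        }
    }
});
```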
|
Work on list of tuples in Scala - part 1
I'm new to Scala, and trying to understand how to work on lists of tuples, so I've created a fictive list of people:
```
val fichier = List(("Emma Jacobs","21"), ("Mabelle Bradley","53"), ("Mable Burton","47"))
```
I would like to catch the components of each element (tuple) and use them for other purposes, so I wrote this:
```
def classeur(personne: List[(String, String)]) : String =
personne match {
case Nil => "Empty file"
case h :: t => {
h._1 + "is " + h._2 + "years old"
classeur(t)
}
}
```
Result: Empty file.
What am I misunderstanding, as my `fichier` is not empty? Why does it consider `fichier` to be `Nil`?
|
Your code is almost right. The only issue is that you forgot to concatenate the String to the result of the recursive call:
```
def classeur(personne: List[(String, String)]) : String =
personne match {
case Nil => "Empty file"
case h::t => h._1 + " is " + h._2 + " years old " + classeur(t)
}
```
Here is another option by extracting the values of the tuple in the case statement, which I think may be clearer:
```
def classeur(personne: List[(String, String)]) : String =
personne match {
case Nil => "Empty file"
case (name, age)::t => name + " is " + age + " years old " + classeur(t)
}
```
**EDIT**:
Here is an option with map as suggested in comments:
```
personne.map{case (name, age) => s"$name is $age years old"}.mkString(",")
```
Output:
```
Emma Jacobs is 21 years old,Mabelle Bradley is 53 years old,Mable Burton is 47 years old
```
|
How to define array/object in .env file?
The following is my Javascript object:
```
window.options = {
VAR1: VAL1,
VAR2: VAL2,
VAR3: VAL3,
};
```
I want it (either object or array) to be defined in a `.env` file. How can I do that?
|
Values in `.env` files can only be strings.
A simple workaround is to just store the env value as a comma-separated value, like:
```
SOME_VAR=VAL1,VAL2,VAL3
```
and split it in your js file:
```
const someVar = process.env.SOME_VAR.split(",");
console.log(someVar); // [ 'VAL1', 'VAL2', 'VAL3' ]
```
Or use whatever delimiter you want.
---
If you want to store an object, an (unclean) workaround is to store a JSON string as the env value, for example:
```
OBJECT_VAL={ "VAR1": "VAL1", "VAR2": "VAL2", "VAR3": "VAL3" }
```
and in your js code, you can parse the JSON:
```
const objectVal= JSON.parse(process.env.OBJECT_VAL);
console.log(objectVal); // { VAR1: 'VAL1', VAR2: 'VAL2', VAR3: 'VAL3' }
```
---
I personally don't think storing a JSON string inside `.env` is a good idea, so I would like to give my recommendation on a better way to store `.env` values and use them in your js code.
### 1. Store env values as normal strings, or delimiter-separated values
For example:
```
ARRAY=VAL1,VAL2,VAL3
VAR1=VAL1
VAR2=VAL2
VAR3=VAL3
```
### 2. Make a js file to handle env variables
I will call it `env.js`, and in this file I will export an object containing all the env variables:
```
module.exports = {
array: process.env.ARRAY.split(","),
object: {
var1: process.env.VAR1,
var2: process.env.VAR2,
var3: process.env.VAR3,
}
}
```
And in other files, you can just import `env.js` and use the env values:
```
const env = require("path/to/env.js");
console.log(env.array); // [ 'VAL1', 'VAL2', 'VAL3' ]
console.log(env.object.var1); // "VAL1"
```
If your project often calls `process.env`, this solution might make your code a bit cleaner, since you don't need to type `process.` every time you want to access an env variable.
|
Explanation for cryptic double pointer assignment
I'm reading the c code:
```
void **alignedData = (void **)(((size_t)temp + aligned - 1)&-aligned);
```
I do not know what it means, especially the `&-` part.
Can anyone explain it?
Thanks!
|
When using this, `aligned` should be an unsigned type (or the C implementation should be using two’s complement) and have a value that is a power of two. Then this code calculates an amount of memory to be allocated:
- `(size_t) temp` converts `temp` to the unsigned type `size_t`, which is suitable for working with sizes. This will be a number of bytes to be allocated.
- `(size_t) temp + aligned - 1` adds enough bytes to guarantee a multiple of `aligned` falls somewhere between the numbers `temp` and `temp + aligned - 1`, inclusive. For example, if `temp` is 37 and `aligned` is 8, then between 37 and 44 (37+8−1), there is a multiple of 8 (40).
- `-aligned` makes a bit mask with 1 in each bit position that is a multiple of `aligned` and 0 in the lower bits. For example, if `aligned` is 8, then the bits that represent `-aligned` are 111…111000, because the 000 bits at the end represent values of 1, 2, and 4, while the other bits represent values of 8, 16, 32, and so on.
- The `&` (bitwise AND) of `(size_t) temp + aligned - 1` with `-aligned` then clears the low bits, leaving only bits that are multiples of `aligned`. Thus, it produces the multiple of `aligned` that is in the interval. For example, with the values of 37 and 8 mentioned before, `((size_t) temp + aligned - 1) & -aligned` produces 40.
Thus, this expression produces the value of `temp` rounded up to the next multiple of `aligned`. It says “Calculate the number of bytes we need to allocate that is at least `temp` bytes and is a multiple of `aligned`.”
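A quick self-contained sketch reproducing the worked example above (note `aligned` must be unsigned here, so `-aligned` wraps around to the two's-complement bit mask):

```c
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    size_t temp = 37;
    size_t aligned = 8;                            /* must be a power of two */
    size_t rounded = (temp + aligned - 1) & -aligned;
    printf("%zu\n", rounded);                      /* prints 40 */
    return 0;
}
```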
After this, the code converts this number to the type `void **` and uses it to initialize `void **alignedData`. That is bad C code. There is generally no good reason for it. A number of bytes like this should not be used as any kind of pointer. The code may be attempting to “smuggle” this value through a data type it is compelled to use by some other software, but there is likely a better way to do it, such as by allocating memory to hold the value and supplying a pointer to that memory instead of trying to convert the value directly. Finding a better solution requires knowing more context of the code.
|
Update the UI ( Loading and Error Views ) on RecyclerView using RxJava
I have a SearchView that executes a network request to search for some tracks and then populates a RecyclerView with the results. I have found this code, which works fine.
I have already integrated the RecyclerView EmptyView through the Adapter, but now I am trying to integrate the LoadingView (progress) and ErrorView into this code. I tried to set the LoadingView (ProgressBar) visible inside the concatMap but got the error "Only the original thread that created a view hierarchy can touch its views", which can be solved by running that on the main thread, but I am sure there is a better way to do this.
Does someone have a better idea of where and how the logic to show/hide the ErrorView and LoadingView can be integrated into this code?
I am also using RxBinding. Maybe using RxRecyclerView would also be a good idea?
```
RxSearchView.queryTextChanges(searchView).
filter(charSequence ->
!TextUtils.isEmpty(charSequence))
.throttleLast(100, TimeUnit.MILLISECONDS)
.debounce(200, TimeUnit.MILLISECONDS)
.onBackpressureLatest()
.concatMap(searchTerm ->
{
return searchTracks(searchTerm)
.subscribeOn(Schedulers.io())
.onErrorResumeNext(throwable1 -> {
//handle error somehow, change UI
return Observable.empty();
}
);
}
)
.observeOn(AndroidSchedulers.mainThread())
    .subscribe(tracks -> {
        populateTracks(tracks);
    }, throwable -> {
        //show errorView
    });
```
|
This here was my complete solution without stripping code for the presentation.
```
RxSearchView.queryTextChanges(searchView)
.skip(1)
.doOnNext(charSequence -> Log.v(TAG, "searching: " + charSequence))
.throttleLast(100, TimeUnit.MILLISECONDS)
.debounce(200, TimeUnit.MILLISECONDS)
.onBackpressureLatest()
.observeOn(AndroidSchedulers.mainThread())
.filter(charSequence -> {
final boolean empty = TextUtils.isEmpty(charSequence);
if (empty) {
Log.v(TAG, "empty view");
mAdapter.clear();
}
return !empty;
})
.concatMap(charSequence -> {
Log.v(TAG, "requesting " + charSequence);
return onErrorResumeNext(
mGitApiService.searchRepositoriesSingle(charSequence.toString())
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread()),
throwable -> {
try {
throw throwable;
} catch (HttpException httpException) {
showEmptyErrorView(httpException.message());
} catch (Throwable other) {
showEmptyErrorView(other.getMessage());
other.printStackTrace();
}
return Observable.empty();
});
})
.doOnNext(charSequence -> Log.v(TAG, "got data"))
.subscribe(response -> {
showRepositories(response.getItems());
}, throwable -> {
throwable.printStackTrace();
showEmptyErrorView(throwable.getMessage());
});
```
So basically, whenever you touch your view, you have to call `.observeOn(AndroidSchedulers.mainThread())` first.
|
How to change color of text in JavaFX Label
I am having trouble changing the colors of text within the JavaFX Label class.
This is the code I have so far.
```
package Problem2;
import javafx.application.Application;
import javafx.geometry.Insets;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.control.Slider;
import javafx.scene.layout.BorderPane;
import javafx.scene.layout.GridPane;
import javafx.scene.layout.StackPane;
import javafx.scene.paint.Color;
import javafx.scene.text.Text;
import javafx.stage.Stage;
public class Problem2Code extends Application {
Slider[] slider = new Slider[4];
@Override
public void start(Stage primaryStage) throws Exception {
Text text = new Text("Show Colors");
// Bottom pane
Label[] labels = new Label[4];
String[] stringLabels = {"Red", "Green", "Blue", "Opacity"};
GridPane gridPane = new GridPane();
gridPane.setHgap(30);
gridPane.setVgap(5);
gridPane.setPadding(new Insets(25));
gridPane.setAlignment(Pos.CENTER);
for (int i = 0; i < slider.length; i++) {
slider[i] = new Slider();
slider[i].setMin(0);
if (!stringLabels[i].equals("Opacity")) {
slider[i].setMax(255);
slider[i].setValue(255);
} else {
slider[i].setMax(1);
slider[i].setValue(1);
}
labels[i] = new Label(stringLabels[i]);
slider[i].valueProperty()
.addListener((obser, old, newV) -> text.setFill(getColor()));
gridPane.add(labels[i], 0, i);
gridPane.add(slider[i], 1, i);
}
StackPane stackPane = new StackPane(text);
stackPane.setPrefSize(315, 65);
BorderPane borderPane = new BorderPane(stackPane);
borderPane.setBottom(gridPane);
primaryStage.setScene(new Scene(borderPane));
primaryStage.setTitle("Color Changer");
primaryStage.show();
}
private Color getColor() {
// r g b o
double[] rgb = new double[4];
for (int i = 0; i < rgb.length; i++) {
rgb[i] = slider[i].getValue();
}
return Color.rgb((int)rgb[0], (int)rgb[1], (int)rgb[2], rgb[3]);
}
public static void main(String[] args) {
Application.launch(args);
}}
```
When I build it and play with the sliders, this is what it looks like.
[](https://i.stack.imgur.com/TBAkxm.png)
How can I edit the colors of the text "Red", "Green", and "Blue", so the text colors match the words like this?
[](https://i.stack.imgur.com/oAkvh.png)
I believe it has something to do with making an HBox? I tried it with that but didn't know how to do it correctly. I also tried making variables stringLabels1, stringLabels2, stringLabels3, and stringLabels4 for each of the strings, but had trouble with that in the gridPane portion. I'm having trouble coding either of those.
Please help, thank you.
|
You could use [Color#web](https://docs.oracle.com/javase/8/javafx/api/javafx/scene/paint/Color.html) method:
```
for (int i = 0; i < slider.length; i++) {
slider[i] = new Slider();
slider[i].setMin(0);
labels[i] = new Label(stringLabels[i]);
if (!stringLabels[i].equals("Opacity")) {
slider[i].setMax(255);
slider[i].setValue(255);
labels[i].setTextFill(Color.web(stringLabels[i])); //css alternative: labels[i].setStyle("-fx-text-fill: " + stringLabels[i]);
} else {
slider[i].setMax(1);
slider[i].setValue(1);
}
slider[i].valueProperty()
.addListener((obser, old, newV) -> text.setFill(getColor()));
gridPane.add(labels[i], 0, i);
gridPane.add(slider[i], 1, i);
}
```
---
Side note: you could use one listener for all four sliders:
```
ChangeListener<Number> listener = (obser, old, newV) -> text.setFill(getColor());
for (int i = 0; i < slider.length; i++) {
..
slider[i].valueProperty().addListener(listener);
..
}
```
|
Removing NA in correlation matrix
I am doing a correlation matrix for a dataframe of 4000 variables and I would like to remove the variables showing > 0.5 correlation, so I am using this command from the {caret} package.
```
removeme <- findCorrelation(corrMatrix, cutoff = 0.5, verbose = FALSE)
Error in if (mean(x[i, -i]) > mean(x[-j, j])) { :
missing value where TRUE/FALSE needed
```
The data I have is highly variable, and I get NA values here and there. To start with, I couldn't find anything on the help page of this command that deals with NA values, so I decided to remove the NA values myself.
Some variables show NA values all the way across the data, and some show a few NA values. I am trying to remove the variables that are causing any NA values, so that I can use the above command. Here's a minimal example of what my data looks like:
```
df <- structure(list(GK = 1:10, HGF = c(0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L), HJI = c(2L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L),
HDF = c(5L, 6L, 8L, 9L, 5L, 2L, 4L, 3L, 2L, 1L), KLJG = c(0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L), KLJA = c(0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L), KDA = c(10L, 11L, 15L, 18L,
11L, 10L, 10L, 15L, 12L, 13L), OIE = c(NA, NA, NA, NA, NA,
NA, NA, NA, NA, NA), AFE = c(0L, 0L, 0L, 1L, 0L, 0L, NA,
NA, NA, NA)), .Names = c("GK", "HGF", "HJI", "HDF", "KLJG",
"KLJA", "KDA", "OIE", "AFE"), class = "data.frame", row.names = c(NA,
-10L))
corrMatrix <- cor(df,use="pairwise.complete.obs")
```
What would be the best way to get rid of these annoying variables? I have tried many commands but did not find an ideal one that would get rid of them. Here is one of my attempts:
```
removeme <- corrMatrix[,which(as.numeric(rowSums(is.na(corrMatrix))) > 100)]
```
The issue with this command is that if there are over 100 faulty variables (giving NA in the correlation matrix), the normal variables will be removed as well, since their columns will also have > 100 NA values.
I hope this edit made my question more clear. Cheers.
|
If you simply want to get rid of any column that has one or more `NA`s, then just do
```
x<-x[,colSums(is.na(x))==0]
```
However, even with missing data, you can compute a correlation matrix with no `NA` values by specifying the `use` parameter in the function `cor`. Setting it to either `pairwise.complete.obs` or `complete.obs` will result in a correlation matrix with no `NA`s.
`complete.obs` will ignore all rows with missing data, whereas `pairwise.complete.obs` will just ignore the missing pairs of data. Note that although `pairwise.complete.obs` "sounds better" because it uses more of the available data, it isn't guaranteed to produce a positive-definite correlation matrix, which could be a problem.
```
> set.seed(123)
> x<-array(rnorm(500),c(100,5))
> x[sample(500,3)]<-NA
> cor(x)
[,1] [,2] [,3] [,4] [,5]
[1,] 1 NA NA NA NA
[2,] NA 1 NA NA NA
[3,] NA NA 1 NA NA
[4,] NA NA NA 1.00000000 -0.01925986
[5,] NA NA NA -0.01925986 1.00000000
> cor(x,use="pairwise.complete.obs")
[,1] [,2] [,3] [,4] [,5]
[1,] 1.00000000 -0.04377085 -0.18049501 -0.04914247 -0.19374986
[2,] -0.04377085 1.00000000 0.01296008 0.02606083 -0.12333765
[3,] -0.18049501 0.01296008 1.00000000 -0.03218139 -0.02675554
[4,] -0.04914247 0.02606083 -0.03218139 1.00000000 -0.01925986
[5,] -0.19374986 -0.12333765 -0.02675554 -0.01925986 1.00000000
> cor(x,use="complete.obs")
[,1] [,2] [,3] [,4] [,5]
[1,] 1.00000000 -0.06263112 -0.17914810 -0.02574970 -0.20504268
[2,] -0.06263112 1.00000000 0.01263764 0.02543900 -0.12571570
[3,] -0.17914810 0.01263764 1.00000000 -0.03866312 -0.02520500
[4,] -0.02574970 0.02543900 -0.03866312 1.00000000 -0.01688848
[5,] -0.20504268 -0.12571570 -0.02520500 -0.01688848 1.00000000
```
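Tying this back to `findCorrelation`: a minimal sketch (on the question's `df`, with caret loaded) that first drops the variables whose entire row of correlations is `NA` (the all-`NA` and zero-variance columns) before calling it:

```r
corrMatrix <- cor(df, use = "pairwise.complete.obs")
allNA <- rowSums(is.na(corrMatrix)) == ncol(corrMatrix) # e.g. HGF, KLJG, KLJA, OIE
corrClean <- corrMatrix[!allNA, !allNA]
removeme <- findCorrelation(corrClean, cutoff = 0.5)
```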
|
SettingsProviderAttribute replacement for application-level custom SettingsProvider
In a .NET application, if you have specific Settings needs, such as storing them in a DB, then you can replace `LocalFileSettingsProvider` with a custom settings provider of your own, examples:
[Create a Custom Settings Provider to Share Settings Between Applications](http://www.codeproject.com/Articles/136152/Create-a-Custom-Settings-Provider-to-Share-Setting)
[Creating a Custom Settings Provider](http://www.codeproject.com/Articles/20917/Creating-a-Custom-Settings-Provider)
To declare that a settings class (one that inherits `ApplicationSettingsBase`) should use a specific provider, you decorate it with `SettingsProviderAttribute` and pass your provider type as a parameter, `[SettingsProvider(typeof(MyCustomProvider))]`; otherwise it will use the default `LocalFileSettingsProvider`.
My question: **Is there a configuration or a trick I could use to force my application to use my custom provider through-out the application without using an attribute?**
The reason is that I am loading plugins via MEF and the plugins might be written by a 3rd party; I don't want them to be concerned with how settings are dealt with.
|
You could try the following code. It basically changes the default provider to an arbitrary one during the construction of the Settings object. Note that I never tested this code.
```
internal sealed partial class Settings {
public Settings() {
SettingsProvider provider = CreateAnArbitraryProviderHere();
// Try to re-use an existing provider, since we cannot have multiple providers
// with same name.
if (Providers[provider.Name] == null)
Providers.Add(provider);
else
provider = Providers[provider.Name];
// Change default provider.
foreach (SettingsProperty property in Properties)
{
if (
property.PropertyType.GetCustomAttributes(
typeof(SettingsProviderAttribute),
false
).Length == 0
)
{
property.Provider = provider;
}
}
}
}
```
|
use jQuery to expand/collapse ul list - having problems
I'm trying to create a blog archive list which shows all articles by year and month (which I've done with PHP/MySQL)
Now I'm trying to make it so that on page load, all years are collapsed except the latest year/month and also that each will collapse/expand on click.
At the moment my jQuery click function will open or close all of the li elements rather than just the one I click. I'm still pretty new to jQuery, so I'm not sure how to make it affect only the list section that I click on.
Any help would be grand!
Here's my code so far (the list is generated from PHP/MySQL loops)
```
<ul class="archive_year">
<li id="years">2012</li>
<ul class="archive_month">
<li id="months">September</li>
<ul class="archive_posts">
<li id="posts">Product Review</li>
<li id="posts">UK men forgotten how to act like Gentlemen</li>
<li id="posts">What Do Mormons Believe? Ex-Mormon Speaks Out</li>
<li id="posts">Here is a new post with lots of text and a long title</li>
</ul>
<li id="months">August</li>
<ul class="archive_posts">
<li id="posts">A blog post with an image!</li>
</ul>
</ul>
<li id="years">2011</li>
<ul class="archive_month">
<li id="months">July</li>
<ul class="archive_posts">
<li id="posts">New Blog!</li>
</ul>
</ul>
<li id="years">2009</li>
<ul class="archive_month">
<li id="months">January</li>
<ul class="archive_posts">
<li id="posts">Photography 101</li>
</ul>
</ul>
</ul>
```
And here is the jQuery so far:
```
$(document).ready(function() {
//$(".archive_month ul:gt(0)").hide();
$('.archive_month ul').hide();
$('.archive_year > li').click(function() {
$(this).parent().find('ul').slideToggle();
});
$('.archive_month > li').click(function() {
$(this).parent().find('ul').slideToggle();
});
});
```
I was experimenting with `$(".archive_month ul:gt(0)").hide();` but it didn't work as expected; it would switch the open and closed around.
Any help/thoughts?
Also, here is a fiddle for live example: <http://jsfiddle.net/MrLuke/VNkM2/1/>
|
First about the issues:
1. **ID-s must be unique!**
2. You have to properly nest your `<li>`-s
---
And here is how you can solve the problem - [**DEMO**](http://jsfiddle.net/VNkM2/2/)
***jQuery***
```
$('.archive_month ul').hide();
$('.months').click(function() {
$(this).find('ul').slideToggle();
});
```
***HTML*** *(fixed)*
```
<ul class="archive_year">
<li class="years">2012
<ul class="archive_month">
<li class="months">September
<ul class="archive_posts">
<li class="posts">Article 1</li>
<li class="posts">Article 2</li>
<li class="posts">Article 3</li>
<li class="posts">Article 4</li>
</ul>
</li>
<li class="months">August
<ul class="archive_posts">
<li class="posts">Article 1</li>
</ul>
</li>
</ul>
</li>
<li class="years">2011
<ul class="archive_month">
<li class="months">July
<ul class="archive_posts">
<li class="posts">Article 1</li>
</ul>
</li>
</ul>
</li>
<li class="years">2009
<ul class="archive_month">
<li class="months">January
<ul class="archive_posts">
<li class="posts">Article 1</li>
</ul>
</li>
</ul>
</li>
</ul>
```
|
Jarque-Bera normality test in R
The Jarque-Bera normality test has significant p-values even when there is skewness and kurtosis. Does that mean the test is inferring that the data distribution is approximately normal?
|
You may have misunderstood something about hypothesis testing or maybe about goodness-of-fit tests, or perhaps specifically about the ["Jarque-Bera" test\*](http://en.wikipedia.org/wiki/Jarque%E2%80%93Bera_test).
Note that you reject when the p-value is small, which happens when the skewness and kurtosis differ from their expected values under normality.
The test statistic is of the form (from page 1 of Bowman and Shenton's [paper](http://www.jstor.org/discover/10.2307/2335355)):
$$\frac{n}{6} S^2 + \frac{n}{24} (K-3)^2\,,$$
where $S$ is the sample skewness and $K$ is the sample kurtosis (i.e. $K-3$ is 'excess kurtosis')
The null hypothesis is of normality, and rejection of the hypothesis (because of a significant p-value) leads to the conclusion that the distribution from which the data came is non-normal.
The test is specifically looking for skewness and kurtosis that differ from those of the normal (it squares the standardized deviations and sums them) and will tend to be significant when skewness or kurtosis deviates from the values expected under normality.
Which is to say - when you get a significant test statistic with this test, it's explicitly because the sample skewness or kurtosis (or both) are different from what you expect to see in a sample from a normal distribution.
Take care, however -- the asymptotic approximation on which the test is based comes in only *very* slowly (see the image near the bottom of [this answer](https://stats.stackexchange.com/questions/81074/how-useful-is-the-clt-in-applications/81087#81087); also [see here](https://stats.stackexchange.com/a/1723/805) and [here](https://stats.stackexchange.com/questions/62291/can-one-measure-the-degree-of-empirical-data-being-gaussian/62320#62320) for some additional points). I wouldn't rely on it without simulating the distribution of the test statistic unless $n$ is a good deal larger than say 100.
Here's an example of the joint distribution in normal samples at n=30 (simulated values):

-- as you see, not at all close to bivariate normal.
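For reference, a minimal sketch of such a simulation in base R, using the moment-based skewness and kurtosis (one common variant):

```r
set.seed(1)
n <- 30
jb <- replicate(10000, {
  x <- rnorm(n)
  m2 <- mean((x - mean(x))^2)
  S <- mean((x - mean(x))^3) / m2^1.5   # sample skewness
  K <- mean((x - mean(x))^4) / m2^2     # sample kurtosis
  n/6 * S^2 + n/24 * (K - 3)^2
})
quantile(jb, 0.95)   # simulated small-sample 5% critical value ...
qchisq(0.95, df = 2) # ... vs the asymptotic one (about 5.99)
```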
---
\*(The development of the test precedes their 1980 paper; it shouldn't be named for them. D'Agostino & Pearson (1973), and then Bowman & Shenton (1975), were there well before, for example, and the latter discussed the relevant issues in more detail (including the slow convergence and the shape of the joint distribution in small samples - though their diagrams seem as if they may contain an error), but one can readily see that the idea of basing a goodness of fit test on skewness and kurtosis together comes even earlier than those prior papers.)
|
Javascript complicated valueOf method
What does `({}).valueOf.call(myvar)` do?
>
> *it converts any value to an object (an object remains unchanged, a primitive is converted to an instance of a wrapper type).*
>
>
>
My question is: how? Can someone give a longer answer explaining how this is done behind the scenes, since the `valueOf()` method is meant to return primitive values, not objects?
```
console.log({name:"sameer"}.valueOf());
// returns an object, but it can't be displayed as such: toString() will be
// called by JS, so "[object Object]" (a string) gets displayed.
// How do I display the exact return value of valueOf()? Is there a way?
```
|
Hello [again](https://stackoverflow.com/questions/16559957/object-prototype-valueof-method)! Once more, we face the mighty opponent. Before we begin, let's dispel one false thought:
>
> valueOf() method is meant to return primitive values not object .
>
>
>
Not accurate. `valueOf` returns an object if a primitive value was passed to it. If you do `valueOf(object)`, you'd get the same object: `valueOf(object) === object`. You can trivially see that:
```
var obj = {};
obj.valueOf() === obj; //true
```
Now, for the more interesting question: How is `valueOf` defined? Let's look at the ES5 specification along with the v8 and spidermonkey sources.
`valueOf` ([spec](http://es5.github.io/#x15.2.4.4), [v8](https://github.com/v8/v8/blob/d9f3c73f3dd06df708014726b307d9001cfec7cd/src/v8natives.js#L244), [spidermonkey](https://github.com/mozilla/mozilla-central/blob/15f32ad930343f949f3cd427e083f877c1a0f336/js/src/builtin/Object.cpp#L352)):
```
function ObjectValueOf() {
return ToObject(this);
}
```
As we can see, it simply returns `ToObject`, as defined in the spec. The rabbit hole emerges.
`ToObject` ([spec](http://es5.github.io/#x9.9), [v8](https://github.com/v8/v8/blob/d9f3c73f3dd06df708014726b307d9001cfec7cd/src/runtime.js#L583), [spidermonkey](https://github.com/mozilla/mozilla-central/blob/15f32ad930343f949f3cd427e083f877c1a0f336/js/src/jsobj.h#L1420))
```
function ToObject(x) {
if (IS_STRING(x)) return new $String(x);
if (IS_SYMBOL(x)) return new $Symbol(x);
if (IS_NUMBER(x)) return new $Number(x);
if (IS_BOOLEAN(x)) return new $Boolean(x);
if (IS_NULL_OR_UNDEFINED(x) && !IS_UNDETECTABLE(x)) {
throw %MakeTypeError('null_to_object', []);
}
return x;
}
```
Jackpot. We can see the entire flow here. If it's a string, number, boolean, etc return a wrapper (`$String` and `$Boolean` and the likes represent the actual String or Number; see [here](https://github.com/v8/v8/blob/d9f3c73f3dd06df708014726b307d9001cfec7cd/src/runtime.js#L42)); if it's an invalid argument, throw an error; otherwise, return the argument.
The spidermonkey source for that one goes deeper down the rabbit hole. It defines `ToObject` as such:
```
JS_ALWAYS_INLINE JSObject *
ToObject(JSContext *cx, HandleValue vp)
{
if (vp.isObject())
return &vp.toObject();
return ToObjectSlow(cx, vp, false);
}
```
So if it's not an Object, call `ToObjectSlow`. Buckle up Alice, there'll be C++. We need to take a look at what [`ToObjectSlow`](https://github.com/mozilla/mozilla-central/blob/15f32ad930343f949f3cd427e083f877c1a0f336/js/src/jsobj.cpp#L4871) does:
```
JSObject *
js::ToObjectSlow(JSContext *cx, HandleValue val, bool reportScanStack)
{
if (val.isNullOrUndefined()) {
...error throwing magic here...
return NULL;
}
return PrimitiveToObject(cx, val);
}
```
More indirection after checking whether the argument was null or undefined. The finale is [here](https://github.com/mozilla/mozilla-central/blob/15f32ad930343f949f3cd427e083f877c1a0f336/js/src/jsobj.cpp#L4829):
```
JSObject *
PrimitiveToObject(JSContext *cx, const Value &v)
{
if (v.isString()) {
Rooted<JSString*> str(cx, v.toString());
return StringObject::create(cx, str);
}
if (v.isNumber())
return NumberObject::create(cx, v.toNumber());
JS_ASSERT(v.isBoolean());
return BooleanObject::create(cx, v.toBoolean());
}
```
Pretty much the same as the v8 version, only with different taxonomy.
---
Now, as I said before, I think your question has more to do with the medium of representing the object you see. Firebug and chrome's devtools are more than apt at displaying an object. However, if you try to `alert` it, you'll see the unfortunate `[object Object]`, because that's what `({}).toString()` gives you (since it gives out a string of the form `[object InternalClassName]`, again, as we've seen before).
As a bonus, try `console.dir({foo : 'bar'})`
|
Automatically copy property values from one object to another of a different type but the same protocol (Objective-C)
I have two classes with the same set of properties, declared using the @property directive in a protocol, they both implement. Now I was wondering if it is possible to automatically populate an instance of the first class with the values from an instance of the second class (and vice-versa).
I would like this approach to be robust, so that if I change the set of properties declared in the protocol there will be no need to add extra code in the copying methods.
|
Yes, given the exact context there could be various approaches to this problem.
One I can think of at the moment is to first get all the properties of the source object and then use `setValue:value forKey:key` to set the values on the target object.
Code to retrieve all custom properties:
```
#import <objc/runtime.h> // for objc_property_t and class_copyPropertyList

-(NSSet *)propertyNames {
NSMutableSet *propNames = [NSMutableSet set];
unsigned int outCount, i;
objc_property_t *properties = class_copyPropertyList([self class], &outCount);
for (i = 0; i < outCount; i++) {
objc_property_t property = properties[i];
NSString *propertyName = [[[NSString alloc]
initWithCString:property_getName(property)] autorelease];
[propNames addObject:propertyName];
}
free(properties);
return propNames;
}
```
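With that in place, the copying itself is a short KVC loop. Here is a sketch, where the method name `copyPropertiesFrom:` is made up for illustration:

```
- (void)copyPropertiesFrom:(id)source
{
    for (NSString *key in [source propertyNames]) {
        [self setValue:[source valueForKey:key] forKey:key];
    }
}
```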
You may want to checkout the [Key-Value Coding Programming Guide](http://developer.apple.com/library/ios/#documentation/Cocoa/Conceptual/KeyValueCoding/Articles/KeyValueCoding.html) for more information.
|
Is there a shorthand for std::lock\_guard lock(m)?
Exactly what the question states. In C++, ideally 11, but curious about 14 and later too, is there a shorthand syntax for:
```
std::mutex someMutex;
std::lock_guard<std::mutex> lg(someMutex);
```
Ideally something that infers the type of mutex to avoid the refactoring if I ever wanted to change to a `std::recursive_mutex`.
In other words, a way to do this:
```
std::mutex someMutex;
std::lock_guard lg(someMutex);
```
Or
```
auto lg = make_lock_guard(someMutex);
```
For all the type deduction powers of modern C++, it just seems awfully redundant to go typing `std::lock_guard<std::mutex>` every time I want to make one.
|
For pre-C++17:
```
template<class Mutex>
std::lock_guard<Mutex> make_lock_guard(Mutex& mutex) {
mutex.lock();
return { mutex, std::adopt_lock };
}
```
Use as:
```
std::mutex someMutex;
auto&& lg = make_lock_guard(someMutex);
```
This takes advantage of the fact that copy-list-initialization doesn't create an additional temporary (even conceptually). The one-parameter constructor is `explicit` and can't be used for copy-list-initialization, so we lock the mutex first and then use the `std::adopt_lock` constructor.
The return value is then directly bound to `lg`, which extends its lifetime to that of the reference, once again creating no temporary (even conceptually) in the process.
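For completeness: from C++17 onwards, class template argument deduction makes the helper unnecessary, and the syntax from the question just works:

```
std::mutex someMutex;
std::lock_guard lg(someMutex); // deduced as std::lock_guard<std::mutex>
```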
|
ReadAllLines for a Stream object?
There exists a `File.ReadAllLines` but not a `Stream.ReadAllLines`.
```
using (Stream stream = Assembly.GetExecutingAssembly().GetManifestResourceStream("Test_Resources.Resources.Accounts.txt"))
using (StreamReader reader = new StreamReader(stream))
{
// Would prefer string[] result = reader.ReadAllLines();
string result = reader.ReadToEnd();
}
```
Does there exist a way to do this or do I have to manually loop through the file line by line?
|
You can write a method which reads line by line, like this:
```
public IEnumerable<string> ReadLines(Func<Stream> streamProvider,
Encoding encoding)
{
using (var stream = streamProvider())
using (var reader = new StreamReader(stream, encoding))
{
string line;
while ((line = reader.ReadLine()) != null)
{
yield return line;
}
}
}
```
Then call it as:
```
var lines = ReadLines(() => Assembly.GetExecutingAssembly()
.GetManifestResourceStream(resourceName),
Encoding.UTF8)
.ToList();
```
The `Func<>` part is to cope when reading more than once, and to avoid leaving streams open unnecessarily. You could easily wrap that code up in a method, of course.
If you don't need it all in memory at once, you don't even need the `ToList`...
|
Access request in \_\_init\_\_ in ModelForm
How can I access the request in the form's `__init__`?
**forms.py**
```
class MyForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
self.request = kwargs.pop('request', None)
super(MyForm, self).__init__(*args, **kwargs)
def clean(self):
... access the request object via self.request ...
```
**views.py**
```
myform = MyForm(request.POST, request=request)
```
but what if I use the class-based view `FormView`?
|
If you need to access the request in your `MyForm`, you can override the `FormView.get_form_kwargs` method.
```
def get_form_kwargs(self):
kwargs = super().get_form_kwargs()
kwargs.update({'request': self.request})
return kwargs
```
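Putting it together, a minimal sketch of the class-based view; the template name and success URL are placeholders:

```
from django.views.generic.edit import FormView

class MyFormView(FormView):
    form_class = MyForm          # the form from the question
    template_name = 'my_form.html'
    success_url = '/thanks/'

    def get_form_kwargs(self):
        kwargs = super().get_form_kwargs()
        kwargs.update({'request': self.request})
        return kwargs
```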
[FormView source](https://github.com/django/django/blob/master/django/views/generic/edit.py)
>
>
> ```
> class FormMixin(ContextMixin):
> ...
> def get_form(self, form_class=None):
> """
> Returns an instance of the form to be used in this view.
> """
> if form_class is None:
> form_class = self.get_form_class()
> return form_class(**self.get_form_kwargs())
>
> def get_form_kwargs(self):
> """
> Returns the keyword arguments for instantiating the form.
> """
> kwargs = {
> 'initial': self.get_initial(),
> 'prefix': self.get_prefix(),
> }
>
> if self.request.method in ('POST', 'PUT'):
> kwargs.update({
> 'data': self.request.POST,
> 'files': self.request.FILES,
> })
> return kwargs
>
> ```
>
>
|
Calculate mean for selected rows for selected columns in pandas data frame
I have a pandas df with, say, 100 rows and 10 columns (the actual data is huge). I also have a `row_index` list which contains which rows are to be considered when taking the mean. I want to calculate the mean on, say, columns 2, 5, 6, 7 and 8. Can we do it with some function for the dataframe object?
What I know is to do a for loop: get the value of the row for each element in `row_index` and keep computing the mean. Is there some direct function where we can pass `row_list`, `column_list` and `axis`, for ex `df.meanAdvance(row_list,column_list,axis=0)`?
I have seen `DataFrame.mean()` but it didn't help, I guess.
```
a b c d q
0 1 2 3 0 5
1 1 2 3 4 5
2 1 1 1 6 1
3 1 0 0 0 0
```
I want the mean of rows `0, 2, 3` for each of the columns `a, b, d`:
```
a b d
0 1 1 2
```
|
To select the rows of your dataframe you can use `iloc`; you can then select the columns you want using square brackets.
For example:
```
df = pd.DataFrame(data=[[1,2,3]]*5, index=range(3, 8), columns = ['a','b','c'])
```
gives the following dataframe:
```
a b c
3 1 2 3
4 1 2 3
5 1 2 3
6 1 2 3
7 1 2 3
```
to select only the third and fifth rows you can do:
```
df.iloc[[2,4]]
```
which returns:
```
a b c
5 1 2 3
7 1 2 3
```
if you then want to select only columns b and c you use the following command:
```
df[['b', 'c']].iloc[[2,4]]
```
which yields:
```
b c
5 2 3
7 2 3
```
To then get the mean of this subset of your dataframe you can use the df.mean function. If you want the means of the columns you can specify axis=0, if you want the means of the rows you can specify axis=1
thus:
```
df[['b', 'c']].iloc[[2,4]].mean(axis=0)
```
returns:
```
b 2
c 3
```
As we should expect from the input dataframe.
For your code you can then do:
```
df[column_list].iloc[row_index_list].mean(axis=0)
```
EDIT after comment:
New question in comment:
I have to store these means in another df/matrix. I have L1, L2, L3, L4...LX lists which tell me the indices whose mean I need for columns C[1, 2, 3]. For example: L1 = [0, 2, 3] means I need the mean of rows 0, 2, 3 and store it in the 1st row of a new df/matrix. Then L2 = [1,4], for which again I will calculate the mean and store it in the 2nd row of the new df/matrix. Similarly till LX, I want the new df to have X rows and len(C) columns. The columns for L1..LX will remain the same. Could you help me with this?
Answer:
If I understand correctly, the following code should do the trick (same df as above; as columns I took 'a' and 'b'):
First you loop over all the lists of rows, collecting the means as pd.Series; then you concatenate the resulting list of series over axis=1, followed by taking the transpose to get it in the right format.
```
dfs = list()
for l in L:
dfs.append(df[['a', 'b']].iloc[l].mean(axis=0))
mean_matrix = pd.concat(dfs, axis=1).T
```
|
Record a macro of windows commands
Using Windows 7, is there an in-built or third-party method of recording windows commands and being able to play them back?
As a very basic example of what I would like to be able to do, imagine having two desktop themes that you like to alternate between. The Windows commands would be something like
- Open personalisation applet
- select second theme
- ok
(I'm calling this a macro because of the equivalent capability in Word.)
|
It sounds like you are looking for [AutoHotKey](http://www.autohotkey.com/):
>
> AutoHotkey is a free, open-source utility for Windows. With it, you can:
>
>
> - Automate almost anything by sending keystrokes and mouse clicks. You can write a mouse or keyboard macro by hand or use the macro recorder.
> - Create hotkeys for keyboard, joystick, and mouse. Virtually any key, button, or combination can become a hotkey.
> - Expand abbreviations as you type them. For example, typing "btw" can automatically produce "by the way".
> - Create custom data-entry forms, user interfaces, and menu bars. See GUI for details.
> - Remap keys and buttons on your keyboard, joystick, and mouse.
> - Respond to signals from hand-held remote controls via the WinLIRC client script.
> - Run existing AutoIt v2 scripts and enhance them with new capabilities.
> - Convert any script into an EXE file that can be run on computers that don't have AutoHotkey installed.
>
>
>
|