Redirect stdout to logger in Python

Can I redirect all output from `stdout` to a logger I have set up with the standard `logging` module? (I have `os.system` calls whose output I'd also like to see, and occasional print statements.)
You might be able to make use of the suggestion in [this post](http://plumberjack.blogspot.com/2009/09/how-to-treat-logger-like-output-stream.html), summarised below (note that it uses the Python 2 `print >>` syntax):

```
import logging

class LoggerWriter:
    def __init__(self, logger, level):
        self.logger = logger
        self.level = level

    def write(self, message):
        if message != '\n':
            self.logger.log(self.level, message)

def main():
    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger("demo")
    info_fp = LoggerWriter(logger, logging.INFO)
    debug_fp = LoggerWriter(logger, logging.DEBUG)
    print >> info_fp, "An INFO message"
    print >> debug_fp, "A DEBUG message"

if __name__ == "__main__":
    main()
```

When run, the script prints:

```
INFO:demo:An INFO message
DEBUG:demo:A DEBUG message
```
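For Python 3, where the `print >> stream` syntax is gone, the same idea can be sketched as a file-like wrapper assigned to `sys.stdout`. This is a hedged adaptation, not code from the linked post: the `flush` no-op and the whitespace stripping are my additions.

```python
import logging
import sys

class LoggerWriter:
    """Minimal file-like object that forwards writes to a logger."""
    def __init__(self, logger, level):
        self.logger = logger
        self.level = level

    def write(self, message):
        # print() usually calls write() twice: once with the text,
        # once with the trailing '\n' -- skip the bare newline.
        if message.strip():
            self.logger.log(self.level, message.rstrip())

    def flush(self):
        # Some callers flush the stream; make that a harmless no-op.
        pass

# Usage (uncomment to route print() output through the logger):
# logging.basicConfig(level=logging.DEBUG)
# sys.stdout = LoggerWriter(logging.getLogger("demo"), logging.INFO)
# print("An INFO message")
```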
Compiling Programs from Within Emacs?

What is the best way to compile programs inside Emacs? I am currently opening a separate buffer with `C-x 3` and running eshell inside it using `M-x eshell`, then invoking either make or clang directly; most of the time I do have a Makefile set up. Is there any advantage to running the compilation process using `M-x compile` vs running make inside eshell? Any other ways to do it, and what are the advantages/disadvantages of those?
The easiest way to do this is to use the Emacs built-in `compile` command. `M-x compile` should do you fine. You can then edit the command that will be run (by default `make -k`) and hit return to run the compilation. Emacs will then parse the output and, if it finds any errors, they will link to the source files so you can open them in a buffer.

Positives about it are:

- Parsing of the output buffer
- Memorisation of the compile command between invocations
- Compilation output is shown in a non-selected buffer, so you can quickly edit the file you were working on and fix any silly errors
- `M-n` and `M-p` scroll by *error messages*

Most of these features are provided by the `compilation-minor-mode` minor mode, though not the actual compilation command and buffer. Once you have run a compilation command in `eshell` you could probably get similar results by enabling `compilation-minor-mode` in that buffer.
Oracle / SQL Developer displaying (null) in column

I have an Oracle table that sometimes has valid null values in some cells. SQL Developer displays these null cells as (null). This is not a problem, but in a grid I have on a JSP page they also display as (null), and I need them to just be blank. NVL does not work unless I use a space. Is there an Oracle setting or something to have valid null cells just be blank? Thank you.
**Modifying SQL Developer Preferences**

Jeff Smith, the product manager for SQL Developer, blogged about this here: [http://www.thatjeffsmith.com](http://www.thatjeffsmith.com/archive/2011/11/sql-developer-quick-tip-take-the-guess-work-out-of-null/). Just navigate to the SQL Developer tool bar as follows: **Preferences > Database > Advanced** and change the value in the field "Display Null Value As" to nothing, as seen here:

![enter image description here](https://i.stack.imgur.com/OxMdb.png)

---

With a JSP, one could create a method for scenarios where a column is null (to display the null as other than '(null)').
How to disable the method return type hint in IntelliJ IDEA Scala plugin

The IntelliJ IDEA Scala plugin automatically shows the method's return type in gray, which is annoying in most cases. How do I suppress this feature?
This feature was added in [IntelliJ IDEA 2018.1](https://www.jetbrains.com/idea/specials/idea/whatsnew.html):

> The editor can now show inline hints for parameter names, method result types, and variable types. You can also easily use the settings to customize when such a hint should be shown.

You can disable it from `Preferences` -> `Editor` -> `General` -> `Appearance`

[![How to switch scala function return type hints](https://i.stack.imgur.com/g6a0p.png)](https://i.stack.imgur.com/g6a0p.png)

Uncheck "Show function return type hints (Scala)".

Or simply right-click on the type hint

[![enter image description here](https://i.stack.imgur.com/GcABO.png)](https://i.stack.imgur.com/GcABO.png)

and click on `Function return type hints`. Once you disable it this way, the settings dialog is the only way to re-enable it.
When to use Android's LiveData and Observable field?

I'm implementing MVVM and data binding, and I'm trying to understand when I should use an Observable field over LiveData. I have already run through different documentation and discovered that LiveData is lifecycle aware, but in sample code on GitHub these two are used in the ViewModel at the same time. So I'm confused: if LiveData is better than an Observable field, why not just use LiveData everywhere?
Both have their use-cases, for instance:

- If you want a lifecycle-tolerant container for your UI state model, `LiveData` is the answer.
- If you want to make the UI update itself when a piece of logic is changed in your view model, then use `ObservableField`s.

I myself prefer using a combination of `LiveData` and `ObservableField/BaseObservable`: the `LiveData` will normally behave as a lifecycle-aware data container and also a channel between the VM and the View. On the other hand, the UI state model objects that are emitted through the `LiveData` are themselves `BaseObservable` or have their fields as `ObservableField`.

That way I can use the `LiveData` for total changes of the UI state, and set values on the UI state model's `ObservableField` fields whenever a small portion of the UI is to be updated.

*Edit*: Here is a quick illustration on a UserProfile component for example (note the backing fields must be `var`, not `val`, for the setters to compile):

**UIStateModel**

```
data class ProfileUIModel(
    private var _name: String,
    private var _age: Int
) : BaseObservable() {
    var name: String
        @Bindable get() = _name
        set(value) {
            _name = value
            notifyPropertyChanged(BR.name)
        }
    var age: Int
        @Bindable get() = _age
        set(value) {
            _age = value
            notifyPropertyChanged(BR.age)
        }
}
```

**ViewModel**

```
class UserProfileViewModel : ViewModel() {

    val profileLiveData: MutableLiveData<ProfileUIModel> = MutableLiveData()

    ...

    // When you need to rebind the whole profile UI object.
    profileLiveData.setValue(profileUIModel)

    ...

    // When you need to update a specific part of the UI.
    // This will trigger the notifyPropertyChanged method on the bindable
    // field "age" and hence notify the UI elements that are observing it.
    profileLiveData.getValue().age = 20
}
```

**View**

You'll observe the profile LiveData changes normally.

**XML**

You'll use data binding to bind the UI state model.

***Edit***: Now the mature me prefers [Immutability](https://medium.com/tribalscale/understanding-immutability-fdd627b66e58) instead of having mutable properties, as explained in the answer.
Why is SSH key authentication failing for this user? (CentOS 7)

I'm trying to debug the fact that a new user account cannot successfully SSH into a CentOS 7 server using RSA key authentication via the command

```
ssh theuser@theserver
```

The following observations can be made:

- The user account (theuser) exists and is not locked
- theuser's home folder contains a .ssh directory (700 permissions) containing an authorized\_keys file (600 permissions)
- the authorized\_keys file contains a copy of the public key from the local machine
- the local machine's ~/.ssh/config file points to the correct key file to use for this server
- ssh can be successfully achieved by entering theuser's password once the key authentication has failed
- ssh by public key can be achieved with a different user account
- the /var/log/secure file does not log anything when the key is refused as theuser

Can anyone suggest any next steps I should take to try and find the source of this problem, as my colleagues and I are stuck?

Edit: included ssh -vvv output

```
debug1: Host 'theserver' is known and matches the ECDSA host key.
debug1: Found key in /Users/ambulare/.ssh/known_hosts:20
debug3: send packet: type 21
debug2: set_newkeys: mode 1
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug3: receive packet: type 21
debug1: SSH2_MSG_NEWKEYS received
debug2: set_newkeys: mode 0
debug1: rekey after 134217728 blocks
debug2: key: /Users/ambulare/.ssh/server_isr_id_rsa_ambulare (0x7fc#obfuscated#), explicit
debug3: send packet: type 5
debug3: receive packet: type 7
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>
debug3: receive packet: type 6
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug3: send packet: type 50
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug3: start over, passed a different list publickey,gssapi-keyex,gssapi-with-mic,password
debug3: preferred publickey,keyboard-interactive,password
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /Users/ambulare/.ssh/server_isr_id_rsa_ambulare
debug3: send_pubkey_test
debug3: send packet: type 50
debug2: we sent a publickey packet, wait for reply
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug2: we did not send a packet, disable method
debug3: authmethod_lookup password
debug3: remaining preferred: ,password
debug3: authmethod_is_enabled password
debug1: Next authentication method: password
```
Thanks to @DevilaN's suggestion, I have resolved the problem.

Trying ssh-copy-id returned the error "permission denied on authorized\_keys". As it was a permissions error, I went back to checking ownership and permissions on the authorized\_keys file, and despite my setting the ownership to theuser on this file (as in my original question), clearly I or my colleagues had done something since the initial setup that had led to the ownership being changed to "root".

It was a simple ownership problem:

```
chown theuser:theuser authorized_keys
```

and voila, ssh is working.

For anyone coming across this answer via a Google search: it seems ownership of the authorized\_keys file by the wrong user will cause an ssh login attempt to fail silently, without returning or logging anywhere that it is a permissions error - until you do ssh-copy-id.
Angular2 subscribing to changes to @Input in Child Component

I have a parent and child component. The parent component has `index` and passes it to the child component as an `@Input`. This `index` value constantly changes in the parent component, and inside my child component I want to run a function every time the parent component changes `index`. The parent has a slide component that constantly changes it. How can I achieve this?

I've tried with the below code (`this.index.subscribe(() =>`), but I get that it's not a function every time I try to initialise the subscription.

EDIT: There is no related code in the `Child` template that could impact this, so it's not provided. `ngOnChanges` doesn't seem to work, as the change is happening in the directive as opposed to the `parent`'s template.

Child:

```
import {Component, OnInit, ViewChild, Input} from '@angular/core';
import {Observable} from 'rxjs/Observable';

@Component({
    selector: "child",
    templateUrl: "components/child/child.html",
})
export class ChildComponent implements OnInit {
    @Input() index: string;
    currentIndex: any;

    constructor() {}

    ngOnInit() {}

    ngOnChanges() {}

    ngAfterViewInit() {
        console.log("index " + this.index);
        this.currentIndex = this.index.subscribe(() => {
            console.log("index " + this.index);
        })
    }

    ngOnDestroy() {
        this.currentIndex.unsubscribe();
    }
}
```

Parent:

```
import {Component, ElementRef, OnInit, ViewChild} from '@angular/core';
import {Page} from "ui/page";
import {ChildComponent} from '/components/child/child.component'

@Component({
    selector: "parent",
    template: '<child [index]="index"></child>',
    directives: [ChildComponent]
})
export class ParentComponent implements OnInit {
    index: string = "0,1";

    constructor(private page: Page) {
    }
}
```
<https://angular.io/docs/ts/latest/api/core/index/Input-var.html>

To quote:

> Angular automatically updates data-bound properties during change detection.

If you need to do some processing on the input, look at the get and set accessors.

<https://angular.io/docs/ts/latest/cookbook/component-communication.html#!#parent-to-child-setter>

From the documentation, here is an example:

```
import { Component, Input } from '@angular/core';

@Component({
  selector: 'name-child',
  template: `
    <h3>"{{name}}"</h3>
  `
})
export class NameChildComponent {
  _name: string = '<no name set>';

  @Input()
  set name(name: string) {
    this._name = (name && name.trim()) || '<no name set>';
  }

  get name() { return this._name; }
}
```

You don't need to use an observable.
How to highlight multiple keywords/words in a string with Regex?

I have the following case that I am trying to solve.

**JavaScript method that highlights keywords in a phrase:**

```
vm.highlightKeywords = (phrase, keywords) => {
    keywords = keywords.split(' ');
    let highlightedFrase = phrase;
    angular.forEach(keywords, keyword => {
        highlightedFrase = highlightedFrase.replace(
            new RegExp(keyword + "(?![^<])*?>)(<\/[a-z]*>)", "gi"),
            function(match) {
                return '<span class="highlighted-search-text">' + match + '</span>';
            });
    });
    return $sce.trustAsHtml(highlightedFrase);
}
```

How can I write a regular expression that will match this case so that I can replace the substrings?

`keywords = 'temperature high'`

`phrase = 'The temperature is <span class="highlight">hig</span>h'`

**Regex case:** <https://regex101.com/r/V8o6gN/5>
If I'm not mistaken, you're basically wanting to find each word from your `keywords` variable and match it in your string so you can wrap it in a span. You'll want to first turn your keywords into a RegExp, then do a global match. Something like this:

```
const keywordsString = "cake pie cookies";
const keywords = keywordsString.split(/\s/);

// equivalent to: /(cake|pie|cookies)/g
const pattern = new RegExp(`(${keywords.join('|')})`, 'g');

const phrase = "I like cake, pie and cookies";
const result = phrase.replace(pattern, match => `<span>${match}</span>`);

console.log(result);
```

Basically, you want a pattern where your keywords are pipe (`|`) separated and wrapped in parentheses (`()`). Then you just want to do a global search (`g` flag) so you match all of them. With the global flag, there is no need to do a loop. You can get them all in one shot.
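The same alternation trick carries over to other languages, with one caveat the approach above glosses over: if a keyword can contain regex metacharacters (`.`, `+`, `(`, ...), it should be escaped before joining. Here is a quick Python sketch illustrating that (the function name is mine, not from the question):

```python
import re

def highlight_keywords(phrase, keywords_string,
                       open_tag="<span>", close_tag="</span>"):
    """Wrap every occurrence of each whitespace-separated keyword in tags."""
    keywords = keywords_string.split()
    # re.escape guards against keywords containing regex metacharacters
    pattern = re.compile("(" + "|".join(re.escape(k) for k in keywords) + ")")
    return pattern.sub(lambda m: open_tag + m.group(0) + close_tag, phrase)

print(highlight_keywords("I like cake and pie", "cake pie"))
# I like <span>cake</span> and <span>pie</span>
```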
Flush a d3 v4 transition

Does someone know of a way to 'flush' a transition? I have a transition defined as follows:

```
this.paths.attr('transform', null)
    .transition()
    .duration(this.duration)
    .ease(d3.easeLinear)
    .attr('transform', 'translate(' + this.xScale(translationX) + ', 0)')
```

I am aware I can do

```
this.paths.interrupt();
```

to stop the transition, but that doesn't finish my animation. I would like to be able to 'flush' the transition, which would immediately finish the animation.
If I understand correctly (and I might not), there is no out-of-the-box solution for this without going under the hood a bit. However, I believe you could build the functionality in a relatively straightforward manner if `selection.interrupt()` is of the form you are looking for.

To do so, you'll want to create a new method for d3 selections that accesses the transition data (located at `selection.node().__transition`). The transition data includes the data on the tweens, the timer, and other transition details, but the most simple solution would be to set the duration to zero, which will force the transition to end and place it in its end state.

*The \_\_transition data variable can have empty slots (of a variable number), which can cause grief in Firefox (as far as I'm aware, when using forEach loops), so I've used a keys approach to get the non-empty slot that contains the transition.*

```
d3.selection.prototype.finish = function() {
  var slots = this.node().__transition;
  var keys = Object.keys(slots);
  keys.forEach(function(d, i) {
    if (slots[d]) slots[d].duration = 0;
  })
}
```

**If working with delays**, you can also trigger the timer callback with something like `if (slots[d]) slots[d].timer._call();`, as setting the delay to zero *does not* affect the transition.
Using this code block you call `selection.finish()`, which will force the transition to its end state; click a circle to invoke the method:

```
d3.selection.prototype.finish = function() {
  var slots = this.node().__transition;
  var keys = Object.keys(slots);
  keys.forEach(function(d, i) {
    if (slots[d]) slots[d].timer._call();
  })
}

var svg = d3.select("body")
  .append("svg")
  .attr("width", 500)
  .attr("height", 500);

var circle = svg.selectAll("circle")
  .data([1, 2, 3, 4, 5, 6, 7, 8])
  .enter()
  .append("circle")
  .attr("cx", 50)
  .attr("cy", function(d) { return d * 50 })
  .attr("r", 20)
  .on("click", function() { d3.select(this).finish() })

circle.transition()
  .delay(function(d) { return d * 500; })
  .duration(function(d) { return d * 5000; })
  .attr("cx", 460)
  .on("end", function() {
    d3.select(this).attr("fill", "steelblue"); // to visualize end event
  })
```

```
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/4.12.0/d3.min.js"></script>
```

Of course, if you wanted to keep the method d3-ish, return the selection so you can chain additional methods on after. And for completeness, you'll want to ensure that there is a transition to finish. With these additions, the new method might look something like:

```
d3.selection.prototype.finish = function() {
  // check if there is a transition to finish:
  if (this.node().__transition) {
    // if there is transition data in any slot in the transition array,
    // call the timer callback:
    var slots = this.node().__transition;
    var keys = Object.keys(slots);
    keys.forEach(function(d, i) {
      if (slots[d]) slots[d].timer._call();
    })
  }
  // return the selection:
  return this;
}
```

Here's a [bl.ock](https://bl.ocks.org/andrew-reid/d92de15ef9694f12cf5695271dd73cb8) of this more complete implementation.

---

The above is for versions 4 and 5 of D3. Replicating this in version 3 is a little more difficult, as timers and transitions were reworked a bit for version 4. In version 3 they are a bit less friendly, but the behavior can be achieved with slight modification.
For completeness, [here's a block](https://bl.ocks.org/Andrew-Reid/ae14de2768b5d07d9fbf554ae02caf02) of a d3v3 example.
Upload Files In Folders Using Google Drive API

I currently have the below code that can successfully upload a file to Google Drive in the root directory. How would I have to change the code so that if `$folderName` exists, it uploads the file under that folder, but if `$folderName` doesn't exist, it creates the folder, calls it `$folderName`, and then adds the file underneath it?

```
function uploadFiles($filePath, $fileName, $folderName) {
    $file = new Google_Service_Drive_DriveFile();
    $file->setName($fileName);
    $file->setDescription('A test document');
    $data = file_get_contents($filePath);
    $createdFile = $this->service->files->create($file, array(
        'data' => $data,
        'uploadType' => 'multipart'
    ));
}
```
I believe your goal and situation are as follows:

- You want to upload a file to a specific folder using googleapis for PHP.
- When the specific folder does not exist, you want to create the folder and upload the file to it.
- When the specific folder exists, you want to upload the file to it.
- You have already been able to upload a file to Google Drive using the Drive API.

### Modification points:

In this case, it is first required to confirm whether the specific folder exists, so the method of "Files: list" in the Drive API is used. The flow of the modified script is as follows:

1. Search for the existing folder using the folder name.
2. When a folder with that name does NOT exist, create it by the folder name (using the method of "Files: create") and return the folder ID of the created folder.
3. When a folder with that name exists, return its folder ID.
4. Upload the file to the folder using the folder ID.

### Modified script:

```
function uploadFiles($filePath, $fileName, $folderName) {
    // 1. Search the existing folder using the folder name.
    $res = $this->service->files->listFiles(array("q" => "name='{$folderName}' and trashed=false"));
    $folderId = '';
    if (count($res->getFiles()) == 0) {
        // 2. When the folder of the folder name is NOT existing, the folder is
        // created by the folder name and the folder ID of the created folder is returned.
        $file = new Google_Service_Drive_DriveFile();
        $file->setName($folderName);
        $file->setMimeType('application/vnd.google-apps.folder');
        $createdFolder = $this->service->files->create($file);
        $folderId = $createdFolder->getId();
    } else {
        // 3. When the folder of the folder name is existing, the folder ID is returned.
        $folderId = $res->getFiles()[0]->getId();
    }

    // 4. The file is uploaded to the folder using the folder ID.
    $file = new Google_Service_Drive_DriveFile();
    $file->setName($fileName);
    $file->setDescription('A test document');
    $file->setParents(array($folderId));
    $data = file_get_contents($filePath);
    $createdFile = $this->service->files->create($file, array(
        'data' => $data,
        'uploadType' => 'multipart'
    ));
}
```

### References:

- [Files: list](https://developers.google.com/drive/api/v3/reference/files/list)
- [Files: create](https://developers.google.com/drive/api/v3/reference/files/create)
Rails Rounding float with different options

I have a form where the user enters a decimal value, and a drop-down with four options: `Dollar (.00)`, `Quarter (.00, .25, .50, .75)`, `Dime (.10, .20, .30 .. .90)` and `Penny (.01, .02, .03 ... .99)`. There is also an option to select either round `UP` or `DOWN`. These options are used to round the value entered by the user.

I monkey-patched the `Float` class and added `round_to_quarter`, which works fine:

```
class Float
  def round_to_quarter
    (self * 4).round / 4.0
  end

  def round_to_dime
    # TODO
  end

  def round_to_penny
    # TODO
  end

  def round_to_dollar
    # TODO
  end
end

9.22.round_to_quarter #=> 9.25
```

How do I round the value for the Dime (.10, .20, .30 .. .90) and Penny (.01, .02, .03 ... .99) options, and round up and down? The Ruby version is 2.2.3.
Here's a generic way to do it for any precision:

```
class Float
  def round_currency(precision: 1, direction: :none)
    round_method = case direction
                   when :none then :round
                   when :up   then :ceil
                   when :down then :floor
                   end
    integer_value = (self * 100).round
    (integer_value / precision.to_f).send(round_method) * precision / 100.0
  end
end

# USAGE
# Precision is defined in pennies: 10 dime, 25 quarter, 100 dollar. 1 penny is the default.
9.37.round_currency(direction: :none, precision: 10) # => 9.4
9.37.round_currency(direction: :up, precision: 25)   # => 9.5
9.37.round_currency(direction: :none)                # => 9.37
```

This code converts the float into an integer first to ensure accuracy. Be wary of using `ceil` and `floor` on floating-point arithmetic directly - due to accuracy errors you could get odd results, e.g. `9.37 * 100 = 936.9999999999999`. If you `floor` that result, you'll end up rounding to 9.36.
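The same keep-everything-in-integer-pennies idea translates directly to other languages; here is a hedged Python sketch of the equivalent logic (the name and signature are mine, not from the Ruby answer):

```python
import math

def round_currency(value, precision=1, direction=None):
    """Round a dollar amount to a multiple of `precision` pennies.

    direction: None for nearest, "up" for ceil, "down" for floor.
    Converting to integer pennies first sidesteps float artifacts
    like 9.37 * 100 == 936.9999999999999.
    """
    fn = {None: round, "up": math.ceil, "down": math.floor}[direction]
    pennies = round(value * 100)  # exact integer pennies
    return fn(pennies / precision) * precision / 100.0

print(round_currency(9.37, precision=10))                  # 9.4
print(round_currency(9.37, precision=25, direction="up"))  # 9.5
```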
NSURLConnection sendSynchronousRequest - background to foreground

I'm using sendSynchronousRequest to get the data from the server. I know that a synchronous request will wait until the data is received for that request. The problem comes when the user by mistake enters some non-existing URL and then tries to get a response.

In this case, if the user goes into the background and then comes into the foreground, it shows only a black screen. It only shows the status bar. Also, it's not showing any background application; I have to press the Home button to come out of my application.

On the simulator, after 1+ minute it shows me the message "Request timed out" (no crash). On the device, within 1 minute the application crashes.

Any suggestion? Any help? This is really a serious issue in my app. Thanks.
Just like Julien said, the watchdog is killing your app. To answer some questions:

- Why doesn't this happen on the simulator? Because when you're debugging, the watchdog leaves your app alone; it can take its time.
- Why does this happen only when the user enters a wrong URL? Because of the system timeout: the system will keep trying for 60 secs if it can't find a server.
- So the problem is synchronous vs asynchronous? No, the problem is the thread. You can do the same operation in a background thread, just don't do it on the main thread, and the watchdog will leave you alone.
- Why is the screen black when the app comes up? Remember, you are doing blocking work on the main thread, the thread that draws...

Hope that was all. Let me know if I missed something.
Delphi RTTI to iterate properties of Generic record type

I have several classes with properties of simple types (Integer, Boolean, string) and some Nullable's:

```
Nullable<T> = record
private
  FValue: T;
  FHasValue: IInterface;
  function GetValue: T;
  function GetHasValue: Boolean;
public
  constructor Create(AValue: T);
  property HasValue: Boolean read GetHasValue;
  property Value: T read GetValue;
end;
```

E.g.

```
TMyClass1 = class(TCommonAncestor)
private
  FNumericvalue: Double;
  FEventTime: Nullable<TDateTime>;
public
  property NumericValue: Double read FNumericValue write FNumericValue;
  property EventTime: Nullable<TDateTime> read FEventTime write FEventTime;
end;
```

and

```
TMyClass2 = class(TCommonAncestor)
private
  FCount: Nullable<Integer>;
  FName: string;
public
  property Count: Nullable<Integer> read FCount write FCount;
  property Name: string read FName write FName;
end;
```

etc.

Given a descendant of TCommonAncestor, I would like to use RTTI to iterate all public properties and list their name and value, unless it is a Nullable where T.HasValue returns false. I am using Delphi XE2.

**EDIT: added what I have so far.**

```
procedure ExtractValues(Item: TCommonAncestor);
var
  c: TRttiContext;
  t: TRttiType;
  p: TRttiProperty;
begin
  c := TRttiContext.Create;
  try
    t := c.GetType(Item.ClassType);
    for p in t.GetProperties do
    begin
      case p.PropertyType.TypeKind of
        tkInteger:
          OutputDebugString(PChar(Format('%s=%s', [p.Name, p.GetValue(Item).ToString])));
        tkRecord:
        begin
          // for Nullable<Double> p.PropertyType.Name contains 'Nullable<System.Double>'
          // but how do I go about accessing properties of this record-type field?
        end;
      end;
    end;
  finally
    c.Free;
  end;
end;
```
The following works for me in XE2:

```
uses
  System.SysUtils, System.TypInfo, System.Rtti, System.StrUtils, Winapi.Windows;

type
  Nullable<T> = record
  private
    FValue: T;
    FHasValue: IInterface;
    function GetHasValue: Boolean;
    function GetValue: T;
    procedure SetValue(const AValue: T);
  public
    constructor Create(AValue: T);
    function ToString: string; // <-- add this for easier use!
    property HasValue: Boolean read GetHasValue;
    property Value: T read GetValue write SetValue;
  end;

  TCommonAncestor = class
  end;

  TMyClass1 = class(TCommonAncestor)
  private
    FNumericvalue: Double;
    FEventTime: Nullable<TDateTime>;
  public
    property NumericValue: Double read FNumericValue write FNumericValue;
    property EventTime: Nullable<TDateTime> read FEventTime write FEventTime;
  end;

  TMyClass2 = class(TCommonAncestor)
  private
    FCount: Nullable<Integer>;
    FName: string;
  public
    property Count: Nullable<Integer> read FCount write FCount;
    property Name: string read FName write FName;
  end;

...

constructor Nullable<T>.Create(AValue: T);
begin
  SetValue(AValue);
end;

function Nullable<T>.GetHasValue: Boolean;
begin
  Result := FHasValue <> nil;
end;

function Nullable<T>.GetValue: T;
begin
  if HasValue then
    Result := FValue
  else
    Result := Default(T);
end;

procedure Nullable<T>.SetValue(const AValue: T);
begin
  FValue := AValue;
  FHasValue := TInterfacedObject.Create;
end;

function Nullable<T>.ToString: string;
begin
  if HasValue then
  begin
    // TValue.ToString() does not output T(Date|Time) values as date/time strings,
    // it outputs them as floating-point numbers instead, so do it manually...
    if TypeInfo(T) = TypeInfo(TDateTime) then
      Result := DateTimeToStr(PDateTime(@FValue)^)
    else if TypeInfo(T) = TypeInfo(TDate) then
      Result := DateToStr(PDateTime(@FValue)^)
    else if TypeInfo(T) = TypeInfo(TTime) then
      Result := TimeToStr(PDateTime(@FValue)^)
    else
      Result := TValue.From<T>(FValue).ToString;
  end else
    Result := '(null)';
end;

procedure ExtractValues(Item: TCommonAncestor);
var
  c: TRttiContext;
  t: TRttiType;
  p: TRttiProperty;
  v: TValue;
  m: TRttiMethod;
  s: string;
begin
  c := TRttiContext.Create;
  t := c.GetType(Item.ClassType);
  for p in t.GetProperties do
  begin
    case p.PropertyType.TypeKind of
      tkRecord:
      begin
        if StartsText('Nullable<', p.PropertyType.Name) then
        begin
          // get Nullable<T> instance...
          v := p.GetValue(Item);
          // invoke Nullable<T>.ToString() method on that instance...
          m := c.GetType(v.TypeInfo).GetMethod('ToString');
          s := m.Invoke(v, []).AsString;
        end else
          s := Format('(record type %s)', [p.PropertyType.Name]);
      end;
    else
      s := p.GetValue(Item).ToString;
    end;
    OutputDebugString(PChar(Format('%s=%s', [p.Name, s])));
  end;
end;
```

```
var
  Item1: TMyClass1;
  Item2: TMyClass2;
begin
  Item1 := TMyClass1.Create;
  try
    Item1.NumericValue := 123.45;
    Item1.EventTime.SetValue(Now);
    ExtractValues(Item1);
    { Output:
      NumericValue=123.45
      EventTime=10/19/2017 1:25:05 PM }
  finally
    Item1.Free;
  end;

  Item1 := TMyClass1.Create;
  try
    Item1.NumericValue := 456.78;
    //Item1.EventTime.SetValue(Now);
    ExtractValues(Item1);
    { Output:
      NumericValue=456.78
      EventTime=(null) }
  finally
    Item1.Free;
  end;

  Item2 := TMyClass2.Create;
  try
    Item2.Count.SetValue(12345);
    Item2.Name := 'test';
    ExtractValues(Item2);
    { Output:
      Count=12345
      Name=test }
  finally
    Item2.Free;
  end;

  Item2 := TMyClass2.Create;
  try
    //Item2.Count.SetValue(12345);
    Item2.Name := 'test2';
    ExtractValues(Item2);
    { Output:
      Count=(null)
      Name=test2 }
  finally
    Item2.Free;
  end;
end;
```
-pthread, -lpthread and minimal dynamic linktime dependencies

This [answer](https://stackoverflow.com/questions/23250863/difference-between-pthread-and-lpthread-while-compiling) suggests `-pthread` is preferable to `-lpthread` because of predefined macros. Empirically, `-pthread` gives me only one extra macro, `#define _REENTRANT 1`, and it also appears to force `libpthread.so.0` as a dynamic linktime dependency. When I compile with `-lpthread`, that dependency is only added if I actually call any of the `pthread` functions. This is preferable to me, because then I wouldn't have to treat multithreaded programs differently in my build scripts.

So my question is: what else is there to `-pthread` vs `-lpthread`, and is it possible to use `-pthread` without forcing said dynamic linktime dependency?

Demonstration:

```
$ echo 'int main(){ return 0; }' | c gcc -include pthread.h -x c - -lpthread && ldd a.out | grep pthread
$ echo 'int main(){ return pthread_self(); }' | c gcc -include pthread.h -x c - -lpthread && ldd a.out | grep pthread
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x0000003000c00000)
$ echo 'int main(){ return 0; }' | c gcc -include pthread.h -x c - -pthread && ldd a.out | grep pthread
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x0000003000c00000)
```
The idea that you should use GCC's special option `-pthread` instead of `-lpthread` is outdated by probably some decade and a half (with respect to glibc, that is). In modern glibc, the switch to threading is entirely dynamic, based on whether the pthreads library is linked or not. Nothing in the glibc headers changes its behavior based on whether `_REENTRANT` is defined.

As an example of the dynamic switching, consider `FILE *` streams. Certain operations on streams are locking, like `putc`. Whether you're compiling a single-threaded program or not, it calls the same `putc` function; it is not re-routed by the preprocessor to a "pthread-aware" `putc`. What happens is that do-nothing stub functions are used to go through the motions of locking and unlocking. These functions get overridden to real ones when the threading library is linked in.

---

I just did a cursory `grep` through the include file tree of a glibc installation. In `features.h`, `_REENTRANT` causes `__USE_REENTRANT` to be defined. In turn, exactly one thing seems to depend on whether `__USE_REENTRANT` is present, but it has a parallel condition which also enables it. Namely, in `<unistd.h>` there is this:

```
#if defined __USE_REENTRANT || defined __USE_POSIX199506
/* Return at most NAME_LEN characters of the login name of the user in NAME.
   If it cannot be determined or some other error occurred, return the error
   code.  Otherwise return 0.

   This function is a possible cancellation point and therefore not
   marked with __THROW.  */
extern int getlogin_r (char *__name, size_t __name_len) __nonnull ((1));
#endif
```

This looks dubious and is obsolete; I can't find it in the master branch of the glibc git repo. And, oh look, **just mere days ago** (December 6) a commit was made on this topic: <https://sourceware.org/git/?p=glibc.git;a=commit;h=c03073774f915fe7841c2b551fe304544143470f>

```
Make _REENTRANT and _THREAD_SAFE aliases for _POSIX_C_SOURCE=199506L.

For many years, the only effect of these macros has been to make
unistd.h declare getlogin_r.  _POSIX_C_SOURCE >= 199506L also causes
this function to be declared.  However, people who don't carefully
read all the headers might be confused into thinking they need to
define _REENTRANT for any threaded code (as was indeed the case a
long time ago).
```

Among the changes:

```
--- a/posix/unistd.h
+++ b/posix/unistd.h
@@ -849,7 +849,7 @@ extern int tcsetpgrp (int __fd, __pid_t __pgrp_id) __THROW;
    This function is a possible cancellation point and therefore not
    marked with __THROW.  */
 extern char *getlogin (void);
-#if defined __USE_REENTRANT || defined __USE_POSIX199506
+#ifdef __USE_POSIX199506
 /* Return at most NAME_LEN characters of the login name of the user
    in NAME.  If it cannot be determined or some other error occurred,
    return the error code.  Otherwise return 0.
```

**See?** :)
Python sort using key and lambda, what does lambda do? So I have a list of values like so: ``` { "values": [ { "date": "2015-04-15T11:15:34", "val": 30 }, { "val": 90, "date": "2015-04-19T11:15:34" }, { "val": 25, "date": "2015-04-16T11:15:34" } ] } ``` that I'm parsing in with Python's default JSON parser into a list like so: ``` with open(file) as f: data = json.load(f) values = data["values"] ``` I'm then trying to sort the data by date like so: ``` values.sort(key=lambda values: values["date"]) ``` And this works (to my knowledge). My question is why does it work? If I can't access values["date"] then why can I use this lambda function? values can't take a key like "date", only an integer. What I mean by this is I can only access values like so: values[0], values[1], etc... because it's a list not a dictionary. So if this lambda function's equivalent is this: ``` def some_method(values): return values["date"] ``` then this is invalid because values is a list not a dictionary. I can't access values["date"]. So why can I just pass in the date through the function like this? Also if you could explain lambda in depth that would be appreciated. I've read other posts on stack overflow about it but they just don't make sense to me. Updated question with more information to make the problem more clear.
A `lambda` expression is simply a concise way of writing a function. It's especially handy in cases like the one you give where you only need to use the function once, and it needs to be used as an expression (e.g. an argument to a function). Here's an alternative version of your example, using `def` statement instead of a `lambda` expression: ``` def keyfunc(values): return values["date"] values.sort(key=keyfunc) ``` That's two lines longer, and leaves behind an extra variable in the current namespace. If the lambda version is just as clear as the `def` version, it's generally a better choice. It looks like your confusion may come from the extra use of the name `values` in the function. That's simply a poorly chosen argument name. The `list.sort` method will call the `key` function once for each value in the list, passing the value as the first positional argument. That value will be bound to whatever variable name is used in the function declaration (regardless of whether it's a `def` or a `lambda`). In your example, a better name might be `val` or `item`, since it's going to be just a single item from the `values` list. The name could really be whatever you want (and indeed, `values` works fine, it just looks confusing). This would be clearer: ``` values.sort(key=lambda val: val["date"]) ``` Or: ``` def keyfunc(val): return val["date"] values.sort(key=keyfunc) ```
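To make the "called once per element" point concrete, here is a self-contained sketch using made-up sample data in the same shape as the question's JSON:

```python
values = [
    {"date": "2015-04-15T11:15:34", "val": 30},
    {"date": "2015-04-19T11:15:34", "val": 90},
    {"date": "2015-04-16T11:15:34", "val": 25},
]

# The key function is called once per element; each call receives ONE
# dict from the list, not the whole list.
values.sort(key=lambda item: item["date"])

print([v["val"] for v in values])  # [30, 25, 90]
```

ISO-8601 timestamp strings sort lexicographically in chronological order, which is why comparing the raw strings works here.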
Best way to incorporate Volley (or other library) into Android Studio project I've seen different advice on the best way to do this [This question](https://stackoverflow.com/questions/17218295/android-gradle-build-system-create-jar-not-library) covers creating a jar. Elsewhere, I've seen advice to simply copy the volley source into your own project. [This section](http://tools.android.com/tech-docs/new-build-system/user-guide#TOC-Library-projects) on libraries at android.com would seem the most authoritative. However, after compiling volley, I don't have an aar library, whereas that section says I should have. So my question is this: I have an existing Android Studio project with a standard layout, and a git repository; what should I do to add volley? Where should I download it to? How should I add it to Android Studio? Which Gradle files, if any, do I need to modify. Hopefully, for those of you have done this a few times, this should be bread-and-butter stuff, but I haven't been able to find a straightforward description. -- **Updating**, per Scott Barta's suggestion. The gradle.build file in the volley repository has this line. ``` apply plugin: 'android-library' ``` According to [the documentation](http://tools.android.com/tech-docs/new-build-system/user-guide#TOC-Library-projects): "*Library projects do not generate an APK, they generate a .aar package (which stands for Android archive).*" However, when I build the volley project, no .aar is created. My feeling is that as Volley is a library project, created by the Android team, it is most probably intended to be generated and used as .aar package. Any advice on whether it would be preferable to generate a .aar, and how to do that, would be appreciated.
**LATEST UPDATE:** Use the official version from jCenter instead. ``` dependencies { compile 'com.android.volley:volley:1.0.0' } ``` The dependency below points to a deprecated Volley that is no longer maintained. **ORIGINAL ANSWER** You can use this in the dependencies section of your build.gradle file to use Volley ``` dependencies { compile 'com.mcxiaoke.volley:library-aar:1.0.0' } ``` **UPDATED:** It's not official but a mirror copy of the official Volley. It is regularly synced and updated with the official Volley repository, so you can go ahead and use it without any worry. <https://github.com/mcxiaoke/android-volley>
AngularJS, ngRepeat, and default checked radio button When using ngRepeat, 3 pairs of radio buttons can be created using the following code: ``` <div ng-repeat="i in [1,2,3]"> <input type="radio" name="radio{{i}}" id="radioA{{i}}" value="A" checked> A <input type="radio" name="radio{{i}}" id="radioB{{i}}" value="B"> B </div> ``` For some reason, only the last pair of radio buttons generated by ngRepeat is affected by the `checked` attribute. Is this because of the way AngularJS updates the view? Is there a way to fix it?
That is possibly because when the browser renders the radio buttons (as ng-repeat expands), all your radios have the same name, i.e. `name="radio{{i}}"` (Angular has not expanded it yet), hence the checked property is not applied properly among all of them. So you would need to use [`ng-attr-name`](https://docs.angularjs.org/guide/directive#-ngattr-attribute-bindings) so that Angular adds the expanded name attribute later. So try: ``` <div ng-repeat="i in [1,2,3]"> <input type="radio" ng-attr-name="radio{{i}}" ng-attr-id="radioA{{i}}" value="A" checked> A <input type="radio" ng-attr-name="radio{{i}}" ng-attr-id="radioB{{i}}" value="B"> B </div> ``` Or use [`ng-checked="true"`](https://docs.angularjs.org/api/ng/directive/ngChecked) so that the checked attribute is applied as the ng-checked directive expands, e.g.: ``` <input type="radio" name="radio{{i}}" ng-attr-id="radioA{{i}}" value="A" ng-checked="true"> A ``` ``` <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script> <div ng-app> <div ng-repeat="i in [1,2,3]"> <input type="radio" ng-attr-name="radio{{i}}" ng-attr-id="radioA{{i}}" value="A" checked> A <input type="radio" ng-attr-name="radio{{i}}" ng-attr-id="radioB{{i}}" value="B"> B </div> </div> ```
Can I access child elements within a directive in Angular2? I'm trying to create a directive that accepts in input a `icon` property which would be the icon name. So the directive internally would try to find a `span` element where it will apply a class. I wonder if this is possible from within the directive applied to the parent. Or do I have to create a directive for the child too? Here's my HTML code: ``` <div sfw-navbar-square sfw-navbar-icon> <span class="mdi mdi-magnify"></span> </div> ``` Here's the directive itself: ``` import { Directive, ElementRef, Renderer } from '@angular/core'; @Directive({ selector: '[sfw-navbar-square]' }) export class NavbarSquareDirective { // Here I'd like to define a input prop that takes a string constructor(private el: ElementRef, private renderer: Renderer) { this.renderer.setElementClass(this.el.nativeElement, 'navbar-square-item', true); this.renderer.setElementClass(this.el.nativeElement, 'pointer', true); this.renderer.setElementClass(this.el.nativeElement, 'met-blue-hover', true); // Here I'd like to pass that string as a class for the span child element. Can I have access to it from here? } } ```
You can just use an input as you normally would. DOM manipulation would normally be done in the ngAfterViewInit when all views are initialized, but it will probably also work in the ngOnInit as the icon property will be set and you don't have any ViewChildren you try to access. HTML: ``` <div sfw-navbar-square [sfwNavbarIcon]="'my-icon'"> <span class="mdi mdi-magnify"></span> </div> ``` Here's the directive itself (Angular 4): ``` import { Directive, ElementRef, Renderer2 } from '@angular/core'; @Directive({ selector: '[sfw-navbar-square]' }) export class NavbarSquareDirective { @Input('sfwNavbarIcon') icon:string; constructor(private el: ElementRef, private renderer: Renderer2) { this.renderer.addClass(this.el.nativeElement, 'navbar-square-item'); this.renderer.addClass(this.el.nativeElement, 'pointer'); this.renderer.addClass(this.el.nativeElement, 'met-blue-hover'); } ngAfterViewInit() { let span = this.el.nativeElement.querySelector('span'); this.renderer.addClass(span, this.icon); } } ```
How can I make my R session vanilla? This is a follow up for clarification of a previous question, [How can I ensure a consistent R environment among different users on the same server?](https://stackoverflow.com/questions/12519273/how-can-i-ensure-a-consistent-r-environment-among-different-users-on-the-same-se) I'd like to enter a "vanilla" R session from within R, e.g. similar to what I would obtain if I launched R using the command `R --vanilla`. For example, I would like to write a script that is not confounded by a particular user's custom settings. In particular, I'd like the following - doesn't read R history, profile, or environment files - doesn't reload data or objects from previous sessions `help("vanilla")` does not return anything, and I am not familiar enough with the scope of custom settings to know how to get out of all of them. Is there a way to enter new, vanilla environment? (`?new.env` does not seem to help)
You can't just make your current session vanilla, but you can start a fresh vanilla R session from within R like this ``` > .Last <- function() system("R --vanilla") > q("no") ``` --- I think you'll probably run into a problem using the above as is because after R restarts, the rest of your script will not execute. With the following code, R will run `.Last` before it quits.  `.Last` will tell it to restart without reading the site file or environment file, and without printing startup messages. Upon restarting, it will run your code (as well as doing some other cleanup). ``` wd <- getwd() setwd(tempdir()) assign(".First", function() { #require("yourPackage") file.remove(".RData") # already been loaded rm(".Last", pos=.GlobalEnv) #otherwise, won't be able to quit R without it restarting setwd(wd) ## Add your code here message("my code is running.\n") }, pos=.GlobalEnv) assign(".Last", function() { system("R --no-site-file --no-environ --quiet") }, pos=.GlobalEnv) save.image() # so we can load it back when R restarts q("no") ```
Trust Store vs Key Store - creating with keytool I understand that the keystore would usually hold private/public keys and the trust store only public keys (and represents the list of trusted parties you intend to communicate with). Well, that's my first assumption, so if that's not correct, I probably haven't started very well... **I was interested though in understanding how / when you distinguish the stores when using keytool.** So, far I've created a keystore using ``` keytool -import -alias bob -file bob.crt -keystore keystore.ks ``` which creates my keystore.ks file. I answer `yes` to the question do I trust bob but it is unclear to me if this has created a keystore file or a truststore file? I can set up my application to use the file as either. ``` -Djavax.net.ssl.keyStore=keystore.ks -Djavax.net.ssl.keyStorePassword=x -Djavax.net.ssl.trustStore=keystore.ks -Djavax.net.ssl.trustStorePassword=x ``` and with `System.setProperty( "javax.net.debug", "ssl")` set, I can see the certificate under trusted certifications (but not under the keystore section). The particular certificate I'm importing has only a public key and I intend to use it to send stuff over an SSL connection to Bob (but perhaps that's best left for another question!). Any pointers or clarifications would be much appreciated. Is the output of keytool the same whatever you import and its just convention that says one is a keystore and the other a trust store? What's the relationship when using SSL etc?
The terminology is a bit confusing indeed, but both `javax.net.ssl.keyStore` and `javax.net.ssl.trustStore` are used to specify which keystores to use, for two different purposes. Keystores come in various formats and are not even necessarily files (see [this question](https://stackoverflow.com/questions/6157550/question-on-java-keystores/6157716#6157716)), and `keytool` is just a tool to perform various operations on them (import/export/list/...). The `javax.net.ssl.keyStore` and `javax.net.ssl.trustStore` parameters are the default parameters used to build `KeyManager`s and `TrustManager`s (respectively), then used to build an `SSLContext` which essentially contains the SSL/TLS settings to use when making an SSL/TLS connection via an `SSLSocketFactory` or an `SSLEngine`. These system properties are just where the default values come from, which is then used by `SSLContext.getDefault()`, itself used by `SSLSocketFactory.getDefault()` for example. (All of this can be customized via the API in a number of places, if you don't want to use the default values and that specific `SSLContext`s for a given purpose.) The difference between the `KeyManager` and `TrustManager` (and thus between `javax.net.ssl.keyStore` and `javax.net.ssl.trustStore`) is as follows (quoted from the [JSSE ref guide](http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#RelsTM_KM)): > > TrustManager: Determines whether the > remote authentication credentials (and > thus the connection) should be > trusted. > > > KeyManager: Determines which > authentication credentials to send to > the remote host. > > > (Other parameters are available and their default values are described in the [JSSE ref guide](http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#Customization). Note that while there is a default value for the trust store, there isn't one for the key store.) 
Essentially, the keystore in `javax.net.ssl.keyStore` is meant to contain your private keys and certificates, whereas the `javax.net.ssl.trustStore` is meant to contain the CA certificates you're willing to trust when a remote party presents its certificate. In some cases, they can be one and the same store, although it's often better practice to use distinct stores (especially when they're file-based).
How to distribute Swift Library without exposing the source code? The first thing I tried is to create a [static library](https://stackoverflow.com/questions/24041962/static-linking-with-swift-xcode6-beta) but later I found out that it's not supported yet. Apple Xcode Beta 4 Release Notes: > > Xcode does not support building static libraries that include Swift > code. (17181019) > > > I was hoping that Apple would be able to add this in the next Beta release or the GA version but I read the following on [their blog](https://developer.apple.com/swift/blog/?id=2): > > While your app’s runtime > compatibility is ensured, the Swift language itself will continue to > evolve, and the binary interface will also change. To be safe, all > components of your app should be built with the same version of Xcode > and the Swift compiler to ensure that they work together. > > > This means that frameworks need to be managed carefully. For instance, > if your project uses frameworks to share code with an embedded > extension, you will want to build the frameworks, app, and extensions > together. **It would be dangerous to rely upon binary frameworks that > use Swift — especially from third parties**. As Swift changes, those > frameworks will be incompatible with the rest of your app. When the > binary interface stabilizes **in a year or two, the Swift runtime will > become part of the host OS and this limitation will no longer exist.** > > > The news is really alarming for me, a person who writes components for other developers to use and include in their apps. Does this mean that I have to distribute the source code or wait for two years? Is there any other way to distribute the library without exposing the code (company policy)? Update: Is Swift code obfuscation an option at this point?
Swift is beta now, and even for 1.0 Apple has been pretty clear they're after a restricted feature set -- better to do a small number of things well than to try to do everything. So for now, there's no way to distribute binary static libraries. Presumably that'll change sometime after Swift 1.0. For now, you can: - Distribute source - Ship a binary *framework* (instead of a library) if you're okay with the ABI being fragile - Use ObjC for library code You can always combine approaches, too: e.g., implement the critical (secret) details of your library in ObjC, and ship Swift source that wraps it in a nice Swift API. Obfuscating code written in a language that's very much subject to change sounds like a recipe for a maintenance nightmare.
vector method push\_all is not found for a custom struct So in this simple example ``` #![feature(collections)] struct User { reference: String, email: String } fn main() { let rows = vec![ vec!["abcd".to_string(), "[email protected]".to_string()], vec!["efgh".to_string(), "[email protected]".to_string()], vec!["wfee".to_string(), "[email protected]".to_string()], vec!["rrgr".to_string(), "[email protected]".to_string()] ]; let mut rows_mut: Vec<Vec<String>> = Vec::new(); rows_mut.push_all(&rows); let mut users_mut: Vec<User> = Vec::new(); let users = vec![ User { reference: "ref1".to_string(), email: "[email protected]".to_string() }, User { reference: "ref2".to_string(), email: "[email protected]".to_string() } ]; users_mut.push_all(&users); } ``` I'm getting an error ``` src/main.rs:24:12: 24:28 error: no method named `push_all` found for type `collections::vec::Vec<User>` in the current scope src/main.rs:24 users_mut.push_all(&users); ^~~~~~~~~~~~~~~~ error: aborting due to previous error ``` Why does it work for `Vec<String>`, but not for `Vec<User>`? Is the only way in this case to iterate and add elements one by one?
Look at [the definition of `push_all`](http://doc.rust-lang.org/std/vec/struct.Vec.html#method.push_all): ``` impl<T> Vec<T> where T: Clone { fn push_all(&mut self, other: &[T]); } ``` > > Appends all elements in a slice to the `Vec`. > > > Iterates over the slice `other`, **clones each element**, and then appends it to this `Vec`. The `other` vector is traversed in-order. > > > (Emphasis mine.) Your type must implement `Clone` because it clones each value. `String` does; `User` doesn’t. You can add `#[derive(Clone)]` to it. If you are willing to consume the source vector, you should use `x.extend(y.into_iter())` which avoids needing to clone the values. Of course, for this trivial case if it’s purely the difference in `mut`ness, just add the `mut` in the initial pattern (if it’s a function argument this works too, the bit before the colon in each argument is a pattern, like with `let`, so `fn foo(mut x: Vec<T>) { … }` works fine and is equivalent to `fn foo(x: Vec<T>) { let mut x = x; … }`.)
How do I get a single total of lines with `wc -l`? I've added a git alias to give me the line counts of specific files in my history: ``` [alias] lines = !lc() { git ls-files -z ${1} | xargs -0 wc -l; }; lc ``` However, `wc -l` is reporting multiple totals, such that if I have more than ~100k lines, it reports the total for them, then moves on. Here's an example: ### <100k lines (desired output) ``` $ git lines \*.xslt 46 packages/NUnit-2.5.10.11092/doc/files/Summary.xslt 232 packages/NUnit-2.5.10.11092/samples/csharp/_UpgradeReport_Files/UpgradeReport.xslt 278 total ``` ### >100k lines (had to pipe to `grep "total"`) ``` $ git lines \*.cs | grep "total" 123569 total 107700 total 134796 total 111411 total 44600 total ``` How do I get a true total from `wc -l`, not a series of subtotals?
Try this, and apologies for being obvious: ``` cat *.cs | wc -l ``` or, with git: ``` git ls-files -z ${1} | xargs -0 cat | wc -l ``` If you actually want the output to look like `wc` output, with both individual counts and a sum, you could use `awk` to add up the individual lines: ``` git ls-files -z ${1} | xargs -0 wc -l | awk '/^[[:space:]]*[[:digit:]]+[[:space:]]+total$/{next} {total+=$1;print} END {print total,"total"}' ``` That won't be lined up as nicely as `wc` does it, in case that matters to you. To do that, you'd need to read the entire input and save it, computing the total, and then use the total to compute the field width before using that field width to print a formatted output of the remembered lines. Like home renovation projects, `awk` scripts are never ever really finished. (Note to enthusiastic editors: the regular expression in the first `awk` condition is in case there is a file whose name starts with "total" and a space; otherwise, the condition could have been the much simpler `$2 == "total"`.)
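The reason the piped form works: `xargs` splits a very long argument list into several `wc` invocations (hence several subtotals), while `cat | wc -l` counts one combined stream. A minimal demonstration with throw-away files:

```shell
printf 'a\nb\n' > f1.txt        # 2 lines
printf 'c\nd\ne\n' > f2.txt     # 3 lines
wc -l f1.txt f2.txt             # per-file counts plus a "total" line
cat f1.txt f2.txt | wc -l       # a single number: 5
```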
Redis Pub/Sub with Spring Data Redis: Messages arrive in wrong order I am attempting to implement a chat using Redis publish/subscribe with Spring Data Redis. I use the RedisTemplate to publish messages, shown below: ``` public class RedisPublisher { @Autowired private RedisTemplate<String, Object> redisTemplate; public void publish(ChannelTopic channelTopic, Object channelMessage) { redisTemplate.convertAndSend(channelTopic.getTopic(), channelMessage); } } ``` And to receive messages I have a MessageListener as shown below: ``` public class RedisConsumer implements MessageListener { MessageSerializer serializer = new MessageSerializer(); AtomicInteger atomicInteger = new AtomicInteger(0); @Override public void onMessage(Message message, byte[] pattern) { Object obj = serializer.deserialize(message.getBody()); if(obj != null && obj instanceof RedisMessage) { System.err.println("Received message(" + atomicInteger.incrementAndGet() + ") " + obj.toString()); } } ``` Messages are published like so: ``` final ChannelTopic channelTopic=connectionManager.subscribe("topic"); new Thread(new Runnable() { public void run() { Thread.sleep(5000); for (int i = 0; i < 10; i++) { redisPublisher.publish(channelTopic, new RedisMessage(i + 1)); } } }).run(); ``` However, the received messages appear to be delivered in wrong order: ``` Received message(1) message id: 3 Received message(2) message id: 2 Received message(3) message id: 1 Received message(4) message id: 4 Received message(5) message id: 5 Received message(6) message id: 6 Received message(7) message id: 7 Received message(8) message id: 8 Received message(9) message id: 9 Received message(10) message id: 10 ``` Is it possible to send/receive messages synchronously using the RedisTemplate/MessageListener provided by Spring? The current code-base is small and can be viewed at [GitHub](https://github.com/abergz/Messaging/tree/master/src/main/java/com/messaging "GitHub-link").
Redis PubSub is known to deliver the messages in order (guaranteed at least if you use one connection to trigger `PUBLISH`; the `PUBLISH` command returns the number of clients that were notified). The cause of the out-of-order delivery is the way Spring Data Redis dispatches the messages by default. The notification is handled on different threads and that's the reason. Thanks for the code, it helped me to quickly reproduce the behavior. I can think of two possible strategies to address this issue: 1. You can, however, supply an executor that honors the order within `RedisMessageListenerContainer`. Right now, any form of synchronization I'm thinking of would harm performance. 2. Implement your own message listener on top of `BinaryJedisPubSub`. You're in control of the messages and you can avoid the executor issue. HTH, Mark
Twitter share button does not display twitter card The Card validator is displaying my summary card correctly, however, when I actually try to share it by clicking the "twitter-share-button" it does not display a twitter card, it only shows a regular tweet. **This what the validator says:** > > Your site is whitelisted for summary\_large\_image card > > > INFO: Page fetched successfully INFO: 18 metatags were found INFO: > twitter:card = summary\_large\_image tag found INFO: Card loaded > successfully > > > **These are the tags I'm using:** ``` <meta name="twitter:card" content="summary_large_image"> <meta name="twitter:site" content="@username"> <meta name="twitter:creator" content="@username"> <meta name="twitter:title" content="Some Title"> <meta name="twitter:description" content="Some description."> <meta name="twitter:image" content="http://example.com/images/dog.jpg"> ``` **This is the html:** ``` <a class="twitter-share-button" href="https://twitter.com/intent/tweet?text=Testing Testing Testing"> Tweet</a> ```
For anyone struggling with the same issue, I'm responding to my own question. Here's what I found out: **1.** First off, keep in mind that the card only shows in the final posted Tweet - not in the Tweet composer window. However, in my case, it wasn't showing in the final posted tweet either. **2.** You have to include the link to the page you want to share after the text parameter. Additionally you need to encode its URL value. So, instead of `text=http://example.com` use `text=http%3A%2F%2Fexample.com`. If you go to <http://www.w3schools.com/tags/ref_urlencode.asp> you can make this encoding conversion automatically. **3.** I've been told that sometimes there is a delay as twitter's crawler caches your site, which would explain why the card doesn't show in the final posted tweet in your first attempts to share it. In my case it took 48 hours, after which, clicking the share button will post the card immediately. Hope this may be helpful to someone else...
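Point 2 can also be done programmatically instead of via an online converter. A small sketch (the text and URL values here are hypothetical) using the built-in `encodeURIComponent`:

```javascript
const text = "Testing Testing Testing";
const url = "http://example.com";

// Build the intent link with properly encoded parameter values.
const share = "https://twitter.com/intent/tweet" +
  "?text=" + encodeURIComponent(text) +
  "&url=" + encodeURIComponent(url);

console.log(share);
// https://twitter.com/intent/tweet?text=Testing%20Testing%20Testing&url=http%3A%2F%2Fexample.com
```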
What are pending signals? When looking at the limits of a running process, I see ``` Max pending signals 15725 ``` - What is this? - How can I determine a sensible value for a busy service? Generally, I can't seem to find a page that explains what each limit is. Some are pretty self-explanatory (max open files), some less so (max msgqueue size).
According to the [manual page](http://linux.die.net/man/2/sigpending) of `sigpending`: > > sigpending() returns the set of signals that are pending for delivery > to the calling thread (i.e., the signals which have been raised while > blocked). > > > So what is meant are the signals (SIGTERM, SIGKILL, SIGSTOP, ...) that are waiting until the process comes out of the `D` (uninterruptible sleep) state. Usually a process is in that state when it is waiting for I/O. That sleep can't be interrupted. Even SIGKILL (`kill -9`) can't, and the kernel waits until the process wakes up (the signal remains pending for delivery until then). For the other unclear values, I would take a look at the [manual page](http://linux.die.net/man/5/limits.conf) of `limits.conf`.
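On a POSIX system you can watch a signal become "pending" from Python: block it, raise it, and it sits in the pending set until it can be delivered (a sketch; `SIGUSR1` is an arbitrary choice):

```python
import os
import signal

# Block SIGUSR1, then send it to ourselves: it cannot be delivered,
# so the kernel parks it in the pending set.
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
os.kill(os.getpid(), signal.SIGUSR1)

pending = signal.sigpending()
print(signal.SIGUSR1 in pending)  # True

# Ignore it before unblocking; POSIX discards a pending signal whose
# disposition is set to SIG_IGN, so delivery won't kill the process.
signal.signal(signal.SIGUSR1, signal.SIG_IGN)
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1})
print(signal.SIGUSR1 in signal.sigpending())  # False
```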
Understanding pdo mysql transactions The [PHP Documentation](http://php.net/manual/en/pdo.transactions.php) says: > > If you've never encountered transactions before, they offer 4 major > features: Atomicity, Consistency, Isolation and Durability (ACID). In > layman's terms, any work carried out in a transaction, even if it is > carried out in stages, is guaranteed to be applied to the database > safely, and without interference from other connections, when it is > committed. > > > **QUESTION:** Does this mean that I can have two separate php scripts running transactions simultaneously without them interfering with one another? --- **ELABORATING ON WHAT I MEAN BY** "*INTERFERING*"**:** Imagine we have the following `employees` table: ``` __________________________ | id | name | salary | |------+--------+----------| | 1 | ana | 10000 | |------+--------+----------| ``` If I have two scripts with similar/same code and they run at the exact same time: **script1.php** and **script2.php** (both have the same code): ``` $conn->beginTransaction(); $stmt = $conn->prepare("SELECT * FROM employees WHERE name = ?"); $stmt->execute(['ana']); $row = $stmt->fetch(PDO::FETCH_ASSOC); $salary = $row['salary']; $salary = $salary + 1000;//increasing salary $stmt = $conn->prepare("UPDATE employees SET salary = {$salary} WHERE name = ?"); $stmt->execute(['ana']); $conn->commit(); ``` and assuming the sequence of events is as follows: - **script1.php** selects data - **script2.php** selects data - **script1.php** updates data - **script2.php** updates data - **script1.php** commit() happens - **script2.php** commit() happens What would the resulting salary of ana be in this case? - Would it be 11000? And would this then mean that 1 transaction will overlap the other because the information was obtained before either commit happened? - Would it be 12000? 
And would this then mean that regardless of the order in which data was updated and selected, the `commit()` function forced these to happen individually? Please feel free to elaborate as much as you want on how transactions and separate scripts can interfere (or don't interfere) with one another.
You are not going to find the answer in the PHP documentation because this has nothing to do with PHP or PDO. The InnoDB table engine in MySQL offers 4 so-called [isolation levels](http://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html) in line with the SQL standard. The isolation levels in conjunction with blocking / non-blocking reads will determine the result of the above example. You need to understand the implications of the various isolation levels and choose the appropriate one for your needs. To sum up: if you use the serialisable isolation level with autocommit turned off, then the result will be 12000. In all other isolation levels, and serialisable with autocommit enabled, the result will be 11000. If you start using locking reads, then the result could be 12000 under all isolation levels.
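One practical footnote: the read-modify-write in the question can be collapsed into a single atomic `UPDATE`, which yields 12000 regardless of isolation level because each increment is applied to the current value inside the database, so two concurrent runs cannot both read the stale 10000. A runnable sketch (using SQLite from Python's standard library purely for demonstration; the statement has the same shape for MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, salary INTEGER)")
conn.execute("INSERT INTO employees VALUES (1, 'ana', 10000)")

# Two "scripts" each run the atomic form of the increment:
for _ in range(2):
    conn.execute(
        "UPDATE employees SET salary = salary + 1000 WHERE name = ?", ("ana",)
    )
conn.commit()

row = conn.execute("SELECT salary FROM employees WHERE name = 'ana'").fetchone()
print(row[0])  # 12000
```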
How should the interquartile range be calculated in Python? I have a list of numbers `[1, 2, 3, 4, 5, 6, 7]` and I want to have a function to return the interquartile range of this list of numbers. The interquartile range is the difference between the upper and lower quartiles. I have attempted to calculate the interquartile range using NumPy functions and using Wolfram Alpha. I find all of the answers, from my manual one, to the NumPy one, to the Wolfram Alpha one, to be different. I do not know why this is. My attempt in Python is as follows: ``` >>> a = numpy.array([1, 2, 3, 4, 5, 6, 7]) >>> numpy.percentile(a, 25) 2.5 >>> numpy.percentile(a, 75) 5.5 >>> numpy.percentile(a, 75) - numpy.percentile(a, 25) # IQR 3.0 ``` My attempt in Wolfram Alpha is as follows: - ["first quartile 1, 2, 3, 4, 5, 6, 7": 2.25](http://www.wolframalpha.com/input/?i=first+quartile+1%2C+2%2C+3%2C+4%2C+5%2C+6%2C+7) - ["third quartile 1, 2, 3, 4, 5, 6, 7": 5.75](http://www.wolframalpha.com/input/?i=third+quartile+1%2C+2%2C+3%2C+4%2C+5%2C+6%2C+7) - (comment: 5.75 - 2.25 = 3.5) - ["interquartile range 1, 2, 3, 4, 5, 6, 7": ~3.5](https://www.wolframalpha.com/input/?i=interquartile+range+1%2C+2%2C+3%2C+4%2C+5%2C+6%2C+7) So, I find that the values returned by NumPy and Wolfram Alpha for what I think are the first quartile, the third quartile and the interquartile range are not consistent. Why is this? What should I be doing in Python to calculate the interquartile range correctly? As far as I am aware, the interquartile range of `[1, 2, 3, 4, 5, 6, 7]` should be the following: ``` median(5, 6, 7) - median(1, 2, 3) = 4. ```
You have 7 numbers which you are attempting to split into quartiles. Because 7 is not divisible by 4, there are a couple of different ways to do this, as mentioned [here](https://en.wikipedia.org/wiki/Quartile). Your way is the first given by that link; Wolfram Alpha seems to be using the third. NumPy is doing basically the same thing as Wolfram Alpha, however it's interpolating based on percentiles (as shown [here](https://en.wikipedia.org/wiki/Percentile#The_Linear_Interpolation_Between_Closest_Ranks_method)) rather than quartiles, so it's getting a different answer. You can choose how NumPy handles this using the `interpolation` option of `numpy.percentile` (I tried to link to the documentation but apparently I'm only allowed two links per post). You'll have to choose which definition you prefer for your application.
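To see the two definitions side by side (a sketch, not part of the original answer): NumPy's default linear interpolation gives 3.0, while the "median of each half, excluding the overall median" (Moore & McCabe) definition you computed by hand gives 4.0:

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5, 6, 7])

# NumPy's default: linear interpolation between closest ranks
q1, q3 = np.percentile(a, [25, 75])
print(q3 - q1)  # 3.0

# Moore & McCabe: medians of the two halves, excluding the overall median
lower, upper = a[: len(a) // 2], a[(len(a) + 1) // 2 :]
iqr_mm = np.median(upper) - np.median(lower)
print(iqr_mm)  # 4.0
```

Neither is "wrong" — they are simply different conventions that converge as the sample grows.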
momentjs startOf does not work on existing date object I'm trying to use moment to get the start of a day. I get different results with the following: ``` moment().startOf('day'); //good moment(new Date()).startOf('day'); //this does not work! ``` fiddle: <https://jsfiddle.net/y1of77wx/> The practical case is that I'm doing this in a function that takes in a date object as an argument: ``` function doWork(dt) { return moment(dt).startOf('day'); } ``` I'm sure the solution is simple but I'm just missing something.
I suggest using [`format()`](http://momentjs.com/docs/#/displaying/format/) to display the value of a moment object. As the [Internal Properties](http://momentjs.com/guides/#/lib-concepts/internal-properties/) guide states:

> 
> Moment objects have several internal properties that are prefixed with `_`.
> 
> 
> The most commonly viewed internal property is the `_d` property that holds the JavaScript Date that Moment wrappers. Frequently, developers are confused by console output of the value of `_d`.
> 
> 
> ...
> 
> 
> To print out the value of a Moment, use `.format()`, `.toString()` or `.toISOString()`
> 
> 

Here is a snippet showing the correct results:

```
console.log(moment().startOf('day').format());
console.log(moment(new Date()).startOf('day').format());
```

```
<script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.min.js"></script>
```
why android.hardware.telephony feature is being added automatically in android app While releasing the apk to the Play Store, I found out that my app requires the `android.hardware.telephony` feature, but I haven't added it in the manifest anywhere. I have also checked the merged manifest in Android Studio and it also does not contain this feature, so I think no third-party SDK is adding this. What could be the source of this feature? For reference, I have the following permissions declared in the manifest:

```
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.GET_ACCOUNTS" />
<uses-permission android:name="android.permission.MANAGE_ACCOUNTS" />
<uses-permission android:name="android.permission.READ_PHONE_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.READ_SMS" />
<uses-permission android:name="android.permission.RECEIVE_SMS" />
<uses-permission android:name="android.permission.READ_CONTACTS" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
```

Also, when I make it optional using the below code, the app is available on devices without this feature:

```
<uses-feature
    android:name="android.hardware.telephony"
    android:required="false" />
```

So why is it required by default without adding it anywhere?
Google Play automatically adds some features, depending on which permissions you have requested. As you have requested the `READ_SMS` and `RECEIVE_SMS` permissions, this implies you use the `telephony` feature. So, Google Play reacts as if you had the following in your `AndroidManifest.xml`:

```
<uses-feature
    android:name="android.hardware.telephony"
    android:required="true" />
```

When you manually add this and declare it `required="false"`, this tells Google Play that whilst you do ask for the permission, you can handle the case where the user does not have the `telephony` feature.

This is confirmed via [this note in the docs](https://developer.android.com/guide/practices/compatibility.html#Features):

> 
> Note: Some system permissions implicitly require the availability of a device feature. For example, if your app requests permission to access to BLUETOOTH, this implicitly requires the FEATURE\_BLUETOOTH device feature.
> 
> 

The full list of permissions and the feature requirements implied is [available here](https://developer.android.com/guide/topics/manifest/uses-feature-element.html#permissions), and includes your situation:

[![sms permissions implying telephony](https://i.stack.imgur.com/R1Jfx.png)](https://i.stack.imgur.com/R1Jfx.png)

Finally, with your `ACCESS_FINE_LOCATION` you are also implicitly declaring a feature requirement on `android.hardware.location` (and `android.hardware.location.gps`), just for your information.

There's also further detailed information over on [the GameDev StackExchange](https://gamedev.stackexchange.com/a/112920/115906).
Java: Get rid of `Cipher.init()` overhead I need to increase performance of the following method: ``` private byte[] decrypt(final byte[] encrypted, final Key key) throws ... { this.cipher.init(Cipher.DECRYPT_MODE, key); return this.cipher.doFinal(encrypted); } ``` The `cipher` object ("AES/ECB/NoPadding") is initialized in the constructor, so that it can be reused. ECB is used since the `encrypted` array will contain always only 16 bytes of data (i.e. 1 block of data). 128 bit key is used. This method is called millions of times to decrypt 16 byte chunk, each time with a different key. E.g. the method is called like this: ``` final List<Key> keys = List with millions of keys final byte[] data = new byte[] { ... 16 bytes of data go here ...} for (final Key key : keys) { final byte[] encrypted = decrypt(data, key); // Do something with encrypted } ``` The `Cipher.init()` takes most of the time of the `decrypt` method, since the data is so small. I.e. with over 12 million invocations, `Cipher.init()` takes 3 microseconds on average while `Cipher.doFinal()` takes < 0.5 microseconds on average. - What takes so long in `Cipher.init()`? - Is there any way to speed up this code using Java only? For instance by taking advantage of the fact I will be decrypting always only a single block of data? - Would it be faster to use a C/C++ implementation and call it using JNI? If so, is there any well-established library for that? I use JDK 1.8.0\_73 and my processor supports AES-NI.
> 
> What takes so long in Cipher.init()?
> 
> 

During initialization the user-supplied key data is expanded into the session keys used by the encryption and decryption routines.

> 
> Is there any way to speed up this code using Java only? For instance by taking advantage of the fact I will be decrypting always only a single block of data?
> 
> 

Yes, but this would require rewriting the essential part of the AES algorithm. You may find it in the JDK sources in [AESCrypt.java](http://hg.openjdk.java.net/jdk8u/jdk8u/jdk/file/0f6de62683a2/src/share/classes/com/sun/crypto/provider/AESCrypt.java). Alternatively, you may store a list of preinitialized Ciphers instead of Keys.

> 
> Would it be faster to use a C/C++ implementation and call it using JNI? If so, is there any well-established library for that?
> 
> 

Most likely, yes. `libcrypto`, which is a part of OpenSSL, will help. [Here is an example](https://wiki.openssl.org/index.php/EVP_Symmetric_Encryption_and_Decryption).
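To make the "preinitialized Ciphers" suggestion concrete, here is a hedged sketch (the all-zero key and the 16-byte block are made up for the demo): pay `Cipher.init()` once per key, keep the initialized instance, and reuse it for every block decrypted with that key.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class CipherReuse {
    public static void main(String[] args) throws Exception {
        // Hypothetical demo key; in practice each key comes from your list.
        SecretKeySpec key = new SecretKeySpec(new byte[16], "AES");

        // Init once per key, then reuse the instances in the hot loop:
        Cipher enc = Cipher.getInstance("AES/ECB/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key);
        Cipher dec = Cipher.getInstance("AES/ECB/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key);

        byte[] block = new byte[16];
        Arrays.fill(block, (byte) 7);

        // doFinal() may now be called many times without re-paying init():
        byte[] roundTrip = dec.doFinal(enc.doFinal(block));
        System.out.println(Arrays.equals(block, roundTrip)); // prints true
    }
}
```

Whether this helps depends on how often each key is reused; with millions of distinct single-use keys you still pay one init per key, just moved out of the measurement loop.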
When does scheme evaluate quote? `(car ''abracadabra)` is equivalent to `(car (quote (quote abracadabra)))`, and it evaluates to `(car (quote abracadabra))` --> `quote`

On the other hand `(car (quote (a b)))` evaluates to `a`, which to me makes sense intuitively.

So my question is, why does Scheme not evaluate the second quote in `(car (quote (quote abracadabra)))` (i.e. evaluate `(car (quote abracadabra))` to `(car abracadabra)`), but does evaluate the quote in `(car (quote (a b)))` (i.e. why isn't the answer `quote`)?
In this expression: ``` (car (quote (quote abracadabra))) => 'quote ``` The inner `quote` doesn't get evaluated, it's just a *symbol*, with no particular meaning. You might as well change it for anything else, with the same results: ``` (car (quote (foobar abracadabra))) => 'foobar ``` Inside a quoted expression, other expressions won't get evaluated. We could use [quasiquoting](http://docs.racket-lang.org/reference/quasiquote.html) to force evaluation, now this will attempt to evaluate the inner quote, resulting in a different error for each case: ``` (car (quasiquote (unquote (quote abracadabra)))) ; (car `,(quote abracadabra)) => car: contract violation expected: pair? given: 'abracadabra (car (quasiquote (unquote (foobar abracadabra)))) ; (car `,(foobar abracadabra)) => foobar: unbound identifier in module in: foobar ```
Uncaught SyntaxError: Unexpected token - due to double backslash I seem to have a strange problem: I get an "Uncaught SyntaxError: Unexpected token P" error. It is due to the double backslash. But a double backslash is needed to escape a backslash, and this seems to be 100% valid JSON which is generated from PHP's json\_encode function.

```
var urls = '{"MyApp\\Posts\\Post":"foo","MyApp\\Threads\\Thread":"bar"}';
obj = jQuery.parseJSON(urls);
```
If you `console.log(urls)`, you can see the string value that is passed to the JSON parser:

```
{"MyApp\Posts\Post":"foo","MyApp\Threads\Thread":"bar"}
```

However, `\` is the escape character in JSON and `\P` is an invalid escape sequence.

---

"The problem" is that backslash is also the escape character in a JS string. If you want to produce a literal backslash in a JS string for JSON, you have to double escape it:

```
var urls = '{"MyApp\\\\Posts\\\\Post":"foo","MyApp\\\\Threads\\\\Thread":"bar"}';
```

That said, there is no value in having a string literal with JSON in JS. You can just use an object literal:

```
var urls = {"MyApp\\Posts\\Post":"foo","MyApp\\Threads\\Thread":"bar"};
```

---

**Note:** If the JSON *is not* in a string literal, but you get it as a response from an Ajax call, for example, then

```
{"MyApp\\Posts\\Post":"foo","MyApp\\Threads\\Thread":"bar"}
```

is valid JSON.
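As a runnable demonstration (using the built-in `JSON.parse` instead of `jQuery.parseJSON`, but the escaping rules are identical): four backslashes in the JS source become two in the string, which JSON then parses down to one literal backslash in the key.

```javascript
// Four backslashes in the source -> two in the string -> one after JSON parsing
const urls = '{"MyApp\\\\Posts\\\\Post":"foo","MyApp\\\\Threads\\\\Thread":"bar"}';
const obj = JSON.parse(urls);
console.log(Object.keys(obj)[0]); // MyApp\Posts\Post

// Simpler: skip the string round trip and use an object literal directly
const urls2 = { "MyApp\\Posts\\Post": "foo", "MyApp\\Threads\\Thread": "bar" };
console.log(urls2["MyApp\\Posts\\Post"]); // foo
```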
Angular 4 Lazy Loading and Routes not working I have a module with the routes of my app. One of these routes is a lazy loading module. The problem is that this lazy loading module has routes inside it for child components. But in my router config these routes don't appear... So when I call the lazy module, nothing shows on my screen. Here is my router config (main module):

```
export const MODULE_ROUTES: Route[] =[
    { path: '', redirectTo: 'dashboard', pathMatch: 'full' },
    { path: 'dashboard', component: HomeComponent, canActivate: [AuthGuard] },
    { path: 'calendar', loadChildren: 'app/dashboard/calendar/calendar-module.module#CalendarModuleModule',canActivate: [AuthGuard]},
    { path: '**', component: NoPageFoundComponent, pathMatch: 'full' }
]
.
.
.
@NgModule({
    imports: [
        RouterModule.forChild(MODULE_ROUTES)
.
.
.
```

On my lazy module:

```
export const MODULE_CALENDAR_ROUTES: Route[] = [
    {
        path: 'calendar',
        component: CalendarComponent,
        canActivateChild: [AuthGuard, CalendarGuard],
        children:
        [
            { path: '', component: MainCalendarComponent, canActivateChild: [AuthGuard, CalendarGuard] },
            { path: 'user', component: EditEventComponent, canActivateChild: [AuthGuard, CalendarGuard] }
        ]
    }
]
.
.
.
@NgModule({
    imports: [
        SharedModule,
        .
        .
        .
        RouterModule.forChild(MODULE_CALENDAR_ROUTES)
```

If I print my router config, the routes declared in my lazy module don't show:

```
Routes:  [
  { "path": "dashboard", "canActivate": [ null ] },
  { "path": "calendar", "loadChildren": "app/dashboard/calendar/calendar-module.module#CalendarModuleModule", "canActivate": [ null ] },
  { "path": "**", "pathMatch": "full" },
  { "path": "dashboard" }
]
```

Can you help me?
The problem was with the way I've declared my route on my lazy module: ``` export const MODULE_CALENDAR_ROUTES: Route[] = [ { path: 'calendar', component: CalendarComponent, canActivateChild: [AuthGuard, CalendarGuard], children: [ { path: '', component: MainCalendarComponent, canActivateChild: [AuthGuard, CalendarGuard] }, { path: 'user', component: EditEventComponent, canActivateChild: [AuthGuard, CalendarGuard] } ] } ] ``` The `path` of `CalendarComponent` had to change from: ``` path: 'calendar', // wrong component: CalendarComponent, ... ``` to the below: ``` path: '', // right component: CalendarComponent, ... ``` Thanks to @jotatoledo on gitter that help me to solve this.
Azure App Service Plan - Pricing Model Questions While doing some research for a customer, some doubts have arisen regarding how the pricing on App Service Plans works and what would be the best way to configure the ARM templates for different projects. What I would like to confirm (and I don't think that the Azure documentation is very clear on this) is whether you pay only for the App Service Plan itself, even if you don't have any web apps running. Imagining that I would like to have 2 web apps running on a Standard tier, is there any difference in having a separate App Service Plan for each, which I can later change independently for a single project? (e.g. one of the web apps may require more compute power in the future while the other doesn't). And if by any chance we remove the web apps from the subscription but still leave the App Service Plan configuration there... is anything paid in that case, even if there is no computing happening? Thanks for the help everyone. Cheers
TL;DR You pay for an App Service Plan, not for an App Service. Looking at [App Service Plans details](https://azure.microsoft.com/en-us/pricing/details/app-service/plans/), you see the number of Apps an App Service Plan can have. - A Free plan can have 10 - A Shared plan can have 100 - A Consumption plan for Functions can have 500 - All other plans can have *unlimited* Now for your question: you don't pay per App Service. You pay per App Service Plan. And this is *regardless the number of apps*. This is because you pay for the fact the plan is there, ready and waiting. You've reserved the resources specified in the tier you selected. The fact you're not using them isn't of influence. Of course, consumption plan is the odd one out, since there you **do** pay only for compute used. If you take a look at [Pricing - App Service](https://azure.microsoft.com/en-us/pricing/details/app-service/windows/), you also see the pricing tables specify pricing for App Service **Plans**, not the App Services. **EDIT:** > > I would pay more for 2 service plans with 1 app each than 1 service plan with 2 apps on top > > > Yes. Since an App Service Plan is an abstraction for (something like) a virtual machine, having 2 App Service Plans means having 2 virtual machines that are reserved for you. **EDIT2:** As @Juunas said in his comment: > > you do pay per VM in your plan, so if you set scale out to 3 instances, you have 3 VMs running your app, so pay 3 times more than with scale out set to 1. > > > So 1 plan with 1 instance and 2 apps is cheaper than 2 plans with 1 instance and 1 app per plan. But if the 1 plan has 2 instances and 2 apps, you pay the same as in the 2-plans-1-instance scenario. This is starting to get complex ...
HTML ellipsis without specifying the particular width I'm trying to achieve the following layout (please see the screenshot below). - If both SPAN and B fit the box - they just go one after another. - If they don't - SPAN has an ellipsis but B is displayed completely (it is never bigger than a whole block). B can be included into the SPAN - but it didn't help me. I was also trying to use tables, even nested ones - nothing helps.. **Expected behavior:** [![enter image description here](https://i.stack.imgur.com/cIuV6.png)](https://i.stack.imgur.com/cIuV6.png) **Initial snippet:** ``` div { width: 200px; white-space: nowrap; border: 1px solid red; margin-top: 10px; } span { overflow: hidden; text-overflow: ellipsis; } b { padding-left: 10px; } ``` ``` <div> <span>test</span> <b>12345</b> </div> <div> <span>test test test test test test test test test test test</span> <b>50</b> </div> ```
Just add `display: flex` to the div container: ``` div { display: flex; /* new */ width: 200px; white-space: nowrap; border: 1px solid red; margin-top: 10px; } span { overflow: hidden; text-overflow: ellipsis; } b { padding-left: 10px; } ``` ``` <div> <span>test</span> <b>12345</b> </div> <div> <span>test test test test test test test test test test test</span> <b>50</b> </div> ``` A combination of flex default settings, including `flex-shrink: 1`, enable the ellipsis to render in a [**flex formatting context**](https://www.w3.org/TR/css-flexbox-1/#flex-containers).
Why is scalac complaining about method + on String? Given the following Scala code: ``` def stream(callback: (String, Array[Byte]) => Unit) { callback("application/json;charset=utf-8", Array(1)) callback("application/xml;charset=utf-8", Array(1)) } @Test def whenStreaming_thenShouldSeeSameMessages() { var actualMessages = List.empty[(String, Array[Byte])] sut.stream { (contentType, payload) => actualMessages += (contentType, payload) } expect(List("application/json;charset=utf-8", "application/xml;charset=utf-8")) { actualMessages } } ``` Why am I getting the following error message: ``` error: too many arguments for method +: (other: String)java.lang.String actualMessages += (contentType, payload) ``` Specifically, payload is highlighted in IDEA, and on the command line, += is highlighted. I really don't understand that error message.
The method to create a new list with an appended element is actually `:+`. To update the list `var` with an appended version, you need to do, ``` actualMessages :+= (contentType, payload) ``` Or you can prepend with, ``` actualMessages +:= (contentType, payload) // or, equivalently, // actualMessages ::= (contentType, payload) ``` (For big lists, prepending is actually preferable, as it's O(1) compared to O(n) for appending. One strategy is to use prepending to build the list in reverse, and then at the end call `List.reverse`.) With the `+` operator you're trying to use, Scala thinks you want to do string concatenation, ``` scala> List("hello") + " world" res0: java.lang.String = List(hello) world ``` This is an unfortunate carry-over from Java. Generic string concatenation is implemented in Scala using an implicit conversion defined in [scala.Predef](http://www.scala-lang.org/api/current/index.html#scala.Predef%24), ``` implicit def any2stringadd (x: Any): StringAdd ``` What's happening is that `List` does not implement a `+` method, so the compiler looks for a conversion to something that does provide `+`, and finds `any2stringadd`. This is how, e.g., `true + " false"` works.
Error connecting to Slack IRC gateway I'm using the [cl-irc](https://www.common-lisp.net/project/cl-irc/) library to connect to Slack, via the IRC gateway Slack provides. However I'm getting the following error when I try to start the message loop with `read-message-loop`: ``` error while parsing arguments to DESTRUCTURING-BIND: invalid number of elements in ("duncan_bayne" "Welcome" "to" "Slack" "IRC" "Gateway" "server" "[email protected]") to satisfy lambda list (CL-IRC:NICKNAME CL-IRC::WELCOME-MESSAGE): exactly 2 expected, but 8 found [Condition of type SB-KERNEL::ARG-COUNT-ERROR] ... Backtrace: 0: ((:METHOD CL-IRC::DEFAULT-HOOK (CL-IRC:IRC-RPL_WELCOME-MESSAGE)) #<CL-IRC:IRC-RPL_WELCOME-MESSAGE irc.tinyspeck.com RPL_WELCOME {1007FC6293}>) [fast-method] 1: ((:METHOD CL-IRC::APPLY-TO-HOOKS (T)) #<CL-IRC:IRC-RPL_WELCOME-MESSAGE irc.tinyspeck.com RPL_WELCOME {1007FC6293}>) [fast-method] 2: ((:METHOD CL-IRC:IRC-MESSAGE-EVENT (T CL-IRC:IRC-MESSAGE)) #<unavailable argument> #<CL-IRC:IRC-RPL_WELCOME-MESSAGE irc.tinyspeck.com RPL_WELCOME {1007FC6293}>) [fast-method] 3: ((:METHOD CL-IRC:READ-MESSAGE (CL-IRC:CONNECTION)) #<CL-IRC:CONNECTION myob.irc.slack.com {10068E8ED3}>) [fast-method] 4: ((:METHOD CL-IRC:READ-MESSAGE-LOOP (T)) #<CL-IRC:CONNECTION myob.irc.slack.com {10068E8ED3}>) [fast-method] 5: (SB-INT:SIMPLE-EVAL-IN-LEXENV (CL-IRC:READ-MESSAGE-LOOP *CONN*) #<NULL-LEXENV>) 6: (EVAL (CL-IRC:READ-MESSAGE-LOOP *CONN*)) ``` While in the REPL I see: ``` UNHANDLED-EVENT:3672562852: RPL_MYINFO: irc.tinyspeck.com duncan_bayne "IRC-SLACK gateway" ``` I'm not sure what I'm doing wrong here; I'm fairly sure it's not my hooks, because the problem persists even if I disable them all. Also, I can use the connection as expected - say, joining a channel and sending messages - provided I don't try to start the message loop. At a guess, I'd say Slack is responding to connection with an unexpected message?
The fix as suggested by @jkilski is to modify cl-irc to accept the slightly unusual (but probably standards-compliant?) responses from Slack:

```
(in-package #:cl-irc)

(defmethod default-hook ((message irc-rpl_welcome-message))
  (with-slots (connection host user arguments) message
    (destructuring-bind (nickname &rest welcome-message) arguments
      (setf (user connection)
            (make-user connection
                       :nickname nickname
                       :hostname host
                       :username user)))))

(in-package #:irc)

(defmethod default-hook ((message irc-rpl_namreply-message))
  (let* ((connection (connection message)))
    (destructuring-bind
        (nick chan-visibility channel &optional names)
        (arguments message)
      (declare (ignore nick))
      (let ((channel (find-channel connection channel)))
        (setf (visibility channel)
              (or (second (assoc chan-visibility
                                 '(("=" :public) ("*" :private) ("@" :secret))
                                 :test #'string=))
                  :unknown))
        (unless (has-mode-p channel 'namreply-in-progress)
          (add-mode channel 'namreply-in-progress
                    (make-instance 'list-value-mode :value-type :user)))
        (dolist (nickname (tokenize-string names))
          (let ((user (find-or-make-user connection
                                         (canonicalize-nickname connection nickname))))
            (unless (equal user (user connection))
              (add-user connection user)
              (add-user channel user))
            (set-mode channel 'namreply-in-progress user)
            (let* ((mode-char (getf (nick-prefixes connection) (elt nickname 0)))
                   (mode-name (when mode-char
                                (mode-name-from-char connection channel mode-char))))
              (when mode-name
                (if (has-mode-p channel mode-name)
                    (set-mode channel mode-name user)
                    (set-mode-value (add-mode channel mode-name
                                              (make-mode connection channel mode-name))
                                    user))))))))))
```

I've applied to join the dev mailing list and will be submitting a patch shortly.
Unity C# - Spawning GameObjects randomly around a point I am not sure how to approach this problem or whether there are any built in Unity functions that can help with this problem so any advice is appreciated. Here is an image that'll help describe what I want to do: [![enter image description here](https://i.stack.imgur.com/sGDrc.jpg)](https://i.stack.imgur.com/sGDrc.jpg) I want to spawn Game Objects around a given point within the limits of a set radius. However their position in this radius should be randomly selected. This position should have the same Y axis as the origin point (which is on the ground). The next main problem is that each object should not clash and overlap another game object and should not enter their personal space (the orange circle). My code so far isn't great: ``` public class Spawner : MonoBehaviour { public int spawnRadius = 30; // not sure how large this is yet.. public int agentRadius = 5; // agent's personal space public GameObject agent; // added in Unity GUI Vector3 originPoint; void CreateGroup() { GameObject spawner = GetRandomSpawnPoint (); originPoint = spawner.gameObject.transform.position; for (int i = 0; i < groupSize; i++) { CreateAgent (); } } public void CreateAgent() { float directionFacing = Random.Range (0f, 360f); // need to pick a random position around originPoint but inside spawnRadius // must not be too close to another agent inside spawnRadius Instantiate (agent, originPoint, Quaternion.Euler (new Vector3 (0f, directionFacing, 0f))); } } ``` Thank you for any advice you can offer!
For spawning the object around the point, you could define the radius of your spawn circle and just add random numbers between -radius and radius to the position of the spawner, like this:

```
float radius = 5f;

originPoint = spawner.gameObject.transform.position;
originPoint.x += Random.Range(-radius, radius);
originPoint.z += Random.Range(-radius, radius);
```

(Note that this samples points from a square of side `2 * radius`, so some points can land slightly outside the circle; if you need a true circle, `Random.insideUnitCircle * radius` gives a uniformly distributed offset within it.)

For detecting if the spawn point is too close to another game object, how about checking the distance between them, like so:

```
if(Vector3.Distance(originPoint, otherGameObject.transform.position) < personalSpaceRadius)
{
    // pick new origin point
}
```

I'm not that skilled in unity3d, so sorry if this is maybe not the best answer^^

**Also**: To check which game objects are in the spawn area in the first place, you could use the Physics.OverlapSphere function defined here: <http://docs.unity3d.com/ScriptReference/Physics.OverlapSphere.html>
Getting non-diagonal elements in dataframe Following [pandas DataFrame diagonal](https://stackoverflow.com/questions/37310264/pandas-dataframe-diagonal), I can get diagonal elements using `np.diag`. How can I get the non-diagonal elements in a dataframe (assuming that dataframe is size n x n)
I'll use @Matt's same dataframe `xf`

```
xf = pd.DataFrame(np.random.rand(5, 5))
```

However, I'll point out that if the diagonal happens to be equal to zero, using `np.diag(np.diag(xf)) != 0` will break down. The way to guarantee that you are masking the diagonal is to evaluate whether the row indices are **not equal** to the column indices.

**Option 1**
*`numpy.indices`*

Conveniently, `numpy` provides those as well via the `np.indices` function. Observe what they look like

```
rows, cols = np.indices((5, 5))

print(rows)

[[0 0 0 0 0]
 [1 1 1 1 1]
 [2 2 2 2 2]
 [3 3 3 3 3]
 [4 4 4 4 4]]

print(cols)

[[0 1 2 3 4]
 [0 1 2 3 4]
 [0 1 2 3 4]
 [0 1 2 3 4]
 [0 1 2 3 4]]
```

And where they are equal... The diagonal.

```
print((cols == rows).astype(int))

[[1 0 0 0 0]
 [0 1 0 0 0]
 [0 0 1 0 0]
 [0 0 0 1 0]
 [0 0 0 0 1]]
```

So with these, we can mask where they are equal with

```
xf.mask(np.equal(*np.indices(xf.shape)))

          0         1         2         3         4
0       NaN  0.605436  0.573386  0.978588  0.160986
1  0.295911       NaN  0.509203  0.692233  0.717464
2  0.275767  0.966976       NaN  0.883339  0.143704
3  0.628941  0.668836  0.468928       NaN  0.309901
4  0.286933  0.523243  0.693754  0.253426       NaN
```

---

We can make it a bit faster with

```
pd.DataFrame(
    np.where(np.equal(*np.indices(xf.shape)), np.nan, xf.values),
    xf.index, xf.columns
)
```

**Option 2**
*`numpy.arange` with slice assignment*

```
v = xf.values.copy()
i = j = np.arange(np.min(v.shape))
v[i, j] = np.nan
pd.DataFrame(v, xf.index, xf.columns)

          0         1         2         3         4
0       NaN  0.605436  0.573386  0.978588  0.160986
1  0.295911       NaN  0.509203  0.692233  0.717464
2  0.275767  0.966976       NaN  0.883339  0.143704
3  0.628941  0.668836  0.468928       NaN  0.309901
4  0.286933  0.523243  0.693754  0.253426       NaN
```

---

```
%%timeit
v = xf.values.copy()
i = j = np.arange(np.min(v.shape))
v[i, j] = np.nan
pd.DataFrame(v, xf.index, xf.columns)

%timeit pd.DataFrame(np.where(np.eye(np.min(xf.shape)), np.nan, xf.values), xf.index, xf.columns)

%timeit pd.DataFrame(np.where(np.equal(*np.indices(xf.shape)), np.nan, xf.values), xf.index, xf.columns)

%timeit xf.mask(np.equal(*np.indices(xf.shape)))
%timeit xf.mask(np.diag(np.diag(xf.values)) != 0)
%timeit xf.mask(np.eye(np.min(xf.shape), dtype=bool))

10000 loops, best of 3: 74.5 µs per loop
10000 loops, best of 3: 85.7 µs per loop
10000 loops, best of 3: 77 µs per loop
1000 loops, best of 3: 519 µs per loop
1000 loops, best of 3: 517 µs per loop
1000 loops, best of 3: 528 µs per loop
```
id -u $var gives the same output if $var has a value or not I'm writing a script to configure new debian installs. While finding the best solution to confirming that a user exists in the script, the best way I found gives me weird output.

**PROBLEM:** `id -u $var` and `id -u $varsome` give the same output even though `var` has a value (the username) and `varsome` has no value

```
[19:49:24][username] ~
~↓↓$↓↓ var=`whoami`
[19:53:38][username] ~
~↓↓$↓↓ id -u $var
1000
[19:53:42][username] ~
~↓↓$↓↓ echo $?
0
[19:53:49][username] ~
~↓↓$↓↓ id -u $varsome
1000
[19:09:56][username] ~
~↓↓$↓↓ echo $?
0
[20:10:18][username] ~
~↓↓$↓↓ bash --version
GNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2016 Free Software Foundation, Inc.
Licens GPLv3+: GNU GPL version 3 eller senere <http://gnu.org/licenses/gpl.html>

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
[20:27:08][username] ~
~↓↓$↓↓ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```

I got the command from this question on stackoverflow: [Check Whether a User Exists](https://stackoverflow.com/a/14811915/6372809)

**QUESTIONS:**

- What is happening here?
- Is there a better way you can find to verify a user exists in a script?
- Pointers on the script well appreciated
Since the variable expansion wasn't quoted, the empty word that results from `$varsome` being expanded is removed completely. Let's make a function that prints the number of arguments it gets and compare the quoted and non-quoted case: ``` $ args() { echo "got $# arguments"; } $ var="" $ args $var got 0 arguments $ args "$var" got 1 arguments ``` The same happens in your case with `id`: `id -u $var` is exactly the same as just `id -u` when `var` is empty. Since `id` doesn't see a username, it by default prints the current user's information. If you quote `"$var"`, the result is different: ``` $ var="" $ id -u "$var" id: ‘’: no such user ``` With that fixed, you can use `id` to find if a user exists. (We don't need the outputs here though, so redirect them away.) ``` check_user() { if id -u "$1" >/dev/null 2>&1; then echo "user '$1' exists" else echo "user '$1' does not exist" fi } check_user root check_user asdfghjkl ``` That would print `user 'root' exists` and `user 'asdfghjkl' does not exist`. --- This is a bit of the inverse of the usual problems that come from the unexpected word splitting of unquoted variables. But the basic issue is the same and fixed by what half the answers here say: always quote the variable expansions (unless you know you want the unquoted behaviour). See: - [When is double-quoting necessary?](https://unix.stackexchange.com/q/68694/170373) - [Why does my shell script choke on whitespace or other special characters?](https://unix.stackexchange.com/q/131766/170373) - [BashGuide on Word Splitting](http://mywiki.wooledge.org/WordSplitting)
Logout in token-based authentication and Single page application I'm using JWT authentication for my Spring Boot application. The front-end is going to be a single-page-application. Should I handle logout functionality on the server side? As far as I understand, there's no way to invalidate a JWT token unless we have a stateful server (storing logged out tokens for the maximum lifetime of a token). The SPA passes the JWT token every time making a request in its header, and it can delete it from the localStorage when the user visits `/logout` without making a call to the server. What are the potential issues? Is this idea used anywhere else? What is the best practice?
First of all you must decide whether you want a stateful server or not.

### stateful server

This is easy. Just send a logout request to the server and throw the session away. That's it. That's the safest way.

### stateless server

I like stateless servers because you don't have to manage the state. But of course you have a tradeoff. In this case: security. There is **no** way to log out, because you don't have a session that you can invalidate on the server side. So an attacker who steals your JWT token can use the session until it ends, and there's nothing you can do to prevent this.

But you can do something to make it harder for the attacker to get the JWT token in the first place. Here are some things that you can do, and that you have already done right:

1. Don't use cookies to send the token. The way you do it is perfect. Use the header. This is encrypted by SSL, and the single-page app must send it on purpose.
2. Put a timestamp into the JWT token so that it invalidates itself after some time. But be aware of effects like timezones and clocks which are out of sync.
3. Put some browser fingerprinting information into the token, like the OS or the browser version. That way the attacker also has to fake that.

But these mechanisms are all there to make it harder for an attacker. A real logout is not possible.

### Note

If you use JWT correctly your server will have a "state". You have to define a secret that is the same on all servers. That's something you must be aware of if you use multiple servers.
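To make point 2 concrete, here is a minimal standard-library sketch of a self-invalidating signed token (an illustration of the idea only, not a full JWT implementation — in a real app use a vetted JWT library): the server keeps no session, only a shared secret, and the `exp` claim bounds how long a stolen token stays usable.

```python
import base64
import hashlib
import hmac
import json
import time

def sign(payload, secret):
    """Serialize the payload and append an HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=")
    sig = base64.urlsafe_b64encode(hmac.new(secret, body, hashlib.sha256).digest()).rstrip(b"=")
    return (body + b"." + sig).decode()

def verify(token, secret, now=None):
    """Reject tampered tokens and tokens whose `exp` claim has passed."""
    body, sig = token.encode().split(b".")
    expected = base64.urlsafe_b64encode(hmac.new(secret, body, hashlib.sha256).digest()).rstrip(b"=")
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    payload = json.loads(base64.urlsafe_b64decode(body + b"=" * (-len(body) % 4)))
    if payload.get("exp", float("inf")) < (now if now is not None else time.time()):
        raise ValueError("token expired")
    return payload

token = sign({"sub": "alice", "exp": time.time() + 3600}, b"shared-secret")
print(verify(token, b"shared-secret")["sub"])  # alice
```

Note that the shared secret is exactly the "state" mentioned above: every server validating tokens must hold the same one.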
Javascript Mouseover bubbling from children I've got the following html setup:

```
<div id="div1">
  <div id="content1">blaat</div>
  <div id="content2">blaat2</div>
</div>
```

it is styled so you can NOT hover div1 without hovering one of the other 2 divs. Now I've got a mouseout on div1. The problem is that my div1.mouseout gets triggered when I move from content1 to content2, because their mouseouts are bubbling, and the event's target, currentTarget or relatedTarget properties are never div1, since it is never hovered directly...

I've been searching like mad for this, but I can only find articles and solutions for problems that are the reverse of what I need. It seems trivial but I can't get it to work... The mouseout of div1 should ONLY get triggered when the mouse leaves div1.

One of the possibilities would be to set some data on mouse enter and mouseleave, but I'm convinced this should work out of the box, since it is just a mouseout...

EDIT:

```
bar.mouseleave(function(e) {
    if ($(e.currentTarget).attr('id') == bar.attr('id')) {
        bar.css('top', '-'+contentOuterHeight+'px');
        $('#floatable-bar #floatable-bar-tabs span').removeClass('active');
    }
});
```

changed the mouseout to mouseleave and the code worked...
Use the `mouseleave` event instead of `mouseout` for this, it handles your specific issue. [See here for details](http://api.jquery.com/mouseleave/)

From the docs on the difference:

> The mouseleave event differs from mouseout in the way it handles event bubbling. If mouseout were used in this example, then when the mouse pointer moved out of the Inner element, the handler would be triggered. This is usually undesirable behavior. The mouseleave event, on the other hand, only triggers its handler when the mouse leaves the element it is bound to, not a descendant. So in this example, the handler is triggered when the mouse leaves the Outer element, but not the Inner element.

Example markup:

```
<div id="outer">
    Outer
    <div id="inner">
        Inner
    </div>
</div>
```
How to sort a list containing bounded set of values in linear time when length is unknown? Given a list of integers whose length is unknown, and each of its elements lies between 1 to 1000, how does one sort this list in linear time?
You know that every element of your `int arr[];` is in `[1;1000]`. So have an array of counters, `int cnt[1001];` in C parlance. Clear it (all zeros). Then, read the `arr[]` array sequentially. Suppose that you have read the value `x` at index `i` (so `x==arr[i]`). Then increment its counter, so `cnt[x]++;` When you have reached the end of the input array `arr`, iterate on `cnt` so `for (int i=0; i<=1000; i++)` and output the number `i` exactly `cnt[i]` times. This is O(n) (because the bound 1000 is a constant). This sort is often known as the [counting sort](https://en.wikipedia.org/wiki/Counting_sort).
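The same algorithm as a runnable sketch — written in Python rather than C for brevity, with the 1000 bound from the question:

```python
def counting_sort(arr, max_value=1000):
    """Sort a list of ints in [1, max_value] in O(n) time using a counter array."""
    cnt = [0] * (max_value + 1)        # cnt[x] = how many times x occurs
    for x in arr:                      # one pass over the input
        cnt[x] += 1
    out = []
    for i in range(1, max_value + 1):  # emit each value cnt[i] times, in order
        out.extend([i] * cnt[i])
    return out

print(counting_sort([3, 1, 1000, 7, 3]))  # [1, 3, 3, 7, 1000]
```

Note the length of the input never needs to be known up front — the first loop just consumes whatever arrives.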
Delete Item from List using Knockout.js I am trying to delete an item from a list. I am using knockout.js with the mapping plugin. My code looks like this: ## Serialize to Json ``` @{ var jsonData = new HtmlString(new JavaScriptSerializer().Serialize(Model));} ``` ## Template ``` <script type="text/html" id="imgsList"> {{each model.Imgs}} <div style="float:left; margin: 10px 10px 10px 0;"> <div><a href="${Filename}"><img src="${Filename}" style="width:100px;"></img></a></div> <div data-bind="click: deleteImage">Delete</div> </div> {{/each}} </script> ``` ## K.O. JavaScript ``` <script type="text/javascript"> $(function() { //KO Setup var viewModel = { "model": ko.mapping.fromJS(@jsonData), "deleteImage" : function(item) {alert(item.Filename + ' deleted.');} } ko.applyBindings(viewModel); }); </script> ``` ## The HTML ``` <div data-bind="template: 'imgsList'"></div> ``` ## The Question **Everything works as expected. A list of images shows up with delete buttons, however, when you click a button item.Filename is undefined. Thoughts?** Edit: Taken from the KNockout.js Manual: "When calling your handler, Knockout will supply the current model value as the first parameter. This is particularly useful if you’re rendering some UI for each item in a collection, and you need to know which item’s UI was clicked." It appears that I am not getting back the Img object I am expecting. I don't know what I am getting back!
When you use {{each}} syntax in jQuery Templates, the data context is whatever the overall template is bound against. In your case, that is the entire view model.

A few options:

1- you can use your current code and pass the item that you are "eaching" on to the function like ( <http://jsfiddle.net/rniemeyer/qB9tp/1/> ):

```
<div data-bind="click: function() { $root.deleteImage($value); }">Delete</div>
```

Using an anonymous function in the data-bind is pretty ugly though. There are better options.

2- you can use the `foreach` parameter of the template binding, which works with jQuery Templates and is more efficient than {{each}} like ( <http://jsfiddle.net/rniemeyer/qB9tp/2/> ):

```
<script type="text/html" id="imgsList">
    <div style="float:left; margin: 10px 10px 10px 0;">
        <div>
            <a href="${Filename}">${Filename}</a>
        </div>
        <div data-bind="click: $root.deleteImage">Delete</div>
    </div>
</script>

<div data-bind="template: { name: 'imgsList', foreach: model.Imgs }"></div>
```

Now, the context of the template is the individual image object and calling `$root.deleteImage` will pass it as the first argument.

3- Since the jQuery Templates plugin is deprecated and Knockout now supports native templates, you might want to remove your dependency on the jQuery Templates plugin. 
You could still use a named template (just need to replace any jQuery Templates syntax with data-bind attributes) like: <http://jsfiddle.net/rniemeyer/qB9tp/3/> or even remove the template and just go with the `foreach` control-flow binding like: <http://jsfiddle.net/rniemeyer/qB9tp/4/> ``` <div data-bind="foreach: model.Imgs"> <div style="float:left; margin: 10px 10px 10px 0;"> <div> <a data-bind="text: Filename, attr: { href: Filename }"></a> </div> <div data-bind="click: $root.deleteImage">Delete</div> </div> </div> ``` 4- While I prefer option #3, you could even choose to use event delegation and attach a "live" handler like: <http://jsfiddle.net/rniemeyer/qB9tp/5/> ``` $("#main").on("click", ".del", function() { var data = ko.dataFor(this); viewModel.deleteImage(data); }); ``` This can be especially beneficial if you would be attaching a large number of the same handlers via the `click` binding (like in a grid).
Use msysgit/"Git for Windows" to navigate Windows shortcuts? I use msysgit on Windows to use git, but I often want to navigate through a Windows-style `*.lnk` shortcut. I typically manage my file structure through Windows' explorer, so using a different type of shortcut (such as creating hard or soft link in git) isn't feasible. How would I navigate through this type of shortcut? For example: ``` PCUser@PCName ~ $ cd Desktop PCUser@PCName ~/Desktop $ ls Scripts.lnk PCUser@PCName ~/Desktop $ cd Scripts.lnk sh.exe": cd: Scripts.lnk: Not a directory ``` Is it possible to change this behavior, so that instead of getting an error, it just goes to the location of the directory? Alternatively, is there a command to get the path in a `*.lnk` file? **EDIT**: I've heard that [the inverse of this exists](http://cygwin.com/ml/cygwin/2001-02/msg01260.html) for cygwin, allowing you to create a symlink which works with explorer.
## \*ahem\* First, [compile](http://www.autohotkey.com/docs/Scripts.htm#ahk2exe) the following [AutoHotkey](http://autohotkey.com/) script: ``` FileGetShortcut, %1%, shortcut_target FileAppend, %shortcut_target%, * ExitApp ``` --- Place the `.EXE` file in a [%PATH%](https://en.wikipedia.org/wiki/PATH_%28variable%29) directory. I named mine `follow.exe`. Now, you can effectively follow a Windows `.LNK` file in your working directory by using the following syntax: ``` cd `follow Shortcut.lnk` ``` where `Shortcut.lnk`'s **target** is a valid directory. --- ## Demonstration: ![shortcut](https://i.stack.imgur.com/K1Rt1.png) ![git bash](https://i.stack.imgur.com/a6O8c.png) --- Once you've set up your `follow.exe`, you can add the following shell function to your `~/.bashrc` file to simplify the syntax even further. *Thanks, [Daniel](https://superuser.com/questions/408585/use-msysgit-git-for-windows-to-navigate-windows-shortcuts#comment469290_411363)!* ``` function cd { if [[ ".lnk" = "${1:(-4)}" && -f "$1" ]] ; then builtin cd "$( follow "$1" )" ; else builtin cd "$1" ; fi } ``` Now, you can follow `.LNK` files with just `cd`! ``` cd Shortcut.lnk ``` --- ## Demonstration: ![cd](https://i.stack.imgur.com/F9E9E.png)
How can I create a Perl subroutine that accepts more than one block? With prototypes, you can create a subroutine that accepts a block of code as its first parameter: ``` sub example (&) { my $code_ref = shift; $code_ref->(); } example { print "Hello\n" }; ``` How can I do the same thing, but with more than one block of code? I want to use blocks of codes, not variables or `sub { ... }`. This does not work: ``` sub example2 (&&) { my $code_ref = shift; my $code_ref2 = shift; $code_ref->(); $code_ref2->(); } example2 { print "One\n" } { print "Hello\n" }; ``` It gives this error: ``` Not enough arguments for main::example2 ```
I hope you realise that this is just code seasoning, and all you are achieving is a tidier syntax at the expense of clarity? Perl won't allow you to pass more than one bare block to a subroutine, but the second actual parameter could be a call to a subroutine that also takes a single block and simply returns the code reference. This program demonstrates. Note that I have chosen `please` and `also` as names for the subroutines. But you must use something that is both appropriate to your own code's functionality and very unlikely to clash with forthcoming extensions to the core language. ``` use strict; use warnings; sub please(&$) { my ($code1, $code2) = @_; $code1->(); $code2->(); } sub also(&) { $_[0]; } please { print "aaa\n" } also { print "bbb\n" }; ``` **output** ``` aaa bbb ```
How to generate random dates between date range inside pandas column? I have a df that looks like this

```
df:

   id     dob
    1     7/31/2018
    2     6/1992
```

I want to generate 88799 random dates to go into column `dob` in the dataframe, between `1960-01-01` and `1990-12-31`, while keeping the format `mm/dd/yyyy` with no time stamp.

How would I do this?

I tried:

```
date1 = (1960,01,01)
date2 = (1990,12,31)

for i range(date1,date2):
    df.dob = i
```
I would figure out how many days are in your date range, then select 88799 random integers in that range, and finally add that as a timedelta with `unit='d'` to your minimum date (note: import NumPy directly, since the `pd.np` shorthand is deprecated):

```
import numpy as np

min_date = pd.to_datetime('1960-01-01')
max_date = pd.to_datetime('1990-12-31')

d = (max_date - min_date).days + 1

df['dob'] = min_date + pd.to_timedelta(np.random.randint(d,size=88799), unit='d')

>>> df.head()
         dob
0 1963-03-05
1 1973-06-07
2 1970-08-24
3 1970-05-03
4 1971-07-03

>>> df.tail()
             dob
88794 1965-12-10
88795 1968-08-09
88796 1988-04-29
88797 1971-07-27
88798 1980-08-03
```

**EDIT**

You can format your dates using `.strftime('%m/%d/%Y')`, but note that this will slow down the execution significantly:

```
df['dob'] = (min_date + pd.to_timedelta(np.random.randint(d,size=88799), unit='d')).strftime('%m/%d/%Y')

>>> df.head()
          dob
0  02/26/1969
1  04/09/1963
2  08/29/1984
3  02/12/1961
4  08/02/1988

>>> df.tail()
              dob
88794  02/13/1968
88795  02/05/1982
88796  07/03/1964
88797  06/11/1976
88798  11/17/1965
```
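If you'd rather not depend on NumPy for the random draw, the same idea works with only the standard library — draw a random day offset per row and format it. A sketch (the `dobs` list name is just illustrative; assign it to `df['dob']` as above):

```python
import random
from datetime import date, timedelta

min_date = date(1960, 1, 1)
max_date = date(1990, 12, 31)
d = (max_date - min_date).days + 1  # number of days in the inclusive range

# One random date per row, formatted as mm/dd/yyyy strings
dobs = [(min_date + timedelta(days=random.randrange(d))).strftime("%m/%d/%Y")
        for _ in range(88799)]
print(dobs[:3])
```

This is slower than the vectorized pandas version for large frames, but it keeps dependencies minimal and already yields the `mm/dd/yyyy` strings directly.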
How to convert 32 bit VBA code into 64 bit VBA code I'm trying to run a macro, but since I'm using 64-bit Excel 2016 this code is not working. How can I fix this?

```
Private Declare Function FindWindowEx Lib "User32" Alias "FindWindowExA" _
(ByVal hWnd1 As Long, ByVal hWnd2 As Long, ByVal lpsz1 As String, _
ByVal lpsz2 As String) As Long

Private Declare Function IIDFromString Lib "ole32" _
(ByVal lpsz As Long, ByRef lpiid As GUID) As Long

Private Declare Function AccessibleObjectFromWindow Lib "oleacc" _
(ByVal hWnd As Long, ByVal dwId As Long, ByRef riid As GUID, _
ByRef ppvObject As Object) As Long
```
These should work on 64 bit Excel ``` Private Declare PtrSafe Function FindWindowEx Lib "user32" Alias "FindWindowExA" _ (ByVal hWnd1 As LongPtr, ByVal hWnd2 As LongPtr, ByVal lpsz1 As String, _ ByVal lpsz2 As String) As LongPtr Private Declare PtrSafe Function IIDFromString Lib "ole32" _ (ByVal lpsz As LongPtr, ByRef lpiid As GUID) As Long Private Declare PtrSafe Function AccessibleObjectFromWindow Lib "oleacc" _ (ByVal Hwnd As LongPtr, ByVal dwId As Long, ByRef riid As GUID, _ ByRef ppvObject As Object) As Long ``` If you need it to run on both you can use the following `#If VBA7` ``` #If VBA7 Then '64 bit declares here #Else '32 bit declares here #End If ``` A nice resource for PtrSafe Win32 API declares can be found here: [Win32API\_PtrSafe.txt](http://www.cadsharp.com/docs/Win32API_PtrSafe.txt)
BackgroundService implementation that prevents ASP.NET Core from starting/stopping correctly I am using a [BackgroundService](https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.hosting.backgroundservice?view=aspnetcore-2.2) object in an ASP.NET Core application. Depending on how the operations that run in the `ExecuteAsync` method are implemented, ASP.NET Core fails to start or stop correctly.

Here is what I tried: I implemented the abstract `ExecuteAsync` method the way it is explained in the [documentation](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-2.2&tabs=visual-studio). The `pipeLine` variable is an `IEnumerable<IPipeline>` that is injected in the constructor.

```
public interface IPipeline
{
    Task Start();
    Task Cycle();
    Task Stop();
}

...

protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    log.LogInformation($"Starting subsystems");
    foreach(var engine in pipeLine)
    {
        try
        {
            await engine.Start();
        }
        catch(Exception ex)
        {
            log.LogError(ex, $"{nameof(engine)} failed to start");
        }
    }
    log.LogInformation($"Running main loop");
    while(!stoppingToken.IsCancellationRequested)
    {
        foreach(var engine in pipeLine)
        {
            try
            {
                await engine.Cycle();
            }
            catch(Exception ex)
            {
                log.LogError(ex, $"{engine.GetType().Name} error in Cycle");
            }
        }
    }
    log.LogInformation($"Stopping subsystems");
    foreach(var engine in pipeLine)
    {
        try
        {
            await engine.Stop();
        }
        catch(Exception ex)
        {
            log.LogError(ex, $"{nameof(engine)} failed to stop");
        }
    }
}
```

Because of the current development state of the project, there are many "nop" pipelines that contain an empty `Cycle()` operation, implemented this way:

```
public async Task Cycle()
{
    await Task.CompletedTask;
}
```

What I noticed is:

- If **at least one** IPipeline object contains an actual asynchronous method (await Task.Delay(1)), then everything runs smoothly and I can stop the service gracefully using CTRL+C. 
- If **all** IPipeline objects contains await `Task.CompletedTask;`, Then on one hand, aspnetcore fails to initialize correctly. I mean, there is no "Now listening on: <http://[::]:10001> Application started. Press Ctrl+C to shut down." on the console. On the other, when I hit CTRL+C, the console shows "Application is shutting down..." but the cycle loop continues to run as if the CancellationToken was never requested to stop. So basically, if I change a single Pipeline object to this: ``` public async Task Cycle() { await Task.Delay(1); } ``` Then everything is fine, and I dont understand why. Can someone explain me what I did not understood regarding Task processing ?
The simplest workaround is to add `await Task.Yield();` as line one of `ExecuteAsync`.

I am not an expert... but the "problem" is that all the code inside this `ExecuteAsync` is actually running synchronously. If all the "cycles" return a Task that has completed synchronously (as Task.CompletedTask will be) then the `while` and therefore the `ExecuteAsync` method never "yield"s.

The framework essentially does a foreach over the registered `IHostedService`s and calls `StartAsync`. If your service does not yield then the foreach gets blocked. So any other services (including the AspNetCore host) will not be started. As bootstrapping cannot finish, things like ctrl-C handling etc also never get set up.

Putting `await Task.Delay(1)` in one of the cycles "releases" or "yields" the Task. This allows the host to "capture" the task and continue. Which allows the wiring up of Cancellation etc to happen.

Putting `Task.Yield()` at the top of `ExecuteAsync` is just the more direct way of achieving this and means the cycles do not need to be aware of this "issue" at all.

note: this is usually only really an issue in testing... 'cos why would you have a no-op cycle?

note2: If you are likely to have "compute" cycles (ie they don't do any IO, database, queues etc) then switching to ValueTask will help with perf/allocations.
How to unpack packed-refs? I cloned a project from github with *git clone --mirror*. That left me with a repository with a *packed-refs* file, a *.pack* and an *.idx* file. For developement purposes I want to look at the loose objects, so I unpacked the objects with *git unpack-objects < <pack file>* which worked fine (I unpacked the pack file into a new repo if you're wondering). Only thing is that *refs/heads/* is still empty, all the refs are still only in *packed-refs* but I need them in *refs/heads/*. I wasn't able to find a command that would extract or unpack those references and I can somehow not believe that I would have to do this by hand (or via pipes). So actually I have two questions: 1. Is there an easy way to "restore" refs from *packed-refs*? 2. If not, why isn't there? If there's a command for unpacking objects, what is the reasoning behind not providing the same for refs (don't forget that there's even a command *git pack-refs*...) Thanks for any tips and ideas.
The short answer is "no" - there is no "easy way" to unpack the refs the way you're asking. The slightly longer answer is, each ref is just a 41-byte text file (40 byte SHA1 in hex + newline) in a specific path, so the "hard" version just requires something like this in your `~/.gitconfig`: ``` [alias] unpack-refs = "!bash -c 'IFS=$''\\n''; for f in $(git show-ref --heads); do /bin/echo ''Writing '' $(echo $f | cut -c42-); echo $(echo $f | cut -c1-40) > \"${GIT_DIR:-.git}/$(echo $f | cut -c42-)\"; done'" ``` Took a little trickiness to figure out how to get it to work properly, but there you go! Now you have 'git unpack-refs' and it does what you expect, and as a bonus it even works with $GIT\_DIR if that's set (otherwise it assumes you're in the root of the git tree). If you haven't read up on git aliases, <https://git.wiki.kernel.org/index.php/Aliases> is a great reference and even includes an example 'git alias' extension you can use to extend your own aliases.
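For reference, the core of that alias without the quoting gymnastics is just this shell loop, run from the root of a non-bare repository (a sketch, not a drop-in replacement for the alias — adjust the `.git/` path for a bare or mirror clone):

```shell
# Write each packed head ref back out as a loose 41-byte ref file.
git show-ref --heads | while read -r sha ref; do
  mkdir -p ".git/$(dirname "$ref")"   # e.g. .git/refs/heads
  printf '%s\n' "$sha" > ".git/$ref"  # 40-hex SHA1 plus newline
done
```

`git show-ref --heads` prints `<sha> refs/heads/<branch>` per line, which is exactly the content and path each loose ref file needs.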
How to get Main Window (App Delegate) from other class (subclass of NSViewController)? I'm trying to change my window's content from another class that is a subclass of NSViewController. I'm trying the code below, but it doesn't do anything.

```
[NSApplication sharedApplication]mainWindow]setContentView:[self view]]; //code in NSViewController

[NSApplication sharedApplication]mainWindow] // returns null
```

I tried to add

```
[window makeMainWindow];
```

in the App Delegate class, but it doesn't help. Did I miss something?

P.S. Also I'm using the code below to call any delegate function in my class,

```
[(appDelegate *) [[NSApplication sharedApplication]delegate]MyMethod];
```

but I wonder if there is something better, without importing the delegate class. Something like this

```
[[NSApplication sharedApplication]delegate]MyMethod];
```

(it gives a warning)
For the [mainWindow method the docs say](http://developer.apple.com/library/mac/documentation/Cocoa/Reference/ApplicationKit/Classes/NSApplication_Class/Reference/Reference.html#//apple_ref/occ/instm/NSApplication/mainWindow): > > This method might return nil if the application’s nib file hasn’t finished loading, if the receiver is not active, or if the application is hidden. > > > I just created a quick test application and I placed the following code: ``` NSLog(@"%@", [[NSApplication sharedApplication] mainWindow]); ``` into my `applicationDidFinishLaunching:aNotification` method, and into an action method which I connected to a button in the main window of my application. On startup, the `mainWindow` was nil, but when I click the button (after everything is up and running and displayed), the `mainWindow` was no longer nil. `NSApplication` provides other methods which you may be useful to you: - `- windows` - an array of all the windows; - `– keyWindow` - gives the window that is receiving keyboard input (or nil); - `– windowWithWindowNumber:` - returns a window corresponding to the window number - if you know the number of the window whose contents you wish to replace you could use this; - `– makeWindowsPerform:inOrder:` - sends a message to each window - you could use this to test each window to see if it's the one you are interested in. With regard to calling methods on the `delegate`, what you say gives a warning works fine for me. For example, this works with no warnings: ``` NSLog(@"%@", [[[NSApplication sharedApplication]delegate] description]); ``` What exactly is the warning you receive? Are you trying to call a method that doesn't exist?
Unable to install cider to emacs - package not found Fresh install of Ubuntu 20.04

Added openjdk-11 and lein 2.9.3

```
$ sudo apt-add-repository ppa:kelleyk/emacs
```

Installed GNU Emacs 26.3 (build 2, x86_64-pc-linux-gnu, GTK+ Version 3.24.14) of 2020-03-26, modified by Debian

Tried [the instructions](https://docs.cider.mx/cider/basics/installation.html) on the cider Getting Started page

```
M-x package-refresh-contents
M-x package-install <RET> cider <RET>
```

The cider package isn't found. Tried `M-x package-list-packages`; the list doesn't contain cider.

What am I missing?

Finally did the steps from the following link to get it to work [Brave Clojure book companion repo](https://github.com/flyingmachine/emacs-for-clojure)
Not sure why the cider instructions don't mention this. You need to create a ~/.emacs.d/init.el file with the following contents ``` (require 'package) (add-to-list 'package-archives '("melpa" . "https://melpa.org/packages/") t) (package-initialize) (when (not package-archive-contents) (package-refresh-contents)) ``` Save and restart emacs. `M-x package-list-packages` check the archive column shows packages from **gnu** as well as **melpa** archives. - Sometimes you would see an error *Failed to download ‘gnu’ archive.* - this one is a flaky one. A restart and/or `M-x package-refresh-contents` fixed it for me. Now we have the sources configured correctly. 1. Install: `M-x package-install <RET> cider <RET>` 2. `M-x package-list-packages` - Move to the end of the listing to see Status=installed packages. 3. Test: `M-x cider-jack-in`. Answer no to the prompt indicating you are not in a clojure project. Soon you should be dropped to a `user>` prompt - ready to REPL and roll.
How to create nested branches in metaflow? I am using `metaflow` to create a text processing pipeline as follows:- ``` ___F------ ______ D---| | | |___G---| |__> ____B-----| |----->H | |______E_________________> ^ A -| | |____C________________________________| ``` As per the [documentation](https://docs.metaflow.org/metaflow/basics#branch), `branch` allows to compute steps in parallel and it is used to compute (B, C), (D, E) and (F, G) in parallel. Finally all the branches are joined at H. Following is the code to implement this logic:- ``` from metaflow import FlowSpec, step class TextProcessing(FlowSpec): @step def a(self): .... self.next(self.b, self.c) @step def c(self): result1 = {} .... self.next(self.join) @step def b(self): .... self.next(self.d, self.e) @step def e(self): result2 = [] ..... self.next(self.join) @step def d(self): .... self.next(self.f, self.g) @step def f(self): result3 = [] .... self.next(self.join) @step def g(self): result4 = [] ..... self.next(self.join) @step def join(self, results): data = [results.c.result, results.e.result2, result.f.result3, result.g.result4] print(data) self.next(self.end) @step def end(self): pass etl = TextProcessing() ``` On running `python main.py run`, I am getting following error:- ``` Metaflow 2.2.10 executing TextProcessing for user:ubuntu Validating your flow... Validity checker found an issue on line 83: Step join seems like a join step (it takes an extra input argument) but an incorrect number of steps (c, e, f, g) lead to it. This join was expecting 2 incoming paths, starting from splitted step(s) f, g. ``` Can someone point out where I am going wrong?
After going through [docs](https://docs.metaflow.org/metaflow/basics#branch) again carefully, I realised that I wasn't handling joins properly. As per docs for `metaflow-2.2.10`:- > > Note that you can nest branches arbitrarily, that is, you can branch inside a branch. Just remember to join all the branches that you create. > > > which means every branch should be joined. In order to join values from branches, `metaflow` provides `merge_artifacts` utility function to aid in propagating unambiguous values. Since, there are three branches in the workflow, therefore added three join steps to merge results. Following changes worked for me:- ``` from metaflow import FlowSpec, step class TextProcessing(FlowSpec): @step def a(self): .... self.next(self.b, self.c) @step def c(self): result1 = {} .... self.next(self.merge_3) @step def b(self): .... self.next(self.d, self.e) @step def e(self): result2 = [] ..... self.next(self.merge_2) @step def d(self): .... self.next(self.f, self.g) @step def f(self): result3 = [] .... self.next(self.merge_1) @step def g(self): result4 = [] ..... self.next(self.merge_1) @step def merge_1(self, results): self.result = { 'result4' : results.g.result4, 'result3' : results.f.result3 } self.next(self.merge_2) @step def merge_2(self, results): self.result = { 'result2' : results.e.result2, **results.merge_1.result } self.merge_artifacts(results, include=['result']) self.next(self.merge_3) @step def merge_3(self, results): self.result = { 'c' : results.c.result1, **results.merge_2.result } self.merge_artifacts(results, include=['result']) self.next(self.end) @step def end(self): print(self.result) etl = TextProcessing() ```
Match shadowing example in the Patterns section of the Rust book is very perplexing In learning Rust, I encountered the following [in the official Rust book](https://doc.rust-lang.org): > > There’s one pitfall with patterns: like anything that introduces a new > binding, they introduce shadowing. For example: > > > > ``` > let x = 'x'; > let c = 'c'; > > match c { > x => println!("x: {} c: {}", x, c), > } > > println!("x: {}", x) > > ``` > > This prints: > > > > ``` > x: c c: c > x: x > > ``` > > In other words, `x =>` matches the pattern and introduces a new binding > named `x` that’s in scope for the match arm. Because we already have a > binding named `x`, this new `x` shadows it. > > > I don't understand two things: 1. Why does the match succeed? Shouldn't the differing value of `c` and `x` cause this to fail? 2. How does the match arm `x` binding get set to `'c'`? Is that somehow the return of the `println!` expression?
Your two points are caused by the same root problem. Coincidentally, the reason that this section *exists* is to point out the problem you asking about! I'm afraid that I'm basically going to regurgitate what the book says, with different words. Check out this sample: ``` match some_variable { a_name => {}, } ``` In this case, the match arm will **always succeed**. Regardless of the value in `some_variable`, it will always be bound to the name `a_name` inside that match arm. It's important to get this part first — the name of the variable that is bound has no relation to anything outside of the `match`. Now we turn to your example: ``` match c { x => println!("x: {} c: {}", x, c), } ``` The *exact same* logic applies. The match arm with always match, and regardless of the value of `c`, it will always be bound to the name `x` inside the arm. The *value* of `x` from the outer scope (`'x'` in this case) has no bearing whatsoever in a pattern match. --- If you wanted to use the value of `x` to control the pattern match, you can use a *match guard*: ``` match c { a if a == x => println!("yep"), _ => println!("nope"), } ``` Note that in the match guard (`if a == x`), the variable bindings `a` and `x` go back to acting like normal variables that you can test.
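Putting both behaviours into one complete program makes the difference concrete (the helper names `bind_demo` and `guard_demo` are mine, not from the book):

```rust
/// A bare name in a match pattern always matches and introduces a new binding,
/// just like the shadowing `x` in the question's example.
fn bind_demo(c: char) -> char {
    match c {
        x => x, // this `x` is simply `c`'s value, not some outer `x`
    }
}

/// A match guard is needed to actually compare against an outer variable.
fn guard_demo(c: char, x: char) -> bool {
    match c {
        a if a == x => true,
        _ => false,
    }
}

fn main() {
    let x = 'x';
    let c = 'c';
    assert_eq!(bind_demo(c), 'c'); // the arm matched even though c != x
    assert_eq!(x, 'x');            // the outer binding is untouched
    assert!(!guard_demo(c, x));    // with a guard, 'c' does not match 'x'
    println!("bind_demo: {}, guard_demo: {}", bind_demo(c), guard_demo(c, x));
}
```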
Drag and drop with RXjs I'm struggling with a drag and drop behavior with RXJS. I would like to start drag an element after 250ms mouse down for not hijack click events on that element. So far the start drag works but stop drag never get called. Anyone know why? ``` let button = document.querySelector('.button'); let mouseDownStream = Rx.Observable.fromEvent(button, 'mousedown'); let mouseUpStream = Rx.Observable.fromEvent(button, 'mouseup'); let dragStream = mouseDownStream .flatMap((event) => { return Rx.Observable .return(event) .delay(250) .takeUntil(mouseUpStream) }); let dropStream = mouseUpStream .flatMap((event) => { return Rx.Observable .return(event) .skipUntil(dragStream) }); dragStream.subscribe(event => console.log('start drag')); dropStream.subscribe(event => console.log('stop drag')); ``` [JSBin](https://jsbin.com/zefajevuki/edit?js,console,output)
I've updated your code-sample to make it run, what I did:

- exchanged the `flatMap`s with `switchMap`s (`switchMap` is an alias for `flatMapLatest`) this will ensure that it only takes the latest events and in case a new event is emitted, it will cancel any old subevent => in this case `flatMap` might work okay, but it is safer to use `switchMap`, also a rule of thumb: when in doubt: use `switchMap`
- `dropStream` is based on/initiated by `dragStream` now
- removed the `skipUntil`, which was a race-condition issue because it would have first triggered after the next dragStream-emission after some mouseUp (which would require traveling back in time)
- exchanged the mouseUp-target from `button` to `document` (more a convenience-thing, and not really essential for the whole thing to work)

```
let button = document.querySelector('.button');
let mouseDownStream = Rx.Observable.fromEvent(button, 'mousedown');
let mouseUpStream = Rx.Observable.fromEvent(document, 'mouseup');

let dragStream = mouseDownStream
  .switchMap((event) => {
    return Rx.Observable
      .return(event)
      .delay(250)
      .takeUntil(mouseUpStream)
  });

let dropStream = dragStream
  .switchMap(() => mouseUpStream.take(1))

dragStream.subscribe(event => console.log('start drag'));
dropStream.subscribe(event => console.log('stop drag'));
```

```
<!DOCTYPE html>
<html>
<head>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/4.1.0/rx.all.js"></script>
  <meta charset="utf-8">
  <title>JS Bin</title>
</head>
<body>
  <button class="button">Button</button>
</body>
</html>
```
Swap out generally used functions in C++ Hey there, I have a big C++ code file with many functions which I often use in different versions of my program, so I thought about swapping out the generally used functions.

E.g. I have:

```
void doSomething(mxArray *in) {
    mexPrintf("Hello Matlab");
}
```

where `mxArray` and `mexPrintf` are defined in another file which comes from Matlab (mex.h). In the main file I now have:

```
#include "mex.h"
#include "_general.h"
```

and I was surprised that I didn't get any compiler errors when not including `mex.h` in `_general.cpp`, even though the file itself obviously NEEDs it.

Would it be better to do the include, or doesn't it matter in this case since it is included AFTER mex.h has already been included in the main file?

Thanks!
C++ (unlike C) will refuse to make dubious assumptions about the signature of functions for which it hasn't seen a declaration, and compilation will fail with an error message. If you don't get an error, then some header is picking up a declaration for that function. If you can't find it, then try running only the preprocessor stage (e.g. for GCC you'd use `g++ -E`) and inspecting the output to see the declaration... your compiler may leave comments about which file has included bits of code which can be helpful in understanding the situation. For example, if \_general.cpp includes \_general.h which includes mex.h then that's workable, and there's no need to include it directly from \_general.cpp. But, if it can be removed from \_general.h as it's only needed in the implementation of "general", then that's much better again. If some other code, say "libraryX" uses mex.h for it's internal needs without exposing mex-related functionality through its public API, then it's better NOT to assume it will continue to include mex.h for you and include it yourself.
JMeter - what is the difference between using ${} and vars.get() to get the value of a variable

Today when I was debugging my JMeter script, I found a problem that confused me a lot.

1. CSV Data Config element: in the CSV, I set the variable `userId` to `1001200` [![enter image description here](https://i.stack.imgur.com/zbz3n.png)](https://i.stack.imgur.com/zbz3n.png)
2. Then I ran the script below, and got different values for "userId" when using `${userId}` and `vars.get("userId")`. I think they should be the same value, but it seems they are not. After running `vars.put("userId", "-111")`, `${userId}` and `vars.get("userId")` return different values: [![enter image description here](https://i.stack.imgur.com/QaWDX.png)](https://i.stack.imgur.com/QaWDX.png)

So it seems `${}` and `vars.get()` behave differently even though they refer to the same variable. Does anyone know the reason? Thanks in advance.
The answer provided by @user7294900 refers to the case when the *Cache compiled script* option is checked. But even if it's not checked, your script will resolve variables written as `${varName}` **before** execution, while `vars.get("varName")` is resolved **during** execution.

Before JMeter is about to run any element (sampler, pre- or post-processor), it will take every text field and resolve any variables, functions or inline code, identified by `${...}`, to the values available at the time of the evaluation. Thus the syntax `${...}` converts the variable into a constant string, and your code (for Groovy or any other execution engine) will look like this:

```
log.info("***" + "1001200" + "***");
log.info("***" + vars.get("userId") + "***");

vars.put("userId", "-111");

log.info("***" + "1001200" + "***");
log.info("***" + vars.get("userId") + "***");
```

Thus no matter how you change the variable during execution, that value won't change, since it's no longer a variable. `vars.get("userId")`, on the other hand, is a function call and will check the variable's value every single time.
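The two lookup styles can be mimicked in plain Python (purely illustrative — this is not JMeter or Groovy code): `${...}` behaves like a value substituted once before the script runs, while `vars.get(...)` behaves like a lookup deferred until the moment the line executes.

```python
# Illustrative sketch (not JMeter code): ${...} is resolved once, up front;
# vars.get(...) is evaluated every time the line runs.
jmeter_vars = {"userId": "1001200"}

# ${userId}: resolved before the "script" runs -> effectively a string constant
resolved_constant = jmeter_vars["userId"]

# vars.get("userId"): a call evaluated at runtime, every time
def deferred_lookup():
    return jmeter_vars.get("userId")

jmeter_vars["userId"] = "-111"   # like vars.put("userId", "-111")

print(resolved_constant)   # still the pre-resolved value: 1001200
print(deferred_lookup())   # the current value: -111
```

This mirrors why the second `log.info` lines in the resolved script above print different values.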
Commands to cut videos in half horizontally or vertically, and rejoin them later

How can I use ffmpeg to cut a video in half, vertically or horizontally, by resolution? For a 640 x 480 video, I want to separate it into two halves with a resolution of 320 x 480 each, or separate it horizontally into two halves with a resolution of 640 x 240 each. Afterward, I need to be able to join the separated videos again to make a single video at the original resolution. How can this be done?
# separate into two halves

Use the [crop filter](https://ffmpeg.org/ffmpeg-filters.html#crop):

### vertical (top/bottom)

```
ffmpeg -i input -filter_complex "[0]crop=iw:ih/2:0:0[top];[0]crop=iw:ih/2:0:oh[bottom]" -map "[top]" top.mp4 -map "[bottom]" bottom.mp4
```

### horizontal (left/right)

```
ffmpeg -i input -filter_complex "[0]crop=iw/2:ih:0:0[left];[0]crop=iw/2:ih:ow:0[right]" -map "[left]" left.mp4 -map "[right]" right.mp4
```

# join the separated videos

Use the vstack/hstack filters as shown in [Vertically or horizontally stack several videos using ffmpeg?](https://stackoverflow.com/a/33764934/1109017)
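If you drive this from a script, the command lines above can be assembled programmatically. The sketch below is an illustration with placeholder filenames — it only builds the argument lists and does not invoke ffmpeg itself:

```python
# Build the ffmpeg argument lists for the two split variants shown above.
# Run one with subprocess.run(cmd, check=True) if ffmpeg is installed.
def split_halves_cmd(src, axis):
    if axis == "vertical":   # top/bottom halves, each iw x ih/2
        fc = "[0]crop=iw:ih/2:0:0[top];[0]crop=iw:ih/2:0:oh[bottom]"
        outs = ["-map", "[top]", "top.mp4", "-map", "[bottom]", "bottom.mp4"]
    else:                    # left/right halves, each iw/2 x ih
        fc = "[0]crop=iw/2:ih:0:0[left];[0]crop=iw/2:ih:ow:0[right]"
        outs = ["-map", "[left]", "left.mp4", "-map", "[right]", "right.mp4"]
    return ["ffmpeg", "-i", src, "-filter_complex", fc] + outs

cmd = split_halves_cmd("input.mp4", "vertical")
```

Passing the arguments as a list (rather than one shell string) avoids quoting problems with the filtergraph.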
Autoload PHP classes from subfolders using underscore (PEAR style) notation

I'm a bit new to object-oriented PHP and MVCs, so I really need some help please. I have an MVC-style folder structure with subfolders in the filesystem - e.g. `view/classes/subfolder/classname.php`

I'm using mod_rewrite for human-friendly URLs, as `/classname` or `/foldername/classname`, which are then passed to a page loader as underscore-separated values - e.g. `foldername_classname`

```
// Page Loader File
require_once($_SERVER['DOCUMENT_ROOT'].'/local/classLoader.php');
session_start();
$page = new $_REQUEST['page'];
```

I have previously been using an [if / else if / else] block to test in each possible folder, but this seems inefficient, so I'm looking for a better way to have the autoloader look in many different locations.

Here's my latest failure, which doesn't manage to find any of the classes requested and just outputs an exception for each, ending up with a fatal error!:

```
function classToPath($class) {
    $path = str_replace('_', '/', $class) . '.php';
    return $path;
}

function autoloadController($class) {
    echo 'LoadController'.' ';
    $root = '/controller/classes/';
    $pathtoclass = $root.classToPath($class);
    try {
        if( file_exists($pathtoclass) )
            require_once($pathtoclass);
        else
            throw new Exception('Cannot load controller '.$class);
    } catch(Exception $e) {
        echo 'Controller exception: ' . $e->getMessage() . " ";
    }
}

function autoloadModel($class) {
    echo 'LoadModel'.' ';
    $root = '/model/classes/';
    $pathtoclass = $root.classToPath($class);
    try {
        if( file_exists($pathtoclass) )
            require_once($pathtoclass);
        else
            throw new Exception('Cannot load model '.$class);
    } catch(Exception $e) {
        echo 'Model exception: ' . $e->getMessage() . " ";
    }
}

function autoloadView($class) {
    echo 'LoadView'.' ';
    $root = '/view/classes/';
    $pathtoclass = $root.classToPath($class);
    try {
        if( file_exists($pathtoclass) )
            require_once($pathtoclass);
        else
            throw new Exception('Cannot load view '.$class);
    } catch(Exception $e) {
        echo 'View exception: ' . $e->getMessage() . " ";
    }
}

spl_autoload_register('autoloadController');
spl_autoload_register('autoloadModel');
spl_autoload_register('autoloadView');
```

---

I was also wondering exactly how the URL to folder/class mapping should work - i.e. URL: `/foldername/classname` mod_rewritten to `foldername_classname`; with a class filename of `classname.php` under the `foldername` folder; and a PHP class definition of `class foldername_classname extends another_class { etc.`

Is this the correct method?
Rather by chance, relating to [a different matter](https://stackoverflow.com/questions/5649457/is-it-possible-to-have-two-classes-with-the-same-name-if-theyre-in-different-fol), I found [the solution to my problem](https://stackoverflow.com/questions/5280347/autoload-classes-from-different-folders)!

So to answer my own questions (and hopefully help others):

- the filename should be just the last part of the URL, i.e. `.../not-this-part/but-this`
- the folder structure should be a map of the URL structure, e.g. `.../folder/subfolder/class`
- and the class should be defined as the full path but with underscores instead of forward slashes, e.g. `class folder_subfolder_class {`

I then wrote a function for each class system (model, view & controller) and used `spl_autoload_register()` to register each function as an `__autoload` function. Thus...

```
function loadController($class) {
    $path = $_SERVER['DOCUMENT_ROOT'].'/application/controller/';
    $filename = str_replace( '_', DIRECTORY_SEPARATOR, strtolower($class) ).'.php';
    if( file_exists($path.$filename) ) {
        require_once($path.$filename);
    }
}
etc..etc..
spl_autoload_register('loadController');
...
```

For those wondering about the `mod_rewrite` part...

```
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /

RewriteCond %{SCRIPT_FILENAME} !-f
RewriteCond %{SCRIPT_FILENAME} !-d

RewriteRule ^([a-z]+)/([a-z]+)/?$ /?page=$1_$2 [QSA,NC,L]
RewriteRule ^([a-z]+)/?$ /?page=$1 [QSA,NC,L]
</IfModule>
```
What is best practice for batch drawing objects with different transformations?

I'm conceptualising a good approach to rendering as many disjointed pieces of geometry with a single draw call in OpenGL, and the wall I'm up against is the best way to do so when each piece has a different translation and maybe rotation, since you don't have the luxury of updating the model view uniform between single object draws.

I've read a few other questions here and elsewhere and it seems the directions people are pointed in are quite varied. It would be nice to list the main methods of doing this and attempt to isolate what is most common or recommended. Here are the ideas I've considered:

[edit: removed mention of Instancing as it doesn't really apply here]

1. Creating matrix transformations in the shader. Here I'd send a translation vector or maybe a rotation angle or quaternion as part of the attributes. The advantage is it would work cross-platform including mobile. But it seems a bit wasteful to send the exact same transformation data for every single vertex in an object, as an attribute. Without instancing, I'd have to repeat these identical vectors or scalars for a single object many many times in a VBO as part of the interleave array, right? The other drawback is I'm relying on the shader to do the math; I don't know if this is wise or not.
2. Similar to 1), but instead of relying on the shader to do the matrix calculations, I instead do these on the client side but still send through the final model view matrix as a stream of 16 floats in the VBO. But as far as I can tell, without instancing, I'd have to repeat this identical stream for every single vertex in the VBO, right? Just seems wasteful. The tradeoff with 2) above is that I am sending more data in the VBO per vertex (16 floats rather than a 3-float vector for translation and maybe a 4-float quaternion), but requiring the shader to do less work.
3. Skip all the above limitations and instead compromise with a separate draw call for each object. This is what is typically "taught" in the books I'm reading, no doubt for simplicity's sake.

Are there other common methods than these? As an academic question, I'm curious if all the above are feasible and "acceptable" or if one of them is clearly a winner over the others? If I was to exclusively use desktop GL, is instancing the primary way for achieving this?
Two considerations:

> Generally speaking, if you have multiple objects, with each object using independent transforms, you use multiple draw calls. That's what they're there for. The old NVIDIA "Batch Batch Batch" presentation cited between 10,000 and 40,000 draw calls per-frame (in D3D. More in GL) for a 1GHz GPU. Nowadays, you're looking at rather more than that. So unless you're dealing with tens of thousands of individual objects, all of them being different (so no instancing), odds are good that you'll be fine.

Another idea:

> Take the modelview matrix calculations out of the shader entirely and just pass the vertices after multiplication. This allows a single draw call for many objects in different orientations and translations. The cost just comes at all the CPU calculations, but I suppose if that bottleneck is not as big as the bottleneck of multiple draw calls, it would be worth it.

(Taken from [here](http://www.opengl.org/discussion_boards/showthread.php/181519-What-is-best-practice-for-batch-drawing-objects-with-different-transformations?p=1250196#post1250196).)
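The second quoted idea — doing the transform math on the CPU and submitting pre-transformed vertices in one buffer — can be sketched in pure Python (a generic illustration, not tied to any particular GL code; only translation is applied here, where a real implementation would apply each object's full model matrix):

```python
# CPU-side batching sketch: apply each object's translation to its own
# vertices, then concatenate everything into one buffer for a single draw call.
def batch(objects):
    """objects: list of (vertices, (tx, ty, tz)) pairs."""
    buffer = []
    for verts, (tx, ty, tz) in objects:
        for (x, y, z) in verts:
            buffer.append((x + tx, y + ty, z + tz))  # full matrix multiply in practice
    return buffer

# Two copies of one triangle, each with its own translation, in one "VBO"
tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
vbo = batch([(tri, (5.0, 0.0, 0.0)), (tri, (0.0, 5.0, 0.0))])
```

The trade-off discussed in the quote is visible here: the per-vertex work moves from the GPU to the CPU, and the buffer must be rebuilt whenever an object moves.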
Getting elements from a MySQL table into an ArrayList

I have an ArrayList that looks like this:

```
[ 1 2011-05-10 1 22.0,
 2 2011-05-10 2 5555.0,
 3 2011-05-11 3 123.0,
 4 2011-05-11 2 212.0,
 5 2011-05-30 1 3000.0,
 6 2011-05-30 1 30.0,
 7 2011-06-06 1 307.0,
 8 2011-06-06 1 307.0,
 9 2011-06-06 1 307.0,
 10 2011-06-08 2 3070.0,
 11 2011-06-03 2 356.0,
 12 2011-05-10 2 100.0,
 13 2011-05-30 1 3500.0,
 14 2011-05-10 3 1000.0,
 15 2011-05-10 3 1000.0,
 16 2011-05-07 1 5000.0,
 17 2011-05-07 4 500.0,
 18 2011-08-07 3 1500.0,
 19 2011-08-08 6 11500.0,
 20 2011-08-08 4 11500.0,
 21 2011-08-08 7 11500.0,
 22 2011-06-07 8 3000.0]
```

Here is the code for how I got this ArrayList:

```
@Override
public ArrayList<Expenses> getExpenses() {
    ArrayList<Expenses> expenses = new ArrayList<Expenses>();
    try {
        Statement stmt = myConnection.createStatement();
        ResultSet result = stmt.executeQuery("SELECT * FROM expenses");
        while(result.next()){
            Expenses expense = new Expenses();
            expense.setNum(result.getInt(1));
            expense.setPayment(result.getString(2));
            expense.setReceiver(result.getInt(3));
            expense.setValue(result.getDouble(4));
            expenses.add(expense);
        }
    } catch (SQLException e){
        System.out.println(e.getMessage());
    }
    return expenses;
}
```

But what I want is an ArrayList in which each element is not a row of the table (what I have now); instead, every individual cell of the table should be an element of the array: `[1, 2011-05-10, 1, 22.0, 2, 2011-05-10, 2, 5555.0, 3, 2011-05-11, 3, 123.0,]`. Can anyone help me with that?
The only way you could add elements of different types into an ArrayList is to treat them as general objects. However, the code you already have is much superior.

```
@Override
public ArrayList<Object> getExpenses() {
    ArrayList<Object> expenses = new ArrayList<Object>();
    try {
        Statement stmt = myConnection.createStatement();
        ResultSet result = stmt.executeQuery("SELECT * FROM expenses");
        while(result.next()) {
            expenses.add(new Integer(result.getInt(1)));
            expenses.add(result.getString(2));
            expenses.add(new Integer(result.getInt(3)));
            expenses.add(result.getDouble(4));
        }
    } catch (SQLException e) {
        System.out.println(e.getMessage());
    }
    return expenses;
}
```
JavaFX 13 - TableView vertical ScrollBar handler returns NullPointerException

I need to handle the JavaFX 13 scroll-to-bottom event, but this code:

```
@Override
public void initialize(URL location, ResourceBundle resources) {
    // ...
    // ScrollBar verticalBar = (ScrollBar) this.emailsTable.lookupAll(".scroll-bar");
    ScrollBar verticalBar = (ScrollBar) this.emailsTable.lookup(".scroll-bar:vertical");
    verticalBar.valueProperty().addListener((obs, oldValue, newValue) -> { // <-- Line 49
        // if (verticalBar.getOrientation() != Orientation.VERTICAL) return;
        if (newValue.doubleValue() >= verticalBar.getMax()) {
            System.out.println("BOTTOM!");
        }
    });
    // ...
}
```

...returns this error at FXML load:

```
Caused by: java.lang.NullPointerException
    at it.unito.prog.views.MainView.initialize(MainView.java:49)
    at javafx.fxml/javafx.fxml.FXMLLoader.loadImpl(FXMLLoader.java:2573)
    ... 19 more
```
You can scroll using [`tableView.scrollTo()`](https://openjfx.io/javadoc/13/javafx.controls/javafx/scene/control/TableView.html#scrollTo(int)) to scroll to either an index or a specific item.

To be notified that a scroll has occurred, you can use [`tableView.setOnScrollTo()`](https://openjfx.io/javadoc/13/javafx.controls/javafx/scene/control/TableView.html#setOnScrollTo(javafx.event.EventHandler)).

*Don't use a lookup for this task*

The scroll bar is only shown as needed. It may or may not be there when you look it up. If you try to [`lookup()`](https://openjfx.io/javadoc/13/javafx.graphics/javafx/scene/Node.html#lookup(java.lang.String)) the scroll bar before you add items to the table view, or before you add the table view to a scene, or before the scene has undergone a rendering pass, a scroll bar is almost certain to not be there. If the scroll bar is not there when you try to look it up, then the lookup method will return a null value (which is what you are seeing).

Even if a scroll bar is there when you first look it up, it may subsequently be removed and a new one added as needed, so your original reference will end up becoming invalid. So I don't recommend your lookup-based approach.
PDF: how the starting position is calculated through Tm and TD

I am reading the PDF specification and am unable to calculate the starting offset of the x axis. The visible text is as under:

```
Preface vii
Acknowledgments ix
INTRODUCTION 3
```

The uncompressed PDF text stream is as under:

```
10 0 0 10 99 475.09 Tm         <- New setting: fontsize=10 and x and y axis
-.2 Tc                         <- Character spacing
[( P)-207(r)-181(e)-211(f)-187(a)-207(c)-191(e)-200( )-500( )-500( )]TJ  <- Array having text
/F2 1 Tf                       <- New font is set
7.5 0 0 7.5 137.289 475.09 Tm  <- New settings: fontsize=7.5 and x and y axis
.002 Tc                        <- Character spacing
(vii)Tj                        <- Text string
/F6 1 Tf                       <- New font
10 0 0 10 144.857 475.09 Tm    <- New settings: fontsize=10 and x and y axis
-.2 Tc                         <- Character spacing
( )Tj                          <- Text string
```

What would the new x axis and y axis be now?

```
-4.5857 -1.3 TD                <- What x-axis?
[( A)-226(c)-190(k)-202(n)-201(o)-197(w)-192(l)-199(e)-200(d)-211(g)-216(m)-200(e)-201(n)-204(t)-201(s)-200( )-500( )-500( )]TJ
/F2 1 Tf                       <- New font
7.5 0 0 7.5 178.759 462.09 Tm  <- New text matrix
0 Tc                           <- Character spacing
(ix)Tj                         <- Text string
/F6 1 Tf
10 0 0 10 184.309 462.09 Tm
-.2 Tc
( )Tj
```

That means the y axis must be equal to 462.09? But how is it calculated?

```
/F2 1 Tf
7 0 0 7 99.4 434.09 Tm
.1599 Tc
-.3799 Tw
[( IN)-19.1(T)-10.1(R)1.9(ODUCT)-20.1(I)6.9(O)6.9(N)-.1( )-660( )-660( )]TJ
/F2 1 Tf
-5.5427 -1.8857 TD
-.22 Tc
[( )-42.9( )]TJ
7 0 0 7 99.6 407.69 Tm
.1663 Tc
[(CH)5.3(APT)-13.7(E)2.3(R)6.3( )]TJ
```

Again the same thing repeats, but how do I calculate the x-axis position?
You are missing that `Tm` sets a *matrix*, not just the font size:

```
10 0 0 10 144.857 475.09 Tm <- New settings: fontsize=10 and x and y axis
```

In addition to the font scale, this also sets the origin at `144.857 475.09`. Next, the instruction

```
-4.5857 -1.3 TD <- What x-axis?
```

moves `-4.5857` 'units' across and `-1.3` 'units' down. The size of the units is in *text space*, that is, scaled horizontally by `10` and vertically by `10` -- two separate calculations. That comes down to horizontally -45.857 and vertically -13 units, in *graphics* space and relative to the origin set by the most recent `Tm`:

```
144.857 + 10*-4.5857 = 99.0   (horizontal)
475.09  + 10*-1.3    = 462.09 (vertical)
```
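The arithmetic can be captured in a small helper (an illustration of the rule, not library code; it assumes an axis-aligned `Tm`, i.e. the b and c entries are zero, as in the stream above):

```python
import math

# Apply a Td/TD offset to the line origin set by the most recent Tm.
# Tm operands "a b c d e f": horizontal scale a, vertical scale d, origin (e, f).
# (Assumes b = c = 0, i.e. no rotation/skew.)
def apply_td(tm, tx, ty):
    a, b, c, d, e, f = tm
    return (e + a * tx, f + d * ty)

# "10 0 0 10 144.857 475.09 Tm" followed by "-4.5857 -1.3 TD"
x, y = apply_td((10, 0, 0, 10, 144.857, 475.09), -4.5857, -1.3)
print(round(x, 3), round(y, 3))
```

The same helper applied to the third excerpt's `Tm`/`TD` pair reproduces the next line's position in the same way.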
How to use a constructor with parameters from one without parameters

I have a controller which takes a `DbContext` and one which does not, i.e.:

```
public CalendarController()
{
    var db = new DbContext();
    CalendarController(db); // <= Not allowed
}

public CalendarController(IDbContext db)
{
    _calendarManager = new CalendarManager(db);
    _userManager = new UserManager(db);
    _farmingActionManager = new FarmingActionManager(db);
    _cropManager = new CropManager(db);
}
```

Unfortunately the above gives an error on the `CalendarController(db)` line:

> Expression denotes a 'type', where a 'variable', 'value' or 'method group' was expected

Is it possible to call one constructor from another? I don't want to duplicate all the code.
You have to chain to the constructor like this:

```
public CalendarController() : this(new DbContext())
{
}
```

That's the syntax for chaining to *any* constructor in the same class - it doesn't have to be from a parameterless one to a parameterized one, although it does have to be to a different one :)

Note that this comes before the body of the constructor, so you can't do anything else first, although you *can* call static methods to compute arguments to the other constructor. You can't use *instance* methods or properties though. So this is okay:

```
public Foo() : this(GetDefaultBar())
{
}

public Foo(int bar) { ... }

private static int GetDefaultBar() { ... }
```

But this isn't:

```
public Foo() : this(GetDefaultBar())
{
}

public Foo(int bar) { ... }

private int GetDefaultBar() { ... }
```

The syntax to chain to a constructor in the base class is similar:

```
public Foo() : base(...)
```

If you don't specify either `: base(...)` or `: this(...)` the compiler implicitly adds a `: base()` for you. (That doesn't necessarily chain to a parameterless constructor; it could have optional parameters.)
How do you manage your Eclipse installation?

How do you manage your Eclipse installation, i.e. the basic installation, plug-ins and workspace settings, with regard to consistent updates (including major ones, 3.5 => 3.6) and usage on two or more computers (desktop + notebook)?

My current setup is basically to manage several installations in parallel: I manually add new plug-ins installed on one machine to the others, and when I haven't used an installation in a long time I copy the whole directory from one location to the other. For updates I usually run the updater about once a month to get the latest versions; major updates I do manually by downloading the basic distribution and re-installing all the plug-ins in the matching version for the new major Eclipse version.

However, this approach has some drawbacks:

- time intensive
- update inconsistencies (update sites change location, an update doesn't work because of some version inconsistency between plug-ins that requires a lot of manual fixing, etc.) (this has gotten better with 3.5 but still bugs me)
- no "global" update site; I manually have to manage several locations

I tried alternatives like Yoxos for configuration management, but there plug-ins were missing and/or not as well tested together as I expected. I took a look at IDEA as an IDE; the one thing I really loved was the update management: centralized, and 90% of the functionality I'd be using is provided as a core that is tested and updated as one.

Thus the question: how do you manage your Eclipse installations and deal with updates? From my experience with other Eclipse users, they have at least the same problem with updates, but I haven't heard of a solution yet.
I've heard good things from other developers about Google's [Workspace Mechanic](http://code.google.com/a/eclipselabs.org/p/workspacemechanic/). That's what they use inside Google to manage Eclipse environments across teams. It was open sourced in May 2010, and you may find more information in the [blog post](http://google-opensource.blogspot.com/2010/05/introducing-workspace-mechanic-for.html). *Note that the Workspace Mechanic does not yet manage plug-in installations (see [discussion thread](http://groups.google.com/group/workspacemechanic/browse_thread/thread/1ec0f87adb81882f)): it remembers "plugin preferences", but installing the plug-in themselves is not yet supported.*
Email sending error with C# and GoDaddy server

Hi, I am using hosting from GoDaddy and my application was made in C#. The problem occurs when sending an email. This is my code:

```
public ResponseDto sendEmail(EmailDto emailDto)
{
    try
    {
        MailMessage message = new MailMessage();
        SmtpClient client = new SmtpClient();
        message.From = new MailAddress("[email protected]");
        message.Subject = "have a new mail";
        message.Body = "info: \n " + "name: " + emailDto.Name + "\nmail: " + emailDto.Email + "\nMessage: " + emailDto.Message;
        message.To.Add("[email protected]");
        client.EnableSsl = true;
        client.UseDefaultCredentials = false;
        client.Port = 587;
        client.Host = "smtp.gmail.com";
        client.Credentials = new System.Net.NetworkCredential("[email protected]", "(password");
        client.Send(message);
    }
    catch (Exception e)
    {
        return new ResponseDto { message = e.ToString(), Success = false };
    }
    return new ResponseDto { message = "success", Success = true };
}
```

The account that sends the email is Gmail. The error is the following:

> System.Net.Mail.SmtpException: Failure sending mail. ---> System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: An attempt was made to access a socket in a way forbidden by its access permissions 74.125.199.109:587
> at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
> at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)
> --- End of inner exception stack trace ---
> at System.Net.ServicePoint.GetConnection(PooledStream PooledStream, Object owner, Boolean async, IPAddress& address, Socket& abortSocket, Socket& abortSocket6)
> at System.Net.PooledStream.Activate(Object owningObject, Boolean async, GeneralAsyncDelegate asyncCallback)
> at System.Net.PooledStream.Activate(Object owningObject, GeneralAsyncDelegate asyncCallback)
> at System.Net.ConnectionPool.GetConnection(Object owningObject, GeneralAsyncDelegate asyncCallback, Int32 creationTimeout)
> at System.Net.Mail.SmtpConnection.GetConnection(ServicePoint servicePoint)
> at System.Net.Mail.SmtpTransport.GetConnection(ServicePoint servicePoint)
> at System.Net.Mail.SmtpClient.GetConnection()
> at System.Net.Mail.SmtpClient.Send(MailMessage message)
> --- End of inner exception stack trace ---
> at System.Net.Mail.SmtpClient.Send(MailMessage message)
> at Core.EmailManager.sendEmail(EmailDto emailDto)

What could be the problem?
I solved it. I hope this answer helps somebody; the code is here:

```
public ResponseDto sendEmail(EmailDto emailDto)
{
    try
    {
        MailMessage message = new MailMessage();
        SmtpClient client = new SmtpClient();
        message.From = new MailAddress("[email protected]");
        message.Subject = "have a new mail";
        message.Body = "info: \n " + "name: " + emailDto.Name + "\nmail: " + emailDto.Email + "\nMessage: " + emailDto.Message;
        message.To.Add("[email protected]");
        client.EnableSsl = false;
        client.UseDefaultCredentials = false;
        client.Port = 25;
        client.Host = "relay-hosting.secureserver.net";
        client.Send(message);
    }
    catch (Exception e)
    {
        return new ResponseDto { message = e.ToString(), Success = false };
    }
    return new ResponseDto { message = "success", Success = true };
}
```

The problem was that other relay/SMTP servers will not work from GoDaddy's hosting, so I changed the port and the host; I also don't need to specify the password. This is the link that helped me: https://au.godaddy.com/community/Developer-Cloud-Portal/Unable-to-send-email-from-C-net-application-from-website/td-p/2394
Alternative to CADisplayLink for Mac OS X

In iOS there is CADisplayLink; in Mac OS X there is CVDisplayLink, but I can't find a way to use it — all the examples are related to OpenGL. I created this custom UIView and I want to translate it to an NSView:

```
#import "StarView.h"
#import <QuartzCore/QuartzCore.h>

#define MAX_FPS (100.0)
#define MIN_FPS (MAX_FPS/40.0)
#define FRAME_TIME (1.0 / MAX_FPS)
#define MAX_CPF (MAX_FPS / MIN_FPS)
#define aEPS (0.0001f)

@implementation StarView

@synthesize starImage = _starImage;
@synthesize x, y;

- (void)baseInit {
    _starImage = nil;
    CADisplayLink *displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(runLoop)];
    [displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
}

- (id)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        [self baseInit];
    }
    return self;
}

- (id)initWithCoder:(NSCoder *)aDecoder {
    if ((self = [super initWithCoder:aDecoder])) {
        [self baseInit];
    }
    return self;
}

// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect {
    [self.starImage drawAtPoint:CGPointMake(self.x, self.y)];
}

- (void)cycle {
    self.x += 5;
    if (self.x > 230) {
        self.x = 0;
    }
}

- (void)runLoop {
    //NSLog(@"called");
    static CFTimeInterval last_time = 0.0f;
    static float cycles_left_over = 0.0f;
    static float dt2 = 0.0f;
    float dropAnimRate = (2.1f/25.0f);
    CFTimeInterval current_time = CACurrentMediaTime();
    float dt = current_time - last_time + cycles_left_over;
    dt2 += dt;

    [self setNeedsDisplay];

    if (dt > (MAX_CPF * FRAME_TIME)) {
        dt = (MAX_CPF * FRAME_TIME);
    }

    while (dt > FRAME_TIME) {
        if (dt2 > (dropAnimRate - aEPS)) {
            [self cycle];
            dt2 = 0.0f;
        }
        dt -= FRAME_TIME;
    }

    cycles_left_over = dt;
    last_time = current_time;
}

@end
```

The part that I can't translate is this one:

```
- (void)baseInit {
    _starImage = nil;
    CADisplayLink *displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(runLoop)];
    [displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
}
```

I know that I can use an NSTimer, but it doesn't have the same accuracy.
You can configure a CVDisplayLink to work independently of OpenGL. The following is code that I've used to set up a CVDisplayLink to trigger regular capture and rendering from an industrial camera:

```
CGDirectDisplayID displayID = CGMainDisplayID();
CVReturn error = kCVReturnSuccess;
error = CVDisplayLinkCreateWithCGDisplay(displayID, &displayLink);
if (error)
{
    NSLog(@"DisplayLink created with error:%d", error);
    displayLink = NULL;
}
CVDisplayLinkSetOutputCallback(displayLink, renderCallback, (__bridge void *)self);
```

my `renderCallback` function looks something like this:

```
static CVReturn renderCallback(CVDisplayLinkRef displayLink,
                               const CVTimeStamp *inNow,
                               const CVTimeStamp *inOutputTime,
                               CVOptionFlags flagsIn,
                               CVOptionFlags *flagsOut,
                               void *displayLinkContext)
{
    return [(__bridge SPVideoView *)displayLinkContext renderTime:inOutputTime];
}
```

You'll have to replace the above with your own class and callback method, of course. The `__bridge` in the code is there for ARC support, so you may not need that in a manually reference counted environment.

To start capturing events from the display link, you'd use

```
CVDisplayLinkStart(displayLink);
```

and to stop

```
CVDisplayLinkStop(displayLink);
```

When done, be sure to clean up after your created CVDisplayLink using `CVDisplayLinkRelease()`.

If I may make one last comment, the reason why you see CVDisplayLink normally paired with OpenGL is that you usually don't want to be doing rapid refreshes on the screen using Core Graphics. If you need to be animating something at a 30-60 FPS rate, you're going to either want to draw directly using OpenGL or use Core Animation and let it handle the animation timing for you. Core Graphics drawing is not the way to fluidly animate things.
XSLT: need to select a single quote

I need to do this:

```
<xsl:with-param name="callme" select="'validateInput(this,'an');'"/>
```

I've read [Escape single quote in xslt concat function](https://stackoverflow.com/questions/2887281/escape-single-quote-in-xslt-concat-function) and it tells me to replace `'` with `&apos;`. I've done that, yet it's still not working. Does anyone know how to fix this?

```
<xsl:with-param name="callme" select="'validateInput(this,&apos;an&apos;);'"/>
```
**Something simple that can be used in any version of XSLT**:

```
<xsl:variable name="vApos">'</xsl:variable>
```

**I am frequently using the same technique for specifying a quote**:

```
<xsl:variable name="vQ">"</xsl:variable>
```

**Then you can intersperse any of these variables into any text** using the standard XPath function `concat()`:

```
concat('This movie is named ', $vQ, 'Father', $vApos, 's secrets', $vQ)
```

**So, this transformation**:

```
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
 <xsl:output method="text"/>

 <xsl:variable name="vApos">'</xsl:variable>
 <xsl:variable name="vQ">"</xsl:variable>

 <xsl:template match="/">
  <xsl:value-of select=
  "concat('This movie is named ', $vQ, 'Father', $vApos, 's secrets', $vQ)
  "/>
 </xsl:template>
</xsl:stylesheet>
```

**produces**:

```
This movie is named "Father's secrets"
```
How to Render a custom view with plugins in Zend Framework 2 I'm working on an app that needs to send an email after a process is complete. Since the email needs to be HTML I had the bright idea of rendering a view as the email message body so that I can implement a "Click here to see this on your browser" functionality. This is all taking part inside a controller that implements AbstractRestfulController so the view itself resides in my front end module so that it can be accessed from a URL through a browser. However I am getting an > > No RouteStackInterface instance provided > > > error when I try to render the view. This is my code: ``` use Zend\View\HelperPluginManager; use Zend\View\Resolver; use Zend\View\Renderer\PhpRenderer; //Instantiate the renderer $renderer = new PhpRenderer(); $resolver = new Resolver\TemplateMapResolver(array( 'layout' => realpath(__DIR__ . '/../../../../Application/view/layout/email.layout.phtml'), 'email/view' => realpath(__DIR__ . '/../../../../Application/view/application/email/view.phtml') ) ); $renderer->setResolver($resolver); $renderer->url()->setRouter($this->getEvent()->getRouter()); ``` I saw on the API documentation that you can set the router to the URL plugin by giving it a RouteStackInterface, hence the last line. However, that didn't seem to work either. I would like to use the same view to send an HTML email message that has links in the message body & to display on the browser through a URL. Any ideas/suggestions as to how to accomplish this? **EDIT/SOLUTION:** As per dotwired's answer below, getting the instance of the renderer from the service manager causes the plugins to be instantiated correctly. So this is the code that worked: module.config.php: ``` array('view_manager' => array( 'template_map' => array( 'layout/email' => __DIR__ . '/../../Application/view/layout/email.layout.phtml', 'email/share' => __DIR__ . '/../../Application/view/application/email/share.phtml', 'email/view' => __DIR__ . 
'/../../Application/view/application/email/view.phtml', ), ), ); ``` REST controller: ``` use Zend\View\Model\ViewModel; //get the renderer $renderer = $this->getServiceLocator()->get('Zend\View\Renderer\RendererInterface'); //Create the views $shareView = new ViewModel(array('data' => $data)); $shareView->setTemplate('email/view'); $emailLayout = new ViewModel(array('subject' => $this->_subject, 'content' => $renderer->render($shareView))); $emailLayout->setTemplate('layout/email'); //Render the message $markup = $renderer->render($emailLayout); ``` Using the renderer from the service manager, the $this->url() view helper works without issue.
Just use the module.config.php of your module to specify your email template, like: ``` 'view_manager' => array( 'template_path_stack' => array( 'user' => __DIR__ . '/../view' ), 'template_map' => array( 'email/view' => __DIR__ . '/../view/application/email/view.phtml' ) ), ``` After which you can go on with [this part](http://framework.zend.com/manual/2.0/en/modules/zend.mail.message.html) of the documentation. You can then pass your view template from the renderer to the MimePart which will be used by the MimeMessage like ``` $viewModel = new \Zend\View\Model\ViewModel(); $viewModel->setTemplate('email/view'); $renderer = $this->serviceLocator->get('Zend\View\Renderer\RendererInterface'); $htmlPart = new \Zend\Mime\Part($renderer->render($viewModel)); ```
Inserting rows of zeros at specific places along the rows of a NumPy array I have a two column numpy array. I want to go through each row of the 2nd column, and take the difference between each set of 2 numbers (9.6-0, 19.13-9.6, etc). If the difference is > 15, I want to insert a row of 0s for both columns. I really only need to end up with values in the first column (I only need the second to determine where to put 0s), so if it's easier to split them up that would be fine. This is my input array: ``` [[0.00 0.00] [1.85 9.60] [2.73 19.13] [0.30 28.70] [2.64 38.25] [2.29 47.77] [2.01 57.28] [2.61 66.82] [2.20 76.33] [2.49 85.85] [2.55 104.90] [2.65 114.47] [1.79 123.98] [2.86 133.55]] ``` and it should turn into: ``` [[0.00 0.00] [1.85 9.60] [2.73 19.13] [0.30 28.70] [2.64 38.25] [2.29 47.77] [2.01 57.28] [2.61 66.82] [2.20 76.33] [2.49 85.85] [0.00 0.00] [2.55 104.90] [2.65 114.47] [1.79 123.98] [2.86 133.55]] ```
Assuming `A` as the input array, here's a vectorized approach based on initialization with zeros - ``` import numpy as np # Get indices at which such diff>15 occur cut_idx = np.where(np.diff(A[:,1]) > 15)[0] # Initialize output array out = np.zeros((A.shape[0]+len(cut_idx),2),dtype=A.dtype) # Get row indices in the output array at which rows from A are to be inserted. # In other words, avoid rows to be kept as zeros. Finally, insert rows from A. idx = ~np.in1d(np.arange(out.shape[0]),cut_idx + np.arange(1,len(cut_idx)+1)) out[idx] = A ``` Sample input, output - ``` In [50]: A # Different from the one posted in question to show variety Out[50]: array([[ 0. , 0. ], [ 1.85, 0.6 ], [ 2.73, 19.13], [ 2.2 , 76.33], [ 2.49, 85.85], [ 2.55, 104.9 ], [ 2.65, 114.47], [ 1.79, 163.98], [ 2.86, 169.55]]) In [51]: out Out[51]: array([[ 0. , 0. ], [ 1.85, 0.6 ], [ 0. , 0. ], [ 2.73, 19.13], [ 0. , 0. ], [ 2.2 , 76.33], [ 2.49, 85.85], [ 0. , 0. ], [ 2.55, 104.9 ], [ 2.65, 114.47], [ 0. , 0. ], [ 1.79, 163.98], [ 2.86, 169.55]]) ```
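If building the output with a mask feels heavy, the same result can also be obtained with a single call to `np.insert`, which inserts values before the given indices of the original array — a quick sketch (the sample `A` below is just an illustrative subset):

```python
import numpy as np

# Illustrative input with two gaps > 15 in the second column
A = np.array([[0.00,   0.00],
              [1.85,   9.60],
              [2.73,  19.13],
              [2.49,  85.85],
              [2.55, 104.90],
              [2.65, 114.47]])

# Indices at which the jump to the next row exceeds 15
cut_idx = np.where(np.diff(A[:, 1]) > 15)[0]

# Insert a row of zeros after each such row (i.e. before index + 1);
# np.insert interprets multiple indices relative to the original array
out = np.insert(A, cut_idx + 1, 0, axis=0)
print(out)
```

Both approaches produce the same output; `np.insert` is shorter, though both allocate a new output array.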
Monodroid Javascript Call-back I'm trying to use monodroid with webkit to create an app. I am having a problem with letting the html page call a javascript method, which would be an interface to a method in my app. There is a tutorial on this at <http://developer.android.com/guide/webapps/webview.html> for how to do it in Java, but the same code does not work in C#. This exchange at [call monodroid method from javascript example](https://stackoverflow.com/questions/9402531/call-monodroid-method-from-javascript-example) linked a few threads about using JNI to get around a problem with monodroid and the javascript interface method, but I haven't been able to get it to work. Now, I am trying to follow some code instructions, but without success: ``` // Java class RunnableInvoker { Runnable r; public RunnableInvoker (Runnable r) { this.r = r; } // must match the javascript name: public void doSomething() { r.run (); } } From C#, you'd create a class that implements Java.Lang.IRunnable: // C# class SomeAction : Java.Lang.Object, Java.Lang.IRunnable { Action a; public SomeAction(Action a) {this.a = a;} public void Run () {a();} } Then to wire things up: // The C# action to invoke var action = new SomeAction(() => {/* ... */}); // Create the JavaScript bridge object: IntPtr RunnableInvoker_Class = JNIEnv.FindClass("RunnableInvoker"); IntPtr RunnableInvoker_ctor = JNIEnv.GetMethodID (RunnableInvoker_Class, "<init>", "(Ljava/lang/Runnable;)V"); IntPtr instance = JNIEnv.NewObject(RunnableInvoker_Class, RunnableInvoker_ctor, new JValue (action)); // Hook up WebView to JS object web_view.AddJavascriptInterface (new Java.Lang.Object(instance, JniHandleOwnership.TransferLocalRef), "Android"); ``` This code is supposed to let someone on the HTML page inside the app click a button, invoke the Java, which will then invoke C#. This hasn't worked.
I was wondering if anyone has an idea of what the problem is, or another approach that would let an HTML button loaded in WebKit call a C# method, or let my C# code call a JavaScript method.
Let's take a step back. You want to invoke C# code from JavaScript. If you don't mind squinting just-so, it's quite straightforward. First, let's start with our Layout XML: ``` <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent"> <WebView android:id="@+id/web" android:layout_width="fill_parent" android:layout_height="wrap_content" /> </LinearLayout> ``` Now we can get to app itself: ``` [Activity (Label = "Scratch.WebKit", MainLauncher = true)] public class Activity1 : Activity { const string html = @" <html> <body> <p>This is a paragraph.</p> <button type=""button"" onClick=""Foo.run()"">Click Me!</button> </body> </html>"; protected override void OnCreate (Bundle bundle) { base.OnCreate (bundle); // Set our view from the "main" layout resource SetContentView (Resource.Layout.Main); WebView view = FindViewById<WebView>(Resource.Id.web); view.Settings.JavaScriptEnabled = true; view.SetWebChromeClient (new MyWebChromeClient ()); view.LoadData (html, "text/html", null); view.AddJavascriptInterface(new Foo(this), "Foo"); } } ``` `Activity1.html` is the HTML content we're going to show. The only interesting thing is that we provide a `/button/@onClick` attribute which invokes the JavaScript fragment `Foo.run()`. Note the method name ("run") and that it starts with a lowercase 'r'; we will return to this later. There are three other things of note: 1. We enable JavaScript with `view.Settings.JavaScriptEnabled=true`. Without this, we can't use JavaScript. 2. We call `view.SetWebChromeClient()` with an instance of a `MyWebChromeClient` class (defined later). This is a bit of "cargo-cult programming": if we don't provide it, things don't work; I don't know why. 
If we instead do the seemingly equivalent `view.SetWebChromeClient(new WebChromeClient())`, we get an error at runtime: ``` E/Web Console( 4865): Uncaught ReferenceError: Foo is not defined at data:text/html;null,%3Chtml%3E%3Cbody%3E%3Cp%3EThis%20is%20a%20paragraph.%3C/p%3E%3Cbutton%20type=%22button%22%20onClick=%22Foo.run()%22%3EClick%20Me!%3C/button%3E%3C/body%3E%3C/html%3E:1 ``` This makes no sense to me either. 3. We call `view.AddJavascriptInterface()` to associate the JavaScript name `"Foo"` with an instance of the class `Foo`. Now we need the `MyWebChromeClient` class: ``` class MyWebChromeClient : WebChromeClient { } ``` Note that it doesn't actually do anything, so it's all the more interesting that just using a `WebChromeClient` instance causes things to fail. :-/ Finally, we get to the "interesting" bit, the `Foo` class which was associated above with the `"Foo"` JavaScript variable: ``` class Foo : Java.Lang.Object, Java.Lang.IRunnable { public Foo (Context context) { this.context = context; } Context context; public void Run () { Console.WriteLine ("Foo.Run invoked!"); Toast.MakeText (context, "This is a Toast from C#!", ToastLength.Short) .Show(); } } ``` It just shows a short message when the `Run()` method is invoked. # How this works During the Mono for Android build process, [Android Callable Wrappers](http://docs.xamarin.com/android/advanced_topics/architecture/android_callable_wrappers) are created for every `Java.Lang.Object` subclass, which declares all overridden methods and all implemented Java interfaces. This includes the above `Foo` class, resulting in the Android Callable Wrapper: ``` package scratch.webkit; public class Foo extends java.lang.Object implements java.lang.Runnable { @Override public void run () { n_run (); } private native void n_run (); // details omitted for clarity } ``` When `view.AddJavascriptInterface(new Foo(this), "Foo")` was invoked, this wasn't associating the JavaScript `"Foo"` variable with the C# type. 
This was associating the JavaScript `"Foo"` variable with an Android Callable Wrapper instance associated with the instance of the C# type. (Ah, indirections...) Now we get to the aforementioned "squinting." The C# `Foo` class implemented the `Java.Lang.IRunnable` interface, which is the C# binding for the `java.lang.Runnable` interface. The Android Callable Wrapper thus declares that it implements the `java.lang.Runnable` interface, and declares the `Runnable.run` method. Android, and thus JavaScript-within-Android, does not "see" your C# types. They instead see the Android Callable Wrappers. Consequently, the JavaScript code isn't calling `Foo.Run()` (capital 'R'), it's calling `Foo.run()` (lowercase 'r'), because the type that Android/JavaScript has access to declares a `run()` method, *not* a `Run()` method. When JavaScript invokes `Foo.run()`, then the Android Callable Wrapper `scratch.webview.Foo.run()` method is invoked which, through the joy that is JNI, results in the execution of the `Foo.Run()` C# method, which is really all you wanted to do in the first place. # But I don't like run()! If you don't like having the JavaScript method named `run()`, or you want parameters, or any number of other things, your world gets much more complicated (at least until Mono for Android 4.2 and `[Export]` support). You would need to do one of two things: 1. Find an existing bound interface or virtual class method that provides the name and signature that you want. Then override the method/implement the interface, and things look fairly similar to the example above. 2. Roll your own Java class. Ask on the [monodroid mailing list](http://lists.ximian.com/pipermail/monodroid/) for more details. This answer is getting long as it is.
How can I add variables inside Java 15 text block feature? Just came across a new feature in Java 15 i.e. "TEXT BLOCKS". I can assume that a variable can be added inside a text block by concatenating with a "+" operator as below: ``` String html = """ <html> <body> <p>Hello, """+strA+"""</p> </body> </html> """; ``` But are they providing any way to add variables in the style that is becoming popular in many other languages, as below: ``` String html = """ <html> <body> <p>Hello, ${strA}</p> </body> </html> """; ``` This question might sound silly but it may be useful in certain scenarios.
From [the spec for text blocks](https://openjdk.java.net/jeps/355): > > Text blocks do not directly support string interpolation. > Interpolation may be considered in a future JEP. > > > "String interpolation" meaning > > evaluating a string literal containing one or more placeholders, > yielding a result in which the placeholders are replaced with their > corresponding values > > > from [Wikipedia](https://en.wikipedia.org/wiki/String_interpolation) --- As stated above, maybe we'll get it in the future. Though it is difficult to say how they could possibly implement that without breaking backwards compatibility -- what happens if my string contains `${}`, for example? The Java language designers rarely add anything that is likely to break backwards compatibility. It seems to me that they would be better off either supporting it immediately, or never. Maybe it would be possible with a new kind of text block. Rather than the delimiter being `"""`, they could use `'''` to denote a parameterized text block, for example.
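For anyone who hasn't seen the feature elsewhere, this is roughly what the requested behaviour looks like in a language that already has it — Python shown purely for illustration, since Java 15 text blocks offer no equivalent:

```python
# Python f-strings interpolate expressions in braces directly into the literal
name = "world"
html = f"""
<html>
  <body>
    <p>Hello, {name}</p>
  </body>
</html>
"""
print(html)
```

In Java 15 itself, the closest built-in workaround is `String::formatted`, e.g. `"""<p>Hello, %s</p>""".formatted(strA)`.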
Unclear on what will happen during next Windows 10 major OS update I installed Windows 10 Pro from scratch in September 2017. I built USB installer with the latest OS version at the time. My Advanced options for updates are as follows: Branch readiness: Semi-Annual Channel Feature update: Defer for 365 days Quality update: Defer for 30 days Basically I am REALLY not into being a beta-tester for Microsoft. Currently I am at Version 1709 (OS Build 16299.611) I am unclear on when the major OS update will occur to me and what will happen when it does (never had one happen yet). As it is now October 2018 - does it mean I will be upgraded to the new OS version any time now whether I want it or not? What version will I be upgraded to? 1803 or 1809? How does upgrade really happen? Will I be given an opportunity to make a full OS backup before? Or will Windows just quietly download everything required and start it whenever it feels like?
> > Basically I am REALLY not into being a beta-tester for Microsoft. > > > `Semi-Annual Channel` is considered the most stable release channel. `Semi-Annual Channel (Targeted)` is considered to be stable but is targeted to known compatible configurations. In other words `Semi-Annual Channel` is released to everyone, and `Semi-Annual Channel (Targeted)` is only pushed to systems that are known to be compatible with it. [![enter image description here](https://i.stack.imgur.com/hU9lg.png)](https://i.stack.imgur.com/hU9lg.png) > > I am unclear on when the major OS update will occur to me and what will happen when it does (never had one happen yet). > > > Your system would be updated to the current `Semi-Annual Channel` build which is currently, as of October 2018, is Windows 10 Version 1803. [![enter image description here](https://i.stack.imgur.com/NUcKK.png)](https://i.stack.imgur.com/NUcKK.png) [![enter image description here](https://i.stack.imgur.com/caMHM.png)](https://i.stack.imgur.com/caMHM.png) > > As it is now October 2018 - does it mean I will be upgraded to the new OS version any time now whether I want it or not? > > > Eventually, you would be forced to install the next build, contained within the `Semi-Annual Channel` branch. You have Windows 10 configured to wait 365 days, until the next feature update, within the `Semi-Annual Channel` branch will be installed. **When the update will be installed exactly on your system cannot be determined with the information you have provided.** You really should be running 1803 at this point anyways. > > All releases of Windows 10 have 18 months of servicing for all > editions--these updates provide security and feature updates for the > release. Customers running Enterprise and Education editions have an > additional 12 months of servicing for specific Windows 10 releases, > for a total of 30 months from initial release. 
> > > Source: [Semi-Annual Channel](https://docs.microsoft.com/en-us/windows/deployment/update/waas-overview#semi-annual-channel) > > What version will I be upgraded to? 1803 or 1809? > > > You would be upgraded to whichever version is in the `Semi-Annual Channel` branch at the time of the upgrade. > > How does upgrade really happen? > > > The feature update would be downloaded in the background, and since you are running 1709, the majority of the installation would happen in the background. At some point, you would be prompted to perform a restart to finish the installation of the feature update. > > Will I be given an opportunity to make a full OS backup before? > > > You should make a habit of making routine system backups. > > Will Windows just quietly download everything required and start it whenever it feels like? > > > Yes; feature updates are required since you are running Windows 10 Professional.
Castle Windsor: Auto-register types from one assembly that implement interfaces from another I use [Castle Windsor](http://www.castleproject.org/container/) as my [IoC container](http://martinfowler.com/articles/injection.html). I have an application that has a structure similar to the following: - MyApp.Services.dll - `IEmployeeService` - `IContractHoursService` - `...` - MyApp.ServicesImpl.dll - `EmployeeService : MyApp.Services.IEmployeeService` - `ContractHoursService : MyApp.Services.IContractHoursService` - `...` I use the [XML configuration](http://www.castleproject.org/container/documentation/v21/usersguide/externalconfig.html) at the moment, and every time I add a new IService/Service pair, I have to add a new component to the XML configuration file. I want to switch all this over to the [fluent registration API](http://using.castleproject.org/display/IoC/Fluent+Registration+API) but haven't worked out *exactly* the right recipe to do what I want yet. Can anyone help? The lifestyles will all be `singleton`. Many thanks in advance.
With `AllTypes` you can easily do this: From <http://stw.castleproject.org/(S(nppam045y0sdncmbazr1ob55))/Windsor.Registering-components-by-conventions.ashx>: > > Registering components one-by-one can be very repetitive job. Also remembering to register each new type you add can quickly lead to frustration. Fortunately, you don't have to do it, at least always. By using AllTypes entry class you can perform group registration of types based on some specified characteristics you specify. > > > I think your registration would look like: ``` AllTypes.FromAssembly(typeof(EmployeeService).Assembly) .BasedOn<IEmployeeService>() .LifeStyle.Singleton ``` If you implement a base type, like `IService` on your interfaces, you can register them all at once using the following construct: ``` AllTypes.FromAssembly(typeof(EmployeeService).Assembly) .BasedOn<IService>() .WithService.FromInterface() .LifeStyle.Singleton ``` For more examples, see the article. This has a very good description on what the possibilities are.
Regular Expression to clear attributes from a html tag I have a pretty simple reg ex question. My HTML tag looks like the following: ``` <body lang=EN-US link=blue vlink=purple> ``` I want to clear all attributes and just return `<body>` There are a number of other HTML tags whose attributes I'd like to clear so I hope to reuse the solution. How to do this with a regular expression? Thanks, B.
Use [HtmlAgilityPack](http://html-agility-pack.net/?z=codeplex) like this: ``` public string RemoveAllAttributesFromEveryNode(string html) { var htmlDocument = new HtmlAgilityPack.HtmlDocument(); htmlDocument.LoadHtml(html); foreach (var eachNode in htmlDocument.DocumentNode.SelectNodes("//*")) eachNode.Attributes.RemoveAll(); html = htmlDocument.DocumentNode.OuterHtml; return html; } ``` Call this method passing the html that you want to remove all attributes from. [xpath](/questions/tagged/xpath "show questions tagged 'xpath'") will help you a lot with this. Don't use a regex for html files that may contain scripts, as in Javascript, the characters `<` and `>` are not tag delimiters but operators. A Regexp will probably match these operators as if they were tags, which will completely mess up the document.
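The parse-don't-regex advice is language-agnostic. As a rough cross-language illustration (not HtmlAgilityPack — this is a minimal sketch using only Python's standard library), the same attribute-stripping can be done by re-emitting the document from a real parser:

```python
from html.parser import HTMLParser

class AttributeStripper(HTMLParser):
    """Re-emits markup with every tag attribute dropped."""
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        self.out.append(f"<{tag}>")   # attributes are simply discarded

    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

stripper = AttributeStripper()
stripper.feed("<body lang=EN-US link=blue vlink=purple>Hello</body>")
print("".join(stripper.out))  # <body>Hello</body>
```

A production version would also need `handle_startendtag`, comment and entity handling — which is exactly why a dedicated parsing library beats a regex here.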
What are the advantages of just-in-time compilation versus ahead-of-time compilation? I've been thinking about it lately, and it seems to me that most advantages given to **JIT** compilation should more or less be attributed to the intermediate format instead, and that jitting in itself is not much of a good way to generate code. So these are the main **pro-JIT** compilation arguments I usually hear: 1. **Just-in-time compilation allows for greater portability.** Isn't that attributable to the intermediate format? I mean, nothing keeps you from compiling your virtual bytecode into native bytecode once you've got it on your machine. Portability is an issue in the 'distribution' phase, not during the 'running' phase. 2. **Okay, then what about generating code at runtime?** Well, the same applies. Nothing keeps you from integrating a just-in-time compiler for a real just-in-time need into your native program. 3. **But the runtime compiles it to native code just once anyways, and stores the resulting executable in some sort of cache somewhere on your hard drive.** Yeah, sure. But it's optimized your program under time constraints, and it's not making it better from there on. See the next paragraph. It's not like **ahead-of-time** compilation had no advantages either. **Just-in-time** compilation has time constraints: you can't keep the end user waiting forever while your program launches, so it has a tradeoff to do somewhere. Most of the time they just optimize less. A friend of mine had *profiling evidence* that inlining functions and unrolling loops "manually" (obfuscating source code in the process) had a positive impact on performance on his **C#** number-crunching program; doing the same on my side, with my **C** program filling the same task, yielded no positive results, and I believe this is due to the extensive transformations my compiler was allowed to make. And yet we're surrounded by jitted programs. 
**C#** and **Java** are everywhere, Python scripts can compile to some sort of bytecode, and I'm sure a whole bunch of other programming languages do the same. There must be a good reason that I'm missing. So what makes **just-in-time** compilation so superior to **ahead-of-time** compilation? --- **EDIT** To clear some confusion, maybe it would be important to state that I'm all for an intermediate representation of executables. This has a lot of advantages (and really, most arguments for **just-in-time** compilation are actually arguments for an intermediate representation). My question is about how they should be compiled to native code. Most runtimes (or compilers for that matter) will prefer to either compile them just-in-time or ahead-of-time. As **ahead-of-time** compilation looks like a better alternative to me because the compiler has more time to perform optimizations, I'm wondering why Microsoft, Sun and all the others are going the other way around. I'm kind of dubious about profiling-related optimizations, as my experience with **just-in-time** compiled programs displayed poor basic optimizations. I used an example with C code only because I needed an example of **ahead-of-time** compilation versus **just-in-time** compilation. The fact that **C** code wasn't emitted to an intermediate representation is irrelevant to the situation, as I just needed to show that **ahead-of-time** compilation can yield better immediate results.
The [ngen tool page](http://msdn.microsoft.com/en-us/library/6t9t5wcf(VS.80).aspx) spilled the beans (or at least provided a good comparison of native images versus JIT-compiled images). Executables that are compiled ahead-of-time typically have the following benefits: 1. Native images load faster because they don't have much startup activity, and they don't need the extra memory the JIT compiler itself requires; 2. Native images can share library code, while JIT-compiled images cannot. Just-in-time compiled executables typically have the upper hand in these cases: 1. Native images are larger than their bytecode counterpart; 2. Native images must be regenerated whenever the original assembly or one of its dependencies is modified. The need to regenerate an image that is ahead-of-time compiled every time one of its components changes is a **huge** disadvantage for native images. On the other hand, the fact that JIT-compiled images can't share library code can cause a serious memory hit. The operating system can load any native library at one physical location and share the immutable parts of it with every process that wants to use it, leading to significant memory savings, especially with system frameworks that virtually every program uses. (I imagine that this is somewhat offset by the fact that JIT-compiled programs only compile what they actually use.) The general consideration of Microsoft on the matter is that large applications typically benefit from being compiled ahead-of-time, while small ones generally don't.
x86 opcode alignment references and guidelines I'm generating some opcodes dynamically in a JIT compiler and I'm looking for guidelines for opcode alignment. 1) I've read comments that briefly "recommend" alignment by adding nops after calls 2) I've also read about using nop for optimizing sequences for parallelism. 3) I've read that alignment of ops is good for "cache" performance Usually these comments don't give any supporting references. Its one thing to read a blog or a comment that says, "its a good idea to do such and such", but its another to actually write a compiler that implements specific op sequences and realize most material online, especially blogs, are not useful for practical application. So I'm a believer in finding things out myself (disassembly, etc. to see what real world apps do). This is one case where I need some outside info. I notice compilers will usually start an odd byte instruction immediately after whatever previous instruction sequence there was. So the compiler is not taking any special care in most cases. I see "nop" here or there, but usually it seems nop is used sparingly, if at all. How critical is opcode alignment? Can you provide references for cases that I can actually use for implementation? Thanks.
I would recommend against inserting nops except for the alignment of branch targets. On some specific CPUs, branch prediction algorithms may penalize control transfers to control transfers, and so a nop may be able to act as a flag and invert the prediction, but otherwise it is unlikely to help. Modern CPUs are going to translate your ISA ops into ***[micro-ops](http://en.wikipedia.org/wiki/Micro-operation)*** anyway. This may make classical alignment techniques less important, as presumably the micro-operation transcoder will leave out nops and change both the size and alignment of the secret true machine ops. However, by the same token, optimizations based on first principles should do little or no harm. The theory is that one makes better use of the cache by starting loops at cache line boundaries. If a loop were to start in the middle of a cache line, then the first half of the cache line would be unavoidably loaded and kept loaded during the loop, and this would be wasted space in the cache if the loop is longer than 1/2 of a cache line. Also, for branch targets, the initial load of the cache line loads the largest forward window of instruction stream when the target is aligned. Regarding separating in-line instructions that are not branch targets with nops, there are few reasons for doing this on modern CPUs. (There was a time when RISC machines had ***[delay slots](http://en.wikipedia.org/wiki/Delay_slot)*** which often led to inserting nops after control transfers.) Decoding the instruction stream is easy to pipeline and if an architecture has odd-byte-length ops you can be assured that they are decoded reasonably.
Make radio group labels display nicely inline with radio buttons using pure CSS Previously I used a table to display radio buttons with labels. Now I am trying to do this same thing using only CSS. ![this was made using a table](https://i.stack.imgur.com/NDmSJ.jpg) What can I do in CSS that will make the label text sit nicely on top of the related radio button? This is the HTML: ``` <div class="controls"> <label class="radio"> <input type="radio" name="cfimb_5" id="id_cfimb_5_1" value="1"> Never </label> <label class="radio"> <input type="radio" name="cfimb_5" id="id_cfimb_5_2" value="2"> </label> <label class="radio"> <input type="radio" name="cfimb_5" id="id_cfimb_5_3" value="3"> Sometimes </label> <label class="radio"> <input type="radio" name="cfimb_5" id="id_cfimb_5_4" value="4"> </label> <label class="radio"> <input type="radio" name="cfimb_5" id="id_cfimb_5_5" value="5"> Frequently </label> </div> ``` Here is a codepen: <http://codepen.io/anon/pen/keuhl>
Historically, you could use margins to achieve this layout: ``` .controls label { display: inline-block; width: 90px; height: 20px; text-align: center; vertical-align: top; padding-top: 40px; } .controls input { display: block; margin: 0 auto -40px; } ``` ``` <div class="controls"> <label class="radio"> <input type="radio" name="foo" value="1"> Never </label> <label class="radio"> <input type="radio" name="foo" value="2"> </label> <label class="radio"> <input type="radio" name="foo" value="3"> Sometimes </label> <label class="radio"> <input type="radio" name="foo" value="4"> </label> <label class="radio"> <input type="radio" name="foo" value="5"> Frequently </label> </div> ``` On modern browsers that support [flexbox](https://developer.mozilla.org/en-US/docs/Learn/CSS/CSS_layout/Flexbox), the solution is considerably easier: ``` .controls { display: flex; } .radio { flex: 1 0 auto; display: flex; flex-direction: column; align-items: center; } ``` ``` <div class="controls"> <label class="radio"> <input type="radio" name="foo" value="1"> Never </label> <label class="radio"> <input type="radio" name="foo" value="2"> </label> <label class="radio"> <input type="radio" name="foo" value="3"> Sometimes </label> <label class="radio"> <input type="radio" name="foo" value="4"> </label> <label class="radio"> <input type="radio" name="foo" value="5"> Frequently </label> </div> ```
How to optimize for integer parameters (and other discontinuous parameter space) in R? How does one optimize if the parameter space is only integers (or is otherwise discontinuous)? Using an integer check in optim() does not seem to work and would be very inefficient anyways. ``` fr <- function(x) { ## Rosenbrock Banana function x1 <- x[1] x2 <- x[2] value<-100 * (x2 - x1 * x1)^2 + (1 - x1)^2 check.integer <- function(N){ !length(grep("[^[:digit:]]", as.character(N))) } if(!all(check.integer(abs(x1)), check.integer(abs(x2)))){ value<-NA } return(value) } optim(c(-2,1), fr) ```
Here are a few ideas. **1. Penalized optimization.** You could round the arguments of the objective function and add a penalty for non-integers. But this creates a lot of local extrema, so you may prefer a more robust optimization routine, e.g., differential evolution or particle swarm optimization. ``` fr <- function(x) { x1 <- round( x[1] ) x2 <- round( x[2] ) value <- 100 * (x2 - x1 * x1)^2 + (1 - x1)^2 penalty <- (x1 - x[1])^2 + (x2 - x[2])^2 value + 1e3 * penalty } # Plot the function x <- seq(-3,3,length=200) z <- outer(x,x, Vectorize( function(u,v) fr(c(u,v)) )) persp(x,x,z, theta = 30, phi = 30, expand = 0.5, col = "lightblue", border=NA, ltheta = 120, shade = 0.75, ticktype = "detailed") ``` ![perspective plot](https://i.stack.imgur.com/UNnWn.png) ``` library(RColorBrewer) image(x,x,z, las=1, useRaster=TRUE, col=brewer.pal(11,"RdYlBu"), xlab="x", ylab="y" ) ``` ![image plot](https://i.stack.imgur.com/dLz6u.png) ``` # Minimize library(DEoptim) library(NMOF) library(pso) DEoptim(fr, c(-3,-3), c(3,3))$optim$bestmem psoptim(c(-2,1), fr, lower=c(-3,-3), upper=c(3,3)) DEopt(fr, list(min=c(-3,-3), max=c(3,3)))$xbest PSopt(fr, list(min=c(-3,-3), max=c(3,3)))$xbest ``` **2. Exhaustive search.** If the search space is small, you can also use a grid search. ``` library(NMOF) gridSearch(fr, list(seq(-3,3), seq(-3,3)))$minlevels ``` **3. Local search, with user-specified neighbourhoods.** Without tweaking the objective function, you could use some form of local search, in which you can specify which points to examine. This should be much faster, but is extremely sensitive to the choice of the neighbourhood function. ``` # Unmodified function f <- function(x) 100 * (x[2] - x[1] * x[1])^2 + (1 - x[1])^2 # Neighbour function # Beware: in this example, with a smaller neighbourhood, it does not converge. neighbour <- function(x,...) 
x + sample(seq(-3,3), length(x), replace=TRUE) # Local search (will get stuck in local extrema) library(NMOF) LSopt(f, list(x0=c(-2,1), neighbour=neighbour))$xbest # Threshold Accepting TAopt(f, list(x0=c(-2,1), neighbour=neighbour))$xbest ``` **4. Tabu search.** To avoid exploring the same points again and again, you can use [tabu search](http://en.wikipedia.org/wiki/Tabu_search), i.e., remember the last k points to avoid visiting them again. ``` get_neighbour_function <- function(memory_size = 100, df=4, scale=1){ # Static variables already_visited <- NULL i <- 1 # Define the neighbourhood values <- seq(-10,10) probabilities <- dt(values/scale, df=df) probabilities <- probabilities / sum(probabilities) # The function itself function(x,...) { if( is.null(already_visited) ) { already_visited <<- matrix( x, nr=length(x), nc=memory_size ) } # Do not reuse the function for problems of a different size stopifnot( nrow(already_visited) == length(x) ) candidate <- x for(k in seq_len(memory_size)) { candidate <- x + sample( values, p=probabilities, length(x), replace=TRUE ) if( ! any(apply(already_visited == candidate, 2, all)) ) break } if( k == memory_size ) { cat("Are you sure the neighbourhood is large enough?\n") } if( k > 1 ) { cat("Rejected", k - 1, "candidates\n") } if( k != memory_size ) { already_visited[,i] <<- candidate i <<- (i %% memory_size) + 1 } candidate } } ``` In the following example, it does not really work: we only move to the nearest local minimum. And in higher dimensions, things get even worse: the neighbourhood is so large that we never hit the cache of already visited points. 
``` f <- function(x) { result <- prod( 2 + ((x-10)/1000)^2 - cos( (x-10) / 2 ) ) cat(result, " (", paste(x,collapse=","), ")\n", sep="") result } plot( seq(0,1e3), Vectorize(f)( seq(0,1e3) ) ) LSopt(f, list(x0=c(0,0), neighbour=get_neighbour_function()))$xbest TAopt(f, list(x0=c(0,0), neighbour=get_neighbour_function()))$xbest optim(c(0,0), f, gr=get_neighbour_function(), method="SANN")$par ``` Differential evolution works better: we only get a local minimum, but it is better than the nearest one. ``` g <- function(x) f(x) + 1000 * sum( (x-round(x))^2 ) DEoptim(g, c(0,0), c(1000,1000))$optim$bestmem ``` Tabu search is often used for purely combinatorial problems (e.g., when the search space is a set of trees or graphs) and does not seem to be a great idea for integer problems.
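For a search space this small, option 2 needs no optimizer package at all. As a cross-check, here is a dependency-free sketch in plain Python of the same exhaustive scan over the integer grid that the `gridSearch` call performs:

```python
# Exhaustive search over the 7 x 7 integer grid for the Rosenbrock function.
def rosenbrock(x1, x2):
    return 100 * (x2 - x1 * x1) ** 2 + (1 - x1) ** 2

candidates = [(x1, x2) for x1 in range(-3, 4) for x2 in range(-3, 4)]
best = min(candidates, key=lambda p: rosenbrock(*p))
print(best, rosenbrock(*best))  # (1, 1) 0 -- the known global minimum
```

With only 49 candidate points, enumerating them is faster and more reliable than any stochastic search.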
Why can't I read /dev/stdout with a text editor? I just started learning how *Everything Is A File*TM on Linux, which made me wonder what would happen if I literally read from /dev/stdout: ``` $ cat /dev/stdout ^C $ tail /dev/stdout ^C ``` (The `^C` is me killing the program after it hangs). When I try with `vim`, I get the unthinkable message: "/dev/stdout" is not a file. Gasp! So what gives, why am I getting hangups or error messages when I try to read these "files"?
> > why am I getting hangups > > > You aren't getting "hangups" from `cat(1)` and `tail(1)`, they're just blocking on read. `cat(1)` waits for input, and prints it as soon as it sees a complete line: ``` $ cat /dev/stdout foo foo bar bar ``` Here I typed `foo``Enter``bar``Enter``CTRL`-`D`. `tail(1)` waits for input, and prints it only when it can detect `EOF`: ``` $ tail /dev/stdout foo bar foo bar ``` Here I typed again `foo``Enter``bar``Enter``CTRL`-`D`. > > or error messages > > > Vim is the only one that gives you an error. It does that because it [runs](https://github.com/vim/vim/blob/dce1a496bdb61784f2b725c754b90b613bef8bcb/src/fileio.c#L442) `stat(2)` against `/dev/stdout`, and it finds it doesn't have the `S_IFREG` bit set. `/dev/stdout` is a file, but not a *regular* file. In fact, there's some dance in the kernel to give it an entry in the filesystem. On Linux: ``` $ ls -l /dev/stdout lrwxrwxrwx 1 root root 15 May 8 19:42 /dev/stdout -> /proc/self/fd/1 ``` On OpenBSD: ``` $ ls -l /dev/stdout crw-rw-rw- 1 root wheel 22, 1 May 7 09:05:03 2015 /dev/stdout ``` On FreeBSD: ``` $ ls -l /dev/stdout lrwxr-xr-x 1 root wheel 4 May 8 21:35 /dev/stdout -> fd/1 $ ls -l /dev/fd/1 crw-rw-rw- 1 root wheel 0x18 May 8 21:35 /dev/fd/1 ```
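The regular-file test Vim performs is just a check of the `S_IFREG` bit in `st_mode`, and you can reproduce it outside Vim. A sketch in Python's `stat` module, using `/dev/null` (a character device) as the non-regular example since `/dev/stdout` resolves to whatever fd 1 happens to be:

```python
import os
import stat
import tempfile

def is_regular(path):
    # The same test Vim applies: is the S_IFREG bit set in st_mode?
    return stat.S_ISREG(os.stat(path).st_mode)

with tempfile.NamedTemporaryFile() as tmp:
    print(is_regular(tmp.name))  # True: an ordinary file
print(is_regular("/dev/null"))   # False: a character device, not a regular file
```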
How can I send a simple HTTP request with a lwIP stack? Please move/close this if the question isn't relevant. Core: Cortex-M4 Microprocessor: TI TM4C1294NCPDT. IP Stack: lwIP 1.4.1 I am using this microprocessor to do some data logging, and I want to send some information to a separate web server via a HTTP request in the form of: `http://123.456.789.012:8800/process.php?data1=foo&data2=bar&time=1234568789` and I want the processor to be able to see the response header (i.e if it was 200 OK or something went wrong) - it does not have to do display/recieve the actual content. lwIP has a http server for the microprocessor, but I'm after the opposite (microprocessor is the client). I am not sure how packets correlate to request/response headers, so I'm not sure how I'm meant to actually send/recieve information.
This ended up being pretty simple to implement, forgot to update this question. I pretty much followed the instructions given on [this](http://lwip.wikia.com/wiki/Raw/TCP) site, which is the Raw/TCP 'documentation'. Basically, The HTTP request is encoded in TCP packets, so to send data to my PHP server, I sent an HTTP request using TCP packets (lwIP does all the work). The HTTP packet I want to send looks like this: > > HEAD /process.php?data1=12&data2=5 HTTP/1.0 > > > Host: mywebsite.com > > > To "translate" this to text which is understood by an HTTP server, you have to add "\r\n" carriage return/newline in your code. So it looks like this: ``` char *string = "HEAD /process.php?data1=12&data2=5 HTTP/1.0\r\nHost: mywebsite.com\r\n\r\n "; ``` Note that the end has two lots of "\r\n" You can use GET or HEAD, but because I didn't care about HTML site my PHP server returned, I used HEAD (it returns a 200 OK on success, or a different code on failure). The lwIP raw/tcp works on callbacks. You basically set up all the callback functions, then push the data you want to a TCP buffer (in this case, the TCP string specified above), and then you tell lwIP to send the packet. 
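The `\r\n` framing is the part that most often goes wrong, and it can be sanity-checked without any TCP stack. A small Python sketch (the host name is the answer's placeholder) that builds the same request string and parses a typical status line:

```python
host = "mywebsite.com"  # placeholder host, as in the answer
path = "/process.php?data1=12&data2=5"

# Each header line ends in \r\n; an empty line terminates the header block.
request = "HEAD {} HTTP/1.0\r\nHost: {}\r\n\r\n".format(path, host)
print(request.endswith("\r\n\r\n"))  # True: the server can tell headers are done

# The status line of the reply carries the code the question cares about.
status_line = "HTTP/1.0 200 OK"
code = int(status_line.split()[1])
print(code)  # 200
```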
Function to set up a TCP connection (this function is directly called by my application every time I want to send a TCP packet): ``` void tcp_setup(void) { uint32_t data = 0xdeadbeef; /* create an ip */ struct ip_addr ip; IP4_ADDR(&ip, 110,777,888,999); //IP of my PHP server /* create the control block */ testpcb = tcp_new(); //testpcb is a global struct tcp_pcb // as defined by lwIP /* dummy data to pass to callbacks*/ tcp_arg(testpcb, &data); /* register callbacks with the pcb */ tcp_err(testpcb, tcpErrorHandler); tcp_recv(testpcb, tcpRecvCallback); tcp_sent(testpcb, tcpSendCallback); /* now connect */ tcp_connect(testpcb, &ip, 80, connectCallback); } ``` Once a connection to my PHP server is established, the 'connectCallback' function is called by lwIP: ``` /* connection established callback, err is unused and only return 0 */ err_t connectCallback(void *arg, struct tcp_pcb *tpcb, err_t err) { UARTprintf("Connection Established.\n"); UARTprintf("Now sending a packet\n"); tcp_send_packet(); return 0; } ``` This function calls the actual function tcp\_send\_packet() which sends the HTTP request, as follows: ``` uint32_t tcp_send_packet(void) { char *string = "HEAD /process.php?data1=12&data2=5 HTTP/1.0\r\nHost: mywebsite.com\r\n\r\n "; uint32_t len = strlen(string); /* push to buffer */ error = tcp_write(testpcb, string, strlen(string), TCP_WRITE_FLAG_COPY); if (error) { UARTprintf("ERROR: Code: %d (tcp_send_packet :: tcp_write)\n", error); return 1; } /* now send */ error = tcp_output(testpcb); if (error) { UARTprintf("ERROR: Code: %d (tcp_send_packet :: tcp_output)\n", error); return 1; } return 0; } ``` Once the TCP packet has been sent (this is all need if you want to "hope for the best" and don't care if the data actually sent), the PHP server return a TCP packet (with a 200 OK, etc. and the HTML code if you used GET instead of HEAD). 
This code can be read and verified in the following code: ``` err_t tcpRecvCallback(void *arg, struct tcp_pcb *tpcb, struct pbuf *p, err_t err) { UARTprintf("Data recieved.\n"); if (p == NULL) { UARTprintf("The remote host closed the connection.\n"); UARTprintf("Now I'm closing the connection.\n"); tcp_close_con(); return ERR_ABRT; } else { UARTprintf("Number of pbufs %d\n", pbuf_clen(p)); UARTprintf("Contents of pbuf %s\n", (char *)p->payload); } return 0; } ``` p->payload contains the actual "200 OK", etc. information. Hopefully this helps someone. I have left out some error checking in my code above to simplify the answer.
How to make all org-files under a folder added in agenda-list automatically? I am using org-mode to write notes and org-agenda to organize them, especially to search for information by keyword or tag. **C-c a m** searches files by an entered tag, **C-c a s** by keyword. Those org-agenda functions are very useful; however, I need to add each org-file to the agenda list by hand. I added some code to **.emacs**, such as ``` (setq org-agenda-files (list "path/folder/*.org")) ``` or ``` (setq org-agenda-files (file-expand-wildcards "path/folder/*.org")) ``` but both failed to add the files under the specified folder to the agenda list automatically, so I can't search by keyword or tag among those org-files, unless I open an org-file and type **C-c [** to add it to the agenda list. How can I make all org-files under a folder automatically added to the agenda?
Just naming the directory should be enough. For example this works for me very well: ``` (setq org-agenda-files '("~/org")) ``` Also take a look at `org-agenda-text-search-extra-files`; it lets you add extra files included only in text searches. A typical value might be, ``` (setq org-agenda-text-search-extra-files '(agenda-archives "~/org/subdir/textfile1.txt" "~/org/subdir/textfile1.txt")) ``` **Caveat:** If you add a file to the directory after you have started Emacs, it will not be included. **Edit:** (2018) To include all files with a certain extension in the extra files list you can try the following function I wrote sometime back ([a more recent version might be available here](https://github.com/suvayu/.emacs.d/blob/master/lisp/nifty.el)). ``` ;; recursively find .org files in provided directory ;; modified from an Emacs Lisp Intro example (defun sa-find-org-file-recursively (&optional directory filext) "Return .org and .org_archive files recursively from DIRECTORY. If FILEXT is provided, return files with extension FILEXT instead." 
(interactive "DDirectory: ") (let* (org-file-list (case-fold-search t) ; filesystems are case sensitive (file-name-regex "^[^.#].*") ; exclude dot, autosave, and backupfiles (filext (or filext "org$\\\|org_archive")) (fileregex (format "%s\\.\\(%s$\\)" file-name-regex filext)) (cur-dir-list (directory-files directory t file-name-regex))) ;; loop over directory listing (dolist (file-or-dir cur-dir-list org-file-list) ; returns org-file-list (cond ((file-regular-p file-or-dir) ; regular files (if (string-match fileregex file-or-dir) ; org files (add-to-list 'org-file-list file-or-dir))) ((file-directory-p file-or-dir) (dolist (org-file (sa-find-org-file-recursively file-or-dir filext) org-file-list) ; add files found to result (add-to-list 'org-file-list org-file))))))) ``` You can use it like this: ``` (setq org-agenda-text-search-extra-files (append (sa-find-org-file-recursively "~/org/dir1/" "txt") (sa-find-org-file-recursively "~/org/dir2/" "tex"))) ``` **Edit:** (2019) As mentioned in the answer by @mingwei-zhang and the comment by @xiaobing, `find-lisp-find-files` from `find-lisp` and `directory-files-recursively` also provides this functionality. However, please note in these cases the file name argument is a (greedy) regex. So something like `(directory-files-recursively "~/my-dir" "org")` will give you all Org files including backup files (`*.org~`). To include only `*.org` files, you may use `(directory-files-recursively "~/my-dir" "org$")`.
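The trailing-`~` pitfall in the last paragraph is easy to demonstrate outside Emacs. In Python, a glob pattern like `*.org` must match the whole file name, so backup files are excluded automatically — a sketch:

```python
import pathlib
import tempfile

with tempfile.TemporaryDirectory() as d:
    root = pathlib.Path(d)
    (root / "sub").mkdir()
    for name in ("notes.org", "notes.org~", "todo.txt", "sub/deep.org"):
        (root / name).touch()

    # Recursive search: only real .org files match; the backup file does not.
    found = sorted(p.name for p in root.rglob("*.org"))
    print(found)  # ['deep.org', 'notes.org']
```

This is the same distinction as `"org$"` versus `"org"` in `directory-files-recursively`: the pattern must be anchored to the end of the name.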
Flask-restful: marshal complex object to json I have a question regarding flask restful extension. I'm just started to use it and faced one problem. I have `flask-sqlalchemy` entities that are connected many-to-one relation and I want that restful endpoint return parent entity with all its children in `json` using marshaller. In my case Set contains many parameters. I looked at flask-restful [docs](http://flask-restful.readthedocs.org/en/latest/fields.html) but there wasn't any explanation how to solve this case. Seems like I'm missing something obvious but cannot figure out any solution. Here is my code: ``` # entities class Set(db.Model): id = db.Column("id", db.Integer, db.Sequence("set_id_seq"), primary_key=True) title = db.Column("title", db.String(256)) parameters = db.relationship("Parameters", backref="set", cascade="all") class Parameters(db.Model): id = db.Column("id", db.Integer, db.Sequence("parameter_id_seq"), primary_key=True) flag = db.Column("flag", db.String(256)) value = db.Column("value", db.String(256)) set_id = db.Column("set_id", db.Integer, db.ForeignKey("set.id")) # marshallers from flask.ext.restful import fields parameter_marshaller = { "flag": fields.String, "value": fields.String } set_marshaller = { 'id': fields.String, 'title': fields.String, 'parameters': fields.List(fields.Nested(parameter_marshaller)) } # endpoint class SetApi(Resource): @marshal_with(marshallers.set_marshaller) def get(self, set_id): entity = Set.query.get(set_id) return entity restful_api = Api(app) restful_api.add_resource(SetApi, "/api/set/<int:set_id>") ``` Now when i call `/api/set/1` I get server error: `TypeError: 'Set' object is unsubscriptable` So I need a way to correctly define set\_marshaller that endpoint return this json: ``` { "id": : "1", "title": "any-title", "parameters": [ {"flag": "any-flag", "value": "any-value" }, {"flag": "any-flag", "value": "any-value" }, ..... ] } ``` I appreciate any help.
I found the solution to this problem myself. After playing around with `flask-restful` I found out that I had made a few mistakes:

Firstly, `set_marshaller` should look like this:

```
set_marshaller = {
    'id': fields.String,
    'title': fields.String,
    'parameters': fields.Nested(parameter_marshaller)
}
```

The flask-restful marshaller can handle the case where a parameter is a list, and marshals it to a `json` list.

Another problem was that in the API the Set parameters use lazy loading, so when I tried to marshal the Set I got `KeyError: 'parameters'`. The parameters need to be loaded explicitly, like this:

```
class SetApi(Resource):

    @marshal_with(marshallers.set_marshaller)
    def get(self, set_id):
        entity = Set.query.get(set_id)
        entity.parameters # loads parameters from db
        return entity
```

Or another option is to change the model relationship:

```
parameters = db.relationship("Parameters", backref="set", cascade="all", lazy="joined")
```
How can I "pad" names I have a df that looks like this: ``` df <- data.frame("Logger" = c("119_1", "1","2","3","119_2","5","6","7","119_3","7","8","9"), "Temp" =c (4.5, 5.7, 3.8, 8.9, 8.6, 10.5, 11.0, 7.8, 5.6, 7.8, 9.9, 17.3), "RH" = c(6.5, 2.7, 11.8, 4.9, 3.6, 12.5, 115.0, 3.8, 9.6, 1.8, 3.9,5.3)) ``` However, I want to "pad up" the names of the variable "Logger" so that all numbers get replaced by the last name until the next. So, basically I want an output like this: ``` df_desired <- data.frame("Logger" = c("119_1", "119_1","119_1","119_1","119_2","119_2","119_2","119_2","119_3","119_3","119_3","119_3"), "Temp" =c (4.5, 5.7, 3.8, 8.9, 8.6, 10.5, 11.0, 7.8, 5.6, 7.8, 9.9, 17.3), "RH" = c(6.5, 2.7, 11.8, 4.9, 3.6, 12.5, 115.0, 3.8, 9.6, 1.8, 3.9,5.3)) ``` How do I do that? I actually have no clue. I know that the padr package exists, but it just works with dates and NAs I guess.
You can replace the values that do not have `_` with `NA` and then use `fill`. ``` library(dplyr) library(tidyr) df %>% mutate(Logger = replace(Logger, !grepl('_', Logger), NA)) %>% fill(Logger) # Logger Temp RH #1 119_1 4.5 6.5 #2 119_1 5.7 2.7 #3 119_1 3.8 11.8 #4 119_1 8.9 4.9 #5 119_2 8.6 3.6 #6 119_2 10.5 12.5 #7 119_2 11.0 115.0 #8 119_2 7.8 3.8 #9 119_3 5.6 9.6 #10 119_3 7.8 1.8 #11 119_3 9.9 3.9 #12 119_3 17.3 5.3 ```
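Under the hood this is a plain forward fill: carry the last logger ID that contains an underscore down over the following rows. A dependency-free sketch of the same logic in Python (just the idea, not how tidyr implements it):

```python
def forward_fill_loggers(values):
    """Replace entries without '_' by the most recent entry that has one."""
    filled, last = [], None
    for v in values:
        if "_" in v:
            last = v
        filled.append(last)
    return filled

logger = ["119_1", "1", "2", "3", "119_2", "5", "6", "7", "119_3", "7", "8", "9"]
print(forward_fill_loggers(logger))
# ['119_1', '119_1', '119_1', '119_1', '119_2', ..., '119_3']
```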
Set the time-of-day on a ZonedDateTime in java.time? How can I alter the time-of-day portion of an existing [`ZonedDateTime`](http://docs.oracle.com/javase/8/docs/api/java/time/ZonedDateTime.html) object? I want to keep the date and time zone but alter the hour and minutes.
# tl;dr ``` zdt.with ( LocalTime.of ( 16 , 15 ) ) ``` # Immutable objects The java.time classes use the [Immutable Objects](https://en.wikipedia.org/wiki/Immutable_object) pattern to create fresh objects rather than alter (“mutate”) the original object. # `with()` The [`ZonedDateTime::with`](http://docs.oracle.com/javase/8/docs/api/java/time/ZonedDateTime.html#with-java.time.temporal.TemporalAdjuster-) method is a flexible way to generate a new `ZonedDateTime` based on another but with some particular difference. You can pass any object implementing the [`TemporalAdjustor`](http://docs.oracle.com/javase/8/docs/api/java/time/temporal/TemporalAdjuster.html) interface. In this case we want to change just the time-of-day. A [`LocalTime`](http://docs.oracle.com/javase/8/docs/api/java/time/LocalTime.html) object represents a time-of-day without any date and without any time zone. And `LocalTime` implements the [`TemporalAdjustor`](http://docs.oracle.com/javase/8/docs/api/java/time/temporal/TemporalAdjuster.html) interface. So just that time-of-day value is applied while keeping the date and the time zone as-is. ``` ZonedDateTime marketOpens = ZonedDateTime.of ( LocalDate.of ( 2016 , 1 , 4 ) , LocalTime.of ( 9 , 30 ) , ZoneId.of ( "America/New_York" ) ); ZonedDateTime marketCloses = marketOpens.with ( LocalTime.of ( 16 , 0 ) ); ``` Double-check that the duration of the span of time is as expected, six and a half hours. ``` Duration duration = Duration.between ( marketOpens , marketCloses ); ``` Dump to console. ``` System.out.println ( "marketOpens: " + marketOpens + " | marketCloses: " + marketCloses + " | duration: " + duration ); ``` > > marketOpens: 2016-01-04T09:30-05:00[America/New\_York] | marketCloses: 2016-01-04T16:00-05:00[America/New\_York] | duration: PT6H30M > > > Keep in mind that in this example we are also **implicitly adjusting the seconds and fractional second** in the time-of-day. 
The `LocalTime` object carries with it the hour, minute, second, and fractional second. We specified an hour and a minute. Our omission of a second and fractional second resulted in a default value of `0` for both during construction of our `LocalTime`. All four aspects of the `LocalTime` were applied to get our fresh `ZonedDateTime`. Quite a few classes implement the [`TemporalAdjustor`](http://docs.oracle.com/javase/8/docs/api/java/time/temporal/TemporalAdjuster.html) interface. See the list on that class doc, including `LocalDate`, `Month`, `Year`, and more. So you can pass any of those to alter that aspect of a date-time value. Read the comment by Hochschild. You must understand the behavior when you specify a time-of-day that is invalid for a specific date & zone. For example, during a Daylight Saving Time (DST) cut-over.
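For comparison, the same immutable-replace pattern exists in Python's `datetime`: `replace()` hands back a fresh object and leaves the original untouched, just as `with()` does here (a naive datetime, so no zone handling in this sketch):

```python
from datetime import datetime

opens = datetime(2016, 1, 4, 9, 30)
closes = opens.replace(hour=16, minute=0)  # new object, like ZonedDateTime.with(...)

print(opens)           # 2016-01-04 09:30:00 -- unchanged
print(closes)          # 2016-01-04 16:00:00
print(closes - opens)  # 6:30:00
```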
Making Glowing Background with CSS only I want to create this background in CSS only. [![enter image description here](https://i.stack.imgur.com/J189y.jpg)](https://i.stack.imgur.com/J189y.jpg) I want to do it with CSS to avoid responsive issues.
You can make use of `radial-gradient` to produce the glow effect. You can change the colors to be inline with the image. One thing you should note is the browser support for CSS gradients. IE < 10 do not support them. If you need support for older browsers then CSS gradients would not help. ``` body { background-image: radial-gradient(circle, rgb(49, 144, 228) 0%, rgb(29, 84, 166) 100%); height: 100vh; } ``` ``` <!-- prefix free library included only to avoid vendor prefixes --> <script src="https://cdnjs.cloudflare.com/ajax/libs/prefixfree/1.0.7/prefixfree.min.js"></script> ``` --- I can't see any extra steps in between but if you are looking for several steps of varying percentages then have a look at the below snippet: ``` body { background-image: radial-gradient(circle, rgb(49, 144, 228) 0%, rgb(41, 122, 204) 30%, rgb(29, 84, 166) 70%); height: 100vh; } ``` ``` <!-- prefix free library included only to avoid vendor prefixes --> <script src="https://cdnjs.cloudflare.com/ajax/libs/prefixfree/1.0.7/prefixfree.min.js"></script> ```
Populating NSTableview from a mutable array I've been attempting this for two days, and constantly running into dead ends. I've been through Aaron Hillegass's Cocoa Programming for Mac OS X, and done all the relevant exercises dealing with NSTableview and mutable arrays, and I have been attempting to modify them to suit my needs. However, none of them seem to be using an array with objects as a data source; they seem to use the tableview as the data source. I'm trying to implement Jonas Jongejan's "*reworking*" of my code [here](https://stackoverflow.com/questions/3345238/sorting-shuffling-an-nsmutuable-array), with a Cocoa front end to display the results. Any pointers or suggestions? I know this should be simple, but I'm lost in the wilderness here. I can populate the table by setting the array
It's pretty simple really, once you get to understand it (of course!). You can't use an NSArray directly as a table source. You need to either create a custom object that implements [NSTableViewDataSource](http://developer.apple.com/mac/library/documentation/Cocoa/Reference/ApplicationKit/Protocols/NSTableDataSource_Protocol/Reference/Reference.html) or implement that protocol in some existing class - usually a controller. If you use Xcode to create a standard document-based application, the document controller class (it will be called MyDocument) is a good class to use.

You need to implement at least these two methods:

```
- numberOfRowsInTableView:
- tableView:objectValueForTableColumn:row:
```

If you have a mutable array whose values you'd like to use in a table view with one column, something like the following should do as a start:

```
- (NSInteger) numberOfRowsInTableView: (NSTableView*) aTableView
{
    return [myMutableArray count];
}

- (id) tableView: (NSTableView*) aTableView
objectValueForTableColumn: (NSTableColumn *) aTableColumn
            row: (NSInteger) rowIndex
{
    return [myMutableArray objectAtIndex: rowIndex];
}
```

It has just occurred to me that you could add the above two methods as a category to NSArray replacing `myMutableArray` with `self` and then you can use an array as a data source.

---

Anyway, with a mutable array, it is important that any time you change it, you need to let the table view know it has been changed, so you need to send the table view `-reloadData`.

---

If your table view has more than one column and you want to populate it with properties of objects in your array, there's a trick you can do to make it easier for yourself. 
Let's say the objects in your array are instances of a class called Person with two methods defined:

```
-(NSString*) givenName;
-(NSString*) familyName;
```

and you want your table view to have a column for each of those. You can set the `identifier` property of each column to the name of the property in Person that that column displays and use something like the following:

```
- (id) tableView: (NSTableView*) aTableView
objectValueForTableColumn: (NSTableColumn *) aTableColumn
            row: (NSInteger) rowIndex
{
    Person* item = [myMutableArray objectAtIndex: rowIndex];
    return [item valueForKey: [aTableColumn identifier]];
}
```

If you replace `valueForKey:` with `valueForKeyPath:` and your Person class also has the following methods:

```
-(Person*) mother;
-(Person*) father;
-(NSString*) fullName; // concatenation of given name and family name
```

you can add table columns with identifiers like: `father.fullName` or `mother.familyName` and the values will be automatically populated.
Is there a way to find out which Python method can raise which Exception or Error Is there a way to find out which Python method can raise which Exception or Error? I didn't find much about it in the official Python documentation.
In general, the answer is no. Some of the exceptions are documented, but most just follow a general pattern that can be learned. *SyntaxError* is checked first and is raised for syntactically invalid code. *NameError* arises when a variable is undefined (not assigned yet, or misspelled). *TypeError* is raised for the wrong number of arguments or mismatched data types. *ValueError* means the type is correct but the value doesn't make sense for the function (e.g., negative inputs to `math.sqrt()`). If the value is an index in a sequence lookup, *IndexError* is raised. If the value is a key for a mapping lookup, *KeyError* is raised. Another common exception is *AttributeError* for missing attributes. *IOError* is for failed I/O, and *OSError* is for operating system errors. Besides learning the common patterns, it is usually easy to just run a function and see what exception it raises in a given circumstance. In general a function cannot know or document all the possible errors, because the inputs can raise their own exceptions. Consider this function: ``` def f(a, b): return a + b ``` It can raise *TypeError* if the number of arguments is wrong or if *a* doesn't support the `__add__` method. 
However, the underlying data can raise different exceptions: ``` >>> f(10) Traceback (most recent call last): File "<pyshell#3>", line 1, in <module> f(10) TypeError: f() takes exactly 2 arguments (1 given) >>> f(10, 20) 30 >>> f('hello', 'world') 'helloworld' >>> f(10, 'world') Traceback (most recent call last): File "<pyshell#6>", line 1, in <module> f(10, 'world') File "<pyshell#2>", line 2, in f return a + b TypeError: unsupported operand type(s) for +: 'int' and 'str' >>> class A: def __init__(self, x): self.x = x def __add__(self, other): raise RuntimeError(other) >>> f(A(5), A(7)) Traceback (most recent call last): File "<pyshell#13>", line 1, in <module> f(A(5), A(7)) File "<pyshell#2>", line 2, in f return a + b File "<pyshell#12>", line 5, in __add__ raise RuntimeError(other) RuntimeError: <__main__.A instance at 0x103ce2ab8> ```
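The patterns above are quick to confirm empirically. A small probe in Python that records which exception class a call raises:

```python
import math

def raised(fn):
    """Return the exception class a zero-argument callable raises, or None."""
    try:
        fn()
    except Exception as exc:
        return type(exc)
    return None

print(raised(lambda: math.sqrt(-1)))   # ValueError: right type, bad value
print(raised(lambda: math.sqrt("x")))  # TypeError: wrong type entirely
print(raised(lambda: [1, 2][5]))       # IndexError: bad sequence index
print(raised(lambda: {}["missing"]))   # KeyError: bad mapping key
```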
Xamarin entry control TextChanged event looping round On my form I have 3 `entry` controls. I'm trying to validate the 'Age' control, with the following validation rules: - Cannot enter more than 3 digits - Cannot enter a decimal place (.) - Cannot enter a hyphen (-) To do this, I've set the 'TextChanged' property of my control to be ``` TextChanged="OnAgeTextChanged" ``` My `OnAgeTextChanged` method is: ``` private void OnAgeTextChanged(object sender, TextChangedEventArgs e) { var entry = (Entry)sender; try { if (entry.Text.Length > 3) { string entryText = entry.Text; entry.TextChanged -= OnAgeTextChanged; entry.Text = e.OldTextValue; entry.TextChanged += OnAgeTextChanged; } string strName = entry.Text; if (strName.Contains(".") || strName.Contains("-")) { strName = strName.Replace(".", "").Replace("-", ""); entry.Text = strName; } } catch(Exception ex) { Console.WriteLine("Exception caught: {0}", ex); } } ``` However, when the if conditions are met, the event is being looped over multiple times, causing the application to run slowly. For example, if I enter my age as 1234, it loops over the code multiple times so there's a delay, with the delay increasing each time the text gets changed. What other way can I achieve this validation, but without the event being called multiple times? **EDIT** After updating the code to remove the `TextChanged` trigger on my control before re-assigning it at the end of the method, it still loops over multiple times, and the number of loops increases with each key press. 
Entry control `xaml` ``` <Entry x:Name="txtAge" Placeholder="Age" Keyboard="Numeric" TextColor="DarkBlue" PlaceholderColor="DarkBlue" Completed="AgeCompleted" HorizontalOptions="Start" WidthRequest="55" TextChanged="OnAgeTextChanged" /> ``` `TextChanged` event ``` private void OnAgeTextChanged(object sender, TextChangedEventArgs e) { var entry = (Entry)sender; try { entry.TextChanged -= OnAgeTextChanged; if (entry.Text.Length > 3) { entry.Text = e.OldTextValue; } string strName = entry.Text; if (strName.Contains(".") || strName.Contains("-")) { strName = strName.Replace(".", "").Replace("-", ""); entry.Text = strName; } } catch(Exception ex) { Console.WriteLine("Exception caught: {0}", ex); } finally { entry.TextChanged += OnAgeTextChanged; } } ```
The way that I resolved this issue in the end was using a separate class to handle my validations. My validation class: ``` using System.ComponentModel; using System.Runtime.CompilerServices; public class viewModel : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; private string age_; public string Age { get { return age_; } set { if (age_ != value) { age_ = ProcessAge(value); OnPropertyChanged(); } } } private string ProcessAge(string age) { if (string.IsNullOrEmpty(age)) return age; if (age.Length > 3) age = age.Substring(0, 3); if (age.StartsWith("0")) age = age.Remove(0, 1); return age.Replace(".", "").Replace("-", ""); } private void OnPropertyChanged([CallerMemberName] string propertyName = null) { PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName)); } } ``` Then, I can bind the form to use this class, with: ``` public MainPage() { InitializeComponent(); BindingContext = new viewModel(); } ``` And finally, to bind the entry control to use the Age property, I set the `Text` property ``` Text="{Binding Age, Mode=TwoWay}" ``` What this now does, is every time the value in the Age control changes, it will look to the Age property in the new class and see that to set it, it needs to go through `ProcessAge` to validate it, and this is where the checks are now done. This is faster, as it only occurs once per key press and there's no fiddling around required with subscribing and unsubscribing the `TextChanged` event and no loops.
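The normalisation rules themselves are not C#-specific. Here is the same `ProcessAge` logic transliterated to Python, which makes the order of operations explicit (truncate to three characters, strip a single leading zero, then drop any `.` and `-`):

```python
def process_age(age):
    if not age:
        return age
    if len(age) > 3:
        age = age[:3]        # keep at most 3 characters
    if age.startswith("0"):
        age = age[1:]        # strip one leading zero, as in the C# version
    return age.replace(".", "").replace("-", "")

print(process_age("1234"))  # '123'
print(process_age("-42"))   # '42'
print(process_age("0.5"))   # '5'
```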
How do I add Google Maps on my site? I have a form and I want to add a "select location" option. How can I do this, and how can I place a pin as the selected location?
You may want to consider using the [Google Maps API](http://code.google.com/apis/maps/), as [davek already suggested](https://stackoverflow.com/questions/2624833/how-add-google-map-on-site/2624856#2624856). The following example may help you getting started. All you would need to do is to change the JavaScript variable `userLocation` with the location chosen by your users from the drop-down field you mention. ``` <!DOCTYPE html> <html> <head> <meta http-equiv="content-type" content="text/html; charset=UTF-8"/> <title>Google Maps API Demo</title> <script src="http://maps.google.com/maps?file=api&amp;v=2&amp;sensor=false" type="text/javascript"></script> </head> <body onunload="GUnload()"> <div id="map_canvas" style="width: 400px; height: 300px"></div> <script type="text/javascript"> var userLocation = 'London, UK'; if (GBrowserIsCompatible()) { var geocoder = new GClientGeocoder(); geocoder.getLocations(userLocation, function (locations) { if (locations.Placemark) { var north = locations.Placemark[0].ExtendedData.LatLonBox.north; var south = locations.Placemark[0].ExtendedData.LatLonBox.south; var east = locations.Placemark[0].ExtendedData.LatLonBox.east; var west = locations.Placemark[0].ExtendedData.LatLonBox.west; var bounds = new GLatLngBounds(new GLatLng(south, west), new GLatLng(north, east)); var map = new GMap2(document.getElementById("map_canvas")); map.setCenter(bounds.getCenter(), map.getBoundsZoomLevel(bounds)); map.addOverlay(new GMarker(bounds.getCenter())); } }); } </script> </body> </html> ``` The above example would render a map like the one below: ![Render google map in based on selected location](https://i.stack.imgur.com/X3ffa.png) The map will not show if the [Google Client-side Geocoder](http://code.google.com/apis/maps/documentation/services.html#Geocoding) cannot retreive the coordinates from the address.
Posting to controller with jQuery ajax in CakePHP I want to post data to a controller in CakePHP, but posting with jQuery always results in an error and I can't figure out why. In my view I have the following method, which posts the data to the controller: ``` function RenameNode(name, id) { $.ajax({ type: "POST", url: '<?php echo Router::url(array('controller' => 'categories', 'action' => 'rename')); ?>', data: { id: id, name: name }, success: function(){ } }); } ``` My controller method looks like this: ``` public function rename($id = null, $name = null) { if ($this->request->is('get')) { throw new MethodNotAllowedException(); } if(!$id) { $id = @$this->request->query('id'); } if(!$name) { $name = @$this->request->query('name'); } if (!$id) { throw new NotFoundException(__('No id')); } $category = $this->Category->findById($id); if (!$category) { throw new NotFoundException(__('Invalid category')); } $this->autoRender = false; $this->layout = 'ajax'; if ($this->request->is('post') || $this->request->is('put')) { $this->Category->id = $id; $this->request->data['Category']['name'] = $name; if ($this->Category->save($this->request->data)) { $this->Session->setFlash(__('The category has been updated.')); $this->redirect(array('action' => 'index')); } else { $this->Session->setFlash(__('Unable to update the category.')); } } } ``` When I do a post with the jQuery method, I keep getting the following error message in my log: ``` 2013-05-20 11:34:25 Error: [NotFoundException] No id Request URL: /cakephp/categories/rename Stack Trace: #0 [internal function]: CategoriesController->rename() ``` When I comment out the request checks for GET and POST, the controller itself works perfectly when I call it with /categories/rename?id=1&name=test. For some reason the AJAX way doesn't work, but I can't figure out why. Any ideas?
**Update** I fixed it by changing the following code, now it works perfectly ``` if(!$id) { $id = @$this->request->query('id'); } if(!$name) { $name = @$this->request->query('name'); } ``` to ``` if(!$id) { $id = @$this->request->data('id'); } if(!$name) { $name = @$this->request->data('name'); } ```
You are not including the `id` and/or `name` in the URL you're posting to; ``` echo Router::url(array('controller' => 'categories', 'action' => 'rename')); ``` Will output; ``` /categories/rename ``` But you're *expecting* ``` /categories/rename/1/test ``` Or ``` /categories/rename?id=1&name=test ``` Change the URL in your AJAX code to something like; ``` echo Router::url(array( 'controller' => 'categories', 'action' => 'rename', 0 => $this->request->params['pass'][0], 1 => $this->request->params['pass'][1] )); ``` Which should output the right url, containing the *original* `id` and `name` of the current request (e.g. /categories/rename/123/oldname)
Monitor vs Watch on SourceForge Tracker What exactly is the difference watching and monitoring a project on the SourceForge Tracker UI? The tool tips aren't really helpful. Hold your mouse over the "Monitor" button and it says "Monitor this project". Hold it over "Watch" and it says nothing. > > ![sourceforge panel with buttons](https://i.stack.imgur.com/AS9dG.png) > > >
The "monitoring" and "watching" features are described on the SourceForge Wiki: - [Activity watches](https://sourceforge.net/apps/trac/sourceforge/wiki/Activity%20watches) - [Monitoring](https://sourceforge.net/apps/trac/sourceforge/wiki/Monitoring) Monitoring gives you email alerts when things happen to a project (changes to Tracker bugs, posts in forums, etc.), whereas Watching customizes your SourceForge web page to show you projects/people you're interested in following. If you're getting too much email, you can go to your "Account" page on SourceForge, and click "Monitoring" or "Watching", and see/adjust your settings from there. An alternative to these features is to use RSS -- for example, on a project's File Downloads page, there are RSS links for each directory, so you can see in your RSS reader when new files show up for a project.
How to suballocate buffers in Vulkan A recommended approach for memory management in Vulkan is sub-allocation of buffers; see for instance the image below. I'm trying to implement "the good" approach. I have a system in place that can tell me which regions within a memory allocation are available, so I can bind a sub-area of a single large buffer. However, I can't find the mechanism to do this, or am just misunderstanding what is happening, as the bind functions take a buffer as input, and an offset. I can't see how to specify the size of the binding other than through the existing buffer. So I have a few questions I guess: - are the dotted rectangles in the image below just bindings, or are they additional buffers? - if they are bindings, how do I tell Vulkan (ideally using VMA) to use that subsection of the buffer? - if they are additional buffers, how do I create them? - if neither, what are they? I have read up on a few custom allocators, but they seem to follow the "bad" approach, returning offsets into large allocations for binding, so still plenty of buffers but lower allocation counts. To be clear, I am not using custom allocator callbacks other than through VMA; the "system" to which I refer above sits on top of the VMA calls. Any pointers much appreciated! [![here](https://i.stack.imgur.com/QkBHm.png)](https://i.stack.imgur.com/QkBHm.png)
> > are the dotted rectangles in the image below just bindings, or are they additional buffers? > > > They represent the actual data. So the "Index" block is the range of storage that contains vertex indices. > > if they are bindings, how do I tell Vulkan (ideally using VMA) to use that subsection of the buffer? > > > That depends on the particular nature of how you're using that `VkBuffer` as a resource. Generally speaking, every function that uses a `VkBuffer` as a resource takes a byte offset that represents where to start reading from. Many such functions also take a size which coupled with the offset represents the full quantity of data that can be read through that particular resource. For example, `vkCmdBindVertexBuffers` takes an array of `VkBuffer`s, and for each `VkBuffer` it also takes a byte offset that represents the starting point for that vertex buffer. `VkDescriptorBufferInfo`, the structure that represents a buffer used by a descriptor, takes a `VkBuffer`, a byte offset, and a size. The vertex buffer (and index buffer) bindings don't have a size, but they don't *need one*. Their effective size is defined by the rendering command used with them (and the index data being read by it). If you render using 100 32-bit indices, then the expectation is that the index buffer's size, minus the starting offset, should be at least 400 bytes. And if it isn't, UB results.
F# Pattern-matching by type How does pattern-matching by the type of an argument work in F#? For example, I'm trying to write a simple program which would calculate the square root if a *number* is provided, or return its argument otherwise. ``` open System let my_sqrt x = match x with | :? float as f -> sqrt f | _ -> x printfn "Enter x" let x = Console.ReadLine() printfn "For x = %A result is %A" x (my_sqrt x) Console.ReadLine() ``` I get this error: ``` error FS0008: This runtime coercion or type test from type 'a to float involves an indeterminate type based on information prior to this program point. Runtime type tests are not allowed on some types. Further type annotations are needed. ``` Since `sqrt` works with `float` I check for the `float` type, but I guess there could be a better solution - like checking whether the input is a *number* (in general) and, if so, casting it to float?
The problem here is that the type of `x` is actually a `string`. Given that it comes from `Console.ReadLine`, what kind of information is stored in that string can only be determined at runtime. This means that you can use neither pattern matching nor pattern matching with coercion here. But you can use [Active Patterns](http://msdn.microsoft.com/en-us/library/dd233248.aspx). As what actual data is stored in `x` is only known at runtime, you have to parse the string and see what it contains. So suppose you are expecting a `float`, but you can't be sure, since the user can input whatever they want. We are going to try and parse our string: ``` let my_sqrt x = let success, v = System.Single.TryParse x // note: F#'s float is System.Double; System.Single corresponds to F#'s float32 if success then sqrt v else x ``` But this won't compile: > > This expression was expected to have type float32 but here has type string > > > The problem is that the compiler inferred the function to return a `float32`, based on the expression `sqrt (System.Single.Parse(x))`. But then if `x` doesn't parse to a float, we intend to just return it, and as `x` is a string we have an inconsistency here. To fix this, we will have to convert the result of `sqrt` to a string: ``` let my_sqrt x = let success, v = System.Single.TryParse x if success then (sqrt v).ToString() else x ``` Ok, this should work, but it doesn't use pattern matching. So let's define our "active" pattern, since we can't use regular pattern matching here: ``` let (|Float|_|) input = match System.Single.TryParse input with | true, v -> Some v | _ -> None ``` Basically, this pattern will match only if the `input` can be correctly parsed as a floating-point literal. Here's how it can be used in your initial function implementation: ``` let my_sqrt' x = match x with | Float f -> (sqrt f).ToString() | _ -> x ``` This looks a lot like your function, but note that I still had to add the `.ToString()` bit. Hope this helps.
Divide UIImage into two parts along a UIBezierPath How do I divide this `UIImage` into two parts along the black line? The upper contour is defined by a `UIBezierPath`. ![](https://i.stack.imgur.com/JhFvp.png) I need to get two resulting `UIImage`s. Is this possible?
The following set of routines create versions of a UIImage with either only the content **inside** a path, or only content **outside** that path. Both make use of the `compositeImage` method, which uses CGBlendMode. CGBlendMode is very powerful for masking anything you can draw against anything else you can draw. Calling compositeImage: with other blend modes can have interesting (if not always useful) effects. See the [CGContext Reference](https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CGContext/Reference/reference.html#//apple_ref/c/tdef/CGBlendMode) for all the modes. The clipping method I described in my comment to your OP does work and is probably faster, but only if you have UIBezierPaths defining all the regions you want to clip. ``` - (UIImage*) compositeImage:(UIImage*) sourceImage onPath:(UIBezierPath*) path usingBlendMode:(CGBlendMode) blend; { // Create a new image of the same size as the source. UIGraphicsBeginImageContext([sourceImage size]); // First draw an opaque path... [path fill]; // ...then composite with the image. [sourceImage drawAtPoint:CGPointZero blendMode:blend alpha:1.0]; // With drawing complete, store the composited image for later use. UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext(); // Graphics contexts must be ended manually. UIGraphicsEndImageContext(); return maskedImage; } - (UIImage*) maskImage:(UIImage*) sourceImage toAreaInsidePath:(UIBezierPath*) maskPath; { return [self compositeImage:sourceImage onPath:maskPath usingBlendMode:kCGBlendModeSourceIn]; } - (UIImage*) maskImage:(UIImage*) sourceImage toAreaOutsidePath:(UIBezierPath*) maskPath; { return [self compositeImage:sourceImage onPath:maskPath usingBlendMode:kCGBlendModeSourceOut]; } ```
How to read / write a struct in Binary Files? I am facing a small problem. I have a struct, which has a vector. Note that the vector's size changes on every iteration. Now, in a particular iteration, how do I store the struct which contains a vector of size n to a binary file? Also, when retrieving, assume that I know the size of the vector; how do I retrieve from the binary file the struct variable containing the vector of all the stored elements? I am able to store something to the binary file (as I can see the size increasing when writing), but when I am trying to retrieve back the elements, the size of the vector comes back as zero. **Unfortunately, I have to achieve this using the standard STL and not use any third-party libraries.**
You should have a look at [Boost Serialization](http://www.boost.org/doc/libs/1_46_1/libs/serialization/doc/index.html). **If you can't use 3rd party libraries**, you must know that C++ doesn't support serialization directly. This means you'll have to do it yourself. [This article](http://www.devarticles.com/c/a/Cplusplus/Serialize-Your-Class-into-Streams-in-C/) shows a nice way of serializing a custom object to the disk and retrieving it back. And [this tutorial](http://www.functionx.com/cpp/articles/serialization.htm) shows you how to get started right now with *fstream*. This is my attempt: **EDIT**: since the OP asked how to store/retrieve more than one record, I decided to update the original code. So, what changed? Now there's an *array* `student_t apprentice[3];` to store information for 3 students. The records are serialized to disk and then loaded back into RAM, where reading/searching for specific records is possible. Note that this produces a very small file. I do not suggest this approach when searching for records in huge files.
A struct that holds a vector cannot be written with a single raw `write()`/`read()` pair: the vector stores a *pointer* to heap memory, so the file would end up containing addresses instead of the grades. Serialize each field explicitly instead: ``` #include <fstream> #include <iostream> #include <vector> #include <cstring> using namespace std; typedef struct student { char name[10]; int age; vector<int> grades; } student_t; // Write one record field by field: fixed-size fields first, // then the vector's element count followed by its elements. void write_student(ofstream &out, const student_t &s) { out.write(s.name, sizeof(s.name)); out.write((const char*)&s.age, sizeof(s.age)); size_t count = s.grades.size(); out.write((const char*)&count, sizeof(count)); out.write((const char*)s.grades.data(), count * sizeof(int)); } // Read one record back in the same order it was written. void read_student(ifstream &in, student_t &s) { in.read(s.name, sizeof(s.name)); in.read((char*)&s.age, sizeof(s.age)); size_t count = 0; in.read((char*)&count, sizeof(count)); s.grades.resize(count); in.read((char*)s.grades.data(), count * sizeof(int)); } int main() { student_t apprentice[3]; strcpy(apprentice[0].name, "john"); apprentice[0].age = 21; apprentice[0].grades.push_back(1); apprentice[0].grades.push_back(3); apprentice[0].grades.push_back(5); strcpy(apprentice[1].name, "jerry"); apprentice[1].age = 22; apprentice[1].grades.push_back(2); apprentice[1].grades.push_back(4); apprentice[1].grades.push_back(6); strcpy(apprentice[2].name, "jimmy"); apprentice[2].age = 23; apprentice[2].grades.push_back(8); apprentice[2].grades.push_back(9); apprentice[2].grades.push_back(10); // Serializing the records to students.data ofstream output_file("students.data", ios::binary); for (size_t idx = 0; idx < 3; idx++) write_student(output_file, apprentice[idx]); output_file.close(); // Reading them back ifstream input_file("students.data", ios::binary); student_t master[3]; for (size_t idx = 0; idx < 3; idx++) read_student(input_file, master[idx]); for (size_t idx = 0; idx < 3; idx++) { // If you wanted to search for specific records, // you would do it here, e.g. if (idx == 2) { ... } cout << "Record #" << idx << endl; cout << "Name: " << master[idx].name << endl; cout << "Age: " << master[idx].age << endl; cout << "Grades: " << endl; for (size_t i = 0; i < master[idx].grades.size(); i++) cout << master[idx].grades[i] << " "; cout << endl << endl; } return 0; } ``` **Outputs**: ``` Record #0 Name: john Age: 21 Grades: 1 3 5 Record #1 Name: jerry Age: 22 Grades: 2 4 6 Record #2 Name: jimmy Age: 23 Grades: 8 9 10 ```
Sorting with stochastic comparisons Given a list where for every pair of elements (A, B) the probabilities P(A > B), P(A < B), and P(A = B) are known, how do you determine the most probable sorted permutation?
Let's ignore `P(A=B)` (we can say it splits evenly among `<,>` and change them to `<=,>=`). Now, let's look at a similar, yet intuitively easier problem: - let's find the best assignment such that `P(arr[0]<arr[1])*...*P(arr[i]<arr[i+1])*...*P(arr[n-2]<arr[n-1])` is maximal. - This is an easier problem, since we now take into account only adjacent elements (and not, for example, `P(arr[0]<arr[n-1])`); we use 'less' information. [proof is missing atm]. Now, we are looking to maximize the probability, which is equivalent to maximizing: ``` log{P(arr[0]<arr[1])} + ... + log{P(arr[n-2]<arr[n-1])} ``` Which in turn is equivalent to minimizing: ``` -log{P(arr[0]<arr[1])} - ... - log{P(arr[n-2]<arr[n-1])} ``` This is a [TSP](http://en.wikipedia.org/wiki/Travelling_salesman_problem) with weights on edges: ``` w(v,u) = -log(P(v<u)) ``` However, TSP is NP-Complete, and unless the missing-proof hypothesis is wrong (still thinking on it...) this means that there is no known polynomial solution to this problem, or at the very least to the adjacent-elements-only variation.
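To make the reduction concrete, here is a small sketch (the pairwise probabilities are made up): it builds the `-log` edge weights and brute-forces the ordering with the maximal adjacent-pair product, which is what a TSP solver would do for larger `n`.

```python
import math
from itertools import permutations

# Hypothetical pairwise probabilities P(a < b) for three items;
# P[(a, b)] + P[(b, a)] == 1 here (ties already split).
P = {
    ("x", "y"): 0.9, ("y", "x"): 0.1,
    ("y", "z"): 0.8, ("z", "y"): 0.2,
    ("x", "z"): 0.6, ("z", "x"): 0.4,
}

def path_cost(order):
    """Sum of -log P(order[i] < order[i+1]) over adjacent pairs.
    Minimizing this sum maximizes the product of the probabilities."""
    return sum(-math.log(P[(a, b)]) for a, b in zip(order, order[1:]))

def most_probable_order(items):
    # Brute force stands in for a TSP solver; fine only for small n.
    return min(permutations(items), key=path_cost)

print(most_probable_order(["x", "y", "z"]))  # ('x', 'y', 'z')
```

Here `('x', 'y', 'z')` wins because 0.9 * 0.8 = 0.72 beats every other adjacent-pair product.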
DateTime format in C# I have a simple problem: I want to write the date in an asp:label in the following format: MMM dd, yyyy I tried this: ``` lblDate.Text = System.DateTime.Today.ToString("MMM dd, yyyy"); ``` the result is: "okt. 12, 2012" 1. I want to remove the dot from it. 2. The month's name should begin with a capital letter. the right format: "Okt 12, 2012" how can I do this?
You can use the [AbbreviatedMonthNames](http://msdn.microsoft.com/en-us/library/system.globalization.datetimeformatinfo.abbreviatedmonthnames.aspx) property for this: ``` CultureInfo ci = CultureInfo.CreateSpecificCulture("en-US"); DateTimeFormatInfo dtfi = ci.DateTimeFormat; dtfi.AbbreviatedMonthNames = new string[] { "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec", "" }; lblDate.Text = DateTime.Now.ToString("MMM dd, yyyy", dtfi); ``` The output will then be `Okt 12, 2012`
C# unit test for a method which calls Console.ReadLine() I want to create a unit test for a member function of a class called `ScoreBoard`, which stores the top five players in a game. The problem is that the method I created a test for (`SignInScoreBoard`) calls `Console.ReadLine()` so the user can type their name: ``` public void SignInScoreBoard(int steps) { if (topScored.Count < 5) { Console.Write(ASK_FOR_NAME_MESSAGE); string name = Console.ReadLine(); KeyValuePair<string, int> pair = new KeyValuePair<string, int>(name, steps); topScored.Insert(topScored.Count, pair); } else { if (steps < topScored[4].Value) { topScored.RemoveAt(4); Console.Write(ASK_FOR_NAME_MESSAGE); string name = Console.ReadLine(); topScored.Insert(4, new KeyValuePair<string, int>(name, steps)); } } } ``` Is there a way to insert, say, ten users so I can check whether the five with the fewest moves (steps) are being stored?
You'll need to refactor the lines of code that call Console.ReadLine into a separate object, so you can stub it out with your own implementation in your tests. As a quick example, you could just make a class like so: ``` public class ConsoleNameRetriever { public virtual string GetNextName() { return Console.ReadLine(); } } ``` Then, in your method, refactor it to take an instance of this class instead. However, at test time, you could override this with a test implementation: ``` public class TestNameRetriever : ConsoleNameRetriever { // This should give you the idea... private string[] names = new string[] { "Foo", "Foo2", ... }; private int index = 0; public override string GetNextName() { return names[index++]; } } ``` When you test, swap out the implementation with a test implementation. Granted, I'd personally use a framework to make this easier, and use a clean interface instead of these implementations, but hopefully the above is enough to give you the right idea...