php for "high-traffic" websites I have read that PHP's "probable" weakness is how it handles "concurrency". With only sessions and cookies to keep track of user state, how can PHP handle the following situations with high accuracy: 1. multiple users checking out with one item that has only 1 unit in stock 2. multiple users logging into the same user account using the same login details 3. multiple users editing the same image at the same time (though this rarely happens in real life) or any other transactions that demand multiple-thread handling (I apologize if I misused terms here)
These aren't real concurrency issues. While it's true that PHP as an environment lacks thread capability, any web server utilizing a PHP module will have multiple threads, each with their own active PHP environment inside it, all utilizing the same resources. You would run into the same problems with Java, .NET, Perl, or any other web application language. 1. You need a transaction on your database, probably with a write lock, so that other users can't read the item and run the checkout process while someone else is checking out. This is not a language threading issue; it's a database transactional issue. 2. This isn't a threading issue either. Sessions are fairly trivial with all the tools available, and I've never heard of a "one thread per session" style of implementation on any language platform (that would be non-trivial, difficult to implement, and would just add overhead). You either allow multiple session tokens to be active for one account (the user can log in multiple times in different tabs or web browsers if they want), or you don't (all session tokens are cleared each time a login procedure occurs so that only one token is active). 3. An odd one, but I'm not sure how threading fits here either. Image editing would have to be done client-side in the browser. You can't keep "threads" open to a user's browser in any language... HTTP doesn't work like that. You'd send them the image and you're done until they hit "save" and send it back. If you're worried about users overwriting each other's changes, again, you'd just have to put a transactional lock on it. I'd probably just "version" each image, and if an update occurred from one user while another was editing it, you'd inform the other user that they needed to refresh their copy. As far as I'm aware, no language uses threads to accomplish any of these tasks.
Because of the stateless nature of HTTP communication, cookies and sessions are a mainstay of *every* web language, so no matter what platform you use, you're going to see very much the same strategy in all of them for handling a given problem.
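To make point 1 concrete, here is a minimal sketch of the transactional pattern in Python with SQLite (the schema, function name, and stock count are invented for illustration; the same idea applies to MySQL or PostgreSQL called from PHP):

```python
# The decrement and the stock check happen in one atomic statement inside a
# write transaction, so two concurrent checkouts cannot both take the last unit.
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; we manage transactions
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, stock INTEGER)")
conn.execute("INSERT INTO items VALUES (1, 1)")  # one unit in stock

def checkout(conn, item_id):
    """Return True if the purchase succeeded, False if out of stock."""
    conn.execute("BEGIN IMMEDIATE")  # take the write lock up front
    cur = conn.execute(
        "UPDATE items SET stock = stock - 1 WHERE id = ? AND stock > 0",
        (item_id,),
    )
    conn.execute("COMMIT")
    return cur.rowcount == 1  # 0 rows updated means someone beat us to it

print(checkout(conn, 1))  # first buyer gets the item -> True
print(checkout(conn, 1))  # second buyer is refused   -> False
```

On MySQL/PostgreSQL you would typically use `SELECT ... FOR UPDATE` or the same conditional-`UPDATE` trick inside an explicit transaction; the point is that the database, not the language runtime, serializes the checkouts.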
Visual Studio Code IntelliSense stopped working on C# files I realized that I can't use the `ctrl + .` shortcut to import other `C#` classes. This shortcut works just fine for other file types like TypeScript. I have uninstalled and reinstalled VS Code, and I also installed an old version. But nothing worked. [![enter image description here](https://i.stack.imgur.com/pbvlJ.png)](https://i.stack.imgur.com/pbvlJ.png) As shown above, no import suggestion shows up when I press `ctrl + .`; those are the only choices I'm given. It's usable without it, but it's annoying to type the imports manually.
The C# language features in VS Code are provided by a service called OmniSharp. A couple of things you could try:

### A) Restart OmniSharp

1. Open the Command Palette (`Ctrl` + `Shift` + `P`)
2. Type `OmniSharp: Restart OmniSharp`

[![Command Palette > OmniSharp](https://i.stack.imgur.com/sHbo9.png)](https://i.stack.imgur.com/sHbo9.png)

### B) View OmniSharp Logs

1. Open the Output Panel (`Ctrl` + `'`)
2. Select `OmniSharp Log` from the dropdown

[![Output Panel > OmniSharp](https://i.stack.imgur.com/sXj8n.png)](https://i.stack.imgur.com/sXj8n.png)

### C) Enable OmniSharp Logging

1. [Enabling C# debugger logging](https://github.com/OmniSharp/omnisharp-vscode/wiki/Enabling-C%23-debugger-logging) in OmniSharp

```
"configurations": [
    {
        "...": "...",
        "logging": {
            "engineLogging": true
        }
    },
    {
        "...": "..."
    }
]
```
Redshift Not Connecting to Host via Python Script I currently have a .csv file in an S3 bucket that I'd like to append to a table in a Redshift database using a Python script. I have a separate file parser and upload to S3 that work just fine. The code I have for connecting to/copying into the table is below. I get the following error message:

> OperationalError: (psycopg2.OperationalError) could not connect to server: Connection timed out (0x0000274C/10060)
> Is the server running on host "redshift\_cluster\_name.unique\_here.region.redshift.amazonaws.com" (18.221.51.45) and accepting
> TCP/IP connections on port 5439?

I can confirm the following: - Port is 5439 - Not encrypted - Cluster name/DB name/username/password are all correct - Publicly accessible set to "Yes" What should I be fixing to make sure I can connect my file in S3 to Redshift? I have looked around on Stack Overflow and [ServerFault](https://serverfault.com/questions/656079/unable-to-connect-to-public-postgresql-rds-instance), but those answers seem to be either about MySQL to Redshift, or their solutions (like the linked ServerFault CIDR solution) did not work. Thank you for any help!
```
import sqlalchemy as sa
from sqlalchemy.orm import sessionmaker

DATABASE = "db"
USER = "user"
PASSWORD = "password"
HOST = "redshift_cluster_name.unique_here.region.redshift.amazonaws.com"
PORT = "5439"
SCHEMA = "public"

S3_FULL_PATH = 's3://bucket/file.csv'
ARN_CREDENTIALS = 'arn:aws:iam::aws_id:role/myRedshiftRole'
REGION = 'region'

############ CONNECTING AND CREATING SESSIONS ############
connection_string = f"redshift+psycopg2://{USER}:{PASSWORD}@{HOST}:{PORT}/{DATABASE}"
engine = sa.create_engine(connection_string)
session = sessionmaker()
session.configure(bind=engine)
s = session()
SetPath = f"SET search_path TO {SCHEMA}"
s.execute(SetPath)
##########################################################

############ RUNNING COPY ############
copy_command = f'''
    copy category from '{S3_FULL_PATH}'
    credentials 'aws_iam_role={ARN_CREDENTIALS}'
    delimiter ','
    region '{REGION}';
'''
s.execute(copy_command)
s.commit()
######################################

############ CLOSE SESSION ############
s.close()
#######################################
```
Connecting via a Python program would require the same connectivity as connecting from an SQL client. I created a new cluster so I could document the process for you. Here are the steps I took:

- **Created a VPC** with a CIDR of `10.0.0.0/16`. I didn't really need to create another VPC, but I wanted to avoid any problems with prior configurations.
- **Created a Subnet** in the VPC with a CIDR of `10.0.0.0/24`.
- Created an **Internet Gateway** and attached it to the VPC.
- Edited the **default Route Table** to send `0.0.0.0/0` traffic to the Internet Gateway. (I'm only creating a public subnet, so I don't need a route table for a private subnet.)
- Created a **Redshift Cluster Subnet Group** with the single subnet I created.
- Launched a 1-node **Redshift cluster** into the Cluster Subnet Group, with `Publicly accessible = Yes` and the default Security Group.
- Went back to the VPC console to edit the **default Security Group** and added an Inbound rule for Redshift from Anywhere.
- Waited for the cluster to become ready.
- I then used [DbVisualizer](https://dbvis.com/) to log in to the database. **Success!**

The above steps made a publicly-available Redshift cluster and I connected to it from my computer on the Internet.
What can fail WifiP2pManager.connect? I have code that performs a Wi-Fi P2P discovery, presents the nearby devices to the user, and lets him select which device he wants to connect to. The discovery is working as expected, but when I try to actually connect to the selected device, the system calls `ActionListener.onFailure` and passes the reason code for "Internal Error". This is the code that initiates the connection:

```
public void connectToDevice(WifiP2pDevice device) {
    Log.i(TAG, "Initiating connection to " + device.deviceAddress);
    stopScan();

    WifiP2pConfig config = new WifiP2pConfig();
    config.deviceAddress = device.deviceAddress;
    config.wps.setup = WpsInfo.PBC;

    // Since we wish to send a friend request, it will be easier if
    // we'll end up as a client because we will have the group owner's
    // address immediately.
    config.groupOwnerIntent = 0;

    mP2pManager.connect(mChannel, config, mConnectionListener);
}
```

And the `mConnectionListener` is defined as follows:

```
protected ActionListener mConnectionListener = new ActionListener() {
    @Override
    public void onSuccess() {
        Log.i(TAG, "Connection initiated successfully");
    }

    @Override
    public void onFailure(int reason) {
        String reasonString = reason(reason);
        Log.e(TAG, "Error while connecting to Wifi peer: " + reasonString);
    }
};
```

The devices are not part of any group when this error is thrown, and this happens when either device (Nexus 4 & Nexus 7) is the initiator. Any ideas what might be the problem?
After hours of digging inside the Android source code, I found the problem. The error was thrown by the `WifiP2pService` class. The reason was that the device I was trying to connect to was not on the internal nearby-peers list. But why was the device not on the peers list?! After more digging I saw that when a scan is finished, the peers list is cleared. So what failed my connection was the `stopScan()` method that I invoked just before initiating the connection. After removing this line, the connection was established successfully. **Why did I stop the scan?** I came to Wi-Fi Direct immediately after implementing the same feature over Bluetooth. The Bluetooth documentation specifically says to stop any ongoing scan before connecting to a device in order to save bandwidth and speed up the process. So I thought to do the same on Wi-Fi Direct. **Don't do that.**
System.Beep equivalent Is there any class for making sound in the speakers with different tones? System.Beep() is primitive; I can't pass it a frequency as a double. I want to, let's say, play an A sound or a B# sound. I want the function call to look something like:

```
double d = 425.545;
int duration = 500;
double volume = 0.8;
f(d, duration, volume);
f(ToneClass.A, duration, volume); // or like this
```
The integer part of the frequency is enough; your ears won't even notice the decimal part.

```
int frq = 425;
int duration = 500;
Console.Beep(frq, duration);
```

You can see piano key frequencies here: <https://en.wikipedia.org/wiki/Piano_key_frequencies> A few notes: use an enum instead of ints, and use `Task.Delay(duration).Wait();` for rests (by rest I mean silent notes). You can use a formula to calculate the piano key, and then the key frequency, instead of a large enum or a large number of hard-coded ints. You should also consider note lengths; you can see them here: <https://en.wikipedia.org/wiki/Note_value>

Here is a piece of "In the Hall of the Mountain King" by Edvard Grieg :D

```
static int GetPianoKey(string note)
{
    int key = -1;
    switch (note[0])
    {
        case 'A': key = 1; break;
        case 'B': key = 3; break;
        case 'C': key = 4; break;
        case 'D': key = 6; break;
        case 'E': key = 8; break;
        case 'F': key = 9; break;
        case 'G': key = 11; break;
    }
    if (note.Length == 2)
    {
        return key + 12 * (note[1] - '0');
    }
    if (note.Length == 3)
    {
        return key + 12 * (note[2] - '0') + (note[1] == 'b' ? -1 : 1);
    }
    throw new ApplicationException("Wrong note.");
}

static int GetNoteFrequency(string note)
{
    return (int) (Math.Pow(1.05946309436, GetPianoKey(note) - 49) * 440);
}

static int GetTickDuration(int tempo)
{
    return 60000 / tempo;
}

private static void Main(string[] args)
{
    int duration = GetTickDuration(120); // 120 bpm; duration of a quarter note

    for (int i = 0; i < 2; i++)
    {
        Console.Beep(GetNoteFrequency("A3"), duration / 2); // eighth note ==> duration/2
        Console.Beep(GetNoteFrequency("B3"), duration / 2);
        Console.Beep(GetNoteFrequency("C3"), duration / 2);
        Console.Beep(GetNoteFrequency("D3"), duration / 2);
        Console.Beep(GetNoteFrequency("E3"), duration / 2);
        Console.Beep(GetNoteFrequency("C3"), duration / 2);
        Console.Beep(GetNoteFrequency("E3"), duration / 2);
        Task.Delay(duration / 2).Wait(); // eighth rest ==> duration/2
        Console.Beep(GetNoteFrequency("D#3"), duration / 2);
        Console.Beep(GetNoteFrequency("B3"), duration / 2);
        Console.Beep(GetNoteFrequency("D#3"), duration / 2);
        Task.Delay(duration / 2).Wait();
        Console.Beep(GetNoteFrequency("D3"), duration / 2);
        Console.Beep(GetNoteFrequency("Bb3"), duration / 2);
        Console.Beep(GetNoteFrequency("D3"), duration / 2);
        Task.Delay(duration / 2).Wait();
        Console.Beep(GetNoteFrequency("A3"), duration / 2);
        Console.Beep(GetNoteFrequency("B3"), duration / 2);
        Console.Beep(GetNoteFrequency("C3"), duration / 2);
        Console.Beep(GetNoteFrequency("D3"), duration / 2);
        Console.Beep(GetNoteFrequency("E3"), duration / 2);
        Console.Beep(GetNoteFrequency("C3"), duration / 2);
        Console.Beep(GetNoteFrequency("E3"), duration / 2);
        Console.Beep(GetNoteFrequency("A4"), duration / 2);
        Console.Beep(GetNoteFrequency("G3"), duration / 2);
        Console.Beep(GetNoteFrequency("E3"), duration / 2);
        Console.Beep(GetNoteFrequency("C3"), duration / 2);
        Console.Beep(GetNoteFrequency("E3"), duration / 2);
        Console.Beep(GetNoteFrequency("G3"), duration * 2); // half note ==> duration*2
    }
}
```
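The tuning math behind `GetNoteFrequency` is standard equal temperament: key 49 on an 88-key piano is A4 = 440 Hz, and each key multiplies the frequency by 2^(1/12) (the 1.05946... constant). It can be checked independently with a quick Python sketch:

```python
# Equal-temperament frequency of the n-th piano key, with key 49 = A4 = 440 Hz.
def key_frequency(n: int) -> float:
    """Frequency in Hz of the n-th piano key (standard 88-key numbering)."""
    return 440.0 * 2 ** ((n - 49) / 12)

print(round(key_frequency(49)))  # A4 -> 440
print(round(key_frequency(40)))  # C4 (middle C) -> 262
```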
How do I get the sum of values of an object array (Java 8)? In variable `a` I create a `double` array, and in `maxA` I get the sum of its values. Now in variable `b` I create an object array with double values, and I want the sum of those values using a stream. Thanks for the help.

```
double[] a = new double[] {3.0, 1.0};
double maxA = Arrays.stream(a).sum();

ObjectWithDoubleValue o1 = new ObjectWithDoubleValue(3.0);
ObjectWithDoubleValue o2 = new ObjectWithDoubleValue(1.0);
ObjectWithDoubleValue[] b = {o1, o2};
double maxB = ?;
```
Use `mapToDouble`, which will return a `DoubleStream`; use a getter of your class to extract the value from each object, and finally apply `sum`:

```
Arrays.stream(b).mapToDouble(ObjectWithDoubleValue::getValue).sum()
```

where `getValue` is a getter of your class:

```
class ObjectWithDoubleValue {
    double a;

    public double getValue() {
        return a;
    }
}
```

---

Sample:

```
ObjectWithDoubleValue a1 = new ObjectWithDoubleValue();
a1.a = 3.0;
ObjectWithDoubleValue a2 = new ObjectWithDoubleValue();
a2.a = 3.0;
ObjectWithDoubleValue[] aa = {a1, a2};
System.out.println(Arrays.stream(aa).mapToDouble(ObjectWithDoubleValue::getValue).sum());
```

Output:

```
6.0
```
How to extract the inner text and XML of node as string? I have the following XML structure:

```
<?xml version="1.0"?>
<main>
  <node1>
    <subnode1>
      <value1>101</value1>
      <value2>102</value2>
      <value3>103</value3>
    </subnode1>
    <subnode2>
      <value1>501</value1>
      <value2>502</value2>
      <value3>503</value3>
    </subnode2>
  </node1>
</main>
```

In Delphi I am looking for a function which returns the inner text and XML of a node as a string. For example, for `<node1>` the string should be (if possible including indents and line breaks):

```
<subnode1>
  <value1>101</value1>
  <value2>102</value2>
  <value3>103</value3>
</subnode1>
<subnode2>
  <value1>501</value1>
  <value2>502</value2>
  <value3>503</value3>
</subnode2>
```

I cannot find such a function in Delphi 10. Is there such a function? Or what is the best approach to implement one in Delphi 10?
The *correct* way to handle this is to use an actual XML library, such as Delphi's native [`TXMLDocument`](http://docwiki.embarcadero.com/Libraries/en/Xml.XMLDoc.TXMLDocument) component or [`IXMLDocument`](http://docwiki.embarcadero.com/Libraries/en/Xml.XMLIntf.IXMLDocument) interface (or any number of 3rd-party XML libraries that are available for Delphi). You can load your XML into it, then find the [`IXMLNode`](http://docwiki.embarcadero.com/Libraries/Rio/en/Xml.XMLIntf.IXMLNode) for the `<node1>` element (or whichever element you want), and then read its [`XML`](http://docwiki.embarcadero.com/Libraries/en/Xml.XMLIntf.IXMLNode.XML) property as needed. For example:

```
uses
  ..., Xml.XMLIntf, Xml.XMLDoc;

var
  XML: DOMString;
  Doc: IXMLDocument;
  Node: IXMLNode;
begin
  XML := '<?xml version="1.0"?><main><node1>...</node1></main>';
  Doc := LoadXMLData(XML);
  Node := Doc.DocumentElement; // <main>
  Node := Node.ChildNodes['node1'];
  XML := Node.XML;
  ShowMessage(XML);
end;
```

Or:

```
uses
  ..., Xml.XMLIntf, Xml.xmldom, Xml.XMLDoc;

var
  XML: DOMString;
  Doc: IXMLDocument;
  Node: IXMLNode;
  XPath: IDOMNodeSelect;
  domNode: IDOMNode;
begin
  XML := '<?xml version="1.0"?><main><node1>...</node1></main>';
  Doc := LoadXMLData(XML);
  XPath := Doc.DocumentElement.DOMNode as IDOMNodeSelect;
  domNode := XPath.selectNode('/main/node1');
  Node := TXMLNode.Create(domNode, nil, (Doc as IXmlDocumentAccess).DocumentObject);
  XML := Node.XML;
  ShowMessage(XML);
end;
```
Ctrl + Click on a Firefox tab places a \* on it. What does this signify? When I do `Ctrl` + Click on a Firefox tab, the display in the tab includes an asterisk (\*) before the page name. What does this signify? Does this have any special significance?
This is part of the [Tab Mix Plus](http://addons.mozilla.org/en-US/firefox/addon/tab-mix-plus/) extension. The default defined action for `Ctrl` + `Click` is `Selects the tab for merging`. You can then merge tabs into one window with `Ctrl` + `Shift` + `U`. Tabs selected for merging are indicated with a `(*)` before the page name (that's an asterisk enclosed in parentheses). Editor's note: The shortcut used to be `Ctrl` + `Shift` + `M`, but that now opens a Firefox dev tool. After extensive online searching for the new shortcut, I had to search the source code of the extension. [![Screenshot of Tab Mix Plus <kbd>Ctrl</kbd> + <kbd>Click</kbd> options](https://i.stack.imgur.com/iiS0N.png) Click for full size](https://i.stack.imgur.com/iiS0N.png)
Is there a difference between speaker and headphone jacks/ports? I have a 2.1 speaker setup going into my computer, but primarily plug them into the headphone jack as it is easier to access. I do this because I switch between a couple different devices with these speakers. At one point I plugged them into the speaker port and noticed a very slight difference in the volume. Now both volumes in the properties are at the same level, but the noise coming out was slightly different. So do the 2 ports have different "levels" of output? Volume, bass, treble...?
It depends on what hardware you have in the computer, but **there usually is a difference between speaker and headphone ports** - specifically, relating to the max/min speaker/headphone impedance values you can use with either port. Certain sound cards, for example the [Auzentech X-Fi Forte](http://www.auzentech.com/site/products/x-fi_forte.php), include a built-in headphone amplifier on the headphone port. Taking a look at the actual output port specifications, we can also see different loading levels for the headphone and other line-out ports:

```
Headphone Load Impedance: 16 - 600 Ω
Line Output Impedance: 330 Ω
Line/Aux Input Impedance: 10 kΩ (10,000 Ω)
```

This is why many sound cards specify to **not** use a passive (i.e. unamplified) speaker with certain ports, as the lower impedance may cause too much current draw, and possibly damage the particular port.

---

The general thing to note here, though, is **impedance matching your speakers/headphones to the appropriate port**, and in general, your speakers go to the speaker port, and your (unpowered) headphones go to the headphone port, precisely for the reasons outlined above. This also explains why you might notice a slight difference in the volume levels between the two ports.
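A quick back-of-the-envelope check of that current-draw claim, using Ohm's law (the 1 V RMS figure below is just an illustrative line-level voltage, not a measured spec of any card):

```python
# For the same output voltage, current scales as I = V / Z (Ohm's law for an
# ideal source), so a low-impedance load draws far more current from the port.
V_RMS = 1.0  # illustrative line-level signal, volts RMS

def current_ma(impedance_ohms: float) -> float:
    """Current draw in milliamps for a purely resistive load."""
    return V_RMS / impedance_ohms * 1000

print(round(current_ma(16), 1))   # 16-ohm headphones: 62.5 mA
print(round(current_ma(330), 2))  # 330-ohm line load:  3.03 mA
```

Roughly 20x more current into the 16-ohm load, which is why an unamplified low-impedance speaker on a line-out can overstress the port.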
How do you decide your side projects At any given time, I usually have a bunch of ideas for weekend/side projects that I can work on. The ideas can generally be categorized like this: 1. Self Learning: Learning a new language/technology/framework 2. Work related: Learning/doing something that would help you at work 3. Money: Projects that (you think) can make some money 4. Fun/Utility projects These are just the rough categories that I can think of, and there can be more/other ways of classification. My question is: based on your experience, what should drive the decision of what kind of project to work on? What parameters apart from the type of project should impact this decision (time, effort, money...)?
I actually wrote a [blog post](http://jasonmbaker.com/how-to-have-a-side-project) about this a while back. To summarize, the major guidelines I try to stick to in coming up with side projects are: 1. Have fun 2. Learn something 3. Make it timeless (in other words, make it something you can come back to later) 4. Don't limit yourself to just code (I learn a lot from my blog) 5. Write something I'll actually use (because I personally am more likely to stick with it that way). To answer your question a bit more directly, I generally try to do #1 and #4 almost exclusively with my side projects. That said, I gave having fun the number 1 spot for a reason. If you aren't having fun, it isn't a side project. It's work.
Using a REST API and sending a POST request

```
POST localhost:5000/registrar
{
  "enrollId": "jim",
  "enrollSecret": "6avZQLwcUe9b"
}
```

How do I use this in a JavaScript file? Do I use JSON or jQuery? And how do I invoke the request function from the .html?
Use jQuery for this:

```
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.0.0/jquery.min.js"></script>
```

And call the function:

```
$(document).ready(function(){
    $.post('http://localhost:5000/registrar', {
        "enrollId": "jim",
        "enrollSecret": "6avZQLwcUe9b"
    }, function(serverResponse){
        // do what you want with the server response
    });
});
```

The same without the shorthand, in order to handle errors:

```
$.ajax({
    type: "POST",
    url: 'http://localhost:5000/registrar',
    data: {
        "enrollId": "jim",
        "enrollSecret": "6avZQLwcUe9b"
    },
    success: function(){ $('#register').html('<h1>Login successful</h1>'); },
    error: function(){ $('#register').html('<h1>Login error</h1>'); },
    dataType: 'json'
});
```
How to use foreign key when querying from two tables My question is quite basic. It is about how I can build my query using a foreign key to select certain information from two tables.

```
table vehicles
+-------+----------+------------+
| id_car| car_model| car_owner  |
+-------+----------+------------+
|     1 | VW       |        132 |
+-------+----------+------------+
|     2 | VW       |        200 |
+-------+----------+------------+

table users
+-------+----------+------------+
|user_id| user_name| user_phone |
+-------+----------+------------+
|   132 | Peter    | 555-555    |
+-------+----------+------------+
|   200 | Jim      | 555-333    |
+-------+----------+------------+
```

`car_owner` is a foreign key in the `vehicles` table which references the primary key `user_id` of the `users` table. Say someone searches for all VW cars, and I want to output the following information as HTML (yes, I know that this is not the correct way - I use it just to simplify the example and show which information comes from each table):

```
> echo "Car model:" . `vehicles.car_model`
> echo "Car owner:" . `users.user_name`
> echo "Contacts: " . `users.user_phone`
```

Thanks in advance.
I'm not sure if you understood what foreign keys are used for. A foreign key basically says "for this entry there has to be a matching entry in the parent table". You said `car_owner` is a foreign key in the `vehicles` table referencing `users.user_id`, so let's assume a table definition like this:

```
CREATE TABLE users (`user_id` int PRIMARY KEY, `user_name` varchar(5), `user_phone` varchar(7));
CREATE TABLE vehicles (`id_car` int, `car_model` varchar(2), `car_owner` int,
  CONSTRAINT `fk_car_owner` FOREIGN KEY (car_owner) REFERENCES users(user_id));
```

When you want to insert a new vehicle into the table, its car\_owner must be an existing entry in the user\_id column of the users table. Foreign keys are there to implement business rules. Does every car necessarily have to be owned by someone? Or the other way round, does every user have to own a car? If you can answer both questions with no, then don't implement any foreign keys for this case. But do so if you can answer yes for sure. To get the information you're looking for, just do

```
SELECT *
FROM vehicles
INNER JOIN users ON vehicles.car_owner = users.user_id
```
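To see the join in action, here's a runnable sketch of the same schema using SQLite from Python (SQLite syntax differs slightly from MySQL, and the column types are simplified):

```python
# Build the two example tables, enforce the foreign key, and run the join.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this opt-in
conn.execute("CREATE TABLE users (user_id INTEGER PRIMARY KEY, user_name TEXT, user_phone TEXT)")
conn.execute("""CREATE TABLE vehicles (
    id_car INTEGER PRIMARY KEY, car_model TEXT, car_owner INTEGER,
    FOREIGN KEY (car_owner) REFERENCES users(user_id))""")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(132, "Peter", "555-555"), (200, "Jim", "555-333")])
conn.executemany("INSERT INTO vehicles VALUES (?, ?, ?)",
                 [(1, "VW", 132), (2, "VW", 200)])

rows = conn.execute("""SELECT v.car_model, u.user_name, u.user_phone
                       FROM vehicles v
                       INNER JOIN users u ON v.car_owner = u.user_id
                       WHERE v.car_model = 'VW'""").fetchall()
for model, owner, phone in rows:
    print(model, owner, phone)
```

Each result row carries the columns from both tables, which is exactly what the `echo` lines in the question need.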
NodaTime get Country Time based on CountryCode I have a requirement where an Admin selects a `list of countries` and a `time for an alert` for the users in those countries. Let's say the Admin selects `24/07/2014 09:00 AM` for the countries `India, Malaysia and Canada`. The users need to get the alerts based on `their time zone`, i.e. every user should get the alert at `9 AM local time`. I only have the `country codes`, like **IN, MY, CA**, so I thought of getting their time zones and calculating based on the `server time`. For example: my server is located in `Canada`, so I calculate the `time to alert` in `India` based on the `India time zone` and `save that time in the DB`. My Windows service will then run at that India time and push the alert. But for this, I need to save multiple records with different times in the DB. To get the `time zone` from the country code, I've used **NodaTime**:

```
var CountryInfo = (from location in TzdbDateTimeZoneSource.Default.ZoneLocations
                   where location.CountryCode.Equals(CountryCode, StringComparison.OrdinalIgnoreCase)
                   select new { location.ZoneId, location.CountryName })
                  .FirstOrDefault();
```

I'm getting the `TimeZoneID` from this query. Can I get the `current date and time` for the country code based on the Admin's selected DateTime?
Your code is nearly correct - although it can be simplified somewhat, and made more testable, and it should handle the case where there's no such zone...

```
// You should use dependency injection really - that makes the code much more
// testable. Basically, you want an IClock...
IClock clock = SystemClock.Instance;

var countryCode = "IN";
var location = TzdbDateTimeZoneSource.Default.ZoneLocations
    .FirstOrDefault(l => l.CountryCode == countryCode);
if (location == null)
{
    // This is up to you - there's no location with your desired country code.
}
else
{
    var zone = DateTimeZoneProviders.Tzdb[location.ZoneId];
    var zonedNow = clock.Now.InZone(zone);
    // Now do what you want with zonedNow... I'd avoid going back into BCL
    // types, myself.
}
```

Bear in mind that this assumes there's just *one* time zone for a country - that's not always the case. Think of the US, where there are a *lot* of time zones...
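To illustrate the underlying idea outside of .NET - "9 AM local" is a different instant in each zone - here is a small Python sketch using `zoneinfo`. The zone IDs are tzdb IDs, the same family Noda Time's `ZoneLocations` returns (the country-code-to-zone mapping is the part Noda Time provides and the Python stdlib does not):

```python
# Convert "9 AM on the chosen date, in each country's zone" to UTC instants.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

alert_zones = ["Asia/Kolkata", "Asia/Kuala_Lumpur", "America/Toronto"]
for zone_id in alert_zones:
    local_9am = datetime(2014, 7, 24, 9, 0, tzinfo=ZoneInfo(zone_id))
    print(zone_id, "->", local_9am.astimezone(timezone.utc).isoformat())
```

Three different UTC instants come out, which is why the question's approach of storing one scheduled time per country in the DB is the right shape.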
Prevent trailing and leading spaces in VS Code multiline comments When a multiline comment is added with Shift+Ctrl+A, a trailing space is added on the `/*` line, which may cause linter problems: [![multiline comment](https://i.stack.imgur.com/MsLfq.png)](https://i.stack.imgur.com/MsLfq.png) I'm comfortable with the `no-trailing-spaces` linter rule regarding comments and would prefer not to change it because of VS Code quirks. The leading space before `*/` doesn't cause linter problems, but I would like to get rid of it because it looks displaced, and I don't use intermediate asterisks in multiline comments like:

```
/*
 * multiline
 * comment
 */
```

Can the way spaces are added to multiline comments be changed in Visual Studio Code?
EDIT for v1.42 (setting is "Comments: Insert Space") : Apparently coming to v1.42 is this setting `editor.insertSpaceAfterComment` which seems to solve your problem in the block comment case. But if disabled no spaces will be inserted after line `//` comment notation, so `//comment starts immediately` instead of `// comment starts after space`. This may or may not be acceptable to you. [![enter image description here](https://i.stack.imgur.com/CarXO.gif)](https://i.stack.imgur.com/CarXO.gif) See <https://github.com/microsoft/vscode/pull/41747> --- If you have ``` "editor.trimAutoWhitespace": true ``` when you save the file it will remove that trailing whitespace. Alternatively, using the command `editor.action.trimTrailingWhitespace` will also remove the trailing spaces in the file `Ctrl`-`K` `Ctrl`-`X`. Modifying the built-in snippets is tricky since they can be overridden upon updates. You could make a **macro** that deletes the space in one go. I presume you meant `Shift`-`Alt`-`A`: that is the command for toggling block comments on my vscode. You said Shift+Ctrl+A in your question which is unbound for me. Using the extension [multiCommand](https://marketplace.visualstudio.com/items?itemName=ryuta46.multi-command): (in your settings.json) ``` { "command": "multiCommand.blockComment", "sequence": [ "editor.action.blockComment", "editor.action.trimTrailingWhitespace", "cancelSelection", "deleteRight" ] }, ``` Those last two commands get rid of the leading space before the `*/` as you requested. In your keybindings.json: ``` { "key": "shift+alt+a", "command": "-editor.action.blockComment", "when": "editorTextFocus && !editorReadonly" }, { "key": "shift+alt+a", "command": "multiCommand.blockComment", }, ``` And then invoke with `Shift`-`Alt`-`A`, toggling off still works too. [![Demo of macro running](https://i.stack.imgur.com/0rYpN.gif)](https://i.stack.imgur.com/0rYpN.gif). [The gif goes a little nuts on the entered keystrokes, it Is only `Shift`-`Alt`-`A`.]
Global instance of a class or static class with initialization method I have a class that handles the localization of my application. My goal is for the class to be usable in the entire application, so I made it static. This allows me to call the code below anywhere in my application.

```
Localizer.GetString(/* Key to localize */)
```

The method uses several fields in the Localizer class. To make sure these fields are set, an `Init` method was added to initialize the Localizer. If the user of my class forgets to call `Init` at, for example, the start-up of the application, exceptions will occur because the fields are not set. One solution I'm thinking about is making the Localizer class not static, adding a constructor that sets the fields, and initializing the class in a global `static class` in my application, something like this:

```
public static class Globals
{
    public static Localizer Localize = new Localizer(/* Field arguments here */);
}
```

Now I'm not sure what the best approach is. Either 1. a static Localizer, where the user has to make sure `Init` is called before using the class, or 2. an instantiated Localizer in a global static class in the application. One more note: the user has no access to the source of the class.
An alternative to what you're doing would be to use **dependency injection**. Dependency injection is a super fancy name for **passing stuff into things instead of having those things access that stuff directly**. I know that's a vague statement - but if your class takes an argument for a field instead of creating the value itself - it's already using dependency injection. So, let's say you have a `Localizer` class. It has *no* static methods and there is *no* static instance of a localizer just being global. You create a `Localizer` instance specialized to your needs once when the app boots up:

```
var localizer = new Localizer(...);
```

Then, whenever a component needs the localizer - you pass it in:

```
var component = new MyComponent(localizer); // we pass the values in
```

This makes the localizer easily modifiable, makes the classes easy to test in isolation, and makes it easy to configure different components differently (what if you want the help page to always be in English all of a sudden? Or some other *specific page*?). If it's still unclear, [here's a nice talk by Miško Hevery about not looking for things](https://www.youtube.com/watch?v=RlfLCWKxHJ0). There is also [a nice Martin Fowler article](http://martinfowler.com/articles/injection.html) about it, but it's probably a bit harder to follow. The only tedious thing here is that you need to pass it around every time. I don't mind the explicitness, but a lot of people prefer using dependency injection containers to manage the overhead.
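The pattern is language-agnostic; here it is as a tiny runnable sketch in Python (class and key names are invented for illustration):

```python
# Constructor injection: the component receives its localizer instead of
# reaching for a global/static one, so it can be tested with any localizer.
class Localizer:
    def __init__(self, strings):
        self._strings = strings  # fields set once, in the constructor

    def get_string(self, key):
        return self._strings.get(key, key)

class GreetingComponent:
    def __init__(self, localizer):  # dependency passed in, not looked up
        self._localizer = localizer

    def render(self):
        return self._localizer.get_string("greeting")

localizer = Localizer({"greeting": "Bonjour"})  # created once at startup
component = GreetingComponent(localizer)
print(component.render())  # Bonjour
```

Because nothing is static, there is no `Init`-before-use trap: a `Localizer` simply cannot exist without its fields being set.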
vscode debug ES6 application I have VSCode 0.5.0. I set the compilerOptions flag to "ES6" and the editor started recognizing my ES6 code as correct. I have babel installed. My Mocha tests use the babel compilers and my tests pass. My app runs from the command line with no problems when I launch it with babel-node . When I debug the app from within VSCode, it starts up without the ES6 support, and the app fails for ES6 syntax issues. Are there debug settings that I missed turning on?
By default VSCode launches node with just a `--debug-brk` option. This is not enough to enable ES6 support. If you can find out what options `babel-node` passes to node, you could specify the same options in the VSCode launch config (through the `runtimeArgs` attribute). But this does not solve the issue that babel-node transpiles your ES6 code before running it. Alternatively, you could try to set the `runtimeExecutable` in your launch config to `babel-node`. This approach works with other node wrappers, but I haven't verified that it works with babel-node. A third option (which should work) is to use the attach mode of VSCode: for this, launch babel-node from the command line with the `--debug` option. It should print a port number. Then create an 'attach' launch config in VSCode with that port.
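For reference, an attach configuration in `.vscode/launch.json` might look like the sketch below. The exact attribute set depends on your VSCode version (older builds may differ), and `5858` is only node's historical default debug port, so use whatever port babel-node actually prints:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach to babel-node",
            "type": "node",
            "request": "attach",
            "port": 5858
        }
    ]
}
```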
loadView getting called infinite times I have two ViewControllers in my app, `ViewController1.m` and `ViewController2.m`. In `AppDelegate` I have this code.

```
self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
// Override point for customization after application launch.
if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone) {
    self.viewController = [[ViewController1 alloc] initWithNibName:@"ViewController_iPhone" bundle:nil];
} else {
    self.viewController = [[ViewController1 alloc] initWithNibName:@"ViewController_iPad" bundle:nil];
}
self.window.rootViewController = self.viewController;
[self.window makeKeyAndVisible];
```

In ViewController1.m I have added a button, and on button click I am displaying another view controller, `ViewController2.m`, like this:

```
ViewController2 * obj = [[ViewController2 alloc] initWithNibName:nil bundle:nil];
[self.view addSubview:obj.view];
```

In the `loadView` of ViewController2.m I am adding another button like this:

```
NSLog(@"\n Load view called");
UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
[button addTarget:self action:@selector(onButtonClicked:) forControlEvents:UIControlEventTouchUpInside];
[button setTitle:@"Back to previous view" forState:UIControlStateNormal];
button.frame = CGRectMake(80.0, 210.0, 160.0, 40.0);
[self.view addSubview:button];
```

When I run my app, on clicking the button present in ViewController1.m the app hangs and `loadView` of ViewController2.m starts getting called infinitely. I don't know the reason behind this problem; I just wanted to load another ViewController on a button click, and I am not using any Navigation Controller. Can someone point out the reason behind this issue?
Don't do this in `loadView`. Instead, move your code to `viewDidLoad`. The problem is that you're accessing `self.view` within `loadView`, which basically calls `loadView` again, since the initial call to `loadView` never returned. Sorry for the tongue twister... `loadView` is called automatically when the view hasn't yet been instantiated. Only when it returns is the view initialization complete. If it hasn't returned and you try to access the `view` property, it'll call it again. And in your case, it's recursive because you are still in the `loadView` method. From the [docs](http://developer.apple.com/library/ios/#documentation/UIKit/Reference/UIViewController_Class/Reference/Reference.html):

> 
> The view controller calls this method when its view property is requested but is currently nil. This method loads or creates a view and assigns it to the view property.
> 
> 
How to get row value inside updateItem() of CellFactory ``` ageColumn.setCellFactory(param -> new TableCell<Person, String>(){ @Override protected void updateItem(String item, boolean empty) { param.getCellFactory(). super.updateItem(item, empty); setText(empty ? null : String.valueOf(item)); if(person.getName.equals("MATEUS")) { setStyle("-fx-background-color: red;"); } } }); ``` How to get this "Person" which is the row value from the Table? I can only get the value from the cell, but not the entire object.
You can do

```
Person person = getTableView().getItems().get(getIndex());
```

You can also do

```
Person person = (Person) getTableRow().getItem();
```

but this is less desirable (in my opinion) because `getTableRow()` returns a raw type, and consequently it requires the unchecked downcast. Obviously either of these only works if `empty` is `false`, so they should be inside a check for that:

```
ageColumn.setCellFactory(param -> new TableCell<Person, String>(){
    @Override
    protected void updateItem(String item, boolean empty) {
        super.updateItem(item, empty);
        if (empty) {
            setText(null);
            setStyle("");
        } else {
            setText(item);
            Person person = getTableView().getItems().get(getIndex());
            if (person.getName().equals("MATEUS")) {
                setStyle("-fx-background-color: red;");
            } else {
                setStyle("");
            }
        }
    }
});
```
TempData are always empty I want to use TempData to store messages between Post and followed redirect but TempData are always empty. I have BaseContoller offering some infrastructure for passing TempData. Simplified code looks like: ``` public abstract class BaseController : Controller { public const string AuditMessagesKey = "AuditMessages"; private List<InformationMessage> _informationMessages = new List<InformationMessage>(); protected BaseController() { // I also tried this in overriden Initialize ViewData[AuditMessagesKey] = GetAuditMessages(); } protected void AddAuditMessage(InformationMessage message) { if (message == null) return; _informationMessages.Add(message); } protected override void OnResultExecuting(ResultExecutingContext filterContext) { base.OnResultExecuting(filterContext); if (filterContext.Result is RedirectToRouteResult) { // I see that messages are stored into TempData TempData[AuditMessagesKey] = _informationMessages; // This also doesn't help // TempData.Keep(AuditMessagesKey); } } private ICollection<InformationMessage> GetAuditMessages() { // TempData are always empty here var messages = TempData[AuditMessagesKey] as List<InformationMessage>; if (messages == null) { messages = new List<InformationMessage>(); } return messages; } } ``` Action method looks like: ``` [HttpPost] public ActionResult CancelEdit(RequestSaveModel model) { AddAuditMessage(new InformationMessage { Message = String.Format(Messages.RequestEditationCanceled, model.Title), Severity = MessageSeverity.Information }); return RedirectToAction("Detail", new { Id = model.Id}); } ``` Application is tested on VS Development web server. There are no Ajax calls and I removed all Html.RenderAction calls from my master page. I can see that TempData are accessed only once per request in GetAuditedMessages and stored only once in OnResultExecuting. Nothing overwrites the data. Session state is allowed. The code is little bit simplified. 
We are also using an antiforgery token and custom filters for authorization and for action selection, but they should not affect TempData behavior. I don't understand it. I used TempData before in a test application and it worked fine.
The problem I see in your code is that you are trying to retrieve the data from TempData in the controller's constructor - which is before it is available. Move the call to `GetAuditMessages()` into an OnActionExecuting method, and it will be accessible. ``` public abstract class BaseController : Controller { public const string AuditMessagesKey = "AuditMessages"; private List<InformationMessage> _informationMessages = new List<InformationMessage>(); protected BaseController() { // TempData is not available yet } protected override void OnActionExecuting(ActionExecutingContext filterContext) { ViewData[AuditMessagesKey] = GetAuditMessages(); base.OnActionExecuting(filterContext); } protected void AddAuditMessage(InformationMessage message) { if (message == null) return; _informationMessages.Add(message); } protected override void OnResultExecuting(ResultExecutingContext filterContext) { base.OnResultExecuting(filterContext); if (filterContext.Result is RedirectToRouteResult) { // I see that messages are stored into TempData TempData[AuditMessagesKey] = _informationMessages; // This also doesn't help // TempData.Keep(AuditMessagesKey); } } private ICollection<InformationMessage> GetAuditMessages() { var messages = TempData[AuditMessagesKey] as List<InformationMessage>; if (messages == null) { messages = new List<InformationMessage>(); } return messages; } } ```
iOS 8 UITableView first row has wrong height I'm working on an app where I face a strange issue. I've created a UITableViewController in the storyboard and added a prototype cell. In this cell, I've added a UILabel element, and this UILabel takes up the whole cell. I've set it up with Auto Layout and added left, right, top and bottom constraints. The UILabel contains some text. Now in my code, I initialize the rowHeight and estimatedRowHeight of the table view:

```
override func viewDidLoad() {
    super.viewDidLoad()

    self.tableView.rowHeight = UITableViewAutomaticDimension
    self.tableView.estimatedRowHeight = 50
}
```

And I create the cell as follows:

```
override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    var cell : UITableViewCell? = tableView.dequeueReusableCellWithIdentifier("HelpCell") as? UITableViewCell
    if(cell == nil) {
        cell = UITableViewCell(style: .Default, reuseIdentifier: "HelpCell")
    }
    return cell!
}
```

I return two rows in my table view. Here comes my problem: the height of the first row is way too large. It appears that the second, third, etc. rows all have the correct height. I really don't understand why this is the case. Can someone help me with this?
I had a problem where the cells' heights were not correct on the first load, but after scrolling up and down the cells' heights were fixed. I tried all of the different 'fixes' for this problem, and eventually found that calling the following functions after the initial call to `self.tableView.reloadData()` solved it:

```
self.tableView.reloadData()

// Bug in 8.0+ where we need to call the following three methods in order to get the
// tableView to correctly size the tableViewCells on the initial load.
self.tableView.setNeedsLayout()
self.tableView.layoutIfNeeded()

self.tableView.reloadData()
```

Only do these extra layout calls after the initial load. I found this very helpful information here: <https://github.com/smileyborg/TableViewCellWithAutoLayoutiOS8/issues/10>

**Update:** Sometimes you might have to also completely configure your cell in `heightForRowAtIndexPath` and then return the calculated cell height. Check out this link for a good example of that, <http://www.raywenderlich.com/73602/dynamic-table-view-cell-height-auto-layout>, specifically the part on `heightForRowAtIndexPath`.

**Update 2:** I've also found it **VERY** beneficial to override `estimatedHeightForRowAtIndexPath` and supply somewhat accurate row height estimates. This is very helpful if you have a `UITableView` with cells that can be all kinds of different heights. Here's a contrived sample implementation of `estimatedHeightForRowAtIndexPath`:

```
public override func tableView(tableView: UITableView, estimatedHeightForRowAtIndexPath indexPath: NSIndexPath) -> CGFloat {
    let cell = tableView.cellForRowAtIndexPath(indexPath) as! MyCell
    switch cell.type {
    case .Small:
        return kSmallHeight
    case .Medium:
        return kMediumHeight
    case .Large:
        return kLargeHeight
    default:
        break
    }

    return UITableViewAutomaticDimension
}
```

**Update 3:** `UITableViewAutomaticDimension` has been fixed for iOS 9 (woo-hoo!). So your cells *should* automatically size themselves without you having to calculate the cells' height manually.
What makes facebook pages load so fast I am also a php programmer. I have always wondered how Facebook pages load so fast with so many users connecting to them at any instant. My site never reached such speed even when the traffic was low. What do they do so well on their sites? What should I take care of if I want to make a website even half the speed of Facebook? I am referring to php, mysql and jquery websites.
Take a look at this reddit post by someone who interned at Facebook: <http://www.reddit.com/r/programming/comments/nav19/facebook_releases_hhvm_60_percent_faster_than_its/c37pitt> Most relevant parts of the post: > > - Database calls have cache keys. Memcached short-circuits the database hits. Facebook has a > 99.9% hit rate. Occasionally, there can be an issue which causes the hit rate to decrease to 95%. This probably doesn't happen much, but when it did, it made the whole site unusably slow. > - Stupid database calls that fetch a lot are generally better than smart database calls that fetch minimally. Why? Because most of the stupid database calls are the same - and therefore very cacheable - and using server resources to filter stuff out is therefore much more economical than using database resources. Or, in other words, it lessens the complexity of the database calls and makes fewer at the expense of more data per database call, and more of the load shifted to memcached. > > >
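The memcached short-circuit described in that quote is the classic cache-aside pattern. Here is a minimal Java sketch of it (all names hypothetical; a plain `HashMap` stands in for memcached):

```java
import java.util.HashMap;
import java.util.Map;

public class CacheAsideDemo {
    // Stand-in for memcached: an in-memory key/value store.
    static final Map<String, String> cache = new HashMap<>();
    static int databaseHits = 0; // counts how often we fall through to the DB

    // Stand-in for a slow database query.
    static String queryDatabase(String userId) {
        databaseHits++;
        return "profile-of-" + userId;
    }

    // Cache-aside: check the cache first, hit the database only on a miss,
    // then populate the cache so the next request short-circuits the DB.
    static String getProfile(String userId) {
        String key = "profile:" + userId;
        String cached = cache.get(key);
        if (cached != null) {
            return cached;
        }
        String fresh = queryDatabase(userId);
        cache.put(key, fresh);
        return fresh;
    }

    public static void main(String[] args) {
        getProfile("alice"); // miss: goes to the database
        getProfile("alice"); // hit: served from the cache
        System.out.println("database hits: " + databaseHits); // prints "database hits: 1"
    }
}
```

With a 99.9% hit rate, virtually every call returns from the first branch and never touches the database, which is exactly the short-circuit the post describes.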
What is the equivalent of GL\_TRIANGLE\_STRIP in Metal for iOS? Trying to draw a strip of triangles as illustrated here: ![GL_TRIANGLE_STRIP](https://i.stack.imgur.com/NCxQz.png) Completed the objc.io [tutorial](http://www.objc.io/issues/18-games/metal/), where they draw a quad using two triangles. The triangles are disconnected and drawn individually, meaning I need to specify 6 vertices instead of 4. ``` // Interleaved vertex data X,Y,Z,W, R,G,B,A static float vertexData[] = { // First triangle: From bottom right, clockwise 0.5, -0.5, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, // bottom right -0.5, -0.5, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, // bottom left -0.5, 0.5, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, // top left // Second triangle: From top right, clockwise 0.5, 0.5, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, // top right 0.5, -0.5, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, // bottom right -0.5, 0.5, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, // top left }; ``` Is there a way to draw a strip as in OpenGL ES without duplicating vertices?
The short answer is this:

```
renderEncoder.drawPrimitives(MTLPrimitiveType.TriangleStrip, vertexStart: 0, vertexCount: 6)
```

It's the equivalent of `GL_TRIANGLE_STRIP`. Also, you may want to use indexed drawing; then you will only load each vertex once, and after that you will use an array of vertex indices to specify the draw order. That way you will save data by not specifying duplicated vertices. Here's a call for indexed drawing.

```
renderEncoder.drawIndexedPrimitives(submesh.primitiveType,
                                    indexCount: submesh.indexCount,
                                    indexType: submesh.indexType,
                                    indexBuffer: submesh.indexBuffer.buffer,
                                    indexBufferOffset: submesh.indexBuffer.offset)
```

Cheers!
Angular 6 - Load JSON from local I am trying to load a local JSON file in two ways. This is my json file:

```
{
  "imgsesion": "fa_closesesion.png",
  "texthome": "volver a la home",
  "logo": "fa_logo.png",
  "menu": {
    "background": "orange",
    "link1": "ESCRITOR",
    "link2": "MÚSICO",
    "link3": "AYUDA ADMIN",
    "submenu": {
      "link1": {
        "text1": "novelas",
        "text2": "obras de teatro"
      },
      "link2": {
        "text1": "compositor",
        "text2": "intérprete"
      }
    }
  }
}
```

- Way 1: Using Http

This is my service file (general.service.ts):

```
getContentJSON() {
  return this.http.get('assets/json/general.json')
    .map(response => response.json());
}
```

This way works OK, but it shows the following error in the web browser console:

```
ERROR TypeError: Cannot read property 'menu' of undefined
```

- Way 2: Using HttpClient

This is my service file (general.service.ts):

```
getContentJSON() {
  return this.httpClient.get("assets/json/general.json");
}
```

It does not work because it cannot find the general.json file; it goes into the error handler and gives me a 404 error. This is the component file (app.component.ts):

```
export class AppComponent implements OnInit {
  contentGeneral: any;

  ngOnInit() {
    this.getContentJSON();
  }

  getContentJSON() {
    this.generalService.getContentJSON().subscribe(data => {
      this.contentGeneral = data;
    }, // Bind to view
    err => {
      // Log errors if any
      console.log('error: ', err);
    });
  }
}
```

This is the template file (`app.component.html`):

```
<a href="#" routerLink="/home" class="linkHome">{{contentGeneral.texthome}}</a>

<div class="submenu" *ngIf="linkWrite.isActive || isSubMenuWriter">
  <span class="d-block text-right small">{{contentGeneral.menu.submenu.link1.text1}}</span>
  <span class="d-block text-right small">{{contentGeneral.menu.submenu.link1.text2}}</span>
</div>
```

This is my current error: in app.component.ts I added the import:

```
import * as data_json from './assets/json/general.json';
```

But when I launch ng serve it gives me the following error: [![enter image description here](https://i.stack.imgur.com/gbvp0.png)](https://i.stack.imgur.com/gbvp0.png) How could I resolve it?
The simplest solution:

```
import myJSON from "./myJson"
```

**Important update!** I found that this method stops working in newer Angular versions, because of this error:

> 
> ERROR in src/app/app.weather.service.ts(2,25): error TS2732: Cannot find module './data.json'. Consider using '--resolveJsonModule' to import module with '.json' extension
> 
> 

To make it work, go to tsconfig.json and add these two options inside compilerOptions ( tsconfig.json ):

```
"resolveJsonModule": true,
"esModuleInterop": true,
```

After the change, re-run `ng serve`. If you only use the first option, you can get an error like this:

> 
> ERROR in src/app/app.weather.service.ts(2,8): error TS1192: Module '"....app/data/data.json"' has no default export.
> 
> 

(I found this very good answer here (<https://www.angularjswiki.com/angular/how-to-read-local-json-files-in-angular/>))
How can I test a class which contains imported async methods in it? This is my first time working with tests and I get the trick to test UI components. Now I am attempting to test a class which has some static methods in it. It contains parameters too. See the class: ``` import UserInfoModel from '../models/UserInfo.model'; import ApiClient from './apiClient'; import ApiNormalizer from './apiNormalizer'; import Article from '../models/Article.model'; import Notification from '../models/Notification.model'; import Content from '../models/Link.model'; export interface ResponseData { [key: string]: any; } export default class ApiService { static makeApiCall( url: string, normalizeCallback: (d: ResponseData) => ResponseData | null, callback: (d: any) => any ) { return ApiClient.get(url) .then(res => { callback(normalizeCallback(res.data)); }) .catch(error => { console.error(error); }); } static getProfile(callback: (a: UserInfoModel) => void) { return ApiService.makeApiCall(`profile`, ApiNormalizer.normalizeProfile, callback); } } ``` I already created a small test which is passing but I am not really sure about what I am doing. ``` // @ts-ignore import moxios from 'moxios'; import axios from 'axios'; import { baseURL } from './apiClient'; import { dummyUserInfo } from './../models/UserInfo.model'; describe('apiService', () => { let axiosInstance: any; beforeEach(() => { axiosInstance = axios.create(); moxios.install(); }); afterEach(() => { moxios.uninstall(); }); it('should perform get profile call', done => { moxios.stubRequest(`${baseURL.DEV}profile`, { status: 200, response: { _user: dummyUserInfo } }); axiosInstance .get(`${baseURL.DEV}profile`) .then((res: any) => { expect(res.status).toEqual(200); expect(res.data._user).toEqual(dummyUserInfo); }) .finally(done); }); }); ``` I am using moxios to test the axios stuff -> <https://github.com/axios/moxios> So which could be the proper way to test this class with its methods?
### Introduction

[Unit tests](https://books.google.com.ua/books/about/Unit_Test_Frameworks.html?id=WvFuyuc5ZAEC&redir_esc=y) are automated tests written and run by software developers to ensure that a section of an application meets its design and behaves as intended. If we are talking about object-oriented programming, a unit is often an entire interface, such as a class, but it could be an individual method. The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. So if we consider your `ApiService.makeApiCall` function:

```
static makeApiCall(
  url: string,
  normalizeCallback: (d: ResponseData) => ResponseData | null,
  callback: (d: any) => any
) {
  return ApiClient.get(url)
    .then((res: any) => {
      callback(normalizeCallback(res.data));
    })
    .catch(error => {
      console.error(error);
    });
}
```

we can see that it calls one external resource, `ApiClient.get`, which should be [mocked](https://stackoverflow.com/questions/2665812/what-is-mocking). It's not entirely correct to mock the HTTP requests in this case, because `ApiService` doesn't utilize them directly, and your unit would become a bit broader than intended.

### Mocking

The Jest framework provides a great mechanism for [mocking](https://jestjs.io/docs/en/es6-class-mocks), and the example by Omair Nabiel is correct. However, I prefer not only to stub a function with predefined data but also to check that the stubbed function was called the expected number of times (that is, to use the real nature of mocks).
So the full mock example would look as follows:

```
/**
 * Importing `ApiClient` directly in order to reference it later
 */
import ApiClient from './apiClient';

/**
 * Mocking `ApiClient` with some fake data provider
 */
const mockData = {};
jest.mock('./apiClient', function () {
  return {
    get: jest.fn((url: string) => {
      return Promise.resolve({data: mockData});
    })
  }
});
```

This allows adding additional assertions to your test example:

```
it('should call api client method', () => {
  ApiService.makeApiCall('test url', (data) => data, (res) => res);

  /**
   * Checking that `ApiClient.get` is called the desired number of times
   * with correct arguments
   */
  expect(ApiClient.get).toBeCalledTimes(1);
  expect(ApiClient.get).toBeCalledWith('test url');
});
```

### Positive testing

Now that we have figured out what and how to mock, let's find out what we should test. Good tests should cover [two situations](https://stackoverflow.com/questions/8162423/what-is-positive-test-and-negative-test-in-unit-testing): **Positive Testing** - testing the system by giving valid data - and **Negative Testing** - testing the system by giving invalid data. In my humble opinion, a third branch should be added: **Boundary Testing** - tests which focus on the boundary or limit conditions of the software being tested. Please refer to this [Glossary](http://www.aptest.com/glossary.html) if you are interested in other types of tests.
The positive test flow for the `makeApiCall` method should call the `normalizeCallback` and `callback` methods in sequence, and we can write this test as follows (however, there is more than one way to skin a cat):

```
it('should call callbacks consequently', (done) => {
  const firstCallback = jest.fn((data: any) => {
    return data;
  });
  const secondCallback = jest.fn((data: any) => {
    return data;
  });

  ApiService.makeApiCall('test url', firstCallback, secondCallback)
    .then(() => {
      expect(firstCallback).toBeCalledTimes(1);
      expect(firstCallback).toBeCalledWith(mockData);
      expect(secondCallback).toBeCalledTimes(1);
      expect(secondCallback).toBeCalledWith(firstCallback(mockData));
      done();
    });
});
```

Please pay attention to several things in this test:

- I'm using the `done` callback to let Jest know when the test has finished, because of the asynchronous nature of this test
- I'm using the `mockData` variable - the data that `ApiClient.get` is mocked with - so I can check that the callback got the correct value
- `mockData` and similar variables should start with `mock`. Otherwise Jest will not allow them to be used outside the mock [scope](https://jestjs.io/docs/en/es6-class-mocks#calling-jestmock-docs-en-jest-object-jestmockmodulename-factory-options-with-the-module-factory-parameter)

### Negative testing

The negative test looks pretty similar. The `ApiClient.get` method should throw an error, and `ApiService` should handle it and log it to the `console`. Additionally, I'm checking that none of the callbacks was called.
```
import ApiService from './api.service';

const mockError = {message: 'Smth Bad Happened'};
jest.mock('./apiClient', function () {
  return {
    get: jest.fn().mockImplementation((url: string) => {
      console.log('error result');
      return Promise.reject(mockError);
    })
  }
});

describe('t1', () => {
  it('should handle error', (done) => {
    console.error = jest.fn();
    const firstCallback = jest.fn((data: any) => {
      return data;
    });
    const secondCallback = jest.fn((data: any) => {
      return data;
    });

    ApiService.makeApiCall('test url', firstCallback, secondCallback)
      .then(() => {
        expect(firstCallback).toBeCalledTimes(0);
        expect(secondCallback).toBeCalledTimes(0);
        expect(console.error).toBeCalledTimes(1);
        expect(console.error).toBeCalledWith(mockError);
        done();
      });
  });
});
```

### Boundary testing

Boundary testing could be argued about in your case, but since (according to your type definition `normalizeCallback: (d: ResponseData) => ResponseData | null`) the first callback can return `null`, it could be good practice to check that the `null` is successfully transferred to the second callback without any errors or exceptions. We can just rewrite our second test a bit:

```
it('should call callbacks consequently', (done) => {
  const firstCallback = jest.fn((data: any) => {
    return null;
  });
  const secondCallback = jest.fn((data: any) => {
    return data;
  });

  ApiService.makeApiCall('test url', firstCallback, secondCallback)
    .then(() => {
      expect(firstCallback).toBeCalledTimes(1);
      expect(firstCallback).toBeCalledWith(mockData);
      expect(secondCallback).toBeCalledTimes(1);
      done();
    });
});
```

### Testing asynchronous code

Regarding testing asynchronous code, you can read the comprehensive documentation [here](https://jestjs.io/docs/en/asynchronous.html). The main idea is that when you have code that runs asynchronously, Jest needs to know when the code it is testing has completed before it can move on to another test. Jest provides three ways to do this:

1.
By means of a callback ``` it('the data is peanut butter', done => { function callback(data) { expect(data).toBe('peanut butter'); done(); } fetchData(callback); }); ``` Jest will wait until the done callback is called before finishing the test. If `done()` is never called, the test will fail, which is what you want to happen. 2. By means of promises If your code uses promises, there is a simpler way to handle asynchronous tests. Just return a promise from your test, and Jest will wait for that promise to resolve. If the promise is rejected, the test will automatically fail. 3. `async/await` syntax You can use `async` and `await` in your tests. To write an async test, just use the `async` keyword in front of the function passed to test. ``` it('the data is peanut butter', async () => { const data = await fetchData(); expect(data).toBe('peanut butter'); }); ``` ### Example Here you can find a ready to use example of your code <https://github.com/SergeyMell/jest-experiments> Please, let me know if something left unclear for you. ### UPDATE (29.08.2019) Regarding your question > > Hi, what can I do to mock ./apiClient for success and error in the same file? > > > According to the [documentation](https://jestjs.io/docs/en/manual-mocks#using-with-es-module-imports) Jest will automatically hoist `jest.mock` calls to the top of the module (before any imports). It seems that you can do `setMock` or `doMock` instead, however, there are [issues](https://github.com/facebook/jest/issues/5584) with mocking this way that developers face from time to time. They can be overridden by using `require` instead of `import` and other hacks (see [this article](https://medium.com/trabe/mocking-different-values-for-the-same-module-using-jest-a7b8d358d78b)) however I don't like this way. 
The correct way for me in this case is to split the mock definition and its implementation, so you state that this module will be mocked like this:

```
jest.mock('./apiClient', function () {
  return {
    get: jest.fn()
  }
});
```

But the implementation of the mocking function differs depending on the scope of the tests:

```
describe('api service success flow', () => {
  beforeAll(() => {
    //@ts-ignore
    ApiClient.get.mockImplementation((url: string) => {
      return Promise.resolve({data: mockData});
    })
  });
  ...
});

describe('api service error flow', () => {
  beforeAll(() => {
    //@ts-ignore
    ApiClient.get.mockImplementation((url: string) => {
      console.log('error result');
      return Promise.reject(mockError);
    })
  });
  ...
});
```

This will allow you to store all the `api service` related flows in a single file, which is what you expected, as far as I understand. I've updated my [github example](https://github.com/SergeyMell/jest-experiments) with `api.spec.ts`, which implements all of the above. Please take a look.
Which is better to use: request.getParameter() or @RequestParam? Which way is considered better software engineering practice in Spring:

1) using the Spring annotation @RequestParam

```
@RequestMapping(value = "/doSomeThing", method = RequestMethod.GET)
@ResponseBody
public boolean doSomeThing(@RequestParam("name") String name) {
    boolean success = false;
    // do the logic
    return success;
}
```

2) using the request method getParameter

```
@RequestMapping(value = "/doSomeThing2", method = RequestMethod.GET)
@ResponseBody
public boolean doSomeThing2(HttpServletRequest request) {
    boolean success = false;
    String name = request.getParameter("name");
    // do the logic
    return success;
}
```
I would use the `@RequestParam` annotation because that way your code is **more readable** and **easier to unit-test**. **Why more readable?** Because it is clear that you depend on the HTTP API only for that single parameter. `HttpServletRequest` is a big object; you could do many things with it. And in this case you are using only a very small subset of that functionality. Code is more readable when the method signature is as specific as possible. Having a parameter of type `HttpServletRequest` is less specific than a parameter of type `String`. It is in line with the **Interface segregation principle** (no client should be forced to depend on methods it does not use). **Why easier to test?** Using `@RequestParam`, you do not have to mock anything! If you have `HttpServletRequest` as a parameter, then for a unit test you have to carefully mock that object - carefully mocking every invocation of getParameter.
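To make the testability argument concrete, here is a small plain-Java sketch (no Spring involved; `Request` is a hypothetical one-method stand-in for `HttpServletRequest`): the handler that takes a `String` can be tested with a bare literal, while the handler that takes the request object forces you to build a fake request first.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for HttpServletRequest, reduced to one method
// just to illustrate the mocking burden.
interface Request {
    String getParameter(String name);
}

public class ParamStyleDemo {
    // Style 1: the method depends only on the value it actually needs,
    // which is what @RequestParam binding gives you.
    static String doSomeThing(String name) {
        return "hello " + name;
    }

    // Style 2: the method depends on the whole request object.
    static String doSomeThing2(Request request) {
        return "hello " + request.getParameter("name");
    }

    public static void main(String[] args) {
        // Testing style 1: just pass the value.
        System.out.println(doSomeThing("bob")); // prints "hello bob"

        // Testing style 2: first build a fake request object...
        Map<String, String> params = new HashMap<>();
        params.put("name", "bob");
        Request fake = params::get;
        System.out.println(doSomeThing2(fake)); // prints "hello bob"
    }
}
```

Even with a trivial one-method interface, style 2 needs setup code before the assertion; the real `HttpServletRequest` has dozens of methods, which is where mocking gets tedious.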
Difference between x++ and ++x?

> 
> **Possible Duplicate:**
> [Is there a difference between x++ and ++x in java?](https://stackoverflow.com/questions/1094872/is-there-a-difference-between-x-and-x-in-java)
> 
> 

I am reading the official Java tutorial and I don't get the difference between postfix and prefix (++x vs x++). Could someone explain?
`++x`: increment `x`; the value of the overall expression is the value *after* the increment `x++`: increment `x`; the value of the overall expression is the value *before* the increment Consider these two sections: ``` int x = 0; System.out.println(x++); // Prints 0 // x is now 1 int y = 0; System.out.println(++y); // Prints 1 // y is now 1 ``` I personally try to avoid using them as expressions within a larger statement - I prefer standalone code, like this: ``` int x = 0; System.out.println(x); // Prints 0 x++; // x is now 1 int y = 0; y++; System.out.println(y); // Prints 1 // y is now 1 ``` Here I believe everyone would be able to work out what's printed and the final values of `x` and `y` without scratching their heads too much. There are definitely times when it's useful to have pre/post-increment available within an expression, but think of readability first.
How to Animate Addition or Removal of Android ListView Rows

In iOS, there is a very easy and powerful facility to animate the addition and removal of UITableView rows, [here's a clip from a youtube video](http://www.youtube.com/watch?v=VhSWE6_ieyA#t=1m10s) showing the default animation. Note how the surrounding rows collapse onto the deleted row. This animation helps users keep track of what changed in a list and where in the list they were looking when the data changed.

Since I've been developing on Android I've found no equivalent facility to animate individual rows in a [ListView](http://developer.android.com/reference/android/widget/ListView.html). Calling [`notifyDataSetChanged()`](http://developer.android.com/reference/android/widget/BaseAdapter.html#notifyDataSetChanged()) on my Adapter causes the ListView to immediately update its content with new information. I'd like to show a simple animation of a new row pushing in or sliding out when the data changes, but I can't find any documented way to do this.

It looks like [LayoutAnimationController](http://developer.android.com/reference/android/view/animation/LayoutAnimationController.html) might hold a key to getting this to work, but when I set a LayoutAnimationController on my ListView (similar to [ApiDemo's LayoutAnimation2](http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/view/LayoutAnimation2.html)) and remove elements from my adapter after the list has displayed, the elements disappear immediately instead of getting animated out.
I've also tried things like the following to animate an individual item when it is removed: ``` @Override protected void onListItemClick(ListView l, View v, final int position, long id) { Animation animation = new ScaleAnimation(1, 1, 1, 0); animation.setDuration(100); getListView().getChildAt(position).startAnimation(animation); l.postDelayed(new Runnable() { public void run() { mStringList.remove(position); mAdapter.notifyDataSetChanged(); } }, 100); } ``` However, the rows surrounding the animated row don't move position until they jump to their new positions when `notifyDataSetChanged()` is called. It appears ListView doesn't update its layout once its elements have been placed. While writing my own implementation/fork of ListView has crossed my mind, this seems like something that shouldn't be so difficult. Thanks!
This animates the clicked row out to the right, then removes the underlying data and refreshes the list once the animation finishes:

```
Animation anim = AnimationUtils.loadAnimation(
    GoTransitApp.this, android.R.anim.slide_out_right
);
anim.setDuration(500);
listView.getChildAt(index).startAnimation(anim);

new Handler().postDelayed(new Runnable() {
    public void run() {
        FavouritesManager.getInstance().remove(
            FavouritesManager.getInstance().getTripManagerAtIndex(index)
        );
        populateList();
        adapter.notifyDataSetChanged();
    }
}, anim.getDuration());
```

For top-to-bottom animation, use:

```
<set xmlns:android="http://schemas.android.com/apk/res/android">
    <translate android:fromYDelta="20%p" android:toYDelta="-20"
        android:duration="@android:integer/config_mediumAnimTime"/>
    <alpha android:fromAlpha="0.0" android:toAlpha="1.0"
        android:duration="@android:integer/config_mediumAnimTime" />
</set>
```
How to maintain login status in a PWA initially loaded via Safari 14/iOS 14? Our requirement is to have our users login to an app via a URL and, having added the app to their homescreen as a PWA, maintain that logged-in status, so that *a second login to the installed PWA is not required*. This is certainly possible under Android/Chrome where the logged-in status can be initially stored and accessed by the PWA via a variety of mechanisms (including cookie, IndexedDB, cache). However, it now appears to us that a PWA under iOS 14/iPadOS 14 is tightly sandboxed and Safari has no way of passing logged-in status to it. Over the years, and through the various versions of iOS, a variety of sharing mechanisms have been offered - and rendered obsolete in a subsequent version. These include: 1. the cache, accessed via a fake endpoint ([ref](https://www.netguru.com/codestories/how-to-share-session-cookie-or-state-between-pwa-in-standalone-mode-and-safari-on-ios)) 2. a session cookie ([ref](https://stackoverflow.com/questions/9998900/maintain-php-session-in-web-app-on-iphone/14594908#14594908)) A mechanism that doesn't rely on browser-shared storage is the addition of a server-generated token to the URL ([ref](https://stackoverflow.com/questions/3813599/iphone-bookmark-to-homescreen-removes-cookies-and-session)), ([ref](https://stackoverflow.com/questions/21109615/ios7-safari-saving-to-home-screen-and-persist-token?noredirect=1&lq=1)) - the problem here is that it upsets Android/Chrome, which uses an unmodified `start_url` in the web app manifest. This is an issue which has provoked a number of SO questions over the years (three of them referenced above) and some of them have been answered with solutions that apparently worked under earlier versions of iOS. What we're wanting now is a solution which works under the latest version as well as it works under Android/Chrome. Any offers?
It can be done. Here's how we've succeeded in doing it:

1. When the user initially logs in to the app in the browser, we generate a UID on the server.
2. We pair this UID with the username in a server file (`access.data`).
3. We generate the web app manifest dynamically. In it we set the `start_url` to the index page and append a query string incorporating the UID, e.g. `"start_url": "/<appname>/index.html?accessID=<UID>"`.
4. We create a cookie to verify that the app has been accessed, e.g. `access=granted`.
5. When the user accesses the app as an iOS PWA, the app looks for this cookie and doesn't find it (cunning ;) - we use one of the iOS deficiencies (not sharing cookies between Safari and the PWA) to defeat that same deficiency.
6. The absence of the `access` cookie tells the app to extract the UID from the query string.
7. It sends the UID back to the server, which looks for a match in `access.data`.
8. If the server finds a match, it tells the app that the PWA user is already logged in and there's no need to display the login screen again.

Mission accomplished!

*Note*: Android/Chrome simply ignores the `accessID` in the query string - I was wrong in my question to imply that Android/Chrome requires an unmodified `start_url`.
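The answer doesn't show any server code, but steps 1, 2 and 7 of the flow above can be sketched roughly as follows. This is a minimal illustration only: the `access.data` file name comes from the answer, while the function names and the one-line-per-pair file format are my own assumptions, and a real implementation would also need expiry, locking and hashing.

```python
import secrets

ACCESS_FILE = "access.data"  # file name from the answer; format is assumed

def register_login(username):
    """Steps 1-2: generate a UID and pair it with the username on disk."""
    uid = secrets.token_hex(16)
    with open(ACCESS_FILE, "a") as f:
        f.write(f"{uid} {username}\n")
    return uid

def lookup_uid(uid):
    """Step 7: look for a matching UID; return the paired username or None."""
    try:
        with open(ACCESS_FILE) as f:
            for line in f:
                stored_uid, username = line.split(maxsplit=1)
                if stored_uid == uid:
                    return username.strip()
    except FileNotFoundError:
        pass
    return None
```

The returned UID is what the dynamic manifest would embed in `start_url`, and `lookup_uid` is what the PWA's first request would hit.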
Given two sets of vectors, how do I find the closest vector in the second set for each vector in the first set?

Given: two sets `{S1, S2}` of vectors of dimension `D`. `S1` is represented by an `N*D` matrix and `S2` accordingly by an `M*D` matrix. I am looking for an elegant way to get, for every vector `s1` in `S1`, the nearest neighbour `s2` in `S2` and the corresponding distance. A simple approach would of course be to have two for loops and get

```
dist = norm(s1 - s2);
```

However, there must be a more elegant and efficient way to do this.
Yup. With the mighty power of [`bsxfun`](http://www.mathworks.com/help/matlab/ref/bsxfun.html) and [`permute`](http://www.mathworks.com/help/matlab/ref/permute.html), with a side of [`sum`](http://www.mathworks.com/help/matlab/ref/sum.html), and a dash of [`reshape`](http://www.mathworks.com/help/matlab/ref/reshape.html). This would be the first part, where you calculate the pair-wise distances between a point in `S1` and another point in `S2`: ``` out = reshape(sqrt(sum(bsxfun(@minus, S1, permute(S2, [3 2 1])).^2, 2)), size(S1,1), size(S2,1)); ``` The last thing you need now is to determine the closest vector in `S2` to each of `S1`. That can be done using [`min`](http://www.mathworks.com/help/matlab/ref/min.html): ``` [dist,ind] = min(out, [], 2); ``` `dist` would contain the smallest distance between a point in `S1` with a point in `S2`, and `ind` would tell you which point that was. --- This code looks very intimidating, but let's break it up into pieces. 1. `permute(S2, [3 2 1])`: This takes the matrix `S2`, which is a `M x D` matrix and shuffles the dimensions so that it becomes a `1 x D x M` matrix.... now why would we want to do that? Let's move onto the next part and it'll make more sense. 2. `bsxfun(@minus, S1, ...)`: `bsxfun` stands for **B**inary **S**ingleton E**X**pansion **FUN**ction. What `bsxfun` does is that if you have two inputs where either or both inputs has a singleton dimension, or if either of both inputs has only one dimension which has value of 1, each input is replicated in their singleton dimensions to match the size of the other input, and then an element-wise operation is applied to these inputs together to produce your output. In this case, I want to subtract these two newly formed inputs together. As such, given that `S1` is `N x D`... or technically, this is `N x D x 1`, and given that `S2` is `M x D`, which I permuted so that it becomes `1 x D x M`, we will create a new matrix that is `N x D x M` long. 
The first input will duplicate itself as a 3D matrix where each slice is equal to `S1` and that is `N x D`. `S2` is now a 3D matrix, but it is represented in such a way that each row in the original matrix is a slice in the 3D matrix, where each slice consists of just a single row. This gets duplicated for `N` rows. We now apply the `@minus` operation, and the effect of this is that for each output slice `i` in this new matrix, this gives you the component-wise difference between point `i` in `S2` and all of the points in `S1`. For example, for slice #1, row #1 gives you the component-wise differences between point #1 in `S2` and point #1 in `S1`. Row #2 gives you the component-wise differences between point #1 in `S2` and point #2 in `S1`, and so on.
3. `sum((...).^2, 2)`: We want to find the Euclidean distance between one point and another, so we sum these differences squared over each column independently. This results in a new 3D matrix where each slice contains `N` values, i.e. `N` distances for each of the `M` points. For example, the first slice will give you the distances from point #1 in `S2` to all of the points in `S1`.
4. `out = reshape(..., size(S1,1), size(S2,1));`: We now reshape this so that it becomes an `N x M` matrix where each row and column pair `(i,j)` gives you the distance between point `i` in `S1` and point `j` in `S2`, thus completing the calculations.
5. Doing `[dist,ind] = min(out, [], 2);` determines the smallest distance between each point in `S1` and the points in `S2`. `dist` will give you the smallest distances while `ind` will tell you **which vector it is**. Therefore, for each element in `dist`, it gives you the smallest distance between a point `i` in `S1` and one of the points in `S2`, and `ind` tells you which vector in `S2` that was.

---

We can verify that this gives us correct results by using your proposed approach of looping through each pair of points and calculating the norm.
Let's create `S1` and `S2`: ``` S1 = [1 2 3; 4 5 6; 7 8 9; 10 11 12]; S2 = [-1 -1 -1; 0 9 8]; ``` More neatly displayed: ``` >> S1 S1 = 1 2 3 4 5 6 7 8 9 10 11 12 >> S2 S2 = -1 -1 -1 0 9 8 ``` Using the loop approach, we have this code: ``` out = zeros(size(S1,1), size(S2,1)); for ii = 1 : size(S1,1) for jj = 1 :size(S2,1) out(ii,jj) = norm(S1(ii,:) - S2(jj,:)); end end ``` We get this matrix: ``` >> out out = 5.3852 8.6603 10.4881 6.0000 15.6525 7.1414 20.8327 10.9545 ``` Similarly, if we ran the code I wrote, we also get: ``` >> out = reshape(sqrt(sum(bsxfun(@minus, S1, permute(S2, [3 2 1])).^2, 2)), size(S1,1), size(S2,1)) out = 5.3852 8.6603 10.4881 6.0000 15.6525 7.1414 20.8327 10.9545 ``` To complete the process, let's find the smallest distances and the corresponding vectors: ``` >> [dist,ind] = min(out, [], 2); >> dist dist = 5.3852 6.0000 7.1414 10.9545 >> ind ind = 1 2 2 2 ``` Therefore, for the first vector in `S1`, the closest vector to this in `S2` was first one, with a distance of 5.3852. Similarly, the second vector of `S1`, the closest vector in `S2` was the second one, with a distance of 6. You can repeat this for the other values and see that it's correct.
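For readers who want to sanity-check the result outside MATLAB, the same pairwise-distance-plus-minimum logic can be sketched in plain Python (no toolboxes assumed). Note that Python indices are 0-based, so the indices come out as 0, 1, 1, 1 rather than MATLAB's 1, 2, 2, 2:

```python
import math

# The same example data as above
S1 = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
S2 = [[-1, -1, -1], [0, 9, 8]]

def nearest_neighbours(A, B):
    """For each row of A, return (distance, index) of the closest row of B."""
    result = []
    for a in A:
        dists = [math.dist(a, b) for b in B]  # Euclidean distances to every row of B
        d = min(dists)
        result.append((d, dists.index(d)))
    return result

for d, i in nearest_neighbours(S1, S2):
    print(round(d, 4), i)
```

This reproduces the `dist` column (5.3852, 6.0, 7.1414, 10.9545) from the MATLAB run above.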
Vuex - When to use getters and when to use state

I have a hard time figuring out when to use a getter or state for best performance. I will list some scenarios that you are welcome to comment on:

**Scenario 1 - Getters VS state in actions and getters**

In an action or getter, if you use a list of products multiple times to find a result, would you then use getters.products or state.products? Also, if you needed to use the products 10 times in the same function, would you call getters.products or state.products 10 times, or would you assign the products to a variable at the beginning and then use that 10 times? Are there any performance gains in any of these over the others?

**Scenario 2 - Getters returning a function**

In the Vuex documentation it states that returning a function in a getter will not cache the result of that function. So having a getter for sorting a list of 1000 products would be bad, right? Like:

```
const getters = {
  sortedProducts: state => {
    return state.products.sort(a, b => {
      ...
    })
  }
}
```

So whenever a product is updated, which may or may not alter the sorting, it would do the entire calculation once again, or? Would it be better to have state instead that is manually updated by an action and mutation?

Generally, would it ever make sense to have getters returning functions that deal with large amounts of data?
Vuex *getters* is to Vue *computed* as Vuex *state* is to Vue *data*.

**Scenario 1**

> In an action or getter, if you use a list of products multiple times to find a result, would you then use getters.products or state.products?

I don't quite understand your scenario here; a code example would better illustrate what you mean. Say you have `products` state which is an array of product objects. If you need access to *sorted* (for example) `products` in more than one place, then making a `sortedProducts` getter would be better than sorting the `products` each time, because Vue will cache the result and only recompute its value when the `products` array changes.

> Also, if you needed to use the products 10 times in the same function, would you call getters.products or state.products 10 times, or would you assign the products to a variable at the beginning and then use that 10 times? Are there any performance gains in any of these over the others?

No need to assign it to a variable at the beginning of the function if you're concerned about performance. The performance cost of accessing the store state or getters is negligible. Code readability is more important here.

**Scenario 2**

The `sortedProducts` getter function does not return a function, so Vuex will cache the result.

> Would it be better to have state instead that is manually updated by an action and mutation?

If you're talking about your `sortedProducts` getter, no.

> Generally, would it ever make sense to have getters returning functions that deal with large amounts of data?

The only situation in which you would need a getter to return a function is if you want the getter to be able to accept parameters, in which case the getter is more like a Vue component *method* than a Vue component *computed* property. If you have a getter that returns a function and deals with a large amount of data, then Vuex can't help you cache the result of that function call.
You'll have to figure out a way to minimize the number of times it is called, or incorporate [memoization](https://en.wikipedia.org/wiki/Memoization), etc.
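To make the memoization idea concrete, here is a minimal sketch of the pattern, written in Python for brevity rather than JavaScript; the same idea transfers directly to a Vuex getter that returns a function. The function names are illustrative only:

```python
def memoize(fn):
    """Cache results keyed by the call arguments, so repeat calls skip the work."""
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

call_count = 0  # track how often the expensive work actually runs

@memoize
def expensive_sort(products):
    """Stand-in for a costly derived computation over a large dataset."""
    global call_count
    call_count += 1
    return sorted(products)

print(expensive_sort((3, 1, 2)))  # computed: [1, 2, 3]
print(expensive_sort((3, 1, 2)))  # served from cache: [1, 2, 3]
print(call_count)                 # 1 - the second call never re-sorted
```

In a JavaScript getter the `cache` would typically be a `Map` keyed by the getter's parameter, invalidated whenever the underlying state changes.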
DialogFlow: simple ways of passing parameters between intents and from a previous context to the current context? (Without using fulfillment)

Hi, I'm looking for simple solutions for passing values between intents and contexts. I've tried to set an output context (c1) for intent A, and use c1 as the input context for intent B. However, I'm not able to access the parameter values within intent B. Do I have to use fulfillment to implement this? Besides, I also want to use the previous parameters' values of intent A when intent A is triggered next time. Again, can we do this without using fulfillment? If fulfillment is essential, can you give some guidance please?
Accessing parameter values from one intent to another where contexts are used can be done from the Console itself. A fulfillment webhook response can also be used, but for your use case this can be done from the Console itself. You can refer to the replication steps below:

- In the text response of the **Default Welcome Intent** add **Hi, what is your name?** and add an output context **awaiting\_name**.

[![enter image description here](https://i.stack.imgur.com/3Mv9n.png)](https://i.stack.imgur.com/3Mv9n.png)

- Create another intent **Get Name** and in it pass "**awaiting\_name**" as the input context. Add some training phrases like "john", "sandeep", "jacob" and map them to the **@sys.given-name** entity.
- In the **Get Name** intent the text response is **Ok $name, what is your email address?**. Add **awaiting\_email** in the output context field of this intent.

[![enter image description here](https://i.stack.imgur.com/WVrQ3.png)](https://i.stack.imgur.com/WVrQ3.png)

[![enter image description here](https://i.stack.imgur.com/bnYEV.png)](https://i.stack.imgur.com/bnYEV.png)

- Create another intent **Get Email** and add **awaiting\_email** in the input context. Add training phrases like "[email protected]", "[email protected]" and map them to the **@sys.email** entity.

[![enter image description here](https://i.stack.imgur.com/IyCHB.png)](https://i.stack.imgur.com/IyCHB.png)

- When you want to fetch a parameter value from another intent into the present intent where context is used, you need to reference it as **#context-name.parameter-name** as per this [doc](https://cloud.google.com/dialogflow/es/docs/intents-actions-parameters#context).
- My final output response is **Thanks #awaiting\_name.name, we will contact you soon at $email**

[![enter image description here](https://i.stack.imgur.com/uZCJe.png)](https://i.stack.imgur.com/uZCJe.png)
Variable sized column with ellipsis in a table I am trying to layout a table in CSS. The requirements are as follow: the first column expands as much as it can, and text in the first column is limited to one line of text, if more, there should be an ellipsis. The other columns take only the space they need to contain the text in them without wrapping (text-wrap: nowrap). The table itself is 100% width. I managed to have either a fixed size first column with ellipsis, or a variable size first column with no ellipsis, I can't find a way to have a variable sized columns with ellipsis. Is it achievable with CSS? I can use CSS 3 properties if required, but I would like to avoid the use of JS. Markup: ``` <table class="table"> <tr> <th>First</th> <th class="no-wrap">Second</th> </tr> <tr> <td class="expand-column"> Some long long text here </td> <td class="no-wrap"> Other text </td> </tr> </table> ``` CSS: ``` .table, .expand-column { width: 100%; } .no-wrap { white-space: nowrap; } ```
Is this the desired look: <http://jsfiddle.net/Uhz8k/> ? This works in Firefox 21+, Chrome 43+ (probably earlier), and IE11. It doesn't work in IE9. (Don't know about IE10.) The html code is below: ``` <table class="table"> <tr> <th>First</th> <th>Second</th> </tr> <tr> <td class="expand-column"> Some long long text here, Some long long text here, Some long long text here, Some long long text here, Some long long text here, Some long long text here, Some long long text here, Some long long text here, Some long long text here, Some long long text here, Some long long text here, Some long long text here. </td> <td class="no-wrap"> Other text here </td> </tr> <tr> <td class="expand-column"> Some other long text here, Some other long text here, Some other long text here, Some other long text here, Some other long text here, Some other long text here, Some other long text here, Some other long text here, Some other long text here, Some other long text here, Some other long text here, Some other long text here. </td> <td class="no-wrap"> Some other text here </td> </tr> </table> ``` and the CSS: ``` .table { width: 100%; border: 1px solid grey; } .expand-column { max-width: 1px; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; border: 1px solid grey; } .no-wrap { white-space: nowrap; border: 1px solid grey; width: 1px; } th { border: 1px solid grey; } ```
Add cell to bottom of UITableView in iOS

I am using Xcode 4.2 with storyboard to create an iPhone app. When I press the edit button in the top right corner I would like to have the option to delete the existing rows and see an extra cell (with the green '+' icon) at the top which would allow me to add a new cell. I have an array which is being populated in the `viewDidLoad` method using CoreData.

I have enabled the edit button

```
self.navigationItem.rightBarButtonItem = self.editButtonItem;
```

And implemented the method

```
- (void)tableView:(UITableView *)tableView commitEditingStyle: (UITableViewCellEditingStyle)editingStyle forRowAtIndexPath: (NSIndexPath *)indexPath
{
    if (editingStyle == UITableViewCellEditingStyleDelete) {
        // removing a cell from my array and db here...
    }
    else if (editingStyle == UITableViewCellEditingStyleInsert) {
        // adding a cell to my array and db here...
    }
}
```

I realise I need to add the cell at some point which I can then edit, but it isn't clear to me where, and I am unable to find an explanation on the internet.
Ok, the basic idea is that when the edit button is clicked we'll show the delete controls next to each row and add a new row with the add control so that users can click it in order to add an entry right? First, since you have the edit button setup already let's instruct our table that in editing mode we should show an extra row. We do that in our `tableView:numberOfRowsInSection`: ``` -(NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return self.editing ? a_recs.count + 1 : a_recs.count; } ``` `a_recs` here is the array I've setup to store our records so you'll have to switch that out with your own array. Next up we tell our `tableView:cellForRowAtIndexPath:` what to do with the extra row: ``` -(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { NSString *CellIdentifier = @"Cell"; BOOL b_addCell = (indexPath.row == a_recs.count); if (b_addCell) // set identifier for add row CellIdentifier = @"AddCell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; if (!b_addCell) { cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator; } } if (b_addCell) cell.textLabel.text = @"Add ..."; else cell.textLabel.text = [a_recs objectAtIndex:indexPath.row]; return cell; } ``` We also want to instruct our table that for that add row we want the add icon: ``` -(UITableViewCellEditingStyle)tableView:(UITableView *)tableView editingStyleForRowAtIndexPath:(NSIndexPath *)indexPath { if (indexPath.row == a_recs.count) return UITableViewCellEditingStyleInsert; else return UITableViewCellEditingStyleDelete; } ``` Butter. 
Now the super secret kung fu sauce that holds it all together with chopsticks: ``` -(void)setEditing:(BOOL)editing animated:(BOOL)animated { [super setEditing:editing animated:animated]; [self.tableView setEditing:editing animated:animated]; if(editing) { [self.tableView beginUpdates]; [self.tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:[NSIndexPath indexPathForRow:a_recs.count inSection:0]] withRowAnimation:UITableViewRowAnimationLeft]; [self.tableView endUpdates]; } else { [self.tableView beginUpdates]; [self.tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:[NSIndexPath indexPathForRow:a_recs.count inSection:0]] withRowAnimation:UITableViewRowAnimationLeft]; [self.tableView endUpdates]; // place here anything else to do when the done button is clicked } } ``` Good luck and bon appetit!
How does bootstrapping work for gcc?

I was looking up the pypy project (Python in Python), and started pondering the issue of what is running the outer layer of python? Surely, I conjectured, it can't be as the old saying goes "turtles all the way down"! After all, python is not valid x86 assembly!

Soon I remembered the concept of bootstrapping, and looked up compiler bootstrapping. "Ok", I thought, "so it can be either written in a different language or hand compiled from assembly". In the interest of performance, I'm sure C compilers are just built up from assembly.

This is all well, but the question still remains: how does the computer get that assembly file?! Say I buy a new CPU with nothing on it. During the first operation I wish to install an OS, which runs C. What runs the C compiler? Is there a miniature C compiler in the BIOS? Can someone explain this to me?
> > Say I buy a new cpu with nothing on it. During the first operation I wish to install an OS, which runs C. What runs the C compiler? Is there a miniature C compiler in the BIOS? > > > I understand what you're asking... what would happen if we had no C compiler and had to start from scratch? The answer is you'd have to start from assembly or hardware. That is, you can either build a compiler in software or hardware. If there were no compilers in the whole world, these days you could probably do it faster in assembly; however, back in the day I believe compilers were in fact dedicated pieces of hardware. The [wikipedia article](https://secure.wikimedia.org/wikipedia/en/wiki/History_of_compiler_construction) is somewhat short and doesn't back me up on that, but never mind. The next question I guess is what happens today? Well, those compiler writers have been busy writing portable C for years, so the compiler should be able to compile itself. It's worth discussing on a very high level what compilation is. Basically, you take a set of statements and produce assembly from them. That's it. Well, it's actually more complicated than that - you can do all sorts of things with lexers and parsers and I only understand a small subset of it, but essentially, you're looking to map C to assembly. Under normal operation, the compiler produces assembly code matching your platform, but it doesn't have to. It can produce assembly code for any platform you like, provided it knows how to. So the first step in making C work on your platform is to create a target in an existing compiler, start adding instructions and get basic code working. Once this is done, in theory, you can now *cross compile* from one platform to another. The next stages are: building a kernel, bootloader and some basic userland utilities for that platform. Then, you can have a go at compiling the compiler for that platform (once you've got a working userland and everything you need to run the build process). 
If that succeeds, you've got basic utilities, a working kernel, userland and a compiler system. You're now well on your way. Note that in the process of porting the compiler, you probably needed to write an assembler and linker for that platform too. To keep the description simple, I omitted them. If this is of interest, [Linux from Scratch](http://www.linuxfromscratch.org/) is an interesting read. It doesn't tell you how to create a new target from scratch (which is significantly non trivial) - it assumes you're going to build for an existing known target, but it does show you how you cross compile the essentials and begin building up the system. Python does not actually assemble to assembly. For a start, the running python program keeps track of counts of references to objects, something that a cpu won't do for you. However, the concept of instruction-based code is at the heart of Python too. Have a play with this: ``` >>> def hello(x, y, z, q): ... print "Hello, world" ... q() ... return x+y+z ... >>> import dis dis.dis(hello) 2 0 LOAD_CONST 1 ('Hello, world') 3 PRINT_ITEM 4 PRINT_NEWLINE 3 5 LOAD_FAST 3 (q) 8 CALL_FUNCTION 0 11 POP_TOP 4 12 LOAD_FAST 0 (x) 15 LOAD_FAST 1 (y) 18 BINARY_ADD 19 LOAD_FAST 2 (z) 22 BINARY_ADD 23 RETURN_VALUE ``` There you can see how Python thinks of the code you entered. This is python bytecode, i.e. the assembly language of python. It effectively has its own "instruction set" if you like for implementing the language. This is the concept of a virtual machine. Java has exactly the same kind of idea. 
I took a class function and ran `javap -c class` to get this: ``` invalid.site.ningefingers.main:(); Code: 0: aload_0 1: invokespecial #1; //Method java/lang/Object."<init>":()V 4: return public static void main(java.lang.String[]); Code: 0: iconst_0 1: istore_1 2: iconst_0 3: istore_1 4: iload_1 5: aload_0 6: arraylength 7: if_icmpge 57 10: getstatic #2; 13: new #3; 16: dup 17: invokespecial #4; 20: ldc #5; 22: invokevirtual #6; 25: iload_1 26: invokevirtual #7; //....... } ``` I take it you get the idea. These are the assembly languages of the python and java worlds, i.e. how the python interpreter and java compiler think respectively. Something else that would be worth reading up on is [JonesForth](https://rwmj.wordpress.com/2010/08/07/jonesforth-git-repository/). This is both a working forth interpreter and a tutorial and I can't recommend it enough for thinking about "how things get executed" and how you write a simple, lightweight language.
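The "instruction set plus dispatch loop" idea behind these virtual machines can be boiled down to a toy sketch. The opcode names below mimic the `dis` output shown above, but this is an illustration of the concept, not real CPython internals:

```python
def run(bytecode):
    """A toy stack machine: each instruction pops/pushes values on a stack,
    much like BINARY_ADD and RETURN_VALUE in the dis output above."""
    stack = []
    for op, arg in bytecode:
        if op == "LOAD_CONST":
            stack.append(arg)
        elif op == "BINARY_ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "RETURN_VALUE":
            return stack.pop()
        else:
            raise ValueError(f"unknown opcode: {op}")

# Equivalent of "return 1 + 2 + 3" in the toy instruction set
program = [
    ("LOAD_CONST", 1),
    ("LOAD_CONST", 2),
    ("BINARY_ADD", None),
    ("LOAD_CONST", 3),
    ("BINARY_ADD", None),
    ("RETURN_VALUE", None),
]
print(run(program))  # prints 6
```

The real interpreters differ enormously in sophistication, but at heart they are the same shape: a loop that fetches an instruction and dispatches on it.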
Postgres Crosstab allocating values to wrong columns

I have a sample table named antest as shown below to test the crosstab function.

```
create table antest(student text, subject text, result numeric);

insert into antest(student, subject, result) values
('peter','music',2.0), ('peter','language',2.0),
('gabriel','history',8.0), ('john','history',9.0),
('john','maths',4.0), ('john','music',7.0);

student|subject |result
-------+--------+------
peter  |music   |2.0
peter  |language|2.0
gabriel|history |8.0
john   |history |9.0
john   |maths   |4.0
john   |music   |7.0
```

Result wanted:

```
student|music|language|history|maths
-------+-----+--------+-------+-----
peter  |2.0  |2.0     |       |
gabriel|     |        |8.0    |
john   |7.0  |        |9.0    |4.0
```

I have executed the following query for that:

```
select *
from public.crosstab (
    'select student, subject, result from antest',
    'select distinct subject from antest'
) as final_result(student text, music numeric, maths numeric, history numeric, language numeric);
```

I got the following result:

```
student|music|maths|history|language
-------+-----+-----+-------+--------
peter  |2.0  |     |       |2.0
gabriel|     |8.0  |       |
john   |7.0  |9.0  |4.0    |
```

Kindly let me know the mistake I am making. I have to repeat this query later for another database which is 30 GB large with about 75 attributes. Is there any possibility to automate it?
You have to take care that the order of categories is exactly the same in the category query and in the column definition. Because you want an arbitrary chosen (not alphabetical) order, you should use `values` in the category query. ``` select * from crosstab ( $q$ select student, subject, result from antest $q$, $q$ values ('music'), ('language'), ('history'), ('maths') $q$ ) as final_result(student text, music numeric, language numeric, history numeric, maths numeric); student | music | language | history | maths ---------+-------+----------+---------+------- peter | 2.0 | 2.0 | | gabriel | | | 8.0 | john | 7.0 | | 9.0 | 4.0 (3 rows) ``` Of course, you can use `order by` in the query but then you have to change the order in the column definition: ``` select * from crosstab ( $q$ select student, subject, result from antest $q$, $q$ select distinct subject from antest order by 1 $q$ ) as final_result(student text, history numeric, language numeric, math numeric, music numeric); student | history | language | math | music ---------+---------+----------+------+------- peter | | 2.0 | | 2.0 gabriel | 8.0 | | | john | 9.0 | | 4.0 | 7.0 (3 rows) ```
How to open wss: or ws: port on my website?

I am using **Ratchet-lib/socketo.me** as a websocket for my chatting app. When I put localhost:8080 it works perfectly. How can I point to my website via wss:// when I publish the app online? How do I open a port or something? This is the connection code:

```
$(document).ready(function(){
    update_chat_history_data();
    var conn = new WebSocket('wss://localhost:8080');
    conn.onopen = function(e) {
        console.log("Connection established!");
    };
```

I want to change **var conn = new WebSocket('wss://localhost:8080');** to **var conn = new WebSocket('wss://mywebsite:port');**

Thanks
If you are using nginx in your production environment and it has SSL enabled (meaning HTTPS), then you can set up a reverse proxy to your Ratchet server.

```
upstream websocketserver {
    server ratchet:8080;
}

server {
    listen 443 ssl;

    # your main domain configuration

    location /ws/ {
        proxy_pass http://websocketserver;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_read_timeout 86400; # necessary to avoid websocket timeout disconnect
        proxy_redirect off;
    }
}
```

Then you will be able to use a Ratchet URL like: **wss://yourdomain.com/ws**

This is how it works with nginx, but I guess it is the same with Apache or some other web server. Just do a reverse proxy!
Initialize a calendar in a constructor

If I do this:

```
new Estimacao("Aarão","Affenpinscher","Abóbora",new GregorianCalendar(1999,7,26),0),
```

Everything works as expected. But if I do this:

```
new Estimacao("Aarão","Affenpinscher","Abóbora",new Calendar(1999,7,26),0),
```

It can't be done. As far as I know, we have to initialize a Calendar like this:

```
Calendar date = Calendar.getInstance();
date.set(Calendar.YEAR, 1999);
date.set(Calendar.MONTH, 7);
date.set(Calendar.DAY_OF_MONTH, 26);
```

What I want to know is whether it's possible to use `Calendar`, and achieve the same as `GregorianCalendar`, when creating and initializing the object `new Estimacao` as above.
Calendar is an abstract class, so you can't create an instance of it. When you call getInstance it actually returns a new GregorianCalendar instance, so it is the same as your first example.

So I guess the question is, why do you want to call new Calendar instead of new GregorianCalendar? If it is just so that you can hide the implementation you are using, then I would either just do what you have already done to initialise a Calendar, or create a single method that takes the same parameters and hides the calls to the Calendar class, e.g.

```
public Calendar getCalendar(int day, int month, int year) {
    Calendar date = Calendar.getInstance();
    date.set(Calendar.YEAR, year);
    // Calendar's month field is 0-based, so convert from a 1-based month
    date.set(Calendar.MONTH, month - 1);
    date.set(Calendar.DAY_OF_MONTH, day);
    return date;
}
```
Python package: how to avoid redefining author, version, etc.?

I would like to distribute a Python package (I would like to use setuptools and I already have a working setup.py file), along with the related documentation (produced using Sphinx).

I find myself a bit confused by the fact that I have to specify the authors' names, maintainers, version, release, date, emails, etc. in different places. I was wondering if there is some way to define this kind of common information only once for the package and then use it both in the setup.py script and in the .rst files and so on.

What are the possible approaches to this problem?
If you are invoking sphinx using distutils, your case is covered. The answer is in the documentation in [sphinx/setup\_command.py](https://github.com/sphinx-doc/sphinx/blob/3261b87cf54fbccc9b2a306e032be9941db509cb/sphinx/setup_command.py#L41). From that example, your setup.py should have a part that looks somewhat like this: ``` # this is only necessary when not using setuptools/distribute from sphinx.setup_command import BuildDoc cmdclass = {'build_sphinx': BuildDoc} name = 'My project' version = '1.2' release = '1.2.0' setup( name=name, author='Bernard Montgomery', version=release, cmdclass=cmdclass, # these are optional and override conf.py settings command_options={ 'build_sphinx': { 'project': ('setup.py', name), 'version': ('setup.py', version), 'release': ('setup.py', release)}}, ) ``` After that, calling `python setup.py build_sphinx` will build the documentation, having a single point of truth for those shared values. Well done. Works for me. Hope it helps!
How to calculate REM for type?

How do I convert PX into REM for my type? I've read Jonathan Snook's article about using REM, but he was using font-size: 62.5%, which sets the default font size to 10px (for easier calculating). How would I convert my PX units into REM if I were to use 100%, 75%, and so forth?
`Target Size / Base Size = Value` Since we can assume that browsers use 16px as default (after all, that's what Jonathan Snook did to assume that 62.5% is 10px), then your Base is 16. If you want 32px, then `32 / 16 = 2`, want 40px then `40 / 16 = 2.5`, want 24 then `24 / 16 = 1.5`. The same goes for 75%... Determine what 75% is (it's 12) and perform the same calculation. If you want 32px, then `32 / 12 = 2.666`, want 40px then `40 / 12 = 3.333`, want 24 then `24 / 12 = 2`.
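The arithmetic is easy to sanity-check with a few lines of script. This is just an illustrative helper for checking values (the function name and defaults are made up here, not part of any CSS tooling):

```python
def px_to_rem(target_px, base_px=16):
    """Convert a pixel size to rem units: target / base."""
    return target_px / base_px

# With the browser default base of 16px:
print(px_to_rem(32))   # 2.0
print(px_to_rem(40))   # 2.5
# With a 75% base (12px):
print(round(px_to_rem(32, 12), 3))   # 2.667
```

The same helper works for any percentage base: first work out what the percentage is in pixels, then pass it as `base_px`.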
Convert nested dict to dataframe

I want to convert a nested dict to a dataframe:

```
{
  '2022-09-08T15:00:00Z': {
    'INMET_BRASILIA_A001_M': {
      "DVENTO": [
        {'value' : '95.0', 'quality': 'qf-GOOD', 'quality_flag': 'GOOD', 'structure' : 'INMET_BRASILIA_A001_M', 'element' : 'DVENTO'}],
      "TD_MN_C": [
        {'value' : '6.0', 'quality': 'qf-GOOD', 'quality_flag': 'GOOD', 'structure' : 'INMET_BRASILIA_A001_M', 'element' : 'TD_MN_C'}]
    },
  },
  '2022-09-09T12:00:00Z': {
    'INMET_GOIANIA_A002_M': {
      "DVENTO" : [
        {'value' : '25', 'quality' : 'qf-GOOD', 'quality_flag' : 'GOOD', 'structure' : 'INMET_GOIANIA_A002_M', 'element' : 'DVENTO'}],
      "TD_MN_C": [
        {'value' : '3.0', 'quality' : 'qf-GOOD', 'quality_flag' : 'GOOD', 'structure' : 'INMET_GOIANIA_A002_M', 'element' : 'TD_MN_C'}],
    },
  }
}
```

I have this nested dict and now I want to convert it to a dataframe, something like this:

```
                     DVENTO TD_MN_C
2022-09-08T15:00:00Z 95.0   6.0
2022-09-09T12:00:00Z 25     3.0
```

Please help me, I've been trying this for days.
There are various ways to achieve the desired output. Assuming that the dictionary is stored in the variable `dictionary`, one can start by doing the following ``` df = pd.DataFrame.from_dict(dictionary, orient='index').stack().apply(pd.Series).reset_index(level=1, drop=True) [Out]: DVENTO TD_MN_C 2022-09-08T15:00:00Z [{'value': '95.0', 'quality': 'qf-GOOD', 'qual... [{'value': '6.0', 'quality': 'qf-GOOD', 'quali... 2022-09-09T12:00:00Z [{'value': '25', 'quality': 'qf-GOOD', 'qualit... [{'value': '3.0', 'quality': 'qf-GOOD', 'quali... ``` Then, as one wants the cells of `DVENTO` and `TD_MN_C` to be the `value` from the list with the dictionary obtained before in each cell, one can simply use list comprehensions as follows ``` df['DVENTO'] = [x[0]['value'] for x in df['DVENTO']] df['TD_MN_C'] = [x[0]['value'] for x in df['TD_MN_C']] [Out]: DVENTO TD_MN_C 2022-09-08T15:00:00Z 95.0 6.0 2022-09-09T12:00:00Z 25 3.0 ``` A one-liner would be like this ``` df = pd.DataFrame.from_dict(dictionary, orient='index').stack().apply(pd.Series).reset_index(level=1, drop=True).applymap(lambda x: x[0]['value'] if isinstance(x, list) else x) [Out]: DVENTO TD_MN_C 2022-09-08T15:00:00Z 95.0 6.0 2022-09-09T12:00:00Z 25 3.0 ```
Haskell equivalent to contains

Basically I'm trying to scan a string for words that contain a character and count the number of words found that have that character. I'm just wondering if there is a way to check if a word contains a character. The code so far is as follows:

```
let w = "Hello i'm kind of new to Haskell, but so far it's great!"
length $ filter (== 'n') w
```

This gets every occurrence of the letter n though, and if there were 2 n's in one word it would count that as 2 and not 1. I basically want to check whether each word contains the input letter and then get the count. Thanks
You can use `elem`: ``` > 'n' `elem` w True ``` For future reference in case you haven't seen it before, there is actually a search engine that will look up functions based on type for Haskell. If I search for `Char -> String -> Bool` or `String -> Char -> Bool` on [hoogle](https://www.haskell.org/hoogle/) then `elem` is the second result. This can be a very powerful tool for discovering functions you didn't know existed. --- Reading your question a bit closer and realizing exactly what you want to do, you can combine `words`, `elem`, and `filter` pretty easily to count the number of words in a string that contain a particular letter: ``` wordsWithLetter :: Char -> String -> Int wordsWithLetter c w = length $ filter (c `elem`) $ words w ``` Here `filter (c `elem`)` will remove all words that don't contain `c` so counting the remaining words using `length` tells you how many words contained that letter.
Access Entity Manager inside Symfony 5 migration class?

I have added a new relation inside my table, and inside the postUp() method I want to update all my Threads. How can I access the EntityManager inside my migration class to get all my threads, update them, and persist/flush my modifications, please?
The Doctrine migration tool is meant to execute SQL queries for you in order to amend your database schema. It has nothing to do with your persistent objects, as it operates on the database level instead of the ORM level.

You can write SQL queries to update related database records. Also, you can use a database connection within a migration script in order to fetch the data you need from the database. Here is a simple example to give you a starting point.

```
foreach ($this->connection->fetchAll('SELECT id FROM table') as $id) {
    $ids = $this->connection->fetchAll(
        'SELECT some_id FROM another_table WHERE whatever_id = :whateverId GROUP BY some_id',
        [
            'whateverId' => $id['id']
        ]
    );

    $this->addSql(
        'UPDATE table_to_update SET some_field = :someField WHERE id = :some_id',
        [
            'someField' => implode(
                ',',
                array_map(
                    static function (array $data) {
                        return $data['some_id'];
                    },
                    $ids
                )
            ),
            'some_id' => $id['id']
        ]
    );
}
```

As an alternative, for cases where you absolutely cannot solve your problem without accessing your persistent objects (for instance, if you need to apply some business logic within a migration, or the migration logic goes beyond a simple schema change), it's better to write a [Symfony Console Command](https://symfony.com/doc/current/console.html) and inject the EntityManager, repositories and whatever else you may need. This console command would be executed once to apply the complex migration and then decommissioned in a later release.
iPhone Facebook Video Upload I've been working on this for a couple of days now and just can't seem to find a straight answer or example anywhere. I am trying to upload a video to facebook from within my iPhone App. I can connect to facebook (and have uploaded pictures) without a problem using: ``` _facebook = [[Facebook alloc] initWithAppId:kAppID]; _permissions = [[NSArray arrayWithObjects:@"publish_stream", @"offline_access",nil] retain]; [_facebook authorize:_permissions delegate:self]; ``` However I can't seem to get my video uploading working. My current code is: ``` NSString *filePath = [[NSBundle mainBundle] pathForResource:@"TestMovie" ofType:@"mp4"]; NSData *data = [NSData dataWithContentsOfFile:filePath]; NSMutableDictionary *params = [NSMutableDictionary dictionaryWithObjectsAndKeys: data, @"video", nil, @"callback", @"test", @"title", @"upload testing", @"description", @"EVERYONE", @"privacy", nil]; [_facebook requestWithMethodName:@"video.upload" andParams:params andHttpMethod:@"POST" andDelegate:self]; ``` And since video upload calls have to be made to a different server I changed the restserver url within the facebook.m file to: ``` static NSString* kRestserverBaseURL = @"https://api-video.facebook.com/method/"; ``` When I run this the upload crashes with an error: ``` facebookErrDomain err 353. ``` Any help would be appreciated. **EDIT:** With Zoul's help I now have the following code implemented (I have done nothing to alter his upload class nor the version of the SDK it came with). The request no longer gets an error however nothing is being uploaded. 
I initialize the facebook object and the upload object: ``` _facebook = [[Facebook alloc] initWithAppId:kAppID]; _permissions = [NSArray arrayWithObjects:@"publish_stream", @"offline_access",nil]; [_facebook authorize:_permissions delegate:self]; _upload = [[FBVideoUpload alloc] init]; ``` And then I use it once facebook has logged in: ``` - (void)fbDidLogin{ _upload.accessToken = _facebook.accessToken; _upload.apiKey = kApiKey; _upload.appSecret = kApiSecret; NSString *filePath = [[NSBundle mainBundle] pathForResource:@"Test" ofType:@"mp4"]; NSURL *fileURL = [NSURL fileURLWithPath:filePath]; NSData *data = [NSData dataWithContentsOfFile:filePath]; NSMutableDictionary *params = [NSMutableDictionary dictionaryWithObjectsAndKeys: data, @"", @"test", @"title", @"upload testing", @"description", @"EVERYONE", @"privacy", nil]; [_upload startUploadWithURL:fileURL params:params delegate:self]; } ```
I’ve got a [video upload branch](https://github.com/zoul/facebook-ios-sdk/tree/video-upload) in my fork of the Facebook SDK on GitHub. I did not touch it for several weeks, but it used to work fine (only it requires the old-style authentication, see [this branch](https://github.com/zoul/facebook-ios-sdk/tree/force-old-auth)). There are some comments in the [FBVideoUpload class header](https://github.com/zoul/facebook-ios-sdk/blob/video-upload/src/FBVideoUpload.h), but the interface is pretty much self-explanatory. There’s also some helpful discussion under my [pull request](https://github.com/facebook/facebook-ios-sdk/pull/153) – especially the thing about SSL certificates on the `api-video` cluster that could make the whole issue easier, but I did not review the code yet. [Rant: It’s a pity that the Facebook SDK for iOS does not exactly thrive on GitHub. There are many pull requests, but the official developers never seem to merge anything, not even trivial typo fixes in the documentation. Most of the time the pull requests simply sit there until rejected.] And yes, did I mention that the video upload code is a messy hack? The video upload code is a messy hack. It parses some auth tokens and it could break anytime soon, but it was the only way I could make it work back then. --- *Update:* The video upload branch is no more, you can now easily upload video using the official SDK: ``` NSData *videoData = [NSData dataWithContentsOfURL:movieURL]; NSMutableDictionary* params = [NSMutableDictionary dictionaryWithObjectsAndKeys: …, @"title", …, @"description", …, @"file", videoData, @"clip.mov", nil]; [facebook requestWithGraphPath:@"me/videos" andParams:params andHttpMethod:@"POST" andDelegate:self]; ``` This is β€œthe right way to do it”ℒ, the previous solution was just a temporary hack.
Python: Invalid Literal for Int() Base 10

I'm writing code for a project to determine the validity of credit cards and I've hit a wall; it seems like all of the things I have tried so far are not working. This is giving me an error in the sumofodds function where j=int(card[i]). The error is "Invalid Literal for Int() with Base 10". Is there anyone that can give me some advice?

```
def sumofdoubles():
    card=input()
    x=len(card)
    summ=0
    for i in range(x-2,-1,-2):
        j=int(card[i])
        u=j+j
        if u>9:
            h=u/2
            summ=summ+h
    return(summ)

def sumofevens():
    card=input()
    x=len(card)
    summ=0
    for i in range(x-2,-1,-2):
        j=int(card[i])
        u=j+j
        if u<9:
            summ=summ+u
    return(summ)

def sumofodds():
    summ=0
    card=input()
    x=len(card)
    for i in range(x-1,-1,-2):
        j=int(card[i])
        summ=summ+j
    return(summ)

def main():
    card=input()
    length=len(card)
    summ=0
    while(card!="#####"):
        if (card[0]=='4' or card[0]=='5' or card[0]=='6' or (card[0]=='3' and card[1]=='1')):
            dbls=sumofdoubles()
            evens=sumofevens()
            odds=sumofodds()
            if((dbls+evens+odds)%10==0):
                print("Valid")
main()
```

This is the full traceback for those wondering

```
python test.py<s.input
File "test.py", line 52 in <module>
    main()
File "test.py", line 48, in main
    odds=sumofodds()
File "test.py", line 33, in sumofodds
    j=int(card[i])
ValueError: invalid literal for int() with base 10: '#'
```
Well, whatever you did you typed in something that isn't actually a Base 10 number. This includes anything that isn't number characters or spaces. So don't type in that. :-) Examples: ``` >>> int('04.9') Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: invalid literal for int() with base 10: '04.9' >>> int('4-') Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: invalid literal for int() with base 10: '4-' >>> int("Jack") Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: invalid literal for int() with base 10: 'Jack' ``` Update: Yes you typed a '#'. That's not a valid number.
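If you want to guard against stray characters like `#` before converting, one option (a generic sketch, not tied to the card-validation code above) is to attempt the conversion and catch the error:

```python
def to_int_or_none(s):
    """Return int(s) if s is a valid base-10 integer, otherwise None."""
    try:
        return int(s)
    except ValueError:
        return None

print(to_int_or_none('42'))    # 42
print(to_int_or_none('#'))     # None
print(to_int_or_none('04.9'))  # None
```

In the original program, checking each character this way (or with `str.isdigit()`) before calling `int()` would stop the sentinel line `#####` from ever reaching the digit-summing functions.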
NGINX Reverse Proxy for upstream Django/Gunicorn Backend I posted this on the nginx mailing list but haven't heard back from anyone so I thought I'd give it a crack over here on stackoverflow :) I currently have a Django app hosted up on Amazon EC2. All of my data is served through Gunicorn on port 8000 (a Python WSGI HTTP Server for UNIX. It's a pre-fork worker model ported from Ruby's Unicorn project). I don't have to worry about passing static content (images) to the client because all of that is handled for me by Amazon's S3 service. Django passes the url of the content through Gunicorn to the client via json. The client can then download it. My Django app is hosted on a t1.micro instance. Here are the specs as provided by Amazon Web Services: Processor: Up to 2 EC2 compute units (for short periodic bursts). Virtual cores: 1 Memory: 615 MiB Platform: 32-bit and 64-bit I have 3 Gunicorn workers running alongside my Django app on this instance. For my Nginx Reverse proxy server, I also use a t1.micro instance. I've set it up and everything is working perfectly fine. Here is my etc/nginx/sites-enabled/default config as follows: ``` # Gunicorn server upstream django { server 10.0.10.0:8000; } # Serve static files and redirect any other request to Gunicorn server { listen 80; server_name 23.0.23.23/; #root /var/www/domain.com/; #access_log /var/log/nginx/domain.com.access.log; #error_log /var/log/nginx/domain.com.error.log; # Check if a file exists at /var/www/domain/ for the incoming request. # If it doesn't proxy to Gunicorn/Django. #try_files $uri @django; # Setup named location for Django requests and handle proxy details location @django { proxy_pass http://django; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } ``` This setup is great but it doesn't account for proxy buffering for slow clients. 
It also doesn't account for caching, and it doesn't account for the number of nginx workers I will require.

How can I configure compression? I found resources that mention something called gzip; does it support JSON? How can I fine-tune my nginx config according to my t1.micro instance specs? If you were in my scenario, what settings would you use? Your answers and examples are much appreciated. Thank you :)
## Proxy Buffering

Generally proxy buffering is only going to help you if you're generating very big web pages, or sending large files. Regardless, it's pretty easy to set up, but you will need to tune the buffer sizes to about the size of your largest pages +20% (any page that doesn't fit in the buffer gets written to disk), or selectively enable proxy buffering on your largest pages.

docs: <http://wiki.nginx.org/HttpProxyModule#proxy_buffering>

## Caching

I don't know much about your app and how dynamic its content is, but setting up correct Cache-Control/ETag header generation in your app will be the first thing you'll want to look at. This is what will let Nginx know what is safe to proxy. Also, you may wish to set up multiple cache zones to manage the amount of space your caches take on disk.

```
proxy_cache one;
proxy_cache_path /data/nginx/cache/one levels=1:2 max_size=1G keys_zone=one:1000m;
```

You'll want rules that allow you to bypass the cache (for debugging or programmatically)

```
proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
proxy_cache_bypass $http_pragma $http_authorization;
```

You'll also want your app to unconditionally serve from cache when your application throws errors:

```
proxy_cache_use_stale error timeout invalid_header;
```

docs:

- <http://wiki.nginx.org/HttpProxyModule#proxy_cache>
- <http://wiki.nginx.org/HttpProxyModule#proxy_cache_path>
- <http://wiki.nginx.org/HttpProxyModule#proxy_cache_use_stale>

## Gzip

Enabling gzip on your site is always a tradeoff between CPU time and bandwidth. True, you can lower the amount of data sent over the wire if you gzip your content, but if you're running on a t1.micro, you'll severely limit your capacity for proxying requests due to CPU utilisation. Generally gzip is a much better idea for static content that you can pre-zip and then serve over and over again. (Yes, gzip supports JSON, but this is because gzip becomes the wire format, and is transparently unzipped by the client.
You should read up on `Content-Encoding: gzip`) docs: <http://betterexplained.com/articles/how-to-optimize-your-site-with-gzip-compression/> ## Misc You'll want to set a few misc settings too: ``` # Directives turn off 404 error logging. location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { log_not_found off; } ```
Testing functions through I/O in Haskell

I'm playing codewars to sharpen my Haskell skills, and running into a problem that I haven't had in imperative languages. Let's say I'm writing a function foo() in javascript, which takes an int, adds two, squares it, subtracts one, and returns the square root of that number.

```
var foo = function(n) {
    n += 2;
    n = n * n;
    n -= 1;
    n = Math.sqrt(n);
    return n;
}
```

I want to check on the state of the data being processed in the function at various points to help me troubleshoot/revise/debug code, so I will insert console.log() statements whenever I want to see where I'm at. For example, am I, in fact, squaring the sum of n+2 correctly halfway through the function? Let's see...

```
var foo = function(n) {
    n += 2;
    n = n * n;
    console.log("n = " + n);
    n -= 1;
    n = Math.sqrt(n);
    return n;
}
```

While this example should be simple enough for a Haskeller to write in one line, if you have a complex function and want to check the state at different points, how do Haskellers do it? Is there a standard practice using the IO() monad? Do they get around it some other way?
GHCi has a fancy [debugger](https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/ghci-debugger.html) that lets you step through your code and evaluate it line by line, checking it's state and intermediary results. But, of course, there is also the `printf`-style debugging that you are asking for, using the `trace` function from `Debug.Trace`. Nothing wrong with using that for small scripts imho, but it's generally discouraged. `trace` has the type `String -> a -> a`, so you pass a string that gets printed (via `unsafePerformIO`) and any argument that gets simply returned. --- In your example we could use it as follows. This is your function translated to Haskell: ``` foo x = sqrt $ (x+2)**2 - 1 ``` Now we can just add `trace` and the string we want to see, e.g. `"Debug Me: " ++ (show ((x+2)**2))`. First import `Debug.Trace`, though: ``` import Debug.Trace foo x = sqrt $ (trace ("Debug Me: " ++ (show ((x+2)**2))) ((x+2)**2)) - 1 ``` A bit ugly? Following David Young's comment below, we better use `traceShowId :: Show a => a -> a`, if what we want to output is identical to the intermediary result (converted to `String`, of course): ``` import Debug.Trace foo x = sqrt $ (traceShowId ((x+2)**2)) - 1 ``` --- See [here](https://wiki.haskell.org/Debugging) for a summary of debugging options.
How to get apache to serve a site via VirtualBox?

[This tutorial](http://www.jacklmoore.com/notes/development) taught me how to create an Ubuntu Server (12.10, 64-bit) VM in VirtualBox on a **Windows 7** host machine, install Apache and have it serve a site from a dummy domain accessible via the host machine's browser. It took a day and many attempts (although it seemed I was following the instructions as they were laid out, I was always doing something wrong), but I'd finally done it.

I find Ubuntu a much more seamless development environment than Windows, and so I wanted the same thing on Ubuntu. So, this time, I created an Ubuntu Server VM in VirtualBox on an **Ubuntu** host machine, installed Apache and configured it. But when I access the site via the dummy domain, I get the '*Server not found*' error.

Yes, I did modify the `/etc/hosts` file just as mentioned in [the tutorial](http://www.jacklmoore.com/notes/development) (which is for Windows 7). But I couldn't get it to work. I don't know what's wrong. Does anyone know what else I should be doing?

**EDIT:** If I am not clear enough, please ask. I am willing to clarify.
If you need to allow other machines in your physical network to reach your VM, or if the VM needs Internet access, use bridged networking. Otherwise, stick to host-only networking.

1. Stop your VM and open the settings for it in the *VirtualBox (OSE) Manager*
2. Go to the *Network* tab
3. Select the network mode of your choice (bridged networking or host-only)

   If you want to use bridged networking, you have to select the right network adapter at **Name: \_\_\_\_\_\_\_\_\_\_\_\_**. For wired connections, you'd select something named like `eth0`. Wireless connections are usually named `wlan0` (the numbers may vary).
4. Save the settings
5. Start the Ubuntu VM
6. When it is up, you can find the IP address by running:

   ```
   sudo ifconfig
   ```

   The output should look similar to this:

   ```
   eth0      Link encap:Ethernet  HWaddr 08:00:27:f4:c3:7b
             inet addr:192.168.1.4  Bcast:192.168.1.255  Mask:255.255.255.0
             inet6 addr: fe80::a00:27ff:fef4:c37b/64 Scope:Link
             UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
             RX packets:20 errors:0 dropped:0 overruns:0 frame:0
             TX packets:25 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1000
             RX bytes:3244 (3.2 KB)  TX bytes:2512 (2.5 KB)

   lo        Link encap:Local Loopback
             inet addr:127.0.0.1  Mask:255.0.0.0
             inet6 addr: ::1/128 Scope:Host
             UP LOOPBACK RUNNING  MTU:16436  Metric:1
             RX packets:0 errors:0 dropped:0 overruns:0 frame:0
             TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:0
             RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
   ```

   In the above output, `192.168.1.4` (on the second line) is the IP address that can be used on your Ubuntu host system to access your VM.
7. Now open the **hosts** file on the Ubuntu host machine:

   ```
   sudo gedit /etc/hosts
   ```

   (If you don't want to use `gedit`, replace the word with the name of your favorite editor, e.g. `vim`, `nano`.)

   Once the file is open, add this line and **save** it:

   ```
   192.168.1.4 my-dummy-site.com
   ```
8.
Open any browser on your host machine and go to `my-dummy-site.com` to access your website, served right from VirtualBox.

***( Special thanks to [@iSeth](http://chat.stackexchange.com/transcript/message/7038658#7038658) for [the help](https://askubuntu.com/a/138121/36459). Entirely based on [this answer](https://askubuntu.com/a/52150/36459), but it is NOT the same. )***
Java configuration of SimpleUrlHandlerMapping (Spring boot) I have an existing Spring web application, that uses two controllers, which extend AbstractController. I want to integrate Spring Boot into the application, so that we can run it as a standalone application. I am facing a problem, because Spring is not forwarding the calls to my controller. How can I map the controller to an URL pattern like "/app/\*"? SampleController.java ``` @Controller("myController") public class SampleController extends AbstractController { @Override protected ModelAndView handleRequestInternal(HttpServletRequest request, HttpServletResponse response) throws Exception { response.getWriter().print("Hello world!"); return null; } } ``` Application.java ``` @EnableAutoConfiguration @Configuration @ComponentScan public class Application { public static void main(String[] args) throws Exception { SpringApplication.run(Application.class, args); } @Bean public SimpleUrlHandlerMapping sampleServletMapping() { SimpleUrlHandlerMapping mapping = new SimpleUrlHandlerMapping(); Properties urlProperties = new Properties(); urlProperties.put("/index", "myController"); mapping.setMappings(urlProperties); return mapping; } } ``` When I start the application I get the following message: ``` INFO [org.springframework.web.servlet.handler.SimpleUrlHandlerMapping] Mapped URL path [/index] onto handler 'myController' ``` But when I send a request to /index, I get following messages: ``` DEBUG [org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping] Looking up handler method for path /index DEBUG [org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping] Did not find handler method for [/index] DEBUG [org.springframework.web.servlet.handler.SimpleUrlHandlerMapping] Matching patterns for request [/index] are [/**] DEBUG [org.springframework.web.servlet.handler.SimpleUrlHandlerMapping] URI Template variables for request [/index] are {} DEBUG 
[org.springframework.web.servlet.handler.SimpleUrlHandlerMapping] Mapping [/index] to HandlerExecutionChain with handler [org.springframework.web.servlet.resource.ResourceHttpRequestHandler@11195d3e] and 1 interceptor ```
`SimpleUrlHandlerMappings` are ordered and, as [described in the javadoc](http://docs.spring.io/spring/docs/4.0.6.RELEASE/javadoc-api/org/springframework/web/portlet/handler/AbstractHandlerMapping.html#setOrder-int-) the default is `Integer.MAX_VALUE` which means that they have the lowest possible precedence. This causes `ResourceHttpRequestHandler` (which is mapped to `/**` and has an order of `Integer.MAX_VALUE - 1` by default) to take precedence over the mapping for your controller. Update your `sampleServletMapping()` method to set your mapping's order to a value that's less than `Integer.MAX_VALUE - 1`. For example: ``` @Bean public SimpleUrlHandlerMapping sampleServletMapping() { SimpleUrlHandlerMapping mapping = new SimpleUrlHandlerMapping(); mapping.setOrder(Integer.MAX_VALUE - 2); Properties urlProperties = new Properties(); urlProperties.put("/index", "myController"); mapping.setMappings(urlProperties); return mapping; } ```
how to convert a Java POJO to JSON string?

I have a POJO for which I set values. The POJO is:

```
public class CreateRequisitionRO extends AbstractPortfolioSpecificRO {

    private static final long serialVersionUID = 2418929142185068821L;

    private BigDecimal transSrlNo;
    private String transCode;
    private InflowOutflow inflowOutflow;

    public BigDecimal getTransSrlNo() {
        return transSrlNo;
    }

    public void setTransSrlNo(BigDecimal transSrlNo) {
        this.transSrlNo = transSrlNo;
    }

    public InflowOutflow getInflowOutflow() {
        return inflowOutflow;
    }

    public void setInflowOutflow(InflowOutflow inflowOutflow) {
        this.inflowOutflow = inflowOutflow;
    }

    public String getTransCode() {
        return transCode;
    }
}
```

This is how I set values:

```
CreateRequisitionRO[] request = new CreateRequisitionRO[1];
request[0].setTransSrlNo(new BigDecimal(1));
request[0].setTransCode("BUY");
request[0].setInflowOutflow(InflowOutflow.I);
```

Now I would like to convert/serialize the above Java POJO to a JSON string.

Could somebody help me with how to do this? Best Regards
[XStream](http://xstream.codehaus.org) or [GSON](http://code.google.com/p/google-gson/), as mentioned in the other answer, will sort you out.

Follow the [JSON tutorial on XStream](http://xstream.codehaus.org/json-tutorial.html) and your code will look something like this:

```
CreateRequisitionRO product = new CreateRequisitionRO();
XStream xstream = new XStream(new JettisonMappedXmlDriver());
xstream.setMode(XStream.NO_REFERENCES);
xstream.alias("product", CreateRequisitionRO.class);
System.out.println(xstream.toXML(product));
```

With GSON, your code will [look like this](https://sites.google.com/site/gson/gson-user-guide#TOC-Serializing-and-Deserializing-Generic-Types):

```
CreateRequisitionRO obj = new CreateRequisitionRO();
Gson gson = new Gson();
String json = gson.toJson(obj);
```

Pick your library and go.
ValueError: wrapper loop when unwrapping Python3 test cases (doctests) are failing with my sample code, but the same code works fine in Python2.

`test.py`

```
class Test(object):
    def __init__(self, a=0):
        self.a = a

    def __getattr__(self, attr):
        return Test(a=str(self.a) + attr)
```

`tst.py`

```
from test import Test

t = Test()
```

Run the test cases: `python3 -m doctest -v tst.py`

Error:

```
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/lib/python3.6/doctest.py", line 2787, in <module>
    sys.exit(_test())
  File "/usr/lib/python3.6/doctest.py", line 2777, in _test
    failures, _ = testmod(m, verbose=verbose, optionflags=options)
  File "/usr/lib/python3.6/doctest.py", line 1950, in testmod
    for test in finder.find(m, name, globs=globs, extraglobs=extraglobs):
  File "/usr/lib/python3.6/doctest.py", line 933, in find
    self._find(tests, obj, name, module, source_lines, globs, {})
  File "/usr/lib/python3.6/doctest.py", line 992, in _find
    if ((inspect.isroutine(inspect.unwrap(val))
  File "/usr/lib/python3.6/inspect.py", line 513, in unwrap
    raise ValueError('wrapper loop when unwrapping {!r}'.format(f))
ValueError: wrapper loop when unwrapping <test.Test object at 0x7f6e80028550>
```

Can anyone please help with how to overcome this error? Thanks.
This is arguably a bug in doctest. What's happening is that doctest is searching for functions/methods/callables with a docstring, and while doing so it's [unwrapping](https://docs.python.org/3/library/inspect.html#inspect.unwrap) any decorators it finds. Why it does this, I have no idea. But anyway, doctest ends up calling `inspect.unwrap(t)` (where `t` is a `Test` instance), which is essentially equivalent to doing this:

```
while True:
    try:
        t = t.__wrapped__
    except AttributeError:
        break
```

Because `t` is a `Test` instance, accessing `t.__wrapped__` calls `__getattr__` and returns a new `Test` instance. This would go on forever, but `inspect.unwrap` is smart enough to notice that it's not getting anywhere, and throws an exception instead of entering an infinite loop.

---

As a workaround, you can rewrite your `__getattr__` method to throw an `AttributeError` when `__wrapped__` is accessed. Even better, throw an `AttributeError` when *any* dunder-attribute is accessed:

```
def __getattr__(self, attr):
    if attr.startswith('__') and attr.endswith('__'):
        raise AttributeError
    return Test(a=str(self.a) + attr)
```
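To see the workaround end to end, here is a minimal, self-contained sketch (the class name and attribute chaining mirror the question):

```python
import inspect

class Test(object):
    def __init__(self, a=0):
        self.a = a

    def __getattr__(self, attr):
        # Refuse dunder lookups such as __wrapped__ so that tools like
        # inspect.unwrap (and therefore doctest) terminate cleanly.
        if attr.startswith('__') and attr.endswith('__'):
            raise AttributeError(attr)
        return Test(a=str(self.a) + attr)

t = Test()
print(inspect.unwrap(t) is t)  # True: unwrap stops at the first AttributeError
print(t.foo.a)                 # prints 0foo: normal attribute chaining still works
```

With this guard in place, `python3 -m doctest -v tst.py` no longer trips over the wrapper loop.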
What are the Consequences of Using a Stable Identifier in Scala? In Scala, I have heard that an identifier is 'stable' if it starts with a capital letter or is surrounded with backticks. What does it mean for an identifier to be stable in Scala? What side effects does this have? Finally, is this related to the convention that constants (like `static final` in Java) are recommended to start with a capital letter?
According to the language [reference](http://www.scala-lang.org/files/archive/spec/2.12/03-types.html#paths): > > A stable identifier is a path which ends in an identifier. > > > Backticks are not directly related to this term. Backticks are needed to wrap something that is not by itself a [lexically](http://www.scala-lang.org/files/archive/spec/2.12/01-lexical-syntax.html) valid identifier, such as keywords. You are probably thinking of a [stable identifier *pattern*](https://www.scala-lang.org/files/archive/spec/2.11/08-pattern-matching.html#stable-identifier-patterns): > > To resolve the syntactic overlap with a variable pattern, a stable identifier pattern may not be a simple name starting with a lower-case letter. However, it is possible to enclose such a variable name in backquotes; then it is treated as a stable identifier pattern. > > > What this talks about is what happens if you use a "normal" lower case variable name in a match pattern expression. The consequence is that the match will always succeed and bind the result to the *new* binding (that potentially shadows the binding you meant to use). To get the match to use the existing variable binding you can use backticks/backquotes. None of this is really related to the Java convention related to naming of constants. That is just a convention.
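To make the pattern-matching consequence concrete, here is a small illustrative Scala sketch (the names are hypothetical):

```scala
val x = 3

42 match {
  // Lower-case `x` here is a variable pattern: it ALWAYS matches and
  // introduces a NEW binding that shadows the outer x.
  case x => println(s"variable pattern bound x = $x") // prints 42
}

42 match {
  // Back-quoted `x` is a stable identifier pattern: it compares the
  // scrutinee against the EXISTING binding (3) instead of shadowing it.
  case `x` => println("equal to the outer x")
  case _   => println("no match, since 42 != 3") // taken here
}
```

An upper-case name (e.g. `val X = 3; ... case X => ...`) is treated the same way as the back-quoted form, which is one practical reason the upper-case naming convention for constants is convenient in Scala.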
Access variable from one role in another role in an Ansible playbook with multiple hosts I'm using the latest version of Ansible, and I am trying to use a default variable from `role-one`, used on host `one`, in `role-two`, used on host `two`, but I can't get it to work. Nothing I have found in the documentation or on StackOverflow has really helped. I'm not sure what I am doing wrong. Ideally I want to set the value of the variable once, and be able to use it in another role for any host in my playbook. I've broken it down below.

---

In my inventory I have a hosts group called `[test]` which has two hosts aliased as `one` and `two`.

```
[test]
one ansible_host=10.0.1.10 ansible_connection=ssh ansible_user=centos ansible_ssh_private_key_file=<path_to_key>
two ansible_host=10.0.1.20 ansible_connection=ssh ansible_user=centos ansible_ssh_private_key_file=<path_to_key>
```

I have a *single playbook* with a *play for each of these hosts* and I supply the `hosts:` value as `"{{ host_group }}[0]"` for host `one` and `"{{ host_group }}[1]"` for host `two`. The play for host `one` uses a role called `role-one` and the play for host `two` uses a role called `role-two`.

```
- name: Test Sharing Role Variables
  hosts: "{{ host_group }}[0]"
  roles:
    - ../../ansible-roles/role-one

- name: Test Sharing Role Variables
  hosts: "{{ host_group }}[1]"
  roles:
    - ../../ansible-roles/role-two
```

In `role-one` I have set a variable `variable_one`.

```
---
# defaults file for role-one
variable_one: Role One Variable
```

I want to use the value of `variable_one` in a template in `role-two`, but I haven't had any luck. I'm using the below as a task in `role-two` to test and see if the variable is getting "picked up".

```
---
# tasks file for role-two
- debug:
    msg: "{{ variable_one }}"
```

When I run the playbook with `ansible-playbook test.yml --extra-vars "host_group=test"` I get the below failure.
``` TASK [../../ansible-roles/role-two : debug] *********************************************************************************************************************************************************************************************** fatal: [two]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: \"hostvars['test']\" is undefined\n\nThe error appears to be in 'ansible-roles/role-two/tasks/main.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# tasks file for role-two\n- debug:\n ^ here\n"} ```
Variables declared in roles are scoped to the play. If you want to access a variable from `role-one` in `role-two`, they would both need to be in the same play. For example, you could write:

```
- name: Test Sharing Role Variables
  hosts: "{{ host_group }}"
  tasks:
    - import_role:
        name: role-one
      when: inventory_hostname == "one"

    - import_role:
        name: role-two
      when: inventory_hostname == "two"
```

Alternatively, you could restructure your roles so that the variables can be imported separately from your actions. That is, have a `role-one-vars` role that does nothing but define variables, and then you can import that in both `role-one` and `role-two`. That is, you would have a structure something like:

```
playbook.yml
hosts
roles/
  role-one/
    tasks/
      main.yml
  role-one-vars/
    vars/
      main.yml
  role-two/
    tasks/
      main.yml
```

And `role-one/tasks/main.yml` would look like:

```
---
- import_role:
    name: role-one-vars

- debug:
    msg: "in role-one: {{ variable_one }}"
```

`role-two/tasks/main.yml` would look like:

```
---
- import_role:
    name: role-one-vars

- debug:
    msg: "in role-two: {{ variable_one }}"
```

And `role-one-vars/vars/main.yml` would look like:

```
---
variable_one: role one variable
```

Putting this all together, the output looks like:

```
PLAY [Test Sharing Role Variables] *********************************************

TASK [Gathering Facts] *********************************************************
ok: [one]

TASK [role-one : debug] ********************************************************
ok: [one] => {
    "msg": "in role-one: role one variable"
}

PLAY [Test Sharing Role Variables] *********************************************

TASK [Gathering Facts] *********************************************************
ok: [two]

TASK [role-two : debug] ********************************************************
ok: [two] => {
    "msg": "in role-two: role one variable"
}

PLAY RECAP *********************************************************************
one : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
two : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
```
Why are CIL and CLR required in .NET? I saw this nice image [here](http://en.wikipedia.org/wiki/File%3aOverview_of_the_Common_Language_Infrastructure.svg). I learned that all the compilers that support .NET languages convert the source code to the `CIL` format. Now, Microsoft is never going to bring `.NET` to every operating system by writing a CLR for each of them. Then why keep such an intermediate code format, and a CLR to execute that CIL? Isn't that a headache to deal with? Why did Microsoft choose to do it this way? **EDIT** This kind of architecture has its price. It will reduce performance, won't it? Java does this to maintain platform independence; for what reason does .NET do it? Why not keep a simple, plain C-like compiler? Either way, adding a new language would require a compiler to convert the code to CIL; the only difference it would make is the target language. That's all.
Because they only need to write one compiler for C# to CIL - which is the hard part. Making an interpreter (or, more often, a Just-In-Time compiler) for the CIL per platform is relatively easy compared to writing a compiler from C# to (per-platform) executable code. Beyond that, the runtime can handle anything that compiles to CIL. If you want a new language (like F#) you only have to write *one* compiler for it and you auto-magically get all the platform support for things .NET supports. Oh, and I can take a .NET dll and run that on Windows or on Linux via Mono without recompilation (assuming all of my dependencies are satisfied). As for performance, it's debatable. There are "pre-compilers", essentially, that take the CIL and make native binaries. Others argue that just-in-time compilers can make optimizations that static compilers simply cannot. In my experience, it depends a lot on what your application is doing and what platform you're running it on (mostly how good the JITer is on that platform). It's extremely rare for me to run into a scenario where .NET wasn't *good enough*.
Get full list of Autocomplete entries? How can I get a full list of saved auto-complete entries for a specific textbox name in [google-chrome](/questions/tagged/google-chrome "show questions tagged 'google-chrome'")? Pressing the `↓` key with the textbox focused, gives me 6 random entries. Typing `s` gives me another 6 entries - this time beginning with `s`. But how can I get the full list? It's for my own PC, so I have admin rights - nothing underhand going on. It is, however, on a domain which I do not own. I do use the *Sign-in* feature of Chrome, if that makes any difference. The text box in question is: ``` <input type="search" class="inputbox" name="q"> ```
1. Download and run [SQLite Database Browser](http://sourceforge.net/projects/sqlitebrowser/) 2. Close Chrome (or copy the file to another location and open that) 3. Open `C:\Users\<username>\AppData\Local\Google\Chrome\User Data\Default\Web Data` with the db browser - For OS X, the relevant file is located at `~/Library/Application Support/Google/Chrome/Profile 1/Web Data` - For Linux, `~/.config/google-chrome/Default/Web Data` - For other OS's, this will point you in the right direction: [Where does Chrome save its SQLite database to? - Stack Overflow](https://stackoverflow.com/questions/8936878/where-does-chrome-save-its-sqlite-database-to) 4. `Browse Data` 5. Table: `autofill` Alternatively, to get the autofill entries for a specific input name, click `Execute SQL` and execute this: ``` select * from autofill where name = 'q' order by value asc ```
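If you would rather script the lookup than use a GUI browser, roughly the same query can be run with Python's built-in `sqlite3` module. This is a sketch under the assumptions above (the `Web Data` path and the `autofill` table's `name`/`value` columns); the file is copied first because a running Chrome keeps the live database locked:

```python
import shutil
import sqlite3
import tempfile

def autofill_entries(web_data_path, field_name):
    """Return all saved autofill values for a given input name, sorted."""
    # Query a copy so an open Chrome instance can't lock us out.
    with tempfile.NamedTemporaryFile(suffix=".sqlite", delete=False) as tmp:
        copy_path = tmp.name
    shutil.copyfile(web_data_path, copy_path)

    conn = sqlite3.connect(copy_path)
    try:
        rows = conn.execute(
            "select value from autofill where name = ? order by value asc",
            (field_name,),
        ).fetchall()
    finally:
        conn.close()
    return [value for (value,) in rows]
```

For example, `autofill_entries(r'C:\Users\me\AppData\Local\Google\Chrome\User Data\Default\Web Data', 'q')` would list every saved entry for the `q` search box (path shown is illustrative).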
Warnings after octave installation I am getting the following warnings after launching Octave. I used the installation instructions from [here](http://sourceforge.net/projects/octave/files/Octave_Windows%20-%20MinGW/Octave%203.6.0%20for%20Windows%20MinGW%20installer/). What could be the issue? Are these major warnings? I am using Windows.

```
warning: gmsh does not seem to be present some functionalities will be disabled
warning: dx does not seem to be present some functionalities will be disabled
warning: function C:\Octave\Octave3.6.0_gcc4.6.2\share\octave\packages\statistics-1.1.0\fstat.m shadows a core library function
```
That's because you have installed packages that you probably do not need (otherwise you would have already noticed more than just the warning messages).

```
warning: gmsh does not seem to be present some functionalities will be disabled
```

You have the msh package installed and loaded. This package uses gmsh, hence the warning. If you don't use this package, remove it with `pkg uninstall msh` or turn off its automatic loading at startup with `pkg rebuild -noauto msh`.

```
warning: dx does not seem to be present some functionalities will be disabled
```

I don't know exactly which package uses dx, but I'm guessing the problem is the same as with msh. Run `pkg unload all` and then try to load the packages one by one until you find the one that triggers the warning. Fix it as in the case of the msh package.

```
warning: function C:\Octave\Octave3.6.0_gcc4.6.2\share\octave\packages\statistics-1.1.0\fstat.m shadows a core library function
```

The `fstat()` function from the statistics package is shadowing the function with the same name from Octave core. The one in core is already deprecated, so don't worry about this warning. It will go away after Octave 3.8 is released.
Using async in event emitter I am being challenged trying to make an async call inside an event. Here's the code from [Nodemailer](https://nodemailer.com/transports/ses/) - I've added the line where I need to make an async call:

```
let transporter = nodemailer.createTransport({
    SES: new aws.SES({
        apiVersion: '2010-12-01'
    }),
    sendingRate: 1 // max 1 messages/second
});

// Push next messages to Nodemailer
transporter.on('idle', () => {
    while (transporter.isIdle()) {
        // I need to make an async db call to get the next email in queue
        const mail = await getNextFromQueue()
        transporter.sendMail(mail);
    }
});
```

I found this [post](https://stackoverflow.com/questions/45967019/nodejs-7-eventemitter-await-async) which suggests switching things around, which makes sense; however, I have been unable to apply it correctly here. *Update* - The answer was to mock sendMail using Sinon.
You can just mark your callback as `async` and use `await` inside of it. The fact that it's an `event` handler callback makes no difference, since at the end it's just a plain old `Function`.

### Node snippet

```
'use strict'

const EventEmitter = require('events')
const myEmitter = new EventEmitter()

const getDogs = () => {
  return new Promise(resolve => {
    setTimeout(() => {
      resolve(['Woof', 'Woof', 'Woof'])
    }, 500)
  })
}

myEmitter.on('event', async () => {
  const dogs = await getDogs()
  console.log(dogs)
})

myEmitter.emit('event')
```

### Alternative scenario

If you still can't get it to work, it might be because `transporter.on` is not the same as `EventEmitter.on` - meaning it's a custom function provided by `transporter`. It could assume internally that the callback function provided is not a `Promise` - keep in mind that labelling a function as `async` forces the function to always implicitly return a `Promise`. If that's the case you might want to wrap the `async` function in an [IIFE](https://en.wikipedia.org/wiki/Immediately-invoked_function_expression).

```
// ..rest of code from above

myEmitter.on('event', () => {
  // wrap into an IIFE to make sure that the callback
  // itself is not transformed into a Promise
  (async function() {
    const dogs = await getDogs()
    console.log(dogs)
  })()
})

myEmitter.emit('event')
```
Angular2 Detect if element in template view has class We're using bootstrap, and sometimes it automatically adds classes to DOM elements. What is the best way to attach to these elements and detect when a particular css class is added to a component template child element? Say I have this component:

```
import { Component, ViewChild, ElementRef } from '@angular/core';
import { HeaderService } from './header.service';

@Component({
    selector: 'header-comp',
    templateUrl: './Home/RenderLayoutHeader'
})
export class HeaderLayoutComponent {
    constructor(private _headerService: HeaderService) { }
}
```

And this is a portion of my view template:

```
<header-comp>
    <li class="nav-header-icon-list-item">
        <div class="overlay-dropdown dropdown" id="patientDDL">
            <button class="btn btn-default dropdown-toggle session-menu-container" type="button" id="session-dropdown-menu" data-toggle="dropdown" data-backdrop="true" data-dismiss="modal" aria-haspopup="true" aria-expanded="false">
                <img data-initials="ER" src="https://lorempixel.com/40/40/abstract/" class="img-circle session-user-profile-img">
```

How do I detect in my component when bootstrap adds the "open" class to the #patientDDL element, and execute a function in my component? Thanks!
EDIT: I modified my component to this per Gunter's solution, but I'm getting a null reference when I don't precede the criteria with a null check:

```
import { Component, ViewChild, ElementRef, DoCheck } from '@angular/core';
import { HeaderService } from './header.service';

@Component({
    selector: 'header-comp',
    templateUrl: './Home/RenderLayoutHeader'
})
export class HeaderLayoutComponent implements DoCheck {
    @ViewChild('patientDDL') patientDropDownList: ElementRef;

    constructor(private _headerService: HeaderService) { }

    ngDoCheck() {
        console.log('ngDoCheck called');
        if (this.patientDropDownList && this.patientDropDownList.nativeElement.classList.contains('open')) {
            this._headerService.setPatientDDLOpen(true);
        } else {
            this._headerService.setPatientDDLOpen(false);
        }
    }
}
```

Also, the console statement is logged 4 times while the template loads, but then it is never invoked again, even after the class has been added/removed multiple times. This is Angular2 RC1, not sure if that is relevant.
Add a template variable to be able to query the element.

```
<div #patientDDL class="overlay-dropdown dropdown" id="patientDDL">
```

Query the element

```
@ViewChild('patientDDL') patientDDL: ElementRef;
```

Implement `ngDoCheck()` to run the check whether the class was added when change detection runs:

```
ngDoCheck() {
  if (this.patientDDL.nativeElement.classList.contains('open')) {
    this.doSomething();
  }
}
```

or on some specific event

```
@HostListener('click', ['$event'])
clickHandler(event) {
  if (this.patientDDL.nativeElement.classList.contains('open')) {
    this.doSomething();
  }
}
```
R data table recommended way to deal with date time I have a csv file with one column of timestamps like "2000-01-01 12:00:00.123456". What's the recommended way to deal with it in data.table? I need to handle grouping, matching/rolling joins with an IDate column from another table, time series plotting, etc.

```
IDateTime("2000-01-01 12:00:00.123456")
Error in if (any(neg)) res[neg] = paste("-", res[neg], sep = "") :
  missing value where TRUE/FALSE needed
```

I see this answer in the possible [duplicate](https://stackoverflow.com/a/14063077/1568919) question in which Matthew suggested manually casting dates into integers. But that's 3 years old and I wonder if there now exists a better way?
`IDateTime` requires a `POSIXct` class object in order to work properly (it seems to work properly with a `factor` conversion too, not sure why). I agree it isn't documented very well and it may be worth opening an FR/PR on GH regarding documentation - there is an open queue regarding an `IDateTime` [vignette](https://github.com/Rdatatable/data.table/issues/944) though. And there is already an [FR](https://github.com/Rdatatable/data.table/issues/1383) regarding allowing it to work with a `character` class.

```
IDateTime(as.POSIXct("2000-01-01 12:00:00.123456"))
#         idate    itime
# 1: 2000-01-01 12:00:00

## IDateTime(factor("2000-01-01 12:00:00.123456")) ## will also work
```

Pay attention to the `tz` parameter in `as.POSIXct` if you want to avoid unexpected behaviour

---

Regardless, it seems like the error is actually caused by the print method of `ITime`, which calls `format.ITime`, see [here](https://github.com/Rdatatable/data.table/blob/master/R/IDateTime.R#L89) and [here](https://github.com/Rdatatable/data.table/blob/master/R/IDateTime.R#L89) e.g., if you run `res <- IDateTime("2015-09-29 08:22:00")` this will *not* produce an error, though `res` will be `NA` due to a wrong conversion (I believe) in [here](https://github.com/Rdatatable/data.table/blob/master/R/IDateTime.R#L59-L68) (the format is only `"%H:%M:%OS"`). It seems like a bug to me, and I am still uncertain why the `factor` class works correctly when there is no `factor` method in `methods(as.ITime)`. Maybe it is due to its `integer` internal storage mode, which calls another related method.
UI Testing: Slider fails to adjust when nested in Table View Cell In UI Testing on Xcode 9.0 (9A235), interacting with `UISlider` with `adjust(toNormalizedSliderPosition:)` does not work at all when the `UISlider` is nested in a `UITableViewCell`. I have tested many different scenarios:

- Normal `UISlider`s not embedded in a `UITableView` work fine with the `adjust` method.
- A `UISlider` that co-exists with a `UITableView` but is not inside a Table View Cell continues to work with `adjust`.
- A `UISlider` in a `UITableView` can be uniquely identified.
- A `UISlider` identified in a `UITableView` can respond to simple events like `tap()`.
- A `UISlider` identified in a `UITableView` doesn't work with the `adjust` method at all, even when I modify the number of rows to 1.

The error message is:

> Failure fetching attributes for element pid: 24415, elementOrHash.elementID: 106102876671744.43: Error Domain=XCTDaemonErrorDomain Code=13 "Fetching value for attribute 5011 returned AX error -25205." UserInfo={NSLocalizedDescription=Fetching value for attribute 5011 returned AX error -25205.}

Related discussion I found online: <https://forums.developer.apple.com/thread/77445> I have uploaded [my code online](https://github.com/pt2277/XCTest_UITest_UISlider) too if anyone is interested in looking at it. I have already submitted a [bug report](http://www.openradar.me/radar?id=5059599031861248) to Apple regarding this. What I am asking is: does anyone know of a possible workaround I can use to adjust the `UISlider` values when the slider is nested in a `UITableViewCell`? Thanks!
You can use this as a workaround:

```
func testSliderAdjustsInTableView() {
    let app = XCUIApplication()
    app.staticTexts["Slider in Table View UI Test"].tap()

    let cell = app.tables.element.cells["cell 0"]
    let button = cell.buttons["Test Button"]
    button.tap()

    let slider = cell.sliders["value slider"]
    XCTAssert(slider.exists)

    let fiftyPercent = slider.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.5))
    let ninetyPercent = slider.coordinate(withNormalizedOffset: CGVector(dx: 0.9, dy: 0.5))
    fiftyPercent.press(forDuration: 0.1, thenDragTo: ninetyPercent)

    XCTAssertEqual(slider.value as! String, "90 %")
}
```
jQuery validation - changing value of max at runtime I have these validation rules:

```
$("#ed_details").validate({
    rules: {
        tagno: {
            required: true
        },
        itemid: {
            required: true
        },
        ralno: {
            required: true
        },
        feet: {
            required: true,
            number: true,
            min: 0
        },
        inches: {
            required: true,
            number: true,
            min: 0
        },
        cms: {
            required: true,
            number: true,
            min: 0
        },
        length: {
            required: true,
            number: true,
            max: $('#maxlength').val()
        },
        qty: {
            required: true
        }
    }
});
```

This is the field that length validates against. It is text only for testing purposes. It will be hidden once this is working.

```
<input type="text" name="maxlength" id="maxlength" value="<?=$maxL['ml']?>" />
```

This works fine unless I change the value of maxlength at runtime (which can happen). The user is selecting an item from a drop down. Each item has a maximum length of tubing that can be used with it. When the item changes, the value of maxlength does change on the screen. However, when the validation runs, I see an error message based on the original length (0 for a new item, or whatever the maxlength was at load for an edit).

```
Please enter a value less than or equal to 156.
```

I see this even when the maxlength field shows another value. If I use Firebug on the field, it shows the original value even after it has been changed. Also, changing the maxlength field by hand instead of via code does not make a difference. Any help is greatly appreciated.
You can **not** change `rules` like this...

```
length: {
    required: true,
    number: true,
    max: $('#maxlength').val()
},
```

That's because, like most jQuery plugins, `.validate()` is called only **one** time to *initialize* the plugin on your `form`... it does not run repeatedly, so dynamically changing the `rules` options within `.validate()` does nothing. Use the `rules` method to `add` and update your rules dynamically on the field being validated. (Use it along with the code that changes the value.)

```
$('[name="length"]').rules("add", {
    max: $('#maxlength').val()
});
```

See this very crude "proof of concept" demo below (room for you to rearrange). Test the form (the value is initially 5). Then click the button one time and the value and rule (along with the message) are updated correctly to "2". <http://jsfiddle.net/Y565M/> <http://docs.jquery.com/Plugins/Validation/rules#.22add.22rules>
Necessity of static block in Java I found that in Java, there is a feature called a `static block`, which includes code that is executed when a class is first loaded (I don't understand what 'loaded' means; does it mean initialized?). Is there any reason to do the initialization bit inside a static block and not in the constructor? I mean, even the constructor does the same thing: do all the necessary stuff when a class is first initialized. Is there anything that the static block accomplishes which a constructor can't?
I first want to highlight one thing from your question:

> the constructor does the same thing, do all the necessary stuff when a class is first initialized

This is incorrect. A constructor does all the initialization necessary when an *instance* of a class is created. No constructors execute when the class itself is first loaded into memory and initialized (unless an instance of the class happens to be created as part of the class initialization). This confusion (between initializing a class and initializing instances of the class) is probably why you are questioning the utility of `static` blocks. If a class has static members that require complex initialization, a `static` block is the tool to use. Suppose you need a static map of some kind (the purpose is irrelevant here). You can declare it in-line like this:

```
public static final Map<String, String> initials = new HashMap<String, String>();
```

However, if you want to populate it once, you can't do that with an in-line declaration. For that, you need a `static` block:

```
public static final Map<String, String> initials = new HashMap<String, String>();

static {
    initials.put("AEN", "Alfred E. Newman");
    // etc.
}
```

If you wanted to be even more protective, you can do this:

```
public static final Map<String, String> initials;

static {
    Map<String, String> map = new HashMap<String, String>();
    map.put("AEN", "Alfred E. Newman");
    // etc.
    initials = Collections.unmodifiableMap(map);
}
```

Note that you cannot initialize `initials` in-line as an unmodifiable map because then you couldn't populate it! You also cannot do this in a constructor because simply calling one of the modifying methods (`put`, etc.) will generate an exception. To be fair, this is not a complete answer to your question.
The `static` block could still be eliminated by using a private static function:

```
public static final Map<String, String> initials = makeInitials();

private static Map<String, String> makeInitials() {
    Map<String, String> map = new HashMap<String, String>();
    map.put("AEN", "Alfred E. Newman");
    // etc.
    return Collections.unmodifiableMap(map);
}
```

Note, though, that this is not replacing a `static` block with code in a constructor as you proposed! Also, this won't work if you need to initialize several `static` fields in an interrelated way. A case where a `static` block would be awkward to replace would be a "coordinator" class that needs to initialize several other classes exactly once, especially awkward if it involves dependency injection.

```
public class Coordinator {
    static {
        WorkerClass1.init();
        WorkerClass2.init(WorkerClass1.someInitializedValue);
        // etc.
    }
}
```

Particularly if you don't want to hard-wire any dependence into `WorkerClass2` on `WorkerClass1`, some sort of coordinator code like this is needed. This kind of stuff most definitely does not belong in a constructor. Note that there is also something called an *instance initializer block*. It is an anonymous block of code that is run when each instance is created. (The syntax is just like a `static` block, but without the `static` keyword.) It is particularly useful for anonymous classes, because they cannot have named constructors. Here's a real-world example. Since (unfathomably) `GZIPOutputStream` does not have a constructor or any API call with which you can specify a compression level, and the default compression level is none, you need to subclass `GZIPOutputStream` to get any compression. You can always write an explicit subclass, but it can be more convenient to write an anonymous class:

```
OutputStream os = ...;
OutputStream gzos = new GZIPOutputStream(os) {
    {
        // def is an inherited, protected field that does the actual compression
        def = new Deflater(9, true); // maximum compression, no ZLIB header
    }
};
```
Check if numerical values are valid integers I have written some basic code below which allows me to get user input for games they have played. The user will input as follows: "GameName:Score:Time". Once the user has inputted this, I then convert the time and score to integers, as they are inputted as a string. From this I need to ensure that the user has inputted a valid integer, and I'm not too sure how to do this.

```
import java.util.Scanner;
import java.io.IOException;
import java.text.ParseException;

public class REQ2 {
    public static void main(String[] args) throws ParseException {
        String playername;
        String line;
        String[] list = new String[100];
        int count = 0;
        int score;
        int time;
        int InvalidEntries;

        Scanner sc = new Scanner(System.in);
        System.out.println("Please enter your name");
        playername = sc.nextLine();
        if (playername.equals("")) {
            System.out.println("Player name was not entered please try again");
            System.exit(0);
        }

        System.out.println("Please enter your game achievements (Game name:score:time played) E.g. Minecraft:14:2332");
        while (count < 100) {
            line = sc.nextLine();
            if (line.equals("quit")) {
                break;
            }
            if (!(line.contains(":"))) {
                System.out.println("Please enter achievements with the proper \":\" separation\n");
                break;
            }
            list[count] = line;
            System.out.println("list[count]" + list[count]);
            count++;

            for (int i = 0; i < count; i++) {
                line = list[i];
                String[] elements = line.split(":");
                if (elements.length != 3) {
                    System.out.println("Error please try again, Please enter in the following format:\nGame name:score:timeplayed");
                    break;
                }
                score = Integer.parseInt(elements[1].trim());
                time = Integer.parseInt(elements[2].trim());
            }
        }
    }
}
```
The most powerful and flexible way is probably with regex: ``` final Pattern inputPattern = Pattern.compile("^(?<gameName>[^:]++):(?<score>\\d++):(?<time>\\d++)$"); final String line = sc.nextLine(); final Matcher matcher = inputPattern.matcher(line); if(!matcher.matches()) { throw new IllegalArgumentException("Invalid input"); //or whatever } final String gameName = matcher.group("gameName"); final int score = Integer.parseInt(matcher.group("score")); final int time = Integer.parseInt(matcher.group("time")); ``` This way your regex both *validates* and *parses* your input. The same is also possible with a `MessageFormat`: ``` final MessageFormat inputFormat = new MessageFormat("{0}:{1,number,integer}:{2,number,integer}"); final String line = sc.nextLine(); //throws an exception if input is invalid - possibly catch and report final Object[] input = inputFormat.parse(line); final String gameName = (String) input[0]; //the numbers are parsed as Longs final int score = (int) (long) input[1]; final int time = (int) (long) input[2]; ``` Finally, the most simple way to do this, and with minimal changes to your current code, is simply to catch the `NumberFormatException` that `parseInt` throws: ``` try { score = Integer.parseInt(elements[1].trim()); } catch(NumberFormatException ex) { //invalid input, emit error or exit } ```
How to save associated object in Grails? I am a Grails beginner. I have 2 domain classes ``` class Employee { String name String department static constraints = { } public String toString() { name } } class Address { String line1 String line2 Employee employee static belongsTo = Employee static constraints = { } } ``` where Address belongs to Employee, so I have given a `belongsTo` association. My Employee/create.gsp page takes input for the fields specified in Employee and Address, so on creation of an employee the address must get saved automatically. So what could the save action in EmployeeController look like? I have tried something like this but it did not work. ``` def save = { def employeeInstance = new Employee(params) def addressInstance = new Address(params) if (employeeInstance.save(flush: true)) { flash.message = "${message(code: 'default.created.message', args: [message(code: 'employee.label', default: 'Employee'), employeeInstance.id])}" redirect(action: "show", id: employeeInstance.id) } else { render(view: "create", model: [employeeInstance: employeeInstance]) } } ``` How do I save this associated model?
Here you have a one-to-one relationship - add an `address` property to the `Employee` class. ``` class Employee { String name String department Address address public String toString() { name } } ``` Change the `belongsTo` of the `Address` like this: ``` class Address { String line1 String line2 static belongsTo = [employee: Employee] } ``` Now you could create an `Employee` like this: ``` def employeeInstance = new Employee(params) employeeInstance.address = new Address(params) if (employeeInstance.save(flush: true)) { // your logic } ``` Read the [docs (one-to-one relationship)](http://grails.org/doc/latest/guide/GORM.html#manyToOneAndOneToOne) for further information.
Alternative Way To Approach SQL statement: If no rows exist, show all Wondering if anyone out there might know a trick for a small SQL script. <http://sqlfiddle.com/#!3/09638/3> I am looking to return only the rows that have a manual transmission and are Ford make. If no rows exist, then return all Ford make vehicles. I am currently doing it using an IF EXISTS condition. I considered using a temporary table to store the first set of data, then looking at the rowcount (rows inserted == 0) of the table to see if I needed to insert more data. There may be no other way to do it than the two options I described. Maybe the community has some thought on it. ### Example DDL ``` CREATE TABLE Cars ( Make varchar(50), Model varchar(50), ManualTransmission bit ); INSERT INTO Cars (Make, Model, ManualTransmission) VALUES ('Ford', 'Taurus', 0), ('Ford', 'Contour', 0), ('Ford', 'Mustang', 0), ('Jeep', 'Liberty', 1), ('Jeep', 'Cherokee', 0); ```
One way ``` WITH CTE AS (SELECT *, RANK() OVER (ORDER BY ManualTransmission DESC) AS Rnk FROM Cars WHERE Make = 'Ford') SELECT Make, Model, ManualTransmission FROM CTE WHERE Rnk = 1 ``` Or another ``` SELECT TOP 1 WITH TIES Make, Model, ManualTransmission FROM Cars WHERE Make = 'Ford' ORDER BY ManualTransmission DESC ``` Both of these answers exploit the fact that `ManualTransmission` is a `BIT` datatype and `1` is the maximum possible value it can have. If `ManualTransmission` is nullable then you would need to change them to ``` ORDER BY ISNULL(ManualTransmission,0) DESC ``` Or ``` ORDER BY CASE WHEN ManualTransmission = 1 THEN 0 ELSE 1 END ``` The `CASE` version would also be adaptable for more complex conditions.
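For comparison, the `IF EXISTS` approach mentioned in the question would look something like the sketch below (table and column names taken from the example DDL). Note that it states the filter twice and reads the table twice, which is why the single-query versions above are usually preferable:

```sql
IF EXISTS (SELECT 1 FROM Cars WHERE Make = 'Ford' AND ManualTransmission = 1)
    SELECT Make, Model, ManualTransmission
    FROM Cars
    WHERE Make = 'Ford' AND ManualTransmission = 1
ELSE
    SELECT Make, Model, ManualTransmission
    FROM Cars
    WHERE Make = 'Ford'
```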
Custom Authentication using Keycloak I've been trying to grasp Keycloak over the last weeks, so I can implement a way to authenticate users using a legacy provider (which is Oracle-based, with table of sessions and all sort of bizarre stuff). We plan to get rid of this in a near future, but for now we have to deal with it, so the idea is to use Keycloak on the front line - leveraging the main benefits it provides, like SSO - omitting the legacy auth provider from the apps that need authentication. I read a little about building custom OIDC Identity Providers but it looks cumbersome for such a simple auth method exchange. Is there a simpler way (rather than a new OIDC provider) of building a custom authentication provider using Keycloak? If so, could you give me an example or at least a deeper explanation? I found the Keycloak documentation very weak in terms of live examples, added to my lack of knowledge of Auth Protocols in general (which I'm already working on). Thanks in advance.
This is something my organization took on as well. It's tricky at best. This user storage SPI is the route we took. We implemented a basic approach at first, by reading from the remote database via API calls. In our case we had two databases to connect through and our SPI implementation would call both and unify any data they provided on the user. <https://www.keycloak.org/docs/latest/server_development/index.html#_user-storage-spi> > > You can use the User Storage SPI to write extensions to Keycloak to connect to external user databases and credential stores. The built-in LDAP and ActiveDirectory support is an implementation of this SPI in action. Out of the box, Keycloak uses its local database to create, update, and look up users and validate credentials. Often though, organizations have existing external proprietary user databases that they cannot migrate to Keycloak’s data model. For those situations, application developers can write implementations of the User Storage SPI to bridge the external user store and the internal user object model that Keycloak uses to log in users and manage them. > > > And then in this section: <https://www.keycloak.org/docs/latest/server_development/index.html#provider-capability-interfaces> > > If you have examined the UserStorageProvider interface closely you might notice that it does not define any methods for locating or managing users. These methods are actually defined in other capability interfaces depending on what scope of capabilities your external user store can provide and execute on. For example, some external stores are read-only and can only do simple queries and credential validation. You will only be required to implement the capability interfaces for the features you are able to. You can implement these interfaces: > > > What follows on that page is a listing of various other interfaces that you can implement based on your needs to provide the behavior you want. 
> > `org.keycloak.storage.user.UserLookupProvider` > > > This interface is required if you want to be able to log in with users > from this external store. Most (all?) providers implement this > interface. > > > `org.keycloak.storage.user.UserQueryProvider` > > > Defines complex queries that are used to locate one or more users. You > must implement this interface if you want to view and manage users > from the administration console. > > > `org.keycloak.storage.user.UserRegistrationProvider` > > > Implement this interface if your provider supports adding and removing > users. > > > `org.keycloak.storage.user.UserBulkUpdateProvider` > > > Implement this interface if your provider supports bulk update of a > set of users. > > > `org.keycloak.credential.CredentialInputValidator` > > > Implement this interface if your provider can validate one or more > different credential types (for example, if your provider can validate > a password). > > > `org.keycloak.credential.CredentialInputUpdater` > > > Implement this interface if your provider supports updating one or > more different credential types. > > > Recommendation: Clone the keycloak source code from their github repo for a better understanding of their code and how your implementation will interact with their existing framework. Primarily this is useful when debugging or figuring out "what the hell is happening that my code doesn't work the way I think it should". Usually it's because the way your implementation is used is different or you haven't taken caching into account. Cloning the source code is good because it also provides valuable examples of some basic implementations you can find in the examples folder. That's the best advice I can offer right now. Here is the native LDAP provider implementation: <https://github.com/keycloak/keycloak/blob/master/federation/ldap/src/main/java/org/keycloak/storage/ldap/LDAPStorageProvider.java> You can use that as reference. 
**Disclaimer**: We undertook this effort with version 3.4 of Keycloak. The documentation was a tiny bit different and there was another example at the time of this being done directly in the source. The latest master (version 8) looks similar at the interface level. At a high level the effort will look like this: 1. Implement the user storage provider interface. 2. Create a new authenticator, or override the existing Username authenticator, to customize any additional behavior after your user storage provider above has provided its custom functionality. You might not need step 2 if just implementing step 1 gets you what you want. Step 1 abstracts your custom database backend and reads the user information and provides it back to Keycloak as a `UserModel` that it'll import into its local database for use later.
Java: how does finally trump throw? I just ran into a hidden gem in one of our Java libraries: ``` for(Widget w : widgets) { if(shouldDoStuff()) { try{ // Do stuff to w. } catch(Exception e){ throw new RuntimeException("Couldn't do stuff."); } finally{ // Compiler warning: finally block does not complete normally continue; } } } ``` I know that `finally` trumps everything, but I'm wondering about 2 things: 1. What happens when the `catch` clause *does* execute? Does the exception get thrown or not? What happens first: the thrown exception or the `continue` statement? 2. How can I rewrite this to get rid of the warning? I found [this very similar question](https://stackoverflow.com/questions/5126455/in-java-what-if-both-try-and-catch-throw-same-exception-and-finally-has-a-return) but the accepted answer just states that the exception will be thrown **abruptly**, and I'm not sure what that means. Plus it doesn't really help me understand my first question above about the order of events that will transpire.
The `finally` block will be executed after your RuntimeException is thrown and before it is processed by any catch higher up in the stack. But because your `finally` block completes abruptly with `continue`, the pending exception is silently discarded and the loop simply moves on to the next iteration - that is exactly what the compiler warning is about. So the contradiction is between the throw in the catch, which would end the loop, and the continue, which wins and swallows the exception. One approach could be: ``` boolean exceptionOccurred = false; for(Widget w : widgets) { if(shouldDoStuff()) { try { // Do stuff to w. } catch(Exception e){ exceptionOccurred = true; // do not throw yet. e.printStackTrace(); } } } if (exceptionOccurred) { throw new RuntimeException("Couldn't do stuff."); } ``` The main concern with this approach is you don't know what went wrong.
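To make the swallowing behavior concrete, here is a small self-contained sketch (not taken from the original code) that you can run: the exception raised in the catch never escapes, because the `continue` in the `finally` discards it on every iteration.

```java
public class FinallySwallow {

    // Counts how many loop iterations run to completion. The RuntimeException
    // thrown in the catch block is discarded on every iteration, because the
    // finally block completes abruptly via continue (hence the compiler warning).
    static int runLoop() {
        int completed = 0;
        for (int i = 0; i < 3; i++) {
            try {
                throw new RuntimeException("Couldn't do stuff.");
            } catch (Exception e) {
                throw new RuntimeException("rethrown - but never seen by callers");
            } finally {
                completed++;
                continue; // discards the pending exception
            }
        }
        return completed;
    }

    public static void main(String[] args) {
        // No exception escapes: all three iterations complete.
        System.out.println(FinallySwallow.runLoop()); // prints 3
    }
}
```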
How to know if a flutter image has been seen by the user I have an `Image` component in a scrollable screen. At the beginning, when the screen opens, the image cannot be seen; you need to scroll down to view it. How can you make sure the image has been completely seen by the user after they have scrolled to it? I want to count the image impressions of users. How do you achieve this in Flutter?
I didn't have much information about your code, so this is how I solved it. The impression is only counted when the image is completely visible on the screen, you can change that using `_count =` expression. And I used simple `Container` for `Image`. Take a look at this screenshot first. [![enter image description here](https://i.stack.imgur.com/UUHs6.gif)](https://i.stack.imgur.com/UUHs6.gif) --- **Code** ``` void main() => runApp(MaterialApp(home: HomePage()),); class HomePage extends StatefulWidget { @override _HomePageState createState() => _HomePageState(); } class _HomePageState extends State<HomePage> { ScrollController _scrollController; double _heightListTile = 56, _heightContainer = 200, _oldOffset = 0, _heightBox, _initialAdd; int _initialCount, _count, _previousCount = 0, _itemsInList = 4; @override void initState() { super.initState(); _heightBox = ((_itemsInList) * _heightListTile) + _heightContainer; _scrollController = ScrollController(); _scrollController.addListener(() { double offset = _scrollController.offset; if (offset >= _oldOffset) { _oldOffset = offset; _count = _initialCount + (offset + _initialAdd) ~/ _heightBox; if (_count != _previousCount) setState(() {}); _previousCount = _count; } }); Timer.run(() { bool isIos = Theme.of(context).platform == TargetPlatform.iOS; var screenHeight = MediaQuery.of(context).size.height - (isIos ? 100 : 80); // for non notches phone use 76 instead of 100 (it's the height of status and navigation bar) _initialCount = screenHeight ~/ _heightBox; _initialAdd = screenHeight % _heightBox; }); } @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar(title: Text(_count == null ? 
"Let's count" : "Images shown = ${_count}")), body: ListView.builder( itemCount: 100, controller: _scrollController, itemBuilder: (context, index) { if (index == 0) return Container(); if (index != 0 && index % (_itemsInList + 1) == 0) { return Container( height: _heightContainer, alignment: Alignment.center, color: Colors.blue[(index * 20) % 1000], child: Text("Image #${(index + 1) ~/ 5}"), ); } return SizedBox(height: _heightListTile, child: ListTile(title: Text("Item ${index}"))); }, ), ); } } ```
Ruby - passing by value Is there a way to pass objects by value and not by reference in Ruby? For Example, ``` class Person attr_accessor :name end def get_name(obj) obj.name = "Bob" puts obj.name end jack = Person.new jack.name = "Jack" puts jack.name get_name(jack) puts jack.name ``` the output should be ``` Jack Bob Jack ``` instead of ``` Jack Bob Bob ``` Any help would be appreciated.
**No.** Ruby [passes by reference, not value](https://stackoverflow.com/questions/1872110/is-ruby-pass-by-reference-or-by-value). If you need to simulate passing by value, you can use Ruby's `Object#clone` method. In this case, you'd do something like this: ``` def get_name(obj) new_object = obj.clone new_object.name = "Bob" puts new_object.name end ``` This makes a **shallow copy** of an object. In other words, an object's instance variables are copied, but the objects the variables reference aren't copied. If you need to do a **deep copy**, you can read [this Stack Overflow post](https://stackoverflow.com/questions/8206523/how-to-create-a-deep-copy-of-an-object-in-ruby). Ruby doesn't have a one-method way to perform deep copies, but that post describes how to use **marshalling** and **unmarshalling** to make a deep copy. `clone` and `dup` work very similarly, but there are some differences. According to the docs: ## [Object#clone](http://ruby-doc.org/core-2.0.0/Object.html#method-i-clone) > > Produces a shallow copy of *obj*β€”the instance variables of *obj* are copied, but not the objects they reference. Copies the frozen and tainted state of *obj*. See also the discussion under [Object#dup](http://ruby-doc.org/core-2.0.0/Object.html#method-i-dup). > > > ## [Object#dup](http://ruby-doc.org/core-2.0.0/Object.html#method-i-dup) > > Produces a shallow copy of *obj*β€”the instance variables of *obj* are copied, but not the objects they reference. dup copies the tainted state of *obj*. See also the discussion under [Object#clone](http://ruby-doc.org/core-2.0.0/Object.html#method-i-clone). In general, clone and dup may have different semantics in descendant classes. While clone is used to duplicate an object, including its internal state, dup typically uses the class of the descendant object to create the new instance. > > > This method may have class-specific behavior. If so, that behavior will be documented under the #initialize\_copy method of the class. 
> > > You can take a look at the [dup](http://ruby-doc.org/core-2.0.0/Object.html#method-i-dup) and [clone](http://ruby-doc.org/core-2.0.0/Object.html#method-i-clone) docs. ## Edit While my answer probably gives what the OP was looking for, it is not strictly correct, with respect to the semantics of passing by reference or value. See the other answers and comments on this page for some more discussion. You can also look at the discussion in the comments [here](https://stackoverflow.com/questions/1872110/is-ruby-pass-by-reference-or-by-value) and [this post](http://ryanverner.com/post/33837637667/ruby-pass-by-reference-or-pass-by-value) for more information.
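As a small illustration of the shallow-copy behaviour (reusing the `Person` class from the question): `clone` copies the instance variable *references*, so until you reassign `name`, both objects share the very same string.

```ruby
class Person
  attr_accessor :name
end

jack = Person.new
jack.name = "Jack"

# Reassigning on the copy leaves the original untouched:
copy = jack.clone
copy.name = "Bob"
puts jack.name   # => Jack
puts copy.name   # => Bob

# But the copy is shallow: before any reassignment, both objects
# reference the same string, so in-place mutation leaks through.
other = jack.clone
other.name << " Sparrow"   # mutates the shared string object
puts jack.name   # => Jack Sparrow
```

This is exactly why a deep copy (via marshalling) is needed when the copied object's referenced values will be mutated in place.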
C# Casting MemoryStream to FileStream My code is this: ``` byte[] byteArray = Encoding.ASCII.GetBytes(someText); MemoryStream stream = new MemoryStream(byteArray); StreamReader reader = new StreamReader(stream); FileStream file = (FileStream)reader.BaseStream; ``` Later I'm using file.Name. I'm getting an InvalidCastException: it displays as follows > > Unable to cast object of type 'System.IO.MemoryStream' to type 'System.IO.FileStream'. > > > I read somewhere that I should just change FileStream to Stream. Is there something else I should do?
A `MemoryStream` is not associated with a file, and has no concept of a filename. Basically, you can't do that. You certainly can't cast between them; you can only cast upwards and downwards - not sideways; to visualise: ``` Stream | --------------- | | FileStream MemoryStream ``` You can cast a `MemoryStream` to a `Stream` trivially, and a `Stream` to a `MemoryStream` via a type-check; but never a `FileStream` to a `MemoryStream`. That is like saying a dog is an animal, and an elephant is an animal, so we can cast a dog to an elephant. You *could* subclass `MemoryStream` and add a `Name` property (that you supply a value for), but there would still be no commonality between a `FileStream` and a `YourCustomMemoryStream`, and `FileStream` doesn't implement a pre-existing interface to get a `Name`; so the caller would have to **explicitly** handle both separately, or use duck-typing (maybe via `dynamic` or reflection). Another option (perhaps easier) might be: write your data to a temporary file; use a `FileStream` from there; then (later) delete the file.
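If you do take the subclassing route described above, a minimal sketch could look like this (`NamedMemoryStream` is a made-up name, not a framework type, and callers must still handle it separately from `FileStream`):

```csharp
using System;
using System.IO;
using System.Text;

// Hypothetical subclass: a MemoryStream that carries a caller-supplied name.
public class NamedMemoryStream : MemoryStream
{
    public string Name { get; }

    public NamedMemoryStream(byte[] buffer, string name) : base(buffer)
    {
        Name = name;
    }
}

public static class Demo
{
    public static void Main()
    {
        var stream = new NamedMemoryStream(Encoding.ASCII.GetBytes("some text"), "in-memory.txt");
        using (var reader = new StreamReader(stream))
        {
            Console.WriteLine(stream.Name);        // in-memory.txt
            Console.WriteLine(reader.ReadToEnd()); // some text
        }
    }
}
```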
sweetalert2 multiple swal at the same function I'd like to make a condition and call a swal for each one (Sweetalert2). But only one of the swal runs. How can I do it? ``` function validateEmail(email) { var regex = /\S+@\S+\.\S+/; return regex.test(email); } function validateBirth(data) { var regex = /^([0-9]{2})\/([0-9]{2})\/([0-9]{4})$/; return regex.test(data); } function validacao() { var data = document.getElementById('birth').value; var email = document.getElementById('email').value; if (!validateBirth(data)) { swal( 'title..', 'text..', 'type..' ); } if (!validateEmail(email)) { swal( 'title..', 'text..', 'type..' ); } } ```
**Update 2021:** Just make your function `async` and `await` promises from `Swal.fire()`: ``` async function validacao() { var data = document.getElementById('birth').value; var email = document.getElementById('email').value; if (!validateBirth(data)) { await Swal.fire( 'title..', 'text..', 'type..' ); } if (!validateEmail(email)) { await Swal.fire( 'title..', 'text..', 'type..' ); } } ``` --- **Old answer which will not work for latest versions of SweetAlert2:** There's [`swal.queue()`](https://sweetalert2.github.io/#chaining-modals), use it for multiple modals. Your case should look like this: ``` var modals = []; // birth modal if (!validateBirth(data)) { modals.push({title: 'title1', text: 'text1', ... }); } // email modal if (!validateEmail(email)) { modals.push({title: 'title2', text: 'text2', ... }); } Swal.queue(modals); ```
How does Rails CSRF protection work? Rails raises an `InvalidAuthenticityToken` when the CSRF token doesn't match. But, from reading the [source](https://github.com/rails/rails), I can't figure out how this actually happens. I start by acking the tree for that class: ``` $ ack --ignore-dir=test InvalidAuthenticityToken actionpack/lib/action_controller/metal/request_forgery_protection.rb 4: class InvalidAuthenticityToken < ActionControllerError #:nodoc: 17: # which will check the token and raise an ActionController::InvalidAuthenticityToken actionpack/lib/action_dispatch/middleware/show_exceptions.rb 22: 'ActionController::InvalidAuthenticityToken' => :unprocessable_entity ``` Only two hits, ignoring the comment. The first one is the class definition: ``` class InvalidAuthenticityToken < ActionControllerError #:nodoc: end ``` The second one is translating the exception into an HTTP status code. CSRF protection gets enabled by calling `protect_from_forgery` in the controller, so let's look at that: ``` def protect_from_forgery(options = {}) self.request_forgery_protection_token ||= :authenticity_token before_filter :verify_authenticity_token, options end ``` It adds a filter: ``` def verify_authenticity_token verified_request? || handle_unverified_request end ``` Which calls this when verification fails: ``` def handle_unverified_request reset_session end ``` So how is `InvalidAuthenticityToken` actually raised?
The behavior was [changed fairly recently](https://github.com/rails/rails/commit/ae19e4141f27f80013c11e8b1da68e5c52c779ea) but the documentation has yet to be updated. The new approach being used is to presume the session has been hijacked, and therefore to clear the session. Assuming your session contains the all-important authentication information for this request (like the fact you're logged in as `alice`) and your controller assures the user is authenticated for this action, your request will be redirected to a login page (or however you choose to handle a non logged-in user). However, for requests which are not authenticated, like a sign-up form, the request would go through using an empty session. It seems this commit also goes on to close a [CSRF vulnerability](http://weblog.rubyonrails.org/2011/2/8/csrf-protection-bypass-in-ruby-on-rails), but I didn't read into the details of that. To obtain the old behavior, you would simply define this method: ``` def handle_unverified_request raise(ActionController::InvalidAuthenticityToken) end ``` You can read more about CSRF and other Rails security issues at the [Ruby on Rails Security Guide](http://guides.rubyonrails.org/security.html#cross-site-request-forgery-csrf).
How to implement HttpContext in .net core 2.0? I have been trying to implement the HTTP context in ASP.NET Core. In ASP.NET MVC on System.Web, I could just use HttpContext.Current to access the context. Anyway, I started by injecting the IHttpContextAccessor in the ConfigureServices method of the Startup class. I am posting this to see if anyone has implemented this using .Net Core 2.0. If so, please feel free to share the knowledge. Thanks in advance. ``` services.AddSingleton<HttpContextAccessor, HttpContextAccessor>(); ```
If you need it from a Controller, then just use `HttpContext`, as Muqeet's answer says. If you do need to inject it, then you're on the right track with `AddSingleton`, but you have a slight typo: ``` services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>(); ``` Notice the `I` in the first type. Then set up your constructor: ``` public class UserService : IUserService { private readonly HttpContext _context; public UserService(IHttpContextAccessor httpContextAccessor) { _context = httpContextAccessor.HttpContext; } } ``` Then you can use `_context` anywhere else in your class. That's the basics of dependency injection. `AddSingleton` makes it available for injection. Then you ask for it where you need it by adding it to your constructor. That article that Maarten linked to in his comment explains in more detail.
How to compile "Hello World" in C++ with Ninja? I'm new with Ninja. Still don't know how to use it. I created a simple `hello.cpp` file: ``` #include <iostream> int main() { std::cout << "Hello World!" << std::endl; return 0; } ``` I am using Linux Ubuntu. I have installed CMake with: `apt install cmake` and I have installed ninja: `apt-get install ninja-build` But now what should I do to compile my `hello.cpp` file with Ninja? I tried to run `ninja` but I'm getting error about `rules.ninja`: ``` ninja: error: build.ninja:30: loading 'rules.ninja': No such file or directory ``` I don't know how to create `rules.ninja` and how to configure it, and I don't know if I miss more things.
Assuming here that you do not have a `CMakeLists.txt` file at all. To compile this program, you first need to create a `CMakeLists.txt` file. This file is used by CMake to configure the project. `CMakeLists.txt` (place it in the same folder as your source files): ``` cmake_minimum_required(VERSION 3.8) project(my_exe) set(CMAKE_CXX_STANDARD 14) # Try 11 if your compiler does not support C++14 add_executable(my_exe hello.cpp) ``` Then you need to invoke CMake (in the terminal, go to the folder containing the `CMakeLists.txt` file) and later build the project. First, you should create a build directory. This is handy since you don't want to mix build output with your project files. ``` mkdir build cd build ``` Then, you invoke CMake and tell it to generate a Ninja build system (`-GNinja`), while at the same time telling it where the `CMakeLists.txt` file is located (`..`), which is one level above the `build` folder: ``` cmake -GNinja .. ``` Now, you are ready to build the executable: ``` ninja ``` Done. You should now have an executable named `my_exe` in your `build` folder.
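As an aside, the `rules.ninja` error in the question comes from a stale or hand-written `build.ninja` that includes a missing file; CMake generates both files for you. If you wanted to skip CMake entirely, a minimal hand-written `build.ninja` for this one source file could look like the sketch below (the `g++` compiler and rule names are assumptions; adjust to your toolchain):

```ninja
cxx = g++

rule cxx_compile
  command = $cxx -c $in -o $out
  description = CXX $out

rule cxx_link
  command = $cxx $in -o $out
  description = LINK $out

build hello.o: cxx_compile hello.cpp
build hello: cxx_link hello.o

default hello
```

With that file next to `hello.cpp`, running `ninja` in the same directory builds the `hello` executable.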
c# protobuf reflection usage I am using the official C# protobuf (not protobuf-net). Does it support creating a message object according to its type? The typical deserialization looks like: ``` MyProtoMessageClass obj = MyProtoMessageClass.Parser.ParseFrom(byteArray); ``` But how to generate the instance according to a string which is ``` "MyProtoMessageClass" ``` or an obj of `Google.Protobuf.Reflection.MessageDescriptor` which is ``` MyProtoMessageClass.Descriptor ``` ? **UPDATE** ``` delegate void handler(object data); class Wrapper { public handler h; public global::Google.Protobuf.IMessage m; } Dictionary<ushort, Wrapper> dict = new Dictionary<ushort, Wrapper>(); // register class HandlerClass { public void handle(object o) { ProtoMessageClass data = (ProtoMessageClass)o; // use data } } h = HandlerClassObj.handle; m = new ProtoMessageClass(); dict[1] = new Wrapper{h = h, m = m}; // call ushort cmd = 1;// from socket byte[] dataRecv; // from socket var w = dict[cmd]; Google.Protobuf.IMessage msg = w.m.Descriptor.Parser.ParseFrom(dataRecv); w.h.Invoke(msg); ```
Assume we got this proto definition: ``` syntax = "proto3"; package tutorial; option csharp_namespace = "T1.Models"; message Person { int32 id = 1; string name = 2; } ``` Compiling this proto file, we get a class called `Person` that implements `Google.Protobuf.IMessage`. This interface contains a property `MessageDescriptor Descriptor { get; }`, which is implemented by the class `Person` and returns a public static property of type `MessageDescriptor`. The `MessageDescriptor` exposes a property called `Parser`, and we can call `ParseFrom(byteArray)` on it. The code: ``` var typ = Assembly.GetExecutingAssembly().GetTypes().First(t => t.Name == "Person"); //get the type using the string we got, here it is 'Person' var descriptor = (MessageDescriptor)typ.GetProperty("Descriptor", BindingFlags.Public | BindingFlags.Static).GetValue(null, null); // get the static property Descriptor var person = descriptor.Parser.ParseFrom(byteArray); // parse the byte array to Person ```
Move TRichEdit Caretpos Is there a way to change the caret position in pixels? I would like to move the caret position every time I move the mouse, like: OnMouseMove: MoveCaretPos(X, Y);
No, you cannot set the position of the caret in a specific point, instead you must set the caret in a character position. To do this you must use the [`EM_CHARFROMPOS`](http://msdn.microsoft.com/en-us/library/bb761566%28v=vs.85%29.aspx) message to retrieve the closest character to a specified point and then set the value returned to the [`SelStart`](http://docwiki.embarcadero.com/VCL/en/StdCtrls.TCustomEdit.SelStart) property. Check this sample ``` procedure TForm1.RichEdit1MouseMove(Sender: TObject; Shift: TShiftState; X, Y: Integer); var APoint : TPoint; Index : Integer; begin APoint := Point(X, Y); Index := SendMessage(TRichEdit(Sender).Handle,EM_CHARFROMPOS, 0, Integer(@APoint)); if Index<0 then Exit; TRichEdit(Sender).SelStart:=Index; end; ```
Using AutoMapper with Data Reader I went through [How can I easily convert DataReader to List<T>?](https://stackoverflow.com/questions/1464883/how-can-i-easily-convert-datareader-to-listt) I wanted to implement something like what is accepted as an answer in the above link. Scenario: I am using OdbcDataReader to retrieve from the database. And I have a model class. FYI, the properties of this class are exact replicas of the column names from the database. I need to map these columns to the properties and return a List. Can this be accomplished using AutoMapper?
Something like this ``` public List<T> ReadData<T>(string queryString) { using (var connection = new SqlConnection(constr)) using (var command = new SqlCommand(queryString, connection)) { connection.Open(); using (var reader = command.ExecuteReader()) if (reader.HasRows) return Mapper.DynamicMap<IDataReader, List<T>>(reader); } return null; } ``` Define your class ``` public class MarkType { public int id { get; set; } public string name { get; set; } public DateTime inserted { get; set; } } ``` Use ``` List<MarkType> lst = _helper.ReadData<MarkType>("SELECT [id],[name],[inserted] FROM [marktype]"); ```
What is the significance of using checked here Can anyone elaborate on the following statement: ``` byte[] buffer = new Byte[checked((uint)Math.Min(32 * 1024, (int)objFileStream.Length))]; ``` Why should I not use ``` byte[] buffer = new Byte[32 * 1024]; ```
The attempt was to throw an exception if `objFileStream.Length` returns a number greater than `int.MaxValue` (2147483647), because `Length` on `Stream` returns the `long` type (I assume `objFileStream` is a stream). In **.net** arithmetic overflow is not checked by default. The next code demonstrates this case: ``` long streamLength = long.MaxValue; //suppose buffer length is big var res = checked( (int)(streamLength + 1) ); //exception will be thrown Console.WriteLine( res ); //will print 0 if you remove the checked keyword ``` After short analysis, you can reduce the next statement ``` new Byte[checked((uint)Math.Min(32 * 1024, (int)objFileStream.Length))]; ``` to ``` new Byte[Math.Min(32 * 1024, checked((int)objFileStream.Length))]; ``` Personal recommendation: I don't see how `OverflowException` will help you here. `Math.Min` will ensure that the array will be created no longer than `32768` items. If you try to `catch` somewhere in the calling method, you won't be able to deduce what the reason of that error is; it may come from anywhere in the call stack. So you probably need not always allocate an array of size 32768 as you proposed ``` byte[] buffer = new Byte[32 * 1024]; ``` but still use `Math.Min` so that you'll save memory if `objFileStream.Length` returns a small number ``` byte[] buffer = new Byte[Math.Min(32 * 1024, objFileStream.Length)]; ```
Using RegisterInitializer to wire event handlers I have a WCF service that uses Simple Injector for dependency injection. I want to wire up some event handlers in the container bootstrapper. I have created an interface `IStatusChangeNotification`: ``` public interface IStatusChangeNotification { event EventHandler<int> JobStatusChange; } ``` My `CommandHandler` implements `IStatusChangeNotification` and there are two event handler classes `EmailNotification` and `MmrNotification`, each defining a `Notify()` method. Then in my bootstrap code I have the following: ``` container.Register<EmailNotification>(); container.Register<MmrNotification>(); container.RegisterManyForOpenGeneric(typeof(ICommandHandler<>), Assembly.GetExecutingAssembly()); container.RegisterInitializer<IStatusChangeNotification>(scn => { scn.JobStatusChange += container.GetInstance<EmailNotification>().Notify; scn.JobStatusChange += container.GetInstance<MmrNotification>().Notify; }); ``` This works and the notifications are received. My question is whether this is the correct/best approach for wiring up event handlers? How do I remove the handlers at the end of the request and will failing to remove them result in a memory leak?
Although your approach might work, I think this part of your system design might deserve the same amount of attention as your command handlers do. The most common reason for command handlers to trigger events is to publish events that describe some business-related action. So instead of using .NET events, model those domain events the same way as you model your commands: ``` // Abstractions public interface IEventHandler<TEvent> where TEvent : IDomainEvent { void Handle(TEvent e); } public interface IEventPublisher { void Publish<TEvent>(TEvent e) where TEvent : IDomainEvent; } // Events public class JobStatusChanged : IDomainEvent { public readonly int JobId; public JobStatusChanged(int jobId) { this.JobId = jobId; } } // Container-specific Event Publisher implementation public class SimpleInjectorEventPublisher : IEventPublisher { private readonly Container container; public SimpleInjectorEventPublisher(Container container) { this.container = container; } public void Publish<TEvent>(TEvent e) where TEvent : IDomainEvent { var handlers = container.GetAllInstances<IEventHandler<TEvent>>(); foreach (var handler in handlers) { handler.Handle(e); } } } ``` With the previous infrastructure, you can create the following event and command handlers: ``` // Event Handlers public class EmailNotificationJobStatusChangedHandler : IEventHandler<JobStatusChanged> { public void Handle(JobStatusChanged e) { // TODO: Implementation } } public class MmrNotificationJobStatusChangedHandler : IEventHandler<JobStatusChanged> { public void Handle(JobStatusChanged e) { // TODO: Implementation } } // Command Handler that publishes public class ChangeJobStatusCommandHandler : ICommandHandler<ChangeJobStatus> { private readonly IEventPublisher publisher; public ChangeJobStatusCommandHandler(IEventPublisher publisher) { this.publisher = publisher; } public void Handle(ChangeJobStatus command) { // change job status this.publisher.Publish(new JobStatusChanged(command.JobId)); } } ``` Now you can register your
command handlers and event handlers as follows: ``` container.RegisterManyForOpenGeneric(typeof(ICommandHandler<>), Assembly.GetExecutingAssembly()); // This registers a collection of eventhandlers with RegisterAll, // since there can be multiple implementations for the same event. container.RegisterManyForOpenGeneric(typeof(IEventHandler<>), container.RegisterAll, Assembly.GetExecutingAssembly()); ``` This removes the need to register each event handler class separately, since they are simply implementations of `IEventHandler<JobStatusChanged>` and can all be batch-registered in one line of code. There's also no need to use `RegisterInitializer` to hook any events using custom defined interfaces. Other advantages of this are: - The dependency between a command handler and the `IEventPublisher` interface makes it very clear that this command is publishing events. - The design is much more scalable, since it's less likely for the composition root to have to change when new commands and events are added to the system. - It does your domain much good, since each event gets its own entity in the system. - It will be much easier to change the way events are processed, since that's now an implementation detail of the `SimpleInjectorEventPublisher`. For instance, you can decide to run them in parallel, run them in their own transaction, process them later (by storing them in an event store).
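Language aside, the publish/subscribe mechanics above can be sketched in a few lines of Python (class and event names mirror the answer, but the DI container is replaced by a plain dict registry, so this is an illustration of the pattern, not of Simple Injector itself):

```python
from collections import defaultdict

class EventPublisher:
    """Minimal event publisher: handlers register per event type,
    publish() dispatches an event instance to every matching handler."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event):
        for handler in self._handlers[type(event)]:
            handler(event)

class JobStatusChanged:
    def __init__(self, job_id):
        self.job_id = job_id

notified = []
publisher = EventPublisher()
publisher.subscribe(JobStatusChanged, lambda e: notified.append(("email", e.job_id)))
publisher.subscribe(JobStatusChanged, lambda e: notified.append(("mmr", e.job_id)))
publisher.publish(JobStatusChanged(42))
```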
How to know which shared library my program is actually using at run time? How do I figure out the path of a shared library that my program is using at run time? I have glibc 2.12 as the primary glibc running on my CentOS 6.10 system, and have also [installed](https://stackoverflow.com/questions/35616650/how-to-upgrade-glibc-from-version-2-12-to-2-14-on-centos) glibc 2.14 in `/opt/glibc-2.14`. When I inspect my executable file with ``` $ objdump -p ./myProgram ``` it gives this info ``` Dynamic Section: NEEDED libpthread.so.0 NEEDED libcurl.so.4 NEEDED libc.so.6 ``` and my `LD_LIBRARY_PATH` has this value `/opt/glibc-2.14/lib`. Is there away to see which `libc.so.6` library (perhaps with the path to the library file) my program is actually using **while it is running**?
**On Linux**: One possible approach is to look into the corresponding entry in the `/proc/` filesystem. For example for a program with PID `X` you can find info in `/proc/X/maps` similar to: ``` ... 7f34a73d6000-7f34a73f8000 r--p 00000000 08:03 18371015 /nix/store/681354n3k44r8z90m35hm8945vsp95h1-glibc-2.27/lib/libc-2.27.so 7f34a73f8000-7f34a7535000 r-xp 00022000 08:03 18371015 /nix/store/681354n3k44r8z90m35hm8945vsp95h1-glibc-2.27/lib/libc-2.27.so ... ``` It clearly shows where my libc (the one used by this program) is. --- [Example](https://ideone.com/f22sVw) (missing some error handling!) to show where `fopen` is coming from: ``` #include <stdio.h> #include <sys/types.h> #include <unistd.h> #include <string.h> #include <stdint.h> #define BSIZE 200 int main(void) { char buffer[BSIZE]; int const pid = getpid(); snprintf(buffer, BSIZE, "/proc/%d/maps", pid); FILE * const maps = fopen(buffer, "r"); while (fgets(buffer, BSIZE, maps) != NULL) { unsigned long from, to; int const r = sscanf(buffer, "%lx-%lx", &from, &to); if (r != 2) { puts("!"); continue; } if ((from <= (uintptr_t)&fopen) && ((uintptr_t)&fopen < to)) { char const * name = strchr(buffer, '/'); if (name) { printf("%s", name); } else { puts("?"); } } } fclose(maps); } ```
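The address-range check in the C example can also be sketched in Python; here's a hedged version that parses a single sample maps line (path and addresses are illustrative, not from a real process) and tests whether an address falls inside the mapping:

```python
import re

# One line in the format of /proc/<pid>/maps (sample data).
sample = ("7f34a73d6000-7f34a73f8000 r--p 00000000 08:03 18371015 "
          "/usr/lib/libc-2.27.so")

def parse_maps_line(line):
    """Extract the address range and mapped path from a maps line."""
    m = re.match(r"([0-9a-f]+)-([0-9a-f]+)\s+\S+\s+\S+\s+\S+\s+\S+\s+(\S+)", line)
    if not m:
        return None
    return int(m.group(1), 16), int(m.group(2), 16), m.group(3)

start, end, path = parse_maps_line(sample)
addr = 0x7F34A73D7000          # some address we want to locate
in_range = start <= addr < end  # True if addr lives in this mapping
```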
Typecasting or initialization, which is better in Swift? Since Swift does a strict check for types, sometimes we need to convert between NSString and String, NSArray and [AnyObject], etc. But in fact there are two different ways to do this: 1. Typecasting, such as `str as String` 2. Initialization, such as `String(str)` Which is more recommended in Swift? Why?
You are talking about casting vs. coercion. It isn't a matter of "recommended". They are completely different things. - *Casting* is merely a way of redescribing something to the compiler. Your redescription must be true already, or you will crash at runtime. For example, you can cast an AnyObject to a String (with `as!`) just in case it *is* a string; you are simply telling the compiler about this, so that you can then send String-related messages to this object. The cast is legal because String is bridged to NSString and an NSString is an AnyObject, so *this* AnyObject might indeed be a String. But if you lie and this is *not* a string, you'll crash later when the app runs and you try to cast to a String something that is not in fact already a String. - *Coercion* makes a new object entirely. It works only if the new type has an *initializer* that accepts the old object. For example, you cannot cast between numeric types in Swift. You have to coerce, which is a completely different thing - that is, you must make a new object of a different numeric type, based on the original object. The only way to use an Int8 where a UInt8 is expected is to coerce it: `UInt8(x)`. And this is legal because UInt8 has an Int8 initializer (as you can see in the Swift header): ``` extension UInt8 { public init(_ v: Int8) // ... } ```
D3D11\_CREATE\_DEVICE\_DEBUG on Windows 8.1 I am trying to create a dx11 device using D3D11CreateDeviceAndSwapChain on Windows 8.1. This was previously working on Windows 8 before I installed 8.1. However now the device creation fails when I pass the D3D11\_CREATE\_DEVICE\_DEBUG flag. The enum documentation <http://msdn.microsoft.com/en-us/library/windows/desktop/ff476107(v=vs.85).aspx> states that you need D3D11\_1SDKLayers.dll installed but shouldn't this be installed already as part of the Windows 8 SDK? I've even tried re installing the Windows 8 SDK but this hasn't helped. Any one have any ideas?
Using the Windows 8.1 SDK Layers (which are used when you pass the D3D11\_CREATE\_DEVICE\_DEBUG) requires that the Windows 8.1 SDK be installed on your system. The Windows 8 SDK Layers (found in the Windows 8 SDK) are not compatible with 8.1. You can find the 8.1 SDK on the link below. <http://go.microsoft.com/fwlink/?LinkID=294834> Also, the Windows Graphics development team closely monitors the MSDN Forum "Building Windows Store games with DirectX", which can be found at: <http://social.msdn.microsoft.com/Forums/windowsapps/en-us/home?category=windowsapps%2Cwindowsapps81preview> So feel free to use that as a resource if you need help.
MediaQuery.of() called with a context that does not show a MediaQuery error shown even though MediaQuery not used in app I am a beginner in Flutter and I am trying to make a simple app with an AppBar with a title showing the text "Hi there" When executing the code via Android Studio, I am always receiving the error ``` MediaQuery.of() called with a context that does not contain a MediaQuery. ``` Why is this error called even though I don't call MediaQuery.of()? How do I solve this problem I first thought this was an issue with flutter or Android Studio. I used flutter doctor and everything was fine. The exact code is shown below: ``` [√] Flutter (Channel stable, v1.2.1, on Microsoft Windows [Version 10.0.17134.706], locale en-US) [√] Android toolchain - develop for Android devices (Android SDK version 28.0.3) [√] Android Studio (version 3.4) [√] VS Code (version 1.33.1) [√] Connected device (1 available) β€’ No issues found! ``` I realized that the Flutter framework itself probably wasn't a problem. I then decided to restart Android Studio because that itself could solve the problem on its own. Unfortunately, the pesky error had still shown Here is my code. ``` import 'package:flutter/material.dart'; void main() => runApp(myApp()); class myApp extends StatelessWidget{ Widget build(BuildContext context){ return Scaffold( appBar: AppBar( title: Text("Hi there") ) ); } } ``` I expected to see an AppBar with a title stating "Hi There", but instead I had received an error.
`MediaQuery` is used by `Scaffold` internal components to layout its children as evident from its [source code](https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/material/scaffold.dart). Thus, it needs to be wrapped inside a widget which will provide a `MediaQuery`, like a `MaterialApp` widget, which inherits from [`WidgetsApp`](https://docs.flutter.io/flutter/widgets/WidgetsApp-class.html) : ``` void main() => runApp(myApp()); class myApp extends StatelessWidget{ Widget build(BuildContext context){ return MaterialApp( home: Scaffold( appBar: AppBar( title: Text("Hi there") ) ), ); } } ```
An object reference is required for the non-static field, method, or property 'System.Web.UI.Page.Session.get' I'm getting error like > > An object reference is required for the non-static field, method, or > property 'System.Web.UI.Page.Session.get' > > > Can you suggest me to recover from this problem in session. ``` using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.Services; using System.Configuration; using System.Data.SqlClient; using System.Web.SessionState; public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } //Onclick Submit Button [WebMethod(EnableSession = true)] //[System.Web.Services.WebMethod(EnableSession = true)] public static string Login(string email, string password) { var con = new SqlConnection(ConfigurationManager.ConnectionStrings["blogConnString"].ConnectionString); con.Open(); string res = "0"; SqlDataReader reader; string sql = "select uid,username from personal where email='" + email + "' and password='" + password + "'"; SqlCommand cmd1 = new SqlCommand(sql, con); reader = cmd1.ExecuteReader(); while (reader.Read()) { res = "1"; Session["UID"] = reader["uid"].ToString(); //error line here Session["UNAME"] = reader["username"].ToString(); //error line here } return res; con.Close(); } } ```
Try this code and please beware of SQL Injection - to prevent it, use a parametrized query as follows: ``` using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.Services; using System.Configuration; using System.Data; using System.Data.SqlClient; using System.Web.SessionState; public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } //Onclick Submit Button [WebMethod(EnableSession = true)] //[System.Web.Services.WebMethod(EnableSession = true)] public static string Login(string email, string password) { string connectionString = ConfigurationManager.ConnectionStrings["blogConnString"].ConnectionString; string res = "0"; string sql = "select uid,username from personal where email=@Email and password=@Password"; using (SqlConnection connection = new SqlConnection(connectionString)) using (SqlCommand command = new SqlCommand(sql, connection)) { command.Parameters.Add("@Email", SqlDbType.NVarChar).Value = email; command.Parameters.AddWithValue("@Password", password); connection.Open(); using (SqlDataReader reader = command.ExecuteReader()) { while (reader.Read()) { res = "1"; //Either remove static from the method declaration or use HttpContext.Current along with Session. HttpContext.Current.Session["UID"] = reader["uid"].ToString(); HttpContext.Current.Session["UNAME"] = reader["username"].ToString(); } } } return res; } } ```
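The same principle - keeping user input out of the SQL text via placeholders - looks like this in Python with sqlite3 (the table layout mirrors the question; the data and the helper name are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE personal (uid INTEGER, username TEXT, email TEXT, password TEXT)")
conn.execute(
    "INSERT INTO personal VALUES (1, 'alice', 'alice@example.com', 's3cret')")

def login(conn, email, password):
    # Placeholders (?) keep user input out of the SQL string entirely,
    # so a crafted email like "' OR '1'='1" is treated as data, not SQL.
    cur = conn.execute(
        "SELECT uid, username FROM personal WHERE email = ? AND password = ?",
        (email, password))
    return cur.fetchone()

ok = login(conn, "alice@example.com", "s3cret")
attack = login(conn, "' OR '1'='1", "' OR '1'='1")  # finds nothing
```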
Difference between Resolvers and Controllers? I may be overthinking it, but are controllers and resolvers the same thing in web applications? Coming from the MERN stack, everyone used to call these request processing functions "controllers," but in PERN+GQL, everyone calls them "resolvers." By my understanding, they are the same thing. - *MERN - MongoDB, Express, React, Node* - *PERN+GQL - PostgreSQL, Express, React, Node, GraphQL* Controller/Resolver: A function used to process requests and produce a response in applications?
Disclaimer: this is based on what I've read, not on direct personal experience with GraphQL. It looks like the term "resolver" is associated with GraphQL (at least in the context of your question), where it has to do with *resolving* the specification represented by the query into an actual data structure (it figures out how to make the data have a concrete shape, and fill it with values). That is, for each field specified in the query, there's a function that figures out how to resolve that specific field. Each field-specific function is termed a resolver. And then there's some mechanism that calls the corresponding resolvers recursively for each field, until you hit simple (non-composite, scalar) values. > > "Each field on each type is backed by a function called the resolver > which is provided by the GraphQL server developer. When a field is > executed, the corresponding resolver is called to produce the next > value. > > > If a field produces a scalar value like a string or number, then the > execution completes. However if a field produces an object value then > the query will contain another selection of fields which apply to that > object. This continues until scalar values are reached. GraphQL > queries always end at scalar values." ([source](https://graphql.org/learn/execution/)) > > > So, resolvers are not specifically concerned with handling the web request itself, they have a more narrowly defined job. # Some more context In a web app, or a web service, a controller is (more or less) an entry point for a request. A controller can (and often does) call other stuff to handle the details of the request. That is, it orchestrates other objects and functions that have narrower responsibilities, and that collaborate together to fulfill the request. That's just the normal responsibility segregation (i.e. as soon as your logic becomes nontrivial, you don't want to write all of it in the same function on the controller). 
With GraphQL (or any query-based approach), you can replace all endpoints that allow callers to request data in a predefined way (or some subset of such endpoints), with a single endpoint that accepts a user-provided *query*, which allows them to specify exactly what they want. The query itself plays the same role that URL parameters or request body did in a traditional Web API (the query is a form of *input*), it's just that it's more powerful, because it's a language of sorts. So, with a query-based approach, you can have a single entry point - a single controller - that, among other things, calls some resolver to ask it to interpret/resolve the query (or a part of it) into an actual data structure. Maybe it orchestrates which resolver is called when, or how the data is combined into the final result (not sure about the details in the GraphQL case). You could in principle do extra stuff before and/or after you call the resolver(s) (like authorization, query extraction/parsing, or field censoring, or something), and then return the response to the client. And you could implement all this manually. I'm guessing that in various frameworks that have support for GraphQL, the presence of the controller (or whatever internal system takes on its role) and other supporting systems is not obvious (as in, the framework might handle all the controller-y details for you, and call the resolver(s) for you) - so I'm guessing that this is the source of your confusion. So, to summarize: to me, it looks like the conceptual responsibility of the resolver isn't to respond to the request, but to figure out how to transform the data, as it exists in some storage or data source, into the user-specified shape (the data format that the user wants).
When you think about it, you could have a similar concept without having a client/server architecture (you could have a scenario where you query your own local data, without going over the web, so the concept itself is not fundamentally web-related).
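To make the field-by-field resolution idea concrete, here's a toy Python sketch (not real GraphQL - the data, field names, and query shape are all invented), where a composite field recurses into a further resolver and scalar fields end the recursion:

```python
# Data source the resolvers pull from (hypothetical).
JOBS = {1: {"id": 1, "status": "done", "owner_id": 7}}
USERS = {7: {"id": 7, "name": "dana"}}

# One resolver per field; a composite field ("owner") hands off to
# another resolver, while scalar fields are returned as-is.
resolvers = {
    "job": lambda args: JOBS[args["id"]],
    "owner": lambda job: USERS[job["owner_id"]],
}

def resolve_query(query):
    """Resolve a tiny query like {'job': {'id': 1, 'fields': [...]}}."""
    job = resolvers["job"](query["job"])
    result = {}
    for field in query["job"]["fields"]:
        if field == "owner":
            result["owner"] = resolvers["owner"](job)
        else:
            result[field] = job[field]
    return result

out = resolve_query({"job": {"id": 1, "fields": ["status", "owner"]}})
```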
sql query to set year as column name By writing the following query ``` SELECT item_name, YEAR( DATE ) , SUM( item_sold_qty ) FROM item JOIN sales ON item.id = sales.item_number GROUP BY YEAR( DATE ) , item_name ORDER BY item_name ``` i'm able to get the following result ``` item_name YEAR( DATE ) SUM( item_sold_qty ) pencil 2011 22 pencil 2012 44 eraser 2012 22 eraser 2011 11 pen 2012 66 pen 2011 33 nib 2012 88 nib 2011 44 ``` Instead i want the result in the following way ``` item_name 2011 2012 pencil 22 44 eraser 11 22 pen 33 66 nib 44 88 ``` I'm not really good at sql and have no clue for how to set the years as column names. Need help. **NOTE** :: My database has 2 tables. Sales table has date column which has different dates like 2012-03-01, 2012-04-02, 2011-07-03, so on...
Maybe something like this: ``` SELECT item_name, SUM(CASE WHEN YEAR( DATE )=2011 THEN item_sold_qty ELSE 0 END) AS '2011', SUM(CASE WHEN YEAR( DATE )=2012 THEN item_sold_qty ELSE 0 END) AS '2012' FROM item JOIN sales ON item.id = sales.item_number GROUP BY item_name ORDER BY item_name ``` **EDIT** If you want the other years and still sum them, then you can do this: ``` SELECT item_name, SUM(CASE WHEN YEAR( DATE )=2011 THEN item_sold_qty ELSE 0 END) AS '2011', SUM(CASE WHEN YEAR( DATE )=2012 THEN item_sold_qty ELSE 0 END) AS '2012', SUM(CASE WHEN NOT YEAR( DATE ) IN (2011,2012) THEN item_sold_qty ELSE 0 END) AS 'AllOtherYears' FROM item JOIN sales ON item.id = sales.item_number GROUP BY item_name ORDER BY item_name ``` **EDIT2** If you have a lot of years and you do not want to keep on adding them by hand, then you need to use dynamic SQL. That means that you concatenate the SQL into a varchar and then execute it. Useful References: - [MySQL pivot table with dynamic headers based on single column data](https://stackoverflow.com/questions/6158230/mysql-pivot-table-with-dynamic-headers-based-on-single-column-data) - [How To have Dynamic SQL in MySQL Stored Procedure](https://stackoverflow.com/questions/190776/how-to-have-dynamic-sql-in-mysql-stored-procedure) - [MySQL/Pivot table](http://en.wikibooks.org/wiki/MySQL/Pivot_table) - [MYSQL - Rows to Columns](https://stackoverflow.com/questions/1241178/mysql-rows-to-columns)
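If you want to play with the conditional-aggregation pattern without a MySQL instance, the same query shape runs in Python's sqlite3 (schema simplified to a single pre-joined table; the names and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (item_name TEXT, yr INTEGER, qty INTEGER);
INSERT INTO sales VALUES
  ('pencil', 2011, 22), ('pencil', 2012, 44),
  ('eraser', 2011, 11), ('eraser', 2012, 22);
""")

# Conditional aggregation: one SUM(CASE ...) column per pivoted year.
rows = conn.execute("""
    SELECT item_name,
           SUM(CASE WHEN yr = 2011 THEN qty ELSE 0 END) AS y2011,
           SUM(CASE WHEN yr = 2012 THEN qty ELSE 0 END) AS y2012
    FROM sales
    GROUP BY item_name
    ORDER BY item_name
""").fetchall()
```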
Cannot get Rust enum in scope when in crate > > Editor's note: The code in this question is from a version of Rust prior to 1.0. The underlying system of how enums are imported was changed for Rust 1.0. > > > This seems like it should be easy (emulating C/C++ enums), but I can't get it to work. I simply want to use an enum from a crate, but this doesn't seem to work no matter what I try. Am I missing something about Rust enums (can they not be used like old C/C++ enums)? log/mod.rs: ``` pub enum Level { Debug, Normal, } pub struct Log { pub log_level: Level, } ``` main.rs: ``` extern crate project; use project::log::{Log, Level}; fn main() { // error: unresolved name `Normal`. let logger = Log { log_level: Normal }; // unresolved name `Level::Normal`. let logger = Log { log_level: Level::Normal }; // unresolved name `log::Level::Normal`. let logger = Log { log_level: log::Level::Normal }; // unresolved name `project::log::Level::Normal`. let logger = Log { log_level: project::log::Level::Normal }; } ```
### Rust 1.0 Enum variants are now namespaced under the enum name. These two options work as-is: ``` extern crate project; use project::log::{Level, Log}; fn main() { let logger = Log { log_level: Level::Normal, }; let logger = Log { log_level: project::log::Level::Normal, }; } ``` You can also import the module: ``` extern crate project; use project::log; fn main() { let logger = log::Log { log_level: log::Level::Normal, }; } ``` Or you can even import all enum variants: ``` extern crate project; use project::log::{Log, Level::*}; fn main() { let logger = Log { log_level: Normal, }; } ``` ### Before Rust 1.0 You need to import each enum variant by name, not just the name of the enumeration, in order to use its unqualified name. Change the second line in main.rs to ``` use project::log::{Log, Debug, Normal}; ``` Alternately, you can use the qualified name, without the `Level::` part of the path, since enum variants aren't namespaced like C++ enum classes are. ``` use project::log; ... Log { log_level: log::Normal }; ```
How to export GRAL plot to JPG? I'm trying to export example [GRAL Pie plot](http://trac.erichseifert.de/gral/browser/gral-examples/src/main/java/de/erichseifert/gral/examples/pieplot/SimplePiePlot.java) to jpg using: ``` private byte[] getJpg() throws IOException { BufferedImage bImage = new BufferedImage(800, 600, BufferedImage.TYPE_INT_RGB); Graphics2D g2d = bImage.createGraphics(); DrawingContext drawingContext = new DrawingContext(g2d, DrawingContext.Quality.QUALITY, DrawingContext.Target.BITMAP); PiePlot plot = getPlot(); plot.draw(drawingContext); ByteArrayOutputStream baos = new ByteArrayOutputStream(); ImageIO.write(bImage, "jpg", baos); baos.flush(); byte[] bytes = baos.toByteArray(); baos.close(); return bytes; } ``` But it renders as black rectangle with some legend information (legend is ok). Who knows the right way to render JPG from GRAL plot?
Sure enough, I found a built-in solution, DrawableWriter. It renders the plot itself, so the manual BufferedImage/Graphics2D drawing is no longer needed. Now the export looks like this: ``` private byte[] getJpg() throws IOException { PiePlot plot = getPlot(); ByteArrayOutputStream baos = new ByteArrayOutputStream(); DrawableWriter wr = DrawableWriterFactory.getInstance().get("image/jpeg"); wr.write(plot, baos, 800, 600); baos.flush(); byte[] bytes = baos.toByteArray(); baos.close(); return bytes; } ``` Thanks to developers! Everything is done already.
How to do Math.random() in for-loop with different values? I try to fill the array `s` with 9999 different sets of values `m[i][random]`, here is the code: ``` let m = [[22,0],[53,0],[64,0],[45,0],[34,0]]; let l = m.length; let s = []; for (let j = 0; j < 9999; j++) { for(let i = 0; i < m.length; i++) { let x = Math.floor(Math.random()*l); m[i][1] = x; } s.push(m); } ``` But i get the same values: ``` console.log(s) [ [ [ 22, 0 ], [ 53, 2 ], [ 64, 0 ], [ 45, 4 ], [ 34, 1 ] ], [ [ 22, 0 ], [ 53, 2 ], [ 64, 0 ], [ 45, 4 ], [ 34, 1 ] ], [ [ 22, 0 ], [ 53, 2 ], [ 64, 0 ], [ 45, 4 ], [ 34, 1 ] ], [ [ 22, 0 ], [ 53, 2 ], [ 64, 0 ], [ 45, 4 ], [ 34, 1 ] ], [ [ 22, 0 ], [ 53, 2 ], [ 64, 0 ], [ 45, 4 ], [ 34, 1 ] ], [ [ 22, 0 ], [ 53, 2 ], [ 64, 0 ], [ 45, 4 ], [ 34, 1 ] ], [ [ 22, 0 ], [ 53, 2 ], [ 64, 0 ], [ 45, 4 ], [ 34, 1 ] ], ...] ``` What am I doing wrong? How to fix it?
Create the `m` subarray inside the loop (so you have a separate subarray for each iteration), not outside of it - outside, you've only created a single array in memory, and every index of `s` points to it. ``` let s = []; for (let j = 0; j < 9999; j++) { let m = [[22,0],[53,0],[64,0],[45,0],[34,0]]; let l = m.length; for(let i = 0; i < m.length; i++) { let x = Math.floor(Math.random()*l); m[i][1] = x; } s.push(m); } ``` Or, more functionally and all at once with `Array.from`: ``` const s = Array.from( { length: 2 }, () => [[22,0],[53,0],[64,0],[45,0],[34,0]] .map(([num]) => [num, Math.floor(Math.random() * 5)]) ); console.log(s); ```
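The same shared-reference pitfall exists in Python, which makes it easy to demonstrate: appending the same list object repeatedly means every entry reflects the final mutation, while creating the list inside the loop gives independent snapshots (variable names here mirror the question):

```python
import random

# Buggy version: the very same inner list `m` is appended every
# iteration, so every element of s_buggy is one shared object.
m = [[22, 0], [53, 0]]
s_buggy = []
for _ in range(3):
    for pair in m:
        pair[1] = random.randrange(5)
    s_buggy.append(m)

# Fixed version: build a fresh `m` inside the loop, as in the answer.
s_fixed = []
for _ in range(3):
    m = [[22, 0], [53, 0]]
    for pair in m:
        pair[1] = random.randrange(5)
    s_fixed.append(m)

same_object = s_buggy[0] is s_buggy[1]      # True: one shared object
independent = s_fixed[0] is not s_fixed[1]  # True: distinct snapshots
```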
MySQL query : all records of one table plus count of another table I have 2 tables: User and Picture. The Picture table has the key of the user. So basically each user can have multiple pictures, and each picture belongs to one user. Now, I am trying to make the following query: I want to select all the user info plus the total number of pictures that he has (even if it's 0). How can I do that? Probably it sounds quite simple, but I am trying and trying and can't seem to find the right query. The only thing I could select is this info, but only for users that have at least 1 picture, meaning that the Pictures table has at least one record for that key... But I also wanna consider the users that don't have any. Any idea? Thanks!
You may want to try the following: ``` SELECT u.name, IFNULL(sub_p.total, 0) num FROM users u LEFT JOIN ( SELECT COUNT(*) total, user_id FROM pictures GROUP BY user_id ) sub_p ON (sub_p.user_id = u.user_id); ``` Test case: ``` CREATE TABLE users (user_id int, name varchar(10)); CREATE TABLE pictures (user_id int); INSERT INTO users VALUES (1, 'Joe'); INSERT INTO users VALUES (2, 'Peter'); INSERT INTO users VALUES (3, 'Bill'); INSERT INTO pictures VALUES (1); INSERT INTO pictures VALUES (1); INSERT INTO pictures VALUES (2); INSERT INTO pictures VALUES (2); INSERT INTO pictures VALUES (2); ``` Result: ``` +-------+-----+ | name | num | +-------+-----+ | Joe | 2 | | Peter | 3 | | Bill | 0 | +-------+-----+ 3 rows in set (0.00 sec) ```
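Here's the same LEFT JOIN + IFNULL pattern reproduced in Python with sqlite3, using the answer's own test data, so you can see Bill come back with 0:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (user_id INTEGER, name TEXT);
CREATE TABLE pictures (user_id INTEGER);
INSERT INTO users VALUES (1, 'Joe'), (2, 'Peter'), (3, 'Bill');
INSERT INTO pictures VALUES (1), (1), (2), (2), (2);
""")

# LEFT JOIN keeps users with no pictures; IFNULL turns the missing
# count into 0 for them.
rows = conn.execute("""
    SELECT u.name, IFNULL(sub_p.total, 0) AS num
    FROM users u
    LEFT JOIN (SELECT user_id, COUNT(*) AS total
               FROM pictures GROUP BY user_id) sub_p
      ON sub_p.user_id = u.user_id
    ORDER BY u.user_id
""").fetchall()
```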
How to access ssis package variables inside script component How can I access variables inside my C# code which I've used in Data Flow -> Script Component -> My C# Script within my SSIS package? I have tried the following, which is also not working: ``` IDTSVariables100 varCollection = null; this.VariableDispenser.LockForRead("User::FilePath"); string XlsFile; XlsFile = varCollection["User::FilePath"].Value.ToString(); ```
Accessing package variables in a Script **Component** (of a Data Flow Task) is not the same as accessing package variables in a Script **Task**. For a Script Component, you first need to open the [Script Transformation Editor](http://msdn.microsoft.com/en-us/library/ms181225%28v=sql.105%29.aspx) (right-click on the component and select "Edit..."). In the Custom Properties section of the Script tab, you can enter (or select) the properties you want to make available to the script, either on a read-only or read-write basis: ![screenshot of Script Transformation Editor properties page](https://i.stack.imgur.com/vXFwG.png) Then, within the script itself, the variables will be available as strongly-typed properties of the Variables object: ``` // Modify as necessary public override void PreExecute() { base.PreExecute(); string thePath = Variables.FilePath; // Do something ... } public override void PostExecute() { base.PostExecute(); string theNewValue = ""; // Do something to figure out the new value... Variables.FilePath = theNewValue; } public override void Input0_ProcessInputRow(Input0Buffer Row) { string thePath = Variables.FilePath; // Do whatever needs doing here ... } ``` One important caveat: if you need to **write** to a package variable, you can only do so in the PostExecute() method. Regarding the code snippet: ``` IDTSVariables100 varCollection = null; this.VariableDispenser.LockForRead("User::FilePath"); string XlsFile; XlsFile = varCollection["User::FilePath"].Value.ToString(); ``` `varCollection` is initialized to null and never set to a valid value. Thus, **any** attempt to dereference it will fail.
How to draw a set of horizontal lines? I am new to OpenGL as learning exercise I decided to draw a set of horizontal lines from a grid of m x n matrix containing the vertices locations This is what I have ![enter image description here](https://i.stack.imgur.com/4ElHu.jpg) and If I use LINE\_STRIP ![enter image description here](https://i.stack.imgur.com/p0sMA.jpg) **A code snippet using vertex arrays and indices will be great, I cant seem to be able to get the concept just from a text book I need to see and play with a code example Any help will be much appreciated!** --- @Thomas Got it working with the following code ``` totalPoints = GRID_ROWS * 2 * (GRID_COLUMNS - 1); indices = new int[totalPoints]; points = new GLModel(this, totalPoints, LINES, GLModel.DYNAMIC); int n = 0; points.beginUpdateVertices(); for ( int row = 0; row < GRID_ROWS; row++ ) { for ( int col = 0; col < GRID_COLUMNS - 1; col++ ) { int rowoffset = row * GRID_COLUMNS; int n0 = rowoffset + col; int n1 = rowoffset + col + 1; points.updateVertex( n, pointsPos[n0].x, pointsPos[n0].y, pointsPos[n0].z ); indices[n] = n0; n++; points.updateVertex( n, pointsPos[n1].x, pointsPos[n1].y, pointsPos[n1].z ); indices[n] = n1; n++; } } points.endUpdateVertices(); ``` Then I update and draw by doing ``` points.beginUpdateVertices(); for ( int n = 0; n < totalPoints; n++ ) { points.updateVertex( n, pointsPos[indices[n]].x, pointsPos[indices[n]].y, pointsPos[indices[n]].z ); } points.endUpdateVertices(); ``` This is the result ![enter image description here](https://i.stack.imgur.com/UtEWb.jpg) --- Fix it by changing the nested for loop ``` for ( int col = 0; col < GRID_COLUMNS; col++ ) { for ( int row = 0; row < GRID_ROWS - 1; row++ ) { int offset = col * GRID_ROWS; int n0 = offset + row; int n1 = offset + row + 1; indices[n++] = n0; indices[n++] = n1; } } ``` Now I can have any number of rows and columns Thanks agin!
You need to draw a line for each segment and reuse an index, i.e. for the first part you'd draw a line for (0,1), (1,2), (2,3) and so on. Edit: Suppose you have a 4x5 array (4 lines, 5 vertices per line). You could then calculate the indices like this (pseudo code): ``` Vertex[] v = new Vertex[20]; // 20 vertices in the grid for(int row = 0; row < numrows; row++) // numrows = 4 { int rowoffset = row * numcols ; //0, 5, 10, 15 for(int col = 0; col < (numcols - 1); col++) //numcols = 5 { addLineIndices(rowoffset + col, rowoffset + col +1); //adds (0,1), (1,2), (2,3) and (3, 4) for the first row } } ``` Then issue the draw call for `numrows * (numcols - 1)` linesegments (GL\_LINES), i.e. 16 in the example. Note that `addLineIndices` would be a function that adds the index pair for one line segment to an index array which is then supplied to the draw call.
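The index generation in the pseudo code can be written out in Python to sanity-check the counts (the function name is mine):

```python
def horizontal_line_indices(numrows, numcols):
    """Index pairs for GL_LINES covering every horizontal segment of a grid.

    Vertices are numbered row-major: vertex (row, col) has index
    row * numcols + col, matching the pseudo code above.
    """
    indices = []
    for row in range(numrows):
        rowoffset = row * numcols
        for col in range(numcols - 1):
            indices.append((rowoffset + col, rowoffset + col + 1))
    return indices

# 4 rows of 5 vertices -> 4 * (5 - 1) = 16 line segments
segments = horizontal_line_indices(4, 5)
```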
One Signal - idsAvailable is deprecated OneSignal's latest build generates a warning: '**idsAvailable**' is deprecated. How can I get the 'playerID' using **getPermissionSubscriptionState()** instead of idsAvailable, as I did below?

```
OneSignal.idsAvailable({ (userId, pushToken) in
    if (pushToken != nil) {
        if let playerID = userId {
            // do something
        }
    }
})
```
OneSignal keeps really good documentation. For Android you can get the user IDs by using the `idsAvailable` method, which returns the userID and registrationID. userID, aka playerID, is a OneSignal UUID-formatted string (unique per device per app). registrationID is a Google-assigned identifier (unique per device per app; it changes on reinstalls). For iOS, you can get the user IDs in the same way, but version 2.5.0+ of the native iOS SDK added the getPermissionSubscriptionState method and addSubscriptionObserver. You can get the OSPermissionSubscriptionState in Swift as follows:

```
let status: OSPermissionSubscriptionState = OneSignal.getPermissionSubscriptionState()

let hasPrompted = status.permissionStatus.hasPrompted
print("hasPrompted = \(hasPrompted)")
let userStatus = status.permissionStatus.status
print("userStatus = \(userStatus)")

let isSubscribed = status.subscriptionStatus.subscribed
print("isSubscribed = \(isSubscribed)")
let userSubscriptionSetting = status.subscriptionStatus.userSubscriptionSetting
print("userSubscriptionSetting = \(userSubscriptionSetting)")
let userID = status.subscriptionStatus.userId
print("userID = \(userID)")
let pushToken = status.subscriptionStatus.pushToken
print("pushToken = \(pushToken)")
```

For Objective-C:

```
OSPermissionSubscriptionState* status = [OneSignal getPermissionSubscriptionState];

status.permissionStatus.hasPrompted
status.permissionStatus.status

status.subscriptionStatus.subscribed
status.subscriptionStatus.userSubscriptionSetting
status.subscriptionStatus.userId
status.subscriptionStatus.pushToken
```

So your above code will now look something like this:

```
let status: OSPermissionSubscriptionState = OneSignal.getPermissionSubscriptionState()

let userID = status.subscriptionStatus.userId
print("userID = \(userID)")
let pushToken = status.subscriptionStatus.pushToken
print("pushToken = \(pushToken)")

if pushToken != nil {
    if let playerID = userID {
        // do something
    }
}
```
Call a python function within a html file Is there a way to call a Python function when a certain link is clicked within an HTML page? Thanks
You'll need to use a web framework to route the requests to Python, as you can't do that with just HTML. [Flask](http://flask.pocoo.org/) is one simple framework: **server.py**:

```
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('template.html')

@app.route('/my-link/')
def my_link():
    print('I got clicked!')
    return 'Click.'

if __name__ == '__main__':
    app.run(debug=True)
```

**templates/template.html**:

```
<!doctype html>
<title>Test</title>
<meta charset=utf-8>
<a href="/my-link/">Click me</a>
```

Run it with `python server.py` and then navigate to <http://localhost:5000/>. The development server isn't secure, so for deploying your application, look at <http://flask.pocoo.org/docs/0.10/quickstart/#deploying-to-a-web-server>
How to add entry to 'More' menu or top menu to add action on multiple selections? ### Goal I want to create delivery from a selection of order lines, but I can't get the button to appear. [![add a create delivery button](https://i.stack.imgur.com/MDlUk.png)](https://i.stack.imgur.com/MDlUk.png) ### Module

```
compose_delivery_order/
β”œβ”€β”€ delivery_order_button.py
β”œβ”€β”€ delivery_order_button.xml
β”œβ”€β”€ images/
β”‚Β Β  └── delivery_order_button.png
β”œβ”€β”€ __init__.py
β”œβ”€β”€ __openerp__.py
β”œβ”€β”€ order_to_invoice_create_delivery_button.py
β”œβ”€β”€ order_to_invoice_create_delivery_button.xml ← I'm working on this
β”œβ”€β”€ sale_order_button.py
└── sale_order_confirm_button.xml
```

### XML `<record>`

```
<record id="action_sale_order_liquidar" model="ir.actions.server">
    <field name="name">Liquidar</field>
    <field name="type">ir.actions.server</field>
    <field name="model_id" ref="model_sale_order"/>
    <field name="state">code</field>
    <field name="code">self.some_custom_code(cr, uid, context.get('active_ids'), context=context)</field>
</record>

<record id="id_of_the_action_value" model="ir.values">
    <field name="name">Liquidar</field>
    <field name="action_id" ref="action_sale_order_liquidar"/>
    <field name="value" eval="'ir.actions.server,' + str(ref('action_sale_order_liquidar'))"/>
    <field name="key">action</field>
    <field name="model_id" ref="model_sale_order"/>
    <field name="model">sale.order</field>
    <field name="key2">client_action_multi</field>
</record>
```

### Python

```
import logging
from openerp import netsvc
from openerp.osv import osv

logger = logging.getLogger(__name__)

class sale_order_line(osv.osv):
    _inherit = 'sale.order.line'

    def some_custom_code(self, db_cursor, user_id, ids, context=None):
        logger.error('+ + + + + + sale_order_line -> some_custom_code + + + + + + ')
        return True

def some_custom_code(self, db_cursor, user_id, ids, context=None):
    logger.error('+ + + + + + some_custom_code + + + + + + ')
    return True
```

### Question How do I add
the button? Either to the submenu of `[More v]`, or next to `[Create]` or `[Import]`. N.B.: I checked that it's imported in the `__openerp__.py`'s `data` section.
You need to take care of the following things while adding a new entry to the *More* menu: - register the form view - register the action view - register the *ir\_values* record (this is what makes the entry appear) - give the proper related *view\_id* reference - set the *target* attribute in the action view - give a related name to each *record* tag Try the code below (add it to your .xml file):

```
<?xml version="1.0" encoding="utf-8"?>
<openerp>
    <data>
        <record id="view_create_delivery_button" model="ir.ui.view">
            <field name="name">Create Delivery</field>
            <field name="model">sale.order.line</field>
            <field name="arch" type="xml">
                <form string="Create Delivery" version="7.0">
                    <!-- Design Form or put your field here -->
                    <footer>
                        <!-- Add button on footer of pop-up window -->
                    </footer>
                </form>
            </field>
        </record>

        <record id="action_create_delivery_button" model="ir.actions.act_window">
            <field name="name">Create Delivery</field>
            <field name="type">ir.actions.act_window</field>
            <field name="res_model">sale.order.line</field>
            <field name="view_type">form</field>
            <field name="view_mode">form</field>
            <field name="view_id" ref="view_create_delivery_button"/>
            <field name="target">new</field>
            <field name="multi">True</field>
        </record>

        <record model="ir.values" id="ir_values_create_delivery_button">
            <field name="model_id" ref="sale.model_sale_order_line" />
            <field name="name">Create Delivery</field>
            <field name="key2">client_action_multi</field>
            <field name="value" eval="'ir.actions.act_window,' + str(ref('action_create_delivery_button'))" />
            <field name="key">action</field>
            <field name="model">sale.order.line</field>
        </record>
    </data>
</openerp>
```
Why does "\\\\" equal "\\" in zsh? So to write a backslash to `stdout` you do

```
zsh$ echo \\
\
```

You would think that to output 2 backslashes, you need to run this:

```
zsh$ echo \\\\
\
```

**Wrong.** You need 6 or 8:

```
zsh$ echo \\\\\\
\\
zsh$ echo \\\\\\\\
\\
```

**Why on earth would I need 8 backslashes?** Behaviors in different shells:

```
ksh$ echo \\\\
\\
zsh$ echo \\\\
\
bash$ echo \\\\
\\
sh$ echo \\\\
\
csh$ echo \\\\
\\
```

[![Backslashes](https://i.stack.imgur.com/3UUSL.png)](https://i.stack.imgur.com/3UUSL.png)
You are probably used to `bash`'s built-in `echo`, which does *not* perform expansion of escape sequences. POSIX `echo`, however, *does*. `zsh` follows the POSIX specification, so when you write `echo \\\\\\`, here's what happens. 1. Quote removal reduces `\\\\\\` to `\\\`; each escaped `\` becomes a literal `\`. 2. `echo` takes the three `\` it receives, and replaces the first `\\` with `\`. The remaining `\` doesn't precede anything, so it is treated literally. 3. The final result is an output of two `\`. With `echo \\\\\\\\`, here's what happens. 1. As before, `\\\\\\\\` becomes `\\\\`, because each pair of backslashes becomes a single `\` after quote removal. 2. `echo` takes the 4 backslashes, and replaces the first `\\` with `\` and the *second* `\\` with `\` as well, since this time it received two matched pairs. 3. The final result is again two `\`. `bash` probably inherited its non-POSIX conformance from `ksh`; `sh` is by definition a POSIX-conformant shell; and it's best not to think about what `csh` is doing.
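To make the two-stage processing concrete, here is a rough Python model of the pipeline (function names are mine; it models backslash handling only, not the other `echo` escapes):

```python
def shell_quote_removal(arg):
    """Model the shell's quote removal: each backslash escapes the
    character after it, which is kept literally."""
    out, i = [], 0
    while i < len(arg):
        if arg[i] == '\\' and i + 1 < len(arg):
            out.append(arg[i + 1])   # keep the escaped character
            i += 2
        else:
            out.append(arg[i])
            i += 1
    return ''.join(out)

def posix_echo(arg):
    """Model POSIX echo's escape handling for backslashes: each pair
    becomes one backslash; a trailing lone backslash is literal."""
    out, i = [], 0
    while i < len(arg):
        if arg[i] == '\\' and i + 1 < len(arg) and arg[i + 1] == '\\':
            out.append('\\')
            i += 2
        else:
            out.append(arg[i])
            i += 1
    return ''.join(out)

for n in (2, 4, 6, 8):
    printed = posix_echo(shell_quote_removal('\\' * n))
    print(f'{n} typed -> {len(printed)} printed')
# 2 -> 1, 4 -> 1, 6 -> 2, 8 -> 2, matching the zsh behavior above
```

Both 6 and 8 typed backslashes end up as 2 printed ones, which is exactly the oddity in the question.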
SQL Server 2008 convert int to dd:hh:mm How can I convert an `int` value (i.e. `1800`) which represents minutes into a value that looks like this: `dd:hh:mm` (`days:hours:minutes`)? So `1800` should be converted into `1:06:00` (`1 day 6 hours 0 minutes`). In a stored procedure I have this:

```
SELECT Record_ID, Project_ID, Ticket_ID, WO_Type, DC, Title, Device_Quantity,
       Total/1440 as Total,
       ((Total - Elapsed) - DATEDIFF(mi, Record_Time, getdate())) as FinalTimeLeft,
       Completed
FROM Record
```

How would I implement the casting into the SP? In the above, `FinalTimeLeft = 1800`.
How about this example - it converts a count of seconds into hours, minutes and seconds, but it should be easy to modify for days, hours and minutes:

```
SELECT CAST(secs / 3600 AS VARCHAR) + ':' +
       RIGHT('0' + CAST((secs % 3600) / 60 AS VARCHAR), 2) + ':' +
       RIGHT('0' + CAST(secs % 60 AS VARCHAR), 2)
FROM (SELECT 1800 AS secs) a
```

EDIT: Included your stored procedure with my code amended for day, hour and minute:

```
SELECT *
      ,CAST(FinalTimeLeft / 1440 AS VARCHAR) + ':' +
       RIGHT('0' + CAST((FinalTimeLeft / 60) % 24 AS VARCHAR), 2) + ':' +
       RIGHT('0' + CAST(FinalTimeLeft % 60 AS VARCHAR), 2) AS duration
FROM (
    SELECT Record_ID
          ,Project_ID
          ,Ticket_ID
          ,WO_Type
          ,DC
          ,Title
          ,Device_Quantity
          ,Total/1440 as Total
          ,((Total-Elapsed)-DATEDIFF(mi,Record_Time,getdate())) as FinalTimeLeft
          ,Completed
    FROM record) a
```
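The same arithmetic is easy to check outside SQL; here is a quick Python sketch of the days/hours/minutes decomposition used in the amended query (the function name is mine):

```python
def minutes_to_dhm(total_minutes):
    """Convert a minute count to a d:hh:mm string, mirroring the SQL
    expression: whole days, hours mod 24, minutes mod 60."""
    days = total_minutes // 1440        # 1440 minutes per day
    hours = (total_minutes // 60) % 24
    minutes = total_minutes % 60
    return f"{days}:{hours:02d}:{minutes:02d}"

print(minutes_to_dhm(1800))  # 1:06:00, as the question expects
```

The zero-padding on hours and minutes corresponds to the `RIGHT('0' + ..., 2)` trick in the T-SQL.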
Definition and Convergence of Iteratively Reweighted Least Squares I've been using iteratively reweighted least squares (IRLS) to minimize functions of the following form, $J(m) = \sum\_{i=1}^{N} \rho \left(\left| x\_i - m \right|\right)$ where $N$ is the number of instances of $x\_i \in \mathbb{R}$, $m \in \mathbb{R}$ is the robust estimate that I want, and $\rho$ is a suitable robust penalty function. Let's say it's convex (though not necessarily strictly) and differentiable for now. A good example of such a $\rho$ is the [Huber loss function](http://en.wikipedia.org/wiki/Huber_loss_function). What I've been doing is differentiating $J(m)$ with respect to $m$ (and manipulating) to obtain, $\frac{dJ}{dm}= \sum\_{i=1}^{N} \frac{\rho'\left( \left|x\_i-m\right|\right) }{\left|x\_i-m\right|} \left( x\_i-m \right) $ and iteratively solving this by setting it equal to 0 and fixing weights at iteration $k$ to $w\_i(k) = \frac{\rho'\left( \left|x\_i-m{(k)}\right|\right) }{\left|x\_i-m{(k)}\right|}$ (note that the perceived singularity at $x\_i=m{(k)}$ is really a removable singularity in all $\rho$'s I might care about). Then I obtain, $\sum\_{i=1}^{N} w\_i(k) \left( x\_i-m{(k+1)} \right)=0$ and I solve to obtain, $m(k+1) = \frac{\sum\_{i=1}^{N} w\_i(k) x\_i}{ \sum\_{i=1}^{N} w\_i(k)}$. I repeat this fixed point algorithm until "convergence". I will note that if you get to a fixed point, you are optimal, since your derivative is 0 and it's a convex function. I have two questions about this procedure: 1. Is this the standard IRLS algorithm? After reading several papers on the topic (and they were very scattered and vague about what IRLS is) this is the most consistent definition of the algorithm I can find. I can post the papers if people want, but I actually didn't want to bias anyone here. 
Of course, you can generalize this basic technique to many other types of problems involving vector $x\_i$'s and arguments other than $\left|x\_i-m{(k)}\right|$, providing the argument is a norm of an affine function of your parameters. Any help or insight would be great on this. 2. Convergence seems to work in practice, but I have a few concerns about it. I've yet to see a proof of it. After some simple Matlab simulations I see that one iteration of this is *not a [contraction mapping](http://en.wikipedia.org/wiki/Contraction_mapping)* (I generated two random instances of $m$, computed $\frac{\left|m\_1(k+1) - m\_2(k+1)\right|}{\left|m\_1(k)-m\_2(k)\right|}$, and saw that this is occasionally greater than 1). Also, the mapping defined by several consecutive iterations is not strictly a contraction mapping, but the probability of the Lipschitz constant being above 1 gets very low. So is there a notion of a *contraction mapping in probability*? What is the machinery I'd use to prove that this converges? Does it even converge? Any guidance at all is helpful. Edit: I like the paper on IRLS for sparse recovery/compressive sensing by Daubechies et al. 2008 "Iteratively Re-weighted Least Squares Minimization for Sparse Recovery" on the arXiv. But it seems to focus mostly on weights for nonconvex problems. My case is considerably simpler.
As for your first question, one should define "standard", or acknowledge that a "canonical model" has been gradually established. As a comment indicated, it appears at least that the way you use IRWLS is rather standard. As for your second question, "contraction mapping in probability" could be linked (however informally) to convergence of "recursive stochastic algorithms". From what I read, there is a huge literature on the subject, mainly in Engineering. In Economics, we use a tiny bit of it, especially the seminal works of Lennart Ljung; the first paper was [Ljung (1977)](http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1101561&abstractAccess=no&userType=inst), which showed that the convergence (or not) of a recursive stochastic algorithm can be determined by the stability (or not) of a related ordinary differential equation. *(what follows has been re-worked after a fruitful discussion with the OP in the comments)* > > **Convergence** > > > I will use as reference [Saber Elaydi "An Introduction to Difference Equations", 2005, 3d ed.](http://ramanujan.math.trinity.edu/selaydi/books/diffeq/index.shtml) The analysis is conditional on some given data sample, so the $x$'s are treated as fixed. The first-order condition for the minimization of the objective function, viewed as a recursive function in $m$, $$m(k+1) = \sum\_{i=1}^{N} v\_i[m(k)] x\_i, \;\; v\_i[m(k)] \equiv \frac{w\_i[m(k)]}{ \sum\_{i=1}^{N} w\_i[m(k)]} \qquad [1]$$ has a fixed point (the argmin of the objective function). By Theorem 1.13 pp 27-28 of Elaydi, if the first derivative with respect to $m$ of the RHS of $[1]$, evaluated at the fixed point $m^\*$, denote it $A'(m^\*)$, is smaller than unity in absolute value, then $m^\*$ is *asymptotically stable* (AS). Moreover, by Theorem 4.3 p.179, this also implies that the fixed point is *uniformly* AS (UAS). 
"Asymptotically stable" means that for some range of values around the fixed point, a neighborhood $(m^\* \pm \gamma)$, not necessarily small in size, the fixed point is *attractive*, and so if the algorithm gives values in this neighborhood, it will converge. The property being "uniform" means that the boundary of this neighborhood, and hence its size, is independent of the initial value of the algorithm. The fixed point becomes *globally* UAS if $\gamma = \infty$. So in our case, if we prove that $$|A'(m^\*)|\equiv \left|\sum\_{i=1}^{N} \frac{\partial v\_i(m^\*)}{\partial m}x\_i\right| <1 \qquad [2]$$ we have proven the UAS property, but without global convergence. Then we can either try to establish that the neighborhood of attraction is in fact the whole extended real numbers, or that the specific starting value the OP uses as mentioned in the comments (and it is standard in IRLS methodology), i.e. the sample mean of the $x$'s, $\bar x$, always belongs to the neighborhood of attraction of the fixed point. 
We calculate the derivative $$\frac{\partial v\_i(m^\*)}{\partial m} = \frac {\frac{\partial w\_i(m^\*)}{\partial m}\sum\_{i=1}^{N} w\_i(m^\*)-w\_i(m^\*)\sum\_{i=1}^{N}\frac{\partial w\_i(m^\*)}{\partial m}}{\left(\sum\_{i=1}^{N} w\_i(m^\*)\right)^2}$$ $$=\frac 1{\sum\_{i=1}^{N} w\_i(m^\*)}\cdot\left[\frac{\partial w\_i(m^\*)}{\partial m}-v\_i(m^\*)\sum\_{i=1}^{N}\frac{\partial w\_i(m^\*)}{\partial m}\right]$$ Then $$A'(m^\*) = \frac 1{\sum\_{i=1}^{N} w\_i(m^\*)}\cdot\left[\sum\_{i=1}^{N}\frac{\partial w\_i(m^\*)}{\partial m}x\_i-\left(\sum\_{i=1}^{N}\frac{\partial w\_i(m^\*)}{\partial m}\right)\sum\_{i=1}^{N}v\_i(m^\*)x\_i\right]$$ $$=\frac 1{\sum\_{i=1}^{N} w\_i(m^\*)}\cdot\left[\sum\_{i=1}^{N}\frac{\partial w\_i(m^\*)}{\partial m}x\_i-\left(\sum\_{i=1}^{N}\frac{\partial w\_i(m^\*)}{\partial m}\right)m^\*\right]$$ and $$|A'(m^\*)| <1 \Rightarrow \left|\sum\_{i=1}^{N}\frac{\partial w\_i(m^\*)}{\partial m}(x\_i-m^\*)\right| < \left|\sum\_{i=1}^{N} w\_i(m^\*)\right| \qquad [3]$$ we have $$\begin{align}\frac{\partial w\_i(m^\*)}{\partial m} = &\frac{-\rho''(|x\_i-m^\*|)\cdot \frac {x\_i-m^\*}{|x\_i-m^\*|}|x\_i-m^\*|+\frac {x\_i-m^\*}{|x\_i-m^\*|}\rho'(|x\_i-m^\*|)}{|x\_i-m^\*|^2} \\ \\ &=\frac {x\_i-m^\*}{|x\_i-m^\*|^3}\rho'(|x\_i-m^\*|) - \rho''(|x\_i-m^\*|)\cdot \frac {x\_i-m^\*}{|x\_i-m^\*|^2} \\ \\ &=\frac {x\_i-m^\*}{|x\_i-m^\*|^2}\cdot \left[\frac {\rho'(|x\_i-m^\*|)}{|x\_i-m^\*|}-\rho''(|x\_i-m^\*|)\right]\\ \\ &=\frac {x\_i-m^\*}{|x\_i-m^\*|^2}\cdot \left[w\_i(m^\*)-\rho''(|x\_i-m^\*|)\right] \end{align}$$ Inserting this into $[3]$ we have $$\left|\sum\_{i=1}^{N}\frac {x\_i-m^\*}{|x\_i-m^\*|^2}\cdot \left[w\_i(m^\*)-\rho''(|x\_i-m^\*|)\right](x\_i-m^\*)\right| < \left|\sum\_{i=1}^{N} w\_i(m^\*)\right|$$ $$\Rightarrow \left|\sum\_{i=1}^{N}w\_i(m^\*)-\sum\_{i=1}^{N}\rho''(|x\_i-m^\*|)\right| < \left|\sum\_{i=1}^{N} w\_i(m^\*)\right| \qquad [4]$$ This is the condition that must be satisfied for the fixed point to be UAS. 
Since in our case the penalty function is convex, the sums involved are positive. So condition $[4]$ is equivalent to $$\sum\_{i=1}^{N}\rho''(|x\_i-m^\*|) < 2\sum\_{i=1}^{N}w\_i(m^\*) \qquad [5]$$ If $\rho(|x\_i-m|)$ is Huber's loss function, then we have a quadratic ($q$) and a linear ($l$) branch, $$\rho(|x\_i-m|)=\cases{ (1/2)|x\_i- m|^2 \qquad\;\;\;\; |x\_i-m|\leq \delta \\ \\ \delta\big(|x\_i-m|-\delta/2\big) \qquad |x\_i-m|> \delta}$$ and $$\rho'(|x\_i-m|)=\cases{ |x\_i- m| \qquad |x\_i-m|\leq \delta \\ \\ \delta \qquad \qquad \;\;\;\; |x\_i-m|> \delta}$$ $$\rho''(|x\_i-m|)=\cases{ 1\qquad |x\_i-m|\leq \delta \\ \\ 0 \qquad |x\_i-m|> \delta} $$ $$\cases{ w\_{i,q}(m) =1\qquad \qquad \qquad |x\_i-m|\leq \delta \\ \\ w\_{i,l}(m) =\frac {\delta}{|x\_i-m|} <1 \qquad |x\_i-m|> \delta} $$ Since we do not know how many of the $|x\_i-m^\*|$'s place us in the quadratic branch and how many in the linear, we decompose condition $[5]$ as ($N\_q + N\_l = N$) $$\sum\_{i=1}^{N\_q}\rho\_q''+\sum\_{i=1}^{N\_l}\rho\_l'' < 2\left[\sum\_{i=1}^{N\_q}w\_{i,q} +\sum\_{i=1}^{N\_l}w\_{i,l}\right]$$ $$\Rightarrow N\_q + 0 < 2\left[N\_q +\sum\_{i=1}^{N\_l}w\_{i,l}\right] \Rightarrow 0 < N\_q+2\sum\_{i=1}^{N\_l}w\_{i,l}$$ which holds. So for the Huber loss function the fixed point of the algorithm is uniformly asymptotically stable, irrespective of the $x$'s. We note that the first derivative is smaller than unity in absolute value for any $m$, not just the fixed point. What we should do now is either prove that the UAS property is also global, or that, if $m(0) = \bar x$ then $m(0)$ belongs to the neighborhood of attraction of $m^\*$.
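As a numerical sanity check on this convergence result, here is a small Python sketch of the fixed-point iteration with the Huber loss (function names are mine), started as usual from the sample mean:

```python
def huber_weight(r, delta=1.0):
    """w(r) = rho'(|r|)/|r| for the Huber loss: 1 on the quadratic
    branch, delta/|r| on the linear branch."""
    r = abs(r)
    return 1.0 if r <= delta else delta / r

def irls_location(x, delta=1.0, tol=1e-12, max_iter=1000):
    """Fixed-point IRLS for the Huber location estimate m,
    iterating m <- sum(w_i x_i) / sum(w_i) from the sample mean."""
    m = sum(x) / len(x)
    for _ in range(max_iter):
        w = [huber_weight(xi - m, delta) for xi in x]
        m_new = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

data = [0.0] * 10 + [100.0]          # ten clean points plus one gross outlier
print(round(irls_location(data), 6))  # 0.1, while the sample mean is ~9.09
```

With $\delta = 1$ this example's first-order condition reduces to $-10m + 1 = 0$, so the iteration settles at exactly $m^\* = 0.1$ after a handful of steps, illustrating the attraction toward the fixed point described above.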
setting minimum width and height for each gridster.js widget I'm using gridster.js combined with jQuery UI to make it resizable by dragging, using this bit:

```
$('.layout_block').resizable({
    grid: [grid_size + (grid_margin * 2), grid_size + (grid_margin * 2)],
    animate: false,
    minWidth: grid_size,
    minHeight: grid_size,
    containment: '#layouts_grid ul',
    autoHide: true,
    stop: function(event, ui) {
        var resized = $(this);
        setTimeout(function() {
            resizeBlock(resized);
        }, 300);
    }
});

$('.ui-resizable-handle').hover(function() {
    layout.disable();
}, function() {
    layout.enable();
});

function resizeBlock(elmObj) {
    var elmObj = $(elmObj);
    var w = elmObj.width() - grid_size;
    var h = elmObj.height() - grid_size;

    for (var grid_w = 1; w > 0; w -= (grid_size + (grid_margin * 2))) {
        grid_w++;
    }

    for (var grid_h = 1; h > 0; h -= (grid_size + (grid_margin * 2))) {
        grid_h++;
    }

    layout.resize_widget(elmObj, grid_w, grid_h);
}
```

as suggested on GitHub: <http://jsfiddle.net/maxgalbu/UfyjW/> I need to specify a minimum width and height for each widget individually, not for gridster generally. Is there an obvious way to achieve this functionality? Thanks
If it helps anyone, I managed a solution: Added a data-attribute to each widget to specify the minimum X and Y:

```
data-minx="2" data-miny="2"
```

And then rewrote the resize function:

```
$('.layout_block').resizable({
    grid: [grid_size + (grid_margin * 2), grid_size + (grid_margin * 2)],
    animate: false,
    containment: '#layouts_grid ul',
    autoHide: true,
    start: function(event, ui) {
        var resized = $(this);
        $('.layout_block').resizable( "option", "minWidth", resized.data('minx') * grid_size );
        $('.layout_block').resizable( "option", "minHeight", resized.data('miny') * grid_size );
    },
    stop: function(event, ui) {
        var resized = $(this);
        setTimeout(function() {
            resizeBlock(resized);
        }, 300);
    }
});
```

Instead of setting a global minimum height and width, I run a function on resizable start that gets the minx and miny attributes and adjusts the parameters for resizing accordingly.
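For what it's worth, the grid-snapping arithmetic that the question's `resizeBlock` function applies after each resize can be modeled in a few lines of Python (the function name is mine):

```python
def pixels_to_grid_units(px, grid_size, grid_margin):
    """Mirror the resizeBlock loop: how many grid cells does a
    widget of px pixels span, given cell size and margins?"""
    units = 1
    remaining = px - grid_size
    step = grid_size + 2 * grid_margin   # one extra cell plus its margins
    while remaining > 0:
        units += 1
        remaining -= step
    return units

# e.g. 100px cells with 5px margins:
print(pixels_to_grid_units(100, 100, 5))  # 1 unit
print(pixels_to_grid_units(210, 100, 5))  # 2 units
```

This is the value that ends up in `layout.resize_widget`, so the per-widget `data-minx`/`data-miny` minimums translate to `minx * grid_size` pixels as set in the `start` handler above.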
Shall I optimize or let compiler to do that? What is the preferred method of writing loops in terms of efficiency: Way a)

```
/* here I'm hoping that the compiler will optimize this code
   and won't be calling size every time it iterates through this loop */
for (unsigned i = firstString.size(); i < anotherString.size(); ++i) {
    //do something
}
```

or maybe should I do it this way: Way b)

```
unsigned first = firstString.size();
unsigned second = anotherString.size();
```

and now I can write:

```
for (unsigned i = first; i < second; ++i) {
    //do something
}
```

The second way seems to me like the worse option for two reasons (scope pollution and verbosity), but it has the advantage of being sure that size() will be invoked once for each object. Looking forward to your answers.
I usually write this code as:

```
/* i and size are local to the loop */
for (size_t i = firstString.size(), size = anotherString.size(); i < size; ++i) {
    // do something
}
```

This way I do not pollute the parent scope and avoid calling `anotherString.size()` for each loop iteration. It is especially useful with iterators:

```
for(some_generic_type<T>::forward_iterator it = container.begin(), end = container.end();
    it != end; ++it) {
    // do something with *it
}
```

Since C++11 the code can be shortened even more by writing a *range-based for loop*:

```
for(const auto& item : container) {
    // do something with item
}
```

or

```
for(auto item : container) {
    // do something with item
}
```
How to correctly undo a bad commit in P4V (Perforce)? Ok, so I am using Perforce P4V to back up my work for a Unity project. One of my teammates checked in some changes to the metafiles which broke everything. No problem though, right? That's the whole point of using P4. We can just revert that. Only... revert didn't work? The behavior I am seeing is: File A was changed in changelist 1; File B was changed in changelist 2; Files C and A were changed in changelist 3. Let's say changelist 3 contains the bad change. I clicked on changelist 2 in my history, then clicked Get Revision, and checked the Force Operation box. Changelist 2 being the last known good state, what I expected to happen was to have all of my files restored to the state I was in when changelist 2 was submitted. Instead, file C was reverted, but file A was not. It's like, since file A didn't change in changelist 2, it didn't bother to get that version. So I am in a state where all of the Unity metafiles are maimed and all prefab references are broken. When that didn't work I tried using Get Revision to go back to the most current state, then using Back Out. That similarly didn't work; the metafiles were still maimed. I then tried selecting the last known good state and rolling the entire project folder back to that state. Again, it didn't work. But then again, I may have maimed my project so badly at that point that nothing would have worked. The only way I have found that appears to correctly revert the files and restore the broken links is manually selecting each file or folder and reverting it to its last good commit, which is different for each file/folder since they were added and changed in different commits. What I don't understand is why the forced Get Revision didn't do that on its own, or what the "correct" way to undo a bad commit is. I even tried deleting the entire assets folder, then using Get Revision with Force to pull an entirely new copy from the server at the last known good commit. 
This appeared to work perfectly once, but when I tried to repeat it to verify my results it went back to losing all of the meta file links. The only dependable way of getting back into a good state appears to be manually force getting each file and folder to the individual last known good commit. I have consigned myself to having to manually fix my blunder this time, but I'd really appreciate help to know how to do this the right way for the future.
Use the `p4 undo` command. ``` p4 undo @BADCHANGELIST p4 submit ``` That's all there is to it! (There is a similar operation in P4V called "Back Out" -- I have encountered difficulties using this and prefer using the command line because it's easier, but YMMV.) Note that `p4 undo` and `p4 revert` are entirely different commands! Many people get confused about this because `git` calls its "undo" command `git revert`, but a `p4 revert` is not the same as a `git revert` (it's more like `git reset --hard`). You can't use `p4 revert` to undo something that's already submitted; it's only for reverting your pending work to the latest submitted state from the depot. I'll see if I can explain what went wrong with what you attempted: > > Let's say changelist 3 contains the bad change. I clicked on changelist 2 in my history, then clicked Get Revision, and checked the Force Operation box. Changelist 2 being the last known good state, what I expected to happen was to have all of my files restored to the state I was in when changelist 2 was submitted. > > > Instead, file C was reverted, but file A was not. It's like, since file A didn't change in changelist 2, it didn't bother to get that version. > > > IIRC when you click on a changelist and use it to sync in P4V, it defaults to doing `p4 sync @CHANGE,CHANGE` rather than `p4 sync @CHANGE` -- as you've observed, this is very unhelpful if you're trying to "roll back" to an earlier state across the entire workspace. You can probably fix this by clicking checkboxes in the sync dialog somewhere. I believe if you click on a changelist in the "Folder History" view it uses the path (`p4 sync path/...@CHANGE`) which should work better. 
Once you've managed to sync the workspace to the older state, if you want to make that state permanent, you need to open the files for `edit` ("check out" in P4V), `sync` to schedule a resolve ("get latest" in P4V), `resolve` (which is also "resolve" in P4V, make sure you accept `your` workspace version (P4V might call this "accept target", I'm not sure), and then `submit`. Again, I recommend just using `p4 undo` so you can skip all of those manual steps.