text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Erwin, a modeling consultant and top Solver Foundation user, encountered some problems trying to do two-way data binding using DataTable objects. There are more details on this discussion thread. Ross, a member of the Solver Foundation team, was kind enough to code up a workaround for Erwin's example. In addition to this CS file, you will need to create a new DBML file called SampleDataContext, as I described in a previous post. using System; using System.Collections.Generic; using System.Linq; using System.Data; using System.Data.OleDb; using System.Data.Linq; using System.Text; using Microsoft.SolverFoundation.Services; using System.IO; namespace OML1 { class Test { static void Main(string[] args) { Test t = new Test(); t.Solve(); } // Holds the OML model string strModel = @"Model[ Parameters[Sets,I], Parameters[Reals,p[I]], Decisions[Reals[0,Infinity],x[I]], Constraints[ Foreach[{i,I}, x[i]==p[i]] ] ]"; // SFS SolverContext context; SampleDataContext data; // Constructor public Test() { context = SolverContext.GetContext(); data = new SampleDataContext("Data Source=Sql_server_name;Initial Catalog=DataPartitionAllocation20_5;Integrated Security=True"); context.DataSource = data; } // Solve the problem public void Solve() { context.LoadModel(FileFormat.OML, new StringReader(strModel)); Parameter p = context.CurrentModel.Parameters.First(q => q.Name == "p"); p.SetBinding(data.P, "value", new string[] { "index" }); Decision x = context.CurrentModel.Decisions.First(d => d.Name == "x"); x.SetBinding(data.X, "value", new string[] { "index" }); Solution solution = context.Solve(); Console.Write("{0}", solution.GetReport()); context.PropagateDecisions(); } } } GurobiDirective gurobiDirective = new GurobiDirective(); gurobiDirective.OutputFlag = true; Solution solution = context.Solve(gurobiDirective); Optimize a model with 185 Rows, 380 Columns and 1199 NonZeroes Presolve removed 1 rows and 185 columns Presolve time: 0.30s Presolved: 184 Rows, 195 Columns, 832 Nonzeros Objective GCD is 1 Root relaxation: objective 2.783286e+03, 36 iterations, 0.02 seconds Nodes | Current Node | Objective Bounds | Work Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time 0 0 2783.2857 0 14 - 2783.2857 - - 0s 0 0 3006.1964 0 29 - 3006.1964 - - 0s 0 0 3006.1964 0 29 - 3006.1964 - - 0s 0 0 3006.1964 0 25 - 3006.1964 - - 0s 0 0 3006.1964 0 25 - 3006.1964 - - 0s H 0 0 6029.0000 3006.1964 50.1% - 0s 0 0 3006.1964 0 25 6029.0000 3006.1964 50.1% - 0s H 0 2 3805.0000 3006.1964 21.0% - 0s 0 2 3006.1964 0 25 3805.0000 3006.1964 21.0% - 0s * 75 54 29 3644.0000 3077.0000 15.6% 8.2 0s * 129 80 27 3381.0000 3077.0000 8.99% 8.7 0s * 158 62 30 3323.0000 3077.0000 7.40% 8.1 0s Cutting planes: Learned: 3 Gomory: 1 MIR: 3 Explored 1231 nodes (5693 simplex iterations) in 0.69 seconds Thread count was 2 (of 2 available processors) Optimal solution found (tolerance 1.00e-04) Best objective 3.3230000000e+03, best bound 3.3230000000e+03, gap 0.0000% gurobiDirective.Heuristics = 0.1; gurobiDirective.Presolve = PresolveLevel.None; There is a full reference in the Gurobi Solver Programming Primer that ships with Solver Foundation. My point is that you do not get a "least common denominator" solution - you get all the bells and whistles. Gurobi is just one example - there are a wide range of plug-ins available, covering most of the popular LP and MIP solvers - check out solverfoundation.com for the full list. To use a plug-in solver, you need to: 1) install the solver (duh), 2) change your exe.config or web.config to point to the solver. No code changes are required. You can find additional documentation on the default MIP directives and the third-party Solver configuration in the SFS Programming Primer. Last thing - modern programming languages like C# are cool because they promote reusability and maintainability. Don't forget about that when you are writing SFS code. It's common for models in a particular vertical (say, finance or transportation) to have common "blocks" of goals or constraints that are re-used over and over again. It's pretty easy to build reusable model libraries using extension methods. For example, take the assignment constraints in my TSP example. I could factor them out: public static class ModelingExtensions { public static void AssignmentConstraintsNoDiag(this Model model, Set s, Decision assign) { model.AddConstraint("A1", Model.ForEach(s, i => Model.Sum(Model.ForEachWhere(s, j => assign[i, j], j => i != j)) == 1)); model.AddConstraint("A2", Model.ForEach(s, j => Model.Sum(Model.ForEachWhere(s, i => assign[i, j], i => i != j)) == 1)); } } On the Solver Foundation MSDN forum there was a question about how to read model data from a DB and use it within a Solver Foundation model. In this post I will extend my production planning sample to use LINQ to SQL. To follow along at home you will need to have a recent version of SQL Server installed locally, and some basic knowledge of how to create SQL tables. You should also have compiled and run the code from my previous post. Step 1: Create and populate the DB The first step is to create tables corresponding to the entities in my model. I created a very simple DB with three tables: Countries, Products, and Yields. The Yields table has foreign key constraints to the Countries and Products tables. Here is a diagram: To populate the DB I just wrote a script that inserts my problem data, and ran itin SQL Management Studio. Here's the script (and forgive my SQL): GO DELETE FROM Yields DELETE FROM Products DELETE FROM Countries GO INSERT INTO Countries (Id, Name, Limit, Cost) VALUES (0, 'SA', 9000, 20) INSERT INTO Countries (Id, Name, Limit, Cost) VALUES (1, 'VZ', 6000, 15) GO INSERT INTO Products (Id, Name, Demand) VALUES (0, 'Gas', 1900) INSERT INTO Products (Id, Name, Demand) VALUES (1, 'Jet Fuel', 1500) INSERT INTO Products (Id, Name, Demand) VALUES (2, 'Lubricant', 500) GO INSERT INTO Yields (CountryId, ProductId, Value) VALUES (0, 0, 0.3) INSERT INTO Yields (CountryId, ProductId, Value) VALUES (1, 0, 0.4) INSERT INTO Yields (CountryId, ProductId, Value) VALUES (0, 1, 0.4) INSERT INTO Yields (CountryId, ProductId, Value) VALUES (1, 1, 0.2) INSERT INTO Yields (CountryId, ProductId, Value) VALUES (0, 2, 0.2) INSERT INTO Yields (CountryId, ProductId, Value) VALUES (1, 2, 0.3) Step 2: Create Entity and DataContext classes in Visual Studio Scott Guthrie's blog (and the MSDN docs) show you exactly how to do this: Step 3: Modify Solver Foundation Services data binding code This is in fact very easy because Solver Foundation Services was designed to work well with LINQ. Take the PetrochemDataBinding sample from last time, and change the SetBinding statements to work with the PetrochemDataContext class instead of a hardcoded DataSet. The code is almost identical: private static void PetrochemLinqDataBinding() { SolverContext context = SolverContext.GetContext(); context.ClearModel(); Model model = context.CreateModel(); PetrochemDataContext db = new PetrochemDataContext(); Set products = new Set(Domain.Any, "products"); Set countries = new Set(Domain.Any, "countries"); Parameter demand = new Parameter(Domain.Real, "demand", products); demand.SetBinding(db.Products, "Demand", "Id"); Parameter yield = new Parameter(Domain.Real, "yield", products, countries); yield.SetBinding(db.Yields, "Value", "ProductId", "CountryId"); Parameter limit = new Parameter(Domain.Real, "limit", countries); limit.SetBinding(db.Countries, "Limit", "Id"); Parameter cost = new Parameter(Domain.Real, "cost", countries); cost.SetBinding(db.Countries, "Cost", "Id");); } That's all there is to it! Note that instead of passing the entire collection (e.g. db.Countries) you could easily use LINQ statements or stored procedures, or whatever you like. Here's an example that I walked through during yesterday's INFORMS session. Erwin has two blog postings about Solver Foundation and the traveling salesman problem, but I want to throw in my two cents because I want to emphasize a couple of points: The traveling salesman problem is a classical problem in computer science, and you should bow your head in shame if you don't know about it (and turn in your conference badge if you happen to be in Phoenix).; using SolverFoundation.Plugin.Gurobi;: We have one parameter in our model: With that in mind, here's the model. Explanation of the goals and constraints follow. static void Main(string[] args) {)); model.AssignmentNoDiag(city, assign); //.. In this post I am going to present two complete C# programs for modeling and solving a simple production problem using Solver Foundation. In this example we have two refineries (located in Saudi Arabia and Venzuela) that produce three products: gasoline, jet fuel, and lubricant. The goal is to minimize production costs, which depend on location. There is demand for each product which must be met. Finally, each production site has a limited production capacity. The code for the simple case is below. Creating a model amounts to adding the goals, decisions, and constraints using the appropriate Add method. The signature for each is similar: the first argument is the name and the second argument is a term. Terms are created by combining decisions or parameters using the usual operators, or by using static methods on the Model class (as we will see in our second example). Finally, notice the call to SolverContext.Solve(). I have supplied a Directive that specifies that the Simplex solver should be used. using System; using System.Collections.Generic; using System.Data; using System.Linq; using System.Text; using Microsoft.SolverFoundation.Services; namespace Microsoft.SolverFoundation.Samples.Petrochem { class Program { static void Main(string[] args) { PetrochemSimple(); } private static void PetrochemSimple() { SolverContext context = SolverContext.GetContext(); context.ClearModel(); Model model = context.CreateModel(); Decision sa = new Decision(Domain.RealRange(0, 9000), "SA"); Decision vz = new Decision(Domain.RealRange(0, 6000), "VZ"); model.AddDecisions(sa, vz); model.AddGoal("goal", GoalKind.Minimize, 20 * sa + 15 * vz); model.AddConstraint("demand1", 0.3 * sa + 0.4 * vz >= 1900); model.AddConstraint("demand2", 0.4 * sa + 0.2 * vz >= 1500); model.AddConstraint("demand3", 0.2 * sa + 0.3 * vz >= 500); Solution solution = context.Solve(new SimplexDirective()); Report report = solution.GetReport(); Console.WriteLine(report); } } Now let's reimplement the same example using data binding. Data binding is a powerful mechanism for creating large, maintainable models. Notice that in my first example, the numeric data such as the yields, the demands, and capacities were expressed directly in the terms. The first step in using Solver Foundation data binding is to lift these values into Parameters. It is often useful to create indexed parameters using Sets. In this example, there are two clearly defined Sets: the set of countries, and the set of products. In my example below I create a DataSet which contains the data for each of my parameters. (The GetData() method is just an example, it's not pretty but it is needed to complete the example.) Then I create a series of indexed parameters. For each of them I call the SetBinding method to associate the data with the parameter. In addition to the data, SetBinding also requires the caller to indicate which property specifies the values for the parameter. If the parameter is indexed, I also need to specify the parameters of the index properties. Since I am working with DataTables, these are simply column names. Notice that I could swap in any other data source that is enumerable - in particular LINQ works really well with Solver Foundation parameters. After the parameters are created, I define the decisions, goals, and constraints. Notice that there is only one decision - it is indexed. The Model.Sum and Model.Foreach operations allow me to define a series of constraints over one or more indexed sets in one single statement. This means that if I were to add more countries or products, my model definition would not change at all. private static void PetrochemDataBinding() { SolverContext context = SolverContext.GetContext(); context.ClearModel(); Model model = context.CreateModel(); // Retrieve the problem data. DataSet data = GetData(); Set products = new Set(Domain.Any, "products"); Set countries = new Set(Domain.Any, "countries"); Parameter demand = new Parameter(Domain.Real, "demand", products); demand.SetBinding(data.Tables["Demand"].AsEnumerable(), "Demand", "Product"); Parameter yield = new Parameter(Domain.Real, "yield", products, countries); yield.SetBinding(data.Tables["Yield"].AsEnumerable(), "Yield", "Product", "Country"); Parameter limit = new Parameter(Domain.Real, "limit", countries); limit.SetBinding(data.Tables["Limit"].AsEnumerable(), "Limit", "Country"); Parameter cost = new Parameter(Domain.Real, "cost", countries); cost.SetBinding(data.Tables["Cost"].AsEnumerable(), "Cost", "Country");); } private static DataSet GetData() { string[] products = new string[] { "Gas", "Jet Fuel", "Lubricant" }; string[] countries = new string[] { "SA", "VZ" }; double[][] yield = new double[][] { new double[] { 0.3, 0.4 }, new double[] { 0.4, 0.2 }, new double[] { 0.2, 0.3 } }; double[] demand = new double[] { 1900, 1500, 500 }; double[] limit = new double[] { 9000, 6000 }; double[] cost = new double[] { 20, 15 }; DataSet dataSet = new DataSet(); #region Fill DataSet DataTable table = new DataTable("Yield"); dataSet.Tables.Add(table); table.Columns.Add("Product", typeof(string)); table.Columns.Add("Country", typeof(string)); table.Columns.Add("Yield", typeof(double)); for (int p = 0; p < products.Length; p++) { for (int c = 0; c < countries.Length; c++) { DataRow row = table.NewRow(); row[0] = products[p]; row[1] = countries[c]; row[2] = yield[p][c]; table.Rows.Add(row); } } table = new DataTable("Demand"); dataSet.Tables.Add(table); table.Columns.Add("Product", typeof(string)); table.Columns.Add("Demand", typeof(double)); for (int p = 0; p < products.Length; p++) { DataRow row = table.NewRow(); row[0] = products[p]; row[1] = demand[p]; table.Rows.Add(row); } table = new DataTable("Limit"); dataSet.Tables.Add(table); table.Columns.Add("Country", typeof(string)); table.Columns.Add("Limit", typeof(double)); for (int c = 0; c < countries.Length; c++) { DataRow row = table.NewRow(); row[0] = countries[c]; row[1] = limit[c]; table.Rows.Add(row); } table = new DataTable("Cost"); dataSet.Tables.Add(table); table.Columns.Add("Country", typeof(string)); table.Columns.Add("Cost", typeof(double)); for (int c = 0; c < countries.Length; c++) { DataRow row = table.NewRow(); row[0] = countries[c]; row[1] = cost[c]; table.Rows.Add(row); } #endregion return dataSet; } A few of us on the Solver Foundation team will be presenting at the INFORMS Practice Conference in Phoenix, Arizona. I'm looking forward to it! We have both a workshop and a session - I will also be participating in Gurobi's session to show off its integration with Solver Foundation. Here are the details - looking forward to seeing you there. Technology WorkshopSunday, April 26, 9AM - 12PMMicrosoft Solver Foundation is a pure, managed code runtime for mathematical programming, modeling, and optimization. Microsoft Solver Foundation Microsoft Solver Foundation. Software TutorialsMonday, April 27, 9:10AM - 10:00AMSolver Foundation is a framework and managed code runtime for mathematical programming, modeling and optimization. This tutorial will focus on the technical overview of Solver Foundation, and how Third-party solver vendors, modelers and solution specialists can leverage Solver Foundation from all CLS-compliant languages and Microsoft Office. I am pleased to announce that Solver Foundation v1.2 is live on solverfoundation.com! Our goal in 1.2 was to make a few key improvements to address feedback that we have received from partners and customers. If you have feedback, questions, suggestions…please post it here, or on our MSDN forum. Positive or negative - that's okay. I am particularly interested in hearing how you use Solver Foundation in your application - or what features you would like to see. Back to 1.2: I. Two weeks back I posted two articles showing how easy it is to model critical path scheduling using Microsoft Solver Foundation. I received a few emails asking about various extensions; I will be covering those in upcoming posts. Julian just wrote a great blog post that covers the most commonly requested extension - resource constrained scheduling. If you are getting started with Solver Foundation and want to see an interesting, instructive example, I encourage you to check out his post. Two things that I really like about his OML: Julian mentions it in his post, but I also want to call out another great resource for learning OML. Erwin Kalvalagen has written an OML tutorial which includes several interesting examples. It's a great complement to the Excel Programming Primer that is part of the Solver Foundation documentation. I am going to be rolling out the rest of my branch-and-bound algorithm in the next few posts. To make that easier, in this post I introduce some common matrix, vector, and permutation methods. It turns out that for technical computing applications, C#'s extension methods (introduced in 3.0) are awesome. With vectors it's great because you retain the control of having direct array access, but you get the nice object-oriented notation. I've left out comments and error-checking, and it's not super-optimized, but you get the idea. None of these methods end up being the bottleneck. Most of the methods are obvious, but I do want to comment on the permutation methods. The goal of QAP is to find an optimal assignment of facilities to locations, represented as a permutation. In the course of our branch-and-bound algorithm, we'll be making partial assignments - that is, only some of the entries in the permutation will be filled in. By convention, p[i] == -1 will mean that facility i is unassigned. An index where p[i] < 0 is called "unused". When we branch, we'll want to pick an unused facility (or location), and try assigning all unused locations (or facilities) to it. The use of C#'s iterator and yield concepts really help here. That said, here's the code. public static class MatrixUtilities { #region Vector public static void ConstantFill(this T[] data, T val) { for (int i = 0; i < data.Length; i++) { data[i] = val; } } public static int ArgMin(this double[] data, out double best) { if (data.Length == 0) { best = Double.MinValue; return -1; } int iBest = 0; best = data[0]; for (int i = 1; i < data.Length; i++) { if (best > data[i]) { best = data[i]; iBest = i; } } return iBest; } public static int ArgMax(this double[] data, out double best) { if (data.Length == 0) { best = Double.MinValue; return -1; } int iBest = 0; best = data[0]; for (int i = 1; i < data.Length; i++) { if (best < data[i]) { best = data[i]; iBest = i; } } return iBest; } #endregion #region Permutation public static void Swap(this int[] p, int i, int j) { int temp = p[i]; p[i] = p[j]; p[j] = temp; } public static int FindUnused(this int[] data, int index) { foreach (int unused in data.UnusedIndices()) { if (index-- <= 0) { return unused; } } return -1; } public static IEnumerable UnusedIndices(this int[] data) { for (int i = 0; i < data.Length; i++) { if (data[i] < 0) { yield return i; } } } public static string PermutationToString(this int[] p, bool oneBased) { StringBuilder build = new StringBuilder(p.Length * 4); int width = ((int)Math.Log10(p.Length)) + 2; build.Append("["); for (int i = 0; i < p.Length; i++) { int index = oneBased ? p[i] + 1 : p[i]; build.Append(index.ToString().PadLeft(width)); } build.Append("]"); return build.ToString(); } #endregion #region Matrix public static string MatrixToString(this double[][] A) { if (A != null) { StringBuilder build = new StringBuilder(A.Length * A.Length * 3); for (int i = 0; i < A.Length; i++) { if (A[i] != null) { for (int j = 0; j < A[i].Length; j++) { build.AppendFormat("{0,4}", A[i][j]); build.Append(" "); } build.AppendLine(); } } return build.ToString(); } return null; } public static double[][] NewMatrix(int m, int n) { double[][] M = new double[m][]; for (int i = 0; i < M.Length; i++) { M[i] = new double[n]; } return M; } public static double[][] NewMatrix(int n) { return NewMatrix(n, n); } #endregion } This: The motivation and code for enumerating and branching will be the subject of the next two posts.? Whew - an exciting and tiring week is over. We had hundreds of employees stop by our TechFest booth to ask questions, talk about their team's optimization or modeling problems, look at our demos, or simply hear more about Solver Foundation. A big thanks to my teammates Lengning, Lucas, and Lin for doing such a great job this last week. OML - our declarative language for specifying optimization problems - was received with great interest. Microsoft France's experience using Solver Foundation to schedule TechDays really "worked" as an example. Even our simple quadratic programming example in Excel opened a lot of eyes. There several questions about data binding (and there have been some recent threads in our forums), so I think I'd like to devote a couple of posts to that. Lastly, I was surprised to get several inquiries into our nonlinear unconstrained optimizer - that's an area I will devote some time to that as well. I will be attending a couple of conferences over the next several months, but for now it's back to work on the next version of Solver Foundation. Another busy day - we spoke with literally hundreds of Microsoft employees about Solver Foundation at TechFest '09. Microsoft is a huge company, so we got a chance to talk to people from many different parts of the company with varying backgrounds. Some are intimately familiar with optimization, others are not. So part of the challenge is to try to "meet people where they are at" and talk about Solver Foundation in a way that makes sense for them, without trivializing it or making it sound like magic beans. If you had the opportunity to come by our booth tomorrow, you'd get a few cool demos, and a variation on the following "pitch". Depending on what you wanted to talk about we could talk more about how to model real problems, or maybe the API, or maybe some of the solvers that we're developing. Or maybe the stock market, or how it sucked to lose the Sonics. If people leave with a good idea of what we're about and can think of some situations where Solver Foundation could help, I'm happy. Anyway, here's goes: People and businesses need to be able to juggle different priorities and constraints to make good decisions. Examples include production planning, scheduling projects, configuring IT systems, advertising online. Microsoft Solver Foundation is a managed code platform for planning, scheduling, configuration, and optimization. Solver Foundation lets you easily describe your problem, get it solved, and connect it to your application. It has three main layers: • Modeling and Programming: the modeler lets you describe optimization problems declaratively: "what", not "how". You can do it in code, or in our Excel add-in. • Solver Foundation Services: a full-featured .Net library that can be accessed from Visual Studio, C#, VB, ASP.Net, Silverlight. It transparently handles parallelism & multiple cores. It provides a rich object model, events, and data binding. • Solvers: we provide a bunch of solvers that cover a wide range of optimization problems including: LP, QP, constraint, MIP, nonlinear unconstrained. We feature an open, extensible architecture that lets plug-in third party solvers if you so choose. Solver Foundation is developed in conjunction with researchers in Redmond and Cambridge, and is a great example of Microsoft Research innovation. We recently released our 1.1 version and you can check out solverfoundation.com to download our free Express version!
http://blogs.msdn.com/natbr/
crawl-002
en
refinedweb
A blog on coding, .NET, .NET Compact Framework and life in general.... <Added additional stuff after a discussion on internal CSharp user list> I was going through the generics spec in C#2.0 and found something really weird. default(t) Consider the following code. class MyGenClass<T>{ public void Method(ref T t) { t = null; }} class { {"); int. If you would like to receive an email when updates are made to this post, please register here RSS I tried this and it worked T obj default(T).Equals(obj); The Mono project just ran into this issue: DOES NOT WORK if T is a reference type since you cannot call .Equals on null... I want to do something like: class myObject{ public T Fetch<T>(ICriteria oCriteria){ T fetchResult = default(T); if(T == something){ fetchResult = doSomething(); } } } private something doSomething(){ return new something(); which gives me a compile time error at fetchResult = doSomething(); Any help? Regards, Sunny "The int gets converted to a nullable type" - nope, i checked the IL and it's not, still int32 So If You'd Like To Check Inside A Generic Class If A Variable Eqauls To Default You Should Do: public bool IsDefault(TInterpretationResult res) { //Compiler Error //return (default(T) == res); var obj = default(T) as object; if (obj == null) //Is Not Value Type return (res == null); if (obj.GetType().IsValueType) { return (ValueType)obj == (ValueType)obj; } Debug.Fail("Should Not Reach Here!"); return false; } Seems Kinda Ugly, It's Strange That MS Didn't Handle It In A More Elegent Way Oops ... Fixed: public static bool IsDefaultValue(T res) if (res == null) //res Is Not A Value Type return false; return res.Equals(obj); Still Ugly! // The clean way ;) return Object.Equals(res, null); Thanks for the info! I just ran into this exact problem.
http://blogs.msdn.com/abhinaba/archive/2005/12/08/501544.aspx
crawl-002
en
refinedweb
NAMETcl_ExprLong, Tcl_ExprDouble, Tcl_ExprBoolean, Tcl_ExprString - evaluate an expression SYNOPSIS #include <tcl.h> or objPtr. - - char *string (in) Expression to be evaluated. Must be in writable memory (the expression parser makes temporary modifications to the string during parsing, which it undoes before returning). - -. interp argument refers to an interpreter used to evaluate the expression (e.g. for variables and nested Tcl commands) and to return error information. For all of these procedures the return value is a standard Tcl result: TCL_OK means the expression was successfully evaluated, and TCL_ERROR means that an error occurred while evaluating the expression. If TCL_ERROR is returned then. SEE ALSOTcl_ExprLongObj, Tcl_ExprDoubleObj, Tcl_ExprBooleanObj, Tcl_ExprObj KEYWORDSboolean, double, evaluate, expression, integer, object, string Important: Use the man command (% man) to see how a command is used on your particular computer. >> Linux/Unix Command Library
http://linux.about.com/library/cmd/blcmdl3_ExprLong.htm
crawl-002
en
refinedweb
Microsoft Enterprise Search Blog Server2008-09-02T16:47:00ZObservations from the Text Analytics Summit 2009<P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>One of the hard parts about organizing a conference like the 5<SUP>th</SUP> annual </FONT><A href=""><FONT size=3 face=Calibri>Text Analytics Summit</FONT></A><FONT size=3 face=Calibri>,. </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri. <SPAN style="mso-spacerun: yes"> </SPAN>Overall, I appreciated the more commercial/consumer focus and felt that the conference organizers did a great job of finding representative examples and balancing the practical (vendor briefings and case studies) with the theoretical.</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri. </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><B style="mso-bidi-font-weight: normal"><FONT size=3><FONT face=Calibri>Trend 1:<SPAN style="mso-spacerun: yes"> </SPAN><SPAN style="mso-spacerun: yes"> </SPAN>ETL-like Tools<?xml:namespace prefix = o<o:p></o:p></FONT></FONT></B></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>Ok, this is not really a trend in text analytics, but it is one in enterprise search that is informed by text and data analytics. </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>Many of the vendors at the conference demonstrated graphical tools designed to simplify the process of building text analysis “pipelines”. These tools look very much like the Extract, Transform, and Load (</FONT><A href=""><FONT color=#0000ff size=3 face=Calibri>ETL</FONT></A><FONT size=3 face=Calibri>) tools that have been around for many years in the data integration world. The difference is that the text analysis versions of these tools focus on operations for handling unstructured text. For example, </FONT><A href=""><FONT size=3 face=Calibri>named entity recognizers</FONT></A><FONT size=3 face=Calibri> are a common text analytics task for automatically recognizing and tagging things like person names, company names, and locations in text. </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>This ETL “pattern” exists in enterprise search, as well, where information must be <I style="mso-bidi-font-style: normal"><U>extracted</U></I> from a source repository (e.g. an email archive), <I style="mso-bidi-font-style: normal"><U>transformed</U></I> into an enhanced, canonical representation (e.g. annotated XML), and <I style="mso-bidi-font-style: normal"><U>loaded</U></I> into a database or index for searching. <SPAN style="mso-spacerun: yes"> </SPAN. </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>This is changing.<SPAN style="mso-spacerun: yes"> </SPAN. </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri </FONT><A href=""><FONT color=#0000ff size=3 face=Calibri>convergence</FONT></A><FONT size=3 face=Calibri> between these two spaces. Sue Feldman and Hadley Reynolds of </FONT><A href=""><FONT color=#0000ff size=3 face=Calibri>IDC</FONT></A><FONT size=3 face=Calibri> reinforced this role of text analytics by describing it as a cornerstone of </FONT><A href=""><FONT size=3 face=Calibri>Unified Information Access</FONT></A><FONT size=3 face=Calibri> during their Market Report at the conference. Given this, it shouldn’t be surprising to see that, as text analytic tools and concepts have found their way into BI applications, traditional BI tools and concepts, like ETL, are finding a place within enterprise search. </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><B style="mso-bidi-font-weight: normal"><FONT size=3><FONT face=Calibri>Trend 2:<SPAN style="mso-spacerun: yes"> </SPAN>Empowering the End User <o:p></o:p></FONT></FONT></B></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3><FONT face=Calibri?” <SPAN style="mso-spacerun: yes"> </SPAN>- meaning, when will the tools be so easy to use that your end users won’t need you to do their investigation for them? <SPAN style="mso-spacerun: yes"> </SPAN></FONT></FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri. </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>In practice, we are seeing more applications that combine conventional search with advanced text analytics in ways that bring a more powerful search experience to relatively unsophisticated end users. </FONT><A href=""><FONT size=3 face=Calibri>Silobreaker.com</FONT></A><FONT size=3 face=Calibri> is a clever site that combines the richness of text analytics within what is fundamentally a news search application. Unlike other news search sites, Silobreaker offers options and tools that help to uncover and <U>discover</U> interesting and potentially novel connections and patterns in the news. There are still some usability challenges with a consumer site like Silobreaker, but I like it as an example of ad hoc search converging with iterative knowledge discovery. </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>The trend toward empowering users with more than just a search box and list of blue links also reaches into less “analytical” consuemr applications. Two examples are </FONT><A href=""><FONT color=#0000ff size=3 face=Calibri></FONT></A><FONT size=3 face=Calibri> and </FONT><A href=""><FONT color=#0000ff size=3 face=Calibri></FONT></A><FONT size=3><FONT face=Calibri>. Both sites show the power of applying analytics to both structured and textual data (classifieds in the case of Oodle, real estate postings in the case of Globrix) in what are otherwise fundamentally search applications. <SPAN style="mso-spacerun: yes"> </SPAN></FONT></FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><B style="mso-bidi-font-weight: normal"><FONT size=3><FONT face=Calibri>Trend 3: <SPAN style="mso-spacerun: yes"> </SPAN>Taking Sentiment Analysis to the next level<o:p></o:p></FONT></FONT></B></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri </FONT><A href=""><FONT size=3 face=Calibri>Bing Web search</FONT></A><FONT size=3 face=Calibri>). </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri. </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri. </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri: </FONT></P> <P style="TEXT-INDENT: 0.5in; MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>“Acme’s new P40 digital camera has a good viewer, but its controls are awkward.”</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>While it’s relatively easy for a human, it takes some heavy linguistic lifting for a machine to recognize that the sentiment of this opinion is directed not just at Acme or at the P40 digital camera, but specifically at the <I style="mso-bidi-font-style: normal">viewer</I> (positive sentiment) and the <I style="mso-bidi-font-style: normal">controls</I> (negative sentiment). It’s ever trickier establishing what the word “its” refers to in the 2nd part of the sentence. Is it the Acme P40 itself, or just the viewer?</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri.</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>Nate</FONT></P><img src="" width="1" height="1">ntreloar Big – Search Scale and Performance on a Budget<P class=MsoNormal<FONT face=Calibri size=3>I recently came across Paul Nelson’s informative post on </FONT><A href=""><FONT face=Calibri size=3>search scalability</FONT></A><FONT face=Calibri size=3>. I don’t know how long it’s been up there, but reading it made me think of customers I’ve spoken with recently who are looking to scale up their search deployments, but, due to tight budgets, want to do so without simply buying more hardware. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>Paul focuses on document count as the main consideration for architecting scalable search, saying:</FONT></P> <P class=MsoNormal<I style="mso-bidi-font-style: normal"><FONT size=3><FONT face=Calibri>There is really only one dimension of size: The total count of documents in the system.<?xml:namespace prefix = o<o:p></o:p></FONT></FONT></I></P> <P class=MsoNormal<FONT face=Calibri size=3>He goes on to describe several useful strategies for scaling search for “large” systems – those with document counts of >500 million. Importantly, imo, he also points out that even medium sized systems (10-100 million docs) will have special scaling needs depending on their performance requirements:</FONT></P> <P class=MsoNormal<I style="mso-bidi-font-style: normal"><SPAN style="mso-bidi-font-size: 9.0pt"><FONT size=3><FONT face=Calibri. <SPAN style="mso-spacerun: yes"> </SPAN><SPAN style="mso-spacerun: yes"> </SPAN><o:p></o:p></FONT></FONT></SPAN></I></P> <P class=MsoNormal<FONT face=Calibri size=3. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3. </FONT></P> <P class=MsoNormal<B style="mso-bidi-font-weight: normal"><FONT size=3><FONT face=Calibri>Search System Performance Metrics<o:p></o:p></FONT></FONT></B></P> <P class=MsoNormal<FONT face=Calibri size=3>Metrics for search system performance typically fall into two categories: <B style="mso-bidi-font-weight: normal">query performance </B>and <B style="mso-bidi-font-weight: normal">indexing performance</B>. In turn, these categories each have two measures associated with them:</FONT></P> <P class=MsoNormal<U><FONT size=3><FONT face=Calibri>Query performance<o:p></o:p></FONT></FONT></U>< latency</B> (or response time) – the time it takes for a query to be processed and results to be returned.</FONT>< rate</B> – the rate at which the system can process queries. Usually measured in queries per second (or QPS). </FONT></FONT></P> <P class=MsoNormal<U><FONT size=3><FONT face=Calibri>Indexing performance*<o:p></o:p></FONT></FONT></U>< latency</B> – the time it takes for a document to be indexed and made available to search. </FONT>< rate</B> – the rate at which the system can process and index documents. Measured in documents per second. </FONT></FONT></P> <P class=MsoNormal<FONT size=3><FONT face=Calibri>*<I style="mso-bidi-font-style: normal">Indexing performance assumes systems that actually create an index or some other sort of database optimized for information retrieval. This rules out “federated search” engines, which rely on other systems to create and manage these indices.<o:p></o:p></I></FONT></FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>There are some variations on these measurements. For example, you can track average or peak values for each. <SPAN style="mso-spacerun: yes"> </SPAN>Document count per node (where a node = a Processing/Memory/Storage unit on a network) impacts all of these measures, but there’s a balance between query performance and index performance that also influences how many documents you can squeeze onto a single node.<SPAN style="mso-spacerun: yes"> </SPAN>The perhaps obvious explanation is that the more system resources you allocate to serve query performance, the fewer resources you’ll have available for indexing, and vice versa. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3. </FONT></P> <P class=MsoNormal<B style="mso-bidi-font-weight: normal"><FONT size=3><FONT face=Calibri>Impact of Features<o:p></o:p></FONT></FONT></B></P> <P class=MsoNormal<FONT face=Calibri size=3. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>I remember one FAST partner describing how their legacy eDiscovery tool (built on relational database technology) took up to <B style="mso-bidi-font-weight: normal">2 weeks</B>. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3.</FONT></P> <P class=MsoNormal<B style="mso-bidi-font-weight: normal"><FONT size=3><FONT face=Calibri>Know Your Options<o:p></o:p></FONT></FONT></B></P> <P class=MsoNormal<FONT face=Calibri size=3.</FONT></P> <P class=MsoNormal<FONT face=Calibri size=3”. </FONT></P> <P class=MsoNormal<A href=""><FONT face=Calibri size=3></FONT></A><FONT size=3><FONT face=Calibri><SPAN style="mso-spacerun: yes"> </SPAN>(Search Engine Developers group on Yahoo)</FONT></FONT></P> <P class=MsoNormal<A href=""><FONT face=Calibri size=3></FONT></A><FONT size=3><FONT face=Calibri><SPAN style="mso-spacerun: yes"> </SPAN>(Enterprise Search Engine Professionals on LinkedIn)</FONT></FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>Nate </FONT></P><img src="" width="1" height="1">ntreloar Search – From What to Why?<P class=MsoNormal<FONT face=Calibri size=3>Day 1 at the </FONT><A href=""><FONT face=Calibri size=3>Enterprise Search Summit</FONT></A><FONT face=Calibri size=3> in NYC is wrapping up and I’ve just listened to </FONT><A href=""><FONT face=Calibri size=3>Lisa Denissen</FONT></A><FONT face=Calibri size=3> from Shearman & Sterling talk about Actionable Search. Actionable search is a key tenet of Microsoft’s enterprise search strategy, so it was good to see promotion of the concept.</FONT></P> <P class=MsoNormal<FONT face=Calibri size=3.</FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>Understanding what motivates people to search means going beyond capturing requirements like “I need to be able to search all of Product Marketing’s PowerPoints” to addressing more precise needs like <SPAN style="mso-spacerun: yes"> </SPAN> <U>why</U> the user is searching, not just <U>what</U> they hope to find.</FONT></P> <P class=MsoNormal<FONT face=Calibri size=3. </FONT><A href=""><FONT face=Calibri color=#0000ff size=3>Facebook</FONT></A><FONT face=Calibri size=3>.</FONT></P> <P class=MsoNormal<FONT face=Calibri size=3?”<SPAN style="mso-spacerun: yes"> </SPAN>Actionable search promises to close this gap between information access and outcomes. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>Nate</FONT></P><img src="" width="1" height="1">ntreloar and Natural User Interfaces - Part 2<P style="MARGIN-BOTTOM: 0pt"><FONT face=Calibri size=3>In my </FONT><A href="" mce_href=""><FONT face=Calibri size=3>first post</FONT></A><FONT face=Calibri size=3> on this subject last week, I referred to a scene in the movie “Minority Report” as a visionary example of a natural user interfaces (NUIs) and, more to the theme of this blog, a visionary example of ad hoc search within a NUI.<SPAN style="mso-spacerun: yes"> </SPAN>I realize that I didn’t offer a definition of NUIs in that post, so, before I go back to the search connection, here’s a quick primer.</FONT></P> <P style="MARGIN-BOTTOM: 0pt"><B style="mso-bidi-font-weight: normal"><FONT size=3><FONT face=Calibri>NUIs Defined</FONT></FONT></B><B style="mso-bidi-font-weight: normal"><?xml:namespace prefix = o /><o:p><FONT face=Calibri size=3> </FONT></o:p></B></P> <P style="MARGIN-BOTTOM: 0pt"><FONT face=Calibri size=3>Natural user interfaces or NUIs rely on natural expressions like touches and gestures to directly and intuitively control the experience of a software application. The word “natural” means that the interaction is not controlled through an artificial device, like a mouse or keyboard. <I style="mso-bidi-font-style: normal">(I take this to imply that a Nintendo Wii is <U>not</U> an example of a NUI, since there are still artificial controllers involved. Other opinions and thoughts on this are welcomed).</I> </FONT></P> <P style="MARGIN-BOTTOM: 0pt"><FONT face=Calibri size=3 </FONT><A href="" mce_href=""><FONT face=Calibri size=3>article on touch computing from PC Magazine</FONT></A><FONT face=Calibri size=3> offers a catalog of some of the systems currently available. </FONT></P> <P style="MARGIN-BOTTOM: 0pt"><B style="mso-bidi-font-weight: normal"><FONT size=3><FONT face=Calibri>Microsoft Surface</FONT></FONT></B><B style="mso-bidi-font-weight: normal"><o:p><FONT face=Calibri size=3> </FONT></o:p></B></P> <P style="MARGIN-BOTTOM: 0pt"><FONT face=Calibri size=3>One of the technologies mentioned in the PC Magazine story is </FONT><A href="" mce_href=""><FONT size=3><FONT face=Calibri>Microsoft <SPAN style="mso-bidi-font-family: 'Times New Roman'; mso-fareast-font-family: 'Times New Roman'; mso-bidi-font-size: 11.0pt; mso-ascii-font-family: Calibri; mso-hansi-font-family: Calibri">Surface</SPAN></FONT></FONT></A><FONT face=Calibri size=3>.<SPAN style="mso-spacerun: yes"> </SPAN>Microsoft Surface<SPAN style="mso-bidi-font-family: 'Times New Roman'; mso-fareast-font-family: 'Times New Roman'; mso-bidi-font-size: 11.0pt; mso-ascii-font-family: Calibri; mso-hansi-font-family: Calibri"> is a Windows powered device in the form factor of a table - a coffee table, if you will - with a surface that supports touch and gesture interaction. There are other NUI platforms, but t</SPAN>here are a couple things that make Microsoft Surface different and interesting. </FONT></P> <P style="MARGIN-BOTTOM: 0pt"><SPAN style="mso-bidi-font-family: 'Times New Roman'; mso-fareast-font-family: 'Times New Roman'; mso-bidi-font-size: 11.0pt; mso-ascii-font-family: Calibri; mso-hansi-font-family: Calibri"><o:p><FONT face=Calibri size=3></FONT></o:p></SPAN></P> <P class=MsoNormal<FONT face=Calibri size=3. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>Second, Microsoft Surface devices have built-in cameras that can not only track touches and gestures, but can recognize digitally tagged objects and can initiate specific actions when these objects are placed on the table. For example, </FONT><A href="" mce_href=""><FONT face=Calibri size=3>Infusion Development</FONT></A><FONT face=Calibri size=3> has created an </FONT><A href="" mce_href=""><FONT face=Calibri size=3>application</FONT></A><FONT face=Calibri size=3> designed to enhance the doctor patient consultation experience. By placing a tagged card on Microsoft Surface, doctors can use and access interactive cardiac images, dynamic charts and clinical documents to help explain medical conditions and procedures to their patients. </FONT></P> <P class=MsoNormal<B style="mso-bidi-font-weight: normal"><FONT size=3><FONT face=Calibri>NUIs:<SPAN style="mso-spacerun: yes"> </SPAN>Where’s the Search?<o:p></o:p></FONT></FONT></B></P> <P class=MsoNormal<FONT face=Calibri size=3 </FONT><A href="" mce_href=""><FONT face=Calibri size=3>TouchWall demo by Bill Gates</FONT></A><FONT face=Calibri size=3> from last year’s CEO Summit focused on navigation. Where’s the search?</FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>I’ll grant that structural navigation metaphors in NUIs are really cool and work pretty well.<SPAN style="mso-spacerun: yes"> </SPAN.</FONT></P> <P style="MARGIN-BOTTOM: 0pt"><B style="mso-bidi-font-weight: normal"><FONT size=3><FONT face=Calibri>A Prototype and a Request</FONT></FONT></B></P> <P style="MARGIN-BOTTOM: 0pt"><FONT face=Calibri size=3 </FONT><A href="" mce_href=""><FONT face=Calibri color=#0000ff size=3>here</FONT></A><FONT face=Calibri size=3>, or the longer keynote presentation from the event </FONT><A href="" mce_href=""><FONT face=Calibri color=#0000ff size=3>here<>When Mark Stone, Global Enterprise Search Lead at EMC Consulting, and I first conceived this demo, we were inspired by three things:</FONT></P> <P class=MsoListParagraphCxSpFirst>1)</FONT><SPAN style="FONT: 7pt 'Times New Roman'"> </SPAN></SPAN></SPAN><FONT face=Calibri size=3>The dramatic growth and potential of NUI technologies, particularly Microsoft Surface.</FONT></P> <P class=MsoListParagraphCxSpMiddle>2)</FONT><SPAN style="FONT: 7pt 'Times New Roman'"> </SPAN></SPAN></SPAN><FONT face=Calibri size=3>The dearth of search examples in all these NUI applications.</FONT></P> <P class=MsoListParagraphCxSpLast </SPAN></SPAN></SPAN><FONT face=Calibri size=3>The potential for creating transformative user experiences that combine search and NUIs .</FONT></P> <P class=MsoNormal<FONT face=Calibri size=3”?<SPAN style="mso-spacerun: yes"> </SPAN>How can we use a 3<SUP>rd</SUP>. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3.</FONT></P> <P style="MARGIN-BOTTOM: 0pt"><FONT face=Calibri size=3 </FONT><A href="mailto:[email protected]" mce_href="mailto:[email protected]"><FONT face=Calibri color=#0000ff size=3>directly<>I look forward to seeing your examples and will summarize what I find in a future post. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>In the mean time, I feel like we need a new name for search interfaces within NUIs. I like the phrase “Natural Search Interface” used by the </FONT><A href="" mce_href=""><FONT face=Calibri size=3>Microsoft Germany Partner site in reference to the Microsoft/EMC Consulting prototype</FONT></A><FONT face=Calibri size=3>. I’ll use that. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>Nate</FONT></P><img src="" width="1" height="1">ntreloar and Natural User Interfaces (NUIs) - Part 1<P class=MsoNormal<FONT face=Calibri size=3 </FONT><A href=""><FONT face=Calibri color=#0000ff size=3>scene</FONT></A><FONT face=Calibri size=3> where Cruise’s character is interacting with a futuristic looking visual display and using appropriately dramatic gestures to grab, spin, shrink, expand, and otherwise manipulate the various news stories and images floating on the display. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>There are now many good and <I style="mso-bidi-font-style: normal">real </I <U>search-driven</U> </FONT><FONT face=Calibri size=3>user interfaces we will be seeing soon… in much less than 20 years time.</FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>Nate</FONT></P><img src="" width="1" height="1">ntreloar Year with Microsoft – a FAST Perspective<P class=MsoNormal<FONT face=Calibri size=3>After years of writing customer proposals, internal memoranda, and various stuffily formal documents, it feels like a luxury to be able to just write what I think about enterprise search.<SPAN style="mso-spacerun: yes"> </SPAN>It’s actually part of my job these days and I’m looking forward to sharing a perspective from 13 years in the industry – the past 6 years with FAST and, most recently, with Microsoft. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>As a reminder, it’s been a more than a year since the original offer came down from Microsoft to acquire FAST. To be precise, the bid was announced on </FONT><A href=""><FONT face=Calibri color=#0000ff size=3>January 8<SUP>th</SUP>, 2008</FONT></A><FONT face=Calibri size=3> and the deal closed on </FONT><A href=""><FONT face=Calibri color=#0000ff size=3>April 25<SUP>th</SUP>, 2008</FONT></A><FONT size=3><FONT face=Calibri>. The FAST team now makes up a large part of the new Enterprise Search Group (ESG) within the Microsoft Business Division (MBD) – the division that makes SharePoint, the Office line of products, Exchange, etc… .<SPAN style="mso-spacerun: yes"> </SPAN></FONT></FONT></P> <P class=MsoNormal<FONT face=Calibri size=3. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3.<SPAN style="mso-spacerun: yes"> </SPAN>Now, with a year under the belt at Microsoft, I have a few more insights to offer than just the initial “nice validation” response. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>In his keynote presentation at </FONT><A href=""><FONT face=Calibri size=3>FASTforward’09</FONT></A><FONT face=Calibri size=3> in February, Kirk Koenigsbauer addressed three key topics related to Microsoft’s interest in enterprise search (a transcript of Kirk’s keynote can be found </FONT><A href=""><FONT face=Calibri color=#0000ff size=3>here</FONT></A><FONT face=Calibri size=3>). These were:</FONT><>Commitment (to enterprise search)</FONT></P> <P class=MsoListParagraphCxSpMiddle>Vision<>Product Plans </FONT></P> <P class=MsoNormal<FONT size=3><FONT face=Calibri>These topics provide a useful framework for sharing my own observations.<U><?xml:namespace prefix = o<o:p></o:p></U></FONT></FONT></P> <P class=MsoNormal<U><FONT size=3><FONT face=Calibri>Commitment<o:p></o:p></FONT></FONT></U></P> <P class=MsoNormal<FONT face=Calibri size=3), <SPAN style="mso-spacerun: yes"> </SPAN>and of course the investment itself to acquire FAST (US$1.2B). There are other supporting data points, like the </FONT><A href=""><FONT face=Calibri color=#0000ff size=3>announcement</FONT></A><FONT face=Calibri size=3> of Oslo (FAST’s headquarters) as a key R&D center for business search. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3. <SPAN style="mso-spacerun: yes"> </SPAN>For example, I often get a question like this from customers and partners: </FONT></P> <P class=MsoNormal<I style="mso-bidi-font-style: normal"><FONT size=3><FONT face=Calibri>“Have you guys talked with the folks over in Microsoft’s <product name> team?”<o:p></o:p></FONT></FONT></I></P> <P class=MsoNormal<FONT face=Calibri size=3>…and then…</FONT></P> <P class=MsoNormal<FONT size=3><FONT face=Calibri>“<I style="mso-bidi-font-style: normal"> Man, you should because FAST technology added to what they’re doing would be powerful combination.”<o:p></o:p></I></FONT></FONT></P> <P class=MsoNormal<FONT face=Calibri size=3.<SPAN style="mso-spacerun: yes"> </SPAN. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>To be honest, search is such a generally valued concept and the possibilities are so compelling when it’s combined with other Microsoft products and technology that it’s all we can do to stay focused on our main priorities. It’s a good problem.</FONT></P> <P class=MsoNormal<U><FONT face=Calibri size=3>Vision</FONT></U></P> <P class=MsoNormal<FONT face=Calibri size=3 </FONT><A href=""><FONT face=Calibri color=#0000ff size=3>FASTforward’09 conference</FONT></A><FONT face=Calibri size=3> this past February – “Engage Your Users”.</FONT></P> <P class=MsoNormal<FONT face=Calibri size=3, </FONT><A href=""><FONT face=Calibri color=#0000ff size=3>Microsoft Surface</FONT></A><FONT face=Calibri size=3>, but that’s a topic for another post.</FONT></P> <P class=MsoNormal<U><FONT size=3><FONT face=Calibri>Product Plans<o:p></o:p></FONT></FONT></U></P> <P class=MsoNormal<FONT size=3><FONT face=Calibri>At FASTforward’09 we announced our plans to target enterprise search in two areas:<SPAN style="mso-spacerun: yes"> </SPAN></FONT></FONT><>Business productivity – applications inside the firewall where, in particular, SharePoint provides the framework for content management and collaboration. <>Internet business – “outside the firewall” applications for attracting, retaining, and otherwise monetizing customers.</FONT></P> <P class=MsoNormal<FONT face=Calibri size=3.</FONT></P> <P class=MsoNormal<FONT face=Calibri size=3 </FONT><A href=""><FONT face=Calibri size=3>“consumerization”</FONT></A><FONT face=Calibri size=3> of search features happen more than once at FAST. Features that we initially designed for consumer search found their way into intranet search deployments (one simple example is the “best bets” concept like the one found in </FONT><A href=""><FONT face=Calibri color=#0000ff size=3>SharePoint</FONT></A><FONT face=Calibri size=3>). The opposite has also happened. Now, consider the capabilities in SharePoint, which is already powering many consumer facing Web sites, and you can see where this can lead. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>There you have it, my first post for the Microsoft Enterprise Search Blog. Look for more posts from me in this general category of enterprise search vision and strategy. I welcome all comments on this and future entries. </FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>Next up – Search plus Natural User Interfaces.</FONT></P> <P class=MsoNormal<FONT face=Calibri size=3>Nate </FONT></P><img src="" width="1" height="1">ntreloar Presents FAST forward 09: Engage Your User<p>The Mirage, Las Vegas, Feb 9-11</p> <p>Since its inaugural conference in 2006, FAST<i>forward</i> has been a venue for though leadership and innovation in the field of search. This year, <strong>FAST</strong><strong><i>forward’09</i></strong> is the industry’s largest business and technology conference dedicated to search-driven innovation. Join the discussion! At <strong>FAST</strong><strong><i>forward’09</i></strong>,. </p> <p>New this year, a SharePoint technology track covering Enterprise Search, Social Computing, Enterprise Content Management and more!  Other tracks include:</p> <ul> <li>Monetization via Search (customer-facing)</li> <li>Productivity via Search (internal enterprise)</li> <li>FAST technology</li> <li>Partner Solutions</li> </ul> <p>Top Ten Reasons Why You Should Attend FAST<i>forward’09</i>:</p> <blockquote> <p><strong>1. Uncover new opportunities for using search </strong></p> <p><strong>2. Hear what others have done with search technology </strong></p> <p><strong>3. Learn industry best practices for search </strong></p> <p><strong>4. Hear the Microsoft vision for search and FAST </strong></p> <p><strong>5. Learn how SharePoint and FAST products are positioned </strong></p> <p><strong>6. Gain insight on integration plans for SharePoint and FAST products </strong></p> <p><strong>7. Understand how partners can help </strong></p> <p><strong>8. Obtain access to Microsoft and FAST executives and industry luminaries </strong></p> <p><strong>9. Network with colleagues </strong></p> <p><strong>10. Attend convenient pre-conference technical training </strong></p> </blockquote> <p></p> <p>Come spend three days with us at the Mirage in Las Vegas learning from industry thought leaders, customers, partners, and our own Microsoft experts! </p> <p>Learn more at <a href="">FASTforward ‘09</a>. Register before January 9 and receive <b>$400 off</b> of the full registration fee. See you there!</p><img src="" width="1" height="1">enterprisesearch positioned in the Leaders Quadrant of the 2008 Information Access Magic Quadrant<P>We!</P> <P:</P> <UL> <LI>Some departments or small organizations need search that is quick and easy to set up; we offer Microsoft Search Server Express as a free download so that you can get it up and running in about 30 minutes. We’re excited to see customers like <A href="" mce_href="">St. Jude Medical</A> and Urbis having quick successes with Express. We’re also seeing partners, such as <A href="" mce_href="">StartReady</A>, build solutions around Search Server Express to create a search appliance. </LI> <LI>Many organizations need search as an integral part of a business productivity infrastructure; Search in Microsoft Office SharePoint Server is integrated with other key SharePoint productivity workloads such as portals, collaboration, ECM, business processes and BI. Customers like McCann Worldgroup and Jones Lang LaSalle are all deriving productivity increases with better search in SharePoint. In particular, both companies are promoting collaboration and leveraging in-house experts with people search enhanced by user profiles in MySites. </LI> <LI>Some organizations face business problems that demand high-end search; FAST ESP offers best-in-class search with extreme scalability, query performance, and other advanced capabilities for sophisticated customer-facing or inside-the-firewall applications. For example, <A href="" mce_href="">Aerotek</A> and <A href="" mce_href="">TEKsystems</A>, two of the world’s largest staffing companies, deliver job searching to more than 1.3 million users. In more than 164 million queries, greater than 99.5% of query results came back in less than 2 seconds. For inside-the-firewall productivity, they index more than 10 million complex candidate records with low latency during high volume index updates. We’re also excited to see Pfizer pushing the envelope with an Enterprise Collaboration Framework driven by FAST ESP on top of SharePoint </LI></UL> <P!</P> <P>Kirk Koenigsbauer <BR>General Manager, <BR>SharePoint Business Group </P> <P><A href="" mce_href="">Magic Quadrant for Information Access Technology</A> .</P><SPAN style="FONT-SIZE: 10pt; COLOR: red; FONT-FAMILY: 'Arial','sans-serif'"><SPAN style="FONT-SIZE: 8.5pt; FONT-FAMILY: 'Verdana','sans-serif'; mso-bidi-font-family: 'Times New Roman'"><FONT color=#000000> <P class=MsoNormal<SPAN style="FONT-SIZE: 8.5pt; FONT-FAMILY: 'Arial','sans-ser. <?xml:namespace prefix = o<o:p></o:p></SPAN></P></FONT></SPAN></SPAN><img src="" width="1" height="1">enterprisesearch People Search on the Road….<p>In another great blog post Matt McDermott walks you through the steps of enabling SharePoint’s people search capability on a mobile device with the end results looking something like this;</p> <p><a href=""><img border="0" alt="Search Results" src="" width="324" height="244" /></a></p> <p>The post is here;</p> <p> <a title="" href=""></a></p> <p>Richard Riley <br />Senior Technical Product Manager <br />Microsoft Corp.</p><img src="" width="1" height="1">enterprisesearch Post: One Stop Search from the Microsoft Office Research Task Pane<p>Since the release of Microsoft Office 2003, Microsoft desktop applications such as MS Word, PowerPoint, Excel, Outlook and Internet Explorer have contained an internal federated or meta-search capability known as the ‘Research Pane’. To see this in action in office 2003 (see <a href="">link</a>.</p> <p><a href=""><img style="border-right-width: 0px; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="clip_image002" border="0" alt="clip_image002" src="" width="448" height="320" /></a></p> <p. </p> <p>Raritan Technologies specializes in <a href="">Federated Search solutions</a> and has created an array of search connectors to a number of web sites, web services, search engines and databases and directory services (to name a few) using our Search Integration Framework Toolkit (<a href="">SIFT</a>) and Federation Manager. We and our partner in this effort, <a href="">New Idea Engineering</a>,.</p> <p.</p> <p>For more information on the Raritan Technologies “Research Pane Integration” or to arrange for a trial connector please visit <a href=""></a>.</p> <p>Barry Freindlich <br />President Raritan <br />Technologies, Inc.</p><img src="" width="1" height="1">enterprisesearch to: Customize the Thesaurus in SharePoint Search and Search Server<p>The.</p> <p>A SharePoint Search administrator can modify the thesaurus file to substitute all these words at search query time. This document explains how to set up a thesaurus and where to find the relevant files.</p> <p><strong>Supported Thesaurus Syntax:</strong> <br />To use the sample files provided by the product, you need to remove the comment beginning (<!--) and ending lines (-->) from the xml file.</p> <p><strong>Explanation of terms:</strong></p> <table border="0" cellspacing="0" cellpadding="0" width="446"><tbody> <tr> <td valign="top" width="200"><strong>Term</strong></td> <td valign="top" width="244"><strong>Meaning</strong></td> </tr> <tr> <td valign="top" width="200">thesaurus</td> <td valign="top" width="244">marks beginning (and end) of thesaurus</td> </tr> <tr> <td valign="top" width="200">diacritics_sensitive</td> <td valign="bottom" width="244"> <p>Diacritics are marks, such as accents that are added to letters that change their pronunciation. For example, the acute accent over and e gives you: é. <br />0 – ignore diacritics <br />1 – respect diacritics</p> </td> </tr> <tr> <td valign="top" width="200">expansion</td> <td valign="top" width="244">A list of alternative forms each marked by <sub> by the sub keyword</td> </tr> <tr> <td valign="top" width="200">sub</td> <td valign="top" width="244">One of several alternatives in an expansion</td> </tr> <tr> <td valign="top" width="200">replacement</td> <td valign="top" width="244">Several patterns will be replaced with a substitution.</td> </tr> <tr> <td valign="top" width="200">pat</td> <td valign="top" width="244">A pattern to be replaced</td> </tr> <tr> <td valign="top" width="200">sub</td> <td valign="top" width="244">Item to be substituted</td> </tr> </tbody></table> <p><strong>Example:</strong></p> <pre class="csharpcode"><span class="kwrd"><</span><span class="html">XML</span> <span class="attr">ID</span><span class="kwrd">="Microsoft Search Thesaurus"</span><span class="kwrd">></span> <span class="kwrd"><</span><span class="html">thesaurus</span> <span class="attr">xmlns</span><span class="kwrd">="x-schema:tsSchema.xml"</span><span class="kwrd">></span> <span class="kwrd"><</span><span class="html">diacritics_sensitive</span><span class="kwrd">></span>0<span class="kwrd"></</span><span class="html">diacritics_sensitive</span><span class="kwrd">></span> <span class="kwrd"><</span><span class="html">expansion</span><span class="kwrd">></span> <span class="kwrd"><</span><span class="html">sub</span><span class="kwrd">></span>Internet Explorer<span class="kwrd"></</span><span class="html">sub</span><span class="kwrd">></span> <span class="kwrd"><</span><span class="html">sub</span><span class="kwrd">></span>IE<span class="kwrd"></</span><span class="html">sub</span><span class="kwrd">></span> <span class="kwrd"><</span><span class="html">sub</span><span class="kwrd">></span>IE5<span class="kwrd"></</span><span class="html">sub</span><span class="kwrd">></span> <span class="kwrd"></</span><span class="html">expansion</span><span class="kwrd">></span> <span class="kwrd"><</span><span class="html">replacement</span><span class="kwrd">></span> <span class="kwrd"><</span><span class="html">pat</span><span class="kwrd">></span>NT5<span class="kwrd"></</span><span class="html">pat</span><span class="kwrd">></span> <span class="kwrd"><</span><span class="html">pat</span><span class="kwrd">></span>W2K<span class="kwrd"></</span><span class="html">pat</span><span class="kwrd">></span> <span class="kwrd"><</span><span class="html">sub</span><span class="kwrd">></span>Windows 2000<span class="kwrd"></</span><span class="html">sub</span><span class="kwrd">></span> <span class="kwrd"></</span><span class="html">replacement</span><span class="kwrd">></span> <span class="kwrd"></</span><span class="html">thesaurus</span><span class="kwrd">></span></pre> <p><strong>The example means:</strong></p> <ul> <li>We have elected to ignore accents, etc in the thesaurus </li> <li>Queries containing IE, or any other one of the <sub> clauses will also contain “internet explorer” and “ie5”. </li> <li>If a query contains terms “NT5” or “W2K”, they will be replaced by “Windows 2000”. </li> </ul> <p><strong>How to Customize the Thesaurus:</strong></p> <ol> <li>Find the appropriate thesaurus file in the config folder contained in the registry key: [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office Server\12.0\Search\Global\Gathering Manager]"DefaultApplicationsPath” </li> <li>Update the thesaurus file(s) for each appropriate language for each desired <expansion> or <replacement>. </li> <li>Replace the file(s) on each index, query and web frontend server for each search application path: <br />%programfiles%\Microsoft Office Servers\12.0\Data\Office Server\Applications\[GUID]\Config  <br />Note index propagation does not sync these files on all the servers in the farm. </li> <li. </li> </ol> <p><strong>Notes:</strong></p> <p>See “<i>Finding Important Files</i>” below for a summary of where to find the key files to manage your thesaurus.</p> <ol> <li>(optional) If you want to have the same thesaurus files apply to all newly created SSPs, put your thesaurus files under the main config folder <br />(e.g., %programfiles%\Microsoft Office Servers\12.0\Data\config). </li> <li>If there is a syntax error in the thesaurus file, all expansions and replacements will be ignored. </li> <li>If a word in the thesaurus file matches a stop word in the stop word file, it will be ignored.   To avoid this, remove it from the appropriate stop word file. </li> <li>Thesaurus terms are broken into words at query time.  Add words you do not want to be broken into the custom dictionary file customLANG.lex (see Finding Important Files for more details). </li> <li>Search first applies the thesaurus, and then expands words into their alternate forms, when “stemming” functionality is turned on.   Care should be taken to avoid expanding into too many unnecessary forms as this may harm search performance and accuracy. </li> <li”.</li> <li <i>these two expressions look exactly the same</i>.  Please check for this kind of problem in the logs if you are building a large thesaurus.</li> <li>There is a 10,000 term limit per language in thesaurus. </li> </ol> <p><strong>Finding Important Files: </strong></p> <p>The following are the most important files used to manage your thesaurus. </p> <p>There are 50 default stop word files and 48 thesaurus sample files for the languages we support.</p> <p>The search service install path can be located by examining registry key [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office Server\12.0\Search\Global\Gathering Manager]"DefaultApplicationsPath”</p> <p>The default location of the thesaurus files (for each index, query and web frontend server) is: <br />%programfiles%\ Microsoft Office Servers\12.0\Data\Office Server  <br />When a search application is created, a copy of the thesaurus file will also be placed under: %programfiles%\Microsoft Office Servers\12.0\Data\Office Server\Applications\[GUID]\Config </p> <p>Stop word files for each language can be found as noiseLANG.txt, where LANG is the 3 letter acronym for that language. For example, US English is noiseENU.txt, and the language neutral list is noiseNEU.txt.</p> <p>To find the appropriate acronym for your language(s), please look them up under: <a href=""></a>. <br /></p> <table border="0" cellspacing="0" cellpadding="0" width="400"><tbody> <tr> <td valign="top" width="200">Ping Lin <br />Senior Test Lead <br />Microsoft Corp.</td> <td valign="top" width="200">Victor Poznanski <br />Senior Program Manager <br />Microsoft Corp.</td> </tr> </tbody></table><img src="" width="1" height="1">enterprisesearch Image Search<p><a href="">Matthew McDermott</a>, a <a href="">SharePoint MVP</a>, has written a great 4 part blog post on how to make SharePoint 2007 search (and Search Server) render image results in a way that looks very similar to <a href=""></a>. </p> <p>Not only does this make searching images much easier, it’s also a very thorough step-by-step tutorial on how to customize results using the built in Web Parts and XSL – it’s well worth a read. </p> <p><a href="">SharePoint Image Search (Part 1)</a></p> <p><a href="">SharePoint Image Search (Part 2)</a></p> <p><a href="">SharePoint Image Search (Part 3)</a></p> <p><a href="">SharePoint Image Search (Part 4)</a></p> <p>The end result makes SharePoint Image results look like the screencap below.</p> <p><a href=""><img style="border-bottom: 0px; border-left: 0px; border-top: 0px; border-right: 0px" title="isearch" border="0" alt="isearch" src="" width="447" height="331" /></a></p> <p>Richard Riley <br />Senior Technical Product Manager <br />Microsoft Corp.</p><img src="" width="1" height="1">enterprisesearch File groups and Search<p mce_keep="true">This article has been a long time coming, but it is finally here.  In the post below I will cover how to configure the Search database to span multiple filegroups.  First I'll cover a little about the benefits of doing so:</p> <p>General references on what SQL file groups are:</p> <ul> <li>A basic description of <a href="" mce_href="">Physical Database Files and Filegroups</a> </li> <li>High level discussion on the benefits of <a href="" mce_href="">Using Files and Filegroups</a> </li> </ul> . </p> <p.  </p> <p.   </p> <p><b>Issues and concerns with using filegroups:</b></p> <p><b>Back-up and Restore</b></p> <p.    </p> <p><b>Future upgrades, Service packs and Hot fixes</b></p> <p. </p> <p. </p> <p>However, the risk still exists and you will want to re-run the scripts below after each update that you apply to your system.  In the case when you apply an update and the index did <b>not</b> change running the script is a no-op and nothing gets moved.  So it is very cheap to run the script on a system that already has the indexes moved.  </p> <p><b>SQL 2005 and greater</b></p> <p>The script that is moving the indexes is utilizing new features that were released in SQL 2005.  As such you <b>cannot </b>perform this optimization with SQL 2000.  </p> <p><b>Step- by-Step instructions for applying filegroups to your environment.</b></p> <p>To deploy this you will need to manually create a file group on the Search database.  To do this execute the following steps:</p> <p>a. Go to the <u>Filegroups</u> section of the Search database <u>properties</u> within SQL Server Management Studio.</p> <p>b. From the <u>Filegroups</u> section click add and fill in the name "CrawlFileGroup." <b>The scripts are written assume the filegroup has this name, failure to use this name will result in early failures  in the script</b></p> <p><a href="" mce_href=""><img style="border-right-width: 0px; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="clip_image001[1]" border="0" alt="clip_image001[1]" src="" width="461" height="175" mce_src="" /></a></p> <p>c. Once you have a new filegroup with the name CrawlFileGroup you need add a file into this group.  To do this select the <u>Files</u> section of the database properties dialog and add a new file into the CrawlFileGroup.  Be sure that you place this file onto a separate drive with isolated spindles. </p> <p><a href="" mce_href=""><img style="border-right-width: 0px; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="clip_image002[1]" border="0" alt="clip_image002[1]" src="" width="464" height="192" mce_src="" /></a></p> <p>d. Next you need to install the stored proc that will move the indexes and tables to the new filegroup.  Open the script named  <a target="_blank" href="" mce_href="">MoveTableToFileGroup.sql</a> within Management Studio and execute it; ensuring that you are working with the Search database  This will create a stored proc named proc_MoveTableToFileGroup.  Confirm that this sproc does indeed exist within the Search database.</p> <p>e. Open and execute the second script named   <a target="_blank" href="" mce_href="">MoveCrawlTablesToFileGroup.sql</a>, this is the script that does all of the work by calling proc_MoceTableToFileGroup for each table that is dedicated for crawling.  </p> <p>That is all there is to it.  You have now moved you crawl tables on to a separate set of spindles.  </p> <p>Thank you for your time and as always I welcome any feedback or questions</p> <p>Dan Blood <br />Senior Test  Engineer <br />Microsoft Corp</p><img src="" width="1" height="1">enterprisesearch Post: Announcing conceptClassifier for SharePoint – Automatic Classification within Office<p mce_keep="true">Enterprise customers are increasingly struggling with how to apply policy and governance at the desktop. End user adoption is cited as the single most critical barrier to success in ECM and Records Management initiatives. Using Concept Searching’s unique compound term processing concept<b>Classifier</b> for SharePoint can now be used to automatically classify content from Microsoft Office Applications, upload the documents directly to SharePoint, store the metadata in SharePoint properties and write back the classifications to the custom properties of the document for use within knowledge and workflow applications or enterprise applications such as ECM, Document Management, Records Management, or eDiscovery.</p> <p>The classification can take place automatically without end user intervention. Optionally, Subject Matter Experts can be granted the authority to manually adjust the classification based on the taxonomy. A ribbon bar has been added to the familiar Office interface enabling automatic classification of content. When the end user classifies a document the system will retrieve existing concepts as an aid to the classification process as shown below. Subject Matter Experts also have the ability to add or delete classes in the taxonomy.</p> <p><a href="" mce_href=""><img style="border-right-width: 0px; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="clip_image002" border="0" alt="clip_image002" src="" width="442" height="278" mce_src="" /></a></p> <p>Documents are uploaded to SharePoint and the classification metadata is stored in the properties fields. The classification status automatically reflects the manual classification so as to not overwrite the classification classes the Subject Matter Expert entered. The systems administrator features currently enabled include the ability to edit the classifications, classify the document, a batch of documents or the full library. This metadata can now be used by Microsoft Enterprise Search to improve identification of relevant documents when searching. </p> <p><a href="" mce_href=""><img style="border-right-width: 0px; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="clip_image004" border="0" alt="clip_image004" src="" width="446" height="281" mce_src="" /></a></p> <p>For more information visit <a href="" mce_href=""></a> or <a href="" mce_href="">click here</a> to view a webcast demo of the integrated technology. </p> <p>Martin Garland<strong>     <br /></strong>President                                                                                                                   Concept Searching, Inc </p><img src="" width="1" height="1">enterprisesearch Index defrag and maintenance tasks for Search<p></p> <p>Hi all, this topic is an area that has caused me much pain and work.  My goal for this was to follow the recommended SQL guidelines while minimizing the impact that these maintenance jobs have on Crawling and Queries.  We know from the <a href="">SQL Monitoring an I/O</a> post that Search is extremely I/O intensive .  As it turns out so is all of the regular maintenance that SQL recommends, so finding the right balance between the two is an interesting scheduling task. </p> <p>As a starting point much information about SQL maintenance and MOSS is covered in the following paper:</p> <ul> <li><a href=";clcid=0x409"><b>Database Maintenance for Microsoft<sup>®</sup> SharePoint<sup>®</sup> Products and Technologies</b></a><b> </b></li> </ul> <p>There are some key areas from the above paper that I would like to augment here.</p> <ol> <li>The stored procedure (proc_DefragIndexes) identified in this paper will work, but it is extremely expensive to run on the Search DB as it defrags <u>all</u> of the indexes in the table. </li> <li>Maintenance plans generated with the Maintenance Plan Wizard in SQL Server 2005 can cause unexpected results (KB <a href="">932744</a>.)  While this was fixed in SQL 2005 SP2 these maintenance plans also do more work than is necessary to have a healthy functional system.    </li> <li>Shrinking  the Search DB  should not be a necessary task that you need to perform.  The process of Shrinking the database does not provide a performance benefit.  SQL best practices for <a href="">DBCC SHRINKFILE</a> suggest that this operation is most effective after an operation that creates lots of unused space.  Search does not regularly perform these types of operations.  The only time that a SHRINKFILE may make sense is after you have cleaned out your index by removing a Content Source.      </li> <li>Rebuilding an index can cause latency issues with SQL Mirroring if the SQL I/O subsystem is constrained.  If you are using SQL Mirroring, be sure you are following the SQL <a href="">best practices</a> and the <a href="">SharePoint mirroring white paper</a>.  Because Search, SQL Mirroring, and defrag are all very I/O intensive you will want to be extra cautious with your deployment plan for this defrag script and make sure you <b><i>test the script prior to going into production</i></b>. </li> </ol> <p><b>DBCC CHECKDB</b></p> <p><a href="">DBCC CHECKDB</a>.   </p> .</p> <p.         </p> <p><b>Fragmentation and index statistics freshness</b></p> <p:</p> <ul> <li>IX_MSSDocProps </li> <li>IX_MSSDocSdids </li> <li>IX_AlertDocHistory </li> <li>IX_MSSDEFINITIONS_DOCID </li> <li>IX_MSSDEFINITIONS_TERM </li> <li>PK_Sdid </li> <li>IX_SDHash </li> <li>IX_DOCID </li> </ul> <p. </p> <ul> <li>IX_int -- defrag this index if you have a lot of queries that using numeric properties in the property store.  The classic case is date rage queries. </li> <li>IX_Str -- defrag this index if you have a lot of queries that using string properties in the property store.  There is not a common case for this but if you have made changes to your managed properties and are driving your search UI off of exact matches for a string based property you will want to regularly defrag this index. </li> </ul> <p>Once we knew which indexes to defrag we looked at the duration it took for the index to reach a 10% defragmentation rate.  From this we adjusted the <a href="">FILLFACTOR</a>.</p> <p>We then looked at the cost/benefit of doing a <a href="">Reorganize versus a Rebuild</a>.  <b>1 hour</b> while the Reorganize takes as long as <b>8 hours</b>. .  <a href="">UPDATE STATISTICS</a>:  In the experiments we ran we found that simply doing the rebuild (which also updates statistics) that it was not necessary to regularly use this command. </p> <p.  </p> <p>To mitigate this we have added a parameter to the script that allows you to reduce the <a href="">MAXDOP<:</p> <pre class="csharpcode"><span class="kwrd">ALTER</span> <span class="kwrd">INDEX</span> IX_MSSDocProps <span class="kwrd">ON</span> [dbo].[MSSDocProps] REBUILD <span class="kwrd">WITH</span> (MAXDOP = 1, <span class="kwrd">FILLFACTOR</span> = 80, ONLINE = <span class="kwrd">OFF<:</p> <ul> <li>The duration of the command.  Will it complete within your service window?  For comparison purposes this command completes in under an hour on the <a href="">SearchBeta hardware</a> </li> <li><a href="">SQL I/O latencies</a> </li> <li>If you have mirroring in place <ul> <li>The <a href="">Database Mirroring Monitor</a> </li> <li><a href="">Send and Redo Queues</a>  within perfmon.  The monitor above will tell you if mirroring is too far out of sync, but these counters are useful for comparison if you start changing the MAXDOP parameter. </li> </ul> </li> </ul> <p>Bottom line we feel the rebuild is a much better operation to run and recommend that you:</p> <ol> <li>Run the <a target="_blank" href="">script</a> on a regular basis; once a night or on the weekends depending on your service windows. <ul> <li><b>Weekends or weekly</b> - reduce the fragmentation rate (sproc parameter) to 5.0 or lower to prevent missing the defrag due to a fraction of a percent (IE - 9.5%) </li> <li><b>Nightly</b> - use the defaults for fragmentation rate. The largest index (MSSDocProps) gets rebuilt approximately every 2 weeks on SearchBeta. Running the script nightly will ensure that your indexes are up to date more often, but gives you less control over the exact time that the index rebuild occurs. </li> </ul> </li> <li>Before running the script the first time test out how your system will behave when rebuilding MSSDocProps. </li> <li><b>Reduce MAXDOP - </b>If your environment shows poor I/O response time or unacceptable durations (cannot complete a defrag inside your service window) reducing the MAXDOP value <b>may</b> reduce the duration of the script and put less pressure on the I/O system.  Reducing the MAXDOP will not help enough if the system is very I/O bound.  </li> <li><b>SQL Mirroring</b> - SQL mirroring is sensitive to I/O latencies, adding the defrag may be too much I/O for the system handle. </li> <li><b>Poor I/O latency</b> - You should focus on improving the I/O subsystem of your SQL environment before you begin running this script.     </li> </ol> <p>Stored Procedure syntax:</p> <pre class="csharpcode"><span class="kwrd">exec</span> proc_DefragSearchIndexes [MAXDOP <span class="kwrd">value</span>], <br />[fragmentation <span class="kwrd">percent<> <ul> <li><b>MAXDOP value</b> - Integer value. Default is 0  which means that all available CPUs will be used. </li> <li><b>Fragmentation percent</b> - decimal value. Default is 10.0.  This value was explicitly chosen because we able measure query latency improvements on SearchBeta when defragging at the 10% boundary.   </li> </ul> <p>-Thanks </p> <p>Dan Blood <br />Senior Test  Engineer <br />Microsoft Corp</p><img src="" width="1" height="1">enterprisesearch
http://blogs.msdn.com/enterprisesearch/atom.xml
crawl-002
en
refinedweb
NAMEon_exit - register a function to be called at normal program termination. SYNOPSIS #include <stdlib.h> int on_exit(void (*function)(int , void *), void *arg); DESCRIPTIONThe on_exit() function registers the given function to be called at normal program termination, whether via exit(3) or via return from the program's main. The function is passed the argument to exit(3) and the arg argument from on_exit(). RETURN VALUEThe on_exit() function returns the value 0 if successful; otherwise it returns a non-zero value. CONFORMING TOThis function comes from SunOS, but is also present in libc4, libc5 and glibc. SEE ALSOatexit(3), exit(3) Important: Use the man command (% man) to see how a command is used on your particular computer. >> Linux/Unix Command Library
http://linux.about.com/library/cmd/blcmdl3_on_exit.htm
crawl-002
en
refinedweb
Turtle Programming in Python Introduction | turtle module “Turtle” is a Python feature like a drawing board, which lets us command a turtle to draw all over it! We can use functions like turtle.forward(…) and turtle.right(…) which can move the turtle around.Commonly used turtle methods are : Plotting using Turtle To make use of the turtle methods and functionalities, we need to import turtle.”turtle” comes packed with the standard Python package and need not be installed externally.The roadmap for executing a turtle program follows 4 steps: - Import the turtle module - Create a turtle to control. - Draw around using the turtle methods. - Run turtle.done(). So as stated above, before we can use turtle, we need to import it.We import it as : from turtle import * # or import turtle After importing the turtle library and making all the turtle functionalities available to us, we need to create a new drawing board(window) and a turtle. Let’s call the window as wn and the turtle as skk. So we code as: wn = turtle.Screen() wn.bgcolor("light green") wn.title("Turtle") skk = turtle.Turtle() Now that we have created the window and the turtle, we need to move the turtle. To move forward 100 pixels in the direction skk is facing, we code: skk.forward(100) We have moved skk 100 pixels forward, Awesome! Now we complete the program with the done() function and We’re done! turtle.done() So, we have created a program that draws a line 100 pixels long. We can draw various shapes and fill different colors using turtle methods. There’s plethora of functions and programs to be coded using the turtle library in python. Let’s learn to draw some of the basic shapes. Shape 1: Square Shape 2: Star Shape 3: Hexagon Visit pythonturtle.org to get a taste of Turtle without having python pre-installed. The shell in PythonTurtle is a full Python shell, and you can do with it almost anything you can with a standard Python shell. You can make loops, define functions, create classes, etc. You can access these codes for wonderful turtle programs here Some amazing Turtle Programs 1. Spiral Square Outside In and Inside Out Output: 2. User Input Pattern 3. Spiral Helix Pattern Output: 4. Rainbow Benzene Output: Trees using Turtle Programming References: - Turtle documentation for Python 3 and 2 - eecs.wsu.edu [PDF] !?
https://www.geeksforgeeks.org/turtle-programming-python/
CC-MAIN-2019-47
en
refinedweb
Write a C program to Print factorial of a number. In this program, We are going to write a C program which takes an input number and print it's factorial. To solve this problem, we are using iterative approach. If you are not familiar with iterative and recursive approach then you can check this awesome tutorial on recursion vs iteration. Difference between iteration and recursion Java program to print factorial of a number What is Factorial?For any input number n, it's factorial is factorial = 1 * 2 * 3 * 4 .......n; Suppose, An input number is 4 then it's factorial is 4 * 3 * 2* 1 = 24 C Program to Print Factorial of a Number using Loop In this program, we take a input number and using for loop our program calculate and print the factorial of a number. #include<stdio.h> int main() { int number, fact=1, i; printf("Enter a number \n "); scanf("%d",&number); //If a number is greater than zero if(number > 0) { for(i = 1; i <= number; i++){ fact = fact*i; } printf("The factorial of a number is: %d",fact); } else if (number == 0 ){ printf("Factorial of 0 is 1"); } else { printf("Factorial of negative number doesn't exist"; } return 0; } C Program to Print Factorial of a Number using Function #include <stdio.h> //This method calculate and return factorial int factorial(int num) { int fact = 1; //Factorial of a zero is zero if (num == 0) { return 1; } for (int i = 1; i <= num; i++) { fact = fact*i; } return fact; } int main(void) { int num; printf ("Enter a number \n"); scanf ("%d", &num); if (num >= 0) { //Function is called factorial(num) printf ("Factorial of a number is %d", factorial(num)); } else { printf ("Factorial of a negative number doesn't exist"); } return 0; } Output - Enter a number 5 Factorial of a number is 120 Print factorial of a number using recursion
http://www.cprogrammingcode.com/2011/09/c-program-to-calculate-factorial-of.html
CC-MAIN-2019-47
en
refinedweb
©nabg ©nabg java.net.*; import java.net.*; • Classes supporting TCP/IP based client-server connections. • (You will meet TCP/IP from C/C++ perspective in CSCI214; inner workings of TCP/IP covered in many SECTE subjects) • In days of “Java and the Internet”, the “… and the Internet” component had introductory coverage of TCP/IP. You must have had something on TCP/IP in CSCI102; it cannot be that the only coverage of networking is in elective subjects. ©nabg ©nabg Program-to-program communications over IP IP (Internet Protocol) • How to identify end-point program? • “Port” • Each machine has one (or more) IP addresses • IP protocol defines how to get data packets across the Internet from one machine to another. • But need – Portal, a door or gateway • Ports are OS structures – Data buffer for input/data buffer for output – Identified by a number – Client program communicates with server program – Not “client machine communicates with server machine” • Programs ask OS for use of a port • Server program asks for specific port number, publishes its port number; client program uses this to identify server • OS allocates port number for client • Other protocols layered over IP allow program to program communications • TCP & UDP ©nabg ©nabg UDP & TCP • UDP – Client composes one data request packet – Client sends it to server (identified by combination of IP address and port number; includes its own IP address and port number in packet header) – Server responds with one packet (using client address from header) – Connection terminated • TCP – TCP uses IP and ports in same way as UDP – TCP libraries hide “packet” nature of communications • Client and server see connection as continuously open input-output stream, both can read and write data – TCP library code deals with all the hard bits of arranging for data transmission in packets, guaranteed delivery, flow-control etc Ports & sockets • Ports belong to OS • Programs need access to the I/O buffers on port • “Sockets” – I/O streams working with buffers of given port ©nabg ©nabg clients and servers Sockets network (Internet or intranet) • Actually two kinds – Data stream sockets • Read and write application data – Server sockets Server Client • Used when client wants to make connection to server – Client details put in server-socket input buffer – Additional I/O buffers allocated for data transfers with this client – Server program given all these details Java Applet in Web page Java application C++ or other application Java server C++ server No need for client and server to be implemented in same language – they can often communicate by text messages. ©nabg ©nabg Clients Clients and Servers • • Code for handling client side is completely standard, the code examples in books on TCP/IP programming in C/C++ can easily be adapted for Java. Java Applets are restricted to servers running on same machine as Webserver program that provided HTML page with Applet • Server program must be running (listening at “well known port”) client build address structure for server get socket connect to server while not finished get command from user write to socket read reply show reply to user – official servers (like ftpd, httpd, ...) are organised by OS on server machine – local standards are also handled automatically via “inetd” system (system administrator defines list of extra server programs run at this site and their ports, if OS receives request at port it starts corresponding server program; on Windows, programs are registered with OS as “services” and get started automatically) – private server programs must be launched by owner • Client program launched by user – Familiar examples – browser client, ftp-client ©nabg Obviously, this could (& should) be handled by a dedicated thread if you also want to maintain GUI interaction ©nabg Servers Servers • Server may have to deal with many clients • Concurrency strategies: – Serial: • deal with clients one at a time, handling all requests until client disconnects; then start handling next client. • OS helps by maintaining a queue of clients who wish to connect; – Forking • Start a new process for each client – just as was described for WebServer (this strategy rarely used with Java servers) – Threaded • Each client gets a thread to serve them • The normal server (as described in TCP/IP texts) is a forking server: Different processes! server (listener) create socket bind to “well known” port listen (i.e. activate service) forever() accept client fork subprocess to deal with client new socket created by accept is passed to child server (child process) forever() read next request if “bye” then quit handle request send response ©nabg ©nabg Threaded server Servers All the one process (independent clients) main thread • Forking server traditionally favoured on Unix systems create socket bind to port activate forever() accept client create new thread to deal with this client start thread in “handle_client” – lots of system support – relatively easy to program – appropriate when clients are independent (don’t desire to interact) • Alternative – a server that supports multiple concurrent clients, handle_client forever() read next request if “bye” then break handle request send response thread die handle_client forever() read next request if “bye” then break handle request send response thread die usually via a threads mechanism ©nabg handle_client forever() read next request if “bye” then break handle request send response thread die handle_client forever() read next request if “bye” then break handle request send response thread die ©nabg Threaded server Servers • Java servers are rarely (if ever) forking servers shared data – though can launch a process via Runtime object • Java’s support for threads makes it simpler than usual to implement as threaded server handle_client forever() read next request if “bye” then break handle request send response thread die – all objects have associated locks, simply need to start to use these (facilitating safe use of shared data) – thread package provides simple subset with most useful features from a full threads implementation – ... handle_client forever() read next request if “bye” then break handle request send response thread die ©nabg • Java solution: have a “ClientHandler” class (optionally inherits Thread), create an instance for each client (passing new socket), let it run() ©nabg Your programming of Client - server systems • You can rely on the libraries to handle: – establishment of communications link – reliable data exchange (TCP/IP) (or the rarely useful simple packet transfer based on cheaper UDP/IP mechanisms) • What you really see at both ends is a connection that supports read and write access. • You must invent Client Application Application specific PROTOCOL Server Application Server provides a service • Typically, a server is a bit like an instance of a class that has a public interface advertising a few operations it can perform for a client. – Example 1: HTTP server • GET, POST, OPTIONS, … – Example 2: FTP server • List directory, change directory, get file, put file, … • Sometimes will have a service with only one operation – echo, ping, … ©nabg ©nabg Client – server application protocol • Each operation defined by server has name, and arguments and returns particular kind of data (or “an exception”) • Client must supply data identifying operation required, and supply and data needed. • Protocol specifies how client to supply such information and what kinds of responses it may receive ©nabg Your programming of Client - server systems • Decide on the different commands and data that you want to exchange. List for each command (request) : – data that may be sent with the command – allowed responses (“success”, “failure”, other eg “defer”) and note appropriate action – data that may come with different responses • Associate suitable key words with each command, and key word (or numeric codes) that could identify possible responses. ©nabg Your programming of Client - server systems Your programming of Client - server systems • Client (things that you should think about) • Server – how does user select the server? – connection mechanism set up; how to handle failures? – how does user select the commands to be sent, and how are necessary input data obtained? – sending data; – getting response • switch(response code) into ... – First, what strategy? • Depends on load that you expect on your server, if lightly used a serial server may suffice (and is a lot easier to implement) – If expect to deal with many clients, then need concurrent (threaded) server – handle various permitted responses appropriate for last request – displaying response; – how does user terminate session? ©nabg • Much of code is standardized, so can cut-copy-paste ©nabg Your programming of Client - server systems • Server (things that you should think about) – setting up listener mechanism, accepting new client and getting socket, creating thread for new client – these aspects should all be standard – “handle_client” • reading request; dealing with the unexpected, excessive data, ... • switch(keyword) into ... – invoke separate functions for each of the commands that this server handles • generation of response – how does it terminate? Your programming of Client - server systems • Handle-client – If serial server architecture, • call a function that will accept commands and generate responses until user disconnects – If threaded architecture • create a runnable ClientHandler object • create a thread that runs this ClientHandler • let multiple clients run with own threads ©nabg ©nabg Java support for networking network classes • Socket • java.net package includes a variety of classes that are essentially “wrappers” for the simple to use but rather messy Berkeley sockets networking facilities. – client side connection to network • connect(...) • provides InputStream and OutputStream (byte oriented; re-package inside a DataInputStream etc) • status info ... – classes for the sockets – classes that represent network addresses for server machines (hosts) and clients • ServerSocket – classes that handle URLs – class for talking http protocol to ‘httpd’ server on some host – ... ©nabg – server side connection to network • accept(...) (gives back new Socket for use in handle_client()) • status info, ... ©nabg Example network classes • InetAddress • Program – – – – – – “data structure” representing an address (a “final” class, basically it’s an opaque box holding system dependent information needed by Socket classes) – create instances using static member functions of class • static InetAddress.getByName(String host) ... – can ask for an Internet address for host (specified in dotted decimal or domain name form) takes hostname and port number as command line arguments attempts to build InetAddress for host attempts to open Socket to host/port combination creates DataStream adaptors for Socket loop • • • • • • static InetAddress.getLocalHost() (own address) – not really needed as can create Sockets without needing InetAddress structures, but can be useful as functions that create InetAddress objects can return more informative exceptions if something wrong (whereas exception with Socket simply tells you connection wasn’t possible) read line of input if “Quit” then stop send line (and ‘\n’!) to server read bytes returned convert to String and display A crude substitute for ‘telnet’ (port 23) – talk to ‘echo’, etc. ©nabg ©nabg Example – telnet client • Client reads in a line, making it a Java String with 16-bit characters • telnet (or other standard service) expects ASCII text for communications • Client cannot simply write the String to socket (which it could if talking Java-to-Java) • Instead must write as 8-bit characters • Can use “DataInputStream” and “DataOutputStream” to handle this Basic structure ... • Essentially as shown before ... client build address structure for server get socket - connect to server (one step process with Java classes) while not finished get command from user write to socket read reply show reply to user ©nabg ©nabg import java.net.*; “Telnet” client import java.io.*; substitute public class NetConnect { public static void main(String argv[]) { check input arguments try to use first as a hostname when creating an InetAddress try to use second as a port number create socket for InetAddres, port combination create streams for socket public static void main(String argv[]) { if(argv.length != 2) { System.out.println("Invoke with hostname and port number arguments"); System.exit(0); } InetAddress ina = null; try { ina = InetAddress.getByName(argv[0]); } catch(UnknownHostException uhne) { System.out.println("Couldn't interpret first argument as host name"); System.exit(0); promote System .in to BufferedReader loop } int port = 0; try { String ps = argv[1].trim(); port = Integer.parseInt(ps); } catch (NumberFormatException nfe) { prompt for input; read line, trim, if Quit then break write as bytes to socket loop reading bytes (until get ‘\n’) print data read System.out.println("Couldn't interpret second argument as port number"); System.exit(0); } } } ©nabg ©nabg public static void main(String argv[]) { ... Socket s = null; try { s = new Socket(ina, port); } catch (IOException io) { System.out.println("No luck with that combination; try another host/port"); System.exit(0); } DataOutputStream writeToSocket = null; DataInputStream readFromSocket = null; try { writeToSocket = new DataOutputStream(s.getOutputStream()); readFromSocket = new DataInputStream(s.getInputStream()); } catch (IOException io) { public static void main(String argv[]) { ... BufferedReader d = new BufferedReader(new InputStreamReader(System.in)); byte[] inputbuffer = new byte[2048]; for(;;) { System.out.print(">"); String str = null; try { str = d.readLine(); } catch (IOException io) {System.exit(0); } str = str.trim(); if(str.equals("Quit")) break; try { writeToSocket.writeBytes(str + "\n"); } catch (IOException io) { System.out.println("Write to socket failed"); System.exit(0); } System.out.println("Got socket, but couldn't set up read/write communications"); System.exit(0); } ©nabg ©nabg public static void main(String argv[]) { ... for(;;) { ... int numread = 0; for(numread=0;;numread++) { byte b = 0; try { b = readFromSocket.readByte(); inputbuffer[numread] = b; } catch (IOException io) { System.out.println("Read from socket failed"); System.exit(0); } if(b == '\n') break; } String response = new String(inputbuffer, 0, numread); System.out.println(response); } could have simply printed data from buffer } but more typically would want String Reading and writing @ sockets • If talking Java to Java, it is your choice – promote socket connections to BufferedReader & BufferedWriter (send Unicode characters across network) – promote socket connections to ObjectInputStream and ObjectOutputStream (send objects across the network –this is often the best) – promote socket connections to DataInputStream and DataOutputStream, send Java doubles, UTF-8 data etc across web • If talking Java to other – promote socket connections to DataInputStream and DataOutputStream, send data as text, writing and reading using byte transfers (don’t want to confuse other programs with Unicode!) ©nabg ©nabg Example’s reads and writes Example – simple client • This client meant for general use, not Java-to-Java, so use DataInputStream, DataOutputStream, and text written and read as bytes. • Most “telnet” like services are “line-oriented” • This simple client works, eg connect to port 7 of typical Unix server and you are talking to “echo” program – each line you enter will be echoed by other machine – read one line up to and including ‘\n’ – respond with single line (if multi-line response, then some agreed terminator, eg line with just ‘.’) • Hence code to make sure ‘\n’ sent and loop reading characters until get ‘\n’ (could have used DataInputStream.readLine() but it is deprecated) ©nabg ©nabg Examples • Simple serial server Examples – getDate etc • Threaded version of same • Internet Relay Chat program ©nabg ©nabg NetBeans and examples • NetBeans environment helps develop and test single program, • Now have “client” and “server” • If developing in NetBeans – Use separate projects (possibly sharing files) – Build client, build server; get .jar files with applications – Run at cmd.exe level so can start server application then start client application from separate cmd.exe shell Remote service (simple serial server) ©nabg ©nabg Service operations Protocol • Easy! • getDate – getDate – returns date • client sends string Date • client receives string with Date-time on server • getFortune – getFortune – returns a random fortune cookie string chosen from a small collection • client sends string Fortune • client receives string chosen from servers collection • quit – quit – client politely says goodbye ©nabg • client sends string Quit • server does not respond, it just closes socket ©nabg Communications Serial server • Set up server socket • Loop forever • In this example, use ObjectStreams – overkill! – good practice for real examples • – – – – wait for client to connect (blocking accept call) get socket connecting to client open streams for socket loop • read command • if command was getDate send back date-time • else if command was getFortune send back fortune ObjectStreams – preferred approach for Java to Java – until command was quit – close client socket ©nabg ©nabg Client code import java.net.*; import java.io.*; public class Client { public static void main(String[] args) { // Procedural client, just one mainline for // something this simple! … } } public static void main(String[] args) { if(args.length != 2) { Basic setup – get host & port System.out.println( "Invoke with hostname and port number arguments"); System.exit(1); } String hostName = args[0]; int port = 0; try { String ps = args[1].trim(); port = Integer.parseInt(ps); } catch (NumberFormatException nfe) { System.out.println( "Couldn't interpret second argument as port number"); System.exit(1); } ©nabg ©nabg // Get I/O streams, make the ObjectStreams // for serializable objects ObjectInputStream responseStream = null; ObjectOutputStream requestStream = null; try { requestStream = new ObjectOutputStream( sock.getOutputStream()); requestStream.flush(); responseStream = new ObjectInputStream( sock.getInputStream()); } catch(IOException ioe1) { System.out.println("Failed to get socket streams"); System.exit(1); } System.out.println("Connected!"); public static void main(String[] args) { … // Create client socket Socket sock = null; try { sock = new Socket(hostName, port); } catch(Exception e) { // I/O, unknown host, ... System.out.println("Failed to connect because " + e.getMessage()); System.exit(1); } Output first, flush it, then input – else system may stall; feature! ©nabg ©nabg // main loop, user commands BufferedReader input = new BufferedReader( new InputStreamReader(System.in)); for(;;) { System.out.print(">"); String line; try { … // see next two slides } catch(IOException ioe2) { System.out.println("Problems!"); System.out.println(ioe2.getMessage()); System.exit(1); } catch(ClassNotFoundException cnfe) { System.out.println("Got something strange back from server causing class not found exception!"); System.exit(1); } line = input.readLine(); if(line==null) { System.exit(0); } if(line.equals("Fortune")) { requestStream.writeObject(line); requestStream.flush(); requestStream.reset(); String cookie = (String) responseStream.readObject(); System.out.println(cookie); } else if(line.equals("Date")) { requestStream.writeObject(line); requestStream.flush(); requestStream.reset(); String date = (String) responseStream.readObject(); System.out.println(date); } else … } ©nabg ©nabg … else if(line.equals(“Quit”)) { // Disconnect, don’t expect response to “Quit” requestStream.writeObject(line); requestStream.flush(); requestStream.reset(); requestStream.close(); responseStream.close(); break; } // ignore any other input commands – // they are invalid Sending and receiving • Write the object • Flush – OS may not send small message! • It waits assuming that there will be more to send and better to send one big package than two small ones – So force sending • Reset – Remember those dictionaries associated with objectstreams that remember every object written or read? – Clear by explicitly “resetting” – A reset by the sender causes a reset on receiver ©nabg ©nabg ObjectStream & Strings • Strings are objects! • Class String implements Serializable Simple serial server • Can use Strings with ObjectInputStream and ObjectOutputStream ©nabg ©nabg Usual structure Chosen port number? • Create server socket bound to chosen port and prepare to accept client connections • Loop – – – – • Don’t use numbers < 1024 ever (unless your program would be started by “root” on Unix – i.e. it is some major server like httpd, ftpd, …) • Avoid numbers < 10,000 blocking wait for client to connect get new datastream socket when client does connect invoke “handle client” function to process requests close datastream socket when client leaves – 1K…5K many claimed by things like Oracle – 5K…10K used for things like X-terminals on Unix – 10K..65000 ok usually • Some ports will be claimed, e.g. DB2 will grab 50,000 • If request to get port at specified number fails, pick a different number! ©nabg import java.io.*; import java.net.*; import java.util.*; public class Server { private static final int DEFAULT_PORT = 54321; private static Random rgen = new Random(System.currentTimeMillis()); private static String getFortune() { … } private static void handleClient(Socket sock) { … } public static void main(String[] args) { int port = DEFAULT_PORT; if(args.length==1) { try { port = Integer.parseInt(args[0]); } catch (NumberFormatException ne) { } } ServerSocket reception_socket = null; try { reception_socket = new ServerSocket(port); } catch(IOException ioe1) { … } for(;;) { … } } } ©nabg public static void main(String[] args) { int port = DEFAULT_PORT; if(args.length==1) { try { port = Integer.parseInt(args[0]); } catch (NumberFormatException ne) { } } ServerSocket reception_socket = null; try { reception_socket = new ServerSocket(port); } catch(IOException ioe1) { …} } for(;;) { Socket client_socket=null; try { client_socket = reception_socket.accept(); } catch(IOException oops) { … } handleClient(client_socket); } } Simple serial server! ©nabg ©nabg private static void handleClient(Socket sock) { ObjectInputStream requests = null; ObjectOutputStream responses = null; try { responses = new ObjectOutputStream( sock.getOutputStream()); responses.flush(); requests = new ObjectInputStream( sock.getInputStream()); } catch(IOException ioe1) { System.out.println("Couldn't open streams"); try { sock.close(); } catch(Exception e) {} return; } for(;;) { … } } Setting up of communications streams for current client ©nabg for(;;) { try { String request = (String) requests.readObject(); if(request.equals("Date")) { responses.writeObject((new Date()).toString()); responses.flush(); responses.reset(); } else if(request.equals("Fortune")) { responses.writeObject(getFortune()); responses.flush(); responses.reset(); } else if(request.equals("Quit")) { try { sock.close(); } catch(Exception e) {} break; } } catch(Exception e) { try { sock.close(); } catch(Exception eclose) {} return; } } “Dispatching” remote requests to service functions } ©nabg Server private static String getFortune() { String[] cookies = { "The trouble with some women is they get all excited about nothing, and then they marry him.", "It's true hard work never killed anybody, but I figure, why take the chance?", "A successful man is one who makes more money than his wife can spend.", "When a man brings his wife flowers for no reason, there's a reason.", "Genius is the ability to reduce the complicated to the simple." }; int which = rgen.nextInt(cookies.length); return cookies[which]; } Client ©nabg ©nabg Multiple machines • If practical, run client and server on different machines – Copy client .jar file to second machine – Start server on one machine (with known IP address or known DNS name) – Start client on second machine • Command line arguments are server DNS name (or IP address) and port What about errors on server side? ©nabg ©nabg … and the communications Errors • • First server was fault free – nothing can go wrong except for a break in communications. • Usually client submits data as well as request identifier – and submitted data may cause problems on server. • Modify our service to illustrate this Client now sending a composite message – – • Worth introducing a Message structure for client to send data public class Message implements Serializable { public String control; public Serializable associateddata; public Message(String cntrl, Serializable data) { control = cntrl; associateddata = data; } – getFortune(String choice) – client specifies which fortune by supplying a string • string should represent integer • integer should be in range • Bad data results in server side error that must be returned to client ©nabg control (keyword string identifying command) other data (some form of serializable data, or maybe null if command doesn’t take data) } ©nabg Server needs mechanism to return errors to client • At least two approaches: – Use the same message structure • control word indicates success or failure • associated data is the response or supplementary information about the failure – Create an Exception Object on the server • do not “throw” it on the server! • send it back (Exceptions are serializable and go through object streams) • client receives object, determines whether or not is exception ©nabg Using messages first • Client – modify code in command loop line = input.readLine(); if(line==null) { System.exit(0); } if(line.equals("Fortune")) { System.out.print("Which one : "); String choice = input.readLine(); requestStream.writeObject(new Message(line, choice)); requestStream.flush(); requestStream.reset(); Message response = (Message) responseStream.readObject(); if(response.control.equals("OK")) System.out.println(response.associateddata); else System.out.println(response.associateddata); } ©nabg Server • Fortune function private static String getFortune(int choice) { String[] cookies = { … "It's true hard work never killed anybody, but I …", … }; return cookies[choice]; } Message request = (Message) requests.readObject(); if(request.control.equals("Fortune")) { int choice; try { choice = Integer.parseInt( (String) request.associateddata); String fortune = getFortune(choice); responses.writeObject(new Message("OK", fortune)); responses.flush(); responses.reset(); } catch(NumberFormatException nfe) { responses.writeObject(new Message("Error" , "- non numeric data for choice")); responses.flush(); responses.reset(); } catch(IndexOutOfBoundsException ndxe) { responses.writeObject(new Message("Error", "IndexOutOfBounds!")); responses.flush(); responses.reset(); } } ©nabg ©nabg Other version – with exceptions Client if(line.equals("Fortune")) { System.out.print("Which one : "); String choice = input.readLine(); requestStream.writeObject(new Message(line, choice)); requestStream.flush(); requestStream.reset(); • Client still sends its requests using those Message structs • Response object is what? – Know it will be a Serializable, but what? – String Serializable response = (Serializable) responseStream.readObject(); if(response instanceof Exception) throw ((Exception) response); • print it – Exception • throw it System.out.println(response); // know it is a string } • Use “instanceof” to find if response object is Exception ©nabg ©nabg Client – exception handling catch(IOException ioe2) { System.out.println("Problems!"); System.out.println(ioe2.getMessage()); System.exit(1); } catch(ClassNotFoundException cnfe) { System.out.println("Got something strange back from server causing class not found exception!"); System.exit(1); } catch(Exception other) { System.out.println( "Server probably returned this exception!"); System.out.println(other); } ©nabg Server if(request.control.equals("Fortune")) { int choice; try { choice = Integer.parseInt( (String) request.associateddata); String fortune = getFortune(choice); responses.writeObject(fortune); responses.flush(); responses.reset(); } catch(NumberFormatException nfe) { responses.writeObject(nfe); responses.flush(); responses.reset(); } catch(IndexOutOfBoundsException ndxe) { responses.writeObject(ndxe); responses.flush(); responses.reset(); } } ©nabg Concurrent server Threaded version? ! • No change to client • Server: – Use application specific client handler class that implements Runnable • its public void run() method has the code from the handle_client function (plus any auxiliary functions) – After accepting client • Create new ClientHandler with reference to Socket • Create thread • Thread runs handler ©nabg ©nabg Add another operation to demonstrate that each client has separate handler! Client Ping code if(line.equals("Ping")) { requestStream.writeObject( new Message(line, null)); requestStream.flush(); requestStream.reset(); String response = (String) responseStream.readObject(); System.out.println(response); } • Ping • Client sends Ping as message • Server (ClientHandler) responds with – “Pong” plus a count – count value is incremented each time – count is instance variable of ClientHandler so each client has distinct count value ©nabg ©nabg Server Java 1.5 addition public class Server { private static final int DEFAULT_PORT = 54321; public static void main(String[] args) { … for(;;) { try { Socket client_socket = reception_socket.accept(); ClientHandler ch = new ClientHandler(client_socket); Thread t = new Thread(ch); t.start(); } … ©nabg • Example code creates a new thread for each client and destroys the thread when finished. • Costly. • Well threads aren’t as costly as separate processes but they aren’t cheap either. • The Java 1.5 concurrency utilities package has things like Executor.newFixedThreadPool (which gives you a pool of threads); then give Runnable to this pool, a thread will be allocated. ©nabg ClientHandler public class ClientHandler implements Runnable { private int count; private ObjectInputStream requests; private ObjectOutputStream responses; private String getFortune(int choice) { … } public ClientHandler(Socket sock) { // initialize, open streams etc … } public void run() { for(;;) { … } } } ClientHandler.run() Message request = (Message) requests.readObject(); if(request.control.equals("Date")) { … } else if(request.control.equals("Fortune")) { … } else if(request.control.equals("Ping")) { responses.writeObject("Pong " + Integer.toString(++count)); responses.flush(); responses.reset(); } else if(request.control.equals("Quit")) { … } ©nabg ©nabg Micro I RC nternet elay hat ©nabg ©nabg IRC? IRC • Internet relay chat: • Client program – host machine runs server program – connect to server • program supports multiple concurrent clients (via threading or polling mechanisms) accessing shared data • specify hostname and port and nickname – – – – – server program provides • “virtual channels” – each channel related to a discussion topic – when first connect to server, can review choices, and pick channel view “channels”, submit selection view list of participants, get display of recent messages own messages broadcast to other participants see any messages entered by others • discussion groups on channels – associated with each channel have • group of clients (identified by nicknames) • buffer containing last few messages exchanged ©nabg ©nabg Micro-IRC • Misses out the channels layer. • MicroIRC server Canvas allows text to scroll up like an ordinary computer terminal List of participants – multithreaded Java application – buffer for 10 recent messages • MicroIRC client – also multithreaded (listening for messages from others, responding to local users keys/mouse actions) – List display of names of paticipants – Canvas (fake scrolling) of recent text (no local buffer) – TextField and action Button for message entry Sender names, and messages TextField (edit your message) Button (broadcast message) ©nabg ©nabg MicoIRC NetBeans style • Three projects – Server application – Client application – Utilities library • (In /share/cs-pub/csci213 as compressed collection of NetBeans projects) ©nabg ©nabg Structure Overall operation • Server started • Client connects • Client attempts to “register” • Server – Threaded (using thread pool) – ClientHandlers created for each client – SharedData – keeps track of connected clients and last few messages posted – Provides nickname; registration attempt will fail if nickname already in use – If registration succeeds, client receives in return a set of strings with nicknames for all connected clients and a copy of the last few messages posted. • Client – Simple GUI with text field to post data – Separate client thread to handle asynchronous messages from server ©nabg • Client can now post messages – Messages posted to the server are distributed to all connected clients ©nabg Distribution of postings by server Communications • Two possible strategies – Thread in ClientHandler that receives message is used to run code that forwards it to each client – Separate thread on server • TCP/IP socket streams wrapped in ObjectInputStream and ObjectOutputStream • Forever – Sleep a bit – Check if any new messages in buffer • Post each one to each of clients • This implementation uses first strategy • Serializable objects written and read ©nabg ©nabg Classes - utilities Classes - server • “Utilities” library • Server, the usual – Just a collection of little serializable structs – Server – procedural main-line, cut-&-paste code from any other threaded server – ClientHandler • A “posting” has an author and a message • The initial data received by a client should contain a String[] and an array of postings • A message has an “operation” and some other data • A MyException is a specialization of Exception ©nabg • A Runnable – SharedData • Owns collections of client handlers and recent postings ©nabg Classes - client Client • AWT thread • ClientProgram – procedural, deal with command line arguments, create client object (which creates its GUI) • Client – Handle input fields – Used for actionPerformed that handles send button – Writes to ObjectOuputStream – Runnable • Created thread in Client.run() – Forever • Blocking read on ObjectInputStream • Read a Message object – Add_Client – Remove_Client – Post • ClientGUI ©nabg ©nabg Library project • Classes like Posting used in both Client and Server applications • Create a “Java library” project (“IRCUtilities”) • Define the classes • Build the project • Create Client and Server Application projects – In Library tab of project • Add library/Project/IRCUtilities public class Message implements Serializable { public String operation; public Serializable otherdata; public Message(String op, Serializable data) { operation = op; otherdata = data; } } public class Posting implements Serializable { public String author; public String comment; public Posting(String person, String txt) { author = person; comment = txt; } } “otherdata” in Message will be a Posting or an InitialData object ©nabg ©nabg Micro-IRC Application protocol public class InitialData implements Serializable { public String[] playerNames; public Posting[] postings; public InitialData(String[] names, Posting[] posts) { playerNames = names; postings = posts; } } • Client initiated exchanges – Register • client sends Message with “Register” and a nickname • server sends either public class MyException extends Exception { public MyException(String reason) { super(reason); } } – Message with “OK” and an InitialData object with details of all current players – MyException • if client gets OK, then continue; else terminate determine (main aspects of) application protocol ©nabg ©nabg Micro-IRC Application protocol Micro-IRC Application protocol • Client initiated exchanges • Server initiated – Post – Post • client sends Message with “Post” and contents of text input buffer • not acknowledged • server sends Message “Post” together with a Posting object containing the nickname of participant who posted message and contents of message • no acknowledgement from client – Quit – Add_Client: • client sends “Quit:” (not acknowledged) • another client has connected – Remove_Client: • some other client has disconnected determine (main aspects of) application protocol ©nabg determine (main aspects of) application protocol ©nabg MicroIRC Client • Selection of server and establishment of connection – use command line arguments • host_name, port number, nickname to use when connect – construct InetAddress with given host_name, if fail print error report and quit – socket connection opened to server, again failures result in program exiting. decide on general mechanisms for client, (but not their implementation) MicroIRC Client • Most messages from server are asynchronous! – server sends client a Post whenever any of users on system has entered data, and sends “add/remove client” as others connect and disconnect – so not synchronized with actions of local user – implication => Need a separate thread ready at all times to handle asynchronous message from server ©nabg ©nabg MicroIRC Client • displaying response Canvas allows text to scroll up like an ordinary computer terminal – several possible ways of handling messages List of participants • TextArea (disabled) in scrolling window – allows user to scroll back, refresh is possible after display temporarily obscured – potentially large storage demands as add Strings to TextArea • array of Strings displayed in window Sender names, and messages – discard oldest if array full – limits storage, still allows some scrollback and refresh • no storage of data – messages drawn directly on Canvas This option – “copyarea” used to produce scrolling effect picked for demo – can’t view messages “scrolled off screen”, can’t refresh if display TextField (edit your message) temporarily obscured ©nabg Button (broadcast message) ©nabg public class ClientProgram { public static void main(String[] args) { if(args.length != 3) { … } InetAddress ina = null; try { ina = InetAddress.getByName(args[0]); } catch(UnknownHostException uhne) { … } int port = 0; try { String ps = args[1].trim(); port = Integer.parseInt(ps); } catch (NumberFormatException nfe) { … } Client GUI Frame with BorderLayout java.awt.List in East area; Displays its collection of strings Client cl = new Client(args[2]); try { Thread.sleep(100); } catch(InterruptedException ie) {} if(!cl.connect(ina, port)) { … } cl.register(); Thread t = new Thread(cl); t.start(); } } Panel in South, flowlayout, holds label, textfield, actionbutton ©nabg Create client & GUI (pause to let AWT thread start) Connect Send registration request Start thread for asynchronous messages ©nabg public class ClientGUI extends Frame { public ClientGUI(Client cl) { // Next slide } public void addToList(String s) { fMembers.add(s); } public void removeFromList(String s) { fMembers.remove(s); } public ClientGUI(Client cl) { super("Micro-IRC for " + cl.getName()); setLayout(new BorderLayout()); fMembers=new List(5); fMembers.setEnabled(false); add(fMembers,"East"); fCanvas = new Canvas(); fCanvas.setSize(500,400); fFont = new Font("Serif", Font.PLAIN, 12); fCanvas.setFont(fFont); add(fCanvas,"Center"); Panel p = new Panel(); p.add(new Label("Your message :")); fInput = new TextField(" ", 60); p.add(fInput); Button b = new Button("Send"); p.add(b); b.addActionListener(cl); add(p, "South"); pack(); addWindowListener(cl); } public void addPosting(Posting p) { // later slide } String getInput() { return fInput.getText(); } private List fMembers; private Canvas fCanvas; private TextField fInput; private Button fAction; private Font fFont; } ClientGUI ClientGUI ©nabg ©nabg public void addPosting(Posting p) { Graphics g = fCanvas.getGraphics(); Scroll up FontMetrics f = g.getFontMetrics(fFont); int delta = f.getHeight(); Dimension d = fCanvas.getSize(); g.copyArea(0, delta, d.width, d.height-delta, 0, -delta); g.clearRect(0, d.height-delta, d.width,delta); GUI building • The usual • Frame (container) – Layout manager defined – Components added Add new g.setColor(Color.red); g.drawString(p.author, 4, d.height - f.getDescent()); int pos = 4 + f.stringWidth(p.author) + 10; if(pos<50) pos = 50; g.setColor(Color.black); g.drawString(p.comment, pos, d.height - f.getDescent()); • List • Canvas • Panel – Label – TextField – Button • ActionListener, WindowListener } (As data only written to screen, information is lost if window is hidden then re-exposed) ClientGUI ©nabg ©nabg Client • Client – Runnable, ActionListener, WindowListener – Runnable – for separate thread that gets asynchronous messages from server – ActionListener – when GUI’s button clicked, grab input from textfield and post it to server – WindowListener – on window closing, hide GUI, stop run thread, send quit message to server ©nabg public class Client implements Runnable, ActionListener, WindowListener { public Client(String nickname) { … } public void register() { … } public String getName() { return fName; } boolean connect(InetAddress ina, int port) { … } public void run() { … } public void actionPerformed(ActionEvent e) { … }) { } private Socket mySocket; private ObjectInputStream in; private ObjectOutputStream out; private ClientGUI myGUI; private String fName; private Thread runThread; } Client ©nabg public Client(String nickname) { fName = nickname; myGUI = new ClientGUI(this); myGUI.setVisible(true); } public String getName() { return fName; } boolean connect(InetAddress ina, int port) { try { mySocket = new Socket(ina, port); out = new ObjectOutputStream(mySocket.getOutputStream()); out.flush(); in = new ObjectInputStream(mySocket.getInputStream()); } catch (IOException io) { return false; } return true; Wrap byte level socket streams } in object streams to allow transfer of objects Client public void register() { try { Message m = new Message("Register", fName); out.writeObject(m); out.flush(); Object o = in.readObject(); if(o instanceof MyException) throw (MyException) o; m = (Message) o; InitialData id = (InitialData) m.otherdata; String[] players = id.playerNames; for(String player : players) myGUI.addToList(player); Posting[] posts = id.postings; for(Posting p : posts) { myGUI.addPosting(p); } } catch(ClassNotFoundException cnfe) { … } catch(ClassCastException cce) { … } catch(MyException me) { … } catch(IOException io) { …} } Client ©nabg ©nabg public void run() { runThread = Thread.currentThread(); try { for(;;) { Message m = (Message) in.readObject(); String s = m.operation; if(s.equals("Add_Client")) { String cname = (String) m.otherdata; myGUI.addToList(cname); } else if(s.equals("Remove_Client")) { String cname = (String) m.otherdata; myGUI.removeFromList(cname); } else if(s.equals("Post")){ Posting p = (Posting) m.otherdata; myGUI.addPosting(p); } } } catch (Exception e) { … } } public void actionPerformed(ActionEvent e) { String msg = myGUI.getInput(); if(msg == null) return; try { Message m = new Message("Post", msg); out.writeObject(m); out.flush(); out.reset(); } catch (IOException io) { … } } Client ©nabg Client ©nabg public void windowClosing(WindowEvent e) { myGUI.setVisible(false); runThread.stop(); try { Message m = new Message("Quit", null); out.writeObject(m); out.flush(); try { Thread.sleep(1000); } catch(InterruptedException ie) {} in.close(); out.close(); } catch (IOException io) { } System.out.println("Client terminated"); System.exit(0); } public void windowClosed(WindowEvent e) { } … Your programming of Client - server systems • Server (things that you should think about) – setting up listener mechanism, accepting new client and getting socket, creating thread for new client – – “handle_client” • reading request; • switch(keyword) into ... – invoke separate functions for each of the commands that this server handles • generation of response – how does it terminate? Client ©nabg ©nabg Threaded server Micro-IRC Server shared data handle_client forever() read next request if “bye” then break handle request send response thread die handle_client forever() read next request if “bye” then break handle request send response thread die • Server – normal mechanism • “receptionist” listening at published well know port • accepts connections (getting new port-socket combination) • creates ClientHandler objects (and associated threads) for clients – shared data • vector with ten most recent messages • vector with ClientHandlers ©nabg ©nabg Micro-IRC Server – ClientHandlers • hard code public class Server { private static final int DEFAULT_PORT = 54321; private static final int THREADPOOL_SIZE = 8; public static void main(String[] args) { int port = DEFAULT_PORT; if(args.length==1) { try { port = Integer.parseInt(args[0]); } catch (NumberFormatException ne) { } ServerSocket reception_socket = null; try { – wait for “Register”, check nickname (if name in use, return exception, destroy this ClientHandler) – register client with shared data – send backlog info Threaded-server, threadpool style reception_socket = new ServerSocket(port); } catch(IOException ioe1) { … } SharedData sd = new SharedData(); ExecutorService pool = Executors.newFixedThreadPool( THREADPOOL_SIZE); • run main loop for(;;) { try { – read command Socket client_socket = reception_socket.accept(); ClientHandler ch = new ClientHandler(client_socket, sd); pool.execute(ch); • if Quit, terminate loop • if Post, add message to shared data • ? } catch(Exception e) { … } • on exit from main loop, or any i/o error, deregister this ClientHandler from shared data and terminate } } Server } ©nabg ©nabg public class SharedData { private Vector<Posting> data; private Vector<ClientHandler> subscribers; Threads & Locking public SharedData() { data = new Vector<Posting>(); subscribers = new Vector<ClientHandler>(); } • Multiple clients • Multiple threads in server • Possibly more than one thread trying to modify shared data • Possibly more than one thread trying to use a particular ClientHandler! public synchronized void addClient(ClientHandler ch, String nickname) throws MyException {…} – ClientHandler’s own thread receiving a message – Some other thread trying to broadcast another user’s posting public synchronized void removeClient(ClientHandler ch) {…} public synchronized void postMessage(String poster, String msg) {…} • Need locks – synchronized methods } SharedData ©nabg public synchronized void addClient(ClientHandler ch, String nickname) throws MyException { for(ClientHandler c : subscribers) if(c.getName().equals(nickname)) throw new MyException("Duplicate name"); // Send notification to each existing subscriber that there is a new participant Message m = new Message("Add_Client", nickname); for(ClientHandler c: subscribers) c.sendMessage(m); // Add new client to collection subscribers.add(ch); // Compose a response for new client String[] participants = new String[subscribers.size()]; int i=0; for(ClientHandler c: subscribers) participants[i++] = c.getName(); Posting[] posts = data.toArray(new Posting[0]); InitialData id = new InitialData(participants, posts); Message reply = new Message("OK", id); ch.sendMessage(reply); SharedData } ©nabg public synchronized void removeClient(ClientHandler ch) { subscribers.remove(ch); Message m = new Message("Remove_Client", ch.getName()); for(ClientHandler c : subscribers) c.sendMessage(m); } public synchronized void postMessage(String poster, String msg) { Posting p = new Posting(poster, msg); data.add(p); if(data.size()>10) data.removeElementAt(0); Message m = new Message("Post", p); for(ClientHandler c: subscribers) c.sendMessage(m); } SharedData ©nabg ©nabg public class ClientHandler implements Runnable { private ObjectInputStream input; private ObjectOutputStream output; private SharedData shared; private Socket mySocket; private String name; public ClientHandler(Socket aSocket, SharedData sd) throws IOException { … } public synchronized void sendMessage( Message m) { … } public String getName() { return name; } private boolean handleRegister() { … } public void run() { … } } Locks on SharedData • Synchronization locks on all methods • Guarantee only one thread will be updating shared data ClientHandler ©nabg ©nabg public ClientHandler(Socket aSocket, SharedData sd) throws IOException { mySocket = aSocket; output = new ObjectOutputStream(aSocket.getOutputStream()); output.flush(); input = new ObjectInputStream(aSocket.getInputStream()); shared = sd; Wrap byte level socket streams } private boolean handleRegister() { try { Message m = (Message) input.readObject(); String operation = m.operation; if(!operation.equals("Register")) return false; name = (String) m.otherdata; shared.addClient(this, name); return true; } catch(ClassNotFoundException cnfe) { … } catch(IOException ioe) { …} catch(MyException me) { try { output.writeObject(me); output.flush(); } catch(IOException io) {} } return false; } in object streams to allow transfer of objects public synchronized void sendMessage(Message m) { try { output.writeObject(m); output.flush(); output.reset(); } catch(IOException ioe) { … } } ClientHandler ©nabg ClientHandler ©nabg public void run() { if(!handleRegister()) { try { mySocket.close(); } catch(IOException ioe) {} return; } MicroIRC for(;;) { try { Message m = (Message) input.readObject(); String operation = m.operation; if(operation.equals("Quit")) break; else if(operation.equals("Post")) shared.postMessage(name, (String) m.otherdata); } catch(Exception e) { break; } } shared.removeClient(this); try { mySocket.close(); } catch(IOException ioe) {} • As networked application, quite likely to get a break in client-server link without formal logout – client program died – server terminated – ... • No particular event sent to other process, but will get fail at next read/write operation on socket. • Code tries to trap these and : – (if break spotted by client) terminate client process – (if break spotted by server) destroy client handler object } ClientHandler * Your assessment is very important for improving the work of artificial intelligence, which forms the content of this project
https://manualzz.com/doc/25835428/import-java.net
CC-MAIN-2019-47
en
refinedweb
Implementing Custom States. The commit() method is optional; it is useful if the bolt manages state on its own. This is currently used only by internal system bolts (such as CheckpointSpout). KeyValueState implementations should also implement the methods defined in the KeyValueState interface. The framework instantiates state through the corresponding StateProvider implementation. A custom state should also provide a StateProvider implementation that can load and return the state based on the namespace. Each state belongs to a unique namespace. The namespace is typically unique to a task, so that each task can have its own state. The StateProvider and corresponding State implementation should be available in the class path of Storm, by placing them in the extlib directory.
https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.4.1.1/developing-storm-applications/content/implementing_custom_states.html
CC-MAIN-2019-47
en
refinedweb
Backport #2781 crash when gc_mark()ing already free'd locals of cloned scope Description =begin This causes a segfault on >= 1.8.7-p248 def def_x(arg) Object.send :define_method, :x do def_x lambda{} end end GC.stress = true # unnecessary but makes it occur faster def_x nil n = 3 # minimum for crash, increase if needed n.times { x 0 } This bug was caused by the fix i suggested for #1322,. The previous fix is flawed in that it added the SCOPE_MALLOC flag to the scope just so scope_dup() didn't process it. This had the side-effect that gc_mark_children() now processes the scope whereas it would not have before. A better fix is the following, which instead of adding the SCOPE_MALLOC flag, we add a check for the SCOPE_CLONE flag to scope_dup(). This fixes bug #1322 as well as the segfault: Please check the patch for other unforseen side effects. I didn't see any changes in rubyspec failures from p174 to a patched p248. =end History Updated by coderrr (coderrr .) over 9 years ago =begin just realized the check for SCOPE_CLONE is also no longer needed before freeing locals: =end Updated by coderrr (coderrr .) over 9 years ago =begin By the way, this causes the popular web framework sinatra to segfault due to =end Updated by tmm1 (Aman Gupta) about 9 years ago =begin I can confirm that this is still an issue in 1.8.7-p302 (I had to increase n=3000 to reproduce on linux). It is also causing segfaults when using Sinatra <= 0.9.5. The segfaults in Sinatra are fixed as of >= 0.9.6 with this patch: =end Updated by jeremyevans0 (Jeremy Evans) 4 months ago Also available in: Atom PDF
https://bugs.ruby-lang.org/issues/2781
CC-MAIN-2019-47
en
refinedweb
#include <wx/dnd.h> This is a drop target which accepts files (dragged from File Manager or Explorer). Constructor. See wxDropTarget::OnDrop(). This function is implemented appropriately for files, and calls OnDropFiles(). Reimplemented from wxDropTarget. Override this function to receive dropped files. Return true to accept the data, or false to veto the operation.
https://docs.wxwidgets.org/trunk/classwx_file_drop_target.html
CC-MAIN-2019-47
en
refinedweb
how to Set Multiple Environments in Djangopublished on: | by In category: Django At some point you have experienced this well I have. To prevent this from happening simply create multiple environment. In this article we will create a basic todo app that displays the tasks created, allow you to create a new task and delete task no longer needed. The todo app will illustrate how to achieve setting different environments in django i.e Production, staging and local environment. If you are a complete beginner please refer to my previous post on creating Django projects using either pip and virtualenv or pipenv create project Create the directory for the project, create a virtual environment and activate it.Use pipenv to install django mkdir todo && cd todo pipenv --three pipenv shell pipenv install django django-admin.py startproject myapp # name of our project Create app cd src python manage.py startapp todo Inorder to use the app,register by adding it to installed app in settings.py INSTALLED_APPS = [ 'todo' # the todo app ] models from django.db import models class Task(models.Model): STATUS_CHOICES = ( ('urgent', 'Urgent'), ('normal', 'Normal'), ) title = models.CharField(max_length=250) description = models.TextField() status = models.CharField(max_length=10, choices=STATUS_CHOICES, default='normal') due_date = models.DateField(blank=False) # a date when the task is due due_time = models.TimeField(blank=False) # a time when the task is due # not visible to user created = models.DateTimeField(auto_now_add=True) class Meta: verbose_name = 'task' verbose_name_plural = 'tasks' ordering = ('due_date',) def __str__(self): return self.title Registering the Models in Admin Site Django has a powerful inbuilt admin interface.We can easily view all task, perform delete and create operations from the admin panel. from django.contrib import admin from .models import Task @admin.register(Task) class TaskAdmin(admin.ModelAdmin): list_display =('title','description', 'status','due_date') list_filter =('status','due_date') search_fields=('title',' description') ordering=['status','due_date'] Create and run Migrations python manage.py makemigrations python manage.py migrate Creating superuser python manage.py createsuperuser Username (leave blank to use ''): running server python manage.py runserver When you visit , you shuld be able to add tasks. We are not creating the app for admin are we? So lets dive in and start creating views for users.We start by creating a form. forms.py In todo app create a file and call it forms.py. we will create a model froms. from django import forms from django.forms import ModelForm from .models import Task creating model forms class TaskForm(forms.ModelForm): class Meta: model = Task exclude = ('created',) Views The todo app will list all the tasks on the homepage.Create and delete task. We will define the urls in myapp.urls. Do not forget to import views from todo app from django.contrib import admin from django.urls import path from todo import views #import todo views urlpatterns = [ path('', views.index, name= 'index'), #homepage path('admin/', admin.site.urls), path('create/', views.create_task, name= 'create'), #add task path('delete/ /', views.delete_task, name= 'delete'), #delete task Views.py We will do a number of imports.we must import our form in forms.py and model.I will use the get_object_or_404 object to get task and task id from the db from django.shortcuts import render, redirect,get_object_or_404 from .models import Task from .forms import TaskForm Create the index logic to display all the tasks in views.py. Create the create logic to create new tasksCreate the create logic to create new tasks def index(request): todo_list = Task.objects.all()[:7] context = {'tasks': todo_list} return render (request, 'todo/task/index.html', context) def create_task(request): if request.method == 'POST': add_form =TaskForm(request.POST) if add_form.is_valid(): add_form.save() return redirect('index') else: add_form =TaskForm() context= {'form':add_form} return render(request, 'todo/task/create.html', context ) Create the delete logic to create new tasks def delete_task(request, task_id): task = get_object_or_404(Task, id=task_id) task.delete() return redirect('index') Templates and static files We have views but no templates to display them and some styling in todo app create directory templates with sub-directories todo inside todo directory we create our base template base.html Django Multiple Environment If you are already weary of waiting, the time has come. In myapp, create a folder and call it settings. Inside the settings module create 3 files. 1. base.py # contain common settings 2. local.py # will contain settings relevant for local development 3. prod.py # contain only settings relevant for production The settings folder is not a python module and to make it one simply add __init__.py file inside it. Copy settings.py content into base.py then delete it.Import the base settings in both local and deb by doing this: from .base import * After importing base settings in local.py I have removed debug= True settings from base.py and added it to local.py DEBUG = True If you try to run server at this point you should definitely get an error.Django cannot find either of the settings because we have moved them a folder inside. To resolve this error, in base.py change base-dir from BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) To BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(os.path.join(__file__, os.pardir)))) if we run the server again, we get the error raise ImproperlyConfigured("The SECRET_KEY setting must not be empty.") django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty. We have not specified the development settings we we want to use and we can achieve that by 1. Simply adding the environment's settings flag each time you run commands 2.2. python manage.py command --settings=myapp.settings.local export DJANGO_SETTINGS_MODULE=myapp.settings.localThen run all other commands as usual note myapp.settingsmust be replaced with the project name.The name you used when running the dango-admin.py startproject name.Or simply check this line os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')in wsgi.py Conclusion Thank you for your patience, it was quite a long post.There are so many ways of doing this.I find this method simple.The todo app is basic but tweak it and if you host yours, leave a comment below. This code can be found on Github
https://achiengcindy.com/blog/2019/08/10/how-set-multiple-environments-django/
CC-MAIN-2019-47
en
refinedweb
import RPi.GPIO as GPIO from time import sleep GPIO.setmode(GPIO.BOARD) Motor1A = 16 Motor1B = 18 Motor1E = 22 Motor2A = 23 Motor2B = 21 Motor2E = 19 GPIO.setup(Motor1A,GPIO.OUT) GPIO.setup(Motor1B,GPIO.OUT) GPIO.setup(Motor1E,GPIO.OUT) GPIO.setup(Motor2A,GPIO.OUT) GPIO.setup(Motor2B,GPIO.OUT) GPIO.setup(Motor2E,GPIO.OUT) print "Going forwards" GPIO.output(Motor1A,GPIO.HIGH) GPIO.output(Motor1B,GPIO.LOW) GPIO.output(Motor1E,GPIO.HIGH) GPIO.output(Motor2A,GPIO.HIGH) GPIO.output(Motor2B,GPIO.LOW) GPIO.output(Motor2E,GPIO.HIGH) sleep(2) print "Going backwards" GPIO.output(Motor1A,GPIO.LOW) GPIO.output(Motor1B,GPIO.HIGH) GPIO.output(Motor1E,GPIO.HIGH) GPIO.output(Motor2A,GPIO.LOW) GPIO.output(Motor2B,GPIO.HIGH) GPIO.output(Motor2E,GPIO.HIGH) sleep(2) print "Stopping motor" GPIO.output(Motor1E,GPIO.LOW) GPIO.output(Motor2E,GPIO.LOW) GPIO.cleanup() Ben
https://lb.raspberrypi.org/forums/viewtopic.php?t=100715
CC-MAIN-2019-47
en
refinedweb
Puppeteer is a tool to manipulate web page by using headless Chrome. It can access pre-rendered content so that we can touch the page which could not be accessed without web browsers. Puppeteer can be controlled by node.js since it’s providing JavaScript API. So it’s also true that we can control Puppeteer by using TypeScript which is a superset of JavaScript language. The code in TypeScript has its own type system and at the same time, it can be compiled into JavaScript. Instead of using directly JavaScript to control Puppeteer, it’s far better to use TypeScript, I think. In this blog post, I tried to create a simple web crawler to capture the PDFs of each web page. As you may already know, TypeScript can be easily integrated with the npm package system. The dependencies can be written in package.json file. Please make sure to install your own TypeScript compiler in advance so that you can compile the project by running npm build. { "name": "simple-crawler", "version": "0.1.0", "description": "Create a screenshot of web page in PDF", "main": "dist/index.js", "types": "dist/index.d.ts", "scripts": { "build": "tsc", "prepublishOnly": "npm run build" }, "files": [ "bin", "dist", "package.json" ], "author": "Kai Sasaki", "license": "MIT", "dependencies": { "@types/node": "^8.0.27", "events": "^1.1.1", "puppeteer": "^0.10.2" }, "devDependencies": { "puppeteer-tsd": "0.0.2" }, "main": "./dist/index.js" } As you can see later, the compiler will generate JavaScript files under the directory dist. So the path to the main file is set to ./dist/index.js. Another thing you may notice is about type definition file. @types/node and puppeteer-tsd are the files keeping the type information of classes used in node and Puppeteer respectively. The compiler may produce type check failure without these files. Please don’t forget to include them. TypeScript compiler reads tsconfig.json file in order to refer compile options. You can customize the configuration but please keep in mind to include the type definition file written as node_modules/puppeteer-tsd/src/index.d.ts. The compiler cannot find the type definition of Puppeteer classes without it. { "compilerOptions": { "module": "commonjs", "noImplicitAny": false, "sourceMap": true, "removeComments": true, "preserveConstEnums": true, "declaration": true, "target": "es5", "lib": ["es2015", "dom"], "outDir": "./dist", "noUnusedLocals": true, "noImplicitReturns": true, "noImplicitThis": true, "noUnusedParameters": false, "pretty": true, "noFallthroughCasesInSwitch": true, "allowUnreachableCode": false }, "include": [ "src", "node_modules/puppeteer-tsd/src/index.d.ts" ] } Source files in TypeScript are positioned src directly so that TypeScript compiler can compile source files along with the type definition of Puppeteer. It’s hard to crawl all web pages existing in the world. Only Google can achieve this. So our crawler is designed to traverse the web pages according to the given web site structure file, site.json. The file looks like this. { "name": "index", "selector": null, "baseUrl": "", "children": [ { "name": "menu", "selector": ".element", "children": [] } ] } This file tells the crawler to visit first and then keep traversing the links specified by the children tags. selector is a pointer to the HTML element to be visited by the crawler. As you can see, children can be defined in a nested manner. Here is the implementation of our crawler. import {URL} from 'url'; import {mkdirSync, existsSync} from 'fs'; import * as puppeteer from 'puppeteer'; export class Crawer { private baseUrl: string; constructor(baseUrl: string) { this.baseUrl = baseUrl; } crawl(site: any) { (async () => { // Wait for browser launching. const browser = await puppeteer.launch(); // Wait for creating the new page. const page = await browser.newPage(); await this.crawlInternal(page, `${this.baseUrl}/index.html`, site["children"], site["name"]); browser.close(); })(); } /** * Crawling the site recursively * selectors is a list of selectors of child pages. */ async crawlInternal(page: any, path: string, selectors: [string], dirname: string) { // Create a directory storing the result PDFs. if (!existsSync(dirname)) { mkdirSync(dirname); } // Go to the target page. let url = new URL(path); await page.goto(path, {waitUntil: 'networkidle'}); // Take a snapshot in PDF format. await page.pdf({path: `${dirname}/${url.pathname.slice(1).replace("/", "-")}.pdf`, format: 'A4'}); if (selectors.length == 0) { return; } // Traversing in an order of BFS. let items: [string] = await page.evaluate((sel) => { let ret = []; for (let item of document.querySelectorAll(sel)) { let href = item.getAttribute("href"); ret.push(href); } return ret; }, selectors[0]["selector"]); for (let item of items) { console.log(`Capturing ${item}`); await this.crawlInternal(page, `${item}`, selectors[0]["children"], `${dirname}/${selectors[0]["name"]}`) } } } Puppeteer APIs are basically called asynchronous manner. If you want to call the crawling synchronously, you need to write await keyword in each call. The crawler visits all pages with depth first search algorithm. The crawler just checks every page specified by site.json so that we don’t need to worry about the infinite loop caused by the circular linkage between pages. Actually, this crawler is published in npm with name site-snapshot. The complete source code is kept here. Please take a look at the repository for more detail information. I created this tool to make PDFs of the pages our own web site. I described the background here. You may want to keep your web site in a physical format someday. I hope you enjoy the web crawling with Puppeteer. If you are interested in TypeScript, “Mastering TypeScript” is the best book probably. Though I was not familiar with TypeScript at the beginning, this book provided me comprehensive information and overview of the language. Thanks to this book, I was also able to start contributing to TensorFlow.js. You may want to learn new programming language. TypeScript should be the one empowering you. Thanks!Written on January 15th, 2019 by Kai Sasaki
https://www.lewuathe.com/simple-crawling-with-puppeteer-in-typescript.html
CC-MAIN-2019-47
en
refinedweb
Java Exercises: Create maximum number of regions obtained by drawing n given straight lines Java Basic: Exercise-234 with Solution If you draw a straight line on a plane, the plane is divided into two regions. For example, if you pull two straight lines in parallel, you get three areas, and if you draw vertically one to the other you get 4 areas. Write a Java program to create maximum number of regions obtained by drawing n given straight lines. Input: (1 ≤ n ≤ 10,000) Sample Solution: Java Code: import java.util.*; public class Main { public static void main(String[] args){ Scanner scan = new Scanner(System.in); System.out.println("Input number of straight lines:"); int n=scan.nextInt(); System.out.println("Number of regions:"); System.out.println((n * (n + 1) >> 1) + 1); } } Sample Output: Input number of straight lines: 5 Number of regions: 16 Flowchart: Java Code Editor: Contribute your code and comments through Disqus. Previous: Write a Java program that accept a even number (n should be greater than or equal to 4 and less than or equal to 50,000, Goldbach number) from the user and create a combinations that express the given number as a sum of two prime numbers. Print the number of combinations. Next: Write a Java program to test whether AB and CD are orthogonal or
https://www.w3resource.com/java-exercises/basic/java-basic-exercise-234.php
CC-MAIN-2019-47
en
refinedweb
Changelog for package network_interface 2.1.0 (2018-08-30) Merge pull request #13 from astuff/maint/remove_unused_variables_in_test Minor linting stuff. Implementing proper dependencies for rosunit. Removing unused variables in gtests. Merge pull request #10 from astuff/maint/roslint Removing temporary .orig files. Fixing roslint suggestions. Adding roslint. Merge pull request #8 from astuff/fix/strict_aliasing endianess check now returns bool removes copying input into system endianess check Adds check for system endianess This allows moving the source address start further into the stored uint64_t to properly grab the least significant bytes for putting into smaller containers. changes comment on readbeint test case removes unused include from debugging Fixes read_be and read_le to remove aliasing issue Also adds .vscode to gitignore for VS Code users. Merge pull request #6 from astuff/unit_testing Adds unit tests for some methods. Corrects found issues Unit tests found issues with misinterpreting floats and doubles Modified methods and now this commit Fixes #1 Merge pull request #3 from KyleARectorAStuff/feature/tcp_read_timeout Adding no-timeout option when the timeout_ms argument is 0 Before this commit, a timeout was used in every call TCPInterface::read or TCPInterface::read_exactly, with a default of 0 ms. After this commit, the default is set to 0 ms, and if the read or read_exactly methods receive a 0 timeout request, it will not set a deadline for the timeout, resulting in a blocking read. This allows for the TCPInterface to behave with a timeout, or else be used as it was previously. Removing timeout/received flags, adding error checking in timeout handler Before this commit, the result of a read or timeout was stored in a private variable, populated by the respective callback. After this commit, the conditional statements that proviously relied on the flags instead rely on the gloabl error message's value. Additionally, the timeout handler has added error checking to prevent it from executing fully when the timer.cancel() method is called after a successful read. Adding TCP timout to TCPInterface::read_exactly function Before this commit, the read_exactly method used a blocking read call. After this commit, the read_exactly function has a configurable timeout in milliseconds, with a default of 5 ms. Parameterizing timeout value and setting default to 5 ms Before this commit, the timeout for the TCPInterface::read() method had a hard-coded timeout value of 5 ms. After this commit, the TCPInterface::read() function takes an optional parameter for the timeout, in milliseconds. This parameter defaults to 5 ms. Removing while loop with io_service_.run_one() condition for correct execution Before this commit, the tcp_interface read method would constantly return a timeout error, even if data had been read properly. After this commit, the read method returns an OK status if the read was successful, or TIMEOUT or READ_FAILED depending on the failure type. In the Boost asio library, the io_service can be run continuously, or run once until an event hander has been dispatched. The return value of the run_one method was previously used as a while loop exit condition, but this resulted in the initial behavior describe above, as if the run_one method actually returned after several even handlers were dispatched, instead of just one. After removing the while loop and using the method alone, the desired behavior was achieved. Initial implementation of timeout on TCP read Contributors: Daniel-Stanek, Joe Kale, Joshua Whitley, Kyle Rector, Lucas Buckland, Nishanth Samala, Sam Rustan, Samuel Rustan, Zach Oakes 2.0.0 (2018-04-25) Updating package.xml to format 2. Re-releasing under MIT license. Removing unused header. Fixing type-punned pointer isssues. Adding utility header. Cleaning up function formatting and some const refs. Adding README. Removing roscpp from list of dependencies. Updating repo URLs. Adding Travis CI integration. Bumping version. Adding is_open functions for tcp and udp. Fixing license typos. Standardizing interface error handling. Added additional error values BAD_PARAM and SOCKET_CLOSED. Removed ni_error_handler in favor of return_status_desc. First pass at standardizing reads, writes, and error reporting in network_interface. read_some on TCP was not returning the number of bytes read. Changing license to GPLv3. Changing message name to ROS standard format. Fixing catkin_package line to include the correct directory. added read_exactly message to tcp. adds size to TCPFrame added tcp_interface. renamed packaged to network_interface renamed package, added tcp interface, renamed header and namespace Initial version Contributors: Daniel Stanek, Joe Kale, Joshua Whitley
http://docs.ros.org/lunar/changelogs/network_interface/changelog.html
CC-MAIN-2019-47
en
refinedweb
Closed Bug 796827 Opened 7 years ago Closed 7 years ago [b2g-bluetooth] Destroy Bluetooth Hfp Manager instance while in shutdown Categories (Core :: DOM: Device Interfaces, defect) Tracking () mozilla18 People (Reporter: gyeh, Assigned: gyeh) References Details Attachments (1 file, 4 obsolete files) No description provided. Follow up of Bug 791197 Comment on attachment 666874 [details] [diff] [review] v1 patch Review of attachment 666874 [details] [diff] [review]: ----------------------------------------------------------------- The fact that the diff isn't lining up worries me. Looks good otherwise, but would like to make sure the diff is what is expected before r+'ing. Re-r? after either fixing or telling me I'm crazy. :) ::: dom/bluetooth/BluetoothHfpManager.cpp @@ +36,5 @@ > + > +namespace { > + > +StaticRefPtr<BluetoothHfpManager> gBluetoothHfpManager; > + Nit: Don't need extra newlines between this and other statics. @@ +106,5 @@ > : mCurrentVgs(-1) > , mCurrentCallIndex(0) > , mCurrentCallState(nsIRadioInterfaceLayer::CALL_STATE_DISCONNECTED) > { > +bool So I think your diff may be off here. Splinter is showing Init() to be /in/ the constructor? @@ +342,5 @@ > +{ > + MOZ_ASSERT(NS_IsMainThread()); > + > + gInShutdown = true; > + Nit: We don't need all of these extra newlines. Oops, you got me! I found it after attaching the patch :p Attachment #666874 - Attachment is obsolete: true Attachment #666885 - Flags: review?(kyle) register NS_XPCOM_SHUTDOWN_OBSERVER_ID in Init() Attachment #666885 - Attachment is obsolete: true Attachment #666885 - Flags: review?(kyle) Attachment #666891 - Flags: review?(kyle) remove observers of mozsettings-changed in Cleanup() Attachment #666891 - Attachment is obsolete: true Attachment #666891 - Flags: review?(kyle) Attachment #666894 - Flags: review?(kyle) try server: Attachment #666894 - Attachment is obsolete: true Status: NEW → RESOLVED Closed: 7 years ago Resolution: --- → FIXED Target Milestone: --- → mozilla18
https://bugzilla.mozilla.org/show_bug.cgi?id=796827
CC-MAIN-2019-47
en
refinedweb
Singleton with parameters I need a singleton class to be instantiated with some arguments. The way I'm doing it now is: class SingletonExample { private SingletonExample mInstance; //other members... private SingletonExample() { } public SingletonExample Instance { get { if (mInstance == null) { throw new Exception("Object not created"); } return mInstance; } } public void Create(string arg1, string arg2) { mInstance = new SingletonExample(); mInstance.Arg1 = arg1; mInstance.ObjectCaller = new ObjectCaller(arg2); //etc... basically, create object... } } The instance is created 'late', meaning I don't have all of the needed arguments on app startup. In general I don't like forcing an ordering of method calls, but I don't see another way here. The IoC wouldn't resolve it either, since where I can register it in the container, I can also call Create()... Do you consider this an OK scenario? Do you have some other idea? edit: I know that what I wrote as an example it's not thread safe, thread-safe isn't part of the question Answers A Singleton with parameters smells fishy to me. Consider whateva's answer and the following code: Singleton x = Singleton.getInstance("hello", "world"); Singleton y = Singleton.getInstance("foo", "bar"); Obviously, x==y and y works with x's creation parameters, while y's creation parameters are simply ignored. Results are probably... confusing at least. If you really, really fell like you have to do it, do it like this: class SingletonExample { private static SingletonExample mInstance; //other members... private SingletonExample() { // never used throw new Exception("WTF, who called this constructor?!?"); } private SingletonExample(string arg1, string arg2) { mInstance.Arg1 = arg1; mInstance.ObjectCaller = new ObjectCaller(arg2); //etc... basically, create object... } public static SingletonExample Instance { get { if (mInstance == null) { throw new Exception("Object not created"); } return mInstance; } } public static void Create(string arg1, string arg2) { if (mInstance != null) { throw new Exception("Object already created"); } mInstance = new SingletonExample(arg1, arg2); } } In a multithreading environment, add synchronisation to avoid race conditions. Singleton is ugly but since user whateva can't be bothered to correct his own code... public class Singleton { private static Singleton _instance = null; private static Object _mutex = new Object(); private Singleton(object arg1, object arg2) { // whatever } public static Singleton GetInstance(object arg1, object arg2) { if (_instance == null) { lock (_mutex) // now I can claim some form of thread safety... { if (_instance == null) { _instance = new Singleton(arg1, arg2); } } } return _instance; } } Skeet blogged about this years ago I think, it's pretty reliable. No exceptions necessary, you aren't in the business of remembering what objects are supposed to be singletons and handling the fallout when you get it wrong. Edit: the types aren't relevant use what you want, object is just used here for convenience. Better answer: Create an interface: ISingleton (containing whatever actions you want it do to) And your type: Singleton : ISingleton Assuming you have access to a UnityContainer: IUnityContainer _singletonContainer = new UnityContainer(); // or whatever code to initialize the container - When you are ready to create your type use (assuming you are using Unity for DI): _singletonContainer.RegisterType(typeof(ISingleton), new Singleton(params)); - If you want to grab the singleton just use: var localSingletonVar = _singletonContainer.Resolve<ISingleton>(); Note: If the container doesn't have a type registered for the ISingleton interface, then it should either throw exception, either return null. Old Answer: public class Singleton { private static Singleton instance = null; private Singleton(String arg1, String arg2) { } public static Singleton getInstance(String arg1, String arg2) { if (instance != null) { throw new InvalidOperationException("Singleton already created - use getinstance()"); } instance = new Singleton(arg1, arg2); return instance; } public static Singleton getInstance() { if (instance == null) throw new InvalidOperationException("Singleton not created - use GetInstance(arg1, arg2)"); return instance; } } I'd go with something similar (you could need to check if instance was created too), or, if your DI container supports throwing exception on non registered types, i would go with that. ATTN: Non thread safe code :) The double locking singleton solution provided by annakata will not work everytime on all platforms. there is an flaw in this approach which is well documented. Do not use this approach or you will end up with problems. The only way to solve this issue is to use the volatile keyword e.g. private static volatile Singleton m_instance = null; This is the only thread safe approach. If you're using .NET 4 (or higher), you can use the System.Lazy type. It will take care for you the thread safety issue and do it lazy so you won't create instance unnecessarily. This way the code is short and clean. public sealed class Singleton { private static readonly Lazy<Singleton> lazy = new Lazy<Singleton>(() => new Singleton(),LazyThreadSafetyMode.ExecutionAndPublication); private Singleton() { } public static Singleton Instance { get { return lazy.Value; } } } I actually can't see a singleton in your code. Use a static, parameterized getInstance method which returns the singleton and creates it if it wasn't used before. /// <summary> Generic singleton with double check pattern and with instance parameter </summary> /// <typeparam name="T"></typeparam> public class SingleObject<T> where T : class, new() { /// <summary> Lock object </summary> private static readonly object _lockingObject = new object(); /// <summary> Instance </summary> private static T _singleObject; /// <summary> Protected ctor </summary> protected SingleObject() { } /// <summary> Instance with parameter </summary> /// <param name="param">Parameters</param> /// <returns>Instance</returns> public static T Instance(params dynamic[] param) { if (_singleObject == null) { lock (_lockingObject) { if (_singleObject == null) { _singleObject = (T)Activator.CreateInstance(typeof(T), param); } } } return _singleObject; } } Need Your Help Jquery Remove All Event Handlers Inside Element javascript jquery event-handlingI have a div element with several elements inside it like buttons and so forth which have event handlers attached to them. I know its possible to go:
http://www.brokencontrollers.com/faq/10945186.shtml
CC-MAIN-2019-47
en
refinedweb
In this post, I’ll show you how to add lights to a wheeled robot so that the light is red when the robot is moving backwards and is green when the robot is moving forwards. Requirements Here are the requirements: - Add lights to a wheeled robot so that the light is red when the robot is moving backwards and is green when the robot is moving forwards. You Will Need The following components are used in this project. You will need: - Wheeled Robot - 10 LED Bi-Color Green/Red 565nm/697nm 2-Pin (Jameco Part no.: 94553) - 330 Ohm 0.25 Watt Resistor Directions Get the bi-color LED. Cut the shorter lead of the LED so that it is 1/4 inches in length. Get the 300 Ohm resistor. Cut one of the ends so that it is 3/8 inches. Solder the short lead of the LED to the short lead of the 300 Ohm resistor. Cut the bottom of the resistor so that its lead is 3/8 inches in length. Cut the lead of the LED that is not connected to the resistor so that it is the same length as the lead that has the resistor soldered to it. Insert the LED into the Arduino board. The lead with the resistor goes into pin 11. The lead that does not have the resistor gets inserted into pin 12. Upload the following code to the Arduino board. you should see the LED flashing red and green. /** * Make a bi-color LED flash red and green. * * @author Addison Sears-Collins * @version 1.0 2019-05-15 */ #define LED_RED 11 #define LED_GREEN 12 /* * This setup code is run only once, when * Arudino is supplied with power. */ void setup() { // Define output pins pinMode(LED_RED, OUTPUT); pinMode(LED_GREEN, OUTPUT); // Set output values digitalWrite(LED_RED, LOW); digitalWrite(LED_GREEN, LOW); } /* * This code is run again and again to * make the LED blink. */ void loop() { red_blink(); green_blink(); } // Method to blink the red LED void red_blink() { digitalWrite(LED_RED, HIGH); delay(250); digitalWrite(LED_RED, LOW); delay(250); } // Method to blink the green LED void green_blink() { digitalWrite(LED_GREEN, HIGH); delay(250); digitalWrite(LED_GREEN, LOW); delay(250); } Now, upload the following code to the Arduino board. In this code, the LED will flash green when the robot is moving forward, and the LED will flash red when the robot is moving backwards. #include <Servo.h> /** * Make a robot whose light is red when the robot * is moving backwards and is green when the robot * is moving forwards. * * @author Addison Sears-Collins * @version 1.0 2019-05-15 */ Servo right_servo; Servo left_servo; volatile int left_switch = LOW; // Flag for left switch volatile int right_switch = LOW; // Flag for right switch boolean started = false; // True after first start #define LED_RED 11 #define LED_GREEN 12 void setup() { // Set pin modes for switches pinMode(2, INPUT); pinMode(3, INPUT); pinMode(4, OUTPUT); // Set internal pull up resistors for switches // These go LOW when pressed as connection // is made with Ground. digitalWrite(2, HIGH); // Right switch digitalWrite(3, HIGH); // Left switch digitalWrite(4, LOW); // Pin 4 is ground right_servo.attach(9); // Right servo to pin 9 left_servo.attach(10); // Left servo to pin 10 // Set up the interrupts attachInterrupt(0, bump_right, FALLING); attachInterrupt(1, bump_left, FALLING); started = true; // OK to start moving pinMode(LED_GREEN, OUTPUT); pinMode(LED_RED, OUTPUT); digitalWrite(LED_GREEN, LOW); digitalWrite(LED_GREEN, LOW); } void loop() { if (left_switch == HIGH) { // If the left switch hit go_backwards(); // Go backwards for 0.5 sec delay(500); turn_right(); // Spin for 1 second delay(1000); go_forward(); // Go forward left_switch = LOW; // Reset flag shows bumped } if (right_switch == HIGH) { // If right switch hit go_backwards(); delay(500); turn_left(); delay(1000); go_forward(); right_switch = LOW; } } // Interrupt handlers void bump_left() { if (started) // If robot has begun left_switch = HIGH; } void bump_right() { if (started) right_switch = HIGH; } // Motion Routines: forward, backwards, turn, stop // Continuous servo motor void go_forward() { right_servo.write(0); left_servo.write(180); led_green(); } void go_backwards() { right_servo.write(180); left_servo.write(0); led_red(); } void turn_right() { right_servo.write(180); left_servo.write(180); led_off(); } void turn_left() { right_servo.write(0); left_servo.write(0); led_off(); } void stop_all() { right_servo.write(90); left_servo.write(90); } void led_green() { digitalWrite(LED_GREEN, HIGH); digitalWrite(LED_RED, LOW); } void led_red() { digitalWrite(LED_GREEN, LOW); digitalWrite(LED_RED, HIGH); } void led_off() { digitalWrite(LED_GREEN, LOW); digitalWrite(LED_RED, LOW); } If for some reason you get a situation where you get the opposite result of what should occur (flashes red when moving forward and green when in reverse), the LED is reversed. Turn it around. Also, if you are getting a situation where your servos are not moving, it likely means that voltage is insufficient. Changing the location of the servo power wire, the wire that connects the red servo line to the red line of the 4xAA battery pack usually does the trick. If not, get new batteries for the servo.
https://automaticaddison.com/how-to-add-lights-to-a-wheeled-robot-arduino/
CC-MAIN-2019-47
en
refinedweb
msgrcv - XSI message receive operation [XSI] #include <sys/msg.h>#include <sys/msg.h> ssize_t msgrcv(int msqid, void *msgp, size_t msgsz, long msgtyp, int msgflg); The msgrcv() function operates on XSI message queues (see XBD -1 shall shall be decremented by 1. - msg_lrpid shall be set to the process ID of the calling process. - msg_rtime shall be set to the current time, as described in IPC General Description. Upon successful completion, msgrcv() shall return a value equal to the number of bytes actually placed into the buffer mtext. Otherwise, no message shall be received, msgrcv() shall return -1, XSI Interprocess Communication. - . Receiving |gsnd, sigaction XBD Message Queue, <sys/msg.h> First released in Issue 2. Derived from Issue 2 of the SVID. The type of the return value is changed from int to ssize_t, and a warning is added to the DESCRIPTION about values of msgsz larger the {SSIZE_MAX}. The note about use of POSIX Realtime Extension IPC routines has been moved from FUTURE DIRECTIONS to the APPLICATION USAGE section. The normative text is updated to avoid use of the term "must" for application requirements. POSIX.1-2008, Technical Corrigendum 1, XSH/TC1-2008/0398 [345] and XSH/TC1-2008/0399 [421] are applied. return to top of pagereturn to top of page
https://pubs.opengroup.org/onlinepubs/9699919799/functions/msgrcv.html
CC-MAIN-2019-47
en
refinedweb
?. - AWS Elastic Beanstalk Flask Application: Toolkit Can't Find Resource That Appears to be Present on EC2 Instance I'm trying to deploy a flask/python app using AWS Elastic Beanstalk and getting a '500 internal server' error resulting from a missing resource. The app works locally but one of the backend components can't find a resource it needs when running on the EC2 instance that Elastic Beanstalk is managing. I am using the Natural Language Toolkit which I include in my requirements.txt file to be downloaded as a pip package. The nltk package install seems to be have been successful as I'm not getting an error on the line: import nltk The line I am getting the error on in my application code is: sent_detector = nltk.data.load('tokenizers/punkt/english.pickle') The error I am getting in my log ending is: [Wed Feb 14 22:17:10.731016 2018] [:error] [pid 13894] [remote 172.31.0.22:252] Resource \x1b[93mpunkt\x1b[0m not found. [Wed Feb 14 22:17:10.731018 2018] [:error] [pid 13894] [remote 172.31.0.22:252] Please use the NLTK Downloader to obtain the resource: [Wed Feb 14 22:17:10.731020 2018] [:error] [pid 13894] [remote 172.31.0.22:252] [Wed Feb 14 22:17:10.731023 2018] [:error] [pid 13894] [remote 172.31.0.22:252] \x1b[31m>>> import nltk [Wed Feb 14 22:17:10.731025 2018] [:error] [pid 13894] [remote 172.31.0.22:252] >>> nltk.download('punkt') [Wed Feb 14 22:17:10.731027 2018] [:error] [pid 13894] [remote 172.31.0.22:252] \x1b[0m [Wed Feb 14 22:17:10.731029 2018] [:error] [pid 13894] [remote 172.31.0.22:252] Searched in: [Wed Feb 14 22:17:10.731031 2018] [:error] [pid 13894] [remote 172.31.0.22:252] - '/home/wsgi/nltk_data' [Wed Feb 14 22:17:10.731034 2018] [:error] [pid 13894] [remote 172.31.0.22:252] - '/usr/share/nltk_data' [Wed Feb 14 22:17:10.731036 2018] [:error] [pid 13894] [remote 172.31.0.22:252] - '/usr/local/share/nltk_data' [Wed Feb 14 22:17:10.731038 2018] [:error] [pid 13894] [remote 172.31.0.22:252] - '/usr/lib/nltk_data' [Wed Feb 14 22:17:10.731040 2018] [:error] [pid 13894] [remote 172.31.0.22:252] - '/usr/local/lib/nltk_data' [Wed Feb 14 22:17:10.731043 2018] [:error] [pid 13894] [remote 172.31.0.22:252] - '/opt/python/run/venv/nltk_data' [Wed Feb 14 22:17:10.731045 2018] [:error] [pid 13894] [remote 172.31.0.22:252] - '/opt/python/run/venv/lib/nltk_data' [Wed Feb 14 22:17:10.731047 2018] [:error] [pid 13894] [remote 172.31.0.22:252] - '' [Wed Feb 14 22:17:10.731049 2018] [:error] [pid 13894] [remote 172.31.0.22:252] When I added the line nltk.download('punkt') to my application in order to ensure that the resource I need would be downloaded, I get this message in the error log: [Wed Feb 14 22:30:07.861273 2018] [:error] [pid 28765] [nltk_data] Downloading package punkt to /home/wsgi/nltk_data... which is then followed by a series of errors that comes down to: [Wed Feb 14 22:30:07.864521 2018] [:error] [pid 28765] [remote 172.31.0.22:55448] FileNotFoundError: [Errno 2] No such file or directory: '/home/wsgi/nltk_data' So I SSH-d into my EC2 instance, entered the virtual environment that it seems like my app is running on from the opt/python/run directory using $source venv/bin/activate and opened up the python interpreter. When I ran >>import nltk >>nltk.download('punkt') I got back [nltk_data] Downloading package punkt to /home/ec2-user/nltk_data... [nltk_data] Package punkt is already up-to-date! True So I also tried >>> nltk.data.load('tokenizers/punkt/english.pickle') and got back: <nltk.tokenize.punkt.PunktSentenceTokenizer object at 0x7fb8afd34080> So, it seems like the nltk package on my EC2 instance knows where the nltk_data resource is as long as it's not being asked by my Flask application. I also tried entering >>nltk.data.path.append('home/ec2-user/nltk_data') and still got the same error as I posted above with no indication that my attempts append the list of paths to check for nltk_data had gone through. I am not sure what I need to get nltk to locate where the nltk_data resource it is trying to find is located. I have seen .ebextensions mentioned in reference to dependency issues and tried to read the AWS page about it, but am not sure exactly how it fits into the issue occurring with my application. Probably a learning-curve web dev literacy issue on my end. Thanks for any clarity that can be provided regarding this situation! - Flask-SQLAlchemy is not inserting data into database table I am using Flask-SQLAlchemy and I'm trying to insert values taken from a form. I'm not getting an error, but for some reason, it's not inserting into the database and I don't know why. In models.py: class Appliance(db.Model): """""" __tablename__ = "appliances" id = db.Column(db.Integer, primary_key=True) name = db.Column(db.String) state = db.Column(db.String, default='Off') room_id = db.Column(db.Integer, db.ForeignKey("rooms.id")) room = db.relationship("Room", backref=db.backref( "appliances", order_by=id), lazy=True) class Door(db.Model): """""" __tablename__ = "doors" id = db.Column(db.Integer, primary_key=True) state = db.Column(db.String, default='Closed') room_id = db.Column(db.Integer, db.ForeignKey("rooms.id")) room = db.relationship("Room", backref=db.backref( "doors", order_by=id), lazy=True) In forms.py: class ApplianceForm(Form): types = [('Appliance', 'Appliance'), ('Door', 'Door') ] names = [('Light', 'Light'), ('Television', 'Television'), ('Air Conditioner', 'Air Conditioner'), ('Oven', 'Oven'), ('Curtain', 'Curtain') ] name = SelectField('Name', choices=names, validators=[validators.required()]) room = SelectField('Room', validators=[validators.required()]) type = RadioField('Type', choices=types, validators=[validators.required()]) In views.py: @app.route('/new_appliance', methods=['GET', 'POST']) def new_appliance(): form = ApplianceForm(request.form) form.room.choices = [(r.id, r.name) for r in db_session.query(Room)] print form.errors if request.method == 'POST': name=request.form['name'] room=request.form['room'] type=request.form['type'] print name, " ", room, " ", type if request.method == 'POST' and form.validate(): if form.type.data == 'Appliance': appliance = Appliance() appliance.room = room appliance.name = form.name.data db_session.add(appliance) else: door = Door() door.room = room db_session.add(door) db_session.commit() flash('Appliance added!') return render_template('new_appliance.html', form=form) I think that the problem is in the state column. But even if I write it explicitly in the new_appliance()function, like this: application.state = 'Off' It still doesn't change anything. What might the problem be? - ApScheduler does not share global variables shared across workers of the Flask app when running on Gunicorn I'm relativelly new to Python whereas should have decent background in Java. Struggling with the following use case: We've got a microservice running on Gunicorn app server. Service is intended to be called by multiple clients to performs certain manipulations. In order to serve individual client's requests service need to load a heavy model (that takes ~ 10 seconds to load) and then it can re-use the same model to perform all subsequent requests. Thus in order to make all subsequent requests faster we decided to keep model loaded in memory for some time and age upon inactivity. As stated earlier, I'm relativelly new to python. And could not come up with a better way than storing model in a dictionary (Java's Map alternative) {clientId:model}. In order to age models was planning to use a scheduled task that would be able to access the dictionary and remove models that are not used any more. Run into ApScheduler and in a short time was able to integrate it in my Flask application. Created an integration test that run perfectly fine(Test uses app.run() directly). However when I run the application on Gunicorn seems like the apScheduler task always gets an empty version of the shared dictionary resource. Was wondering if anyone else run into this issue before... and how did you solve it if you did. Created a quick code snipped that to highlight the problematic behavior:. from flask import Flask from apscheduler.schedulers.background import BackgroundScheduler # GLOBAL STORAGE app = Flask(__name__); # imaginary heavy resource __global_count__ = 0; # FLASK SERVICES @app.route("/addOne") def addOne(): global __global_count__ __global_count__ = __global_count__ + 1; return str(__global_count__); @app.route("/count") def count(): global __global_count__ return str(__global_count__); # Scheduled Task def reset(): with app.app_context(): global __global_count__ # Always prints 0 here print("GlobalCount: %d"% __global_count__); __global_count__ = 0; sched = BackgroundScheduler(daemon=True); sched.add_job(reset,'interval',seconds=30); sched.start(); Would greatly appreciate any help. - Deploying Django Project with Supervisor and Gunicorn getting FATAL Exited too quickly (process log may have details) error Im trying to launch my django project using this tutorial. I am currently setting up gunicorn and supervisor. These are the configs... Here is my Gunicorn config: #!/bin/bash NAME="smart_suvey" DIR=/home/smartsurvey/mysite2 USER=smartsurvey GROUP=smartsurvey WORKERS=3 BIND=unix:/home/smartsurvey/run/gunicorn.sock DJANGO_SETTINGS_MODULE=myproject.settings DJANGO_WSGI_MODULE=myproject.wsgi LOG_LEVEL=error cd $DIR source ../venv/bin/activate export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE export PYTHONPATH=$DIR:$PYTHONPATH exec ../venv/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \ --name $NAME \ --workers $WORKERS \ --user=$USER \ --group=$GROUP \ --bind=$BIND \ --log-level=$LOG_LEVEL \ --log-file=- This is my supervisor config: [program:smartsurvey] command=/home/smartsurvey/gunicorn_start user=smartsurvey autostart=true autorestart=true redirect_stderr=true stdout_logfile=/home/smartsurvey/logs/gunicorn.log When Ive saved my supervisor, i run sudo supervisorctl rereadfollowed by sudo supervisorctl update. These throw no errors. I then run sudo supervisorctl status smartsurvey, which gives the error smartsurvey FATAL Exited too quickly (process log may have details) This is my first time putting my project on the internet so help would be appreciated! Thanks - git server and django program inside nginx I want to run a git server inside my django program. my nginx config is like this: server{ listen 192.168.1.250:80; root /var/www/html/git; location /server\.git { client_max_body_size 0; # Git pushes can be massive, just to make sure nginx doesn't suddenly cut the connection add this. auth_basic "Git Login"; # Whatever text will do. auth_basic_user_file "/var/www/html/html } location / { include proxy_params; proxy_pass; } } my django program run correctly, but for git server, I cannot open that. but when I change the location of django program, both of them work correctly. location /user { include proxy_params; proxy_pass; } I want to use just "/" and not to "/" + string. what should I do?? - gunicorn + django + nginx -- recv() not ready (11: Resource temporarily unavailable) I am getting this issue. I am trying to setup a server and cannot get it running. I am using django, gunicorn and nginx. here are the logs nginx log 2018/02/09 22:22:32 [debug] 1421#1421: *9 http write filter: l:1 f:0 s:765 2018/02/09 22:22:32 [debug] 1421#1421: *9 http write filter limit 0 2018/02/09 22:22:32 [debug] 1421#1421: *9 writev: 765 of 765 2018/02/09 22:22:32 [debug] 1421#1421: *9 http write filter 0000000000000000 2018/02/09 22:22:32 [debug] 1421#1421: *9 http copy filter: 0 "/?" 2018/02/09 22:22:32 [debug] 1421#1421: *9 http finalize request: 0, "/?" a:1, c:1 2018/02/09 22:22:32 [debug] 1421#1421: *9 set http keepalive handler 2018/02/09 22:22:32 [debug] 1421#1421: *9 http close request 2018/02/09 22:22:32 [debug] 1421#1421: *9 http log handler 2018/02/09 22:22:32 [debug] 1421#1421: *9 free: 000055D4A01ACBE0 2018/02/09 22:22:32 [debug] 1421#1421: *9 free: 000055D4A01C6FB0, unused: 0 2018/02/09 22:22:32 [debug] 1421#1421: *9 free: 000055D4A01B9F80, unused: 214 2018/02/09 22:22:32 [debug] 1421#1421: *9 free: 000055D4A01C9460 2018/02/09 22:22:32 [debug] 1421#1421: *9 hc free: 0000000000000000 0 2018/02/09 22:22:32 [debug] 1421#1421: *9 hc busy: 0000000000000000 0 2018/02/09 22:22:32 [debug] 1421#1421: *9 reusable connection: 1 2018/02/09 22:22:32 [debug] 1421#1421: *9 event timer add: 3: 70000:1518215022208 2018/02/09 22:22:32 [debug] 1421#1421: *9 post event 000055D4A01D8BD0 2018/02/09 22:22:32 [debug] 1421#1421: *9 delete posted event 000055D4A01D8BD0 2018/02/09 22:22:32 [debug] 1421#1421: *9 http keepalive handler 2018/02/09 22:22:32 [debug] 1421#1421: *9 malloc: 000055D4A01C9460:1024 2018/02/09 22:22:32 [debug] 1421#1421: *9 recv: fd:3 -1 of 1024 2018/02/09 22:22:32 [debug] 1421#1421: *9 recv() not ready (11: Resource temporarily unavailable) 2018/02/09 22:22:32 [debug] 1421#1421: *9 free: 000055D4A01C9460 gunicorn log: [2018-02-09 22:21:35 +0000] [2514] [INFO] Worker exiting (pid: 2514) [2018-02-09 22:21:35 +0000] [2510] [INFO] Worker exiting (pid: 2510) [2018-02-09 22:21:35 +0000] [2523] [INFO] Worker exiting (pid: 2523) [2018-02-09 22:21:35 +0000] [2501] [INFO] Shutting down: Master [2018-02-09 22:21:36 +0000] [2556] [INFO] Starting gunicorn 19.7.1 [2018-02-09 22:21:36 +0000] [2556] [INFO] Listening at: unix:/var/www/myapp/application/live.sock (2556) [2018-02-09 22:21:36 +0000] [2556] [INFO] Using worker: sync [2018-02-09 22:21:36 +0000] [2563] [INFO] Booting worker with pid: 2563 [2018-02-09 22:22:31 +0000] [2556] [CRITICAL] WORKER TIMEOUT (pid:2563) [2018-02-09 22:22:32 +0000] [2598] [INFO] Booting worker with pid: 2598 what can it be? I am stuck for hours now. here is my gunicorn service: [Unit] Description=gunicorn daemon After=network.target [Service] User=www-data Group=www-data WorkingDirectory={{ app_dir }} ExecStart={{ virtualenv_dir }}/bin/gunicorn --workers 1 --bind unix:{{ app_dir }}/live.sock {{ wsgi_module }}:application --error-logfile /var/log/gunicorn.log [Install] WantedBy=multi-user.target - Python thread worker - append main thread list Hello is there any way to append main_thread list from another threads? I need to thread infinite loop. Something like: Class Main(object): list = [] def __init__(self): Thread(target=self.thread, args=()).start() def thread(self): while True: self.list.append("test") - How to order results from workers as if there are no workers used? Suppose that I have the following code to read lines and multiple each line by 2 and print each line out one by one. I'd like to use N workers. Each worker takes M lines each time and processes them. More importantly, I'd like the output to be printed in the same order as the input. But the example here does not guarantee the output is printed in the same order as the input. The following URL also shows some examples. But I don't think they fit my requirement. The problem is that the input can be arbitrarily long. There is no way to hold everything in memory before they are printed. There must be a way to get some output from the workers can determine if the output of a worker is ready to be printed and then it is print. It sounds like there should be a master goroutine to do this. But I am not sure how to implement it most efficiently, as this master gorountine can easily be a bottleneck when N is big. How to collect values from N goroutines executed in a specific order? Could anybody show an example program that results from the workers in order and prints the results as early as they can be printed? $ cat main.go #!/usr/bin/env gorun // vim: set noexpandtab tabstop=2: package main import ( "bufio" "fmt" "strconv" "io" "os" "log" ) func main() { stdin := bufio.NewReader(os.Stdin) for { line, err := stdin.ReadString('\n') if err == io.EOF { if len(line) != 0 { i, _ := strconv.Atoi(line) fmt.Println(i*2) } break } else if err != nil { log.Fatal(err) } i, _ := strconv.Atoi(line[:(len(line)-1)]) fmt.Println(i*2) } } - How to design a NodeJs worker to handle concurrent long running jobs I'm working on a small side project and would like to grow it out, but I'm not too sure how. My question is, how should I design my NodeJs worker application to be able to execute multiple long running jobs at the same time? (i.e. should I be using multiprocessing libraries, a load-balancer, etc) My current situation is that I have a NodeJs app running purely to serve web requests and put jobs on a queue, while another NodeJs app reading off that queue carries out those jobs (on a heroku worker dyno). Each job may take anywhere from 1 hour to 1 week of purely writing to a database. Due to the nature of the job, and it requiring an npm package specifically, I feel like I should be using Node, but at the same time I'm not sure it's the best option when considering I would like to scale it so that hundreds of jobs can be executed at the same time. Any advice/suggestions as to how I should architect this design would be appreciated. Thank you.
http://codegur.com/46685820/how-to-run-initialization-for-each-gunicorn-worker
CC-MAIN-2018-09
en
refinedweb
Application for Automatic Extension of Time To File U.S. Individual Income Tax Return - Lynne Terry - 2 years ago - Views: Transcription 1 Form 4868 Department of the Treasury Internal Revenue Service (99) Application for Automatic Extension of Time To File U.S. Individual Income Tax Return Information about Form 4868 and its instructions is available at OMB No There are three ways to request an automatic extension of time to file a U.S. individual income tax return. 1. You can file Form 4868 and pay all or part of your estimated income tax due. See How To Make a Payment, on page You can file Form 4868 electronically by accessing IRS e-file using your home computer or by using a tax professional who uses e-file. 3. You can file a paper Form It s Convenient, Safe, and Secure IRS e-file is the IRS s electronic filing program. You can get an automatic extension of time to file your tax return by filing Form 4868 electronically. You will receive an electronic acknowledgment once you complete the transaction. Keep it with your records. Do not mail in Form 4868 if you file electronically, unless you are making a payment with a check or money. Purpose of. Gift and generation skipping transfer (GST) tax return (Form 709). An extension of time to file your 2014 calendar year income tax return also extends the time to file Form 709 for However, it does not extend the time to pay any gift and GST tax you may owe for To make a payment of gift and GST tax, see Form If you do not pay the amount due by the regular due date for Form 709, you will owe interest and may also be charged penalties. If the donor died during 2014, see the instructions for Forms 709 and Qualifying for the Extension To get the extra time you must: Pay Electronically You do not need to submit a paper Form 4868 if you file it with a payment using our electronic payment options. Your extension will be automatically processed when you pay part or all of your estimated income tax electronically. You can pay online or by phone (see page 3). E-file Using Your Personal Computer or Through a Tax Professional Refer to your tax software package or tax preparer for ways to file electronically. Be sure to have a copy of your 2013 tax return you will be asked to provide information from the return for taxpayer verification. If you wish to make a payment, you can pay by electronic funds withdrawal or send your check or money order to the address shown in the middle column under Where To File a Paper Form 4868 (see page 4). File a Paper Form 4868 If you wish to file on paper instead of electronically, fill in the Form 4868 below and mail it to the address shown on page 4. For information on using a private delivery service, see page 4. Note. If you are a fiscal year taxpayer, you must file a paper Form General Instructions 1. Properly estimate your 2014 tax liability using the information available to you, 2. Enter your total tax liability on line 4 of Form 4868, and 3. File Form 4868 by the regular due date of your return. DETACH HERE. For more details, see Interest and Late Payment Penalty on page 2.. Form 4868 Department of the Treasury Internal Revenue Service (99) Part I Identification 1 Your name(s) (see instructions) Address (see instructions) Application for Automatic Extension of Time To File U.S. Individual Income Tax Return For calendar year 2014, or other tax year beginning, 2014, ending, 20. City, town, or post office State ZIP Code 2 Your social security number 3 Spouse's social security number OMB No Part II Individual Income Tax 4 Estimate of total tax liability for $ 5 Total 2014 payments Balance due. Subtract line 5 from line 4 (see instructions) Amount you are paying (see instructions)... 8 Check here if you are out of the country and a U.S. citizen or resident (see instructions) Check here if you file Form 1040NR or 1040NR-EZ and did not receive wages as an employee subject to U.S. income tax withholding For Privacy Act and Paperwork Reduction Act Notice, see page 4. Cat. No W Form 4868 (2014) 2 Form 4868 (2014) Page 2 When To File Form 4868 File Form 4868 by April 15, Fiscal year taxpayers, file Form 4868 by the original due date of the fiscal year return., File this form and be sure to check the box on line 8 if you need an additional 4 months to file your return. If you are out of the country and a U.S. citizen or resident, you may qualify for special tax treatment if you meet the bona fide residence or physical presence tests. If you do not expect to meet either of those tests by the due date of your return, request an extension to a date after you expect to meet the tests by filing Form 2350, Application for Extension of Time To File U.S. Income Tax Return. You are out of the country if: You live outside the United States and Puerto Rico and your main place of work is outside the United States and Puerto Rico, or You are in military or naval service on duty You must file Form 4868 by the regular due date of the return. If you did not receive wages as an employee subject to U.S. income tax withholding, and your return is due June 15, 2015, check the box on line 9. Total Time Allowed Generally, we cannot extend the due date of your return for more than 6 months (October 15, 2015, 15, You are considered to have reasonable cause for the period covered by this automatic extension if at least 90% of your actual 2014 tax liability is paid before the regular due date of your return through withholding, estimated tax payments, or payments made with your reason for filing late. Do not attach the statement to Form. 3 Form 4868 (2014) Page 3 TAX-FORM ( ).. An ITIN is for tax use only. It does not entitle you to social security benefits or change your employment or immigration status under U.S. law. Part II Individual Income Tax Rounding off to whole dollars. You can round off cents to whole dollars on.-. How To Make a Payment. For Forms 1040A, 1040EZ, 1040NR-EZ, 1040-PR, and 1040-SS, do not include on line 5 the amount you are paying with this Form Note. If you e-file Form 4868 and mail a check or money order to the IRS for payment, use a completed paper Form 4868 as a voucher. 4 Form 4868 (2014) Page 4 Where To File a Paper Form 4868 If you live in: Alabama, Georgia, Kentucky, New Jersey, North Carolina, South Carolina, Tennessee, Virginia And you are making a payment, send Form 4868 with your payment to Internal Revenue Service: And you are not making a payment, send Form 4868 to Department of the Treasury, Internal Revenue Service Center: P.O. Box Louisville, KY Kansas City, MO Connecticut, Delaware, District of Columbia, Maine, Maryland, Massachusetts, Missouri, New Hampshire, New York, Pennsylvania, Rhode Island, Vermont, West Virginia P.O. Box Hartford, CT Kansas City, MO Florida, Louisiana, Mississippi, Texas P.O. Box 1302 Charlotte, NC Austin, TX Alaska, Arizona, California, Colorado, Hawaii, Idaho, Nevada, New Mexico, Oregon, Utah, Washington, Wyoming Arkansas, Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Montana, Nebraska, North Dakota, Ohio, Oklahoma, South Dakota, Wisconsin P.O. Box 7122 San Francisco, CA Fresno, CA P.O. Box Cincinnati, OH Fresno, are a nonpermanent P.O. Box 1302 Charlotte, NC USA Austin, TX USA resident of Guam or the U.S. Virgin Islands All foreign estate and trust Form 1040NR filers P.O. Box 1303 Charlotte, NC USA Cincinnati, OH USA All other Form 1040NR, 1040NR-EZ, 1040-PR, and 1040-SS filers P.O. Box 1302 Charlotte, NC USA Austin, TX USA Private Delivery Services You can use certain private delivery services designated by the IRS to meet the timely mailing as timely filing/paying rule for tax returns and payments. These private delivery services include only the following.. or provide incomplete or false information, you may be liable. Individual Taxpayer Identification Number Understing Your IRS Individual Taxpayer Identification Number ITIN TABLE OF CONTENTS Important Changes to Note 4 General Information 5 What is an ITIN? 5 What is the purpose of an ITIN? 5 When did IRS State of New Jersey Department of Labor and Workforce Development State of New Jersey Department of Labor and Workforce Development Please Read This Guide And Save It For Future Reference PR-94 (R-3-15) ON THE INTERNET Visit for unemployment and reemployment 2014 Instructions for Schedule A (Form 1040) Department of the Treasury Internal Revenue Service 2014 Instructions for Schedule A (Form 1040) Itemized Deductions Use Schedule A (Form 1040) to figure your itemized deductions. In most cases, your federal Information for Retirees and Their Families Information for Retirees and Their Families Federal Employees Group Life Insurance (FEGLI) This pamphlet contains important information for retired Federal employees about rights and benefits as an annuitant Employer Guide to Reemployment Tax* Employer Guide to Reemployment Tax* RT-800002 R. 03/15 Table of Contents Introduction...2 Preface...2 Background...2 Classification of Workers...2 State Unemployment Tax Act (SUTA)...2 Federal Unemployment GENERAL INSTRUCTIONS If you have any questions about this form, how to fill it out, or about VA benefits, contact your nearest VA regional office. You can locate the address of the nearest regional office in your telephone Starting a Business in Pennsylvania Starting a Business in Pennsylvania A BEGINNER S GUIDE REV-588 PO (04-14) This guide is published by the PA Department of Revenue to provide information to new business owners. It is not intended as Request for Taxpayer Identification Number and Certification Form W-9 (Rev. December 2014) Department of the Treasury Internal Revenue Service Request for Taxpayer Identification Number and Certification 1 Name (as shown on your income tax return). Name is required Completing your Superannuation guarantee charge statement quarterly Instructions and form for employers Completing your Superannuation guarantee charge statement quarterly This statement is to be used for quarters starting on or after 1 July 2003. To obtain additional QUESTIONNAIRE FOR NATIONAL SECURITY POSITIONS Standard Form 86 Revised December 2010 U.S. Office of Personnel Management 5 CFR Parts 731, 732, and 736 QUESTIONNAIRE FOR NATIONAL SECURITY POSITIONS Form approved: OMB No. 3206 0005 Follow instructions BENEFIT HANDBOOK YOUR BENEFIT HANDBOOK ETF P O Box 7931 Madison, WI 53707-7931 ET-2119 (REV 10/13) TABLE OF CONTENTS INTRODUCTION... 2 VESTING REQUIREMENTS... 2 WISCONSIN RETIREMENT SYSTEM... 3 Retirement Benefits... Placement of Children With Relatives STATE STATUTES SERIES Current Through July 2013 Placement of Children With Relatives In order for States to receive Federal payments for foster care and adoption assistance, Federal law under title IV-E APPLICATION FOR A U.S. PASSPORT APPLICATION FOR A U.S. PASSPORT PLEASE DETACH AND RETAIN THIS INSTRUCTION SHEET FOR YOUR RECORDS I applied: Place: Date: IMPORTANT NOTICE TO APPLICANTS WHO HAVE HAD A PREVIOUS U.S. PASSPORT BOOK AND/OR New Jersey Sales and Use Tax EZ Telefile System Instructions New Jersey Sales and Use Tax EZ Telefile System Instructions (Forms ST-50 Quarterly Return and ST-51 Monthly Remittance Statement) Do NOT Use for 3rd Quarter 2006 Filing Forms ST-50/51 by Phone Complete Who May Adopt, Be Adopted, or Place a Child for Adoption? STATE STATUTES Current Through January 2012, Be Adopted, or Place a Child for Adoption? For an adoption to take place, the person available to be adopted must be placed in the home of a person or persons A Guide to Naturalization A Guide to Naturalization M-476 (rev. 03/12) Table of Contents Welcome What Are the Benefits and Responsibilities of Citizenship? Frequently Asked Questions Who Is Eligible for Naturalization? Table of
http://docplayer.net/4659-Application-for-automatic-extension-of-time-to-file-u-s-individual-income-tax-return.html
CC-MAIN-2018-09
en
refinedweb
Why can't I move text when I select it?dylanlolwut Jul 22, 2015 8:55 AM In the middle of making a typography clip. Everything has been going fine for about an hour up until now When I create new text and try drag it to place it somewhere else on the screen it just warps the text as I drag the mouse, why is this and how do I stop it? Also, is there a way of creating new text without have to exit out of the text tool? I usually select the text tool, type whatever it is I need to, then select another tool like the brush to exit the text tool then click back on to it to write a new layer of text. It's quite annoying to have to do it everytime, I'm sure theres a way to to write a new text layer without having to deselect the text tool. Apologies if it doesn't make sense..Not the best at explaining things. Thank you in advance 1. Re: Why can't I move text when I select it?Dave LaRonde Jul 22, 2015 9:44 AM (in response to dylanlolwut) More than anything else.... more than any description or piece of advice someone may give you... you need this: Getting started with After Effects (CS4, CS5, CS5.5, CS6, & CC) Blow it off at your peril. 2. Re: Why can't I move text when I select it?Rick Gerard Jul 22, 2015 11:18 AM (in response to Dave LaRonde) Improper use of the selection tool. You are grabbing a corner and scaling instead of grabbing the entire text and moving. If all else fails just use the position controls in the timeline. 3. Re: Why can't I move text when I select it?Warren Heaton Jul 22, 2015 1:07 PM (in response to dylanlolwut) Hi Dylan: Sounds like you are experiencing the difference in behavior between having a text layer actively selected or passively selected. As such, you just need to complete the text entry before clicking with the Type Tool again. As I'm sure you've noticed, if you press "Enter" while entering text you get a carriage return (getting a new line of text in the current text layer). If you're on Windows, try pressing "Control + Enter" to complete the text entry. If you're on Mac, try pressing "Enter" instead of "Return" (on a standard Mac keyboard, this is fn + Return; on an extended Mac keyboard, it's the "Enter" key that's grouped with the numeric keypad keys). The text layer will still be selected, but you'll be free to click and create another type layer as well use press other keys for keyboard shortcuts (like the arrow keys to nudge the type layer or the "v" key to get back to the Selection tool). -Warren 4. Re: Why can't I move text when I select it?dylanlolwut Jul 23, 2015 2:44 AM (in response to Warren Heaton) thank you for the command! I knew there was a way to do, just didn't know how lol. Still having the same problem when trying to drag the text somewhere, it just warps it making it bigger or flips it. I can't see the corner markers on the text when I try to select it, even when I select the anchor point tool I can't see the centre. Did I somehow accidentally hit a command where all of the guideline points are hidden, or is it just my lack of knowledge on after effects making this stressful for me? Also, is it by default that the point on each text is near the bottom left hand corner? I remember yesterday I had to drag them all one by one to the centre point. 5. Re: Why can't I move text when I select it?Warren Heaton Jul 23, 2015 10:26 AM (in response to dylanlolwut) ***View Options Sounds like you want to check your current settings in the View Options for the Comp. To set your View Options: - Select the Comp panel. - Choose View > View Options... Specific to what you're experiencing, make sure that "Layer Controls" and "Handles" are checked in the View Options dialog box. With those two enabled, you should be able to easily see the Handles that you're unintentionally clicking and dragging right now (they're always there, even when hidden from view). You might also find it helpful to zoom in before you click and drag your text layer to move it. If you press period (".") on the standard keyboard, you'll zoom into the Comp window. If you press comma (","), you'll zoom out of the Comp window. You can also use the Photoshop/Illustrator/InDesign shortcuts for zooming in and out: "command =" and "command -", Mac; "control =" and "control -", Windows. ***Paragraph Penal Yes, text aligns to the left by default (so, the anchor point appears at the left edge of the baseline). In the Paragraph panel, you can choose Left align text, Center text, or Right align text. AE should remember the most recently used settings the next time you create a text layer. 6. Re: Why can't I move text when I select it?Dave LaRonde Jul 23, 2015 10:52 AM (in response to dylanlolwut) dylanlolwut wrote: ...is it just my lack of knowledge on after effects making this stressful for me? That would be the reason. AE doesn't behave like any other multimedia application you may have used previously. You need your basic schoolin', or you suffer frustration.
https://forums.adobe.com/thread/1906534
CC-MAIN-2018-09
en
refinedweb
FAQs Search Recent Topics Flagged Topics Hot Topics Best Topics Register / Login Nelson Nadal Ranch Hand 170 104 Threads 0 Cows since Jun 06, 2002 (170/100) Number Threads Started (104 (170/10) Number Threads Started (104/10) Number Likes Received (0/3) Number Likes Granted (0/3) Set bumper stickers in profile Set signature in profile Set a watch on a thread Save thread as a bookmark Create a post with an image (0/1) Moderation Tools Recent posts by Nelson Nadal "Learn java development book" question Are there any existing patterns we can apply in the development for this new device or environment? How steep is the learning curve from ordinary java programmer? Thankd show more 3 years ago Android Learnnowonline Is the site official, true, authentic (learnnowonline.com)? I received email from you promoting the site, just want to make sure if this is really coming from you, thanks. show more 4 years ago Ranch Office 2 javac.exe (also in path) does it generate cannot find symbol error?. show more 5 years ago Beginning Java Web Services Books 9 yrs ago... is it worthwhile to read as a follow up... according to WEB SERVICES FAQ... JAX-RPC, JWSDP, Axis 1, Apache SOAP are now obsolete... and those are discussed on those books I mentioned. That's why I am asking if I am just wasting my time reading it or at least if I will gain something as a starter to web services, thanks.. show more 5 years ago Web Services Web Services Books 9 yrs ago... is it worthwhile to read I got books about Java and Web Services (below) XML programming: Web Application and WEb Srvices with JSP, XML Web Services Professional Projects, Pro XML Dev w/Java technology, Building Web Services with Java but it published around 2003, 2002.... is it worthwhile to read and learn from it or I am just wasting my time? Many thanks. Nelson show more 5 years ago Web Services variable to initialize not working? I'm just newbie in JSF and I was stuck in this code for the whole day. I've got a simple JSP program where you can put on a textfield and guess the generated random number, it has a link where you can reload (to generate new random number and initialize the i variable as number of attemps made) . When I use the navigational case xml in faces-config.xml and use the class in CODE A, the program is working fine. But when I modified the the class and make another class as listener the i variable and generatedNumber is not initializing anymore. I almost ran out of solution. Please help. the jsp program simply runs like this. in start of prgram, the class will generated a random number (0-9). Then I will put my number to guess. If not correct i variable will increment by one (this is the number of attempts made) In case I click the 'Reload' link, there should be another random number and i variable shld go to 0 again. The problem: When I click 'Reload' link its not doing another random number, and i variable never return to 0 instead the old random number shows and i variable continues to increment. Other story: The CODE A, this is working fine. When I click' Reload' in my JSP, its generated new number and variable i initializes again. Then I changed it as listener, then the problem (I stated above) occurs. I hope my story is clear, I really stuck in this problem, thanks and appreciate your help. If you need more clearer explanation please just tell me to elaborate more. CODE A package game; import java.util.*; import javax.faces.context.FacesContext; import javax.servlet.http.HttpSession; public class GameBean { /*Property declaration*/ private String userNumber; private String result; private String attempt; private int generatedNumber; int i=0; public GameBean() { /*Generate a random number*/ generatedNumber = (int)(Math.random()*10); } /*Accessor method for user number*/ public String getUserNumber() { return userNumber; } /*Mutator method for user number*/ public void setUserNumber(String userNumber) { this.userNumber=userNumber; } /*Accessor method for result*/ public String getResult() { return result; } /*Mutator method for result*/ public void setResult(String result) { this.result=result; } /*Accessor method for attempt*/ public String getAttempt() { return attempt; } /*Mutator method for result*/ public void setAttempt(String attempt) { this.attempt=attempt; } /*Implement the play() method*/ public void play() { i++; int tempNumber=Integer.parseInt(userNumber); /*If user specified number is equal to generated number*/ if (tempNumber==generatedNumber) { /*Set result and attempt properties*/ setResult("Congratulation! You specified "+userNumber+" and your number matches with our number."); setAttempt("You have attempted "+i+" times"); } /*If user specified number is less than generated number*/ else if(tempNumber<generatedNumber) { /*Set result and attempt properties*/ setResult("Sorry! You specified "+userNumber+", but your number does not match with our number. Enter a higher number."); setAttempt("You have attempted "+i+" times"); } /*If user specified number is greater than generated number*/ else if(tempNumber>generatedNumber) { /*Set result and attempt properties*/ setResult("Sorry! You specified "+userNumber+", but your number does not match with our number. Enter a lower number."); setAttempt("You have attempted "+i+" times"); } } public void load() { /*Invalidate session*/ FacesContext context = FacesContext.getCurrentInstance(); HttpSession session = (HttpSession) context.getExternalContext().getSession(false); session.invalidate(); } } CODE B package game; import java.util.*; import javax.faces.event.AbortProcessingException; import javax.faces.event.ActionEvent; import javax.faces.event.ActionListener; import javax.faces.context.FacesContext; import javax.faces.el.ValueBinding; import javax.faces.FactoryFinder; import javax.servlet.http.HttpSession; import javax.faces.application.ApplicationFactory; import javax.faces.application.Application; public class GameListener implements ActionListener { private String userNumber; private String result; private String attempt; private int generatedNumber; private ValueBinding resultBinding, attemptBinding; private int i; /*Class constructor*/ public GameListener() { /*Generate single digit random number*/ generatedNumber = (int)(Math.random()*10); } public void processAction(ActionEvent event) throws AbortProcessingException { /*Retrieve the command ID that generates an event*/ String command= event.getComponent().getId(); /*If button generates event*/ if(command.equals("submit")) { i++; /*Generate a random number*/ String current = event.getComponent().getId(); /*Retrieve the FacesContext object of the current application*/ FacesContext facesContext = FacesContext.getCurrentInstance(); /*Retrieve end user specified number*/ userNumber =(String)getValueBinding("#{gameBean.userNumber}").getValue(facesContext); /*Convert number to primitive int type*/ int tempNumber=Integer.parseInt(userNumber); /*Retrieve ValueBinding objects for the result and attempt properties*/ resultBinding= getValueBinding("#{gameBean.result}"); attemptBinding=getValueBinding("#{gameBean.attempt}"); /*if user specified number is equal to generated number*/ if (tempNumber==generatedNumber) { /*Set result and attempt properties*/ result="Congratulation! You specified "+userNumber+" and your number matches with our number."; resultBinding.setValue(facesContext, result); attempt="You have attempted "+i+" times"; attemptBinding.setValue(facesContext, attempt); } /*If user specified number is less than generated number*/ else if(tempNumber<generatedNumber) { /*Set result and attempt properties*/ result="Sorry! You specified "+userNumber+", but your number does not match with our number. Enter a higher number."; attempt="You have attempted "+i+" times"; resultBinding.setValue(facesContext, result); attemptBinding.setValue(facesContext, attempt); } /*If user specified number is greater than generated number*/ else if(tempNumber>generatedNumber) { /*Set result and attempt properties*/ result="Sorry! You specified "+userNumber+", but your number does not match with our number. Enter a lower number."; attempt="You have attempted "+i+" times"; resultBinding.setValue(facesContext, result); attemptBinding.setValue(facesContext, attempt); System.out.println(attempt+" inside greater"); } } else if (command.equals("link")) { /*Invalidate session*/ FacesContext context = FacesContext.getCurrentInstance(); HttpSession session = (HttpSession) context.getExternalContext().getSession(false); session.invalidate(); i=0; } } /*Method that returns a ValueBinding object for a particular value binding expression*/ private static ValueBinding getValueBinding(String valueRef) { /*Create an ApplicationFactory object*/ ApplicationFactory factory = (ApplicationFactory)FactoryFinder.getFactory(FactoryFinder.APPLICATION_FACTORY); /*Obtain a Application object that represents the JSF application*/ Application application = factory.getApplication(); /*Call the createValueBinding() method to create a ValueBindin object*/ return application.createValueBinding(valueRef); } } show more 9 years ago JSF CSS in JSF yes you're right, this is only matter of testing or familiriazation of generated html by JSF custom tag. Thanks again. show more 9 years ago JSF CSS in JSF. Thanks. show more 9 years ago JSF Reviewing for SCJD I'm a web developer (passed SWCD) but wanted to get SCJD too. I 've SCJD Exam with J2SE 5, but to be honest I got lost always in the middle of Thread topic. I knew generally how it works but the details makes me always sick. I always restart that topic to understand it. So I cant move on. I dont know if there is another way around to have a good look on SJCD. Maybe I just need encouragement or more advice how to approach this kind of review. Am I just going to code, compile and run the provided codes here first and then later just read the explanation. Thanks, just confuzed here. show more 9 years ago Developer Certification (OCMJD) what is nill? Thanks a lot, i thought this a new reserved keyword. show more 9 years ago Programmer Certification (OCPJP) When is the class that extends ActionForm executed Thank you very much for prompt reply, will read more on this. Till next time. show more 9 years ago Struts what is nill? Here is the part of the code from a book titled Web Development:Using Struts by instantCode public class SearchAction extends Action { public ActionForward execute(ActionMapping mapping, ActionForm form, HttpServletRequest request, HttpServletResponse response) { com.shoppingcart.SearchForm SearchForm = (com.shoppingcart.SearchForm) form; /* Retrieves the category and keyword from the search page */ String category=SearchForm.getCategory(); String keyWord=SearchForm.getKeyword(); /* Verifies wheather the proper category and keyword is submitted */ if(category.equals("nill") && keyWord.length()<3) { SearchForm.setStatus("Please specify a valid search criteria. A keyword must be of 3 or more characters."); request.setAttribute("searchForm",SearchForm); return mapping.getInputForward(); } else show more 9 years ago Programmer Certification (OCPJP) what is nill? Just want to know what is nill? is it the same as null? Thanks. show more 9 years ago Programmer Certification (OCPJP) When is the class that extends ActionForm executed I have a login.jsp that when you submit the action is /login it has a LoginAction.java that has a code... return mapping.findForward("success"); in struts-config.xml <action path="/login" type="com.shoppingcart.LoginAction" attribute="com.shoppingcart.LoginFormBean" name="LoginFormBean" scope="request" input="/Login.jsp"> <forward name="success" path="/Search.jsp"/> </action> ... so when it forwards to "/Search.jsp" my question is... will it execute the SearchForm.java indicated also in struts-config.xml (below)? <!-- Action mapping for the search action --> <action path="/search" type="com.shoppingcart.SearchAction" attribute="com.shoppingcart.SearchForm" name="SearchForm" scope="request" input="/Search.jsp"> </action> What I only know is, it will only execute if submit a form and the action in the form is action="/search". Or it will still execute even only you forward it to "/Search.jsp"? Please enlighten me, thanks show more 9 years ago Struts logic:present name tag Im just new to struts, though I've read some documentation with <logic resent name> tag I still dont understand it... My question is is the name attribute referring to ANY object variable present in the session? Thanks. show more 9 years ago Struts
https://coderanch.com/u/31083/Nelson-Nadal
CC-MAIN-2018-09
en
refinedweb
#include <hallo.h> * Helen Faulkner [Mon, Aug 02 2004, 08:28:01PM]: > So I've spent most of today battling my way through the New Maintainers > Guide, trying to work out how to package some very simple things. I > *think* I'm making headway... Would it appease you if the introduction would contain the sentense: "all following names and role descriptions are considered to be gender neutral" or something like that? Reason: Reading he/she, him/her, etc. again and again is annoying. Unfortunately, often there is no gender neutral description if you don't wish to switch to lawyer language. Try reading this: | "There is a checklist of what a NM (person A or just A here and later) | has to do. Basically, A needs a GPG key signed by a | developer, A has to answer some Philosophy and Procedures questions | and in the Tasks and Skills test. A has to show that A has the | experience to be a good Debian developer. You should advocate | someone when you think that A is ready to be a developer -- when A | has the required skills and when A has been involved with the project | for some time. Just ask yourself if you want to see A in Debian -- if | you think A should be a Debian developer then go ahead and recommend | A. Eduard. -- <stockholm> Overfiend: why dont you flame him? you are good at that. <Overfiend> I have too much else to do.
https://lists.debian.org/debian-women/2004/08/msg00016.html
CC-MAIN-2018-09
en
refinedweb
This is a short tutorial to making some progress bars in the command prompt. I will be using C# because I like to enhance my progress bars with color, however baring that you can easily take these tactics int C++, Java, etc... Components First lets take a look at some components of a progress bar. You will have 3 independent variables, and by independent I mean that they do not rely on another variable to be derived. - total - The total will be your goal (i.e. if I was counting regular season games in the NFL my total will be 17 ) - progress - The progress will be how far along you are in reaching your goal (i.e. Week 5 will be progress = 5) - size - The size is how long your progress bar will be in the prompt (Using standards, 80 characters is good) - percent done - how far along - percent left - etc... Math Lets start simple and derive our first and quite important variable of percent done! percentDone = progress / total Since we are doing math like this percentDone will be between 0 and 1, if you want percentDone*100 will give you your actual percent, but for now im going to keep the decimal form. Proportions Why are these important, because the save you work if you understand them! Quote A small anecdote: My roommate and each had to make a left to right speaker slider bar for a music player in flash for our homework. Because I understood proportions I was able to quickly evaluate what value the volume was at and where to display the slider (slider bars very similar to progress bars in nature), my roommate didn't and he had to spend half the night trying to find the best parabolic equation to evaluate where his sliders were. Once I heard the head banging against the desk I was able to help him and after a few choice words at the programming gods we both finished our projects successfully My point is proportions are helpful! Lets take your room, and a football field , and you walk across 10% of this distance of each the distance traveled will be different, but they will proportional! So lets take your room, and its 20 feet and someone tells you to walk the same percentage that the football player ran from the endzone to the first 35 yard line marker (Ugghh word problems). We can use a proportion to solve it: Let the variable X be the distance I need to travel across my room x out of 20 feet will be the same 35 yards ran out of 100 yards x/ 20 =35 /100 x =(35 /100 )20 x = 7 feet However take not that you can use cross multiplication as well x/ 20 =35 /100 x ( 100 ) = 20 ( 35 ) x = ( 20 ( 35 ) ) / 100 Now you may be saying, hey the second way took longer. True that time it did, however wouldn't it be convenient if it was simply just variables, and we never had to think about it again? x / size = progress / total Congrats! you have found X, or the number of marks to fill in the bar of any given size, with any given progress out of any given total! The Code So go ahead and make a new Console Project in Visual Studio, and add a class ProgressBar to the solution (or however you want to follow along, by no means do you actually need visual studio, just need a compiler really). And I am going to do this following OOP practices as its easier to demonstrate, however all the tricks can and are often done sequentiality as well. The Variables I was talking about components of progress bars earlier, and that sounds a lot like variables of a class for ProgressBars (go figure, right?). So lets go ahead and place are variables in the class like so: public class ProgressBar{ public const int DEFAULT_SIZE = 80; private int size; private double progress, total; Now your first question may be why use doubles and ints, why not just pick one? Well I used ints for the size because there is no way only displaying a portion of a character in the command prompt so we only really a whole number value. The progress and total however we want a bit more precision to make it more versatile for the user and so when we divide the progress from the total, we can maintain that precision (integer division will truncate too much). Also I just have a default number there to remind myself the standard prompt is 80 characters wide. Notice i didn't write down percentageDone. That is because it is a derived variable and every time something changes i will have to call a method to update that variable, so I'm just going to leave it as that, a method (well technically a Property, but we'll get to that). Constructors Well in a default constructor we are going to need to set some of the other variables to default values, however we can overload th constructor to add some more ways to build a ProgressBar in the future. A good default for progress is at 0, since a process is usually started at the beginning. The total is unspecified and should probably be 100, since its the easiest number to work with, and we will make the default size 80 because of the command prompt's size. I've taken the liberty to write a few, constructors to make the ProgressBar easier to initially set up depending on the situation public ProgressBar() { this.size = DEFAULT_SIZE-2; progress = 0; total = 100; } public ProgressBar( double total ) { this.size = DEFAULT_SIZE-2; progress = 0; this.total = total; } public ProgressBar( double total, int size ) { this.size = size -2; progress = 0; this.total = total; } public ProgressBar( double progress, double total, int size ) { this.size = size - 2; this.progress = progress; this.total = total; this.progress = progress; } When I create a Progress bar I like to make it look better by putting caps on the ends, this is why the size is set to inputted size -2. For example a basic progress bar of size 10 and then my progress bar of size 12: Basic: ***------- Mine: [***-------]
http://www.dreamincode.net/forums/topic/147429-progress-bars-in-command-prompt-part-a/
CC-MAIN-2018-09
en
refinedweb
Physician Utilization, Risk-Factor Control, and CKD Progression Among Participants in the Kidney Early Evaluation Program (KEEP) - Leonard Daniel - 4 months ago - Views: Transcription 1 KEEP 2011 Physician Utilization, Risk-Factor Control, and CKD Progression Among Participants in the Kidney Early Evaluation Program (KEEP) Claudine T. Jurkovitz, MD, MPH, 1 Daniel Elliott, MD, MSCR, 1 Suying Li, PhD, 2 Georges Saab, MD, 3 Andrew S. Bomback, MD, MPH, 4 Keith C. Norris, MD, 5 Shu-Cheng Chen, MS, 2 Peter A. McCullough, MD, MPH, 6 and Adam T. Whaley-Connell, DO, MSPH, 7 on behalf of the KEEP Investigators* Background: Chronic kidney disease (CKD) is a well-known risk factor for cardiovascular mortality, but little is known about the association between physician utilization and cardiovascular disease risk-factor control in patients with CKD. We used data from the National Kidney Foundation s Kidney Early Evaluation Program (KEEP) to examine this association at first and subsequent screenings. Methods: Control of risk factors was defined as control of blood pressure, glycemia, and cholesterol levels. We used multinomial logistic regression to examine the association between participant characteristics and seeing a nephrologist after adjusting for kidney function and paired t tests or McNemar tests to compare characteristics at first and second screenings. Results: Of 90,009 participants, 61.3% had a primary care physician only, 2.9% had seen a nephrologist, and 15.3% had seen another specialist. The presence of 3 risk factors (hypertension, diabetes, and hypercholesterolemia) increased from 26.8% in participants with CKD stages 1-2 to 31.9% in those with stages 4-5. Target levels of all risk factors were achieved in 7.2% of participants without a physician, 8.3% of those with a primary care physician only, 9.9% of those with a nephrologist, and 10.3% of those with another specialist. Of up to 7,025 participants who met at least one criterion for nephrology consultation at first screening, only 12.3% reported seeing a nephrologist. Insurance coverage was associated strongly with seeing a nephrologist. Of participants who met criteria for nephrology consultation, 406 (5.8%) returned for a second screening, of whom 19.7% saw a nephrologist. The percentage of participants with all risk factors controlled was higher at the second screening (20.9% vs 13.3%). Conclusion: Control of cardiovascular risk factors is poor in the KEEP population. The percentage of participants seeing a nephrologist is low, although better after the first screening. Identifying communication barriers between nephrologists and primary care physicians may be a new focus for KEEP. Am J Kidney Dis. 59(3)(S2):S24-S by the National Kidney Foundation, Inc. INDEX WORDS: Cardiovascular disease risk factors; chronic kidney disease; nephrologist care; primary care. Chronic kidney disease (CKD) is a well-known risk factor for cardiovascular mortality and morbidity. 1,2 Cardiovascular disease (CVD) risk factors, such as hypertension, diabetes, and dyslipidemia, are highly prevalent and poorly controlled in patients with CKD. 3 Recent reports suggest that of patients with an estimated glomerular filtration rate (egfr) 60 ml/min/1.73 m 2, only 37% of those with known hypertension achieved blood pressure control to a level 130/80 mm Hg, 4 and low-density lipoprotein cholesterol level was within the normal range for 17.9%. 3 Most people with early-stage CKD (egfr 60 ml/min/1.73 m 2 with established proteinuria) are managed exclusively by primary care providers, with rates of nephrologist comanagement increasing as CKD progresses. 5-7 The National Kidney Foundation s Kidney Disease Outcomes Quality Initiative (KDOQI) guidelines recommend referral to and/or comanagement by nephrologists for patients with CKD stage 4, macroalbuminuria, hyperkalemia (potassium 5.5 meq/l), or resistant hypertension or for patients at increased risk of CKD progression From the 1 Christiana Care Health System, Center for Outcomes Research, Newark, DE; 2 Chronic Disease Research Group, Minneapolis Medical Research Foundation, Minneapolis, MN; 3 Renal Division, Washington University School of Medicine, St. Louis, MO; 4 Department of Medicine, Division of Nephrology, Columbia University College of Physicians and Surgeons, New York, NY; 5 Charles R. Drew University of Medicine and Science, Los Angeles, CA; 6 St. John Providence Health System, Providence Park Heart Institute, Novi, MI; and 7 Harry S Truman VA Medical Center and the Department of Internal Medicine, Division of Nephrology and Hypertension, University of Missouri-Columbia School of Medicine, Columbia, MO. * A list of the KEEP Investigators appears in the Acknowledgements. Received August 23, Accepted in revised form November 3, Address correspondence to Claudine T. Jurkovitz, MD, MPH, Christiana Care Center for Outcomes Research, 131 Continental Dr, Ste 202, Newark, DE christianacare.org 2012 by the National Kidney Foundation, Inc /$36.00 doi: /j.ajkd S24 2 Physician Care and Risk-Factor Control Timely nephrologist referral has been associated with improved outcomes, including delayed progression to end-stage renal disease, decreased mortality before hemodialysis therapy initiation, and improved first-year survival on hemodialysis therapy. 11,12 However, little is known about the interplay of physician utilization, CVD risk-factor control, and kidney disease progression in people screened for CKD. We used data from the Kidney Early Evaluation Program (KEEP), a community-based health screening program that enrolls participants at high risk of kidney disease, to: (1) assess CVD risk-factor control and physician utilization at baseline, (2) determine predictors of nephrology consultation in participants with identified indications for consultation or referral, and (3) explore CKD progression, CVD risk-factor control, and physician utilization in participants with recurrent KEEP screenings. METHODS KEEP Screening Procedures KEEP is a free community-based health screening program that targets populations at high risk of kidney disease. KEEP recruitment methods have been described previously. 13,14 Eligible participants are 18 years or older with self-reported diabetes or hypertension or a first-degree relative with diabetes, hypertension, or kidney disease. People with kidney transplants or receiving regular dialysis therapy are excluded. After providing informed consent, participants complete the screening questionnaire, which consists of sociodemographic information, personal and family health history, smoking status, and information about participant primary care and specialty physicians. Height, weight, blood pressure, plasma glucose, microalbuminuria, and albumin-creatinine ratio (ACR) are measured. Blood samples are drawn from consenting participants and sent to a central laboratory. Study Population Because lipid measurements at KEEP screenings started in 2005, we limited our study population to participants enrolled in for whom measurements of egfr and albuminuria and information about diabetes, hypertension, and cholesterol were available. Because measurement of low-density lipoprotein cholesterol was not available until 2008, we used total cholesterol level to assess hypercholesterolemia. Definition of Variables Physicians Participants who had seen a physician in the past year were considered to have a physician; those not meeting this time criterion were considered not to have a physician. A primary care practitioner was defined as a family practice physician, internist, obstetrician/gynecologist, gerontologist, nurse practitioner, or physician assistant. Seeing a nephrologist was defined as nephrologist consultation/care with or without a primary care practitioner or another specialist (cardiologist or endocrinologist). Comorbid Conditions Diabetes was defined as a history of diabetes (self-report or retinopathy), use of diabetes medications, or newly diagnosed diabetes (fasting blood glucose 126 mg/dl or nonfasting blood glucose 200 mg/dl) in the absence of self-report or medication use. Hypertension was defined as history of hypertension (selfreport), use of hypertension medications, or newly diagnosed hypertension 15 defined as. Hypercholesterolemia was defined as receiving medication for high cholesterol level or total cholesterol level 200 mg/dl. CVD was defined as self-reported history of heart angina, heart attack, heart bypass surgery, heart angioplasty, stroke, heart failure, abnormal heart rhythm, or coronary heart disease. Body mass index was calculated as weight (in kilograms) divided by height (in meters) squared. Kidney Function Serum creatinine was measured and calibrated to the Cleveland Clinic Research Laboratory as previously described. 16 GFR was estimated using the CKD Epidemiology Collaboration (CKD-EPI) equation. 17 Microalbuminuria was defined as a spot urine ACR 30 mg/g, and macroalbuminuria as ACR 300 mg/g. Kidney function stages were defined according to egfr levels and KDOQI guidelines as follows 9 : normal kidney function, egfr 60 ml/min/1.73 m 2 and ACR 30 mg/g; CKD stages 1-2, egfr 60 ml/min/1.73 m 2 and ACR 30 mg/g; CKD stage 3, egfr 60 and 30 ml/min/1.73 m 2 ; CKD stage 4, egfr 30 and 15 ml/min/1.73 m 2 ; and CKD stage 5, egfr 15 ml/min/1.73 m 2. Outcomes Control of all risk factors was defined as blood pressure control (systolic blood pressure 130 mm Hg and diastolic blood pressure 80 mm Hg if history of diabetes or CKD; otherwise, systolic blood pressure 140 mm Hg and diastolic blood pressure 90 mm Hg), blood glucose control (fasting blood glucose 126 mg/dl, nonfasting blood glucose 200 mg/dl, and hemoglobin A 1c 7%), and cholesterol control ( 200 mg/dl). In addition to CKD stage 4 or higher, possible indications for nephrology consultation/referral were macroalbuminuria and risk factors for progression, such as type 2 diabetes with microalbuminuria in patients with egfr 60 ml/min/1.73 m 2. 8 Castro et al 8 use diabetic retinopathy as a marker of CKD progression in patients with CKD stage 3, but we could not because of inconsistency in its collection in KEEP; we used diabetes with egfr 60 ml/min/ 1.73 m 2 instead. Likewise, we could not use hyperkalemia because it is not assessed in KEEP. Because medication and detailed clinical information are not collected, we could not infer about the presence of resistant hypertension. Statistical Analysis We used the Cochran-Armitage test of trend to analyze the distribution of participant characteristics according to CKD stages and 2 tests to evaluate the univariate association between type of physician and risk factors. We used logistic regression to examine the independent association between participant characteristics and all risk-factor control (dependent variable) and multinomial logistic regression for the independent association between participant characteristics and seeing a nephrologist (dependent variable) after adjusting for kidney function. Seeing a nephrologist was compared with seeing another physician or with not seeing a physician. To avoid decreasing the number of records used in the model because of missing data, we created an unknown category for each variable with missing data. Finally, we used paired t tests for continuous variables or McNemar tests for categorical vari- S25 3 Jurkovitz et al ables to compare participant characteristics at first and second screening. Data were analyzed using SAS, version 9.1 (). RESULTS Participant Population A total of 101,439 participants were enrolled in KEEP between 2005 and Exclusion of participants who had undergone kidney transplant or were receiving hemodialysis (n 272) and those with missing values for albuminuria, egfr, hypertension, diabetes, or cholesterolemia (n 11,158) resulted in a final cohort for analysis of 90,009. Of 90,009 participants, 77.2% had no CKD, 8.0% had CKD stages 1-2, 13.9% had stage 3, and 0.9% had stages 4-5 (Table 1). Approximately one-fifth of the study population had not seen a physician in the last year; in the entire cohort, 61.3% had a primary care physician only, 2.9% had seen a nephrologist, and 15.3% had seen another specialist. Of participants with CKD stages 4-5, only 35.3% had seen a nephrologist. Participants with advanced CKD (stages 3-5) were older and more likely to be white, have insurance, and have 12 years or fewer of education. CVD Risk-Factor Control and Physician Utilization Participants with advanced CKD were more likely to have CVD, hypertension, and hypercholesterolemia (Table 1). The presence of 3 risk factors (hypertension, diabetes, and hypercholesterolemia) was more prevalent with increasing stages of CKD. The rate of control was low; only 8.4% achieved target levels of all risk factors (blood pressure, glycemia, and cholesterolemia). Participants with CKD stages 1-2 were least likely to achieve target levels of all risk factors (6.0%), and those with CKD stages 4-5 were slightly more likely (9.0%). CVD risk-factor control varied little based on physician utilization; 7.2% of participants without a physician, 8.3% of those seeing only a primary care physician, 9.9% of those seeing a nephrologist, and 10.3% of those seeing another specialist achieved target levels of all risk factors. However, nephrologists and specialists were more likely than primary care physicians to see participants with 3 risk factors (28.3% and 30.9%, respectively, vs 17.3%; P 0.001). Results of multivariable analysis confirmed these results (Table 2). After adjusting for demographic and clinical characteristics, participants with CKD stages 1-2 remained 40% less likely to achieve target levels of all risk factors than participants without CKD. CVD risk-factor control was more likely for participants who had seen a physician in the last year than for those who had not, regardless of physician type. Odds ratios were 1.22 (95% confidence interval [CI], ) for primary care physician, 1.48 (95% CI, ) for specialist, and 1.52 (95% CI, ) for nephrologist. Participants with hypertension and hypercholesterolemia were respectively 22% and 70% less likely to achieve target levels, and participants with diabetes were almost 60% more likely. Consultation/Referral Indications and Physician Utilization A total of 7,025 participants (7.8%) met at least one criterion for nephrology consultation/referral at baseline (Table 3). Of these, 12.3% reported seeing a nephrologist; 50.1%, a primary care physician only; and 29.1%, another specialist. As expected, participants with CKD stages 4-5 (egfr 30 ml/min/1.73 m 2 ) were most likely to report seeing a nephrologist (35.3%) compared with 11.6% of those with macroalbuminuria and egfr 30 ml/min/1.73 m 2 and 12.4% of diabetic participants with microalbuminuria and egfr of ml/min/1.73 m 2. Results of the multivariable model assessing the likelihood of seeing a nephrologist versus seeing another physician and versus seeing no physician in participants who met criteria for consultation/referral are listed in Table 4. Because 25.7% of the data were missing, we created an unknown category for each variable with missing data. For both analyses, seeing a nephrologist was associated strongly with decreasing egfr and increasing albuminuria. After controlling for these factors, several clinical and demographic characteristics also were associated with seeing a nephrologist. Compared with seeing another physician, predictors of seeing a nephrologist were male sex, other race (includes Asians and Pacific Islanders), insurance coverage, more than 12 years of education, family history of kidney disease, and CVD. Participants with diabetes were less likely to see a nephrologist than another physician. Compared with not seeing any physician, the strongest predictor was insurance coverage; this effect was even stronger than effects of egfr of ml/min/1.73 m 2 and albuminuria. Other predictors that remained significantly associated with seeing a nephrologist were male sex, more than 12 years of education, family history of kidney disease or hypertension, CVD, and hypertension. Native Americans were more likely to not have a physician. Physician Utilization and CVD and Kidney Disease Progression Risk-Factor Control at Subsequent Screening Of participants with at least one indication for consultation/referral, 406 (5.8%) returned for a second KEEP screening (Table 5). The average interval S26 4 Physician Care and Risk-Factor Control Table 1. Characteristics of KEEP Participants, CKD All None Stages 1-2 Stage 3 Stages 4-5 P a No. 90,009 69,492 7,166 12, Medical care No physician Primary care b only Nephrologist with or without primary care Other specialists c with or without primary care Mean age (y) Age 65 y Men Race/ethnicity White African American Native American Other Hispanic Any insurance Education 12 y Smoking (prior or current) Family history Kidney disease Hypertension Diabetes History of CVD Mean BMI (kg/m 2 ) BMI 30 kg/m Risk factors Hypertension Diabetes Hypercholesterolemia risk factors d risk factors risk factor only All risk factors controlled e Note: Unless otherwise indicated, continuous variables are given as means; categorical variables are shown as percentages. Included KEEP participants with nonmissing values for egfr and albuminuria and information about diabetes, hypertension, and hypercholesterolemia status. Abbreviations: BMI, body mass index; CKD, chronic kidney disease; CVD, cardiovascular disease; egfr, estimated glomerular filtration rate; KEEP, Kidney Early Evaluation Program. a Test of trend. b Family practice physician, internist, obstetrician/gynecologist, gerontologist, nurse practitioner, or physician assistant. c Cardiologist or endocrinologist.). e In participants with at least one risk factor. Denominator: all participants with hypertension, diabetes, or hypercholesterolemia, as defined. between screenings was 1.55 years (median, 1.02 years). Compared with participants who met criteria for consultation/referral but did not return (n 6,619), those who returned were more likely to have a physician and to see a specialist (P 0.03). They were older (72.3 vs 69.3 years; P 0.001), more likely to be white (72.7% vs 63.5%; P 0.001) and to have insurance (92.1% vs 88.1%; P 0.02), and less likely S27 5 Jurkovitz et al Table 2. Characteristics Independently Associated With Control of All Risk Factors Variable OR (95% CI) P No physician 1.00 (reference) Primary care only 1.22 ( ) Nephrologist 1.52 ( ) Specialist a 1.48 ( ) Age 0.99 ( ) Men 1.08 ( ) Race/ethnicity White 1.00 (reference) African American 0.93 ( ) 0.01 Native American 0.71 ( ) Other 1.19 ( ) Hispanic 1.12 ( ) 0.02 Insurance coverage 0.96 ( ) 0.2 Unknown (n 2,765; 3.4%) 0.95 ( ) 0.5 Education 12 y b 1.03 ( ) 0.3 Unknown (n 1,093; 1.3%) 0.67 ( ) Family history Kidney disease 1.03 ( ) 0.5 Unknown (n 5,824; 7.1%) 1.04 ( ) 0.4 Hypertension 1.17 ( ) Unknown (n 6,063; 7.4%) 1.72 ( ) Diabetes 1.01 ( ) 0.8 Unknown (n 5,448; 6.6%) 0.99 ( ) 0.9 History of CVD 1.06 ( ) 0.07 Unknown (n 568; 0.7%) 1.39 ( ) 0.02 BMI 25 kg/m ( ) 0.3 Unknown (n 886; 1.1%) 0.86 ( ) 0.3 Hypertension 0.88 ( ) Diabetes 1.57 ( ) Hypercholesterolemia 0.30 ( ) CKD None 1.00 (reference) Stages ( ) Stage ( ) 0.6 Stages ( ) 0.3 Note: OR is for all risk factors controlled. Participants with at least one CVD risk factor (hypertension, diabetes, or hypercholesterolemia), n 82,313. C index Abbreviations: BMI, body mass index; CI, confidence interval; CKD, chronic kidney disease; CVD, cardiovascular disease; OR, odds ratio. a Cardiologist or endocrinologist. b Reference is 12 years or less. to smoke (35.2% vs 41.2%; P 0.02). They were more likely to have CVD risk factors (hypertension, diabetes, and hypercholesterolemia; 64.5% vs 55.6%; P 0.001) and CKD stage 3 (91.6% vs 80.8%; P 0.001) and less likely to have CKD stages 4-5 (6.9% vs 11.9%; P 0.002) and macroalbuminuria (5.4% vs 12.6%; P 0.001). The proportion of participants who saw a nephrologist increased from 11.6% to 19.7% (P 0.001) between screenings (Table 5). Participants were more likely to have all 3 CVD risk factors at the return visit (72.9% vs 64.5% at baseline; P 0.001), largely due to more diagnoses of hypercholesterolemia; however, the percentage of participants with all risk factors controlled was higher at the second than at the first screening (20.9% vs 13.3%; P 0.002). DISCUSSION We investigated CVD risk-factor control and physician utilization in KEEP participants and in the subset who returned for a subsequent screening. The major findings are: (1) generally poor risk-factor control and only modest improvement with advancing CKD, (2) low likelihood of nephrologist encounter despite clinical indications for consultation/referral at earlier CKD stages, (3) higher likelihood of a nephrologist visit after the first screening, and (4) improved CVD riskfactor control in returning participants. Hypertension, diabetes, and hyperlipidemia are highly prevalent in patients with end-stage renal disease or CKD. 1,3 Of National Health and Nutrition Examination Survey (NHANES) participants with egfr 60 ml/min/1.73 m 2, only 37% of those with known hypertension had normal blood pressure. 4 Likewise, both diabetes and hyperlipidemia control are poor in patients with CKD. 3 Secondary analyses of large clinical trials of statins for primary prevention of cardiovascular events show a beneficial effect in patients with CKD 18,19 ; however, physicians have been reluctant to prescribe statins for fear of secondary effects 20 and due to lack of efficacy in randomized controlled trials of hemodialysis patients. 21 As expected, we found that the prevalence of CVD risk factors increased with kidney disease severity. Risk-factor control is low (8.4%) in the KEEP population, possibly explaining the high rates of cardiovascular events and death reported previously. 22,23 Interestingly, participants with CKD stages 4-5 seem to have slightly better control of risk factors than those with less advanced CKD, possibly due to a larger proportion reporting nephrologist care. In the overall KEEP population, risk-factor control does not seem to depend on type of physician seen. However, nephrologists and other specialists are more likely to see patients with high levels of comorbidity, and controlling risk factors in such practice settings might be more difficult. Almost 8% of KEEP participants met criteria for nephrologist consultation/referral. This probably is an underestimate because we could not include participants with resistant hypertension or hyperkalemia. In NHANES, Castro and Coresh 8 found in patients with CKD stage 3 that 18.6% met one of these referral S28 6 Physician Care and Risk-Factor Control Table 3. Distribution of Medical Care by Referral Criteria Medical Care All No Physician Primary Care Only a Other Specialist b With or Without Primary Care Nephrologist With or Without Other Specialist or Primary Care P c No. 90,009 18,467 55,182 13,735 2,625 Criteria for nephrologist referral CKD stages Macroalbuminuria d at CKD stages Diabetes microalbuminuria e at CKD stage 3 1, Diabetes without albuminuria f at CKD stage 3 4, Any of these criteria 7, Note: Results are row percentages. For example, in participants with CKD stages 4-5, the percentage of participants who have no physician is 9.7. The denominator is number of participants with CKD stages 4-5. Categories are mutually exclusive. Abbreviations: ACR, albumin-creatinine ratio; CKD, chronic kidney disease. a Family practice physician, internist, obstetrician/gynecologist, gerontologist, nurse practitioner, or physician assistant. b Cardiologist or endocrinologist. c 2. d ACR 300 mg/g. e ACR of mg/g. f ACR 30 mg/g. criteria. Another possible reason for our lower prevalence is that we did not limit our analysis to participants with CKD stage 3. Only 12.3% of participants who met any referral criterion reported seeing a nephrologist. This low referral rate may be related to the low CKD awareness (10.0%) consistently reported in KEEP. 24 The referral rate increases to 19.7% at the second screening, which does not strongly support the notion that awareness increases nephrologist utilization. The decision to refer to a nephrologist depends on physician and participant factors, and one of the major goals of KEEP is to improve awareness of CKD in both these groups. Primary care practitioner awareness of the KDOQI guidelines is a critical factor in nephrology referral decisions. Although distinguishing awareness from motivation is challenging, several investigators have attempted to assess knowledge of these guidelines among physicians. Navaneethan et al 25 recently found that only 36.5% of primary care practitioners were aware of CKD guidelines and only 31.8% used CKD stages for referral. In a cross-sectional survey of internists, geriatricians, and nephrologists, regarding referral of older patients, investigators reported that 100% of surveyed nephrologists, 31.3% of internists, and 57.1% of geriatricians were aware of the KDOQI guidelines related to referral. 26 A subsequent study showed that primary care physicians with more than 10 years in practice were least likely to recommend referral of patients with CKD but more likely to express a desire for collaborative care, yet the differences were small (89% vs 82%). 27,28 General internists who were aware of existing guidelines were 14 times more likely to recommend referral. 27 In our analysis, after adjusting for kidney disease progression, participant factors associated with seeing a nephrologist included male sex, insurance coverage, more than 12 years of education, family history of kidney disease and CVD. Notably, participants with insurance coverage were nearly twice as likely to be referred to a nephrologist as those without insurance, compared with seeing another physician. These results are similar to results reported by other investigators, who found that patient characteristics such as age older than 65 years, female sex, and nonwhite race were significantly associated with nonreferral. 25 Although the small group of participants who returned for a second screening seems to be a highly selected population of older participants with better socioeconomic status, only 19.7% reported having seen a nephrologist. Nevertheless, KEEP seems to have been successful in encouraging a nephrology visit because this is a 70% increase from the first screening. KEEP is actively engaged in a longitudinal program, inviting previous participants to return for a repeated examination. These results suggest that rescreening, in addition to focusing on participants with criteria for CKD progression, should focus on the most vulnerable participants (no health insurance, minority race/ethnicity, and low level of education). Finally, a large percentage of KEEP participants who meet criteria for referral have seen a physician in the year preceding the first screening. Although KEEP provides the screening results to consenting participants physicians, lack of improvement or deteriora- S29 7 Jurkovitz et al Table 4. Model Predicting Nephrology Consultation in Participants Who Met Criteria for Referral Seeing Nephrologist vs Seeing Another Physician Seeing Nephrologist vs Not Seeing a Physician Variable OR (95% CI) P OR (95% CI) P Age 0.97 ( ) ( ) 0.8 Men 1.45 ( ) ( ) 0.05 Race/ethnicity White 1.00 (reference) 1.00 (reference) African American 0.94 ( ) ( ) 0.05 Native American 0.73 ( ) ( ) 0.04 Other 1.39 ( ) ( ) 0.5 Hispanic 0.78 ( ) ( ) 0.2 Insurance coverage 1.95 ( ) ( ) Unknown (n 342; 4.9%) 1.97 ( ) ( ) Education 12 y a 1.21 ( ) ( ) 0.02 Unknown (n 100; 1.4%) 0.68 ( ) ( ) 0.1 Family history Kidney disease 1.56 ( ) ( ) 0.03 Unknown (n 701; 10.0%) 1.12 ( ) ( ) 0.2 Hypertension 1.08 ( ) ( ) 0.02 Unknown (n 882; 12.6%) 1.27 ( ) ( ) 0.9 Diabetes 1.06 ( ) ( ) 0.5 Unknown (n 661; 9.4%) 1.03 ( ) ( ) 0.9 History of CVD 1.30 ( ) ( ) Unknown (n 42; 0.6%) 1.40 ( ) ( ) 0.9 BMI 25 kg/m ( ) ( ) 0.2 Unknown (n 53; 0.8%) 0.50 ( ) ( ) 0.3 Hypertension 1.25 ( ) ( ) Diabetes 0.71 ( ) ( ) 0.4 Hypercholesterolemia 1.08 ( ) ( ) 0.5 CKD egfr (reference) 1.00 (reference) egfr of ( ) ( ) egfr ( ) ( ) No albuminuria 1.00 (reference) 1.00 (reference) ACR of ( ) ( ) ACR ( ) ( ) Note: Results from multinomial logistic regression (n 7,025). Abbreviations and definitions: ACR, albumin-creatinine ratio (in mg/g); BMI, body mass index; CI, confidence interval; CKD, chronic kidney disease; CVD, cardiovascular disease; egfr, estimated glomerular filtration rate (in ml/min/1.73 m 2 ); OR, odds ratio. a Reference is 12 years or less. tion remains prevalent at the second screening. Communication barriers between primary care physicians and specialists should be assessed, as should barriers to guideline implementation. The definition of CKD based on a single egfr and ACR measurement, not on measurements over 3 months, is a limitation inherent in the cross-sectional design of KEEP, as is ascertainment of ACR as the only marker of kidney damage. This definition may lead to overestimating CKD prevalence in our study population because some individuals with acute changes in kidney function may have been misclassified. The small number of participants who met criteria for kidney disease progression and returned for a second screening is another serious limitation. Because this is a self-selected group likely highly motivated for care, selection bias may have been introduced, and the improvement in percentage of nephrologist visits and risk-factor control may be overestimated. In addition, because of the small numbers of participants, we could not assess the impact of physician visits on clinical outcomes. However, these results provide insight into the effectiveness of screening regarding participant referral. Finally, we could S30 8 Physician Care and Risk-Factor Control Table 5. Risk-Factor Control and CKD Progression in Participants Who Met Criteria for Nephrologist Referral and Returned for a Second KEEP Screening Met Criteria for Nephrologist Referral First KEEP Screening Second P a No. 7, Physician care No physician Primary care only b Nephrologist Other specialist c and primary care Mean age (y) Age 65 y Men d Race/ethnicity White d African American d Native American d Other d Hispanic d Any insurance Education 12 y Smoking (former or current) Family history of kidney disease d History of CVD Mean BMI (kg/m 2 ) BMI 30 kg/m Hypertension Diabetes Hypercholesterolemia Presence of risk factors e All risk factors controlled f CKD stages CKD stage Criteria for nephrologist referral CKD stages Macroalbuminuria g Diabetes microalbuminuria h at CKD stage Diabetes without albuminuria i at CKD stage Note: Unless otherwise indicated, values are percentages. Abbreviations: ACR, albumin-creatinine ratio; BMI, body mass index; CKD, chronic kidney disease; CVD, cardiovascular disease; KEEP, Kidney Early Evaluation Program. a Paired t test or McNemar test. b Family practice physician, internist, obstetrician/gynecologist, gerontologist, nurse practitioner, or physician assistant. c Cardiologist or endocrinologist. d Values were the same for both screenings.). f In participants with at least one risk factor. Denominator: all participants with hypertension, diabetes, or hyperlipidemia, as defined. g ACR 300 mg/g. h ACR of mg/g. i ACR 30 mg/g. S31 9 Jurkovitz et al assess parameters at only the screening and return screening; an analysis including interim data between these visits would likely further elucidate the nature of improvements (or lack thereof) in risk factors. In conclusion, we found that a large number of participants met criteria for referral to a nephrologist and that control of cardiovascular risk factors was poor in the KEEP population, but seemed to improve after screening. Socioeconomic status, including insurance coverage, is a major patient-related determinant of nephrology consultation. Although KEEP was effective in increasing the percentage of participants seeing a nephrologist, the rate was low and probably overestimated in our sample. These results also highlight that a large percentage of the population who returned had seen a physician in the year before the second screening. Identifying the communication barriers between nephrologists and primary care physicians may be a new focus for KEEP, particularly with the current emphasis on accountable care organizations and medical home designations. ACKNOWLEDGEMENTS The KEEP Investigators are Peter A. McCullough, Adam T. Whaley-Connell, Andrew Bomback, Kerri Cavanaugh, Linda Fried, Claudine Jurkovitz, Mikhail Kosiborod, Samy McFarlane, Rajnish Mehrotra, Keith Norris, Rulan Savita Parekh, Carmen A. Peralta, Georges Saab, Stephen Seliger, Michael Shlipak, Lesley Inker, Manjula Kurella Tamura, John Wang; ex-officio, Bryan Becker, Allan Collins, Nilka Ríos Burrows, Lynda A. Szczech, Joseph Vassalotti; advisory group, George Bakris, Wendy Brown; data coordinating center, Shu-Cheng Chen. We thank the participants and staff who volunteered their time to make the KEEP screening a successful event and Chronic Disease Research Group colleagues Shane Nygaard, BA, for manuscript preparation and Nan Booth, MSW, MPH, ELS, for manuscript editing. Support: The KEEP is a program of the National Kidney Foundation Inc and is supported by Amgen, Abbott, Siemens, Astellas, Fresenius Medical Care, Genzyme, LifeScan, Nephroceuticals, and Pfizer. Dr Norris receives support from National Institutes of Health grants RR and MD Dr Whaley- Connell receives support from the Veteran s Affairs Career Development Award-2, National Institutes of Health grant R03AG , and American Society of Nephrology-Association of Specialty Professors-National Institute on Aging Development Grant in Geriatric Nephrology. Financial Disclosure: The authors declare that they have no other relevant financial interests. REFERENCES 1. Go AS, Chertow GM, Fan D, et al. Chronic kidney disease and the risks of death, cardiovascular events, and hospitalization. N Engl J Med. 2004;351(13): Matsushita K, van der Velde M, Astor BC, et al. Association of estimated glomerular filtration rate and albuminuria with all-cause and cardiovascular mortality in general population cohorts: a collaborative meta-analysis. Lancet. 2010;375(9731): Collins AJ, Foley RN, Herzog C, et al. Excerpts from the US Renal Data System 2009 Annual Data Report. Am J Kidney Dis. 2010;55(suppl 1):S1-S420, A426-A Peralta CA, Hicks LS, Chertow GM, et al. Control of hypertension in adults with chronic kidney disease in the United States. Hypertension. 2005;45(6): Bayliss EA, Bhardwaja B, Ross C, et al. Multidisciplinary team care may slow the rate of decline in renal function. Clin J Am Soc Nephrol. 2011;6(4): Hemmelgarn BR, Zhang J, Manns BJ, et al. Nephrology visits and health care resource use before and after reporting estimated glomerular filtration rate. JAMA. 2010;303(12): Sprangers B, Evenepoel P, Vanrenterghem Y. Late referral of patients with chronic kidney disease: no time to waste. Mayo Clin Proc. 2006;81(11): Castro AF, Coresh J. CKD surveillance using laboratory data from the population-based National Health and Nutrition Examination Survey (NHANES). Am J Kidney Dis. 2009;53(suppl 3):S46- S National Kidney Foundation. K/DOQI Clinical Practice Guidelines for Chronic Kidney Disease: evaluation, classification and stratification. Am J Kidney Dis. 2002;39(suppl 1):S17-S National Kidney Foundation. K/DOQI Clinical Practice Guidelines on Hypertension and Antihypertensive Agents in Chronic Kidney Disease. Am J Kidney Dis. 2004;43(suppl 1):S1-S Black C, Sharma P, Scotland G, et al. Early referral strategies for management of people with markers of renal disease: a systematic review of the evidence of clinical effectiveness, costeffectiveness and economic analysis. Health Technol Assess. 2010; 14(21): Chan MR, Dall AT, Fletcher KE, et al. Outcomes in patients with chronic kidney disease referred late to nephrologists: a meta-analysis. Am J Med. 2007;120(12): Brown WW, Peters RM, Ohmit SE, et al. Early detection of kidney disease in community settings: the Kidney Early Evaluation Program (KEEP). Am J Kidney Dis. 2003;42(1): Jurkovitz C, Qiu Y, Wang C, et al. The Kidney Early Evaluation Program (KEEP): program design and demographic characteristics of the population. Am J Kidney Dis. 2008;51(suppl 2):S3-S Chobanian AV, Bakris GL, Black HR, et al. Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure. Hypertension. 2003; 42(6): Stevens LA, Stoycheff N. Standardization of serum creatinine and estimated GFR in the Kidney Early Evaluation Program (KEEP). Am J Kidney Dis. 2008;51(suppl 2):S77-S Levey AS, Stevens LA, Schmid CH, et al. A new equation to estimate glomerular filtration rate. Ann Intern Med. 2009;150(9): Kendrick J, Shlipak MG, Targher G, et al. Effect of lovastatin on primary prevention of cardiovascular events in mild CKD and kidney function loss: a post hoc analysis of the Air Force/ Texas Coronary Atherosclerosis Prevention Study. Am J Kidney Dis. 2010;55(1): Ridker PM, MacFadyen J, Cressman M, et al.;55(12): Thompson PD, Clarkson P, Karas RH. Statin-associated myopathy. JAMA. 2003;289(13): Wanner C, Krane V, Marz W, et al. Atorvastatin in patients with type 2 diabetes mellitus undergoing hemodialysis. N Engl J Med. 2005;353(3): S32 10 Physician Care and Risk-Factor Control 22. McCullough P, Li S, Jurkovitz C, et al. CKD and cardiovascular disease in screened high-risk volunteer and general populations: the Kidney Early Evaluation Program (KEEP) and National Health and Nutrition Examination Survey (NHANES) Am J Kidney Dis 2008;51(suppl 2):S38-S McCullough PA, Jurkovitz CT, Pergola PE, et al. Independent components of chronic kidney disease as a cardiovascular risk state. Results from the Kidney Early Evaluation Program (KEEP). Arch Intern Med. 2007;167(11): Tamura MK, Anand S, Li S, et al. Comparison of CKD awareness in a screening population using the Modification of Diet in Renal Disease (MDRD) Study and CKD Epidemiology Collaboration (CKD-EPI) equations. Am J Kidney Dis. 2011;57(suppl e):s17-s Navaneethan SD, Kandula P, Jeevanantham V, et al. Referral patterns of primary care physicians for chronic kidney disease in general population and geriatric patients. Clin Nephrol. 2010; 73(4): Campbell KH, Sachs GA, Hemmerich JA, et al. Physician referral decisions for older chronic kidney disease patients: a pilot study of geriatricians, internists, and nephrologists. J Am Geriatr Soc. 2010;58(2): Boulware LE, Troll MU, Jaar BG, et al. Identification and referral of patients with progressive CKD: a national study. Am J Kidney Dis. 2006;48(2): Diamantidis CJ, Powe NR, Jaar BG, et al. Primary carespecialist collaboration in the care of patients with chronic kidney disease. Clin J Am Soc Nephrol. 2011;6(2): S33 MANAGEMENT OF LIPID DISORDERS: IMPLICATIONS OF THE NEW GUIDELINES MANAGEMENT OF LIPID DISORDERS: IMPLICATIONS OF THE NEW GUIDELINES Robert B. Baron MD MS Professor and Associate Dean UCSF School of Medicine Declaration of full disclosure: No conflict of interest EXPLAINING, Diabetes and the Kidneys Diabetes and the Kidneys Aim(s) and objective(s) This guideline focuses on the detection, prevention, and management of kidney disease in people with diabetes. The management of end-stage renal disease Absolute cardiovascular disease risk assessment Quick reference guide for health professionals Absolute cardiovascular disease risk assessment This quick reference guide is a summary of the key steps involved in assessing absolute cardiovascular risk: EXPANDING THE EVIDENCE BASE IN OUTCOMES RESEARCH: USING LINKED ELECTRONIC MEDICAL RECORDS (EMR) AND CLAIMS DATA EXPANDING THE EVIDENCE BASE IN OUTCOMES RESEARCH: USING LINKED ELECTRONIC MEDICAL RECORDS (EMR) AND CLAIMS DATA A CASE STUDY EXAMINING RISK FACTORS AND COSTS OF UNCONTROLLED HYPERTENSION ISPOR 2013 WORKSHOP 2012 Georgia Diabetes Burden Report: An Overview r-,, 2012 Georgia Diabetes Burden Report: An Overview Background Diabetes and its complications are serious medical conditions disproportionately affecting vulnerable population groups including: aging Addressing Racial/Ethnic Disparities in Hypertensive Health Center Patients Addressing Racial/Ethnic Disparities in Hypertensive Health Center Patients Academy Health June 11, 2011 Quyen Ngo Metzger, MD, MPH Data Branch Chief, Office of Quality and Data U.S. Department of Health Diabetic Nephropathy Diabetic Nephropathy Kidney disease is common in people affected by diabetes mellitus Definition Urinary albumin excretion of more than 300mg in a 24 hour collection or macroalbuminuria Abnormal renal) Brief Coronary Heart Disease (CHD) Brief What is Coronary Heart Disease? Coronary Heart Disease (CHD), also called coronary artery disease 1, is the most common heart condition in the United States. It occurs, Type 1 Diabetes ( Juvenile Diabetes) Type 1 Diabetes W ( Juvenile Diabetes) hat is Type 1 Diabetes? Type 1 diabetes, also known as juvenile-onset diabetes, is one of the three main forms of diabetes affecting millions of people worldwide. Diabetes Prevention in Latinos Diabetes Prevention in Latinos Matthew O Brien, MD, MSc Assistant Professor of Medicine and Public Health Northwestern Feinberg School of Medicine Institute for Public Health and Medicine October 17, 2013 Improving cardiometabolic health in Major Mental Illness Improving cardiometabolic health in Major Mental Illness Dr. Adrian Heald Consultant in Endocrinology and Diabetes Leighton Hospital, Crewe and Macclesfield Research Fellow, Manchester University Metabolic Calculating the stage of Renal Disease Calculating the stage of Renal Disease When the Refresh Template/Check Labs button is depressed, the box next to MDRD, will be automatically checked. In order to use this in the calculation of the stage Prediction of Kidney Disease Progression in Patients with Diabetes Prediction of Kidney Disease Progression in Patients with Diabetes John Arthur, MD, PhD Medical University of South Carolina SEKDC Meeting September 8, 2012 Objectives Understand the importance of predicting The Canadian Association of Cardiac Reinventing Cardiac Rehabilitation Outside of acute care institutions, cardiovascular disease is a chronic, inflammatory process; the reduction or elimination of recurrent acute coronary syndromes is a Coding to be more efficient and accurate Why we need to code well! Coding to be more efficient and accurate Diabetes without Complication Diabetes with opthamologic or unspecified complication Diabetes with acute complication $1833 $2931 $3836, Summary Evaluation of the Medicare Lifestyle Modification Program Demonstration and the Medicare Cardiac Rehabilitation Benefit The Centers for Medicare & Medicaid Services' Office of Research, Development, and Information (ORDI) strives to make information available to all. Nevertheless, portions of our files including charts, Improving Diabetes Care for All New Yorkers Improving Diabetes Care for All New Yorkers Lynn D. Silver, MD, MPH Assistant Commissioner Bureau of Chronic Disease Prevention and Control Diana K. Berger, MD, MSc Medical Director Diabetes Prevention PRESCRIBING GUIDELINES FOR LIPID LOWERING TREATMENTS for SECONDARY PREVENTION Hull & East Riding Prescribing Committee PRESCRIBING GUIDELINES FOR LIPID LOWERING TREATMENTS for SECONDARY PREVENTION For guidance on Primary Prevention please see NICE guidance InDependent Diabetes Trust InDependent Diabetes Trust Kidneys and Diabetes Updated July 2015 Registered Company Number 3148360 Registered Charity No 1058284 Contents Introduction Healthy Kidneys Kidney disease and diabetes The use Diabetes and Your Kidneys American Kidney Fund reaching out giving hope improving lives Diabetes and Your Kidneys reaching out giving hope improving lives Diabetes: The #1 Cause of Kidney Failure Your doctor told you that you have The contribution of chronic kidney disease to the global burden of major noncommunicable diseases & 2011 International Society of Nephrology The contribution of chronic kidney disease to the global burden of major noncommunicable diseases William G. Couser 1, Giuseppe Texas Diabetes Fact Sheet I. Adult Prediabetes Prevalence, 2009 According to the 2009 Behavioral Risk Factor Surveillance System (BRFSS) survey, 984,142 persons aged eighteen years and older in Texas (5.4% of this age group) have An International Atherosclerosis Society Position Paper: Global Recommendations for the Management of Dyslipidemia An International Atherosclerosis Society Position Paper: Global Recommendations for the Management of Dyslipidemia Introduction Executive Summary The International Atherosclerosis Society (IAS) here updates Facts about Diabetes in Massachusetts Facts about Diabetes in Massachusetts Diabetes is a disease in which the body does not produce or properly use insulin (a hormone used to convert sugar, starches, and other food into the energy needed Diabetes Complications Managing Diabetes: It s s Not Easy But It s s Worth It Presenter Disclosures W. Lee Ball, Jr., OD, FAAO (1) The following personal financial relationships with commercial interests relevant to this presentation Racial and ethnic disparities in type 2 diabetes Racial and ethnic disparities in type 2 diabetes Nisa M. Maruthur, MD, MHS Assistant Professor of Medicine & Epidemiology Welch Center for Prevention, Epidemiology, & Clinical Research The Johns Hopkins Connecticut Diabetes Statistics Connecticut Diabetes Statistics What is Diabetes? State Public Health Actions (1305, SHAPE) Grant March 2015 Page 1 of 16 Diabetes is a disease in which blood glucose levels are above normal. Blood glucose Inaccuracy of ICD-9 Codes for Chronic Kidney Disease: A Study from Two Practice-based Research Networks (PBRNs) BRIEF REPORT Inaccuracy of ICD-9 Codes for Chronic Kidney Disease: A Study from Two Practice-based Research Networks (PBRNs) Charlotte W. Cipparone, BA, Matthew Withiam-Leitch, MD, PhD, Kim S. Kimminau,, Kansas Behavioral Health Risk Bulletin Kansas Behavioral Health Risk Bulletin Kansas Department of Health and Environment November 7, 1995 Bureau of Chronic Disease and Health Promotion Vol. 1 No. 12 Diabetes Mellitus in Kansas Diabetes mellitus Organization and Structure of a Peritoneal Dialysis Program: an important ingredient for success Organization and Structure of a Peritoneal Dialysis Program: an important ingredient for success Fredric O. Finkelstein Hospital of St. Raphael, Yale University New Haven. CT 1 Overview of Presentation The ACC 50 th Annual Scientific Session Special Report The ACC 50 th Annual Scientific Session Part Two From March 18 to 21, 2001, physicians from around the world gathered to learn, to teach and to discuss at the American College of Cardiology Scottish Diabetes Survey 2013. Scottish Diabetes Survey Monitoring Group Scottish Diabetes Survey 2013 Scottish Diabetes Survey Monitoring Group Contents Contents... 2 Foreword... 4 Executive Summary... 6 Prevalence... 8 Undiagnosed diabetes... 18 Duration of Diabetes... 18 Electronic health records to study population health: opportunities and challenges Electronic health records to study population health: opportunities and challenges Caroline A. Thompson, PhD, MPH Assistant Professor of Epidemiology San Diego State University [email protected] Louisiana Report 2013 Louisiana Report 2013 Prepared by Louisiana State University s Public Policy Research Lab For the Department of Health and Hospitals State of Louisiana December 2015 Introduction The Behavioral Risk Factor The VITamin D and OmegA-3 TriaL (VITAL) The VITamin D and OmegA-3 TriaL (VITAL) JoAnn E. Manson, MD, DrPH Principal Investigator, VITAL Brigham and Women's Hospital Professor of Medicine and the Michael and Lee Bell Professor of Women s National Diabetes Audit. Report 2: Complications and Mortality National Diabetes Audit 2011 2012 Report 2: Complications and Mortality The National Diabetes Audit is commissioned by The Healthcare Quality Improvement Partnership (HQIP) promotes quality in healthcare. Cardiac Rehabilitation: An Under-utilized Resource Making Patients Live Longer, Feel Better Cardiac Rehabilitation: An Under-utilized Resource Making Patients Live Longer, Feel Better Marian Taylor, M.D. Medical University of South Carolina Director, Cardiac Rehabilitation I have no disclosures. FACILITATOR/MENTOR GUIDE FACILITATOR/MENTOR GUIDE Descriptive analysis variables table shells hypotheses Measures of association methods design justify analytic assess calculate analysis problem stratify confounding statistical Cardiac Rehabilitation An Underutilized Class I Treatment for Cardiovascular Disease Cardiac Rehabilitation An Underutilized Class I Treatment for Cardiovascular Disease What is Cardiac Rehabilitation? Cardiac rehabilitation is a comprehensive exercise, education, and behavior modification
http://docplayer.net/64150793-Physician-utilization-risk-factor-control-and-ckd-progression-among-participants-in-the-kidney-early-evaluation-program-keep.html
CC-MAIN-2018-09
en
refinedweb
In ASP.NET webapi, I send a temporary file to client. I open a stream to read the file and use the StreamContent on the HttpResponseMessage. Once the client receives the file, I want to delete this temporary file (without any other call from the client) Once the client recieves the file, the Dispose method of HttpResponseMessage is called & the stream is also disposed. Now, I want to delete the temporary file as well, at this point. One way to do it is to derive a class from HttpResponseMessage class, override the Dispose method, delete this file & call the base class's dispose method. (I haven't tried it yet, so don't know if this works for sure) I want to know if there is any better way to achieve this. Actually your comment helped solve the question... I wrote about it here: Delete temporary file sent through a StreamContent in ASP.NET Web API HttpResponseMessage Here's what worked for me. Note that the order of the calls inside Dispose differs from your comment: public class FileHttpResponseMessage : HttpResponseMessage { private string filePath; public FileHttpResponseMessage(string filePath) { this.filePath = filePath; } protected override void Dispose(bool disposing) { base.Dispose(disposing); Content.Dispose(); File.Delete(filePath); } }
https://codedump.io/share/6OBcDLjI7spd/1/how-to-delete-the-file-that-was-sent-as-streamcontent-of-httpresponsemessage
CC-MAIN-2018-09
en
refinedweb
Hi buddies :) I'm required to reorder an array which contains several items with a given format. For example, like this: m=11:00_12:00,d=10:00_10:30,a=13:00_13:30 As we can see, any item's given format is a string like x=whateverinitialhour_whateverfinalhour As stated before, I have to reorder these items depending on initial hour, so the expected result should be: d=10:00_10:30,m=11:00_12:00,a=13:00_13:30 Well, I have been searching a lot and found usort, and I found similar questions too like How to sort multidimensional PHP array (recent news time based implementation) So, I'm coding this: $arr = array(); $newarr = array(); $arr = ('m=11:00_12:00','d=10:00_10:30','a=13:00_13:30'); usort($arr, function($a,$b) {return strtotime(substr($a[0],2,5))-strtotime(substr($b[0],2,5));}); foreach ($arr as $value) { $newarr[] = $value; } Look at your usort() function, usort($arr, function($a,$b) { return strtotime(substr($a[0],2,5))-strtotime(substr($b[0],2,5)); ^^^^^ ^^^^^ }); $a and $b are not arrays, they are elements of the array. Also, you don't need a new array to store the sorted array, usort() applies the manipulation in the original array itself. So your code should be like this: $arr = array('m=11:00_12:00','d=10:00_10:30','a=13:00_13:30'); usort($arr, function($a,$b){ return (strtotime(substr($a,2,5)) < strtotime(substr($b,2,5))) ? -1 : 1; }); var_dump($arr);
https://codedump.io/share/ze4SprV4rYip/1/reorder-array-items-depending-on-substring-representing-time-in-php
CC-MAIN-2018-09
en
refinedweb
hello. this is my first post in here :) im new to programming and im learning C as my first language. I need help on this problem, i am making a running clock and it prints out in this format: 1:1:11 but i would like to make it look like in this format: 01:01:11 here is my code so far, i am using Turbo C :P i really want to google this problem, but i don't know what to search for. :D #include <stdio.h> main() { int sec, min, hour; clrscr(); for (hour=0; hour<=24; hour++) { for (min=0; min<=60; min++) { for (sec=0; sec<=60; sec++) { printf("%d:%d:%d", hour, min, sec); delay(100000000); clrscr(); } } } getch();
https://www.daniweb.com/programming/software-development/threads/226823/time-hour-min-sec
CC-MAIN-2018-09
en
refinedweb
Robin Becker <robin at jessikat.fsnet.co.uk> wrote in message YR13DVAUNld9EwXH at jessikat.demon.co.uk... >. then the JVM is buggy (very strange btw) because is should not use more than -Xmx memory and then say "out of memory" and the default should be 64Mb but not 1Gb for sure. [To prove the point consider this example program and two runs: (under w2k btw). * outofmem.java * public class outofmem { public static void main(String args[]) { java.util.ArrayList list = new java.util.ArrayList(); int count = 0; while(true) { list.add(new int[1024/4*1024*5]); // ~5MB System.out.println(++count); } } } * runs * D:\exp\java-out-of-memory>\usr\jdk1.4.0\bin\javac outofmem.java D:\exp\java-out-of-memory>\usr\jdk1.4.0\bin\java -version java version "1.4.0" Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.0-b92) Java HotSpot(TM) Client VM (build 1.4.0-b92, mixed mode) D:\exp\java-out-of-memory>\usr\jdk1.4.0\bin\java outofmem 1 2 3 4 5 6 7 8 9 10 11 Exception in thread "main" java.lang.OutOfMemoryError D:\exp\java-out-of-memory>\usr\jdk1.4.0\bin\java -Xmx80m outofmem 1 2 3 4 5 6 7 8 9 10 11 12 13 14 Exception in thread "main" java.lang.OutOfMemoryError ] Are you spawning processes through os.system or something? otherwise you are not telling some piece of the picture. regards.
https://mail.python.org/pipermail/python-list/2002-September/125727.html
CC-MAIN-2016-40
en
refinedweb
Ok so I already did a search for this problem and followed several sets of instructions none of which worked. Whenever my car lands on any surface from a high distance, or with a lot of velocity it winds up bouncing ridiculously. Currently right now both the car, and the object I am bouncing it on is made out of a bounceless material with the following properties: Additionally the giant cardboard box I am bouncing this one has the following script attached to it: using UnityEngine; using System.Collections; public class killBounce : MonoBehaviour { void OnCollisionEnter(Collision collision){ //set y velocity of collider to 0 and stop the annoying gamebreaking bouncing Debug.Log("Killbounce activated"); var currentVelocity = collision.rigidbody.velocity; currentVelocity.y = 0f; collision.rigidbody.velocity = currentVelocity; Debug.Log (collision.rigidbody.velocity.y); } } But alas it is to no avail. The car still bounces like a kickball on a trampoline. What can I do to fix this? Ok I fixed it by adjusting the gravity, and bounce threshold. Thanks for the help. Answer by clunk47 · Aug 08, 2013 at 08:02 PM Was going to convert my comment to an answer, but for some reason it's missing. So I'll post again. Try adjusting your gravity. If this helps, please accept/voteup the answer. Thanks for the help. Wouldn't let me upvote though. There now you can upvote. Also, use the check mark if you want to accept the answer, thumbs up is upvote. Yes, i would also adjust the gravity and the mass of the object. I f you make it more heavier it will bounce less. Ok I upvoted. Cool cool thanks, and glad I could help . Why Rigidbody Collisions Bounce? 2 Answers How to make an infinite mass solid body 1 Answer Ball issue and question 0 Answers Creating a bounce effect? 2 Answers Mario Mushroom Bouncy 1 Answer
http://answers.unity3d.com/questions/510730/stop-my-car-from-bouncing-like-a-rubber-band-ball.html#answer-510835
CC-MAIN-2016-40
en
refinedweb
glibmm: Gio::Icon Class Reference This is a very minimal interface for icons. More... #include <giomm/icon.h> Detailed Description This is a very minimal interface for icons. It provides functions for checking the equality of two icons, hashing of icons and serializing an icon to and from strings and Variants. Gio::Icon does not provide the actual pixmap for the icon as this is out of GIO's scope. However implementations of Icon may contain the name of an icon (see ThemedIcon), or the path to an icon (see LoadableIcon). To obtain a hash of an Icon instance, see hash(). To check if two Icon instances are equal, see equal(). For serializing an Icon, use serialize() and deserialize(). Constructor & Destructor Documentation You should derive from this class to use it. Member Function Documentation Generate an Icon instance from str. This function can fail if str is not valid. See to_string() for discussion. If your application or library provides one or more Icon implementations, you need to ensure that each GType is registered with the type system prior to calling create(). - Parameters - - Exceptions - Deserializes a Icon previously serialized using g_icon_serialize(). - Parameters - Get the GType for this class, for use with the underlying GObject type system. Provides access to the underlying C GObject. Provides access to the underlying C GObject. Gets a hash for an icon. Virtual: hash - Returns - A unsigned intcontaining a hash for the icon, suitable for use in a HashTable or similar data structure. Serializes a Icon into a Variant. An equivalent Icon can be retrieved back by calling g_icon_deserialize() on the returned value. As serialization will avoid using raw icon data when possible, it only makes sense to transfer the Variant between processes on the same machine, (as opposed to over the network), and within the same file system namespace. - Returns - A Variant, or nullptrwhen serialization fails. Icon except in the following two cases - If icon is a FileIcon, the returned string is a native path (such as /path/to/my icon.png) without escaping if the File for icon is a native file. If the file is not native, the returned string is the result of g_file_get_uri() (such as s). - If icon is a ThemedIcon with exactly one name, the encoding is simply the name (such as network-server). Virtual: to_tokens - Returns - An allocated NUL-terminated UTF8 string or nullptrif icon can't be serialized. Friends And Related Function Documentation A Glib::wrap() method for this object. - Parameters - - Returns - A C++ instance that wraps this C instance.
https://developer.gnome.org/glibmm/unstable/classGio_1_1Icon.html
CC-MAIN-2016-40
en
refinedweb
sec_login_become_delegate-Causes an intermediate server to become a delegate in traced delegation chain #include <dce/sec_login.h> sec_login_handle_t sec_login_become_delegate ( rpc_authz_cred_handle_t callers_identity, - callers_identity A handle of type rpc_authz_cred_handle_t to the authenticated identity of the previous delegate in the delegation chain. The handle is supplied by the rpc_binding_inq_auth_caller() call. - my_login_context A value of sec_login_handle_t that provides an opaque handle to the identity of the client that is becoming the intermediate delegate. authenticated identity if a new identity was established Note that this identity specified by sec_login_handle_t must be a simple login context; it cannot be a compound identity created by a previous sec_login_become_delegate(). Note that intermediate client identified by my_login_context. These servers are added to delegates permitted by the delegate_restrictions parameter of the sec_login_become_initiator() call. - target_restrictions A pointer to a sec_id_restriction_set_t that supplies a list of servers that can act as targets for the intermediate client identified by my_login_context. These servers are added to targets specified by the target_restrictions parameter of the sec_login_become_initiator() call. - optional_restrictions A pointer to a sec_id_opt_req_t that supplies a list of application-defined optional restrictions that apply to the intermediate client identified by my_login_context. These restrictions are added to the restrictions identified by the optional_restrictions parameter of the sec_login_become_initiator() call. - required_restrictions A pointer to a sec_id_opt_req_t that supplies a list of application-defined required restrictions that apply to the intermediate client identified by my_login_context. These restrictions are added to the restrictions identified required_restrictions parameter of the sec_login_become_initiator() call. - compatibility_mode A value of sec_id_compatibility_mode_t that specifies the compatibility mode to be used when the intermediate client operates_delegate(). Any delegate, target, required, or optional restrictions specified in this call are added to the restrictions specified by the initiating client and any intermediate clients. The sec_login_become_delegate() call is run only if the initiating client enabled traced delegation by setting the delegation_type_permitted parameter in the sec_login_become_initiator() call to sec_id_deleg_type_traced. - _context - sec_login_s_compound_delegate - sec_login_s_invalid_deleg_type - sec_login_s_invalid_compat_mode - sec_login_s_deleg_not_enabled - error_status_ok Functions: rpc_binding_inq_auth_caller(), sec_login_become_impersonator(), sec_login_become_initiator(), sec_login_get_current_context(), sec_login_setup_identity(), sec_login_validate_identity().
http://pubs.opengroup.org/onlinepubs/9696989899/sec_login_become_delegate.htm
CC-MAIN-2016-40
en
refinedweb
elasticsearch-dsl 2.1.0 Python client for Elasticsearch Let’s have a typical search request written directly as a dict: from elasticsearch import Elasticsearch client = Elasticsearch() response = client.search( index="my-index", body={ ') of the documentation. Migration from elasticsearch-py() Documentation Documentation is available: Honza Král - - Package Index Owner: honza.kral - DOAP record: elasticsearch-dsl-2.1.0.xml
https://pypi.python.org/pypi/elasticsearch-dsl
CC-MAIN-2016-40
en
refinedweb
Copyright © 2015-2016 W3C® (MIT, ERCIM, Keio, Beihang). W3C liability, trademark and document use rules apply. version includes additional refinements to the authoring guidance. Feedback on the information provided here is essential to the ultimate success of Rich Internet Applications that afford full access to their information and operations. The Accessible Rich Internet Applications Working Group asks in particular: To comment, send email to [email protected] (comment archive) or file an issue in the W3C ARIA GitHub repository, using the "APG" label in the issue. informative. The WAI-ARIA Authoring Practices Guide is intended to provide an understanding of how to use WAI-ARIA to create an accessible Rich Internet Application. It describes recommended WAI-ARIA usage patterns and provides an introduction to the concepts behind them. This guide is one part of a suite of resources that support the WAI-ARIA specification. The WAI-ARIA suite fills accessibility gaps identified by the [WAI-ARIA-ROADMAP]. As explained in Background on WAI-ARIA,) Protocols and Formats working group (PFWG) is addressing these deficiencies through several W3C standards efforts, with a focus on the WAI-ARIA specifications. For an introduction to WAI-ARIA, see the Accessible Rich Internet Applications Suite (WAI-ARIA) Overview. With the understanding many prefer to learn from examples, the guide begins with a section that demonstrates how to make common widgets accessible with descriptions of expected behaviors supported by working code. Where it is helpful to do so, the examples refer to detailed explanations of supporting concepts in subsequent sections. The sections that follow the examples first provide background that helps build understanding of how WAI-ARIA works and how it fits into the larger web technology picture. Next, the guide covers general steps for building an accessible widget using WAI-ARIA, JavaScript, and CSS, including detailed guidance on how to make rich internet applications keyboard accessible. The scope then widens to include the full application, addressing the page layout and structural semantics critical to enabling a usable experience with assistive technologies on pages containing both rich applications and rich documents. It includes guidance on dynamic document management, use of WAI-ARIA Form properties, and the creation of WAI-ARIA-enabled alerts and dialogs. This section demonstrates how to make common rich internet application widgets and patterns accessible by applying WAI-ARIA roles, states, and properties and implementing keyboard support. keyboard conventions are applicable to many of the patterns described in subsequent sections. The following guidance on nested widgets will be moved to another section., if there are two widgets A and B on a page where widget A contains within it Widget A1 and Widget A1 contains within it Widget A2, the focus sequence when pressing the tab key would be A1, A2, A3, B. This section has not been updated since it was integrated from APG version 1.0 -- an APG task force review is pending. ([WAI] for an in-depth definition for the role to find the supported states, properties, and other attributes. For example, the toolbar role definition includes:. Commented out link to conference paper, which a) was broken, and b) we should have more canonical references than conference papers for things like this. Thomas comment: is confusing that the example switches from toolbar to treeitem. Maybe the best overall sample is to do a tree because it demonstrates each of the points you want to make? You should consider binding user interface changes directly to changes in WAI-ARIA states and properties, such as through the use of CSS attribute selectors. For example, the setting of the aria-selected color,. This section has not been updated since it was integrated from APG version 1.0 -- an APG task force review is pending. multi. Label each region: If the entire web page has a role of application then it should not be treated as a navigational landmark by an assistive="application": Set the Application Role as follows:Example 17: When constructing a grid preferable This section has not been updated since it was integrated from APG version 1.0 -- an APG task force review is pending.. <div>and you need to associate a label for the dialog. With a WAI-ARIA role of dialog, you can indicate its widget type and define a label using an HTML header and then associate that label with the dialog using the aria-labelledby relationship. > <div id="descriptionClose">Closing this window will discard any information entered and return you back to the main page</div> non cyclical in that each node may only have one parent. In some situations, a child is reused by multiple parents or a child is separated from its siblings> This section has had only minor edits since it was integrated from APG version 1.0 -- a complete APG task force review is pending.. There are times to suppress AT presentation changes while a region is updating. For that you can use the aria-busy property. To suppress presentation of changes until a region is finished updating or until a number of rapid-fire changes are finished, set aria-busy=-relevant="removals"or aria-relevant="all"should be used sparingly. Notification of an assistive technology when content is removed may cause an unwarranted number of changes to be notified to the user. altattribute of images. This example shows two live regions. If both regions update simultaneously, liveRegionA should be spoken first because its message has a higher priority than liveRegionB. Example 38 role role instead if something inside the alert is to receive focus. Both alert and alertdialog appear to pop-up to the user to get their attention. status - You must use the status role does property:Example 39 attribute. has not been updated since it was integrated from APG version 1.0 -- an APG task force review is pending. accessibility Core Accessibility API Mappings describes the cases where this occurs: has not been updated since it was integrated from APG version 1.0 -- an APG task force review is pending.. This section has not been updated since it was integrated from APG version 1.0 -- an APG task force review is pending. mathematical: Todo: add aria-label here also > aria-grabbed and aria-dropeffect have been deprecated in ARIA 1.1 - as such this section has been removed. Advice for implementing drag and drop will be added to a future version of the Authoring Practices Guide. This section has not been updated since it was integrated from APG version 1.0 -- an APG task force review is pending. Core Accessibility API Mappings [CORE-AAM]. behavior. This section has not been updated since it was integrated from APG version 1.0 -- an APG task force review is pending. has not been updated since it was integrated from the ARIA 1.0 Primer -- an APG task force review is pending. [WAI-WEBCONTENT], WAI-ARIA as a prelude to using the [WAI accessibility WAI-ARIA, and the WAI-ARIA Authoring Practices describe recommended usage patterns for Web content developers. The WAI-ARIA Suite fills gaps identified by the [WAI assistive technologies is provided only by the HTML element's tag name, with only the accessibility attributes that tag can provide. [WAI-ARIA]. The goal is to make these standard features in HTML 5. Using Fig. 1 Accessibility Interoperability at a DOM Node without JavaScript. This section has not been updated since it was integrated from the ARIA 1.0 Primer -- an APG task force review is pending. WAI WAI-ARIA. Role attribute - The role attribute, borrowed from the, [ROLE-ATTRIBUTE], allows the author to annotate host languages with machine-extractable semantic information about the purpose of an element. It is targeted for accessibility, device adaptation, server-side processing, and complex data description. WAI-ARIA uses the role attribute to provides the role information, in Fig. 2 Accessibility Interoperability at a DOM Node with JavaScript to an assistive technology. Role document landmark values - These values, borrowed from the [ROLE-ATTRIBUTE] WAI-ARIA role values - The necessary core roles found in Accessibility API sets for Windows and Linux as well as roles representative of document structure, such as banner [RDF-CONCEPTS] and [OWL-FEATURES]. [[XHTML Access]]. Web Content Accessibility Guidelines 2.0 calls for the WAI [JAPI], [MSAA]], [AXAPI], and the [ATK], or [UI-AUTOMATION] into XHTML 1.x <?xml version="1.1" encoding="us-ascii"?> <!DOCTYPE html PUBLIC "Accessible Adaptive Applications//EN" ""> <html xmlns="" > <body> <div role="menu"> File </div> </body> </html> WAI used RDF/OWL to model our taxonomy for WAI WAI-ARIA (including those specified by the XHTML Role attribute module) to be specified without a namespace prefix. Additionally, WAI-ARIA states and properties shall be represented as aria- followed by the concatenated WAI. The WAI-ARIA role taxonomy was modeled using semantic web technology, in the form of [RDF-CONCEPTS] and the [OWL-FEATURES],. Fig. 4 Example, Partial RDF Map for a possible ButtonUndo role as an extended role to WAI-ARIA WAI-ARIA roles without namespaces, the RDF representation for a given role may be referenced using a qname from a Host XML markup language. This examples shows an XHTML reference to a grid role in the RDF representation of the WAI. Fig. 5 DHTML Example-WAI-ARIA Roles, States, and Properties specification, are added as attributes to each of the XHTML elements repurposed as GUI widgets dynamically. The user agent, in this case Firefox, maps this information to the platform accessibility API. shows the Microsoft Active Accessibility rendering of the new accessibility markup provided on the DataGrid page tab which has focus. Fig. 6 Microsoft Inspect Tool rendering of the page tab DataGrid. This section has not been updated since it was integrated from the ARIA 1.0 Primer -- an APG task force review is pending.). Authors may choose to achieve visual synchronization of these interface elements by using a script or by using CSS attribute selectors. 2. Design Patterns and Widgets.. This section has not been updated since it was integrated from the ARIA 1.0 Primer -- an APG task force review is pending. WAI . Editor's Note: Figure 7, described as WAI-ARIA tree widget usability comparison, refers to a resource that has not yet been found. Fig. 7 Usability of Tree Widget Using WAI-ARIA Semantics to Implement WCAG 2.0 Guidelines Compared to WCAG 1.0 Without WAI-ARIA shows an "accessible" widget for a tree item, within a tree widget, using WCAG 1.0 without WAI-ARIA, which ,when supplied to a screen reader, may say "link folder Year." There is no information to indicate that the folder is closed ( aria-expanded = "false"). There is no information to indicate its depth ( aria-level="2"), WAI-ARIA version might say "Closed Folder Year, Closed Folder one of two, Depth two, unchecked." Furthermore, the WAI.
http://www.w3.org/TR/wai-aria-practices-1.1/
CC-MAIN-2016-40
en
refinedweb
Hello :) Installed NetBeans 7.0 full package. This issue is only in C++ dev. since I checked in in Java and haven’t found this problem there. The problem: I try to go step by step F7 (line by line) in debug mode but it jumps over the lines to the next breakpoint. Sometimes it jumps to next breakpoint and from there it continues as expected line by line. For example: #include <iostream> int main(int argc, char**argv) { // Prints welcome message... std::cout << "Welcome ..." << std::endl; int i = 5; int j = 10; std::string s1 = "TEST"; std::string s2 = "TEST2"; // Prints arguments... if (argc > 1) { std::cout << std::endl << "Arguments:" << std::endl; for (int i = 1; i < argc; i++) { std::cout << i << ": " << argv[i] << std::endl; } } return 0; } Scenarios: (BP=breakpoint) 1) If I put BP on the first line std::out << Welcome... it stops there and the next hit on F7 goes to the end of the program (executes everything) NOT GOOD 2) The same as (1) but after the stop on BP I do "run to cursor" and let's assume the cursor now on "int j = 10;". After this hit on F7 will jump to next line as expected and then to the next next next. Works fine, but this is annoying on every debug to make run to cursor in order to debug step by step. 3) If I put BP on the first line "..cout<< Welcome..." and second BP on "s1="TEST"; it stops on first BP then F7 will jump to next BP (skipping the lines with int i= 5; and int j=10;:-( ) and then F7 works fine again. I saw the same problem when I Googled it even in earlier versions but it seems like nobody took care of reporting/treating the bug. FYI Best Regards, respectfully, Miki *** This bug has been marked as a duplicate of bug 197493 *** Should be added to test scenarios. Now I think it is duplicate of 200196 (not a iz197493) Added testStepOverAfterBreakpoint to StepOver gdb test suite. please clarify what OS and gdb version you were using? looks like the same problem from the bug 200196 *** This bug has been marked as a duplicate of bug 200196 ***
https://netbeans.org/bugzilla/show_bug.cgi?format=multiple&id=198612
CC-MAIN-2016-40
en
refinedweb
Windows. There are two types of .NET add-ins for Media Center: One of the first things you need to know about creating add-ins for Windows XP Media Center Addition 2005 is that they must be written and compiled in .NET 1.0. This is due to the fact that the add-in is hosted by the Media Center process, and that's the only version that Microsoft supports at the present time. This causes many developers a little discomfort, because they are accustomed to working with Visual Studio 2003, which supports only .NET 1.1. However, you can continue to use your existing editor of choice to write the code and then use the .NET compiler on the Media Center computer to compile it. That is the approach that I'll take in this article so that you can get a quick feel for developing an add-in for Media Center without having to install an older version of Visual Studio. Before we get started, make sure you've downloaded and installed the Media Center Software Developer Kit (SDK). Related Reading Home Hacking Projects for Geeks By Eric Faulkner, Tony Northrup In order to allow your class to interact with Media Center, you need to be familiar with two interfaces: Let's create a class that will implement the required interfaces and show users a dialog when they run it. It's important that you not pop your own dialogs or create your own windows from your add-in. All interaction with the user should be done through the provided interface. Media Center provides us with a dialog that we can use. Create a directory on the Media Center machine to contain your source code. Then, save the code below to a file named HelloWorld.cs: using System; using System.Collections; using Microsoft.MediaCenter.AddIn; namespace Sample { public class HelloWorld : MarshalByRefObject, IAddInModule, IAddInEntryPoint { private AddInHost mcHost; void IAddInModule.Initialize( IDictionary dictAppInfo, IDictionary dictEntryPoint) { } void IAddInModule.Uninitialize() { } void IAddInEntryPoint.Launch(AddInHost host) { mcHost = host; mcHost.HostControl.Dialog("Hello, World", "Greetings", 1,10,false); } } } Notice that the Launch method of the IAddInEntryPoint handed us a reference to an instance of the AddInHost. This is our doorway to the functionality provided by Media Center. Everything you'll ever need to do with Media Center is in that AddInHost instance. We simply asked the HostControl instance to pop a dialog for us. Launch IAddInEntryPoint AddInHost HostControl We need to tell the compiler a little bit about our class, so create a text file named AssemblyInfo.cs, paste the following code into it, and save it in the same folder with the HelloWorld.cs file: using System.Reflection; using System.Runtime.CompilerServices; [assembly: AssemblyTitle("Media Center Hello World")] [assembly: AssemblyDescription("Hello, World")] [assembly: AssemblyConfiguration("")] [assembly: AssemblyCompany("")] [assembly: AssemblyProduct("")] [assembly: AssemblyCopyright("")] [assembly: AssemblyTrademark("")] [assembly: AssemblyCulture("")] [assembly: AssemblyVersion("1.
http://archive.oreilly.com/pub/a/dotnet/2005/04/05/mce_part2.html
CC-MAIN-2016-40
en
refinedweb
Data Hiding in Java Data Hiding in Java Data Hiding is an aspect of Object Oriented Programming (OOP) that allows developers to protect private data and hide implementation details. In this tutorial we examine basic data hiding techniques in Java. We also provide illustrations. Object Oriented Programming provides techniques for modeling entities in the 'real world'. It has been observed that humans relate to real world things as objects. Objects have two basic characteristics, which we will briefly discuss here. For better or worse, the OOP industry has come up with various descriptors for these characteristics: - Properties, data, information, variables and constants, etc. - Code, methods, messages, actions, logic, etc. This list could be extended indefinitely and is not intended to be complete by any means. The concept of Data Hiding is primarily concerned with the first category of characteristics, which will be referred to here simply as as data. The Difference between a Class User and a Class Developer In Java, a class is a programmer-defined data type that contains data and code (refer to the two categories of characteristics listed above). The term user refers to software programmers and system designers who apply a class in a larger program. Note that this definition of the word user might be contrary to common parlance. We are not referring to the end-user of an application program. That definition is certainly valid, but not in this context. We differentiate between the author of the Java class and the user of the Java class. Both are humans, both have extensive software development knowledge, and both understand OOP concepts. Conceptually, the primary difference between the two humans is that the class author has domain knowledge and the user may not. Domain knowledge is the understanding of a real-world entity in order to model it with a software object. All programming obligates some domain knowledge. For example, a human who knows how to build a telephone might have the domain knowledge necessary to devise a software model of a telephone. Domain knowledge is 'built-in' to a software class. A user, on the other hand, can gather some domain knowledge from an existing software class but may not possess sufficient knowledge to actually program that model. Data Hiding Prevents Class Users from Seeing Implementation Details Class users typically do not need to see the implementation details of classes they employ in their programs. Hiding the details actually increases productivity because class users are not required to read reams of source code that has already been debugged by programmers with domain knowledge. In a perfect world, class users refer to class documentation (JavaDocs for example) instead of browsing class source code to discern functionality. Data Hiding shields implementation details from class users and provides a convenient platform for enforcing data integrity rules on the class developer side. We illustrate both concepts in the following source code. package DataHiding; /** * This class illustrates basic data hiding concepts * @author nicomp */ public class Person { // public - a violation of data hiding 'rules' public int height; // private - not visible outside the class private float weight; // Here are 'get' and 'set' methods for the // private variable declared above. public float GetWeight() { return weight;} public float SetWeight(float weight){ if (weight >= 0) this.weight = weight; return this.weight; } } General Rules for Data Hiding In general, all variables in a class should be private. Since these variables, by definition of being private, are not visible outside the class, the class developer is obligated to provide 'gatekeeper' methods to class users. Certainty some private variables will be intended for use inside the class, but those that are intended to be visible to class users must be supplemented with "get" and "set" methods that are themselves public. Examples of 'get' and 'set' Methods Data Hiding is realized by 'get' and 'set' methods that provide access to private variables in a class. Lines 12, 17, and 19 (all above) illustrate this technique for the variable called weight. The class developer declared weight as private and coded methods to allow class users to read and write the variable. Data Hiding rules are violated in line 9 (above). The variable height has been declared to be public. Syntactically the code is proper, but data integrity may suffer in the long run. Class users may unknowingly (or knowingly) corrupt the data in application programs. One job of the class developer is to provide mechanisms to protect data stored in the class; that responsibility was not met. Code Written by the Class User The following code illustrates how the class user might make use of the class we introduced above. Line 16 violates data hiding rules by accessing the class data directly. Note that we are not blaming the class user in this case; the class developer did not provide 'get' and 'set' methods nor did he/she make any attempt to protect the variable by declaring it private. The class user has no option other than to directly reference the variable. Lines 20 and 21 illustrate proper techniques for making use of 'get' and 'set' methods by the class user. The class developer has declared the weight variable as private (see code above) and also provided methods for indirectly reading and writing the variable. The method GetWeight() (see code above) allows the class user to obtain a copy of the current value of the private variable. The method SetWeight(float weight) allows the class user to store a new value in the private variable. Note that SetWeight also includes a tiny bit of data integrity logic. The class user is prevented from storing a negative value in the weight. Data integrity logic is a direct function of domain knowledge; the class developer must have it. package DataHiding; /** * * @author nicomp */ public class Main { /** * @param args the command line arguments */ public static void main(String[] args) { Person dude = new Person(); // This is correct syntax, but bad programming dude.height = 60; // Use the 'get' and 'set methods to // access the weight property. dude.SetWeight((float)100.99); System.out.println("Weight = " + dude.GetWeight()); } } Conclusion Data Hiding in Java provides techniques for implementing data integrity logic and for enforcing domain knowledge rules in Java classes. More by this Author - 19 We could nag, but this is better. It is less work for us to use the words of others to emphasize the importance of doing laundry. Besides, if you saw the way we dressed, you'd laugh your mouse off. -... I know this is a serious hub, nicomp, with tons of serious information but I'm still laughing at the acronym, OOP. Sorry. Wanted to give you a heads-up that I just wrote a new hub about choosing avatars and linked to you using your avatar as an example. 'Twas my pleasure and you can feel free to reciprocate any time. hat the hell are you sayingman the ibfo given by the author is almost perfect,the details given about name hiding and classes is the way it should have been ,thanks a lot respected author One should not be allowed posting tutorials on Java for method names like these. In Java method and variable names start with a lowercase letter. Brilliant.. Thanks for the share its very gud explnation .very happy what is difference between data hiding and encapsulation ? 12
http://hubpages.com/technology/Data-Hiding-in-Java
CC-MAIN-2016-40
en
refinedweb
Installing themes with Buildout For a lot of website projects, a theme downloaded from plone.org () or the Python Package Index () is enough to launch a professional-looking site. If your project falls into this category, or if you just want to experiment, follow the steps in this chapter. Searching for themes on plone.org We will need to find a theme we like. We can do that by browsing to. Next, click on Downloads Add-on Product Releases | Themes|. You should see (result similar to): Click on a theme to view a screenshot and select one you like, for example beyondskins.ploneday.site2010, and add the package to your buildout.cfg file. Adding themes with Buildout In 03-appearance-wpd2010.cfg, we extend the last known working configuration file from Chapter 2, that is 02-site-basics-blog.cfg. It looks like this: [buildout] extends = 02-site-basics-blog.cfg [instance] eggs += beyondskins.ploneday.site2010 zcml += beyondskins.ploneday.site2010 In addition to adding the package name to the eggs parameter, we must add it to the zcml parameter as well. Now stop Plone (with Ctrl + C or Ctrl +Z/Enter) and run: $ bin/buildout -c 03-appearance.cfg Updating zope2. Updating fake eggs Updating instance. Getting distribution for 'beyondskins.ploneday.site2010'. Got beyondskins.ploneday.site2010 1.0.3. Now start Plone: $ bin/instance fg Installing themes in Plone Browse to. Now, click on Site Setup Add/Remove Products| and you should see: Check the box next to WorldPloneDay: Theme for 2010 edition 1.0.3 and click on Install. Now browse to and you should see: This theme is the courtesy of Simples Consultoria (). Thank you! You can examine the anonymous view (what everyone else sees) by loading in your browser (that is. by using the IP address instead of the hostname). You can also load either of these URLs ( or) from another web browser (besides the one you are currently using) to see the anonymous view (for example, Safari or Internet Explorer, instead of Firefox). To display the blog entry to the public, we have transitioned the other objects in the site root to the private state. Examining themes with Omelette and Python Simply put, a theme is a collection of templates, images, CSS, JavaScript, and other files (such as Python scripts) that control the appearance of your site. Typically these files are packaged into a Python package, installed in your Plone site with the help of Buildout, and installed in Plone via the Add/Remove Products configlet in Site Setup. Once installed, certain elements of the theme can be edited through the Web using the ZMI. However, these changes only exist in the site's database. Currently there is no easy way to transfer changes made through the Web from the database to the filesystem; so there is a trade-off for performing such customizations through the Web. If you lose your database, you lose your customizations. Depending on your goals, it may not be entirely undesirable to store customizations in your database. But nowadays, most folks choose to separate their site's logical elements (for example themes, add-on functionality, and so on) from their site's content (that is data). Creating a filesystem theme and resisting the urge to customize it through the Web accomplishes this goal. Otherwise, if you are going to customize your theme through the Web, consider these changes volatile, and subject to loss. Installing and using Omelette A great way to examine files on the filesystem is to add Omelette () to your buildout.cfg and examine the files in the parts/omelette directory. Omelette is a Buildout recipe that creates (UNIX filesystem) symbolic links from all the Zope 2, Plone, and add-on installation files to one convenient location. This makes the job of examining files on the filesystem much easier. To install Omelette, in 03-appearance-omelette.cfg, we have this: In this configuration file, we extend the sections and parameters in 03-appearancewpd2010.cfg, and add a new section called omelette to the parts parameter(in the buildout section). Remember that using the += syntax adds the new value to the current value, so the parts list becomes: [buildout] extends = 03-appearance-wpd2010.cfg parts += omelette [omelette] recipe = collective.recipe.omelette eggs = ${instance:eggs} packages = ${zope2:location}/lib/python (We can examine the current state of the buildout by looking in the .installed.cfg file in the root directory of the buildout.) We also tell Omelette what files to link in the eggs and packages parameters: - ${instance:eggs}: Refers to the packages in the eggs directory. - ${zope2:location}/lib/python: Refers to the modules in the parts/zope2/lib/python/ directory. You can read more about how to configure Omelette here:. Now stop Plone (with Ctrl + C or Ctrl +Z/Enter) and run: parts = zope2 instance plonesite omelette You should see: $ bin/buildout -c 03-appearance-omelette.cfg Omelette in Windows As of version 0.5, Windows supports Omelette if you have the Junction program installed (and configured in your path). Junction is easy to install, and available here:. Now that Omelette has been installed, take a look in the parts/omelette directory.You should see: $ bin/buildout -c 03-appearance-omelette.cfg Getting distribution for 'collective.recipe.omelette'. Got collective.recipe.omelette 0.9. Installing omelette. Exploring modules with zopepy These files correspond directly to the modules available in Python when Plone is running. To demonstrate this, let us configure a Python interpreter with the modules added to the sys.path. In 03-appearance-zopepy.cfg, we have this: $ ls -1 parts/omelette Products/ archetypes/ beyondskins/ borg/ collective/ easy_install.py@ elementtree@ feedparser.py@ five/ kss/ markdown.py@ mdx_footnotes.py@ mdx_rss.py@ openid@ pkg_resources.py@ plone/ setuptools@ simplejson@ site.py@ webcouturier/ wicked/ z3c/ zc/ We extend the last working configuration in 03-appearance-omelette.cfg and add a new section called zopepy (short for "Zope Python"). Now stop Plone (with Ctrl + C or Ctrl +Z/Enter) and run Buildout: [buildout] extends = 03-appearance-omelette.cfg parts += zopepy [zopepy] recipe = zc.recipe.egg eggs = ${instance:eggs} interpreter = zopepy extra-paths = ${zope2:location}/lib/python scripts = zopepy You should see: $ bin/buildout -c 03-appearance-zopepy.cfg This creates a script that invokes the Python interpreter with the Plone modules added to the sys.path. Now if you run bin/zopepy, you can explore modules in Python. For example, you can import the beyondskins module (from the top-level namespace): $ bin/buildout -c 03-appearance-zopepy.cfg Updating omelette. The recipe for omelette doesn't define an update method. Using its install method. Installing zopepy. Generated interpreter '/Users/aclark/Developer/plone-site-admin/ buildout/bin/zopepy'. Python will try to evaluate any statement you input. For example, type beyondskins and hit Enter: $ bin/zopepy >>> import beyondskins >>> Python will tell you that beyondskins is a module. You can also try to call beyondskins (as if it were a function or class method): >>> beyondskins <module 'beyondskins' from '/Users/aclark/Developer/plone-siteadmin/ buildout/eggs/beyondskins.ploneday.site2010-1.0.3-py2.4.egg/ beyondskins/__init__.pyc'> Python will now tell you that module objects are not callable. You can use the builtin dir function to view the attributes of the beyondskins module: >>> beyondskins() Traceback (most recent call last): File "<console>", line 1, in ? TypeError: 'module' object is not callable Now, Python will tell you its attributes. You can inquire about a specific attribute such as __path__: >>> dir(beyondskins) ['__builtins__', '__doc__', '__file__', '__name__', '__path__','ploneday'] Python will return its value. Just for fun, you can inquire about any attribute of any module: >>> beyondskins.__path__ ['/Users/aclark/Developer/plone-site-admin/buildout/ eggs/beyondskins. ploneday.site2010-1.0.3-py2.4.egg/beyondskins'] >>> The value of Products.__path__ is too long to print here. It contains a list of all packages that contain a Products module. All of this code ends up in the Products namespace in Python, handy! We could spend the rest of the book learning Python, but let's return to theming instead. Overview of theme package files You will notice a beyondskins directory in parts/omelette that contains two files: - __init__.py - ploneday The __init__.py file tells Python to treat this directory as a module, and that ploneday is another directory. Plone packages typically do not make use of the top-level or mid-level namespaces, so let us look inside (the third-level module) beyondskins/ploneday/site2010 instead: >>> Products.__path__ ['/Users/aclark/Developer/plone-site-admin/buildout/ eggs/Plone-3.3.5- py2.4.egg/Products', … Here is an overview of (most of) these files and directories: - __init__.py: This file tells Python the directory is a module; see for more information. - __init.__pyc: This file contains compiled byte code; see for more information. - __init__.pyo: This file contains optimized and compiled byte code; see for more information. - browser/: This directory contains new-style customization code (that is, code that makes use of the Zope Component Architecture), such as browser views, portlets, viewlets, and resources like image, CSS, and JavaScript files. Visit: for more information. - configure.zcml: This file contains ZCML code used to load components defined in your package. It is also responsible for loading other ZCML files within a package: $ ls -H -1 parts/omelette/beyondskins/ploneday/site2010 README.txt __init__.py __init__.pyc __init__.pyo browser/ configure.zcml doc/ locales/ profiles/ profiles.zcml setuphandlers.py setuphandlers.pyc setuphandlers.pyo skins/ skins.zcml tests.py tests.pyc tests.pyo updateTranslations.sh version.txt - locales/: This directory contains translations for multilingual sites. Visit for more information. - profiles/: This directory contains GenericSetup () code typically in the default directory, to indicate the default profile. GenericSetup can be used to configure all manner of settings in Plone. If you are not familiar with it, try the following: - Navigate to Site Setup Zope Management Interface | portal_setup | Export| - Scroll to the bottom and click on Export all steps - You will get a compressed, archived file that contains many XML files containing various settings. You can add these files to the profiles/default/ directory of your add-on package, then edit them to customize the settings. - setuphandlers.py: While many settings can be configured using GenericSetup, some cannot. This file holds ad hoc Python code used to configure various settings that cannot be configured elsewhere (that is, using GenericSetup). - skins/: This directory contains Zope 2 File System Directory Views (FSDV) that contain Content Management Framework (CMF) skin layers that contain templates, images, CSS/JavaScript files, and Python scripts. - skins.zcml/: This file contains ZCML code that registers FSDVs used by the CMF to facilitate skin layers. <configure xmlns="" xmlns: <!-- File System Directory Views registration --> <cmf:registerDirectory <cmf:registerDirectory <cmf:registerDirectory <!-- Note: This could also be done for all folders at once by replacing the previous lines with this one: <cmf:registerDirectory --> </configure> - tests.py: This file (or directory) typically contains unit tests (and doctests) for the package. See for more information. Summary In this article we have learned: - Installing themes with Buildout - Examining themes with Omelette and Python
https://www.packtpub.com/books/content/examining-themes-omelette-and-python
CC-MAIN-2016-40
en
refinedweb
Tree views Use a tree view to display objects, such as folders, in a hierarchical manner. Objects in the tree view are nodes. The highest node is the root node. A node in the tree can have child nodes under it. A node that has a child is a parent node. Users can perform the following action in a tree view: Best practice: Implementing tree views - Use the TreeField class to create tree views. - Provide a pop-up menu if users can perform multiple actions when they click a parent node. - Include a root node only if users need the ability to perform actions on the entire tree. Otherwise, exclude the root node. Create a field to display a tree view Use a tree view to display objects, such as a folder structure, in a hierarchical manner. A TreeField contains nodes. The highest node is the root node. A node in the tree can have child nodes under it. A node that has a child is a parent node. - Import the required classes and interfaces. import net.rim.device.api.ui.component.TreeField; import net.rim.device.api.ui.component.TreeFieldCallback; import net.rim.device.api.ui.container.MainScreen; import java.lang.String; - Implement the TreeFieldCallback interface. - Invoke TreeField.setExpanded() on the TreeField object to specify whether a folder is collapsible. Create a TreeField object and multiple child nodes to the TreeField object. Invoke TreeField.setExpanded() using node4 as a parameter to collapse the folder. String fieldOne = new String("Main folder"); ... TreeCallback myCallback = new TreeCallback(); TreeField myTree = new TreeField(myCallback, Field.FOCUSABLE); int node1 = myTree.addChildNode(0, fieldOne); int node2 = myTree.addChildNode(0, fieldTwo); int node3 = myTree.addChildNode(node2, fieldThree); int node4 = myTree.addChildNode(node3, fieldFour); ... int node10 = myTree.addChildNode(node1, fieldTen); myTree.setExpanded(node4, false); ... mainScreen.add(myTree); - To repaint a TreeField when a node changes, create a class that implements the TreeFieldCallback interface and implement the TreeFieldCallback.drawTreeItem method. The TreeFieldCallback.drawTreeItem method uses the cookie for a tree node to draw a String in the location of a node. The TreeFieldCallback.drawTreeItem method invokes Graphics.drawText() to draw the String. private class TreeCallback implements TreeFieldCallback { public void drawTreeItem( TreeField _tree, Graphics g, int node, int y, int width, int indent) { String text = (String)_tree.getCookie(node); g.drawText(text, indent, y); } }
https://developer.blackberry.com/bbos/java/documentation/tree_views_1970223_11.html
CC-MAIN-2016-40
en
refinedweb
This is your resource to discuss support topics with your peers, and learn from each other. 01-15-2013 10:45 PM Since a lot of people are able to reproduce this behavior, I guess it is a bug. I think this bug is pretty serious because RIM is expecting BlackBerry 10 users to constantly go in an out of apps, if the user comes back to the app and find the animation broken, it will seriously take away the BlackBerry 10 flow experience. I have never logged a bug report, or even know how to. Can someone either do it or teach me? It would be fastest if someone from RIM see this thread. 01-15-2013 11:08 PM 01-15-2013 11:14 PM - edited 01-15-2013 11:15 PM Thanks. I'm going to try and build a test case to reproduce and escalate. 01-16-2013 12:04 AM Reproduced: on Dev Alpha A running 10.0.09.2320 import bb.cascades 1.0 Page { Container { layout: StackLayout { } Container { layout: AbsoluteLayout { } preferredHeight: 500 Label { id: fred text: "Fred" } } Button { text: "Dance Freddy DANCE!" onClicked: { fred.translationY = 400 - fred.translationY; } } } } 01-16-2013 09:25 AM We are also experiencing this bug in our app since the latest SDK update. The transition animations are broken after the app has been minimized once. Only a restart of the app fixes the problem. I discovered that the bug does NOT occur if "Application.cover" is set like e.g. in the QML Cookbook sample app. However we have not yet decided whether we prefer to display an app cover or just the default thumbnail view when our app is minimized, so it would be appreciated if this bug could be fixed before the final release of the OS. 01-16-2013 09:54 AM Good news, this appears fixed on newer (internal) builds. Tested on 2372. 01-16-2013 10:04 AM 01-16-2013 01:24 PM That is good news. For our current releases, should we wait for the upcoming SDK build or shall we do the workaround for now? 01-16-2013 01:45 PM Great!! now I can get back to coding in calm . Thanks for the info. 01-16-2013 06:16 PM
https://supportforums.blackberry.com/t5/Native-Development/Error-slogger2-buffer-handle-not-initialized/m-p/2100201
CC-MAIN-2016-40
en
refinedweb
Devel::Trace::Cwd - Print out each line before it is executed and track cwd changes version 0.02 perl -d:Trace::Cwd program If you run your program with perl -d:Trace::Cwd. If the current working directory changes during execution that will be printed to standard error with a CWD: prefix. Inside your program, you can enable and disable tracing by doing $Devel::Trace::Cwd::TRACE = 1; # Enable $Devel::Trace::Cwd::TRACE = 0; # Disable or Devel::Trace::Cwd::trace('on'); # Enable Devel::Trace::Cwd::trace('off'); # Disable trace Devel::Trace exports the trace function if you ask it to: import Devel::Trace::Cwd 'trace'; Then if you want you just say trace 'on'; # Enable trace 'off'; # Disable We'll see. Mark-Jason Dominus ( [email protected]), Plover Systems co. See the Devel::Trace.pm Page at for news and upgrades. Chris Williams <[email protected]> This software is copyright (c) 2011 by Chris Williams and Mark-Jason Dominus. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
http://search.cpan.org/dist/Devel-Trace-Cwd/lib/Devel/Trace/Cwd.pm
CC-MAIN-2016-40
en
refinedweb
Now for the fun. Now for Dart. I kid (a little). One of the joys of writing Patterns in Polymer is slipping back and forth between JavaScript and Dart. If I am being honest, I prefer Dart, but both languages have their joys. More than experiencing those joys, I appreciate trying solutions for Polymer in one language and then another. If a solution is pleasant in both languages, then I feel like I have found a nice solution that transcends language—making it a true Polymer pattern. In some ways, switch back and forth between languages feels a little like spiking solutions, then doing it for real. In real programming that is loads of fun (not to mention is leads to better solutions). Surprisingly, I have found this to be a bad idea writing the book. It is easy to mistake tinghe post-spike flow for a transcendent pattern. Of course it works when switching languages, I already pushed through the gaps in knowledge in the other language. Instead, I have been making a concerted effort to push the solution forward when switching between languages. When I got my initial spike of <a-form-input>(it allows Polymer elements to work in native HTML forms) working in Dart, I did not try to get it working in JavaScript. Instead I tried building a base class / element in JavaScript. Now that I have that working in JavaScript, I try something a little different in Dart. Instead of just making <a-form-input>a base class in Dart, I am going to see if I can make it work with the is="a-form-input"syntax. I start by reproducing my latest JavaScript work in Dart. That is, I create a Dart package for <a-form-input>. In a new git repository, I add a simple Dart Pub pubspec.yaml: name: a-form-input dependencies: polymer: anyThen, I copy in a_form_element.dartfrom my initial spike in some of the book's exploratory code into lib/a_form_input.dart. For the most part, the JavaScript version of this library looks identical to the Dart solution—aside from the code annotations: @CustomTag('a-form-input') class AFormInput extends PolymerElement { @PublishedProperty(reflect: true) String name; @PublishedProperty(reflect: true) String value; // ... }The same class in JavaScript is the more compact: Polymer('a-form-input', { publish: { name: {reflect: true}, value: {reflect: true} }, // ... });I am going to be honest here: I love the Polymer.dart annotations. If you would have told this old dynamic language aficionado a year ago that I'd love annotations, I'd have thought you were crazy. But now I really appreciate how they distinguish meta-information in the class from actual code. These published properties are core to the functionality of <a-form-input>, but in JavaScript that information gets lost in the rest of the code:; } });Still, the task at hand tonight is not language comparing. Instead, I want to change the test element in Dart from extending the base. So. in the smoke test page for the book example, I add the isattribute: <!DOCTYPE html> <html lang="en"> <head> <link rel="import" href="packages/form_example/elements/x-pizza.html"> <link rel="import" href="packages/a-form-input/a-form-input.html"> <script type="application/dart">export 'package:polymer/init.dart';</script> </head> <body> <form action="test" method="post"> <h1>Plain-old Form</h1> <input type=text <x-pizza</x-pizza> <button>Order my pizza!</button> </form> </body> </html>Which almost works… except not at all (some of the functionality works, but ultimately crashes ugly). I finally read the actual documentation to find that this is not how "is" works at all! It is meant to extends native DOM elements, not arbitrary elements like I am trying to do here. Sometimes there is nothing for it, but to admit that I am an idiot. OK, much times. But while working this, I ran into an interesting problem. My AFormInputclass does not seem to see attributes changing when moved into a package. That is, the same exact a_form_input.dartwhen used locally or used via a package… behaves differently. And it is an important difference. The attributeChanged()callback is never invoked in the super class via a package: library a_form_input; import 'package:polymer/polymer.dart'; import 'dart:html'; @CustomTag('a-form-input') class AFormInput extends PolymerElement { @PublishedProperty(reflect: true) String name; @PublishedProperty(reflect: true) String value; // ... AFormInput.created(): super.created(); void attached() { print('attached!'); super.attached(); } void attributeChanged(String name, String oldValue, String newValue) { print('$name: $oldValue → $newValue'); if (name == 'name') lightInput.name = newValue; if (name == 'value') lightInput.value = newValue; } }If my test element (the old <x-pizza>standby) uses a local copy of this same file, then it can update value: import 'package:polymer/polymer.dart'; import 'a_form_input.dart'; // import 'package:a-form-input/a_form_input.dart'; @CustomTag('x-pizza') class XPizza extends AFormInput { // ... _updateText() { value = 'First Half: ${model.firstHalfToppings}\n' 'Second Half: ${model.secondHalfToppings}\n' 'Whole: ${model.wholeToppings}'; } // ... }But if I swap the a_form_input.dartlines, then changes to the value attribute (or the name attribute) never trigger the attributeChanged()callback. I play with this for some time, but I cannot solve this. Everything else works—a hidden input is generated by <a-form-input>in either case. The attached()lifecycle method is called in either case. But I am clearly missing something in the packaged version. Something to solve tomorrow. Day #13
https://japhr.blogspot.com/2014/12/is-form-input-extending-polymerdart.html
CC-MAIN-2016-40
en
refinedweb
(For more resources on Search Engine, see here.) Client API implementations for Sphinx Sphinx comes with a number of native searchd client API implementations. Some third-party open source implementations for Perl, Ruby, and C++ are also available. All APIs provide the same set of methods and they implement the same network protocol. As a result, they more or less all work in a similar fashion, they all work in a similar fashion. All examples in this article are for PHP implementation of the Sphinx API. However, you can just as easily use other programming languages. Sphinx is used with PHP more widely than any other language. Search using client API Let's see how we can use native PHP implementation of Sphinx API to search. We will add a configuration related to searchd and then create a PHP file to search the index using the Sphinx client API implementation for PHP. Time for action – creating a basic search script - Add the searchd config } - Start the searchd daemon (as root user): $ sudo /usr/local/sphinx/bin/searchd -c /usr/local/sphinx/etc/ sphinx-blog.conf - Copy the sphinxapi.php file (the class with PHP implementation of Sphinx API) from the sphinx source directory to your working directory: $ mkdir /path/to/your/webroot/sphinx $ cd /path/to/your/webroot/sphinx $ cp /path/to/sphinx-0.9.9/api/sphinxapi.php ./ - Create a simple_search.php script that uses the PHP client API class to search the Sphinx-blog index, and execute it in the browser: <?php require_once('sphinxapi.php'); // Instantiate the sphinx client $client = new SphinxClient(); // Set search options $client->SetServer('localhost', 9312); $client->SetConnectTimeout(1); $client->SetArrayResult(true); // Query the index $results = $client->Query('php'); // Output the matched results in raw format print_r($results['matches']); - The output of the given code, as seen in a browser, will be similar to what's shown in the following screenshot: What just happened? Firstly, we added the searchd configuration section to our sphinx-blog.conf file. The following options were added to searchd section: -. Once we were done with adding searchd configuration options, we started the searchd daemon with root user. We passed the path of the configuration file as an argument to searchd. The default configuration file used is /usr/local/sphinx/etc/sphinx.conf. After a successful startup, searchd listens on all network interfaces, including all the configured network cards on the server, at port 9312. If we want searchd to listen on a specific interface then we can specify the hostname or IP address in the value of the listen option: listen = 192.168.1.25:9312 The listen setting defined in the configuration file can be overridden in the command line while starting searchd by using the -l command line argument. There are other (optional) arguments that can be passed to searchd as seen in the following screenshot: searchd needs to be running all the time when we are using the client API. The first thing you should always check is whether searchd is running or not, and start it if it is not running. We then created a PHP script to search the sphinx-blog index. To search the Sphinx index, we need to use the Sphinx client API. As we are working with a PHP script, we copied the PHP client implementation class, (sphinxapi.php) which comes along with Sphinx source, to our working directory so that we can include it in our script. However, you can keep this file anywhere on the file system as long as you can include it in your PHP script. Throughout this article we will be using /path/to/webroot/sphinx as the working directory and we will create all PHP scripts in that directory. We will refer to this directory simply as webroot. We initialized the SphinxClient class and then used the following class methods to set upthe Sphinx client API: - SphinxClient::SetServer($host, $port)—This method sets the searchd hostname and port. All subsequent requests use these settings unless this method is called again with some different parameters. The default host is localhost and port is 9312. - SphinxClient::SetConnectTimeout($timeout)—This is the maximum time allowed to spend trying to connect to the server before giving up. - SphinxClient::SetArrayResult($arrayresult)—This is a PHP client APIspecific method. It specifies whether the matches should be returned as an array or a hash. The Default value is false, which means that matches will be returned in a PHP hash format, where document IDs will be the keys, and other information (attributes, weight) will be the values. If $arrayresult is true, then the matches will be returned in plain arrays with complete per-match information. After that, the actual querying of index was pretty straightforward using the SphinxClient::Query($query) method. It returned an array with matched results, as well as other information such as error, fields in index, attributes in index, total records found, time taken for search, and so on. The actual results are in the $results['matches'] variable. We can run a loop on the results, and it is a straightforward job to get the actual document's content from the document ID and display it. Matching modes When a full-text search is performed on the Sphinx index, different matching modes can be used by Sphinx to find the results. The following matching modes are supported by Sphinx: - SPH_MATCH_ALL—This is the default mode and it matches all query words, that is, only records that match all of the queried words will be returned. - SPH_MATCH_ANY—This matches any of the query words. - SPH_MATCH_PHRASE—This matches query as a phrase and requires a perfect match. - SPH_MATCH_BOOLEAN—This matches query as a Boolean expression. - SPH_MATCH_EXTENDED—This matches query as an expression in Sphinx internal query language. - SPH_MATCH_EXTENDED2—This matches query using the second version of Extended matching mode. This supersedes SPH_MATCH_EXTENDED as of v0.9.9. - SPH_MATCH_FULLSCAN—In this mode the query terms are ignored and no text-matching is done, but filters and grouping are still applied. Time for action – searching with different matching modes - Create a PHP script display_results.php in your webroot with the following code: <?php // Database connection credentials $dsn ='mysql:dbname=myblog;host=localhost'; $user = 'root'; $pass = ''; // Instantiate the PDO (PHP 5 specific) class try { $dbh = new PDO($dsn, $user, $pass); } catch (PDOException $e){ echo'Connection failed: '.$e->getMessage(); } // PDO statement to fetch the post data $query = "SELECT p.*, a.name FROM posts AS p " . "LEFT JOIN authors AS a ON p.author_id = a.id " . "WHERE p.id = :post_id"; $post_stmt = $dbh->prepare($query); // PDO statement to fetch the post's categories $query = "SELECT c.name FROM posts_categories AS pc ". "LEFT JOIN categories AS c ON pc.category_id = c.id " . "WHERE pc.post_id = :post_id"; $cat_stmt = $dbh->prepare($query); // Function to display the results in a nice format function display_results($results, $message = null) { global $post_stmt, $cat_stmt; if ($message) { print "<h3>$message</h3>"; } if (!isset($results['matches'])) { print "No results found<hr />"; return; } foreach ($results['matches'] as $result) { // Get the data for this document (post) from db $post_stmt->bindParam(':post_id', $result['id'], PDO::PARAM_INT); $post_stmt->execute(); $post = $post_stmt->fetch(PDO::FETCH_ASSOC); // Get the categories of this post $cat_stmt->bindParam(':post_id', $result['id'], PDO::PARAM_INT); $cat_stmt->execute(); $categories = $cat_stmt->fetchAll(PDO::FETCH_ASSOC); // Output title, author and categories print "Id: {$posmt['id']}<br />" . "Title: {$post['title']}<br />" . "Author: {$post['name']}"; $cats = array(); foreach ($categories as $category) { $cats[] = $category['name']; } if (count($cats)) { print "<br />Categories: " . implode(', ', $cats); } print "<hr />"; } } - Create a PHP script search_matching_modes.php in your webroot with the following code: <?php // Include the api class Require('sphinxapi.php'); // Include the file which contains the function to display results require_once('display_results.php'); $client = new SphinxClient(); // Set search options $client->SetServer('localhost', 9312); $client->SetConnectTimeout(1); $client->SetArrayResult(true); // SPH_MATCH_ALL mode will be used by default // and we need not set it explicitly display_results( $client->Query('php'), '"php" with SPH_MATCH_ALL'); display_results( $client->Query('programming'), '"programming" with SPH_MATCH_ALL'); display_results( $client->Query('php programming'), '"php programming" with SPH_MATCH_ALL'); // Set the mode to SPH_MATCH_ANY $client->SetMatchMode(SPH_MATCH_ANY); display_results( $client->Query('php programming'), '"php programming" with SPH_MATCH_ANY'); // Set the mode to SPH_MATCH_PHRASE $client->SetMatchMode(SPH_MATCH_PHRASE); display_results( $client->Query('php programming'), '"php programming" with SPH_MATCH_PHRASE'); display_results( $client->Query('scripting language'), '"scripting language" with SPH_MATCH_PHRASE'); // Set the mode to SPH_MATCH_FULLSCAN $client->SetMatchMode(SPH_MATCH_FULLSCAN); display_results( $client->Query('php'), '"php programming" with SPH_MATCH_FULLSCAN'); - Execute search_matching_modes.php in a browser (). (For more resources on Search Engine, see here.) What just happened? The first thing we did was created a script, display_results.php, which connects to the database and gathers additional information on related posts. This script has a function, display_results() that outputs the Sphinx results returned in a nice format. The code is pretty much self explanatory. Next, we created the PHP script that actually performs the search. We used the following matching modes and queried using different search terms: - SPH_MATCH_ALL (Default mode which doesn't need to be explicitly set) - SPH_MATCH_ANY - SPH_MATCH_PHRASE - SPH_MATCH_FULLSCAN Let's see what the output of each query was and try to understand it: display_results( $client->Query('php'), '"php" with SPH_MATCH_ALL'); display_results( $client->Query('programming'), '"programming" with SPH_MATCH_ALL'); The output for these two queries can be seen in the following screenshot: The first two queries returned all posts containing the words "php" and "programming" respectively. We got posts with id 2 and 5 for "php", and 5 and 8 for "programming". The third query was for posts containing both words, that is "php programming", and it returned the following result: This time we only got posts with id 5, as this was the only post containing both the words of the phrase "php programming". We used SPH_MATCH_ANY to search for any words of the search phrase: // Set the mode to SPH_MATCH_ANY $client->SetMatchMode(SPH_MATCH_ANY); display_results( $client->Query('php programming'), '"php programming" with SPH_MATCH_ANY'); The function call returns the following output (results): As expected, we got posts with ids 5,2, and 8. All these posts contain either "php" or "programming" or both. Next, we tried our hand at SPH_MATCH_PHRASE, which returns only those records that match the search phrase exactly, that is, all words in the search phrase appear in the same order and consecutively in the index: // Set the mode to SPH_MATCH_PHRASE $client->SetMatchMode(SPH_MATCH_PHRASE); display_results( $client->Query('php programming'), '"php programming" with SPH_MATCH_PHRASE'); display_results( $client->Query('scripting language'), '"scripting language" with SPH_MATCH_PHRASE'); The previous two function calls return the following results: The query"php programming" didn't return any results because there were no posts that match that exact phrase. However, a post with id 2 matched the next query: "scripting language". The last matching mode we used was SPH_MATCH_FULLSCAN. When this mode is used the search phrase is completely ignored, (in our case "php" was ignored) and Sphinx returns all records from the index: // Set the mode to SPH_MATCH_FULLSCAN $client->SetMatchMode(SPH_MATCH_FULLSCAN); display_results( $client->Query('php'), '"php programming" with SPH_MATCH_FULLSCAN'); The function call returns the following result (for brevity only a part of the output is shown in the following image): SPH_MATCH_FULLSCAN mode is automatically used if empty string is passed to the SphinxClient::Query() method. SPH_MATCH_FULLSCAN matches all indexed documents, but the search query still applies all the filters when sorting and grouping. However, the search query will not perform any full-text searching. This is particularly useful in cases where we only want to apply filters and don't want to perform any full-text matching (For example, filtering all blog posts by categories). Boolean query syntax Boolean mode queries allow expressions to make use of a complex set of Boolean rules to refine their searches. These queries are very powerful when applied to full-text searching. When using Boolean query syntax, certain characters have special meaning, as given in the following list: - &: Explicit AND operator - |: OR operator - -: NOT operator - !: NOT operator (alternate) - (): Grouping Let's try to understand each of these operators using an example. Time for action – searching using Boolean query syntax - Create a PHP script search_boolean_mode.php in your webroot with); display_results( $client->Query('php programming'), '"php programming" (default mode)'); // Set the mode to SPH_MATCH_BOOLEAN $client->SetMatchMode(SPH_MATCH_BOOLEAN); // Search using AND operator display_results( $client->Query('php & programming'), '"php & programming"'); // Search using OR operator display_results( $client->Query('php | programming'), '"php | programming"'); // Search using NOT operator display_results( $client->Query('php -programming'), '"php -programming"'); // Search by grouping terms display_results( $client->Query('(php & programming) | (leadership & success)'), '"(php & programming) | (leadership & success)"'); // Demonstrate how OR precedence is higher than AND display_results( $client->Query('development framework | language'), '"development framework | language"'); // This won't work display_results($client->Query('-php'), '"-php"'); Execute the script in a browser (the output shown in next section). What just happened? We created a PHP script to see how different Boolean operators work. Let's understand the working of each of them. The first search query, "php programming", did not use any operator. There is always an implicit AND operator, so "php programming" query actually means: "php & programming". In second search query we explicitly used the & (AND) operator. Thus the output of both the queries were exactly same, as shown in the following screenshot: Our third search query used the OR operator. If either of the terms get matched whilst using OR, the document is returned. Thus "php | programming" will return all documents that match either "php" or "programming", as seen in the following screenshot: The fourth search query used the NOT operator. In this case, the word that comes just after the NOT operator should not be present in the matched results. So "php –programming" will return all documents that match "php" but do not match "programming" We get results as seen in the following screenshot: Next, we used the grouping operator. This operator is used to group other operators. We searched for "(php & programming) | (leadership & success)", and this returned all documents which matched either; "php" and "programming" or "leadership" and "success", as seen in the next screenshot: After that, we fired a query to see how OR has a precedence higher than AND. The query "development framework | language" is treated by Sphinx as "(development) & (framework | language)". Hence we got documents matching "development & framework" and "development & language", as shown here: Lastly, we saw how a query like "-php" does not return anything. Ideally it should have returned all documents which do not match "php", but for technical and performance reasons such a query is not evaluated. When this happens we get the following output: Extended query syntax Apart from the Boolean operators, there are some more specialized operators and modifiers that can be used when using the extended matching mode. Let's understand this with an example. Time for action – searching with extended query syntax - Create a PHP script search_extended_mode.php in your webroot); // Set the mode to SPH_MATCH_EXTENDED2 $client->SetMatchMode(SPH_MATCH_EXTENDED2); // Returns documents whose title matches "php" and // content matches "significant" display_results( $client->Query('@title php @content significant'), 'field search operator'); // Returns documents where "development" comes // before 8th position in content field display_results( $client->Query('@content[8] development'), 'field position limit modifier'); // Returns only those documents where both title and content // matches "php" and "namespaces" display_results( $client->Query('@(title,content) php namespaces'), 'multi-field search operator'); // Returns documents where any of the field // matches "games" display_results( $client->Query('@* games'), 'all-field search operator'); // Returns documents where "development framework" // phrase matches exactly display_results( $client->Query('"development framework"'), 'phrase search operator'); // Returns documents where there are three words // between "people" and "passion" display_results( $client->Query('"people passion"~3'), 'proximity search operator'); // Returns documents where any of the // two words from the phrase matches display_results( $client->Query('"people development passion framework"/2'), 'quorum search operator'); - Execute the script in a browser (the output is explained in the next section). What just happened? For using extended query syntax, we set the match mode to SPH_MATCH_EXTENDED2: $client->SetMatchMode(SPH_MATCH_EXTENDED2); The first operator we used was field search operator. Using this operator we can tell Sphinx which fields to search against (instead of searching against all fields). In our example we searched for all documents whose title matches "php" and whose content matches "significant". As an output, we got posts (documents) with the id 5, which was the only document that satisfied this matching condition as shown below: @title php @content significant The search for that term returns the following result: Following this we used field position limit modifier. The modifier instructs Sphinx to select only those documents where "development" comes before the 8th position in the content field, that is, it limits the search to the first eight positions within given field. @content[8] development And we get the following result: Next, we used the multiple field search operator. With this operator you can specify which fields (combined) should match the queried terms. In our example, documents are only matched when both title and content matches "php" and "namespaces". @(title,content) php namespaces This gives the following result: The all-field search operator was used next. In this case the query is matched against all fields. @* games This search term gives the following result: The phrase search operator works exactly same as when we set the matching mode to SPH_MATCH_PHRASE. This operator implicitly does the same. So, a search for the phrase "development framework" returns posts with id 7, since the exact phrase appears in its content. "development framework" The search term returns the following result: Next we used the proximity search operator. The proximity distance is specified in words, adjusted for word count, and applies to all words within quotes. So, "people passion"~3 means there must be a span of less than five words that contain both the words "people" and "passion". We get the following result: The last operator we used is called as a quorum operator. In this, Sphinx returns only those documents that match the given threshold of words. "people development passion framework"/2 matches those documents where at least two words match out of the four words in the query. Our query returns the following result: Using what we have learnt above, you can create complex search queries by combining any of the previously listed search operators. For example: @title programming "photo gallery" -(asp|jsp) @* opensource The query means that: - The document's title field should match 'programming' - The same document must also contain the words 'photo' and 'gallery' adjacently in any of the fields - The same document must not contain the words 'asp' or 'jsp' - The same document must contain the word 'opensource' in any of its fields There are few more operators in extended query syntax and you can see their examples at. Summary In this article, we saw how to use the Sphinx API to search from within your application. With the index, in this article: - We wrote different search queries - We saw how PHP's implementation of the Sphinx client API can be used in PHP applications to issue some powerful search queries Further resources on this subject: - Drupal 6 Search Engine Optimization [Book] - Search Engine Optimization in Joomla! [Article] - Blogger: Improving Your Blog with Google Analytics and Search Engine Optimization [Article]
https://www.packtpub.com/books/content/sphinx-index-searching
CC-MAIN-2016-40
en
refinedweb
sigsetjmp() Save the environment, including the signal mask Synopsis: #include <setjmp.h> int sigsetjmp( sigjmp_buf env, int savemask ); Arguments: - env - A buffer where the function can save the calling environment. - savemask - Nonzero if you want to save the process's current signal mask, otherwise 0. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The sigsetjmp() function behaves in the same way as the setjmp() function when savemask is zero. If savemask is nonzero, then sigsetjmp() also saves the thread's current signal mask as part of the calling environment.: Zero on the first call, or nonzero if the return is the result of a call to siglongjmp().
http://developer.blackberry.com/native/reference/bb10/com.qnx.doc.neutrino.lib_ref/topic/s/sigsetjmp.html
CC-MAIN-2013-20
en
refinedweb
pthread_cond_wait - Causes a thread to wait for the specified condition variable to be signaled or broadcasted #include <pthread.h> int pthread_cond_wait( pthread_cond_t *cond, pthread_mutex_t *mutex ); DECthreads POSIX 1003.1c Library (libpthread.so) Interfaces documented on this reference page conform to industry standards as follows: IEEE Std 1003.1c-1995, POSIX System Application Program Interface Condition variable that the calling thread waits on. Mutex associated with the condition variable specified in cond. This routine causes a thread to wait for the specified condition variable to be signaled or broadcasted. Each condition corresponds to one or more Boolean relations, called a predicate, based on shared data. The calling thread waits for the data to reach a particular state for the predicate to become true. However, the return from this routine does not imply anything about the value of the predicate, and it should be reevaluated upon return. Call this routine after you have locked the mutex specified in mutex. The results of this routine are unpredictable if this routine is called without first locking the mutex. This routine atomically releases the mutex and causes the calling thread to wait on the condition. When the thread regains control after calling pthread_cond_wait(3), the mutex is locked and the thread is the owner. This is true regardless of why the wait ended. If general cancelability is enabled, the thread reacquires the mutex (blocking for it if necessary) before the cleanup handlers are run (or before the exception is raised). A thread that changes the state of storage protected by the mutex in such a way that a predicate associated with a condition variable might now be true, must call either pthread_cond_signal(3) or pthread_cond_broadcast(3) for that condition variable. If neither call is made, any thread waiting on the condition variable continues to wait. This routine might (with low probability) return when the condition variable has not been signaled or broadcasted. When this occurs, the mutex is reacquired before the routine returns. To handle this type of situation, enclose each call to this routine in a loop that checks the predicate. The loop provides documentation of your intent and protects against these spurious wakeups, while also allowing correct behavior even if another thread consumes the desired state before the awakened thread runs. It is illegal for threads to wait on the same condition variable by specifying different mutexes. The only routines which are supported for use with asynchronous cancelability enabled are those which disable asynchronous cancelability. If an error condition occurs, this routine returns an integer value indicating the type of error. Possible return values are as follows: Successful completion. The value specified by cond, or mutex is invalid, or: Different mutexes are supplied for concurrent pthread_cond_wait(3) operations or pthread_cond_timedwait operations on the same condition variable, or: The mutex was not owned by the calling thread at the time of the call. DECthreads cannot acquire memory needed to block using a statically initialized condition variable. None Functions: pthread_cond_broadcast(3), pthread_cond_destroy(3), pthread_cond_init(3), pthread_cond_signal(3), pthread_cond_timedwait(3), Manuals: Guide to DECthreads and Programmer's Guide pthread_cond_wait(3)
http://nixdoc.net/man-pages/Tru64/man3/pthread_cond_wait.3.html
CC-MAIN-2013-20
en
refinedweb
01-11-2013 02:49 PM I'm loading XML into a ListView and no matter what, it's coming in in the reverse order of which it is listed. Here's my code (with personal website removed)... anyone have any ideas how to change the order back to the way it's originally listed? import bb.cascades 1.0 import bb.data 1.0 Page { content: ListView { id: myListView dataModel: dataModel listItemComponents: [ ListItemComponent { type: "item" StandardListItem { title: ListItemData.test } } ] } attachedObjects: [ GroupDataModel { id: dataModel }, DataSource { id: dataSource source: "" query: "root/test" onDataLoaded: { dataModel.insertList(data); } } ] onCreationCompleted: { dataSource.load(); } } I can't use sortingKeys because I need the XML to come in the way it's listed, which isn't sortable. I tried using sortedAscending: false/true and it didn't change the order. Thank you! 01-25-2013 02:45 AM Hi Loomist, I am also facing the same issue right now. Did u found its solution?? Can you please share it. Thanks in advance. 01-29-2013 09:53 PM Likewise I have the exact same problem described by Loomist. 01-29-2013 10:06 PM - edited 01-29-2013 10:06 PM I still haven't found a solution to this problem. I have actually set aside the app that is requiring this, because it's just too big of an issue and I won't submit an app with such a glaring problem. I'm guessing it might be a bug in the XML handling if others are having this issue. 01-29-2013 10:08 PM - edited 01-29-2013 10:09 PM I'm getting the issue using JSON data. The problem is probably internal to GroupDataModel (unless we're using it incorrectly). 02-11-2013 10:08 PM Hi all, Try using an ArrayDataModel instead of a GroupDataModel. A GroupDataModel is designed to sort your data automatically, while an ArrayDataModel allows you to arrange the data in the order you want (including to leave it as is). Also, use the append() function instead of the insert(). For more information on the ArrayDataModel class, check this link: Samar Abdelsayed - Application Development Consultant - BlackBerry Did this answer your quetion? Please accept post as solution. Please refrain from posting new questions in solved threads. Found a bug? Report it using the Issue Tracker 02-12-2013 06:02 AM Hi Samar, I tried your solution and it works fine.When trying to delete items it do so from the listview and xml when there is more than one item in the list. But when there exist only a single item in the list, deleting it removes data from xml but doesnt reflect the change in the listview even if I have given datasource.load(); to make the changes effective. Please, Help me out ot find a solution
http://supportforums.blackberry.com/t5/Cascades-Development/XML-DataSource-is-sorting-in-reverse-unable-to-change-sort-order/m-p/2117883
CC-MAIN-2013-20
en
refinedweb
Search MediaWiki's full-text search is split between back-end and front-end classes, allowing for the backend to be extended with plugins. Front end Work for MediaWiki 1.13 Some work is going on now (March 2008) to improve the front-end for MediaWiki 1.13. - Thumbnails - done Hits that return 'Image:' pages now include a thumbnail and basic image metadata instead of raw text extracts. - Thumbnails for media gallery pages and categories might be great! - Text extracts - These need to be prettified... a lot :) - Categories and other metadata? - Be nice to add some stuff while keeping it clean - Search refinement field - Recommend moving this up to the top, like the limited one LuceneSearch extension had. - The namespace checklist is currently awful, should be cleaned up in some nice way. - ?? Back end MediaWiki currently ships with two functional search backends; one for MySQL 4.x and later, and the other for PostgreSQL. Custom classes can also be loaded, and $wgSearchType set to the desired class name. SearchEngine This is an abstract base class, with stubs for full-text searches in the title and text fields, and a standard implementation for doing exact-title matches. SearchMySQL4 The SearchMySQL and SearchMySQL4 classes implement boolean full-text searches using MySQL's fulltext indexing for MyISAM tables. (In old versions of MediaWiki there was a SearchMySQL3 class as well, which did some extra work to do boolean queries.) Work for MediaWiki 1.13 Put some notes on my blog... SearchPostgres I'm not actually sure how SearchPostgres and SearchTsearch2 classes relate. Is one of them obsolete, or... ? Extensions MWSearch The MWSearch extension provides a SearchEngine subclass which contacts Wikimedia's Lucene-based search server. This replaces the older LuceneSearch extension which reimplemented the entire Special:Search page.
http://www.elinux.org/Search
CC-MAIN-2013-20
en
refinedweb
Java Calculator - Java Beginners Simple Java Calculator Write a Java program to create simple Calculator for 4 basic Math operations, Addition, Subtraction, Multiplication..., Please visit the following link: simple Projects Simple Java Projects Hi, I am beginner in Java and trying to find Simple Java Projects to learn the different concepts on Java. Can anyone tell me where to get it? Thanks Java: Example - Simple Calculator Java: Example - Simple Calculator Here is the source for the simple... (Calc.java) - A simple main program. User interface (CalcGUI.java... place. Altho this simple example doesn't show the full power of separating Java Program Java Program A Simple program of JSP to Display message, with steps to execute that program Hi Friend, Follow these steps: 1)Go...:'hello.jsp' <%@page language="java"%> <%String st="Hello World"; %> java program - Java Beginners java program hi sir, i want a simple java program to pick out the biggest in an array of integers Hi Friend, Try the following code: import java.util.*; public class LargestNo{ public static void Write a Java program that calculates and prints the simple interest using the formula : Simple Interest = PNR / 100 Input values P,N,R should be accepted as command line input as below. e.g. java SimpleInterest calculator - Java Beginners simple calculator how can i create a simple calculator using java codes? Hi Friend, Please visit the following link: Thanks Writing Simple Java Script Program Writing Simple Java Script Program  ... of JavaScript and create your first JavaScript program. Creating your first JavaScript Program In the last lesson you learned how to create simple java script Simple java applet Simple java applet Create a Java applet to display your address on the screen. Use different colors for background and text Please visit the following link: Applet Examples Class Average Program Class Average Program This is a simple program of Java class. In this tutorial we will learn how to use java program for displaying average value. The java instances spring simple application java spring simple application hai I have design a simple application in this I always found class not found exception. I am sendig code as follows please resolve this and send me.my directory structure is as follows Project Java Program - Swing AWT Java Program A program to create a simple game using swings. Hi Friend, Please visit the following link: Thanks java program for java program for java program for printing documents,images and cards Simple Java Question - Java Beginners Simple Java Question [color=#0040BF] Dear All, I have a huge text file with name animal.txt, I have the following sample data: >id1 lion >id2 horse cat >id3 mouse tiger I need to save the contents simple team leader simple team leader Could you please help me to write a program for a simple team leader? Please clarify your problem. Which type of program you want to create in java? Please specify simple code - Java Beginners simple code to input a number and check wether it is prime or not and print its position in prime nuber series. Hi friend, Code to help in solving the problem : import java.io.*; class PrimeNumber { public Java Program Java Program A Java Program that print the data on the printer but buttons not to be printed Simple Java Desktop Upload application Simple Java Desktop Upload application I try do simple example for upload applicationtake file from c:\ put to d:\ :) PLEASE HELP java java Im asking for simple java program with encapsulation pls java java Im asking for simple java program with polymophysm pls java java Im asking for simple java program of format I/O java program how to write an addition program in java without using arithematic operator java program java program write java program for constructor,overriding,overriding,exception handling java program java program write a java program to display array list and calculate the average of given array Java Program Java Program java program to insert row in excel sheet after identifying an object java program java program java program to implement the reflection of a particular class details like constructor,methods and fields with its modifiers java program java program write a program to print 1234 567 89 10 java program java program Write a program to demonstrate the concept of various possible exceptions arising in a Java Program and the ways to handle them.  ... in Java Java program Java program How to write code in order to draw a face in java java program java program wap to show concept of class in java Java program Java program How to write for drawing a face in java java program java program write a java program to compute area of a circle.square,rectangle.triangle,volume of a sphere ,cylinder and perimeter of cube using method over riding Simple Java Programs Simple Java Programs In this section we will discuss about the Java programs... problem while compiling your Java program. Java class, object and methods... learner can learn how to write first program in Java. Java Constructor java program java program Write a program to find the difference between sum of the squares and the square of the sums of n numbers java program java program write a program to create text area and display the various mouse handling events java java Im asking for simple java program which converts rands to dollars/pound/euro/yuan/yen pls java java Im asking for simple java program of threads and they work pls.thank you java java sir i need a java simple program to reverse a string using array java program java program voter or non voter display I want to Write a program in JAVA to display to create a class called MATRIX using a two-dimensional array of integers. Perform the addition and subtraction of two matrices. Help me java program java program i want a applet program that accepts two input strings using tag and concatenate the strings and display it in status window. please give mi he code for this in core java Simple IO Application - Java Beginners Simple IO Application Hi, please help me Write a simple Java application that prompts the user for their first name and then their last name. The application should then respond with 'Hello first & last name, what Simple banking system using Java Simple banking system using Java I am trying to make a simple banking system that has only 3 interfaces which does not connect to the database... inheritance in java please help.Thank you create server and client such that server receives data from client using BuuferedReader and sends reply to client using PrintStream program for HashSet - Java Beginners program for HashSet I need a program that illustratest the concept..., LinkedHashSet, and TreeSet classes implement these interfaces. Here is the simple...:// Thanks. Amardeep java program java program write a program showing two threads working simultanously upon two objects(in theater one object "cut the ticket" andsecond object "show is the seat java program java program write a java program which shows how to declare and use STATIC member variable inside a java class. The static variable are the class variables that can be used without any reference of class. class java program java program Write a program that computes loan payments. The loan can be a car loan, a student loan, or a home mortgage loan. The program lets the user enter the interest rate, number of years, loan amount, and displays Java program Java program Write a Java program that demonstaress multiple inheritance. Java does not support Multiple Inheritance but you can show... the following link: help with program - Java Beginners Help with program Simple Java Program // Defining class Stars.java to print stars in certain orderclass Stars{// main() functionpublic static void main(String[] args){int a = 1, b = 5, c, d, i; // declaring 5 int java program java program Write a Java application to print your name, ID, gender, and nationality.. the output will look like this : my id is 0123456789 i am a man i am from thailand java program java program hello.. do u all have any source to do the java programming for Snake and Ladder Game or any reservation system.. help me plz java program java program Hi, I got interview question like this.Coding should be in java. String input: TTSAAS Step 1: Reduce the consecutive same word SAAS Step 2: SS Step 3: -1 Java Program for Calculating Marks Java Program for Calculating Marks Hi Everyone, I have a assignment that requires me to a write simple java program that will calculate marks for 10 students. I must use an array to ask user to key in the marks for the 10 Ask Questions? If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for. Ask your questions, our development team will try to give answers to your questions.
http://www.roseindia.net/tutorialhelp/comment/90173
CC-MAIN-2013-20
en
refinedweb
posix_spawn_file_actions_init() Initialize a spawn file actions object Synopsis: #include <spawn.h> int posix_spawn_file_actions_init( posix_spawn_file_actions_t *fact_p); Arguments: - fact_p - A pointer to the spawn file action object that you want to initialize. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description:()). Returns: - EOK - Success. - EINVAL - An argument was invalid. - ENOMEM - Insufficient memory exists to initialize the spawn file actions object.
http://developer.blackberry.com/native/reference/bb10/com.qnx.doc.neutrino.lib_ref/topic/p/posix_spawn_file_actions_init.html
CC-MAIN-2013-20
en
refinedweb
NAME getsid - get session ID SYNOPSIS #include <unistd.h> pid_t getsid(pid_t pid); Feature Test Macro Requirements for glibc (see feature_test_macros(7)): getsid(): _XOPEN_SOURCE >= 500 DESCRIPTION getsid(0) returns the session ID of the calling process. getsid(p) returns the session ID of the process with process ID p. (The session ID of a process is the process group ID of the session leader.) RETURN VALUE On success, a session ID is returned. On error, (pid_t) -1 will be returned, and errno is set appropriately. ERRORS EPERM A process with process ID p exists, but it is not in the same session as the calling process, and the implementation considers this an error. ESRCH No process with process ID p was found. VERSIONS This system call is available on Linux since version 2.0. CONFORMING TO SVr4, POSIX.1-2001. NOTES Linux does not return EPERM. SEE ALSO getpgid(2), setsid(2), credentials(7) COLOPHON This page is part of release 3.21 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.ubuntu.com/manpages/karmic/man2/getsid.2.html
CC-MAIN-2013-20
en
refinedweb
projects so you need not worry about them. have you read document...struts Hi, i want to develop a struts application,iam using eclipse... as such. Moreover war files are the compressed form of your projects JAva Projects - Java Magazine JAva Projects I need Some Java Projects J2EE - Struts J2EE what is Struts Architecture? Hi Friend, Please visit the following links: Java Project Outsourcing, Java Outsourcing Projects, Oursource your Java development projects Java Project Outsourcing - Outsource Java development projects Java... the quality products in less time. Outsource your Java Projects to our... projects. We use latest software frameworks such as Spring j2ee - Struts j2ee hi can you explain what is proxy interface in delegater design pattern struts struts <p>hi here is my code in struts i want to validate my... }//execute }//class struts-config.xml <struts..."/> </plug-in> </struts-config> validator Tutorials - Jakarta Struts Tutorial Struts Tutorials - Jakarta Struts Tutorial Learn Struts Framework with the help of examples and projects. Struts 2 Training! Get..., Struts Projects, Struts Presentations, Struts MappingDispatchAction Example Struts Books for applying Struts to J2EE projects and generally accepted best practices as well... covers everything you need to know about Struts and its supporting technologies...- projects: AppFuse - A baseline Struts application to be j2ee j2ee I want program for login page with database connectivity using struts framework. that application should session management and cookies Building Projects - Maven2 Building Projects - Maven2  ... Source build tool that made the revolution in the area of building projects... is non trivial because all file references need to be relative, environment must java projects java projects i have never made any projects in any language. i want to make project in java .i don't know a bit about this .i am familar with java.please show me the path please...... Hi, You can develop How to build a Struts Project - Struts How to build a Struts Project Please Help me. i will be building a small Struts Projects, please give some Suggestion & tips Java projects Easy Projects .NET Easy Projects .NET Easy Projects .NET is - AJAX-based project management and team collaboration... and whistles. No need to worry about a sophisticated setup process. It's as easy Java Marketing projects Java Marketing projects Java Marketing Threads in realtime projects Threads in realtime projects Explain where we use threads in realtime projects with example java projects - Java Beginners java projects hi, im final yr eng student.plz give me latest java or web related topics for last yr projects Struts - Struts Java Bean tags in struts 2 i need the reference of bean tags in struts 2. Thanks! Hello,Here is example of bean tags in struts 2: Struts 2 UI java - Struts java hi sir, i need Structs Architecture and flow of the Application. Hi friend, Struts is an open source framework used for developing J2EE web applications using Model View Controller (MVC) design - History of Struts 2 framework. So, the team of Apache Struts and another J2EE framework, WebWork...; Strut2 contains the combined features of Struts Ti and WebWork 2 projects... Struts 2 History 2 project samples struts 2 project samples please forward struts 2 sample projects like hotel management system. i've done with general login application and all. Ur answers are appreciated. Thanks in advance Raneesh struts - Struts struts Hi, I need the example programs for shopping cart using struts with my sql. Please send the examples code as soon as possible. please send it immediately. Regards, Valarmathi Hi Friend, Please projects on cyber cafe projects on cyber cafe To accept details from user like name Birth date address contact no etc and store in a database Hi Friend, Try this: import java.awt.*; import java.sql.*; import javax.swing.*; import struts - Struts knows about all the data that need to be displayed. It is model who is aware about.... ----------------------------------------------- Read for more information. Architecture - Struts Struts Architecture Hi Friends, Can u give clear struts architecture with flow. Hi friend, Struts is an open source framework used for developing J2EE web applications using Model View Controller tiles - Struts Struts Tiles I need an example of Struts Tiles Open Source projects Open Source projects Mono Open Source Project Mono provides...; SGI Open Source Project List The following projects...; Open-source projects get free checkup More open-source software - JDBC struts-taglib.jar I am not able to locate the struts-taglib.jar in downloaded struts file. Why? Do i need to download it again Outsourcing PHP Projects, Outsource PHP Projects Outsourcing PHP Projects - Outsource your PHP development projects Outsource your PHP projects to our PHP development in India. We have dedicated... outsourcing needs. Looking for outsourcing your PHP projects to company? Our Why Struts in web Application - Struts Why Struts in web Application Hi Friends, why struts introduced in to web application. Plz dont send any links . Need main reason for implementing struts. Thanks Prakash Free Java Projects - Servlet Interview Questions Free Java Projects Hi All, Can any one send the List of WebSites which will provide free JAVA projects with source code on "servlets" and "Jsp" relating to Banking Sector? don't Ask Questions? If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for. Ask your questions, our development team will try to give answers to your questions.
http://roseindia.net/tutorialhelp/comment/91453
CC-MAIN-2013-20
en
refinedweb
IEnumerable Distinct comparer example - Wednesday, April 09, 2008 12:47 AM I'm having trouble implementing an explicit comparer for the distinct method. I have an expression defined in AddressResults that returns a list of address from an in-memory object list of type <Addresses>. I want to return a distinct list of addresses Here is some example code: var results = (from p in AddressResults select p).Distinct(new AddressComparer<Addresses>()); public class AddressComparer : IEqualityComparer<Addresses> { public bool Equals(Tester x, Addresses y) { if (x.address == y.address && x.zip == y.zip) return true; return false; } public int GetHashCode(Addresses a) { string z = a.address+a.zip; return (z.GetHashCode()); } } I am getting a compile error: The non-generic type AddressComparer cannot be used with type arguments on the var results... line. What am I doing wrong? All Replies - Wednesday, April 09, 2008 2:32 AMJeff: You've already specified the generic type parameter you need when you derived from IEqualityComparer<Addresses>. AddressComparer doesn't need a type param, so just remove the type param when you construct the comparer. For example: var results = (from p in AddressResults select p).Distinct(new AddressComparer()); Hope that helps, - Thursday, April 10, 2008 4:23 AMHi there. The compiler is right. You are creating an instance of a non generic type (AddressComparer) as if it is generic. Just Change this line of code: Code Snippet var results = (from p in AddressResults select p).Distinct(new AddressComparer()); You also have a slight mistake in the Equal signature. The parameter x must be of type Addresses too. Cheers.
http://social.msdn.microsoft.com/forums/en-US/linqprojectgeneral/thread/c5b71644-b2d9-422b-b7fe-ef3bef30bbac/
CC-MAIN-2013-20
en
refinedweb
@Deprecated public class ServletContextFactoryBean extends Object implements FactoryBean<ServletContext>, ServletContextAware FactoryBeanthat exposes the ServletContext for bean references. Can be used as alternative to implementing the ServletContextAware callback interface. Allows for passing the ServletContext reference to a constructor argument or any custom bean property. Note that there's a special FactoryBean for exposing a specific ServletContext attribute, named ServletContextAttributeFactoryBean. So if all you need from the ServletContext is access to a specific attribute, ServletContextAttributeFactoryBean allows you to expose a constructor argument or bean property of the attribute type, which is a preferable to a dependency on the full ServletContext. ServletContext, ServletContextAware, ServletContextAttributeFactoryBean, WebApplicationContext.SERVLET_CONTEXT_BEAN_NAME clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait public ServletContextFactoryBean() ServletContext<ServletContext> null) FactoryBeanNotInitializedException public Class<? extends ServletContext><ServletContext><ServletContext> FactoryBean.getObject(), SmartFactoryBean.isPrototype()
http://static.springsource.org/spring-framework/docs/3.2.0.RELEASE/javadoc-api/org/springframework/web/context/support/ServletContextFactoryBean.html
CC-MAIN-2013-20
en
refinedweb
RAISE(3) BSD Programmer's Manual RAISE(3) raise - send a signal to the current process #include <signal.h> int raise(int sig); The raise() function sends the signal sig to the current process. Upon successful completion, a value of 0 is returned. Otherwise, a value of -1 is returned and the global variable errno is set to indicate the error. The raise() function may fail and set errno for any of the errors speci- fied for the library functions getpid(2) and kill(2). getpid(2), kill(2) The raise() function conforms to ANSI X3.159-1989 ("ANSI C")..
http://www.mirbsd.org/htman/i386/man3/raise.htm
CC-MAIN-2013-20
en
refinedweb
seeking info - JSP-Servlet seeking info Looking for the information on Java, JSP and Servlet Programming. Hello NaveenDon't worry as every one faces the same kind... codes. - JSP pro SEEKING THE POSTION OF EOF SEEKING THE POSTION OF EOF HOW TO KNOW THE END OF THE FILE OR SEEK THE POSITION OF DATA IN A FILE Advantages of Servlets over CGI java servlets provide excellent framework for server side processing. Using... Advantages of Servlets over CGI Servlets are server side components that provides a powerful mechanism Servlets Books that any Java web developer who uses JavaServer Pages or servlets will use every day... Servlets Books  ... Courses Looking for short hands-on training classes on servlets Accessing Database from servlets through JDBC! technology java servlets provide excellent framework for server side... Java Servlets - Downloading and Installation Java Servlets are server servlets - Servlet Interview Questions servlets is there anybody to send me the pdf of servlets tutorial? Thanks in advance for who cares about me Final Cut Pro X Final Cut Pro X The Final Cut Pro X is the name of new innovation in the video... during video editing. In fact, today Final Cut Pro X has created... introduces the many formerly missing in action section of its Final Cut Pro site Microsoft Visual Fox Pro - SQL Microsoft Visual Fox Pro how can create a query using sql on microsoft visual fox pro 6.0 Servlets - JSP-Servlet Servlets Hello Sir, can you give me an example of a serve let program which connects to Microsoft sql server and retrieves data from it and display... visit the following link: WindowTester Pro WindowTester Pro Experience ultimate testing flexibility WindowTester Pro offers cutting-edge technology for testing Swing and SWT Java graphical user who is client who is client tell me who is client is management servlets servlets why we are using servlets servlets servlets what is the duties of response object inlets the servlets SERVLETS Programming Style Guideline . If there's one idea that could serve as a guide to good programming..., RH, who had "unusual" style beliefs. He firmly held that the first clause J2EE Online Training to the occasion, various institutes are offering online J2EE training to those who... of the leaders in online J2EE online training courses, has an excellent faculty having... all the topic of J2EE that include Servlets, Servlet model, Servlet Life cycle Java Training Online from India . The student seeking Java training online from India has a number of options... excellent training in online java programming. With an excellent faculty, state Techniques used for Generating Dynamic Content Using Java Servlets. Open Source ISO Alliance, who predicted the vote will serve as "a springboard" for adoption of ODF... Standards Organization, a move that supporters say will serve as a springboard... of Massachusetts, and will be seeking assurance from the European Commission MCA Project Training ; The training is designed for the MCA students who want to speed up... students. The students who have appeared for the fourth semester exam may also apply... seeking for a good start in the IT Industry. While undergoing the Project Training VoIP Billing Solution Billing solution is the strategic choice for service providers seeking telco... a seamless VoIP backbone. webVoIP is proud to serve Voice carriers, Service... cards application server for VOIP. Boasting excellent performance First Step towards JDBC! Tomcat Books something for everyone who uses Tomcat. System and network administrators will find... with Java servlets and JavaServer Pages. Tomcat Book Information... Pages and Java Servlets. It can run as a stand-alone server or be integrated java program for me to see who are on my server java program for me to see who are on my server Hello, Im a beginner in java, doing some tutorials and stuff. I wanna write a java program to put on my server where i wanna see if like 3 people are on my server PLUS that i want servlets - Servlet Interview Questions what is servlets in Java what is servlets in Java Techniques used for Generating Dynamic Content Using Java Servlets. Servlets provide excellent framework to develop server-side application without... these plug-in are difficult and also learning curve is also very high. Java Servlets Java Servlets eliminated all these problems. The first truly platform Servlets Program Servlets Program Hi, I have written the following servlet: [code] package com.nitish.servlets; import javax.servlet.*; import java.io.*; import java.sql.*; import javax.sql.*; import oracle.sql.*; public class RequestServlet Uses of GPS Tracking System . This technology is particularly beneficial for those who do not understand... reminder makes it possible to serve the required purpose. It is necessary to avoid... to determine that who are the most productive employees. This consequentially servlets - Struts servlets - JDBC Servlets - JDBC Java Servlets database and servlets jsp and servlets Error in servlets Ask Questions? If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for. Ask your questions, our development team will try to give answers to your questions.
http://www.roseindia.net/tutorialhelp/comment/13331
CC-MAIN-2013-20
en
refinedweb
Discussion for the HoloViz.org tools (HoloViews, Datashader, Panel, hvPlot, GeoViews, Param, etc.) and PyViz in general Even when i do dynamic=False same Error : finalplot=tiles*rasterize(hmap1,dynamic=False).options(**opts) #finalplot finalplot2=hv.output(finalplot,holomap='gif',fps=3,backend='bokeh') finalplot2 Error: Format 'gif' does not appear to be supported. For help with hv.util.output call help(hv.util.output) height=..., width=...arguments you like so that there's something in the image to zoom into. Here, I'm not sure you're actually getting the Panel master version; try import panel as pn ; pn.__version__to see if you have the right Panel version that does support gif output for Bokeh. hv.Image(data1).opts(axiswise=True) + hv.Image(data2).opts(axiswise=True). I don't know what the normalizeoption is, but I don't think it would be having any effect here. widgetsargument of the panel to substitute any specialized widget you want for specific cases, but again, at instantiation time. @jbednar After installing master version of holoviews i am getting module to install selenium , so i have installed selenium , now it is asking for `` RuntimeError: PhantomJS is not present in PATH or BOKEH_PHANTOMJS_PATH. Try "conda install phantomjs" or "npm install -g phantomjs-prebuilt" :HoloMap [Date and Time :,region] :Overlay .WMTS.I :WMTS [Longitude,Latitude] .Image.I :Image [Longitude,Latitude] (MeanWavePeriod) `` tiles = gv.tile_sources.Wikipedia hmap1 = hv.HoloMap(allplot, kdims=['Date and Time :','region']) #hmap2 = hv.HoloMap(allplot2, kdims=['Date and Time :','region']) dd=df_div.opts(width=70, height=70) dd1=df_div1.opts(width=600, height=90) dd2=df_div2.opts(width=100,height=10) finalplot2=hv.output(tiles*rasterize(hmap1,dynamic=False).options(**opts),holomap='gif',fps=3,backend='bokeh') finalplot2 I made my code rasterize dynamic=False,removed panle object . should i installed phantomjs , as per i know it is headless browser ? @jbednar I'm sorry, but I'm a bit confused - what does "instantiating the Param panel" means? If, for example, I have: class MyParam(param.Parameterized): selector = param.ListSelector(...) Now, to instantiate the param object, I would do: mp = MyParam() To make the panel, I would do something like: pn.Column(mp.param.selector, ...) How how would I control the properties of the ListSelector widget at the instantiation time here? I couldn't see an obvious way to do this on the docs. pn.Param(CustomExample.param, widgets={ 'select_string': pn.widgets.RadioButtonGroup, 'select_number': pn.widgets.DiscretePlayer} ) import pandas as pd import hvplot.pandas df = pd.DataFrame({'time':[1,2,3,4,5], 'value':[5,1,2,4,3]}) df['variable'] = 'var' df.hvplot.line(x='time', y='value', hover_cols = ['value', 'variable'])
https://gitter.im/pyviz/pyviz?at=5daefe4ee1c5e91508b4372d
CC-MAIN-2019-51
en
refinedweb
fs2-cryptofs2-crypto Essential support of TLS for fs2. This library adds sport of TLS for fs2 programs. Library provides two levels of TLS: - TLSEngine A Low level abstraction of java's SSLEngineallowing asynchronous functional usage of SSLEngine - TLSSocket A wrapper of standard fs2.io.tcp.Socketthat allows to receive/send data over TLS/SSL SBTSBT Add this to your sbt build file : libraryDependencies += "com.spinoco" %% "fs2-crypto" % "0.4.0" DependenciesDependencies TLSSocketTLSSocket To create encrypted TLS socket, library allows very simple usage by just using current fs2.io.tcp.Socket. User just need to create tcp socket and then pass it to TLSSocket that creates encrypted version of the socket. Following is the example how this can be done in scala : import javax.net.ssl._ import java.net.InetSocketAddress import fs2.io.tcp.client import spinoco.fs2.crypto._ import spinoco.fs2.crypto.io.tcp.TLSSocket import java.nio.channels.AsynchronousChannelGroup import java.nio.channels.spi.AsynchronousChannelProvider import java.util.concurrent.{Executors, ThreadFactory} import cats.effect.IO._ import scala.concurrent.ExecutionContext import scala.{Stream => _ } import fs2.Stream implicit val executionContext: ExecutionContext = ExecutionContext.Implicits.global val sslCtx = SSLContext.getInstance("TLS") sslCtx.init(null, null, null) implicit val tcpACG: AsynchronousChannelGroup = AsynchronousChannelProvider .provider() .openAsynchronousChannelGroup(Executors.newCachedThreadPool(), 8) val ctx = SSLContext.getInstance("TLS") ctx.init(null, null, null) val engine = sslCtx.createSSLEngine() engine.setUseClientMode(true) val address = new InetSocketAddress("127.0.0.1", 6060) client(address) flatMap { socket => TLSSocket(socket, engine) } flatMap { tlsSocket => // perform any operations with tlsSocket as you would with normal Socket. ??? } Note that client initiates initial TLS handshake. As such, client needs to send always some data to server to trigger SSL handshake. It is engough to send empty chunk of bytes to trigger initial TLS handshake from client side. TLSEngineTLSEngine Java's SSLEngine is not most pleasant api to work with, and therefore there is TLSEngine. TLSEngine wraps SSLEngine and provides nonblocking asynchronous interface to SSLEngine that can be used to create TLS (with java 9 even DTLS) encrypted connections. TLSEngine is used to create TLSSocket, which can be used as real example on how TLSEngine can construct the higher level abstractions. TLSEgine provides two simple methods: - encrypt to encrypt data from application and send resulting data over transport layer - decrypt to decrypt data received from network transport. TLSEngine encryptionTLSEngine encryption When encrypting data with TLSEngine, you use the encrypt method that may return any of three results: Encrypted(data)indicating that data was successfully encrypted supplying the resulting frame to be sent over network Closedindicating that engine is closed and no more application data may be encrypted Handshake(data, next)providing data to be sent to network party and next operation to perfom immediately after the data will be send to remote party, but before any other encryptinvocation. TLSEngine decryptionTLSEngine decryption When decrypting data with TLSEngine, you use the decrypt method, that may return any of three possible results: Decrypted(data)indicating data that were decrypted. Note that data may be empty, for example in case the received data was not enough to form full TLS Record. If the data supplied to decrypt forms multiple TLS records, then this will return content of both TLS Recors, concatenated. Closedidicating that engine is closed and no more data can be decrypted. Handshake(data, signalSentEventually)providing data to be sent to remote party during handshake. Eventually signalSentEventuallymay be provided to perform operation immediately when the datawill be sent to network, before any other decrypt.
https://index.scala-lang.org/spinoco/fs2-crypto/fs2-crypto/0.3.0?target=_2.12
CC-MAIN-2019-51
en
refinedweb
Anyone can help me to debug the Python program? I want to find all the .sav files in E:\007, and conver the generators and loads use cong( ) and conl( ) function, but there are some errors in the program I wrote, anyone can help me ? Thank you very much! import os,sys sys.path.append(r"C:\Program Files\PTI\PSSE33\PSSBIN") os.environ['PATH'] = r"C:\Program Files\PTI\PSSE33\PSSBIN;" + os.environ['PATH'] os.chdir(r"E:\007") import psspy import redirect import glob PYTHONPATH = r'C:\Program Files\PTI\PSSE33\EXAMPLE' sys.path.append(PYTHONPATH) os.environ['PATH'] += ';' + PYTHONPATH redirect.psse2py() for CASE in glob.glob(os.path.join('E:\\007\\','*.sav')): psspy.psseinit(150000) psspy.case(CASE) psspy.cong(0) psspy.conl(0,1,1,[0,0],[0.0,0.0,0.0,0.0]) psspy.conl(0,1,2,[0,0],[0.0,0.0,0.0,0.0]) psspy.conl(0,1,3,[0,0],[0.0,0.0,0.0,0.0]) case_root = os.path.splitext(sys.argv[1])[0] psspy.save(case_root + "_C.sav") There ERRORS are as follow: Traceback (most recent call last): File "E:\007\21.py", line 32, in <module> case_root = os.path.splitext(sys.argv[1])[0] IndexError: list index out of range
https://psspy.org/psse-help-forum/question/1278/anyone-can-help-me-to-debug-the-python-program/
CC-MAIN-2019-51
en
refinedweb
Freed FoxOct 9, 2018Hi, @Tova (Wix), are there any way you could have an example for a dynamic page too? I tried it on my dynamic page but it seems the codes for the dynamic page and the product page is different, :) Hi, @Tova (Wix), are there any way you could have an example for a dynamic page too? I tried it on my dynamic page but it seems the codes for the dynamic page and the product page is different, :) Hi Debora and Freed Fox, Thanks for your feedback. Yes, you can add ratings and reviews to a dynamic page, and we've added it to our list of ideas for future code examples. In the meantime, you can try searching the example code for properties and functions that specifically relate to a Product page, and replace them with the equivalent for a dynamic page. For example, Line 15 of the Product Page Code uses the Product page getProduct() function. You can replace it with getCurrentItem(), which runs on your dynamic item page's dataset. Hi @Tova (Wix) , Please find my reply below: Ratings & Reviews Code First, I created (2) databases. One for Reviews-Stats and the other for Reviews. Next, I add the code found in Example Store Reviews & Ratings page and replaced the 'getProduct()' function with the 'getCurrent() function in my dynamic item page. Here is the page for review: Tova, how do I get the page to function similar to the sample? I have made a few layout changes. Previous / Next Button Code Issue Also, I am experiencing issues with my previous / next buttons. I attempted to follow the directions and added the code for the page with the dataset: But deleted the one for the dynamic page because it wasn't working. Please adv. Thank you. Debora Hi @Tova (Wix), I would apppreciate your help with fixing the issue(s) that I’m having with the code. Please see previous message. I realily would appreciate your help with this matter. Thank you. Debora Hi Debora, I took a look at your site. Here are a few pointers: You are using the code from the old example, which is more complicated. Please copy the code from the new example. The code in your Review Box lightbox is missing - you can copy it from the example. Please check that you have all the datasets used in the example and that their settings and connections are the same as in the example. Since you are using a dynamic page instead of a product page, make sure all references to productPage1 on the Experts dynamic page are replaced with references to your dataset: dynamicDataset. So for example, your code on Line 20 should state: product = await $w('#dynamicDataset').getCurrentItem();. HTH! Tova Hi @Tova (Wix), I've been struggling to add ratings and reviews to a dynamic page, and I've already set it based on code examples you provided above for Debora with replacing CODE product = await $w('#productPage1').getProduct(); with $w('#dynamicDataset').getCurrentItem();. However it doesn't work properly with no error. After submitting the review form, all I can see in the database on live is Name,Location,review title and review. No Rating and recommend plus a field that references the item in a collection as recipes in my case. For the record, I'm testing the code before applying to my real site. Here is the page as testing for review: I really need to get it solved for applying to my biz. Please diagnose and let me know what I'd have done. I really would appreciate if you advise me. Thank you. Jay @juneyoung park Hey Jay, good news and bad news. The bad news is that it's too early in the day for a beer. The good news is that you just need to change a Lightbox setting. Set the "Automatically display lightbox on pages" to NO so that OpenLightbox() passes the data to the Lightbox. I hope this helps. Have fun! Yisrael Hi Fellas Anyone manage to attached the rating display to repeater? It is not being attached. If its bug please fix it Wix Team. Thanks, DA Hi, @Tova (Wix)! I've got mine to working but the Recommended Text and Reviews sometimes does not load. Sometimes the text inside does not follow the code. Why is that? For example, when it must say There are no reviews", it doesn't/ It even show the placeholder text. Why is that? Hi @Freed Fox, I looked at the site you posted earlier in this thread and Reviews, Recommended Text, and "There are no reviews yet" worked for me. Can you describe the exact scenario where you run into problems? Make sure you are previewing from your "index" page with the repeater (in your case, Print on Demand Suppliers), and not directly from the dynamic item page. This is because the code needs to "grab" the product ID when you click the item in the repeater. @Tova (Wix) Are you guys working on being able to update a review? I experimented with it before but it does not update the review-stats database. How can we be able to do that? I am not a coder so I really can't solve it 😂 Hi @Tova (Wix), I have followed your directions and updated my current page(s). I would appreciate if you could review the code for the Dynamic Item page and the review box; plus help me with a couple of other code requests. Ratings & Reviews CodeHere is the page for review: Database Also, I have been experiencing issues with the Databases. Rich Text: Although the 'Text' font/size on the Dynamic Item page remains the same as the database data, the 'Rich Text 'doesn't take the set fonts/colors on the Dynamic Item Page. The color/font changes from Proxima to Times, etc. In addition, when adding the content, I am unable to add a link to the content or image. Pls. adv. Also, can I change the database name? If so, please adv. how. Star Ratings w/Rate Me Tova, can you adv. if I can add/connect the Star Ratings to Rate Me section of the Dynamic Item page for each individual profile. Is this doable? Please adv. Thank you. for all your help! Debora PS. Unable to add images. Will send under separate cover. (see other comments sent earlier) I know this is an old post, but can you use this function on a dynamic item page without the light box feature? if so how would I do this? I can't open the example link. Can you please repost it? Add ratings & reviews to dynamic page items I am trying to allow users to submit and see ratings on my dynamic Item pages. I followed the step-by-step (for over 6 long hours) tutorial on this example page. However, it is will not work correctly. The one thing that differs on my site from the example page in this tutorial is that I am not connecting the reviews to a Wix store. My website works as a directory for Veterinary offices and clinics and I need to set up reviews about individual Facilities in my database. I changed the product ids (see below) to match the field keys in my Wix data, but I am not sure if it's correct. Example Code: import wixData from 'wix-data'; import wixWindow from 'wix-window'; let product; $w.onReady(async function () { product = await $w('#productPage1').getProduct(); initReviews(); }); I might also add that I have the following permissions settings for my Databases: UnitedStates_Facilities–only members can add, update, anyone can update & read); Reviews & ReviewStats Database–allow anyone to read, add, and update Data. (Not sure if this would affect my anything or not.) Here's is a link to one of the dynamic pages and bellow is the code for both myDynamic Item Page and Lightbox: Dynamic Item Page Code: import wixData from 'wix-data'; import wixWindow from 'wix-window'; let facilityItem; $w.onReady(async function () { facilityItem = await $w('#facilityDataset').getCurrentItem(); initReviews(); }); async function initReviews() { await $w('#reviews').setFilter(wixData.filter().eq('itemId', facilityItem._id)); showReviews(); loadStatistics(); } async function loadStatistics() { const stats = await wixData.get('ReviewStats', facilityItem._id); if (stats) { let avgRating = (Math.round(stats.rating * 10 / stats.count) / 10); let percentRecommended = Math.round(stats.recommended / stats.count * 100); let ratings = $w('#generalRatings'); ratings.rating = avgRating; ratings.numRatings = stats.count; $w('#recoPercent').text = `${percentRecommended} % would recommend`; $w('#generalRatings').show(); } else { $w('#recoPercent').text = 'There are no reviews yet'; } $w('#recoPercent').show(); } export function reviewsRepeater_itemReady($w, itemData, index) { if (itemData.recommends) { $w('#recommendation').text = 'I recommend this product.'; } else { $w('#recommendation').text = "I don't recommend this product."; } $w('#oneRating').rating = itemData.rating; let date = itemData._createdDate; $w('#submissionTime').text = date.toLocaleString(); } export function showReviews() { if ($w('#reviews').getTotalCount() > 0) { $w('#reviewsRepeater').expand(); } else { $w('#reviewsRepeater').collapse(); } } export async function addReview_click(event, $w) { const dataForLightbox = { itemId: facilityItem._id }; let result = await wixWindow.openLightbox('Add Review', dataForLightbox); $w('#reviews').refresh(); loadStatistics(); $w('#thankYouMessage').show(); } Lightbox Code import wixWindow from 'wix-window'; import wixData from 'wix-data'; //-------------Global Variables-------------// // Current product's ID. let itemId; //-------------Lightbox Setup-------------// $w.onReady(function () { // Get the data passed by the page that opened the lightbox. itemId = wixWindow.lightbox.getContext().facilityName; // Set the action that occurs before the review is saved. $w('#submitReviews').onBeforeSave(() => { // If no rating was set: if ($w('#radioRating').value === '') { // Display an error message. $w('#rateError').show(); // Force the save to fail. return Promise.reject(); } // If a rating was set, set the element values into the fields of the dataset item. // These values will be saved in the collection. $w('#submitReviews').setFieldValues({ itemId, rating: $w('#radioRating').value, recommends: $w('#radioGroup1').value }); }); // Set the action that occurs after the review is saved. $w('#submitReviews').onAfterSave(async () => { // Update the product's statistics using the updateStatistics() function. await updateStatistics($w('#radioGroup1').value); // When the statistics have been updated, close the lightbox to return the user to the product page. wixWindow.lightbox.close(); }); }); // Update (or create) the product statistics. async function updateStatistics(isRecommended) { // Get the review statistics for the current product from the "ReviewStats" collection. let stats = await wixData.get('ReviewStats', itemId); // If statistics data already exist for this product: if (stats) { // Add the new rating to the total rating points. stats.rating += parseInt($w('#radioRating').value, 10); // Increase the ratings count by 1. stats.count += 1; // Increase the radioGroup1s by one if the user recommends the product. stats.recommended += (isRecommended === "true") ? 1 : 0; // Update the new product statistics in the "ReviewStats" collection. return wixData.update('ReviewStats', stats) } //If no statistics data exists for this product, create a new statistics item. stats = { // Set the statistics item's ID to the current product's ID. _id: itemId, // Set the statistics item's rating to the rating entered by the user. rating: parseInt($w('#radioGroup1').value, 10), // Set the statistics item's ratings count to 1 because this is the first rating. count: 1, // Set the statistics item's recommended property to 1 if the user recommends the product. recommended: (isRecommended === "true") ? 1 : 0 }; // Insert the new product statistics item into the "ReviewStats" collection. return wixData.insert('reviewStats', stats) } Database Screenshots: Sorry this is so long. I would greatly appreciate someone's help with this! Thanks! Hello, I'm wondering if anyone would be able to help me with this. I almost got my site to work perfectly, but my reviews are showing up on all product instead a specific product. Hey @Destiny Lozano, Please clarify the issue. You only want reviews to appear for a single product instead of all products on your Product page? The example shows how to add reviews to all products. Two questions that seems to don't be setup and tried but couldn't figure it out. 1. The load more button is hidden on load,even though there are 5 reviews or 15 the load more button will not be visible,how can we code this?A code that somehow checks the number of reviews? 2. The message "thank you for submitting a review" appears even when there is not review inserted,even if you X the lightbox,the code is not checking if a review was inserted just shows the text Thank you Hey @Mels Webb, 1. The load more button should not be hidden on load. It should be visible, as in the example. The ID of the load more button is #resultsPages. 2. True, if the site visitor opens the Review lightbox and then closes it without submitting a review, the thank you message appears. To handle this scenario, you can add a bit of code to the lightbox and the Product page. If a site visitor submits a review, the lightbox will close via the lightbox close() function. Closing the lightbox by clicking the 'X' will not run the close() function. You can pass an object with a boolean value of 'true' via the close() function, which will only reach the Product page if a review was submitted. Review Box code: On the Product page, when a site visitor clicks "Write a Review", reset the thank you message by hiding it. Then, after the site visitor closes the lightbox, check whether the object was passed from the lightbox to the Product page, indicating that a review was submitted. Only show the thank you message if the object was passed. Product Page code: HTH, Tova @Tova (Wix) hi Tova your reply is much appreciated I will try this approach I am trying to write a code that checks the number of items in the repeater and if there are less than 5 will hide the button. I am struggling with how to approach it Can you please have a look at my code and see if im on the rigth way Trying to check the no of results in the repeater if more than 5 show the load more button if no hide Added it to the on ready part $w("#reviewsRepeater").forEachItem( ($item, itemData, index) => { let total = data.length; if($item.total > 5 ){ //if bigger than 5 show the button else hidr Or maybe it is better to get the total count of the reviews from the repeater and hide/display the button accordingly? But than how i check the total number of reviews for each product id? Thank you Once a user does a review, how do we restrict live view until it is approved? is that possible? @Yoav (Wix)
https://www.wix.com/corvid/forum/corvid-tips-and-updates/example-store-reviews-rating/p-2
CC-MAIN-2019-51
en
refinedweb
Django application for organizing numerical model experiment output Project description Django application for organizing numerical model experiment output Documentation The full documentation is at. Quickstart Install Experiment: pip install dj-experiment Add it to your INSTALLED_APPS: INSTALLED_APPS = ( ... 'dj_experiment.apps.DjExperimentConfig', ... ) Add Experiment’s URL patterns: from dj_experiment import urls as dj_experiment_urls urlpatterns = [ ... url(r'^', include(dj_experiment_urls)), ... ] Features - TODO Running Tests Does the code actually work? source <YOURVIRTUALENV>/bin/activate (myenv) $ pip install tox (myenv) $ tox Credits Tools used in rendering this package: History 0.1.0 (2017-07-31) - First release on PyPI. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/dj-experiment/
CC-MAIN-2019-51
en
refinedweb
Hello, I am new to WixCode and i have an issue. I am trying to use twilio api to send an sms after a form is submitted. I followed Twilio documentation about the process and used a simple code to send sms after a button is clicked. The problem is, the documentation says that i need to use the 'Require()' function, and, after some searching i learn that function does not exist, and i couldn't find anything to substitude without giving me an error. How can i change this to work my needs? I followed the documentation here: Thank you You can install the Twilio NPM library from the Wix Code Package Manager. See the article Wix Code: How to Manage External Code Libraries with the Package Manager for information. Hey there, I have done all of that. Installed the node, followed the instructions, but it still shows me an error. The require('twilio') still shows me an error. Edit: When i check the console, it shows me the following error as well " Cannot find module 'twilio' in 'public/pages/t218p.js' " require error is ok, the code will work regardless the error The reason for your code not working must be something else. If you paste your code here, we can have a look Thank you @shan, My code is simple i just want to send a message. The code is the following: import twilio from 'twilio'; export function getTwilioName() { return com.twilio.Twilio(); } const accountSid = 'CodeSID'; const authToken = 'CodeToken'; const client = require('twilio')(accountSid, authToken); export function button1_click(event) { client.messages .create({from: '+351927942821', body: makeid(), to: '351934665900'}) .then(message => console.log(message.sid)) .done(); } I'm problably missing something crucial here, if you could help here, i would thank you a lot. Never write your API Key or any sensitive information on the page code. You should create a .jsw file in your backend which will contain this Then call the function from your Page code like this I have not tested it but it should work like this Hey, First of all, the api key was my mistake and i removed right away, thank you for warning. Second, the code works great, i assumed that i had to put everything in one place. It was my bad. Thank you for your help. Hi, I followed the script. It worked great if I hard coded the outbound phone number like const number = '+16472092190' : Example text number However, if I tried to get the phone number from the input text, var number = $w('#input1').value; or var number = '+1' + $w('#input1').value; It didn't work. Can you pls provide me some idea how to fix it?? Make sure you give some time for the input fields to properly update. See this forum post for more information.
https://www.wix.com/corvid/forum/community-discussion/require-doesn-t-work
CC-MAIN-2019-51
en
refinedweb
Cloud Dataproc and Apache Spark provide infrastructure and capacity that you can use to run Monte Carlo simulations written in Java, Python, or Scala. Monte Carlo methods can help answer a wide range of questions in business, engineering, science, mathematics, and other fields. By using repeated random sampling to create a probability distribution for a variable, a Monte Carlo simulation can provide answers to questions that might otherwise be impossible to answer. In finance, for example, pricing an equity option requires analyzing the thousands of ways the price of the stock could change over time. Monte Carlo methods provide a way to simulate those stock price changes over a wide range of possible outcomes, while maintaining control over the domain of possible inputs to the problem. In the past, running thousands of simulations could take a very long time and accrue high costs. Cloud Dataproc enables you to provision capacity on demand and pay for it by the minute. Apache Spark lets you use clusters of tens, hundreds, or thousands of servers to run simulations in a way that is intuitive and scales to meet your needs. This means that you can run more simulations more quickly, which can help your business innovate faster and manage risk better. Security is always important when working with financial data. Cloud Dataproc runs on Google Cloud Platform (GCP), which helps to keep your data safe, secure, and private in several ways. For example, all data is encrypted during transmission and when at rest, and GCP is ISO 27001, SOC3, and PCI compliant. Objectives - Create a managed Cloud Dataproc cluster with Apache Spark pre-installed. - Run a Monte Carlo simulation using Python that estimates the growth of a stock portfolio over time. - Run a Monte Carlo simulation using Scala that simulates how a casino makes money. Set up a Google Cloud Platform project Dataproc and Compute Engine APIs. - Install and initialize the Cloud SDK. Create a Cloud Storage bucket in your GCP project - In the Cloud. Creating a Cloud Dataproc cluster Follow the steps to create a Cloud Dataproc cluster from the Google Cloud Platform Console. The default cluster settings, which includes two-worker nodes, is sufficient for this tutorial. Disabling logging for warnings By default, Apache Spark prints verbose logging in the console window. For the purpose of this tutorial, change the logging level to log only errors. Follow these steps: Use ssh to connect to the Cloud Dataproc cluster's primary node - In the Cloud Console, go to the VM instances page. Go to the VM instances page - In the list of virtual machine instances, click SSH in the row of the instance that you want to connect to. A browser window opens at your home directory on the primary node. Connected, host fingerprint: ssh-rsa 2048 ... ... user@clusterName-m:~$ Change the logging setting From the primary node's home directory, edit /etc/spark/conf/log4j.properties. sudo nano /etc/spark/conf/log4j.properties Set log4j.rootCategoryequal to ERROR. # Set only errors to be logged to the console log4j.rootCategory=ERROR, Save the changes and exit the editor. If you want to enable verbose logging again, reverse the change by restoring the value for .rootCategoryto its original ( INFO) value. Spark programming languages Spark supports Python, Scala, and Java as programming languages for standalone applications, and provides interactive interpreters for Python and Scala. The language you choose is a matter of personal preference. This tutorial uses the interactive interpreters because you can experiment by changing the code, trying different input values, and then viewing the results. Estimating portfolio growth In finance, Monte Carlo methods are sometimes used to run simulations that try to predict how an investment might perform. By producing random samples of outcomes over a range of probable market conditions, a Monte Carlo simulation can answer questions about how a portfolio might perform on average or in worst-case scenarios. Follow these steps to create a simulation that uses Monte Carlo methods to try to estimate the growth of a financial investment based on a few common market factors. Start the Python interpreter from the Cloud Dataproc primary node. pyspark Wait for the Spark prompt >>>. Enter the following code. Make sure you maintain the indentation in the function definition. import random import time from operator import add def grow(seed): random.seed(seed) portfolio_value = INVESTMENT_INIT for i in range(TERM): growth = random.normalvariate(MKT_AVG_RETURN, MKT_STD_DEV) portfolio_value += portfolio_value * growth + INVESTMENT_ANN return portfolio_value returnuntil you see the Spark prompt again. The preceding code defines a function that models what might happen when an investor has an existing retirement account that is invested in the stock market, to which they add additional money each year. The function generates a random return on the investment, as a percentage, every year for the duration of a specified term. The function takes a seed value as a parameter. This value is used to reseed the random number generator, which ensures that the function doesn't get the same list of random numbers each time it runs. The random.normalvariatefunction ensures that random values occur across a normal distribution for the specified mean and standard deviation. The function increases the value of the portfolio by the growth amount, which could be positive or negative, and adds a yearly sum that represents further investment. You define the required constants in an upcoming step. Create many seeds to feed to the function. At the Spark prompt, enter the following code, which generates 10,000 seeds: seeds = sc.parallelize([time.time() + i for i in xrange(10000)]) The result of the parallelizeoperation is a resilient distributed dataset (RDD), which is a collection of elements that are optimized for parallel processing. In this case, the RDD contains seeds that are based on the current system time. When creating the RDD, Spark slices the data based on the number of workers and cores available. In this case, Spark chooses to use eight slices, one slice for each core. That's fine for this simulation, which has 10,000 items of data. For larger simulations, each slice might be larger than the default limit. In that case, specifying a second parameter to parallelizecan increase the number slices, which can help to keep the size of each slice manageable, while Spark still takes advantage of all eight cores. Feed the RDD that contains the seeds to the growth function. results = seeds.map(grow) The mapmethod passes each seed in the RDD to the growfunction and appends each result to a new RDD, which is stored in results. Note that this operation, which performs a transformation, doesn't produce its results right away. Spark won't do this work until the results are needed. This lazy evaluation is why you can enter code without the constants being defined. Specify some values for the function. INVESTMENT_INIT = 100000 # starting amount INVESTMENT_ANN = 10000 # yearly new investment TERM = 30 # number of years MKT_AVG_RETURN = 0.11 # percentage MKT_STD_DEV = 0.18 # standard deviation Call reduceto aggregate the values in the RDD. Enter the following code to sum the results in the RDD: sum = results.reduce(add) Estimate and display the average return: print sum / 10000. Be sure to include the dot ( .) character at the end. It signifies floating-point arithmetic. Now change an assumption and see how the results change. For example, you can enter a new value for the market's average return: MKT_AVG_RETURN = 0.07 Run the simulation again. print sc.parallelize([time.time() + i for i in xrange(10000)]) \ .map(grow).reduce(add)/10000. When you're done experimenting, press CTRL+Dto exit the Python interpreter. Programming a Monte Carlo simulation in Scala Monte Carlo, of course, is famous as a gambling destination. In this section, you use Scala to create a simulation that models the mathematical advantage that a casino enjoys in a game of chance. The "house edge" at a real casino varies widely from game to game; it can be over 20% in keno, for example. This tutorial creates a simple game where the house has only a one-percent advantage. Here's how the game works: - The player places a bet, consisting of a number of chips from a bankroll fund. - The player rolls a 100-sided die (how cool would that be?). - If the result of the roll is a number from 1 to 49, the player wins. - For results 50 to 100, the player loses the bet. You can see that this game creates a one-percent disadvantage for the player: in 51 of the 100 possible outcomes for each roll, the player loses. Follow these steps to create and run the game: Start the Scala interpreter from the Cloud Dataproc primary node. spark-shell Copy and paste the following code to create the game. Scala doesn't have the same requirements as Python when it comes to indentation, so you can simply copy and paste this code at the scala>prompt. val STARTING_FUND = 10 val STAKE = 1 // the amount of the bet val NUMBER_OF_GAMES = 25 def rollDie: Int = { val r = scala.util.Random r.nextInt(99) + 1 } def playGame(stake: Int): (Int) = { val faceValue = rollDie if (faceValue < 50) (2*stake) else (0) } // Function to play the game multiple times // Returns the final fund amount def playSession( startingFund: Int = STARTING_FUND, stake: Int = STAKE, numberOfGames: Int = NUMBER_OF_GAMES): (Int) = { // Initialize values var (currentFund, currentStake, currentGame) = (startingFund, 0, 1) // Keep playing until number of games is reached or funds run out while (currentGame <= numberOfGames && currentFund > 0) { // Set the current bet and deduct it from the fund currentStake = math.min(stake, currentFund) currentFund -= currentStake // Play the game val (winnings) = playGame(currentStake) // Add any winnings currentFund += winnings // Increment the loop counter currentGame += 1 } (currentFund) } returnuntil you see the scala>prompt. Enter the following code to play the game 25 times, which is the default value for NUMBER_OF_GAMES. playSession() Your bankroll started with a value of 10 units. Is it higher or lower, now? Now simulate 10,000 players betting 100 chips per game. Play 10,000 games in a session. This Monte Carlo simulation calculates the probability of losing all your money before the end of the session. Enter the follow code: (sc.parallelize(1 to 10000, 500) .map(i => playSession(100000, 100, 250000)) .map(i => if (i == 0) 1 else 0) .reduce(_+_)/10000.0) Note that the syntax .reduce(_+_)is shorthand in Scala for aggregating by using a summing function. It is functionally equivalent to the .reduce(add)syntax that you saw in the Python example. The preceding code performs the following steps: - Creates an RDD with the results of playing the session. - Replaces bankrupt players' results with the number 1and nonzero results with the number 0. - Sums the count of bankrupt players. - Divides the count by the number of players. A typical result might be: 0.998 Which represents a near guarantee of losing all your money, even though the casino had only a one-percent advantage. Cleaning up To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:. What's next For more on submitting Spark jobs to Cloud Dataproc without having to use sshto connect to the cluster, read Cloud Dataproc—Submit a job Try out other Google Cloud Platform features for yourself. Have a look at our tutorials.
https://cloud.google.com/solutions/monte-carlo-methods-with-hadoop-spark?hl=sv
CC-MAIN-2019-51
en
refinedweb
Getting Started with Ionic: Angular Concepts & Syntax By Josh Morony This article was originally written following the release of Ionic 2, and focused on the new concepts and syntax introduced in the new version of Angular, along with some comparisons to Ionic 1.x and AngularJS. Since then, Ionic 4 has been released which now allows us to use Ionic with any framework (not just Angular). Angular still remains the most popular choice for Ionic development, so I decided to revisit this post and update it to be relevant today. This article focuses mostly on some basic concepts behind using Angular, which have mostly stayed the same since the initial release. With the current iterations of the Angular and Ionic frameworks, we are able to make apps that perform better on mobile, adhere to the latest web standards, are scalable, reusable, modular, and so on. As always, we continue to use the web tech (HTML, CSS, and Javascript) that we know and love to build applications, but there are some conceptual and syntax differences that we need to understand when using Ionic and Angular (versus standard HTML/CSS/JavaScript). For example, the HTML we use in our templates looks a little different than you might be used to: <ion-header> <ion-toolbar> <ion-title> Ionic Blank </ion-title> </ion-toolbar> </ion-header> <ion-content> <div class="ion-padding"> The world is your oyster. <p>If you get lost, the <a target="_blank" rel="noopener" href="">docs</a> will be your guide.</p> </div> <ion-list> <ion-item button * </ion-item> </ion-list> </ion-content> and the same goes for the Javascript: import { Component } from '@angular/core'; import { NavController } from '@ionic/angular'; @Component({ selector: 'app-home', templateUrl: 'home.page.html', styleUrls: ['home.page.scss'], }) export class HomePage { constructor(private navCtrl: NavController){ } viewItem(item){ this.navCtrl.navigateForward('/items/' + item.id) } } If you’re already familiar with ECMAScript 6 or TypeScript then a lot of this will likely already look somewhat familiar. I also have an introduction to ECMAScript 6 and Angular for Ionic developers, but in this post I wanted to dive into the actual syntax we will be using in Ionic/Angular applications. Angular & Ionic Concepts Before I get into the syntax, I wanted to cover a few new general concepts you might come across when learning Ionic & Angular. Transpiling Transpiling means converting from one language to another language. Why is this important to us? Basically, ES6 gives us all this new stuff to use, but ES6 is just a standard and it is not completely supported by browsers yet. We use a transpiler to convert our ES6 code into ES5 code (i.e. “normal Javascript”) that is compatible with the browsers of today. Once ES6 is widely supported, this step wouldn’t be necessary. In the context of Ionic applications, here’s an idea of how it might look: When we run ionic serve, our code inside of [the app folder] is transpiled into the correct Javascript version that the browser understands (currently, ES5). That means we can work at a higher level using TypeScript and ES6+, but compile down to the older form of Javascript the browser needs. - Ionic Website You don’t need to worry about how this process works, this all happens automatically when you build your Ionic applications. However, it is useful to understand why you can use the fancy new JavaScript stuff. Web Components Web Components are kind of the big thing that is emerging now – they weren’t really feasible to use in Angular 1.x but the current version of Ionic is entirely based on web components. Web Components are not specific to Angular or Ionic, they are becoming a new standard on the web to create modular, self-contained, pieces of code that can easily be inserted into a web page (kind of like Widgets in WordPress). In a nutshell, they allow us to bundle markup and styles into custom HTML elements. - Rob Dodson Rob Dodson wrote a great post on Web Components where he explains how they work and the concepts behind it. He also provides a really great example, and I think it really drives the point home of why web components are useful. Basically, if you wanted to add an image slider as a web component, the HTML for that might look like this: <img-slider> <img src="images/sunset.jpg" alt="a dramatic sunset"> <img src="images/arch.jpg" alt="a rock arch"> <img src="images/grooves.jpg" alt="some neat grooves"> <img src="images/rock.jpg" alt="an interesting rock"> </img-slider> instead of> Rather than downloading some jQuery plugin and then copying and pasting a bunch of HTML into your document, you could just import the web component and add something simple like the image slider code shown above to get it working. Web Components are super interesting, so if you want to learn more about how they work (e.g. The Shadow Dom and Shadow Boundaries) then I highly recommend reading Rob Dodson’s post on Web Components. Since originally writing this article, I have also released articles of my own about various aspects of web components (and how they relate to Ionic): However, keep in mind that some of these concepts are little on the advanced side. For the most part, you don’t actually need to know much about how web components work if you just want to use them in Ionic. If you are not interested in the mechanics of it all, then most of the time it is going to be as simple as dropping a web component into your template like this: <ion-button>Click me</ion-button> This is how Ionic works today, we can use the web components that Ionic provides to us, or we can create our own custom Angular components. By adding these tags to our templates, whatever functionality they provide will be embedded right there. Ionic provides a lot of these pre-made components that we can just drop into our applications to create sleek mobile user interfaces, which is one of the reasons the Ionic framework is so powerful (Ionic does most of the work for us). Classes Classes are a concept from Object Oriented Programming. There’s quite a lot to cover on the topic of classes, and I’m not going to attempt to do that here. A good place to start understanding the concept of classes is Introduction to Object-Oriented JavaScript, but keep in mind this is the current (soon to be old) way of implementing objects in JavaScript. JavaScript has never had a class statement, so instead of creating actual classes functions were used to act as classes, but now we will be able to use an actual class syntax with ES6. In general, a class represents an object. Each class has a constructor which is called when the class is created (this is where you would run some initialisation code and maybe set up some data that the class will hold), and methods that can be called (both from within the class itself, but also by code outside of the class that wants access to something). We could have a Page object for example. That Page object could store values like title, author and date which could be initialised in the constructor. Then we could add some methods to the class like getAuthor which would return the author of the page, or setAuthor which would change the author. How the concept of classes apply to Ionic/Angular applications should start to become apparent as you begin learning and building applications, but having a bit of background on the general concept helps. Angular Syntax Now let’s take a look at some actual Angular syntax that you will be using in your Ionic applications. Before I get into that though, I think it’s useful to know about the APIs that each and every DOM element (that is, a single node in your HTML like <input>) have. Let’s imagine we’ve grabbed a single node by using something like getElementById(‘myInput’) in JavaScript. That node will have attributes, properties, methods and events. An attribute is some data you supply to the element, like this: <input id = "myInput" value = "Hey there"> This attribute is used to set an initial property on the element. Attributes can only ever be strings. A property is much like an attribute, except that we can access it as an object and we can modify it after it has been created. For example: var myInput = document.getElementById('myInput'); console.log(myInput.value); // Hey there myInput.value = "What's up?"; console.log(myInput.value); // What's up? myInput.value = new Object(); // We can also store objects instead of strings on it A method is a function that we can call on the element, like this: myInput.setValue('Hello'); An element can also fire events like focus, blur, click and so on – elements can also fire custom events. Ok, now let’s take a look at some Angular code! There’s a great Angular cheat sheet you can check out here, I’ll be using some examples from there. The examples in the following section are specific to Angular. The syntax we will be using and the functionality they achieve is something built-in to Angular, unlike the stuff up until this point which have mostly just been generic web concepts. Binding a Property to a Value <input [value]="firstName"> This will set the elements value property to the expression firstName. Note that firstName is an expression, not a string. This means that the value of the firstName variable (defined in your class) will be used here, not the literal string ‘firstName’. Binding a Function to an Event <ion-button (click)="someFunction($event)"> This will call the someFunction function and pass in the event whenever the button is clicked. You can replace click with any native or custom event you like. Rendering Expressions with Interpolations <p>Hi, {{name}}</p> This will evaluate the expression and render the result in the template. In this case, it would just display the name variable here, but you can also create other expressions like {{1+1}} which would render 2 in the template. Two Way Data Binding Angular has a concept of two-way data binding, meaning that if we updated a value in our class the change would be reflected in the template, and if we changed the value in the template it would be reflected in the class. We could achieve this two way data binding in Angular like this: <input [value]="name" (input)="name = $event.target.value"> This sets the value to the expression name and when we detect the input event we update name to be the new value that was entered. To make this easier, we can use ngModel in Angular like this to achieve the same thing: <input [(ngModel)]="name"> This syntax is just a shortcut for the same syntax we described above. Creating a Template Variable to Access an Element <p #myParagraph></p> This creates a local variable that we can use to access the element, so if I wanted to add some content into this paragraph I could do the following: <ion-button (click)="myParagraph.innerHTML = 'Once upon a time...'"> NOTE: This is just an example to show that you can access the properties of the paragraph tag using the template variable – you shouldn’t actually modify the content of elements using innerHTML in this way. Structural Directives <section * <li * We can use structural directives to modify our templates. The *ngIf directive will remove a DOM element if the condition it is attached to is not met. The *ngFor directive can loop over an array, and repeat a DOM element for each element in that array. Decorators @Component({ selector: 'app-home', templateUrl: 'home.page.html', styleUrls: ['home.page.scss'], }) Decorators like @Component, @Directive and so on allow you to attach information to your components. The example above would sit on top of a class to indicate that it is a “component” and also additional information like the selector that should be used for the tag name, the path to the template that is being used with this class, and the associated styles as well. You can read more about decorators here which is a preview from my book. Import & Export ES6 allows us to Import and Export components. Take the following component for example: import { Component } from '@angular/core'; import { NavController } from '@ionic/angular'; @Component({ selector: 'app-cool-component', templateUrl: 'cool-component.component.html', styleUrls: ['cool-component.component.scss'], }) export class MyCoolComponent { constructor(private navCtrl: NavController){ } } This component is making use of Component and NavController so it imports them. The MyCoolComponent component that is being created here is then exported. Now you would be able to access MyCoolComponent by importing it elsewhere: import { MyCoolComponent } from './components/my-cool-component/my-cool-component'; Depdendency Injection Let’s take another look at one of the examples from above to briefly touch on what “dependency injection” is: import { Component } from '@angular/core'; import { NavController } from '@ionic/angular'; @Component({ selector: 'app-home', templateUrl: 'home.page.html', styleUrls: ['home.page.scss'], }) export class HomePage { constructor(private navCtrl: NavController){ } viewItem(item){ this.navCtrl.navigateForward('/items/' + item.id) } } We first import the NavController at the top of the file. We then “inject” it through the constructor like this: constructor(private navCtrl: NavController){} By adding navCtrl as an argument in the constructor and assigning it a “type” of NavController (the thing we just imported) it will set up a reference to NavController for us on a class member called navCtrl. This means that we can then access the functionality that NavController provides using the navCtrl variable which is now accessible throughout the entire class. This is what we are doing inside of the viewItem method. Summary I hope that this article has been able to familiarise you with a few key concepts behind Ionic & Angular. Of course, there is much more to learn about both Ionic & Angular on top of the concepts we have covered in this article (and even the stuff we have covered in this article have only been touched upon lightly). You will find plenty of additional tutorials on this website, as well as in my book, to help you along the way.
https://www.joshmorony.com/ionic-2-first-look-series-new-angular-2-concepts-syntax/
CC-MAIN-2019-51
en
refinedweb
Hey! Please refrain from roasting me as I am a new developer and inexperienced coder but in my game I have a main room which has doors which load different scenes. I'd like to return to the scene with a spawn point for the player which is as if he had just opened the door from the room. I know I can do this by making multiple copies of the scene but that is tedious and not sustainable. I'd like to be able to call to assign the gameobject and correct transforms for the player through the level loader/door script in the last scene heres what I have using UnityEngine; using System.Collections; public class PlayDoor : MonoBehaviour { public int button1function; public float width; public Transform player; //transform for the player public float height; public GameObject playercontroller; //gameobject of player public string ScreenText; private bool enter; void OnTriggerEnter ( Collider other ){ enter = true; } void OnGUI (){ if (enter) { GUI.Label (new Rect (Screen.width / 2 - width, Screen.height - height, 150, 30), ScreenText); if (Input.GetKeyDown ("f")) { audio.Play (); AutoFade.LoadLevel (button1function, (1 / 2), 1, Color.black); } } } void OnLevelWasLoaded(int level) //not sure if this is even possible or the correct usage of onlevelwasloaded, but to my understanding it calls something to be done right as the next scene was loaded? { if (level == 0) Instantiate (playercontroller,transform.position, transform.rotation ); } void OnTriggerExit ( Collider other ){ enter = false; } } If anyone knows how to fix this code/have multiple spawns in a simpler way and can point me in the right direction please let me know. My apologies if this answer is self evident and I am just lost. Answer by Andres Fernandez · Nov 25, 2014 at 08:31 AM You almost have said it yourself: "I'd like to be able to call to assign the gameobject and correct transforms for the player through the level loader/door" What you need is one persistent script that holds the position of the player when the scene loads. You need to create an object, use DontDestroyOnLoad to make it persistent (also use a singleton pattern to avoid duplicating the object, just google "unity singleton") and make it have a reference to the spawn point. When using a door, store the reference to that spawn point in your persistent object. Then, whenever you reload the scene, the persistent script can instantiate the player in the correct. Spawn player to a different scene 0 Answers I'm having problems with my code 1 Answer How do I control where enemies spawn? 1 Answer How do I spawn an object under another moving object at random in C#? 1 Answer Prevent object from spawning at a spawn point twice in a row? 3 Answers
https://answers.unity.com/questions/840028/one-scene-multiple-spawn-points.html
CC-MAIN-2019-51
en
refinedweb
IN THIS ARTICLE Hello, and welcome back to our series on Android development with Fabric and PubNub! In previous articles, we’ve shown how to create a powerful chat app with Android, Fabric, and Digits. In this blog entry, we highlight 2 key technologies, Twitter Fabric (mobile development toolkit) and MapBox Kit for Fabric (a world-class open source mapping toolkit). With these technologies, we can accelerate mobile app development and build an app with several realtime data features that you will be able to use as-is or employ easily in your own data streaming applications: - Send chat messages to a PubNub channel (Chat) : using the PubNub plugin for realtime messaging and Publish/Subscribe API. - Display current and historical chat messages (Chat) : using the PubNub plugin and History API. - Display a list of what users are online (Presence) : using the PubNub plugin and Presence API. - Display users on a real-world map (Presence Map) : using the MapBox Android API. - Log in with Digits auth (Login) : using the Digits plugin for Fabric allowing the easiest mobile user authentication possible. PubNub provides a global realtime Data Stream Network with extremely high availability and low latency. With PubNub, it’s possible for data exchange between devices (and/or sensors, servers, you name it – essentially anything that can talk TCP) in less than a quarter of a second worldwide. And of that 250ms, a large part comes from the last hop – the data carrier itself! As 4G LTE (5G won’t be far away) and cloud computing gain traction, those latencies will decrease even further. Twitter Fabric is a development toolkit for Android (as well as iOS and some Web capabilities) that gives developers a powerful array of options: - Familiar dev toolkit for iOS, Android and Web Applications : a unified developer experience that focuses on ease of development and maintenance. - Brings best-of-breed SDKs all in one place : taking the pain out of third-party SDK provisioning and using new services in your application. - Streamlined dependency management : Fabric plugin kits are managed together to consolidate dependencies and avoid “dependency hell”. - Rapid application development with task-based sample code onboarding : You can access and integrate sample code use cases right from the IDE. - Automated Key Provisioning : sick of creating and managing yet another account? So were we! Fabric will provision those API keys for you. - Open Source, allowing easier understanding, extension and fixes. MapBox is a high-quality SDK on the Fabric platform enabling easy integration of mapping capabilities (iOS, Python, and HTML/JavaScript implementations are also available). In our application, this will allow us to create a new UI that displays all online users on a dynamic map in realtime. This may seem like a lot to digest. How do all these things fit together exactly? - We are building an Android app because we want to reach the most devices worldwide. - We use the Fabric plugin for Android Studio, giving us our “mission control” for plugin adoption and app releases. - We adopt Best-of-Breed services (like PubNub) rapidly by quickly integrating plugin kits and sample code in Fabric. - We use PubNub as our Global Realtime Data Stream Network to power the Chat and Presence features. - In addition, we’ll use MapBox kit for Fabric to provide the best mapping capabilities possible. As you can see in the animated GIF above, once everything is together, we have built an application very quickly that provides a great feature set with relatively little code and integration pain. This includes: - Log in with Digits phone-based authentication (or your own alternative login mechanism). - Send & receive chat messages (or whatever structured realtime data you like). - Show a list of users online (or devices/sensors/vehicles, etc.). - Display users on a dynamic world map. This all seems pretty sweet, so let’s move on to the development side… If you haven’t already, you’ll want to create a Fabric account like this: You should be on your way in 60 seconds or less! Android Studio In Android studio, as you know, everything starts out by creating a new Project. In our case, we’ve done much of the work for you – you can jumpstart development with the sample app by downloading it from GitHub, or the “clone project from GitHub” feature in Android Studio if prefer. The Git url for the sample app is: Once you have the code, you’ll want to create a Fabric Account if you haven’t already. Then, you can integrate the Fabric Plugin according to the instructions you’re given. The interface in Android Studio should look something like this, under Preferences > Plugins > Browse Repositories: Once everything’s set, you’ll see the happy Fabric Plugin on the right-hand panel: Click the “power button” to get started, and you’re on your way! MapBox SDK Integration Adding MapBox is an easy 4-step process: - Click to Install from the list of Fabric kits - Enter your MapBox keys or have Fabric create a new account - Integrate any Sample Code you need to get started - Launch the App to verify successful integration… and that’s it! Here’s a visual overview of what that looks like: PubNub SDK Integration Adding PubNub is just as easy: - Click to Install from the list of Fabric kits - Enter your PubNub keys or have Fabric create a new account - Integrate any Sample Code you need to get started - Launch the App to verify successful integration Look familiar? That’s the beauty of Fabric! Using this same process, you can integrate over a dozen different toolkits and services with Fabric. Additional Background Information This article builds on the sample application described in a previous article in the series. If you would like more information about the core features and implementation, please feel free to check it out! There is also a pre-recorded training webinar available, as well as ongoing live webinars! Navigating the Code Once you’ve set up the sample application, you’ll want to update the publish and subscribe keys in the Constants class, your Twitter API keys in the MainActivity class, your Fabric API key in the AndroidManifest.xml, and MapBox API key in strings.xml. These are the keys you created when you made a new PubNub and MapBox accounts and PubNub application in previous steps. Make sure to update these keys, or the app won’t work! Here’s what we’re talking about in the Constants class: package com.pubnub.example.android.fabric.pnfabricchat; public class Constants { ... public static final String PUBLISH_KEY = "YOUR_PUBLISH_KEY"; // replace with your PN PUB KEY public static final String SUBSCRIBE_KEY = "YOUR_SUBSCRIBE_KEY"; // replace with your PN SUB KEY ... } These values are used to initialize the connection to PubNub when the user logs in. And in the MainActivity: public class MainActivity extends AppCompatActivity { private static final String TWITTER_KEY = "YOUR_TWITTER_KEY"; private static final String TWITTER_SECRET = "YOUR_TWITTER_SECRET"; ... } These values are necessary for the user authentication feature in the sample application. And in the AndroidManifest.xml: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns: ... <application ...> ... <meta-data android: ... </application> ... </manifest> This is used by the Fabric toolkit to integrate features into the application. Here’s where to integrate MapBox in the strings.xml: <resources> ... <string name="com.mapbox.mapboxsdk.accessToken" translatable="false">YOUR_MAPBOX_TOKEN</string> ... </resources> As with any Android app, there are 2 main portions of the project – the Android code (written in Java), and the resource files (written in XML). The Java code contains 2 Activities, plus packages for each major feature: chat, presence, and mapbox. (The speech package is for another article on dictation and text-to-speech features – check it out!) The resource XML files include layouts for each activity, fragments for the 2 tabs, list row layouts for each data type, and a menu definition with a single option for “logout”. Whatever you need to do to modify this app, chances are you’ll just need to tweak some Java code or resources. In rare cases, you might add some additional dependencies in the build.gradle file, or modify permissions or behavior in the AndroidManifest.xml. In the Java code, there is a package for each of the main features: - chat : code related to implementing the realtime chat feature. - presence : code related to implementing the online presence list of users. - mapbox : helper code for working with the dynamic map UI. For ease of understanding, there is a common structure to each of these packages that we’ll dive into shortly. Android Manifest The Android manifest is very straightforward – we need 3 permissions (INTERNET, ACCESS_FINE_LOCATION, and ACCESS_NETWORK_STATE), and have 2 activities: LoginActivity (for login), and MainActivity (for the main application). You’ll also need to enable the Telemetry service for MapBox to work. The XML is available here and described in the previous article. Layouts Our application uses several layouts to render the application: - Activity : the top-level layouts for LoginActivityand MainActivity. - Fragment : layouts for our the tabs, Chat, Presence, and PresenceMap. - Row Item : layouts for the the types of ListView, Chatand Presence. These are all standard layouts that we pieced together from the Android developer guide, but we’ll go over them all just for the sake of completeness. Activity Layouts The login activity layout is pretty simple – it’s just one button for the Twitter login, and one button for the super-awesome Digits auth. The XML is available here and described in the previous article. It results in a layout that looks like this: The Main Activity features a tab bar and view pager – this is pretty much the standard layout suggested by the Android developer docs for a tab-based, swipe-enabled view: The XML is available here and described in the previous article. It results in a layout that looks like this: Fragment Layouts Ok, now that we have our top-level views, let’s dive into the tab fragments. The presence map tab layout features a dynamic map using the MapBox Map View. <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns: <com.mapbox.mapboxsdk.maps.MapView android: </RelativeLayout> Pretty easy indeed! It creates a UI that looks like this: Java Code In the code that follows, we’ve categorized things into a few areas for ease of explanation. Some of these are standard Java/Android patterns, and some of them are just tricks we used to follow PubNub or other APIs more easily. - Activity : these are the high-level views of the application, the Java code provides initialization and UI element event handling. - Pojo : these are Plain Old Java Objects representing the “pure data” that flows from the network into our application. - Fragment : these are the Java classes that handle instantiation of the UI tabs. - RowUi : these are the corresponding UI element views of the Pojo classes (for example, the senderfield is represented by an TextViewin the UI). - PnCallback : these classes handle incoming PubNub data events (for publish/subscribe messaging and presence). - Adapter : these classes accept the data from inbound data events and translate them into a form that is useful to the UI. That might seem like a lot to take in, but hopefully as we go into the code it should feel a lot easier. LoginActivity The LoginActivity is pretty basic – we just include code for instantiating the view and setting up Digits login callbacks. If you look at the actual source code, you’ll also notice code to support Twitter auth as well. The Java code is available here and described in the previous article. We attach the login event to a callback with two outcomes: the success callback, which extracts the phone number and moves on to the MainActivity to display a Toast message; and the error callback, which does nothing but Log (for now). In a real application, you’d probably want to use the Digits user ID from the digitsSession to link it to a user account in the backend. MainActivity There’s a lot going on in the MainActivity. This makes sense, since it’s the place where the application is initialized and where UI event handlers live. Take a moment to glance through the code and we’ll talk about it below. We’ve removed a bunch of code to highlight the portions that are used for our dynamic location and mapping services. public class MainActivity extends AppCompatActivity { ... @Override protected void onCreate(Bundle savedInstanceState) { ... this.mLocationHelper = new LocationHelper(this, getLocationListener()); ... this.mPresenceMapAdapter = new PresenceMapAdapter(this); ... this.mPresenceCallback = new PresencePnCallback(this.mPresenceListAdapter, this.mPresenceMapAdapter); ... adapter.setPresenceMapAdapter(this.mPresenceMapAdapter); viewPager.setAdapter(adapter); viewPager.addOnPageChangeListener(new TabLayout.TabLayoutOnPageChangeListener(tabLayout)); ... initPubNub(); initChannels(); } ... @Override protected void onStart() { super.onStart(); if (this.mLocationHelper != null) { this.mLocationHelper.connect(); } } @Override protected void onStop() { super.onStop(); if (this.mLocationHelper != null) { this.mLocationHelper.disconnect(); } } ... private final LocationListener getLocationListener() { return new LocationListener() { @Override public void onLocationChanged(final Location newLocation) { JSONObject location = new JSONObject(); if (newLocation != null) { location.tryPut("lat", String.valueOf(newLocation.getLatitude())); location.tryPut("lon", String.valueOf(newLocation.getLongitude())); } MainActivity.this.mPubnub.setState(Constants.CHANNEL_NAME, MainActivity.this.mUsername, location, new Callback() { @Override public void successCallback(String channel, Object message) { Log.v("setState", channel + ":" + message); mPresenceMapAdapter.update(new PresenceMapPojo(mUsername, newLocation.getLatitude(), newLocation.getLongitude(), DateTimeUtil.getTimeStampUtc())); } }); } }; } ... private final void initChannels() { ... this.mPubnub.hereNow(Constants.CHANNEL_NAME, true, true, this.mPresenceCallback); ... } ... } The first thing you’ll notice in this code is that we create a LocationHelper instance, which is our helper code to bridge between Google Play Location Services and our dynamic mapping feature. We instantiate the LocationHelper with a reference to the Activity context, as well as a LocationListener instance to receive location update events. The most important things happening in the onCreate() method with respect to the speech features are as follows: - Create a PresenceMapAdapter, which will be responsible for translating location and presence events into map update events. - Pass the PresenceMapAdapterinto the PresencePnCallbackso that it can receive location state change events from PubNub. - Set the PresenceMapAdapter within the PresenceMapTabFragment via the fragment manager (since it’s in charge of instantiating the fragments itself). In addition, we’ll need to add code to: - Start/stop location services during those Android application events. - Publish location updates via PubNub setState()method and update the PresenceMapAdapter accordingly. - Modify our usual PubNub hereNow()call to ask for uuidand stateinformation (the 2 truebooleans in the hereNow()call). Stay tuned for more description of the location and mapping helpers below. Chat and Presence Features The Java code for the chat and presence features is available here and here and described in the previous article. The Pojo classes are the most straightforward of the entire app – they are just immutable objects that hold data values as they come in. We make sure to give them toString(), hashCode(), and equals() methods so they play nicely with Java collections. The RowUi object just aggregates the UI elements in a list row. Right now, these just happen to be TextView instances. The TabFragment object takes care of instantiating the tab and hooking up the ListAdapter. The PnCallback is the bridge between the PubNub client and our application logic. It takes an inbound messageObject object and turns it into a Pojo value that is forwarded on to the ListAdapter instance. PresenceMapPojo The PresenceMapPojo is very similar to the PresencePojo, except that it contains Double instances for latitude and longitude instead of a presence state. public class PresenceMapPojo { private final String sender; private final Double lat; private final Double lon; private final String timestamp; ... } PresenceMapTabFragment The PresenceMapTabFragment class is a little bigger than usual because we’re initializing the MapBox Map View. We create references to the MapView and MapboxMap so we can initialize the PresenceMapAdapter at the appropriate time. The MapView is the overall Map view implementation that integrates into the Android UI. The MapboxMap is the object we will interact with to add location markers for each user. public class PresenceMapTabFragment extends Fragment { private PresenceMapAdapter presenceMapAdapter; private AtomicReference<MapView> mapViewRef = new AtomicReference<MapView>(); private AtomicReference<MapboxMap> mapboxMapRef = new AtomicReference<MapboxMap>(); @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View view = inflater.inflate(R.layout.fragment_presence_map, container, false); MapView mapView = (MapView) view.findViewById(R.id.mapboxMarkerMapView); mapViewRef.set(mapView); mapView.setAccessToken(getString(R.string.com_mapbox_mapboxsdk_accessToken)); mapView.onCreate(savedInstanceState); mapView.getMapAsync(new OnMapReadyCallback() { @Override public void onMapReady(MapboxMap mapboxMap) { mapboxMap.setStyle(Style.MAPBOX_STREETS); mapboxMapRef.set(mapboxMap); if (presenceMapAdapter != null) { presenceMapAdapter.refreshAll(); } } }); return view; } public void setAdapter(PresenceMapAdapter presenceMapAdapter) { this.presenceMapAdapter = presenceMapAdapter; presenceMapAdapter.setMapView(mapViewRef, mapboxMapRef); } ... } PresenceMapAdapter The PresenceMapAdapter follows the Android Adapter pattern, which is used to bridge data between Java data collections and user interfaces (although in this case, we’re bridging to a map view instead of a list view). Since we’re using PubNub, messages are coming in all the time, unexpected from the point of view of the UI. This adapter is invoked from the PresencePnCallback class: when a presence event comes in, the callback invokes PresenceMapAdapter.update() with a PresenceMapPojo object containing the relevant data. In the case of the PresenceMapAdapter, the backing collections are maps of uuid to PresenceMapPojo and Map MarkerView instances, so the update() and refresh() calls need to: - Update the items in the collection (actually prepend by doing put(uuid, value)). - Create, update, or remove the map marker view accordingly (this should happen on the UI thread). We use AtomicReference instances since the Map objects are initialized at different times in the application. The MapboxMap instance is created asynchronously in the Tab Fragment class when the MapView is initialized. Not too bad! public class PresenceMapAdapter { private final Context context; private final Map<String, MarkerView> latestMarker = new LinkedHashMap<>(); private final Map<String, PresenceMapPojo> latestPresence = new LinkedHashMap<>(); private AtomicReference<MapView> mapViewRef; private AtomicReference<MapboxMap> mapboxMapRef; public PresenceMapAdapter(Context context) { this.context = context; } public void setMapView(AtomicReference<MapView> mapViewRef, AtomicReference<MapboxMap> mapboxMapRef) { this.mapViewRef = mapViewRef; this.mapboxMapRef = mapboxMapRef; } public void update(final PresenceMapPojo message) { ... latestPresence.put(message.getSender(), message); if (mapboxMapRef.get() == null) { return; } ((Activity) this.context).runOnUiThread(new Runnable() { @Override public void run() { if (latestMarker.containsKey(message.getSender())) { mapboxMapRef.get().removeMarker(latestMarker.get(message.getSender())); latestMarker.remove(message.getSender()); } MarkerViewOptions markerOptions = new MarkerViewOptions() .position(new LatLng(message.getLat(), message.getLon())) .title(message.getSender()) .snippet(message.getTimestamp()); MarkerView marker = mapboxMapRef.get().addMarker(markerOptions); latestMarker.put(message.getSender(), marker); } }); } public void refresh(final PresencePojo message) { ... String presence = message.getPresence(); if ("timeout".equals(presence) || "leave".equals(presence)) { latestPresence.remove(message.getSender()); if (mapboxMapRef.get() == null) { return; } if (latestMarker.containsKey(message.getSender())) { ((Activity) this.context).runOnUiThread(new Runnable() { @Override public void run() { mapboxMapRef.get().removeMarker(latestMarker.get(message.getSender())); latestMarker.remove(message.getSender()); } }); } } } public void refreshAll() { for (PresenceMapPojo message : latestPresence.values()) { update(message); } } } PresencePnCallback The PresencePnCallback features a bunch of changes from the version in the previous article. The main difference is that we’re using the custom state API for propagating user location information. When presence events come in, we look for the “state” attribute (sometimes also called “data”) and update the corresponding user location accordingly. When “leave” or “timeout” events occur, we propagate null location events to remove the user location marker from the map. public class PresencePnCallback extends Callback { ... @Override public void successCallback(String channel, Object message) { ... try { Map<String, Object> presence = JsonUtil.fromJSONObject((JSONObject) message, LinkedHashMap.class); List<Map<String, Object>> uuids; if (presence.containsKey("uuids")) { uuids = (List<Map<String, Object>>) presence.get("uuids"); } else { uuids = ImmutableList.<Map<String, Object>>of(presence); } for (Map<String, Object> object : uuids) { ... if (object.containsKey("data") || object.containsKey("state")) { // we have a state change if (presenceMapAdapter != null) { Log.v(TAG, "presenceStateChange(" + JsonUtil.asJson(presence) + ")"); if ("timeout".equals(presenceString) || "leave".equals(presenceString)) { presenceMapAdapter.refresh(pm); } else { Map<String, Object> state = object.containsKey("data") ? (Map<String, Object>) object.get("data") : (Map<String, Object>) object.get("state"); ; if (state.containsKey("lat") && state.containsKey("lon")) { Double lat = Double.parseDouble((String) state.get("lat").toString()); Double lon = Double.parseDouble((String) state.get("lon").toString()); presenceMapAdapter.update(new PresenceMapPojo(sender, lat, lon, timestamp)); } } } } ... } } catch (Exception e) { throw Throwables.propagate(e); } } ... } The code is a little trickier than necessary because we’re using the same callback to send events to the PresenceListAdapter and PresenceMapAdapter instances. All in all though, it’s not too tough to wire everything together! Location Helper The location update feature uses Google Play Location Services, which has a friendly API to work with. There are a multitude of callbacks to implement for these APIs, which is the main reason why we broke out a LocationHelper class instead of implementing them in the MainActivity class. public class LocationHelper implements GoogleApiClient.ConnectionCallbacks, GoogleApiClient.OnConnectionFailedListener, LocationListener { private GoogleApiClient mGoogleApiClient; private LocationListener mLocationListener; public LocationHelper(Context context, LocationListener mLocationListener) { this.mGoogleApiClient = new GoogleApiClient.Builder(context) .addConnectionCallbacks(this) .addOnConnectionFailedListener(this) .addApi(LocationServices.API) .build(); this.mGoogleApiClient.connect(); this.mLocationListener = mLocationListener; } public void connect() { this.mGoogleApiClient.connect(); } public void disconnect() { this.mGoogleApiClient.disconnect(); } @Override public void onConnected(@Nullable Bundle bundle) { try { Location lastLocation = LocationServices.FusedLocationApi.getLastLocation( mGoogleApiClient); if (lastLocation != null) { onLocationChanged(lastLocation); } } catch (SecurityException e) { Log.v("locationDenied", e.getMessage()); } try { LocationRequest locationRequest = LocationRequest.create().setInterval(5000); LocationServices.FusedLocationApi.requestLocationUpdates(mGoogleApiClient, locationRequest, this); } catch (SecurityException e) { Log.v("locationDenied", e.getMessage()); } } ... @Override public void onLocationChanged(Location location) { try { Log.v("locationChanged", JsonUtil.asJson(location)); } catch (Exception e) { throw Throwables.propagate(e); } mLocationListener.onLocationChanged(location); } @Override public void onConnectionSuspended(int i) { mLocationListener.onLocationChanged(null); } } The implementation initializes Location Services, asks for the last known location, and requests dynamic location updates from the Google Play Location API. When we receive location change events, we forward them on to the LocationListener instance that we were initialized with. In this case, it’s the LocationListener created in the MainActivity that calls PubNub.setState() with the new location. Not too shabby! In a real-world implementation, you’ll probably want to pay close attention to your location accuracy and power utilization. More updates equals more battery waste, so be frugal! Conclusion Thank you so much for staying with us this far! Hopefully it’s been a useful experience. The goal was to convey our experience in how to build an app that can: - Authenticate with Twitter or Digits auth. - Send chat messages to a PubNub channel. - Display current and historical chat messages. - Display a list of what users are online. - Display a dynamic map of online users. If you’ve been successful thus far, you shouldn’t have any trouble extending the app to any of your realtime data processing needs. Stay tuned, and please reach out anytime if you feel especially inspired or need any help! Resources - - - - - - - - - - - - - - - - - -
https://www.pubnub.com/blog/android-twitter-fabric-chat-mapping-and-location/
CC-MAIN-2019-51
en
refinedweb
Hi, How do I know that my servlet will be deployed against JSR289 as opposed to JSR116? I understand that the JSR289 has new behaviors but the container can maintain backwards compatibility to support JSR116. Topic Pinned topic Configuring servlet between JSR289 and JSR116 2011-05-06T14:46:47Z | Updated on 2011-05-08T08:55:07Z at 2011-05-08T08:55:07Z by SystemAdmin Re: Configuring servlet between JSR289 and JSR1162011-05-06T17:10:00Z This is the accepted answer. This is the accepted answer.The way the container identifies an application as a JSR 289 is through the <app-name> tag in the sip.xml deployment descriptor: <app-name>xxxxxxx</app-name> If this is removed or not included, the SIP container will assume its working with a JSR 116 app. From my experience, the one thing you have to be careful of when converting a JSR 116 app to a JSR 289 application is the new Invalidate When Ready feature that is enabled with JSR 289. This can cause issues with converged applications if a SIP app session is invalidated by the SIP container before the HTTP side of the converged application has finished using it. Just something to watch out for. Re: Configuring servlet between JSR289 and JSR1162011-05-06T17:58:00Z This is the accepted answer. This is the accepted answer. This brings up a follow-up question. We have actually deployed our servlet with and without the app-name and observed two different behaviours when receiving 302 MOVED. When not using the app-name tag, i.e. JSR116, when we send INVITE and receive 302 MOVED in response, we create a new INVITE but re-using the same session. No issues sending the second INVITE. When using the app-name tag, i.e. JSR289, when we receive 302 MOVED and try to create a new INVITE on the session, we get an error that the session is already invalidated. Any ideas why the difference? Darryl Re: Configuring servlet between JSR289 and JSR1162011-05-06T19:23:35Z This is the accepted answer. This is the accepted answer.My guess is that you are getting bit by the automatic invalidation of the session. This is the invalidate when ready feature of JSR 289 that I mentioned previously. You can test this theory be disabling invalidate when ready. You can do this in the source of your application each time a SIP Application Session is created by calling the setInvalidateWhenReady(false) on the SipApplicationSession. This should prevent the container from automatically invalidating the session. Re: Configuring servlet between JSR289 and JSR1162011-05-06T20:32:36Z This is the accepted answer. This is the accepted answer. Back to the JSR289 vs JSR116, our sip.xml has the app-name tag, but we're seeing this exception when trying to deploy the servlet. com.ibm.ws.scripting.ScriptingException: org.apache.tools.ant.BuildException: /opt/IBM/WebSphere/AppServer/feature_packs/cea/sar2war_tool/build1.xml:236: /tmp/build/mysipservlet.sar/WEB-INF/sip.xml is not a valid XML document. We looked in this build1.xml line 236 and the target name of this XML blcok is validateSipDotXml116. Does this mean the container thinks it is a JSR116 servlet? Here's our sip.xml <?xml version="1.0"?> <sip-app> <app-name>com.test.MySipServlet</app-name> <display-name>My Sip Servlet</display-name> <servlet> <servlet-name>MySipServlet</servlet-name> <display-name>MySipServlet</display-name> <servlet-class>com.test.MySipServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>MySipServlet</servlet-name> <pattern> <or> <equal ignore- <var>request.method</var> <value>INVITE</value> </equal> <equal ignore- <var>request.method</var> <value>NOTIFY</value> </equal> <equal ignore- <var>request.method</var> <value>OPTIONS</value> </equal> </or> </pattern> </servlet-mapping> <!--Declare class that implements TimerListener interface--> <listener> <listener-class>com.test.MySipServlet</listener-class> </listener> </sip-app> - SystemAdmin 110000D4XK45 Posts Re: Configuring servlet between JSR289 and JSR1162011-05-08T08:55:07Z This is the accepted answer. This is the accepted answer.During deployment there is a schema validation against your sip.xml deployment descriptor to verify that this is JSR289 application, it looks like the schema validation failed. My guess is that the problem is that the root element tag does not include the correct namespace declarations. Here is an example of a correct JSR289 sip.xml file: <?xml version="1.0" encoding="UTF-8"?> <sip-app <app-name>jsr289.app</app-name> </sip-app> Try to add the namespaces to the root element to see if this solves the problem. One other thing that you can do is to enable the sip container traces, using com.ibm.ws.sip.*=all traces string, deploy the application, and look for this string: "Failed to parse Sip.xml with jsr1.1 schema validation" this will help you to understand what is wrong with your sip.xml file. Re: Configuring servlet between JSR289 and JSR1162014-11-17T11:59:08Z This is the accepted answer. This is the accepted answer. Hello,I am a java developer,I meet the same problem(sip.xml is not a valid XML document). If anyone can solve it,please tell me what can i do ,Thanks!Updated on 2014-11-17T12:00:08Z at 2014-11-17T12:00:08Z by wcy
https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014614022
CC-MAIN-2017-39
en
refinedweb
Python Data Analysis With Sqlite And Pandas As a step in me learning Data Analysis With Python I wanted to - set up a database - write values to it (to fake statistics from production) - read values from the database into pandas - do some filtering with pandas - make a plot with matplotlib. So this text describes these steps. Set up the environment To spice things up I wanted this to run on a raspberry pi (see Dagbok 20151215). I started with the Raspbian Lite image from the official Raspberry pi downloads page (see [1]). This was a fun but painfully slow way to set up the environment. I should probably have spend twice as much on the micro-SD card to get it faster if I had known this. I also first used Wifi instead of a wired ethernet connection. After running sudo raspi-config to make use of the entire storage I made an update and installed my favorite desktop environment (Xfce), a nice editor (Gnu Emacs) and the python packages I needed: sudo apt-get update sudo apt-get upgrade sudo apt-get install emacs-nox sudo apt-get install xfce4 xfce4-goodies xfce4-screenshooter sudo apt-get install sqlite3 sudo apt-get install python-scipy sudo apt-get install python-pandas sudo apt-get install ... Set up the database I wanted to use some form of SQL database, and sqlite is perfect for the job. Since I want to do this programmatically I go through python. In this short example I connect to a (new) database and create a table called sensor. conn = sqlite3.connect(FILENAME) cur = conn.cursor() sql = """ create table sensor ( sid integer primary key not null, name text, notes text );""" _ = cur.execute(sql) I fill this and the other tables with some values. In fact I do this in a very complicated way just for fun and it turned out to be very, very slow. If you feel like getting the details scroll down and read the code in the full example. sql = "insert into sensor(sid, name, notes) values (%d, '%s', '%s');" for (uid, name, notes) in [(201, 'Alpha', 'Sensor for weight'), \ (202, 'Beta', 'Sensor for conductivity'), (203, 'Gamma', 'Sensor for surface oxides'), (204, 'Delta', 'Sensor for length'), (205, 'Epsilon', 'Sensor for x-ray'), (206, 'Zeta', 'Color checker 9000'), (207, 'Eta', 'Ultra-violet detector'), ]: cur.execute(sql % (uid, name, notes)) The full example build this table, a few others and adds some 700 thousand faked sensor readings to the database. On my Raspberry Pi 2 this requires almost 6 minutes, but that's OK since it is intended to fake 7 years of sensor readings: $ time python build.py create new file /tmp/database.data insert values from line 1 [...] real 5m42.281s user 5m6.020s sys 0m11.460s Read values from the database We want to read the values and I experimented with sqlite default settings in my .sqliterc file. I tried this: $ cat ~/.sqliterc .mode column .headers on Anyway, I first try to do some database queries with the command line tool. If you have never used these before, I can only urge you to learn hand-crafting sql queries. It really speeds up debugging and experimentation to have a command line session running in parallel with the code being written. Here is a typical small session: $ sqlite3 database.data sqlite> select * from line; lid name notes ---------- ---------- ----------------------------- 101 L1 refurbished soviet line, 1956 102 L2 multi-purpose line from 1999 103 L3 mil-spec line, primary 104 L4 mil-spec line, seconday As we saw above, when we created the values, communicating through python is super-easy, so now we want these values to go into pandas for data-analysis. As it turns out: this was also very easy once you figure out how. The tricky part was to figure out that the command I needed was pandas.read_sql(query, conn). This example works fine using IPython (see Ipython First Look), to use the syntax completion features, but it also works in a regular python session, or as a script: import pandas import matplotlib.pyplot as plt import sqlite3 conn = sqlite3.connect('./database.data') limit = 1000000 query = """ select reading.rid, reading.timestamp, product.name as product, serial, line.name as line, sensor.name sensor, verdict from reading, product, line, sensor where reading.sid = sensor.sid and reading.pid = product.pid and reading.lid = line.lid limit %s; """ % limit data = pandas.read_sql(query, conn) We now have very many values in the data structure called data. My poor raspberry pi leaped from 225 MB of used memory to 465 MB, after peaking at more than 500 MB. Remember that this poor computer only has about 925 MB after giving some of it to the GPU. Let's try to take a look at it by counting the values based on what line and product they represent: print data.groupby(['line', 'product']).size() line product L1 PA 183364 L2 PA 47247 PB 57258 PC 375084 L3 PB 7971 PC 13311 L4 PD 1389 For someone who has not studied my toy example this means that on for example Line 3 we have recorder 7971 sensor readings on product of type PB and 13311 readings on products of type PC. These values are of course totally irrelevant, but imagine them being real values from a real raspberry pi project in a production site where you are responsible for improving the quality of the physical entities being produced. Then these values might mean that Line 4 is not living up to expectation and could be scrapped, or that product PB on Line L3 should instead be produced on line 4. Make a plot I made a bar-chart. But am not too happy with this example, I think the code is too verbose and bulky for a minimal example. Perhaps you can make it prettier. This nicely illustrates the power of scipy. fig, ax = plt.subplots() new_set = data.groupby('verdict').size() width = 0.8 ax.bar([1,2,3], new_set, width=width) plt.ylabel('Total') plt.title('Number of sensor readings per outcome') plt.xticks([1 + width/2, 2+ width/2,3+ width/2], ('OK', 'FAIL', 'No read')) plt.tight_layout() plt.savefig('python-data-analysis-sqlite-pandas.png') And here is the plot, as created on a Raspberry Pi: Summary Data Analysis With Python is extremely powerful and can be done, with some pain, even on a raspberry pi. Download the full example from here: [2] My next step is to pretend that the database solution does not scale to the new needs (all the new lines), so a front-end for presenting sensor readings and manually commenting on bad verdicts should be possible. We let this database be a legacy database and use Django: Python Data Presentation With Django And Legacy Sqlite Database This page belongs in Kategori Programmering See also Data Analysis With Python
http://pererikstrandberg.se/blog/index.cgi?page=PythonDataAnalysisWithSqliteAndPandas
CC-MAIN-2017-39
en
refinedweb
The select() function is available on most OS platforms. Yet even this common function has subtleties that make it harder to use than necessary. For example, consider how select() is used on page 144. Even though the most common use case for select() in a reactive server is to wait indefinitely for input on a set of socket handles, programmers must provide NULL pointers for the write and exception fd_sets and for the timeout pointer. Moreover, programmers must remember to call the sync() method on active_handles_ to reflect changes made by select(). Addressing these issues in each application can be tedious and error prone, which is why ACE provides the ACE::select() wrapper methods. ACE defines overloaded static wrapper methods for the native select() function that simplify its use for the most common cases. These methods are defined in the utility class ACE as follows: class ACE { public: static int select (int width, ACE_Handle_Set &rfds, const ACE_Time_Value *tv = 0); static int select (int width, ACE_Handle_Set *rfds, ACE_Handle_Set *wfds = 0, ACE_Handle_Set *efds = 0, const ACE_Time_Value *tv = 0); // ... Other methods omitted .... }; The first overloaded select() method in class ACE omits certain parameters and specifies a default value of no time-out value, that is, wait indefinitely. The second method supplies default values of 0 for the infrequently used write and exception ACE_Handle_Sets. They both automatically call ACE_Handle_Set::sync() when the underlying select() method returns to reset the handle count and size-related values in the handle set to reflect any changes made by select(). We devised these wrapper functions by paying attention to design details and common usages to simplify programming effort and reduce the chance for errors in application code. The design was motivated by the following factors: Simplify for the common case. As mentioned above, the most common use case for select() in a reactive server is to wait indefinitely for input on a set of socket handles. The ACE::select() methods simplify this common use case. We discuss this design principle further in Section A.3. Encapsulate platform variation. All versions of select() accept a timeout argument; however, only Linux's version modifies the timeout value on return to reflect how much time was left in the time-out period when one of the handles was selected. The ACE::select() wrapper functions declare the timeout const to unambiguously state its behavior, and include internal code to work around the nonstandard time-out-modifying extensions on Linux. We discuss this design principle further in Section A.5. Provide type portability. The ACE_Time_Value class is used instead of the native timer type for the platform since timer types aren't consistent across all platforms. The more useful and portable we make our logging server, the more client applications will want to use it and the greater its load will become. We therefore want to think ahead and design our subsequent logging servers to avoid becoming a bottleneck. In the next few chapters, we'll explore OS concurrency mechanisms and their associated ACE wrapper facades. As our use of concurrency expands, however, the single log record file we've been using thus far will become a bottleneck since all log records converge to that file. In preparation for adding different forms of concurrency therefore, we extend our latest reactive server example to write log records from different clients to different log files, one for each connected client. Figure illustrates the potentially more scalable reactive logging server architecture that builds upon and enhances the two earlier examples in this chapter. As shown in the figure, this reactive server implementation maintains a map container that allows a logging server to keep separate log files for each of its clients. The figure also shows how we use the ACE::select() wrapper method and the ACE_Handle_Set class to service multiple clients via a reactive server model. Our implementation starts by including several new header files that provide various new capabilities we'll use in our logging server. #include "ace/ACE.h" #include "ace/Handle_Set.h" #include "ace/Hash_Map_Manager.h" #include "ace/Synch.h" #include "Logging_Server.h" #include "Logging_Handler.h" We next define a type definition of the ACE_Hash_Map_Manager template, which is explained in Sidebar 15. typedef ACE_Hash_Map_Manager<ACE_HANDLE, ACE_ FILE IO *, ACE_Null_Mutex> LOG_MAP; We'll use an instance of this template to map an active ACE_HANDLE socket connection efficiently onto the ACE_FILE_IO object that corresponds to its log file. By using ACE_HANDLE as the map key, we address an important portability issue: socket handles on UNIX are small unsigned integers, whereas on Win32 they are pointers. We create a new header file named Reactive_Logging_Server_Ex.h that contains a subclass called Reactive_Logging_Server_Ex, which inherits from Logging_Server. The main difference between this implementation and the one in Section 7.2 is that we construct a log_map to associate active handles to their corresponding ACE_FILE_IO pointers efficiently. To prevent any doubt that an active handle is a stream socket the ACE_SOCK_Acceptor isn't added to the log_map. class Reactive_Logging_Server_Ex : public Logging_Server { protected: // Associate an active handle to an <ACE_FILE_IO> pointer. LOG_MAP log_map_; // Keep track of acceptor socket and all the connected // stream socket handles. ACE_Handle_Set master_handle_set_; // Keep track of read handles marked as active by <select>. ACE_Handle_Set active_read_handles_; // Other methods shown below... }; ACE provides a suite of containers classes, Including Singly and doubly linked lists Sets and multisets Stacks and queues Dynamic arrays String manipulation classes Where possible, these classes are modeled after the C++ standard library classes so that it's easy to switch between them as C++ compilers mature, The ACE_Hash_Map_Manager defines a set abstraction that associates keys to values efficiently. We use this class instead of the "standard" std::map [Aus98] for several reasons: The std::map isn't so standard—not all compilers that ACE works with Implement it, and those that do don't all implement its interface the same way. The ACE_Hash_Map_Manager performs efficient lookups based on hashing, something std::map doesn't yet support. More coverage of the ACE container classes appears in [HJS]. The open() hook method simply performs the steps necessary to initialize the reactive server. virtual int open (u_short port) { Logging_Server::open (port); master_handle_set_.set_bit (acceptor ().get_handle ()); acceptor ().enable (ACE_NONBLOCK); return 0; } The wait_for_multiple_events() hook method in this reactive server is similar to the one in Section 7.2. In this method, though, we call ACE::select(), which is a static wrapper method in ACE that provides default arguments for the less frequently used parameters to the select() function. virtual int wait_for_multiple_events () { active_read_handles_ = master_handle_set_; int width = (int) active_read_handles_.max_set () + 1; return ACE::select (width, active_read_handles_); } The handle_connections() hook method implementation is similar to the one in Reactive_Logging_Server. We accept new connections and update the log_map_ and master_handle_set_. virtual int handle_connections () { ACE_SOCK_Stream logging_peer; while (acceptor ().accept (logging_peer) != -1) { master_handle_set_.set_bit (logging_peer.get_handle ()); ACE_FILE_IO *log_file = new ACE_FILE_IO; // Use the client' s hostname as the logfile name. make_log_file (*log_file, &logging_peer); // Add the new <logging_peer>'s handle to the map and // to the set of handles we <select> for input. log_map_.bind (logging_peer.get_handle (), log_file); master_handle_set_.set_bit (logging_peer.get_handle ()); } return 0; } Note that we use the make_log_file() method (see page 85) inherited from the Logging_Server base class described in Section 4.4.1. The handle_data() hook method iterates over only the active connections, receives a log record from each, and writes the record to the log file associated with the client connection. virtual int handle_data (ACE_SOCK_Stream *) { ACE_Handle_Set_Iterator peer_iterator (active_read_handles_); for (ACE_HANDLE handle; (handle = peer_iterator ()) != ACE_INVALID_HANDLE; ) { ACE_FILE_IO *log_file; log_map_.find (handle, log_file); Logging_Handler logging_handler (*log_file, handle); if (logging_handler.log_record () == -1) { logging_handler.close (); master_handle_set_.clr_bit (handle); log_map_.unbind (handle); log_file->close (); delete log file; } } } return 0; } } When a client closes its connection, we close the corresponding Logging_ Handler, clear the handle from the master_handle_set_, remove the handle-to-file association from the log_map_, and delete the dynamically allocated ACE_FILE_IO object. Although we don't show much error handling code in this example, a production implementation should take appropriate corrective action if failures occur. Finally, we show the main() program, which is essentially identical to the ones we showed earlier, except that this time we define an instance of Reactive_Logging_Server_Ex. int main (int argc, char *argv[]) { Reactive_Logging_Server_Ex server; if (server.run (argc, argv) == -1) ACE_ERROR_RETURN ((LM_ERROR, "%p\n", "server.run()"), 1); return
http://codeidol.com/community/cpp/the-aceselect-methods/21872/
CC-MAIN-2017-39
en
refinedweb
Red Hat Bugzilla – Bug 46879 anaconda crash on graphical upgrade on vaio PCG-SR1K Last modified: 2007-04-18 12:34:21 EDT Sony Vaio PCG-SR1K (aka SR5K in US) with 128Mb RAM. This has a neomagic chipset. Attempted to upgrade RH 7.0+some updates+Ximian 1.4. Selected the default (which is graphical) mode. X flicked up for a moment and vanished again. On the console was (some formatting errors, but the numbers are correct): Probing for video card: NeoMagic 256 (laptop/notebook) Probing for monitor type: Unable to probe Probing for mouse type: generic - 3 button mouse (USB) Waiting for X-server to start...log located in /tmp/X.log .. X server started successfully. Traceback (innermost last): File /usr/bin/anaconda line 421 from splashscreen import splashScreenShow File splashscreen.py line 20 from gtk import * File gtk.py line 29 _gtk.gtk_init() Runtime error: cannot open display Install exited abnormally Attempted to get to the error log on the other virtual consoles, but sh appeared to have died so couldn't :( This seems repeatable at will. Just got exactly the same results twice more. I haven't managed to get at the X log because I keep trying to get at it just after it's given up and died, but if you need it, I can try to be faster with my fingers. *** This bug has been marked as a duplicate of 45544 ***
https://bugzilla.redhat.com/show_bug.cgi?id=46879
CC-MAIN-2017-39
en
refinedweb
tools for development..... in anad labo Project Description Release History Download Files tools for development….. in anad labo Installation pip install anad_dev Usage ecrypt e.g.: >>> from anad_dev import ecrypt >>>>> ec_pw = ecrypt.ecrypt.compr(pw) >>> ec_pw '' >>> >>> ecrypt.ecrypt.decompr(ec_pw) 'test_pass_phrase' this is just a easy encryption everybody CAN decrypt. I just use it when I don’t want to show my password at glimpse. send_gmail e.g.: from anad_dev import ecrypt, send_gmail account = '[email protected]' passwd = ecrypt.ecrypt.decompr('') body = 'test\nhehehe' subject = 'gmail test send' msg_to = '[email protected]' send_gmail.send_gmail.doIt(account, passwd, body, subject, msg_to) *** you need to turn on as your gmail account can used low level access on your gmail setting page. any questions to: Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/anad_dev/
CC-MAIN-2017-39
en
refinedweb
This FAQ provides answers to questions about Silverlight HTML Bridge. What is HTML Bridge? Why is HTML Bridge important? How do I use HTML Bridge classes in managed code? How do I call managed code from JavaScript? As well as several other questions, answers and code examples. What is HTML Bridge? HTML Bridge is a technology in Silverlight that enables you to access the HTML Document Object Model (DOM) from managed code, and to call managed code from JavaScript. Why is HTML Bridge important? A Silverlight application runs within the Silverlight Plug-in. When you need to communicate between the Silverlight application and the rest of your Web page, you would commonly use HTML Bridge to transfer information, such as events, types and data. In general, when would I use HTML Bridge? You would commonly use HTML Bridge to: · Control the HTML DOM from managed code. · Attach events to HTML controls that are handled in managed code. · Call managed code from JavaScript. More specifically, what does HTML Bridge allow me to do? HTML Bridge is an integrated set of types and methods that enable you to: · Attach Silverlight managed event handlers to HTML controls. · Attach JavaScript event handlers to Silverlight controls. · Expose complete managed types to JavaScript for scripting. · Expose individual methods of managed types to JavaScript for scripting. · Use managed containers for DOM elements such as window, document, and standard HTML elements. · Pass managed types as parameters to JavaScript functions and objects. · Return managed types from JavaScript. · Control various security aspects of your Silverlight-based application. Is it possible to hide the Silverlight plug-in and still use HTML Bridge? Yes. The Silverlight plug-in does not need to have a visible UI to access the underlying DOM of the page. What types can I use to access the HTML DOM from managed code? HTML Bridge includes three main types that enable you to access and modify the HTML DOM from managed code: · HtmlPage represents the Web page. · HtmlDocument represents the document object. · HtmlElement represents DOM elements. How do I use the HTML Bridge classes in managed code? To use the HTML Bridge classes in managed code, you must add a using or Imports statement accessing the System.Windows.Browser namespace. You can use the HtmlDocument.GetElementById method to access an HTML element from managed code if you include an ID with your HTML element. You can use the HtmlElement.GetProperty and HtmlElement.GetAttribute methods to get properties and attributes of an HTML element. Also, you can use the HtmlElement.SetProperty and HtmlElement.SetAttribute methods to set properties and attributes of an HTML element. HtmlDocument htmlDoc = HtmlPage.Document; HtmlElement htmlEl = htmlDoc.GetElementById("Input"); htmlEl.SetProperty("disabled", false); htmlEl.SetAttribute("value", "This text is set from managed code."); Dim htmlDoc As HtmlDocument = HtmlPage.Document Dim htmlEl As HtmlElement = htmlDoc.GetElementById("Input") htmlEl.SetProperty("disabled", False) htmlEl.SetAttribute("value", "This text is set from managed code.") Can you step me through an example of using the HTML Bridge classes? First, you must create a Silverlight project with a linked Web Site project. For details, see Walkthrough: Integrating XAML into an ASP.NET Web Site Using Managed Code. Next, locate your default.aspx page in your Web site project. Right-click the default.aspx page and select Set As Start Page. Also, add the following HTML to create two text boxes and a button. Enter lowercase text <input type="text" id="Input" disabled="true"/><br> Convert to uppercase <button type="button" id="Convert">Convert </button><br> Uppercase text <input type="text" id="Output"/> Add the following managed code to the page behind of the Page.xaml (Page.xaml.cs or Page.xaml.vb) file to access the HTML DOM from Silverlight. This code is in the Page constructor and runs during application startup. The code gets a reference to the Input element, sets the disabled property to false, and sets the value attribute to "This text is set from managed code." Input disabled false value public Page() { InitializeComponent(); } Public Sub New() InitializeComponent() End Sub Next, add a ScriptManager control and a Silverlight control to your default.aspx page. Make sure the Source attribute of the Silverlight control points to the .xap file located in the ClientBin of your Web site project. <asp:ScriptManager </asp:ScriptManager> <asp:Silverlight </asp:Silverlight> Now you are ready to run this example. When you run it, you will see the textbox labeled “Input” is enabled and contains text. The text is populated from managed code. What method do I use to attach a managed event handler to an HTML element? This HtmlElement.AttachEvent method enables you to attach a Silverlight event handler to an HTML element. HtmlElement derives from HtmlObject, which includes the HtmlObject.AttachEvent method. What would be a code example of attaching a managed event handler to an HTML element?The following managed code in the Page.xaml (Page.xaml.cs or Page.xaml.vb) file shows how the Convert button's onclick handler is attached to a managed event handler named OnConvertClicked by using the AttachEvent method. The code also shows the OnConvertClicked event handler, which gets the value attribute on the Input text box and sets the value attribute on the Output element to an uppercase string. // Add an event handler for the Convert button. htmlEl = htmlDoc.GetElementById("Convert"); htmlEl.AttachEvent("onclick", new EventHandler(this.OnConvertClicked)); void OnConvertClicked(object sender, EventArgs e) HtmlDocument htmlDoc = HtmlPage.Document; HtmlElement input = htmlDoc.GetElementById("Input"); HtmlElement output = htmlDoc.GetElementById("Output"); output.SetAttribute("value", input.GetAttribute("value").ToUpper()); Dim htmlEl As HtmlElement = htmlDoc.GetElementById("Input") ' Add an event handler for the Convert button. htmlEl = htmlDoc.GetElementById("Convert") htmlEl.AttachEvent("onclick", _ New EventHandler(Of HtmlEventArgs)(AddressOf OnConvertClicked)) Private Sub OnConvertClicked(ByVal sender As Object, ByVal e As HtmlEventArgs) Dim input As HtmlElement = htmlDoc.GetElementById("Input") Dim output As HtmlElement = htmlDoc.GetElementById("Output") output.SetAttribute("value", input.GetAttribute("value").ToUpper) How do I call managed code from JavaScript? · Make the managed types and members scriptable. · Include a JavaScript function that calls a managed method and returns an expected value. How do I make the managed types and members scriptable?To make the managed types and members scriptable, you need to: · Mark the managed type and members using the appropriate attributes. · Instantiate the marked managed type with the new operator. HTML Bridge provides the ScriptableType and ScriptableMember attributes. These attributes are used to mark managed types and members as scriptable. After these attributes are applied to a type or member, the type must be instantiated with the new operator, and then registered by using the HtmlPage.RegisterScriptableObject method. The type or method is then available to be called from JavaScript. What would be an example of calling managed code from JavaScript? Create a new Silverlight project with a linked Web site project. Added the following HTML to the default.aspx page: Enter the text to send <input type="text" id="Text1" /><br> Call Silverlight method <button type="button" onclick="Call_SL_Scriptable()"> Call Silverlight Scriptable method</button><br> Display the return string <input type="text" id="Text2"/> The following shows the JavaScript Call_SL_Scriptable function. The JavaScript gets the text from the Text1 text box and passes it to the managed SimpleMethodExample method. The text that is returned from the method is displayed in the Text2 text box. The SimpleMethodExample method call has the following format: · SLPlugin is a reference to the Silverlight plug-in. · strIn references the input string. · strOut references the output string. · Content is an object that represents the plug-in area. · SL_SMT is the name that is used to register the managed object show later in the managed code. · SimpleMethodExample is the managed method name that will be marked as scriptable later in the managed code. <script type="text/javascript"> function Call_SL_Scriptable() { var SLPlugin = document.getElementById("SLP"); var strIn = document.getElementById("Text1").value; var strOut = SLPlugin.Content.SL_SMT.SimpleMethodExample(strIn); document.getElementById("Text2").value = strOut; } </script> The following shows the managed code in the Page.xaml (Page.xaml.cs or Page.xaml.vb) file that the above JavaScript Call_SL_Scriptable function calls. In the below Page constructor, a new ScriptableManagedType is created and registered during application startup. This object is registered with the name SL_SMT by using the HtmlPage.RegisterScriptableObject method. Call_SL_Scriptable ScriptableManagedType SL_SMT The ScriptableManagedType class definition appears after the Page constructor. ScriptableManagedType has one method named SimpleMethodExample that has the ScriptableMember attribute. SimpleMethodExample calls the HtmlWindow.Alert method with the passed-in string. It returns a string to the JavaScript function. SimpleMethodExample // Create and register a scriptable object. ScriptableManagedType smt = new ScriptableManagedType(); HtmlPage.RegisterScriptableObject("SL_SMT", smt); public class ScriptableManagedType [ScriptableMember] public string SimpleMethodExample(string s) HtmlPage.Window.Alert("You called the Silverlight 'SimpleMethodExample'\n" + "and passed this string parameter:\n\n" + s); return "Returned from managed code: " + s; ' Create and register a scriptable object. Dim smt As ScriptableManagedType = New ScriptableManagedType() HtmlPage.RegisterScriptableObject("SL_SMT", smt) Public Class ScriptableManagedType <ScriptableMember()> _ Public Function SimpleMethodExample(ByVal s As String) As String HtmlPage.Window.Alert("You called the Silverlight 'SimpleMethodExample'" + _ vbCrLf + "and passed this string parameter:" + vbCrLf + vbCrLf + s) Return "Returned from managed code: " + s End Function End Class What is the Silverlight programming model? When using the Silverlight programming model, you can choose one of three API variations: JavaScript interpreted by the browser, managed code, or dynamic languages interpreted by the dynamic language runtime (DLR). This FAQ focuses on the more common managed API approach for creating and running Silverlight applications. You can use managed code to handle interaction with the Silverlight server control (a Silverlight 2 scenario) by using Visual Studio 2008 SP1 or Microsoft Expression Blend 2 SP1 to create a Silverlight application project that is compiled as a .xap package. This managed code contained within the .xap package can use HTML Bridge to transfer information between the Silverlight application and the rest of your Web page. For more information about creating a Silverlight application using managed code, see Walkthrough: Integrating XAML into an ASP.NET Web Site Using Managed Code. For more information about Silverlight XAP, see ASP.NET - Silverlight XAP FAQ. What is the managed Silverlight API? The Silverlight managed API defines a. For more information, see Managed API for Silverlight. What is XAML? XAML stands for Extensible Application Markup Language. XAML simplifies creating a UI for the .NET Framework. For more information, see XAML Overview. Do I have the latest Silverlight 2 plug-in or, where can I find it? What are the Compatible Operating Systems and Browsers for the Silverlight 2 plug-in? Compatible Operating Systems and Browsers What managed programming languages can I use to create a Silverlight 2 application? C#, Visual Basic What are some good links for additional information? ASP.NET Controls for Silverlight Silverlight FAQ Why Silverlight? Silverlight Resources HTML Bridge (Contains many of the above code examples)
http://blogs.msdn.com/b/erikreitan/archive/2008/12/02/silverlight-html-bridge-faq.aspx
CC-MAIN-2015-06
en
refinedweb
NAME gd_add, gd_madd -- add a field to a dirfile SYNOPSIS #include <getdata.h> int gd_add(DIRFILE *dirfile, const gd_entry_t *entry); int gd_madd(DIRFILE *dirfile, const gd_entry_t *entry, const char *parent); DESCRIPTION The gd_add() function adds the field described by entry to the dirfile specified by dirfile. The gd_madd() function behaves similarly, but adds the field as a metafield under the field indicated by the field code parent. The form of entry is described in detail on the gd_entry(3) man page. All relevant members of entry for the field type specified must be properly initialised. If entry specifies a CONST or CARRAY field, the field's data will be set to zero. If entry specifies a STRING field, the field data will be set to the empty string. When adding a metafield, the entry->field member should contain just the metafield's name, not the fully formed <parent-field>/<meta-field> field code. Also, gd_madd() ignores the value of entry->fragment_index, and instead adds the new meta field to the same format specification fragment in which the parent field is defined. Fields added with this interface may contain either literal parameters or parameters based on scalar fields. If an element of the entry->scalar array defined for the specified field type is non-NULL, this element will be used as the scalar field code, and the corresponding numerical member will be ignored, and need not be initialised. Conversely, if numerical parameters are intended, the corresponding entry->scalar elements should be set to NULL. If using an element of a CARRAY field, entry->scalar_ind should also be set. RETURN VALUE On success, gd_add() and gd_madd() return name provided in entry->field contained invalid characters. Alternately, the parent field code was not found, or was already a metafield. GD_E_BAD_DIRFILE The supplied dirfile was invalid. GD_E_BAD_ENTRY There was an error in the specification of the field described by entry, or the caller attempted to add a field of type RAW as a metafield. GD_E_BAD_INDEX The entry->fragment_index parameter was out of range. GD_E_BAD_TYPE The entry->data_type parameter provided with a RAW entry, or the entry->const_type parameter provided with a CONST or CARRAY entry, was invalid. GD_E_BOUNDS The entry->array_len parameter provided with a CARRAY entry was greater than GD_MAX_CARRAY_LENGTH. GD_E_DUPLICATE The field name provided in entry->field duplicated that of an already existing field. GD_E_INTERNAL_ERROR An internal error occurred in the library while trying to perform the task. This indicates a bug in the library. Please report the incident to the GetData developers.). NOTES GetData artifically limits the number of elements in a CARRAY to the value of the symbol GD_MAX_CARRAY_LENGTH defined in getdata.h. This is done to be certain that the CARRAY won't overrun the line when flushed to disk. On a 32-bit system, this number is 2**24. It is larger on a 64-bit system. SEE ALSO gd_add_bit(3), gd_add_carray(3), gd_add_const(3), gd_add_divide(3), gd_add_lincom(3), gd_add_linterp(3), gd_add_multiply(3), gd_add_phase(3), gd_add_polynom(3), gd_add_raw(3), gd_add_recip(3), gd_add_sbit(3), gd_add_spec(3), gd_add_string(3), gd_entry(3), gd_error(3), gd_error_string_sbit(3), gd_madd_spec(3), gd_madd_string(3), gd_metaflush(3), gd_open(3), dirfile-format(5)
http://manpages.ubuntu.com/manpages/precise/man3/gd_add.3.html
CC-MAIN-2015-06
en
refinedweb
16 August 2012 12:15 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> January LLDPE futures, the most actively traded contract on the Dalian Commodity Exchange (DCE), closed at yuan (CNY) 10,015/tonne ($1,575/tonne), up by CNY40/tonne from the previous settlement. Around 1.86m tonnes of LLDPE or 744,296 contracts for delivery in January 2013 were traded on Thursday, according to DCE data. US crude settled at $94.33/bbl on Wednesday, up by 0.96% from the previous day. ($1 = €0.81 /
http://www.icis.com/Articles/2012/08/16/9587753/china-lldpe-futures-rise-0.40-on-crudes-overnight-gains.html
CC-MAIN-2015-06
en
refinedweb
ILOGB(3) BSD Programmer's Manual ILOGB(3) ilogb, ilogbf - an unbiased exponent libm #include <math.h> int ilogb(double x); int ilogbf(float x); The ilogb() and ilogbf() functions return the exponent of the non-zero real floating-point number x as a signed integer value. Formally the re- turn value is the integral part of log_r | x , where r is the radix of the machine's floating-point arithmetic defined by the FLT_RADIX constant in #include <float.h>. ilog2(3), logb(3), math(3) The described functions conform to ISO/IEC 9899:1999 ("ISO C99"). Neither FP_ILOGB0 nor FP_ILOGBNAN is defined currently in NetBSD..
http://www.mirbsd.org/htman/i386/man3/ilogb.htm
CC-MAIN-2015-06
en
refinedweb
world, unless the real need for that. That makes practical knowledge on this keyword is minimal for most of the programmers. This article explores with very simple explanations and example to make you understand when and why transient variable will be used in Java. If you are Java programmer and want to receive the weekly updates on Java tips to improve your knowledge, please subscribe to our free newsletter here. also read: What is Serialization? If you want to understand what is transient, please learn about what is Serilization concept in Java if you are not familiar with that. Serialization is the process of making the object’s state is persistent. That means the state of the object is converted into stream of bytes and stored in a file. In the same way we can use the de-serilization concept to bring back the object’s state from bytes. This is one of the important concept in Java programming because this serialization is mostly used in the networking programming. The object’s which are needs to be transmitted through network has to be converted as bytes, for that purpose every class or interface must implement serialization interface. It is a marker interface without any methods. What is Transient?. Transient Keyword Example Look into the following example to understand the purpose of transient keyword: package javabeat.samples; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.IOException; import java.io.ObjectInputStream; import java.io.ObjectOutputStream; import java.io.Serializable; class NameStore implements Serializable{ private String firstName; private transient String middleName; private String lastName; public NameStore (String fName, String mName, String lName){ this.firstName = fName; this.middleName = mName; this.lastName = lName; } public String toString(){ StringBuffer sb = new StringBuffer(40); sb.append("First Name : "); sb.append(this.firstName); sb.append("Middle Name : "); sb.append(this.middleName); sb.append("Last Name : "); sb.append(this.lastName); return sb.toString(); } } public class TransientExample{ public static void main(String args[]) throws Exception { NameStore nameStore = new NameStore("Steve", "Middle","Jobs"); ObjectOutputStream o = new ObjectOutputStream (new FileOutputStream("nameStore")); // writing to object o.writeObject(nameStore); o.close(); // reading from object ObjectInputStream in =new ObjectInputStream( new FileInputStream("nameStore")); NameStore nameStore1 = (NameStore)in.readObject(); System.out.println(nameStore1); } } // output will be : First Name : Steve Middle Name : null Last Name : Jobs In the above example, the variable middle Name is declared as transient, so it will not be stored in the persistent storage. You can run the above example and check the results. your explanation is good pls give program explanation also Why dont you want to persist the middle name? Its just an example to explain the meaning of Transient keyword. Yes. It is just an example to use the Transient keyword. If you have any suggestions, please post it here. Thank you for the comments. In Swing/AWT i tried to serialize JPanel which is using GroupLayout. Unfortunatelly GroupLayout can not be serialized, so i try to set it's decleration with transient keyword. but this does not change anything? Why is that so, it only works if i remove grouplayout altogether, just declaring transient does not work… What could be the reason?? You can serialize JPanel properties (fields,…), and when deserialize create JPanel with the saved properties . The use of transient is to tell the serialization mechanism that the field should not be saved along with the rest of that object's state. i mean GroupLayout Excellent explanation! I like explanations like this, reveals the complete truth about the transient keyword. Serilization = Serialization”That means the state of the object is converted into stream of bytes and stored in a file.” Maybe you meant “in a byte stream”? Stopped reading on this sentence. Please correct the horrible grammar in this article. It will make people think that you are sloppy. Nicely explained but indeed please correct the grammar. Done some update on the post. It was written without much thought, A part from the ”Serilizition” , everything seems ok Thank you for pointing out the problem, I have updated it perfect thanks.. I have a question though.. new FileOutputStream(“nameStore”), new FileInputStream(“nameStore”).. where is the file extension? should it be.txt? I saw.ser extension in an article once.. I see it here completely ignored.. someone explain please.. Are you getting any error?. There is no need for providing the extension. nice explanation nice but just it’s joking nice but just it’s joking nice but just it’s joking nice but just it’s joking nice but just it’s joking It likes [XmlIgnoreAttribute] in C# when using XmlSerializer. A good explanation of transient keyword thanks:) I learned something else today, thanks…. Never came across the transient keyword so far. Krishna, Article is good and clears the concept. However Please try to resolve the grammer issue, no offence but people will not take it seriously until grammer is fixed. Hello Nirmal, Thank you for the comments. I have updated the content. Thanks, Krishna surprisingly clear article despite the grammatical issues. thx! HI, Thank you for the comments!! Thanks, Krishna what prospers xsd in the webservices ? 09654457708 you can find some more details in the below link,”” final int a=23; // This makes constants final int array[]={34,3445,5}; // but this snippet wont make constant but array Objects can make constants. how to make individual array elements constant? Dude you just made my day Thank you!! Thanks Krishna garu, It would have been good if you specify some real time uses of the “transient” keyword. Thanks, Ram Great post!
http://www.javabeat.net/what-is-transient-keyword-in-java/
CC-MAIN-2015-06
en
refinedweb
NAME mmap, munmap - map or unmap files or devices into memory SYNOPSIS #include <sys/mman.h> void * mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset); int munmap(void *start, size_t length); DESCRIPTION. PROT_EXEC Pages may be executed. PROT_READ Pages may be read. PROT_WRITE Pages may be written. PROT_NONE Pages may not be accessed. The flags parameter specifies the type of the mapped object, mapping options and whether modifications made to the mapped copy of the page are private to the process or are to be shared with other references. It has bits MAP_FIXED Do not select a different address than the one specified. If the memory region specified by start and len overlaps pages of any existing mapping(s), then the overlapped part of the existing mapping(s) will be discarded. If the specified address cannot be used, mmap() will fail. If MAP_FIXED is specified, start must be a multiple of the pagesize. Use of this option is discouraged. MAP_SHARED Share this mapping with all other processes that map this object. Storing to the region is equivalent to writing to the file. (since Linux 2.5.37) Lock the pages of the mapped region into memory in the manner of mlock(). This flag is ignored in older kernels. supported on x86-64 for 64bit programs. MAP_POPULATE (since Linux 2.5.46) Populate (prefault) pagetables. MAP_NONBLOCK (since Linux 2.5.46) Do not block on IO.). NOTES It is architecture dependent whether PROT_READ includes PROT_EXEC or not. Portable programs should always set PROT_EXEC if they intend to execute code in the new mapping. or length or offset. (E.g., they are too large, or not aligned on a PAGESIZE boundary.)). AVAILABILITY On POSIX systems on which mmap(), msync() and munmap() are available, _POSIX_MAPPED_FILES is defined in <unistd.h> to a value greater than 0. (See also sysconf(3).) CONFORMING TO SVr4, POSIX.1b (formerly POSIX.4), 4.4BSD, SUSv2. SVr4 documents additional error codes ENXIO and ENODEV. SUSv2 documents additional error codes EMFILE and EOVERFLOW.. SEE ALSO getpagesize(2), mlock(2), mmap2(2), mremap(2), msync(2), setrlimit(2), shm_open(2) B.O. Gallmeister, POSIX.4, O’Reilly, pp. 128-129 and 389-391.
http://manpages.ubuntu.com/manpages/dapper/man2/munmap.2.html
CC-MAIN-2015-06
en
refinedweb
You can subscribe to this list here. Showing 2 results of 2 On Monday, Jul 21, 2003, at 15:12 US/Eastern, Jonathan Brandmeyer wrote: >> - After installing Fink, Gtk+, and Python22, make sure the Fink >> packages >> autoconf2.5, automake, automake1.6, numeric, and pkgconfig are >> installed >> too. > > You shouldn't need Autoconf or Automake for visual unless you are > regenerating aclocal.m4, Makefile.in (in any directory except cvisual) > or configure. Did you need to do that? The current scripts are > generated with autoconf 2.53 and automake 1.6. Did you need these > programs for some other purpose? After building Visual on my iMac, I see that you're quite correct. The autoconf2.5, automake, and automake1.6 packages aren't needed. I had them for compiling XEphem. I neglected to mention that the gtkglarea package is required. So, in addition to Gtk+ (plus all its dependencies) and Python22 (plus all its dependencies), the pkgconfig, gtkglarea, and numeric packages are required to build Visual. Of course, the system-xfree86 package is required if you're using Apple's X11 implementation, as is the X11 SDK. >> - Before issuing the "./configure --prefix=/sw" command, place the >> following >> lines in your ~/.cshrc file: >> >> setenv CFLAGS -I/sw/include >> setenv LDFLAGS -L/sw/include >> setenv CXXFLAGS $CFLAGS >> setenv CPPFLAGS $CXXFLAGS >> setenv BROWSER open > > A note about CFLAGS/CXXFLAGS. Autoconf automatically sets both of > these > variables to "-O2 -g" unless they are already set in the environment. > So, with these settings, just about any Autoconf-based installation > procedure will build without optimization or debugging symbols. So, > you > will probably want to `setenv CFLAGS "-I/sw/include -O2"` (note the > double quotes) at least, and if you want to provide more useful > bugreports, add -g as well. I actually forgot about this, but it's something I can do if I want to rebuild Visual. I hope these changes get put on the VPython OS X page for potential OS X users, especially that BROWSER flag. I nearly lost sleep over that one. Cheers, Joe Heafner ----- <> Use it! Be sure to complain about having to re-register every five years and about the fact that political fundraisers are exempt! This won't be an evaluation so much as some observations. As far as I can tell, Pythoncard is completely ready-for-primetime as far as it goes. It worked exactly as advertised and documented, and was a much gentler introduction to wxPython than wxPython. Ironically, however, I ended up using wxPython because I found a wxPython code sample which was very close to what I needed. I couldn't figure out how to get the same effects under Pythoncard. However, *however*, both of those frameworks (and Tk and probably others) are considerably more complicated than vpython. Not just insofar as they incorporate a vast number of widgets and gizmos, but also in that they "externalize" the mainloop which Visualpython so elegantly internalizes. Thus, a typical wxapp begins with something like this: if __name__ == '__main__': import sys app = wxPySimpleApp() frame = TestFrame(None, sys.stdout) frame.Show(true) app.MainLoop() I think the VisualPython model is more enlightened, and I'd like to suggest that you geniuses consider hooking into wx or pythoncard "under the hood" somehow, so that Visual retains its elegance and suitability "for mere mortals" but now with hooks into a windowing toolkit. I may be missing something, but it seems to me that Visual demonstrates that there *is* a better way, and that those other frameworks, derived from C and Tk and other benighted languages are too 'tolerant' of complexity and inelegance. Having said that, my impression is that wx is very mature, and that wxPython and Pythoncard are robust and useful packages. P.S. *After* completing my project, I came across David Beech's promising-looking 'course' on python and wxPython. See also the c++-oriented wx tutorials at. Regards, ------------------------------------------ Jonathan Schull, Ph.D. Schull@... <mailto:Schull@...> age.html < Page.html> 36 Brunswick St., Rochester NY 14607 585-738-6696 cell and v-mail 585-242-9497 landline 978-246-0487 fax ------------------------------------------ > -----Original Message----- > From: Bruce Peterson [mailto:bapeters@...] > Sent: Thursday, July 17, 2003 4:23 PM > To: jon schull > Subject: RE: [Visualpython-users] WxPython + vpython > > > Please keep us posted on your evaluation. PythonCard sounds > like a good > idea -- but I am deterred by their home page which calls the current > version a prototype and admits of little or no documentation. > One of the > great aspects of Python is the excellent documentation (at > least compared > to VB which admittedly is a pretty low bar) and the fact that methods > pretty much work as expected. > > At 12:14 PM 7/17/2003, you wrote: > > >Those all sound like good ideas. > > > >I suspect that many vpython users will evenutually gravitate toward > >PythonCard as well (which I am evaluating right now). PythonCard is > >built on top of wxpython AND it seems to have quite a good Resource > >Editor for wysiwyg UI development. > > > >So if you develop in PythonCard, you'll probably get where > faster, pave > >the way for other users, spare them from any 'gotchas' that may come > >with the PythonCard layer. > > > >------------------------------------------ > >Jonathan Schull, Ph.D. > >Schull@... <mailto:Schull@...> > > > chullOnOne > >P > >age.html > >< > SchullOnOne > >Page.html> > >36 Brunswick St., Rochester NY 14607 > >585-738-6696 cell and v-mail > >585-242-9497 landline > >978-246-0487 fax > >------------------------------------------ > > > > > > > > > -----Original Message----- > > > From: visualpython-users-admin@... > > > [mailto:visualpython-users-admin@...] > On Behalf Of > > > Bruce Peterson > > > Sent: Thursday, July 17, 2003 12:15 PM > > > To: visualpython-users@... > > > Subject: [Visualpython-users] WxPython + vpython > > > > > > > > > Another approach (that I've not tried) is possibly to set up the > > > VPython program as a server and control it from another process > > > running wxPython. > > > I've written a server that uses VPython and also a VBA (from > > > Excel) routine > > > that controls the VPython display. As they are running in separate > > > processes, they don't interfere with each other. (I also made > > > the server > > > routine multi-threaded to allow it to continue updating the > > > display while > > > responding to calls from VB). I've been thinking of moving > > > the UI from > > > Excel VBA (which is a great rapid prototype platform) to > > > wxPython -- I'd be > > > interested to hear if anyone knows of problems with (or has > > > tried) the > > > multi-process approach. > > > Bruce Peterson > > > > > > At 08:32 PM 7/16/2003, you wrote: > > > > > > >Message: 2 > > > >Subject: Re: [Visualpython-users] working visualpython + > wxPython > > > >program? > > > >From: Jonathan Brandmeyer <jbrandmeyer@...> > > > >Reply-To: jbrandmeyer@... > > > >To: Patrick Bouffard <patrick.bouffard@...> > > > >Cc: "Visualpython-users@..." > > > ><Visualpython-users@...> > > > >Date: 16 Jul 2003 11:44:31 -0400 > > > > > > > >The short answer is "That's not possible," at least not with any > > > >assurance of safety. The long answer is that visual uses > > > its own event > > > >loop to asynchronously cache and process things like > > > keyboard presses > > > >and mouse clicks & drags. Changing this behavior to > say, embed a > > > >vpython window within some larger python GUI program > (using pyGTK > > > >or wxPython or tkinter for examples) will require a mechanism > > > to provide a > > > >widget to visual that gives visual the necessary functionality. > > > > > > > >That ability to write a new toolkit-specific widget in > > > python and pass > > > >it to visual is not possible right now. However, it is top > > > on my list > > > >of things to do with the new Boost-based interface that is > > > cooking in > > > >CVS. > > > > > > > >-Jonathan Brandmeyer > > > > > > > > > > > > ------------------------------------------------------- > > > This SF.net email is sponsored by: VM Ware > > > With VMware you can run multiple operating systems on a single > > > machine. WITHOUT REBOOTING! Mix Linux / Windows / Novell virtual > > > machines at the same time. Free trial click > > > here: > > > _______________________________________________ > > > Visualpython-users mailing list > > > Visualpython-users@... > > > > > > > > Bruce Peterson, Ph.D. > Terastat, Inc > Information Access Systems > Voice (425) 466 7344 > >
http://sourceforge.net/p/visualpython/mailman/visualpython-users/?viewmonth=200307&viewday=22
CC-MAIN-2015-06
en
refinedweb
06 August 2010 23:59 [Source: ICIS news] LONDON (ICIS)--The European August phenol contract price has decreased by €12/tonne ($16/tonne) from July as a result of a fall in the price of the feedstock benzene, phenol producers and consumers confirmed on Friday. The August phenol pre-discounted contract price settled at €1,162-1,202/tonne FD (free delivered) NWE (northwest ?xml:namespace> The August benzene contract was agreed at €668/tonne FOB (free on board) NWE (northwest Despite the tight supply situation, the phenol contract price has moved down because it is fully linked to a benzene price formula. “I can confirm that phenol went down €12/tonne with benzene. Phenol demand is very good and very strong. We are sold-out and are unable to take any spot orders,” said a major producer. Referring to the phenol contract price, a second European producer said: “For phenol, it’s the same story: contract down with benzene. There is still very good demand and the market remains challenging.” A major phenol buyer confirmed that its August contract price moved down €12/tonne and that its demand for downstream bisphenol A (BPA) was healthy. “The €12/tonne drop in benzene will definitely be passed on. BPA demand is still strong and phenol is tight, but I am feeling that maybe availability is a little easier,” said the buyer. A distributor of phenol said that demand remained at a high level not only in Europe, but also in “There is still good demand and producers are shipping every available molecule to The main chemical intermediates and derivatives of phenol are BPA, which is used to make polycarbonate (PC), and epoxy resins, phenolic resins, caprolactam, alkylphenols, aniline and adipic acid. ($1 = €0.76) For more on phenol and benz
http://www.icis.com/Articles/2010/08/06/9382751/europe-august-phenol-falls-12tonne-on-benzene-decrease.html
CC-MAIN-2015-06
en
refinedweb
I'm not sure how to word this, but I did my best in the title. I'm creating a GUI and for simplicity it has a frame, a tabbed pane, a panel, and a button. Originally, they started in the same class, but I want to try splitting them into two classes. So now I have this... Code java: import FirstPanel.java public class GUI extends JFrame{ private JFrame frame; private JTabbedPane tabs; private FirstPanel firstPanel; //Constructor public GUI(){ frame = new JFrame(); tabs = new JTabbedPane(); firstPanel = new FirstPanel(); tabs.add("Tab #1", firstPanel); frame.add(tabs); //Set frame visible and the size etc... } public static void main(String[] args){ GUI gui = new GUI(); gui.setVisible(true); } What is FirstPanel? It's the second class! Code java: The output I am looking for is a frame with a tab with a panel with a button. Now, when I run GUI, I get a frame with a tab with a panel but there is no button displayed. I thought at first that it was because all components need to be in the same class, but I dismiss that because I am at least getting ONE panel. I get no errors or output from this, only a missing button. Any help is greatly appreciated! Cheers! -Polaris395 :confused:
http://www.javaprogrammingforums.com/%20awt-java-swing/10070-imported-class-components-hidden-printingthethread.html
CC-MAIN-2015-06
en
refinedweb
Automating the world one-liner at a time… We like to say that PowerShell uses Verb-Noun naming. If we were accurate, we would say that PowerShell uses Verb-PrefixNoun naming. For instance: Get-WinEvent . WIN is a prefix for Windows. Every now and again people look at this and ask “what's up with command prefixes ?!?”. Actually, the question usually comes from teams that are implementing Cmdlets and don't understand why they have to use prefixes. They sometimes feel like their noun is unique or they want to own the noun or they think they may get reorganized into a different group so they are not sure what a good prefix would be (hey - is a legitimate issue for some teams – it just has nothing to do with the customer.) Prefixes mitigate naming collisions. Consider the case of the recent cmdlet Get-WinEvent which was developed by the Windows Diagnostics team (the same great team that brought you the super awesome W7 troubleshooting [which uses PowerShell heavily]). What would've happened if they'd just called it Get-Event. The PowerShell team needed to use the name Get-Event Cmdlet (I'll explain why it doesn't use a prefix later) so without a prefix these two Cmdlet names would collide. So what happens with a collision? Well one wins and the other loses. Specifically, last writer wins. So if you had a script that uses Get-Event, it is going to use would ever last defined that name. In other words your script will break. (NOTE: For the record – the story is much more complicated that that. The Diag team used the name “Get-Event” first and we OKed it with them and then sometime later we got crisp about our policy for when PowerShell needs to own generic nouns and made them change it. It was an unfortunate situation and the diag team ended up “taking one for the team” and doing a bunch of extra work so that we could deliver a great customer experience. We all owe them a big THANK YOU.) No matter what we do, naming collisions will occur so let's talk about how you deal with that when it happens. The solution is simple, you use the full Cmdlet name. Full Cmdlet name? Yes that's right, every Cmdlet that has a short and a full name. The names you use all day long are the short Cmdlet names. The full name Get-WinEvent is Microsoft.PowerShell.Commands.Diagnostics\Get-WinEvent. That’s right, if anyone comes along and collides with Get-WinEvent, you just go change all your scripts to use Microsoft.PowerShell.Commands.Diagnostics\Get-WinEvent. OUCH! Clearly this is an undesirable situation but at the end of the day these things happen so you have to have a solution. That said, it should be obvious why we want to avoid collisions as much as possible. Think through the problem. When we thought through this problem we made some guesses about how many things needed to be named. It's been many years since we had the discussion but I believe the numbers were something like this: The key observation here was that we had to minimize the possibility of short names colliding on a particular machine (not a customer site or the industry). Verb names are meant to be standard so they're always going to collide - that's a good thing (because it means you can guess which you want to do in your guess will be correct). So this is really an issue of increasing the Hamming Distance of noun names. Adding a 2-4 character prefix (indicating the team or product that owns the noun) in front of the noun solves this problem. NOTE: The side benefit of using prefixes is that it provides an easy way to find all the commands from a particular source. For instance, the Active Directory team chose the prefix “AD” so you can type: Get-Command *-AD* to find all their Cmdlets. So that is the reason why we are hardcore on the guidance to use prefixes in front of your noun names. Oh wait, I promised to tell you why PowerShell sometimes doesn't use the prefix PS in its Cmdlets. There are three reasons for this: But I hope this clarifies the importance of prefixing. As a community it is in your best interest to enforce this standard quite getting grief to those teams and companies that don't adhere to it. Enjoy! Jeffrey Snover [MSFT] Distinguished Engineer Visit the Windows PowerShell Team blog at: Visit the Windows PowerShell ScriptCenter at: we have a complete network and applications testing framework written currently in v1 (now porting to v2) and we have always used a prefix. our company name is Front Porch so we use the FP in front of the noun for most cmdlet/function names unless it is specific to a product application then we use a Three Letter Acronym to identify the product the command is targeted to. Prefixes are awesome and we need more of them so people can see were the command comes from! It would be good if the prefixes were all consistent with the .dll / snapin. When you get multiple snapins loaded and potentially hundreds of cmdlets available in a session it would make tab-completion much easier. For instance, if all the Exchange cmdlets were prefixed with 'ex' it would speed up tabbing through a list of cmdlets if you don't remember the exact name, but know it's Exchange specific (eg get-user should have been get-exuser). IMHO I would have loved to see an aliasing capability for extremely long module names e.g. Set-ModuleAlias psdiag Microsoft.PowerShell.Commands.Diagnostics Then you would disambiguate Get-WinEvent simply as psdiag\Get-WinEvent. The benefit is that you don't create a new namespace i.e. the prefixed nouns of a module. Rather, your just creating a convenient alias for an existing namespace - the module name.
http://blogs.msdn.com/b/powershell/archive/2009/09/20/what-s-up-with-command-prefixes.aspx
CC-MAIN-2015-06
en
refinedweb
The aim of this article is to acquaint you with Ruby development on NetBeans 6. Specifically, I have used JRuby for development purpose here. However. the steps outlined here are applicable elsewhere as well. If you want to use the native Ruby interpreter, please refer to this link I Before we get started download the latest development version of NetBeans 6 from here. Please download one of the versions that includes Ruby support. Adhering with the tradition we shall start of saying "Hello World" from Ruby: File:New Ruby Application File:Project Name and Location File:Hello World from Ruby File:Output Window Let is now try to understand the project explorer window that is displayed at the Left side of the Ruby project we just created. File:Project Explorer File:Adding a new Ruby Class The class Fruit is created with the initialize function: class Fruit def initialize() end end For more on initialize and other Object Oriented Features of Ruby refer this link You can also add a new Ruby script (not a class) using similar steps above. File:Installed Ruby Gems Go to Tools->Ruby Gems and select the "New Gems" TAB. Select the appropriate RubyGem and click on the Install button File:Installing Ruby Gems File:Installing Ruby Gems File:Installing Ruby Gems This will hopefully get you Started with Native Ruby on NetBeans. Ideally, I would also like to cover the following topics sometime soon:
http://wiki.netbeans.org/RubyonNetBeans6
CC-MAIN-2015-06
en
refinedweb
Several weeks ago, I tweeted that a lot of people appear to be making their software far more complex than it needs to be. I was asked to elaborate and share details. The comment was prompted by reading dozens of forum posts by desperate developers in over their heads trying to apply enormous and complex frameworks to applications that really could use simple, straightforward solutions. I've witnessed this in projects I've taken over and worked with other developers and of course am guilty of making these same mistakes myself in the past. After years of working with line of business Silverlight applications, and speaking with several of my colleagues, what I thought might be a "Top 5" turned out to be a "Top 10" list. This is not necessarily in a particular order, but here are the ten most common mistakes I see developers make when they tackle enterprise Silverlight applications (and many of these apply to applications in general, regardless of the framework). 1. YAGNI YAGNI is a fun acronym that stands for, "You aren't going to need it." You've probably suffered a bit from not following YAGNI and that is one mistake I've definitely been guilty of in the past. YAGNI is why my Jounce framework is so lightweight, because I don't want to have another bloated framework with a dozen features when most people will only ever use one or two. YAGNI is violated by the framework you build that has this awesome policy-based logging engine that allows dynamic configuration and multiple types of logs ... even though in production you always dump it to the same rolling text file. YAGNI is violated by the complex workflow you wrote using the Windows Workflow Engine ... when you could have accomplished the same thing with a dozen lines of C# code without having to spin up a workflow process. Many of the other items here evolve around YAGNI. Learn how to avoid these mistakes in my new book, Designing Silverlight Business Applications: Best Practices for Using Silverlight Effectively in the Enterprise (Microsoft .NET Development Series) A good sign that YAGNI is being violated is when the team spends three months building the "application framework" without showing a single screen. It has to be built just right with a validation engine, a configurable business rules engine, an XSLT engine and your own data access adapter. The problem is that trying to reach too far ahead into the project is a recipe for disaster. Not only does it introduce a bulk of code that may not be used and adds unnecessary complexity, but many times as the project progresses you'll find you guessed wrong and now have to go back and rewrite the code. I had a boss once who would throw up his hands in exasperation and say, "No, don't ask me for another refactoring." Often users only think they know what they want, and building software too far into the future means disappointment when they figure out they want something different after testing the earlier versions of the code. Well-written software follows the SOLID principles. Your software should be made of small, compact building blocks. You don't have to boil the ocean. Instead, assume you aren't going to need it. That doesn't mean the logger or rules engine will never come into play, it just means you defer that piece of the software puzzle until it is relevant and comes into focus. Often you will find the users weren't really sure of what they really wanted, and waiting to place the feature until the software has been proven will save you lots of cycles and unnecessary overhead. Instead of pulling in the Enterprise Library, just implement an ILogger interface. You can always put the Enterprise Library behind it or refactor it later, but you'll often find that simple implementation that writes to the debug window is all you'll ever need. 2. Sledgehammer Framework Syndrome This syndrome is something I see quite a bit with the Prism framework. I see a lot of questions about how to scratch your left ear by reaching around your back with your right arm using five layers of indirection. Contrary to what some people have suggested, I am a huge fan of Prism and in fact give quite a bit of credit to Prism concepts in Jounce. The problem is that while Prism provides guidance for a lot of features, few people learn to choose what features are relevant and most simply suck in the entire framework and then go looking for excuses to use the features. You don't have to replace all of your method calls with an event aggregator message and an application with five menu items doesn't always have to be chopped into five separate modules. I can't tell you how many projects I've seen pull in the Enterprise Library to use the exception handling block only to find there are only two classes that actually implement it and the rest do the same old try ... catch ... throw routine. Understand the framework you are using, and use only the parts that are relevant and make sense. If the source is available, don't even compile the features you aren't going to need ... there it is, YAGNI again. The problem with pulling in the whole framework is what many of you have experienced. You jump into a new code base and find out there are one million lines of code but it just seems weird when the application only has a dozen pages. You started to work on a feature and find entire project libraries that don't appear to be used. You ask someone on the team about it, and they shrug and say, "It's working now ... we're afraid if we pull out that project, something might be broken that we won't learn about until after it's been in production for 6 months." So the code stays ... which is Not A Good Thing™. 3. Everything is Dynamic The first question I often get about Jounce is "how do I load a dynamic XAP" and then there is that stare like I've grown a third eye when I ask "Do you really need to load the XAP dynamically?" There are a few good reasons to load a XAP dynamically in a Silverlight application. One is plug-in extensibility — when you don't know what add-ons may be created in the future, handling the model through dynamic XAP files is a great way to code for what you don't know. Unfortunately, many developers know exactly what their system will need and still try to code everything dynamic. "Why?" I ask. "Because it is decoupled." "But why is that good?" "Because you told me to follow the SOLID principles." The SOLID principles say a lot about clean separation of concerns, but they certainly don't dictate the need to decouple an application so much that you can't even tell what it is supposed to load. Following SOLID means you can build the application as a set of compiled projects first, and then refactor modules into separate XAP files if and when they are needed. I mentioned one reason being extensibility. The other is managing the memory footprint. Why have the claims module referenced in your XAP file if you aren't going to use it? The thing is, there isn't much of a difference between delaying the creation of the claim view and the claim view model versus adding the complexity of loading it from a separate XAP file. If you profile the memory and resources, you'll find that most satellite XAP files end up being about 1% of the total size of the application. The main application is loaded with resources like images and fonts and brushes and controls, while the satellite XAP files are lightweight views composed of the existing controls and view models. Instead of making something dynamic just because it's cool, why not build the application as an integrated piece and then tackle the dynamic XAP loading only if and when it's needed? 4. Must ... Have ... Cache Caches are great, aren't they? They just automatically speed everything up and make the world a better place by reducing the load on the network. That sounds good, but it's a tough sell for someone who actually profiles their applications and is looking to improve performance. Many operations, even with seemingly large amounts of data, end up having a negligible impact on network traffic. What's worse, a cache layer adds a layer of complexity to the application and another level of indirection. In Silverlight, the main option for a cache is isolated storage. Writes to isolated storage are slower than slugs on ice due to the layer between the isolated storage abstraction and the local file system.. Often you will find that your application is taking more time to compute whether or not a cached item has expired and de-serializing it from isolated storage than it would have taken to simply request the object from the database over the network. Obviously, there are times when a cache is required such as when you want the application to work in offline mode. The key is to build the cache based on need, and sometimes you may find that you aren't going to need it. As always, run a performance analysis and measure a baseline with and without the cache and decide based on those results whether or not the cache is necessary — don't just add one because you assume it will speed things up. 5. Optimistic Pessimistic Bipolar Synchronization Synchronization is a problem that has been solved. It's not rocket science and there are great examples of different scenarios that deal with concurrency. Many applications store data at a user scope, so the "concurrency" really happens between the user and, well, the user. If you are writing an application that works offline and synchronizes to the server when it comes online, be practical about the scenarios you address. I've seen models that tried to address "What if the user went offline on their phone and updated the record, then updated the same record on their other offline phone then they went to their desktop in offline mode and updated the same record but the time stamp on the machine is off, and now both go online - what do we do?!" The reality is that scenario has about a 1 in 1,000,000 likelihood. Most users simply aren't offline that much and when they are, it's an intermittent exception case. Field agents who work in rural areas will be offline more often, but chances are they are using your application on one offline device, not multiples. It simply doesn't make sense to create extremely complex code to solve the least likely problem in the system, especially when it's something that can be solved with some simple user interaction. Sometimes it makes more sense to simply ask the user, "You have multiple offline updates. Synch with your phone or your desktop?" rather than trying to produce a complex algorithm that analyzes all of the changes and magically constructs the target record. 6. 500 Projects I'm a big fan of planning your projects carefully. For example, it often does not make sense to include your interfaces in the same project as your implementations. Why? Because it forces a dependency. If you keep your interfaces (contracts) in a separate project, it is possible to reference them across application layers, between test and production systems, and even experiment with different implementations. I've seen this taken to the extreme, however, with applications that contain hundreds of projects and every little item is separated out. This creates a convoluted mass of dependencies and building the project can take ages. Often the separation isn't even needed because groups of classes are often going to be updated and shipped together. A better strategy is to keep a solid namespace convention in place. Make sure that your folder structure matches your namespaces and create a folder for models, contracts, data access, etc. Using this approach enables you to keep your types in separate containers based on namespaces, which in turn makes it easy to refactor them if you decide that you do need a project. If you have a project called MyProject with a folder called Model and a class called Widget, the class should live in the MyProject.Model namespace. If you find you need to move it to a separate project, you can create a project called MyProject.Model, move the class to it and update references, and you're done - just recompile the application and it will work just fine. Designing Silverlight Business Applications: Best Practices for Using Silverlight Effectively in the Enterprise (Microsoft .NET Development Series) 7. No Code Behind! This is one that amazes me sometimes. Developers will swear MVVM means "no code behind" and then go to elaborate lengths to avoid any code-behind at all. Let's break this down for a minute. XAML is simply declarative markup for object graphs - it allows you to instantiate types and classes, set properties and inject behaviors. Code-behind is simply an extension of those types and the host class that contains them. The idea of MVVM is to separate concerns - keep your presentation logic and view-specific behaviors separate from your application model so you can test components in isolation and reuse components across platforms (for example, Windows Phone 7 vs. the new WinRT on Windows 8). Having business logic in your code-behind is probably NOT the right idea because then you have to spin up a view just to engage that logic. What about something completely view-specific, however? For example, if you want to kick off a storyboard after a component in the view is loaded, does that really have to end up in a view model somewhere? Why? It's view-only logic and doesn't impact the rest of the application. I think it's fine to keep concerns separated, but if you find you are spending 2 hours scouring forums, writing code and adding odd behaviors just so you can take some UI code and shove it into a view model, you're probably doing it wrong. Code-behind is perfectly fine when it makes sense and contains code that is specific to the view and not business logic. A great example of this is navigation on Windows Phone. Because of the navigation hooks, some actions simply make sense to write in the code-behind for the view. 8. Coat of Many Colors Have you ever worked on a system where your classes wear coats of many colors? For example, you have a data base table that is mapped to an Entity Framework class, which then gets shoved inside a "business class" with additional behaviors, that is then moved into a lightweight Data Transfer Object (DTO) to send over the wire, is received using the proxy version of the DTO generated by the service client, and then pushed into yet another Silverlight class? This is way too much work just to move bits over the wire. Modern versions of the Entity Framework allow you to create true POCO classes for your entities and simply map them to the underlying data model. Silverlight produces portable code that you can share between the client and server projects, so when you define a service you can specify that the service reuses the type and de-serializes to the original class instead of a proxy object. WCF RIA Services will handle all of the plumbing for sharing entities between the client and the server for you. You know you are a victim of this problem if you ask someone to add a new entity into the mix and they moan for 10 minutes because they know it's going to take forever to build the various incarnations of the object. When there is too much ritual and ceremony involved with moving a simple entity from the server to the Silverlight client, it's time to step back and re-evaluate. In some cases it might make sense to keep the entities but use code-generation techniques like T4 templates to simplify the repetive tasks, but in many cases you can probably get away with reusing the same class across your entire stack by separating your models into a lightweight project that you reference from both sides of the network pond. 9. Navigation Schizophrenia Are you building a web site in Silverlight, or an application? The presence of the navigation framework has led many projects down the path of using URL-driven navigation for line of business applications., To me, this is a complete disconnect. Do I use URLs in Excel? What does the "back" button mean in Word? The point is that some applications are well-suited to a navigation paradigm similar to what exists on the web. There is a concept of moving forward and "going back." Many line of business applications are framed differently with nested menus, multiple areas to dock panels and complex graphs, grids, and other drill-downs. It just doesn't make sense to try to force a web-browser paradigm on an application just because it is delivered over the web. Sometimes you have no choice - for example, navigation is an intrinsic part of the Windows Phone experience, and that's fine. Just make sure you are writing navigation based on what your application needs, rather than forcing a style of navigation on your application simply because there is a template for it. 10. Everything is Aggregated The final issue is one that I've seen a few times and is quite disturbing. If you are publishing an event using the event aggregator pattern, and receiving the same event on the same class that published it, there's something wrong. That's a lot of effort to talk to yourself. The event aggregator is a great pattern that solves a lot of problems, but it shouldn't be forced to solve every problem. I've always been a fan of allowing classes communicate with peers through interfaces. I don't see an issue with understanding there is a view model that handles the details for a query, so it's OK to expose an interface to that view model and send it information instead of using the event aggregator. I still expose events on objects as well. For example, if I have a repository that is going to raise a notification when the collection changes, I'll likely expose that as an event and not as a published message. Why? Because for that change to be interesting, the consumer needs to explicitly understand the repository and have an established relationship. The event aggregator pattern works great when you have messages to publish that may have multiple subscribers, impact parts of the system that may not be explicitly aware of the class publishing the message, and when you have a plug-in model that requires messages to cross application boundaries. Specific messages that are typically shared between two entities should be written with that explicit conversation in mind. In some cases you want the coupling to show the dependency because it is important enough that the application won't work well without it. There is nothing wrong with using the event aggregator, just understand the implications of the indirection you are introducing and determine when a message is really an API call, a local notification, or a global broadcast. Conclusion I love writing line of business software. I've been doing it for well over a decade across a variety of languages ranging from C++, Java, VB6, JavaScript and XSLT (yes, I called XSLT a language, if you've worked with systems driven by XSLT you know what I mean) ... and I've been guilty of most of the items I listed here. One thing I learned quickly was that most people equate "enterprise software" to "large, clumsy, complex and difficult to maintain software" and that doesn't have to be the case. The real breakthrough for me happened when I started to focus on the tenants of SOLID software design as well as DRY (don't repeat yourself) and YAGNI. I learned to focus on simple building-block elements and working with what I know and not spending too much time worrying about what I don't know. I think you'll find that keeping the solution simple and straightforward creates higher quality software in a shorter amount of time than over-engineering it and going with all of the "cool features" that might not really be needed. If there is nothing else you take away from this article, I hope you learn two things: first, don't code it unless you know you need it, and second, don't assume - measure, spike, and analyze, but never build a feature because you THINK it will benefit the system, only build it when you can PROVE that it will. Want to avoid these mistakes? Read about lessons learned from over a decade of enterprise application experience coupled with hands-on development of dozens of line of business Silverlight applications in my new book, Designing Silverlight Business Applications: Best Practices for Using Silverlight Effectively in the Enterprise (Microsoft .NET Development Series).
http://csharperimage.jeremylikness.com/2011_09_01_archive.html
CC-MAIN-2015-06
en
refinedweb
I am working with a pre-existing database called Employee. I have three separate fields i'd like to combine into a single field, but I can't add an additional field to the pre-exisiting database. I know the proper way to combine multiple fields into one field using python is '%s - %s %s' % (self.username, self.firstname, self.lastname) However, I can't call self outside the model, or at least i'm not sure where I would call self. My end goal is to have a select box with the combined field a user can search either first, last, or account name. My current model looks like the following: class Employee(models.Model): staff_id = models.IntegerField(db_column = 'Employee_ID') status_id = models.IntegerField(db_column = 'StatusID') username = models.CharField(db_column = 'SamAccountName',primary_key = True, max_length = 31) lastname = models.CharField(db_column = 'Surname', max_length = 63) firstname = models.CharField(db_column = 'GivenName', max_length = 63) title = models.CharField(db_column = 'Title', max_length = 127) class Meta: managed = False db_table = '[Employee]' I tried to add to my model, but when I call full_username it says the field doesn't exists, which is true because there isn't a field in the database. We aren't allowed to add a new field to the database. def get_full_name(self): full_username = '%s - %s %s' % (self.username, self.firstname, self.lastname) return full_username.split() Ideally i'd want my view to look something like this (i know it wont' work as is, i'd replace that with 'full_username): activeuserlist = Employee.objects.filter(staff_id = '1').values_list('%s - %s %s' % (Employee.username, Employee.firstname, Employee.lastname), flat = True) How would I get the full name added to my view, what am I missing with my logic or where would be the correct place to put it? You can give this a try: from django.db.models.functions import Concat from django.db.models import F, Value employees = Employee.objects.annotate(full_username=Concat(F('username'), Value(' - '), F('firstname'), Value(' '), F('lastname')))\ .filter(staff_id='1', full_username__icontains='hello') The icontains bits is just a demo, with this query you can filter the result based on the combined name as well. If you have to use this everywhere, then I recommend you create your own queryset/manager in your model then put this annotation into the default queryset. After that you can use your full_username filter any where you want without having to add the annotation first. First, try formatting the string in a good way full_username = '{} - {} {}'.format(self.username, self.firstname, self.lastname) Second you can the method get_full_name() in model and call it from the model object. employee = Employee.objects.get(id=1) full_name = employee.get_full_name() That should work. :) Try using a property:) If you don't need the functionality of a QuerySet, try this activeuserlist = [ '{} - {} {}'.format(user.username, user.firstname, user.lastname) for user in Employee.objects.filter(staff_id = '1') ] If you do need a QuerySet, I think it's not possible on python-level, only on SQL-level. See this thread on annotations.'m trying to open up my chrome profile which is signed into google however am not having any luck I have created an instance of SQLAlchemy model (User): Here's my starting dataframe:
https://cmsdk.com/python/proper-way-to-combine-a-field-outside-of-my-model-in-a-preexisting-database.html
CC-MAIN-2021-43
en
refinedweb
- David Engster authored * admin/grammars/c.by (expr-binop): Add MOD. (variablearg): Add 'opt-assign'. (variablearg, varnamelist): Add default values so that it can be later expanded into the tag. (opt-stuff-after-symbol): Rename to 'brackets-after-symbol' and remove empty match. (multi-stage-dereference): Adapt to above rename. (unaryexpression): Use 'symbol' instead of 'namespace-symbol', since the latter also leads to an empty match at the end which would make this too greedy. (variablearg-opt-name): Support parsing of function pointers inside an argument list. * semantic/analyze.el (semantic-analyze-find-tag-sequence-default): Always add scope to the local miniscope for each type. Otherwise, structure tags are not analyzed correctly. Also, always search the extended miniscope even when not dealing with types. * semantic/ctxt.el (semantic-get-local-variables-default): Also try to parse local variables for buffers which are currently marked as unparseable. Otherwise, it is often impossible to complete local variables. * semantic/scope.el (semantic-analyze-scoped-types-default): If we cannot find a type in the typecache, also look into the the types we already found. This is necessary since in C++, a 'using namespace' can be dependend on a previous one. (semantic-completable-tags-from-type): When creating the list of completable types, pull in types which are referenced through 'using' statements, and also preserve their filenames. * semanitc/bovine/c.el (semantic/analyze/refs): Require. (semantic-analyze-tag-references): New override. Mainly copied from the default implementation, but if nothing could be found (or just the tag itself), drop all namespaces from the scope and search again. This is necessary for implementations which are defined outside of the namespace and only pull those in through 'using' statements. (semantic-ctxt-scoped-types): Go through all tags around point and search them for using statements. In the case for using statements outside of function scope, append them in the correct order instead of using 'cons'. This is important since using statements may depend on previous ones. (semantic-expand-c-tag-namelist): Do not try to parse struct definitions as default values. The grammar parser seems to return the point positions slightly differently (as a cons instead of a list). Also, set parent for typedefs to 'nil'. It does not really make sense to set a parent class for typedefs, and it can also lead to endless loops when calculating scope. (semantic-c-reconstitute-token): Change handling of function pointers; instead of seeing them as variables, handle them as functions with a 'function-pointer' attribute. Also, correctly deal with function pointers as function arguments. (semantic-c-reconstitute-function-arglist): New function to parse function pointers inside an argument list. (semantic-format-tag-name): Use 'function-pointer' attribute instead of the old 'functionpointer-flag'. (semantic-cpp-lexer): Use new `semantic-lex-spp-paren-or-list'. * semantic/bovine/gcc.el (semantic-gcc-setup): Add 'features.h' to the list of files whose preprocessor symbols are included. This pulls in things like __USE_POSIX and similar. * semantic/format.el (semantic-format-tag-prototype-default): Display default values if available. * semantic/analyze/refs.el (semantic-analyze-refs-impl) (semantic-analyze-refs-proto): Add 'default-value' as ignorable in call to `semantic-tag-similar-p'. * semantic/db-mode.el (semanticdb-semantic-init-hook-fcn): Always set buffer for `semanticdb-current-table'. * semantic/db.el (semanticdb-table::semanticdb-refresh-table): The previous change turned up a bug in this method. Since the current table now correctly has a buffer set, the first clause in the `cond' would be taken, but there was a `save-excursion' missing. * semantic/lex-spp.el (semantic-c-end-of-macro): Declare. (semantic-lex-spp-token-macro-to-macro-stream): Deal with macros which open/close a scope. For this, leave an overlay if we encounter a single open paren and return a semantic-list in the lexer. When this list gets expanded, retrieve the old position from the overlay. See the comments in the function for further details. (semantic-lex-spp-find-closing-macro): New function to find the next macro which closes scope (i.e., has a closing paren). (semantic-lex-spp-replace-or-symbol-or-keyword): Go to end of closing macro if necessary. (semantic-lex-spp-paren-or-list): New lexer to specially deal with parens in macro definitions. * semantic/decorate/mode.el (semantic-decoration-mode): Do not decorate available tags immediately but in an idle timer, since EDE will usually not be activated yet, which will make it impossible to find project includes. * semantic/decorate/include.el (semantic-decoration-on-includes-highlight-default): Remove 'unloaded' from throttle when decorating includes, otherwise all would be loaded. Rename 'table' to 'currenttable' to make things clearer. * ede/linux.el (cl): Require during compile. * ede/linux.el (project-linux-build-directory-default) (project-linux-architecture-default): Add customizable variables. (ede-linux-project): Add additional slots to track Linux-specific information (out-of-tree build directory and selected architecture). (ede-linux--get-build-directory, ede-linux--get-archs) (ede-linux--detect-architecture, ede-linux--get-architecture) (ede-linux--include-path): Added function to detect Linux-specific information. (ede-linux-load): Set new Linux-specific information when creating a project. (ede-expand-filename-impl): Use new and more accurate include information. * semantic/scope.el (semantic-calculate-scope): Return a clone of the scopecache, so that everyone is working with its own (shallow) copy. Otherwise, if one caller is resetting the scope, it would be reset for all others working with the scope cache as well.b0fe992f
https://emba.gnu.org/emacs/emacs/-/blob/b0fe992f3657cf3c852c00d662783354fdab343d/lisp/cedet/semantic/bovine/c.el
CC-MAIN-2021-43
en
refinedweb
Xamarin.Forms - UI Automation Testing Introduction .. Prerequisites Visual Studio 2017 or later (Windows or Mac) Setting up a Xamarin.Forms Project Start by creating a new Xamarin.Forms project. You wíll learn more by going through the steps yourself. Create a new or existing Xamarin forms(.Net standard) Project. With Android and iOS Platform. that. <?xml version="1.0" encoding="utf-8"?> <ContentPage xmlns="" xmlns: <NavigationPage.TitleView> <TitleView:TitleView/> </NavigationPage.TitleView> <StackLayout Margin="0,50,0,0" VerticalOptions="StartAndExpand"> <Image VerticalOptions="Center" Source="xamarinmonkeysbanner.png"/> <Entry AutomationId="EntryPhoneNumber" Placeholder="Enter Phone Number" x: <Button AutomationId="ValidateButton" Text="Validate" Clicked="PhoneNumberValidate"/> <Label AutomationId="ResultLabel" x: </StackLayout> </ContentPage> Configure App Here, you need to configure both iOS and Android App paths see below code. AppInitializer.cs public static IApp StartApp(Platform platform) { if (platform == Platform.Android) { return ConfigureApp.Android.Debug().ApkFile("../../../XamarinApp.Android/bin/Debug/com.companyname.xamarinapp.apk").StartApp(); } return ConfigureApp.iOS.AppBundle("../../../XamarinApp.iOS/bin/iPhoneSimulator/Debug/XamarinApp.iOS.app").StartApp(); } Android You can specify your APK path. Go to your Android project debug/bin folder you can find the com.companyname.xamarinapp.apk file you copy the file path and set the ApkFile. ConfigureApp.Android.Debug().ApkFile("../../../XamarinApp.Android/bin/Debug/com.companyname.xamarinapp.apk").StartApp(); The device can be specified using the DeviceSerial method: ConfigureApp.Android.ApkFile("../../../XamarinApp.Android/bin/Debug/com.companyname.xamarinapp.apk") .DeviceSerial("03f80ddae07844d3") .StartApp(); iOS Specify the iOS IPA path. Go to your iOS debug/bin folder you can find the AppName.iOS file you copy the file path and set the AppBundle. ConfigureApp.iOS.AppBundle("../../../XamarinApp.iOS/bin/iPhoneSimulator/Debug/XamarinApp.iOS.app").StartApp(); iOS Enable Test Cloud To run tests on iOS, the Xamarin Test Cloud Agent NuGet package must be added to the project. Once it's been added, Add following code into the AppDelegate.FinishedLaunching method: AppDelegate public override bool FinishedLaunching(UIApplication app, NSDictionary options) { Xamarin.Calabash.Start(); global::Xamarin.Forms.Forms.Init(); LoadApplication(new App()); return base.FinishedLaunching(app, options); } [TestFixture(Platform.iOS)] public class Tests { IApp app; Platform platform; public Tests(Platform platform) { this.platform = platform; } [SetUp] public void BeforeEachTest() { app = AppInitializer.StartApp(platform); } [Test] public void PhoneNumberValidateTest() { app.WaitForElement(c => c.Marked("EntryPhoneNumber")); app.EnterText(c => c.Marked("EntryPhoneNumber"),"1234567890"); app.Tap(c => c.Marked("ValidateButton")); AppResult[] results = app.WaitForElement(c => c.Marked("ResultLabel")); Assert.IsTrue(results.Any()); } } Run The test method has been passed. Download full source from Github References I hope you have understood you will learn how to test Xamarin UI Elements Xamarin.Forms. Thanks for reading. Please share your comments and feedback. Happy Coding :)
https://tutorialslink.com/Articles/XamarinForms---UI-Automation-Testing/2484
CC-MAIN-2021-43
en
refinedweb
APIMASH: Porting to Windows Phone 8 You may recall that in May my evangelist colleagues and I launched a series of workshops around a bevy of API Starter Kits designed to kickstart HTML5/JS and C# developers into building their own, unique Windows 8 applications. Since then we’ve seen a few of your efforts appear in the Windows Store including Because Chuck Said So and my very own Eye on Traffic (which leverages the TomTom Traffic Cameras API). More recently I’ve been working on porting my Windows 8 Starter Kit (that leverages both the Bing Maps and TomTom APIs) to Windows Phone, and I thought I’d share some of my experiences in doing so. In a previous post, I spent a bit of time describing the architecture of the Windows 8 version, and it was certainly my goal to get as much code reuse as possible, though of course, I knew that the form factor of a Windows Phone device would necessitate rethinking the user experience. For this post, I’ll split the discussion in two parts: the services layer that handles the API calls and the front-end user experience. Services Layer From my architecture overview, you’ll recall there are two primary class libraries involved: - APIMASH which includes the plumbing code for issuing an HTTP request, parsing the response, and serializing the payload to the required formats, and - APIMASH_APIs which includes the object models and API-specific code to obtain the desired data from the Bing and TomTom REST APIs APIMASH Changes For the base layer, APIMASH, there were two notable changes required. Deserializing Byte Stream to a Bitmap In the class to deserialize HTTP response payloads, I had a method that takes the binary response from a GET request for a jpg image and turns it into a BitmapImage. It turns out things work a bit differently between Windows 8 and Windows Phone in this regard. As you might expect, the namespaces are different (Windows.UI.Xaml.Media.Imaging for Windows 8 and System.WIndows.Media.Imaging for Windows Phone), but it takes a bit more than a namespace modification to generate the same behavior. The Windows Phone case is a near no-brainer: public static T DeserializeImage(Byte[] objBytes) { try { BitmapImage image = new BitmapImage(); using (var stream = new MemoryStream(objBytes)) { image.SetSource(stream); } return (T) ((object) image); } catch (Exception e) { throw e; } } But I’m not too proud to say the Windows 8 version had taken me a while to come up with. The primary challenge was that the argument to SetSource is an IRandomAccessStream versus a good ole System.IO.Stream Below is what you’ll find in the current Windows 8 implementation, thought I suspect I could clean this up a bit by leveraging WindowsRuntimeStreamExtensions. public static T DeserializeImage(Byte[] objBytes) { try { BitmapImage image = new BitmapImage(); // create a new in memory stream and datawriter using (var stream = new InMemoryRandomAccessStream()) { using (DataWriter dw = new DataWriter(stream)) { // write the raw bytes and store synchronously dw.WriteBytes(objBytes); dw.StoreAsync().AsTask().Wait(); // set the image source stream.Seek(0); image.SetSource(stream); } } return (T) ((object) image); } catch (Exception e) { throw e; } } HTTPClient Windows 8 has this awesome HttpClient class which provides a very simple REST-inspired interface (methods like GetAsync, PostAsync, etc.). Unfortunately, that’s not available in Windows Phone… well sort of. There is a portable HttpClient that helps bring the light and airy HttpClient class to both Windows Phone and .NET 4, and that certainly seemed like the easiest approach for my porting effort. It does require adding a few dependencies like the Base Class Libraries, but the NuGet package makes that really simple to incorporate. All seemed to work well at first, except that my cam images didn’t seem to refresh on demand; the existing image never refreshed, but other cam images were being pulled in just fine. That led me on a chase that included discovering that Windows Phone has this helpful feature called image caching (read about it here), and for a while I was convinced somehow that was the culprit. It was not. Caching was indeed the root cause, but it wasn’t image caching, it was web response caching. Since each request for a given camera was hitting the same URI, and Windows Phone was caching the results, I was continually seeing the same image served from cache. One workaround for this is to tack on a (meaningless) URI parameter that always changes, say using a GUID. Presuming the server responding to the URI just ignores the superfluous parameter, all is well. That seemed a bit hacky though and could have a performance or memory hit since the device would now be caching data it would never ever re-access. The solution, elegant in its REST-fulness was a one-liner included before the call to GetAsync httpClient.DefaultRequestHeaders.IfModifiedSince = DateTime.UtcNow; APIMASH_APIs Changes As you may know, the preferred mapping option for Windows Phone is not the Bing Maps control but rather the one provided by Nokia as part of the platform (and available on all Windows Phone 8 devices). Knowing I’d be reworking some of the map user experience code (and being somewhat time constrained), I opted to pull out the Bing Maps API layer altogether. In the Windows 8 application, the Bing Maps API fueled the search experience - which I keenly wanted to highlight since it’s a Windows 8 platform differentiator - but for the Windows Phone version, I decided to scale back and drop the entire APIMASH_BingMaps.cs implementation. A coincident casualty of that decision (in conjunction with the use of Nokia maps) were two methods that no longer served any purpose for the Windows Phone versioh. Beyond that, the changes to the APIMASH_TomTom.cs implementation were incredibly minor: Imaging namespace modifications: System.Windows.Media.Imaging versus Windows.UI.Xaml.Media.Imaging. Image source pathing: The implementation includes two canned “error message” images should something go wrong when requesting a camera image. In Windows 8, those resources are delivered as loose content and accessible via the ms-appx:/// URI scheme. For Windows Phone 8, I needed to mark them as Resource on build and reference via them via URsI like /APIMASH_APIs;component/Assets/camera404.png. It is possible to create an implementation that will use resource streams and work across both Windows 8 and Windows Phone with no changes, but for the two images I’m dealing with here, it seemed like overkill. Application resource pathing: For both targeted platforms, my code stores the developer API key for TomTom as a static resource in the App.xaml file, but there is a nuance of difference in how resources are keyed. In Windows Phone, the ResourceDictionary class can be keyed off of any object type; in Windows 8, the ResourceDictionary is a clamped down a bit to deal with KeyValuePair specifically, so there are semantic differences between the Contains methods, and the Windows RT version of the class throws in a new ContainsKey method. Frankly, it took longer to explain it just now than to address it in code! User Experience From the look of the app on Windows 8 (shown earlier in this post), it should be clear that that specific user experience wasn’t going to transfer directly to Windows Phone, and in general, you should anticipate the front-end work of a port from Windows 8 to Windows Phone (or vice-versa) to require more thought, reflection, and elbow-grease than the plumbing (e.g., business logic and services layer). There isn’t a single formula for revamping a user experience for a different form factor, and there are lots of decisions large and small that are part of the process, many very specific to your application. I will say that despite the great feature set of the Windows Phone Emulator, once I began to focus on the UX changes (versus the services implementation), I very quickly adopted a workflow of deploying and debugging right from the device. What seemed to make perfect sense when interacting with the emulator via the mouse – or even touch, since I have a touch enabled laptop – seemed awkward when actually holding the device. I settled on the following UX for the phone version of the Starter Kit, and it’s quite obviously not feature-equivalent with the Windows 8 version, but I don’t think it needs to be.Notably, there’s no list view of all the cameras, but for the context of a Windows Phone (versus Windows 8) user, I don’t think that list is particularly useful, and the location of the user in context with the map view is paramount. To get to this point in my development, I started with a File->New Windows Phone 8 application, and one-by-one pulled in some of the existing assets from the Windows 8 application, including: - Common classes like BindableBase.cs (no changes needed) and a few converter classes used in the XAML. Those do require a bit of tweaking because the implementation of IValueConverter differs between the two platforms (specifically the last parameter of both of the conversion methods). - Custom map pin classes (CurrentLocationPin.xaml and PointOfInterestPin.xaml). One change was required here, namely the interpretation of the anchor point (from absolute pixels in the Windows 8 case, to a relative 0-1 scale in Windows Phone). Keeping in mind these assets are being used on two completely different map implementations, I was pleasantly surprised! - Given the change to Nokia maps from the Bing Maps control, I expected a lot of rewiring, but since I had abstracted much of the map UI integration into a BingMapsExtensions class, the changes weren’t difficult and very localized. (I also decided to change the class name, since Bing Maps wasn’t in the picture any more!) Reworking the screens though was pretty much a rewrite with a bit of cut-and-paste; here are some of the things you should be prepared for: - Windows 8 and Windows Phone have similar but not quite identical process lifetime and navigation models, - There are (often frustrating) nuances of difference in the XAML. For example, Windows uses using in namespace references, while Windows Phone uses clr-namespace. And you cannot always rely on feature parity across analogous user interface elements. In general though, Windows Phone XAML seems a bit more feature-rich than Windows 8, so I suspect my port from Windows 8 to Windows Phone was smoother than the reverse might have been. - Windows 8 doesn’t leverage theming all that much (it’s either light or dark), but in Windows Phone you typically will want to tap into styles based on the user-selected theme, leveraging resources like PhoneAccentBrush. - Be sure to check out the Windows Phone Toolkit for those things you can’t believe aren’t there by default :) Beyond that, I’ll leave it to you to crack open both projects and take a look at how I handle specific elements of the implementation. Whoa! What about these Portable Class Libraries I hear so much about? For those of you keenly following the evolution and merging of the Windows 8 and Windows Phone development experiences, you might be wondering why I haven’t mentioned anything about. Wouldn’t that have saved me a ton of time? Perhaps for the backend services implementation, but this wasn’t really a greenfield project, so did it (or does it in general) make sense to fix what ain’t broke: the Windows 8 version? For a real app that I’d expecte to evolve on both platforms, it might be worth going back and doing so to reduce the amount of duplicate code. But I'm being a bit disingenuous here as well, I pretty much knew I’d be evolving this to Windows Phone soon after starting the Windows 8 version, so the real answer is a bit more tactical and job-related. Portable class libraries are a Visual Studio Professional (and above) feature; in the interest of reaching as large an audience as possible with our Starter Kits, we wanted to make it as free and easy as downloading Visual Studio Express and hitting F5. If you do want to learn more about Portable Class Libraries and techniques for sharing code between Windows 8 and Windows Phone projects, there are a number of great resources, including: Channel 9 JumpStart Series Dev Center (Windows Phone 8 and Windows 8 app development) Real Talk: Sharing Code Between the Windows & Windows Phone Platforms (Build 2013)
https://docs.microsoft.com/en-us/archive/blogs/jimoneil/apimash-porting-to-windows-phone-8
CC-MAIN-2021-43
en
refinedweb
Memory Layout of a C Program Advertisement Memory Management is one of the most important topics for a Programmer, and so understanding the Memory Layout of a C Program and Memory Layout of a Process becomes essential. For high-level languages such as Java, Python, C#, Memory is partially managed by the language itself as it has a Garbage Collector, which deallocates and frees the allocated memory while not in use. But there is no such garbage collector in C & C++, and so the programmer must manually release the allocated memory. The C program is first compiled and translated to an executable object file. When the executable is run, it takes the main memory area, i.e. the RAM, and the CPU runs the executable instructions. If you are not aware of the processes involved in compiling the C program from source to binary, read C Program Compilation Process. The Typical Memory Layout of a C Program consists of the following segments: - Command Line Arguments - Stack - Heap - Uninitialized Data Segment (BSS) - Initialized Data Segment - Text/Code Segment The above layout segments can be broadly classified into two: - Static Memory Layout – Text/Code, Data Segments - Dynamic Memory Layout – Stack & Heap The C Program executable already contains some of the segments, and some are built dynamically at runtime. First Let’s Discuss each segment of the Memory Layout in detail: Static Memory Layout The Static Memory layout consists of three segments, Text/Code segment, Initialized, and Uninitialized (bss) Data Segment. These three segments are already present in the final executable object file of the c program and are directly copied to the main memory layout. We can use the size tool to take a look at the static memory layout of the c program executable object file. Let’s take a look: #include int main() { return 0; } When the executable object file is analyzed with the size command, the static memory layout is displayed. $ gcc .\cprogram.c -o cprogram.out $ size cprogram.out text data bss dec hex filename 1418 544 8 1970 7b2 cprogram.out Text/Code Segment Text or Code Segment includes the machine-level instructions for the final executable object file. This section is one of the key parts of the static memory structure as it includes the program’s central logic. The text segment in the memory structure is below the heap and the data segment. This layout is chosen to shield the Text section from overwriting if the stack or heap overflows. In the text section of the final executable object file, we only have read and execute permissions and no write permissions. This is done to prevent accidental modifications to the corresponding assembly code. You can use the objdump command to dump various parts of the executable object file. In this section, the Text/Code Segment will be dumped using the objdump tool. One point to remember here is that the objdump command will only run on Linux and not on any other platform. $ objdump -S cprogram.out cprogram.out: file format elf64-x86-64 Disassembly of section .init: 0000000000001000 <_init>: 1000: f3 0f 1e fa endbr64 1004: 48 83 ec 08 sub $0x8,%rsp Disassembly of section .plt: 0000000000001020 <.plt>: 1020: ff 35 a2 2f 00 00 pushq 0x2fa2(%rip) 1026: f2 ff 25 a3 2f 00 00 bnd jmpq *0x2fa3(%rip) 102d: 0f 1f 00 nopl (%rax) Disassembly of section .text: 0000000000001040 <_start>: 1040: f3 0f 1e fa endbr64 1044: 31 ed xor %ebp,%ebp 1046: 49 89 d1 mov %rdx,%r9 1049: 5e pop %rsi 0000000000001129 The output above is shortened, as we only need to see the main block. The main block in the above objdump output is the corresponding assembly code for the main function of the C Program. Initialized Data Segment All initialized global and static variables are stored in this section. The data segment has read and write permissions. This allows the program to execute and change the value of the variable in the data segment at runtime. Let’s change the previous C program and add some global variables. #include int a=10; char ch='A'; int arr[5] = {1,2,3,4,5}; int main() { return 0; } Find the size of the cprogram.out and compare it to the previous size. $ size cprogram.out text data bss dec hex filename 1418 580 8 1975 7b7 cprogram.out Previously the size of the data segment was 544 bytes and after initializing global variables it increased to 580 bytes. Uninitialized Data Segment (BSS) The Uninitialized Data Section, also known as the “bss” segment, was named after an old assembly operator that stands for “block started by the symbol“. The BSS Segment contains all the uninitialized global variables and static variables. This segment is placed above the data segment in the memory layout. This segment also has both the read and write permissions. #include int a,b,c; char ch; int main() { return 0; } Find the size of the cprogram.out and compare it to the previous size. $ size cprogram.out text data bss dec hex filename 1418 544 24 1986 7c2 cprogram.out This time size of the bss segment increased from 8 bytes to 24 bytes, because we declared global variables but didn’t initialize it. Dynamic Memory Layout This is the runtime memory of the process and exists as long as the process is running. Stack Program execution can take place without a heap memory, but not without a stack segment. This illustrates the importance of stack memory for the execution of a program. The stack is a region of memory in the process’s virtual address space where data is added or removed in the Last-in-First-out (LIFO) order. A new stack-frame is added to the stack memory when a new function is invoked. The corresponding stack-frame is removed when the function returns. One thing to note here is that every function has its own stack-frame, also known as Activation record. The size of the stack is variable since it depends on the size of the local variables, parameters, and function calls. The Stack grows from a higher address to a lower address. Every process has its own fixed/configurable stack memory. The stack memory is reclaimed by the OS when the process terminates. Using the ulimit -s command, we can see the max size of stack memory in the Linux system. $ ulimit -s 8192 Use ulimit -a command to list all the flags for the ulimit command. $ ulimit -a -t: cpu time (seconds) unlimited -f: file size (blocks) unlimited -d: data seg size (kbytes) unlimited -s: stack size (kbytes) 8192 -c: core file size (blocks) 0 -m: resident set size (kbytes) unlimited -u: processes 43585 -n: file descriptors 1024 -l: locked-in-memory size (kbytes) 65536 -v: address space (kbytes) unlimited -x: file locks unlimited -i: pending signals 43585 -q: bytes in POSIX msg queues 819200 -e: max nice 0 -r: max rt priority 0 -N 15: unlimited To find the limits of a running process in Linux, use cat /proc//limits command. Create a C program with an infinite loop. int main() { while(1){} } Run the executable object file in the background, it will give us the process id of the process. Use the process id to get the limits of the process. Kill the background running process, or it will run indefinitely. $ ./infi.out & [1] 4853 $ cat /proc/4853/limits Limit Soft Limit Hard Limit Units Max cpu time unlimited unlimited seconds Max file size unlimited unlimited bytes Max data size unlimited unlimited bytes Max stack size 8388608 unlimited bytes Max core file size 0 unlimited bytes Max resident set unlimited unlimited bytes Max processes 43585 43585 processes Max open files 1024 1048576 files Max locked memory 67108864 67108864 bytes Max address space unlimited unlimited bytes Max file locks unlimited unlimited locks Max pending signals 43585 43585 signals Max msgqueue size 819200 819200 bytes Max nice priority 0 0 Max realtime priority 0 0 Max realtime timeout unlimited unlimited us $ kill 4853 [1] + 4853 terminated ./infi.out Let’s find the Stack Size using C Program. #include #include #include #include #include<sys/resource.h> int main() { struct rlimit lim; if(getrlimit(RLIMIT_STACK,&lim)==0) { printf("Soft Limit = %ld\n",lim.rlim_cur); printf("Max Stack Size = %ld\n",lim.rlim_max); } else printf("%s\n", strerror(errno)); return 0; } $ ./cprogram.out Soft Limit = 8388608 Max Stack Size = -1 Let’s now see the stack memory layout and what the stack frame for a function contains. Stack Memory Layout A Stack frame contains four types of information: - Parameters passed to the function (Reverse Order) - The return address of the caller function. - The base pointer of the caller function - Local variables of the function The size of the return address and base pointer is 4 bytes for 32-bit architecture and 8 bytes for 64-bit architecture. #include int sum(int a, int b) { return a + b; } float avg(int a, int b) { int s = sum(a, b); return (float)s / 2; } int main() { int a = 10; int b = 20; printf("Average of %d, %d = %f\n", a, b, avg(a, b)); return 0; } Below is a comprehensive illustration of how the stack memory would look like as we execute the above C program. The frame that is being executed is always the topmost frame of the stack. The pointer to the top-most frame in the stack is called the Frame Pointer or Base Pointer. The Base Pointer stores the starting address in callee’s stack frame where the caller’s base pointer value is copied. The pointer to the top of the stack is called the Stack Pointer. Stack Pointer stores the address of the top of the stack memory. The stack memory has automatic memory management for both allocation and de-allocation. The programmer has no control over the memory of the stack. When constructing a stack-frame, the local variable of the function is allocated and de-allocated when the stack-frame is about to pop up from the stack segment. This also defines the scope of a variable. Stack Error Conditions Let’s take a look at what errors we can face when dealing with the stack. Stack Overflow This is an error when a program has a long sequence of function calls, and the program stack expands past the full fixed size, resulting in a stack overflow. What causes stack overflow condition: - Recursive function calls - Declaration of large arrays Stack Memory has a limited size and thus it is not recommended to store large objects. Stack Corruption Stack corruption is a condition in which we corrupt the stack data by copying more data than the actual memory capacity. Example: #include #include int copy(char *argv) { char name[10]; strcpy(name, argv); } int main(int argc, char **argv) { copy(argv[1]); printf("Exit\n"); return 0; } There is a copy function in the above code where a name array of 10 bytes of the char data type has been specified. And we’re copying data from the argument on the command line. If the user passes a string with a size larger than 10 bytes, the stack frame will overwrite another block and this will lead to stack corruption. Heap As we’ve seen, the stack has a limited size that doesn’t allow us to work with big data, and we don’t have control over it. This problem is solved by the Heap memory, a continuous part of virtual address space where the allocation and de-allocation of memory can be performed in real-time. Unlike stack memory there is no such automatic memory management and the allocation and de-allocation of heap memory is the primary responsibility of the programmer. To harness the heap memory, we need the Glibc API, which provides the functions to allocate and de-allocate the heap memory. The malloc()/ calloc() function is used to assign a memory block from the heap segment and the free() function is used to restore the memory to the heap segment that was assigned by the malloc()/ calloc() function. Under the hood, the malloc() and calloc() functions use the brk() and sbrk() system calls to allocate and de-allocate the heap memory for a process. These functions malloc, calloc, realloc, and free are defined in the header file, stdlib.h. One factor to keep in mind is that we can only use pointers to address a heap memory block. Now let’s see an example of how the heap memory is allocated and de-allocated. #include #include int func() { int a = 10; int *aptr = &a; int *ptr = (int *)malloc(sizeof(int)); *ptr = 20; printf("Heap Memory Value = %d\n", *ptr); printf("Pointing in Stack = %d\n", *aptr); free(ptr); } int main() { func(); return 0; } The image above is a simple description of how a heap of memory is accessed using a malloc() function call. The picture indicates that the value of integer 20 is stored in the 4 Byte of heap area allocated by the malloc() function, but that is not really true. The value is actually stored in the physical memory, i.e. the RAM, the virtual address of the heap segment is converted to the physical address using the MMU (Memory Management Unit), and the value is written or accessed. The heap memory block has no scope, so the programmer has to manually free the reserved space from the heap. Hope you like it. Learn more interesting stuff. nice work. love the content , must have been a lot of hardwork . I appreciate your work.keep up the good work Really good, i would non-cautiously say this is the best blog in the net regarding memory layout in c, i’ll make sure to point any one needed an understanding here,
https://hackthedeveloper.com/memory-layout-c-program/
CC-MAIN-2021-43
en
refinedweb
A helper that generates composite images for the volume ray cast mapper. More... #include <vtkFixedPointVolumeRayCastCompositeShadeHelper.h> A helper that generates composite images for the volume ray cast mapper. This is one of the helper classes for the vtkFixedPointVolumeRayCastMapper. It will generate composite images using an alpha blending operation. This class should not be used directly, it is a helper class for the mapper and has no user-level API. Definition at line 39 of file vtkFixedPointVolumeRayCastCompositeShadeHelper.h. Definition at line 44 of file vtkFixedPointVolumeRayCastCompositeSh.
https://vtk.org/doc/nightly/html/classvtkFixedPointVolumeRayCastCompositeShadeHelper.html
CC-MAIN-2021-43
en
refinedweb
I am an aspiring data scientist and entrepreneur Well, not this Luigi. I’m not a very smart person, on a scale of 1 to 10 I would rank myself as a 5 on a good day, on other days I struggle to not walk into glass doors. In order for someone such as myself to understand most things, they need to be dumbed down and explained in the simplest way possible with relatable analogies to help make it more relatable and devoid of any jargon. I will try to the best of my abilities to explain what luigi is, how it works and what it can do in the simplest way possible. Think of a domino set, you can for example build a spiral with different tiles, the distance between each tile must be such that when one tile falls it triggers the next tile to fall and so on until all the tiles have fallen. Each tile represents a task, you need each task to run to ultimately get your desired output. At least for the purposes I used the package for, I would liken Luigi to a domino set, that allows me to stitch different tasks. When one task runs and is complete I trigger the next task to take output from the previous task and run till completion. According to the Luigi documentation it ‘helps you build complex pipelines of batch jobs’. Much like setting up a domino set such that n number of branches appear at a certain point and once the falling tiles reach that point the branches created fall parallel to each other, luigi allows you to parallelize things that need to be done in a given task. In a given pipeline you may have tasks- which are the domino tiles, you also have task parameters, which is the input the task takes. So for example when you create a function in python, the function may require an argument or n number of arguments. I would liken Luigi parameters to these arguments. To explain how these parameters are assigned its important to explain that there are typically three functions in a task/class: Luigi also provides a neat web interface that enables you to view pending, running and completed tasks along with a visualization of the running workflow and any dependencies required for each task in your pipeline. You can access this by running luigid on your terminal and opening the interface through the url:. Pipeline As explained, the pipeline is multiple tasks that have been stitched together using Luigi. The process starts off by taking input in the form of the csv file extracted from my data source, looks for the unique signup states and creates a separate csv for each unique state identified- these are four in total. Each csv has data specifically related to activities that occurred under that state. For the purposes of this assignment I named this task ‘separate_csv’ as it performs its namesake. Since I want to find out whether a given user moved from one state to another, the next task, ‘state_to_state_transitions1’, is dependent on the output of the ‘separate_csv’ task. Because of this, the parameter value for is assigned inis assigned in separate_csv If you recall, the ‘requires’ function runs first, therefore whenIf you recall, the ‘requires’ function runs first, therefore when state_to_state_transitions. runs it first runs that requires function that assigns original data csv to separate_csv.runs it first runs that requires function that assigns original data csv to separate_csv. state_to_state_transitions This logic is built into all my tasks. The purpose of is to create a sequence of marketing sources a given user clicked before completing the signup process, given the change in the objective of the model, this is more of a nice to have table showing the sequence of marketing channels engaged before signup.is to create a sequence of marketing sources a given user clicked before completing the signup process, given the change in the objective of the model, this is more of a nice to have table showing the sequence of marketing channels engaged before signup. state_to_state_transitions The next task, ‘state_to_state_transitions2’ then the 4 unique state files created and for each unique user checks whether that user moved from the first state to the second state all through the final state returning boolean values dependent on whether or not the transition occurred. The output produced are three files representing the transitions from one state to another. After this part of the pipeline I needed to get the probability distribution for each transition. For this I used the gaussian_kde module in SciPy, which is used to get an estimate of the probability distribution of a given dataset using a density estimator. This, ‘gaussian_kdefit’ task produces three pickle files with the probability distribution for each transition. Since I was dealing with relatively large amounts for rows/user activity I then had to get samples from each pickle file and save each sample as a separate csv. From the samples produced in the ‘get_samples’ task, I created a visualization to compare whether the distribution of transitions observed in these sample files matched the distribution in the population data. Since this was a task requiring more visual output, I did not include this in my pipeline. import pickle import numpy as np from scipy import stats import matplotlib.pyplot as plt files = ['Sessiontolead+sampleprobabs','leadtoopportunity+sampleprobabs','opportunitytocomplete+sampleprobabs'] pickles = ['Sessiontoleadprobabs','leadtoopportunityprobabs','opportunitytocompleteprobabs'] def func(sims, tag): file_path = 'C:\\Users\\User\\Documents\\GitHub\\Springboard-DSC\\AttributionModel\\Data\\ModelData\\original\\' sims = pd.read_csv(file_path+sims+'.csv') path = 'C:\\Users\\User\\Documents\\GitHub\\Springboard-DSC\\AttributionModel\\Data\\ModelData\\pickles\\' actuals = pd.read_pickle(path + tag + '.pck') y = np.linspace(start=stats.norm.ppf(0.1), stop=stats.norm.ppf(0.99), num=100) fig, ax = plt.subplots() ax.plot(y, actuals.pdf(y), linestyle='dashed', c='red', lw=2, alpha=0.8) ax.set_title(tag + ' Stats v Actual comparison', fontsize=15) # sims plot ax_two = ax.twinx() # simulations ax_two = plt.hist(sims.iloc[:, 1]) return fig.savefig(path + str(tag) + '.png') for x,y in zip(files,pickles): func(sims=x,tag=y) Output: The output of this ‘sanity check’ is the distribution of transitions between states for the population data, represented by the red dotted line and the transitions for the samples, represented by the bars. From these visualizations I got confirmation that the distribution of transitions from the samples matched the population transitions. The last task, ‘state_to_state_machine’ then creates simulations based on the probabilities in the probability distribution for each transition. Each simulation can be thought of as a hypothetical user who landed on one of the company’s signup pages and then proceeded to go through the signup process. We can then simulate users moving from st ate to state based on different filters, for example if the user was using a particular device, started signing up on a particular day, time of the day or based on a marketing campaign they may have clicked (with the right data). Each simulation creates a new file and once the value of a preceding transition is , there will also be a, there will also be a 0/False for the following states.for the following states. 0/False Finally, the pipeline ends with the ‘parent_wrapper’ that ties every task together by assigning parameter values to the state_to_state_machine task, joining together all files created through the simulation into a single file and running the entire pipeline. For a more detailed breakdown of the model, visit my GitHub repo. Create your free account to unlock your custom reading experience.
https://hackernoon.com/building-a-monte-carlo-markov-chain-pipeline-using-luigi-jocf3223
CC-MAIN-2021-43
en
refinedweb
SpringBoot - JUnit unit test In this section, you learn about spring boot using Junit for unit testing. 1. JUnit overview 1.1 introduction to JUnit JUnit. About - JUnit 5 JUnit is a unit testing framework for the Java language. It was established by Kent beck and Erich Gamma, and gradually became the most successful xUnit family of sUnit from Kent Beck. JUnit has its own JUnit extended ecosystem. Most java development environments have integrated JUnit as a unit testing tool. JUnit is a regression testing framework written by Erich Gamma and Kent Beck. JUnit test is a programmer test, that is, the so-called white box test, because the programmer knows How and What functions the tested software completes. JUnit is a framework. If you inherit the TestCase class, you can use JUnit for automatic testing. junit - Baidu Encyclopedia 1.2 JUnit5 features As the latest version of JUnit framework, JUnit 5 is very different from the previous version of JUnit framework. It consists of several different modules of three different subprojects: Unlike previous versions of JUnit, JUnit 5 is composed of several different modules from three different sub-projects. JUnit 5 = JUnit Platform + JUnit Jupiter + JUnit Vintage The JUnit Platform serves as a foundation for launching testing frameworks on the JVM. It also defines the TestEngine API for developing a testing framework that runs on the platform. Furthermore, the platform provides a Console Launcher to launch the platform from the command line and a JUnit 4 based Runner for running any TestEngine on the platform in a JUnit 4 based environment. First-class support for the JUnit Platform also exists in popular IDEs (see IntelliJ IDEA, Eclipse, NetBeans, and Visual Studio Code) and build tools (see Gradle, Maven, and Ant).. It requires JUnit 4.12 or later to be present on the class/module path. What is JUnit 5? - JUnit 5 User Guide JUnit Platform: JUnit Platform is the basis for starting the test framework on the JVM. It supports not only JUnit's self-made test engine, but also other test engines. JUnit Jupiter: JUnit Jupiter provides a new programming model for JUnit 5 and is the core of JUnit 5's new features. A test engine is included internally to run on the JUnit Platform. JUnit Vintage: since JUnit has developed for many years, in order to take care of old projects, JUnit Vintage provides a test engine compatible with JUnit 4.x and JUnit 3.x. 1.3 spring boot integration JUnit Spring boot version 2.2.0 began to introduce JUnit 5 as the default library for unit testing. SpringBoot 2.4 removes the default dependency on JUnit Vintage. If you need to be compatible with JUnit 4, you need to introduce it yourself. JUnit 5's Vintage Engine Removed from spring-boot-starter-test If you upgrade to Spring Boot 2.4 and see test compilation errors for JUnit classes such as org.junit.Test, this may be because JUnit 5's vintage engine has been removed from spring-boot-starter-test. The vintage engine allows tests written with JUnit 4 to be run by JUnit 5. If you do not want to migrate your tests to JUnit 5 and wish to continue using JUnit 4, add a dependency on the Vintage Engine, as shown in the following example for Maven: <dependency> <groupId>org.junit.vintage</groupId> <artifactId>junit-vintage-engine</artifactId> <scope>test</scope> <exclusions> <exclusion> <groupId>org.hamcrest</groupId> <artifactId>hamcrest-core</artifactId> </exclusion> </exclusions> </dependency> Spring Boot 2.4 Release Notes - GitHub/Spring-projects 2. Basic use of unit test 2.1 importing unit test module starter Maven introduces SpringBoot unit test module starter: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> <!-- use SpringBoot New version management mechanism --> </dependency> 2.2 write and analyze simple test classes The existing business layer needs to test its CRUD function: @Service public class UserServiceImpl implements UserService { private UserMapper mapper; @Autowired public void setMapper(UserMapper mapper) { this.mapper = mapper; } /** * Get user record according to ID * * @param id ID * @return User record */ public User getUserById(int id) { return mapper.selectUserById(id); } } Write a simple test class according to the above code to test its CRUD function: @SpringBootTest @Slf4j public class TestUserServiceImpl { private UserService service; @Autowired public void setService(UserService service) { this.service = service; } @Test public void testGetUserById() { log.info("Test parameters:{},Test results:{}", 1, service.getUserById(1)); } } Run the test method and print the results on the console: 2021-10-12 10:32:25.709 INFO 6420 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting... 2021-10-12 10:32:25.927 INFO 6420 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed. 2021-10-12 10:32:25.979 INFO 6420 --- [ main] pers.dyj123.test.TestUserServiceImpl : Test parameters: 1, test results: Account(id=1, name=Zhang San, age=24) Simple analysis test class: - The test class needs @ SpringBootTest annotation to indicate that it is a SpringBoot test class; - @The Test test method can be run directly without writing the main method; - The test class of SpringBoot can use Spring function (using @ Autowired auto assembly function, etc.); - Naming specification for test classes and test methods: test classes shall be named according to test + class names to be tested, and test methods shall be named according to test + method names to be tested. 3. JUnit5 common notes JUnit 5 provides many common annotations. JUnit 5 annotations are different from JUnit 4 annotations: For more information on the introduction and use of annotations, please refer to JUnit official documents . 3.1 @DisplayName @The DisplayName annotation is used to set the name of the test class or test method. The parameters are as follows: Usage: @SpringBootTest @DisplayName("annotation@DisplayName Test class") public class TestDisplayName { @Test @DisplayName("annotation@DisplayName test method") public void testDisplayName() { System.out.println("Testing Examples "); } } Run the test method and test results: 3.2 @BeforeEach and @ AfterEach @The BeforeEach and @ AfterEach annotations are used to execute some methods before and after the execution of each test method. This annotation has no parameters. Usage: @SpringBootTest public class TestEach { @BeforeEach public void testBeforeEach() { System.out.println("Yes@BeforeEach method"); } @AfterEach public void testAfterEach() { System.out.println("Yes@AfterEach method"); } @Test public void test1() { System.out.println("Yes test1 method"); } @Test public void test2() { System.out.println("Yes test2 method"); } } Perform all test methods and test results: 3.3 @BeforeAll and @ AfterAll @The BeforeAll and @ AfterAll annotations are used to execute some methods before and after the execution of all test methods. This annotation has no parameters. It should be noted that this annotation is recommended for static methods: Such methods are inherited (unless they are hidden or overridden) and must be static (unless the "per-class" test instance lifecycle is used). 2.1. Annotations/@BeforeAll - JUnit 5 User Guide Usage: @SpringBootTest public class TestAll { @BeforeAll static void testBeforeAll() { System.out.println("Yes@BeforeAll method"); } @AfterAll static void testAfterAll() { System.out.println("Yes@AfterAll method"); } @Test public void test1() { System.out.println("Yes test1 method"); } @Test public void test2() { System.out.println("Yes test2 method"); } } Perform all test methods and test results: 3.4 @Disabled The test method marked with the @ Disabled annotation will be Disabled and the skipped test will not be executed; All test methods of the marked test class will not be executed. The parameters are as follows: Usage: @SpringBootTest public class TestDisabled { @Test @Disabled public void test1() { System.out.println("Yes test1 method"); } @Test public void test2() { System.out.println("Yes test2 method"); } } Perform all test methods and test results: 3.5 @Timeout If the execution of the test method marked by @ Timeout annotation exceeds the set time, it will report an error and end. The parameters are as follows: Usage: @SpringBootTest public class TestTimeout { @Test @Timeout(value = 500, unit = TimeUnit.MILLISECONDS) // Time limited 500 ms public void testTimeout() throws InterruptedException { Thread.sleep(600); // Thread sleep 600 ms System.out.println("Yes testTimeout method"); } } Execute the test method and test results: You can see that if the timeout is too long, the method will be forced to stop running. 3.6 @ExtendWith @The ExtendWith annotation is used to add or use functional extensions of other test frameworks in the test. It has the same effect as the @ RunWith annotation in JUnit4. The parameters are as follows: In JUnit 4, if we want to use the Spring function, we need to mark @ RunWith(SpringJUnit4ClassRunner.class) in the test class, In versions above SpringBoot 2.4, JUnit 5 uses the Spring function. We only need to mark the @ SpringBootTest annotation in the test class. Analyze @ SpringBootTest annotation: @Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME) @Documented @Inherited @BootstrapWith(SpringBootTestContextBootstrapper.class) @ExtendWith(SpringExtension.class) // Add Spring function extensions using @ ExtendWith public @interface SpringBootTest { // ... omit some code } It is found that the @ SpringBootTest annotation has used @ ExtendWith to add Spring function extensions for us, so the test class only needs to annotate the @ SpringBootTest annotation to use the Spring function. 3.7 @RepeatedTest @The RepeatedTest annotation can make the test method execute repeatedly, and the number of repeated executions can be set. The parameters are as follows: JUnit provides two display names: - SHORT_ DISPLAY_ Name (default): "repetition {currentRepetition} of {totalRepetitions}" - LONG_DISPLAY_NAME: "{displayName} :: repetition {currentRepetition} of {totalRepetitions}" The above three placeholders: {DisplayName} is the display name set with the @ DisplayName annotation, {currentRepetition} is the current number of cycles, {totalRepetitions} is the total number of cycles. You can use the above three placeholders to customize the presentation name of the loop execution. Usage: @SpringBootTest public class TestRepeatedTest { @RepeatedTest(value = 3, name = RepeatedTest.LONG_DISPLAY_NAME) @DisplayName("annotation@RepeatedTest test method") public void testRepeatedTest() { System.out.println("Yes testRepeatedTest method"); } } Execute the test method and test results: 4. Assertion mechanism Assertions are the core part of the test method. They are used to verify the conditions to be met by the test, check whether the data returned by the business logic is reasonable, and the characteristics of the assertion mechanism are as follows: - JUnit 5's built-in assertion methods are static methods in org.junit.jupiter.api.Assertions. - Each assertion method has a message parameter that allows you to specify custom error messages. - Once the assertion judgment in the test method fails, the code after the assertion will not run, and the test method test fails. The following will briefly introduce some assertion methods. For the usage and more information of other assertion methods, you can refer to JUnit 5 official document . 4.1 simple assertion The following methods are used for simple validation of a single value: Use the assertEquals(int expected, int actual) method to test the simple assertion mechanism: @Test @DisplayName("Test simple assertions") public void testAssertions() { int sum = 4 + 5; // Analog business logic Assertions.assertEquals(10, sum); // Perform assertion check System.out.println("Yes testSimpleAssertions method"); } Execute the test method and test results: 4.2 array assertion JUnit provides array assertion methods (part): [the external chain picture transfer fails. The source station may have an anti-theft chain mechanism. It is recommended to save the picture and upload it directly (img-gfajde6k-16340401079319) (H: \ Documents Library \ idea projects \ readmemage \ springboot_test_junit_ assertion mechanism array assertion array assertion method display. JPG)] Use the assertArraysEqual(int[] expected, int[] actual, String message) method to test the array assertion mechanism: @Test @DisplayName("Test array assertion") public void testArrayAssertions() { Assertions.assertArrayEquals(new int[]{1, 2}, new int[]{2, 1}, "Array mismatch"); System.out.println("Yes testArrayAssertions method"); } Execute the test method and test results: Array assertion principle: // org.junit.jupiter.api.AssertArrayEquals.assertArrayEquals(int[], int[], java.util.Deque<java.lang.Integer>, java.lang.Object) // AssertArrayEquals.java Line:233~247 private static void assertArrayEquals(int[] expected, int[] actual, Deque<Integer> indexes, Object messageOrSupplier) { if (expected == actual) { return; } assertArraysNotNull(expected, actual, indexes, messageOrSupplier); assertArraysHaveSameLength(expected.length, actual.length, indexes, messageOrSupplier); for (int i = 0; i < expected.length; i++) { // Use the for loop to use the elements of two arrays= One by one comparison. If an element does not match, the judgment fails if (expected[i] != actual[i]) { failArraysNotEqual(expected[i], actual[i], nullSafeIndexes(indexes, i), messageOrSupplier); } } } 4.3 combined assertions JUnit provides the combined assertion method assertAll(String heading, Executable... executables), which can combine multiple assertions into one assertion. If a single assertion fails, they all fail. The parameters are as follows: - heading: specify custom error information; - executables: Executable is a functional interface that allows you to write code to be tested using lambda expressions. Usage: @Test @DisplayName("Test combination assertion") public void testAllAssertions() { Assertions.assertAll( "Combination assertion judgment failed", () -> Assertions.assertEquals(1.0, 2.0), () -> Assertions.assertEquals("Test combination assertion", "Test combination assertion") ); System.out.println("Yes testAllAssertions method"); } Execute the test method and test results: 4.4 exception assertion JUnit provides exception assertion methods assertThrows and assertdoesnottthrow to judge whether a specific exception is thrown and whether no exception is thrown respectively. Usage: @Test @DisplayName("Test exception assertion assertThrows") public void testAssertThrows() { Assertions.assertThrows( ArithmeticException.class, () -> { int i = 10 / 1; } ); System.out.println("Yes testAssertThrows method"); } @Test @DisplayName("Test exception assertion assertDoesNotThrow") public void testAssertDoesNotThrow() { Assertions.assertDoesNotThrow( () -> { throw new NullPointerException(); } ); System.out.println("Yes testAssertDoesNotThrow method"); } Perform all test methods and test results: 4.5 timeout assertion JUnit provides the timeout assertion method asserttimeout (duration timeout, executable). If the execution method times out, the assertion will fail. Usage: @Test @DisplayName("Test timeout assertion") public void testTimeoutAssertions() { Assertions.assertTimeout( Duration.ofMillis(500), () -> { Thread.sleep(600); } ); System.out.println("Yes testTimeoutAssertions method"); } Execute the test method and test results: 4.6 failure assertion JUnit provides the failure assertion method fail. The test method using this assertion method must assert failure. It is often used to detect that the method that cannot be called is illegally called or some conditions are judged incorrectly. Usage: @Test @DisplayName("Test failure assertion") public void testFailAssertions() { Assertions.fail("This method cannot be called"); System.out.println("Yes testFailAssertions method"); } Execute the test method and test results: 5. Preconditions Preconditions (assumptions) in JUnit 5 are similar to assertions. The difference is that unsatisfied assertions will fail the test method, while unsatisfied preconditions will only terminate the execution of the test method (equivalent to marked with @ Disabled annotation). Preconditions can be regarded as the premise of test method execution. When the premise is not met, there is no need to continue execution. The built-in precondition methods of JUnit 5 are all static methods in org.junit.jupiter.api.Assumptions: Usage: @SpringBootTest public class TestAssumptions { @Test @DisplayName("Test preconditions are true") public void testAssumeTrue() { Assumptions.assumeTrue(false, "Preconditions need to be true"); // Preconditions not met System.out.println("Yes testAssumeTrue method"); } @Test @DisplayName("Test preconditions are false") public void testAssumeFalse() { Assumptions.assumeFalse(true, "Preconditions need to be false"); // Preconditions not met System.out.println("Yes testAssumeFalse method"); } } Perform all test methods and test results: Test methods that do not meet the preconditions will be skipped directly (not failed) and will not be executed. 6. Nested test JUnit supports adding test classes inside test classes, which are annotated with @ Nested annotation. - When the test method of the external test class is executed, the methods annotated with @ BeforeEach, @ AfterEach and other annotations of the internal test class will not be executed; - When the test method of the internal test class is executed, the methods annotated with @ BeforeEach, @ AfterEach and other annotations of the internal test class are executed normally. For example, write a test method: @SpringBootTest public class TestNested { List<Integer> list; // Declare a List collection @BeforeEach public void outerBeforeEach() { list = new ArrayList<>(); // The BeforeEach method of the external test class initializes the List collection } @Test @DisplayName("Test method 1 of external test class") public void outerTest1() { Assertions.assertNull(list); // The BeforeEach method of the external test class has initialized the List collection, so it is not null. The assertion failed } @Test @DisplayName("Test method of external test class 2") public void outerTest2() { Assertions.assertTrue(list.isEmpty()); // The external test class does not put elements into the List collection. When this method is executed, the BeforeEach method of the internal test class has not been executed, so the List collection is empty and the assertion is successful } @Nested class TestNestedInner { @BeforeEach public void innerBeforeEach() { list.add(1); // The BeforeEach method of the internal test class puts an element into the List collection } @Test @DisplayName("Test method of internal test class 1") public void innerTest1() { Assertions.assertNull(list); // The BeforeEach method of the external test class has initialized the List collection, so it is not null. The assertion failed } @Test @DisplayName("Test method of internal test class 2") public void innerTest2() { Assertions.assertTrue(list.isEmpty()); // The BeforeEach method of the internal test class has put an element into the List collection, so the List collection is not empty, and the assertion fails } } } Perform all test methods and test results: 7. Parametric test Parametric testing is a very important new feature of JUnit 5. It makes it possible to run tests multiple times with different parameters, and also brings a lot of convenience to our unit testing. Using @ ValueSource and other annotations to specify input parameters, we can use different parameters for multiple unit tests without adding a unit test every time a parameter is added, saving a lot of redundant code: The test method requiring parametric testing needs to be annotated with the @ ParameterizedTest annotation, and the above annotation needs to be used to provide the input source. For other input sources and detailed descriptions, please refer to JUnit official documents . Use @ ValueSource and @ MethodSource annotation examples to write test methods: @SpringBootTest public class TestParameterizedTest { @ParameterizedTest @ValueSource(ints = {1, 2, 3}) // Pass in an array as an input parameter public void testValueSource(int param) { System.out.println("Yes testValueSource Methods, para" + param + "Second execution"); } @ParameterizedTest @MethodSource("paramStream") // Use the stream returned by paramStream as the input parameter public void testMethodSource(int param) { System.out.println("Yes testMethodSource Methods, para" + param + "Second execution"); } static Stream<Integer> paramStream() { return Stream.of(1, 2, 3); } } Perform all test methods and test results:
https://programmer.help/blogs/springboot-day-22-junit-unit-test.html
CC-MAIN-2021-43
en
refinedweb
Recently Browsing 0 members No registered users viewing this page. Similar Content - This is a UDF for use Debenu PDF Viewer SDK - ActiveX component. You can read more about this ActiveX component here: v 0.2 2015/05/18 v0.3 - memerim #include <GDIPlus.au3> Text2PNG(@ScriptDir & "\x_2.png", 0x7DFFFFFF) ; Transparent text Func Text2PNG($sFile, $iColor) _GDIPlus_Startup() Local $hImage = _GDIPlus_BitmapCreateFromFile ( $sFile ) ;Local $hImage = _GDIPlus_BitmapCreateFromScan0(400, 250) ;$sFile2 = @ScriptDir & "\x_3.png" ;_GDIPlus_ImageSaveToFile($hImage, $sFile2) Local $hGraphics = _GDIPlus_ImageGetGraphicsContext($hImage) _GDIPlus_GraphicsSetSmoothingMode($hGraphics, $GDIP_SMOOTHINGMODE_HIGHQUALITY) _GDIPlus_GraphicsSetTextRenderingHint($hGraphics, $GDIP_TEXTRENDERINGHINT_ANTIALIAS) _GDIPlus_GraphicsClear($hGraphics, $iColor) _GDIPlus_GraphicsDrawString($hGraphics, "Hello", 0, 0, "Arial", 32, 0) _GDIPlus_ImageSaveToFile($hImage, $sFile) _GDIPlus_GraphicsDispose($hGraphics) _GDIPlus_BitmapDispose($hImage) _GDIPlus_Shutdown() EndFunc ;==>Text2PNG The image is at 50% transparency, im trying to write the text on it with the same transparency. What transparency lvl the text will need to be drawn, to achieve the same transparency as on the image, 50% too? (0x7DFFFFFF) But atm it does not draw anything unless i create a new bitmap from scan, whats going on? - Recommended Posts You need to be a member in order to leave a comment Sign up for a new account in our community. It's easy!Register a new account Already have an account? Sign in here.Sign In Now
https://www.autoitscript.com/forum/topic/165806-quality-assurance-on-pdf-rendering-image-compare/?tab=comments#comment-1210804
CC-MAIN-2021-43
en
refinedweb
Request for Comments: 7940 Category: Standards Track ISSN: 2070-1721 ICANN A. Freytag ASMUS, Inc. August 2016 Representing Label Generation Rulesets Using XML Unicode code points are permitted for registrations, which alternative code points are considered variants, and what actions may be performed on labels containing those. Design Goals ....................................................5 3. Normative Language ..............................................6 4. LGR Format ......................................................6 4.1. Namespace ..................................................7 4.2. Basic Structure ............................................7 4.3. Metadata ...................................................8 4.3.1. The "version" Element ...............................8 4.3.2. The "date" Element ..................................9 4.3.3. The "language" Element ..............................9 4.3.4. The "scope" Element ................................10 4.3.5. The "description" Element ..........................10 4.3.6. The "validity-start" and "validity-end" Elements ...11 4.3.7. The "unicode-version" Element ......................11 4.3.8. The "references" Element ...........................12 5. Code Points and Variants .......................................13 5.1. Sequences .................................................14 5.2. Conditional Contexts ......................................15 5.3. Variants ..................................................16 5.3.1. Basic Variants .....................................16 5.3.2. The "type" Attribute ...............................17 5.3.3. Null Variants ......................................18 5.3.4. Variants with Reflexive Mapping ....................19 5.3.5. Conditional Variants ...............................20 5.4. Annotations ...............................................22 5.4.1. The "ref" Attribute ................................22 5.4.2. The "comment" Attribute ............................23 5.5. Code Point Tagging ........................................23 6. Whole Label and Context Evaluation .............................23 6.1. Basic Concepts ............................................23 6.2. Character Classes .........................................25 6.2.1. Declaring and Invoking Named Classes ...............25 6.2.2. Tag-Based Classes ..................................26 6.2.3. Unicode Property-Based Classes .....................26 6.2.4. Explicitly Declared Classes ........................28 6.2.5. Combined Classes ...................................29 6.3. Whole Label and Context Rules .............................30 6.3.1. The "rule" Element .................................31 6.3.2. The Match Operators ................................32 6.3.3. The "count" Attribute ..............................33 6.3.4. The "name" and "by-ref" Attributes .................34 6.3.5. The "choice" Element ...............................34 6.3.6. Literal Code Point Sequences .......................35 6.3.7. The "any" Element ..................................35 6.3.8. The "start" and "end" Elements .....................35 6.3.9. Example Context Rule from IDNA Specification .......36 6.4. Parameterized Context or When Rules .......................37 6.4.1. The "anchor" Element ...............................37 6.4.2. The "look-behind" and "look-ahead" Elements ........38 6.4.3. Omitting the "anchor" Element ......................40 7. The "action" Element ...........................................40 7.1. The "match" and "not-match" Attributes ....................41 7.2. Actions with Variant Type Triggers ........................41 7.2.1. The "any-variant", "all-variants", and "only-variants" Attributes .........................41 7.2.2. Example from Tables in the Style of RFC 3743 .......44 7.3. Recommended Disposition Values ............................45 7.4. Precedence ................................................45 7.5. Implied Actions ...........................................45 7.6. Default Actions ...........................................46 8. Processing a Label against an LGR ..............................47 8.1. Determining Eligibility for a Label .......................47 8.1.1. Determining Eligibility Using Reflexive Variant Mappings ...................................47 8.2. Determining Variants for a Label ..........................48 8.3. Determining a Disposition for a Label or Variant Label ....49 8.4. Duplicate Variant Labels ..................................50 8.5. Checking Labels for Collision .............................50 9. Conversion to and from Other Formats ...........................51 10. Media Type ....................................................51 11. IANA Considerations ...........................................52 11.1. Media Type Registration ..................................52 11.2. URN Registration .........................................53 11.3. Disposition Registry .....................................53 12. Security Considerations .......................................54 12.1. LGRs Are Only a Partial Remedy for Problem Space .........54 12.2. Computational Expense of Complex Tables ..................54 13. References ....................................................55 13.1. Normative References .....................................55 13.2. Informative References ...................................56 Appendix A. Example Tables ........................................58 Appendix B. How to Translate Tables Based on RFC 3743 into the XML Format ............................................63 Appendix C. Indic Syllable Structure Example ......................68 C.1. Reducing Complexity .......................................70 Appendix D. RELAX NG Compact Schema ...............................71 Acknowledgements ..................................................82 Authors' Addresses ................................................82 1. Introduction This document specifies a method of using Extensible Markup Language (XML) to describe Label Generation Rulesets (LGRs). LGRs are algorithms used to determine whether, and under what conditions, a given identifier label is permitted, based on the code points it contains and their context. These algorithms comprise a list of permissible code points, variant code point mappings, and a set of rules that act on the code points and mappings. LGRs form part of an administrator's policies. In deploying Internationalized Domain Names (IDNs), they have also been known as IDN tables or variant tables. There are other kinds of policies relating to labels that are not normally covered by LGRs and are therefore not necessarily a and may also be used for describing ASCII domain name label rulesets, or other types of identifier labels beyond those used for domain names. 2. Design Goals The following goals informed the design of this format: - The format needs to be implementable in a reasonably straightforward manner in software. - The format should be able to be automatically checked for formatting errors, so that common mistakes can be caught. - An LGR needs to be able to express the set of valid code points that are allowed for registration under a specific administrator's policies. - An LGR needs to be able to express computed alternatives to a given identifier based on mapping relationships between code points, whether one-to-one or many-to-many. These computed alternatives are commonly known as "variants". - Variant code points should be able to be tagged with explicit dispositions or categories that can be used to support registry policy (such as whether to allocate the computed variant or to merely block it from usage or registration). - Variants and code points must be able to be stipulated based on contextual information. For example, some variants may only be applicable when they follow a certain code point or when the code point is displayed in a specific presentation form. - The data contained within an LGR must be able to be interpreted unambiguously, so that independent implementations that utilize the contents will arrive at the same results. - To the largest extent possible, policy rules should be able to be specified in the XML format without relying on hidden or built-in algorithms in implementations. - LGRs should be suitable for comparison and reuse, such that one could easily compare the contents of two or more to see the differences, to merge them, and so registration policies are used for a particular zone are outside the scope of this memo. 3. Normative Language The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119]. 4. LGR Format An LGR is expressed as a well-formed XML document [XML] that conforms to the schema defined in Appendix D. As XML is case sensitive, an LGR must be authored with the correct casing. For example, the XML element names MUST be in lowercase as described in this specification, and matching of attribute values is only performed in a case-sensitive manner. A document that is not well-formed, is non-conforming, or violates other constraints specified in this specification MUST be rejected. 4.1. Namespace The XML Namespace URI is "urn:ietf:params:xml:ns:lgr-1.0". See Section 11.2 for more information. 4.2. Basic Structure The basic XML framework of the document is as follows: <?xml version="1.0"?> <lgr xmlns="urn:ietf:params:xml:ns:lgr-1.0"> ... </lgr> The "lgr" element contains up to three sub-elements or sections. First is an optional "meta" element that contains all metadata associated with the LGR, such as its authorship, what it is used for, implementation notes, and references. This is followed by a required "data" element that contains the substantive code point data. Finally, an optional "rules" element contains information on rules for evaluating labels, if any, along with "action" elements providing for the disposition of labels and computed variant labels. <?xml version="1.0"?> <lgr xmlns="urn:ietf:params:xml:ns:lgr-1.0"> <meta> ... </meta> <data> ... </data> <rules> ... </rules> </lgr> A document MUST contain exactly one "lgr" element. Each "lgr" element MUST contain zero or one "meta" element, exactly one "data" element, and zero or one "rules" element; and these three elements MUST be in that order. Some elements that are direct or nested child elements of the "rules" element MUST be placed in a specific relative order to other elements for the LGR to be valid. An LGR that violates these constraints MUST be rejected. In other cases, changing the ordering would result in a valid, but different, specification. In the following descriptions, required, non-repeating elements or attributes are generally not called out explicitly, in contrast to "OPTIONAL" ones, or those that "MAY" be repeated. For attributes that take lists as values, the elements MUST be space-separated. 4.3. Metadata The "meta" element expresses metadata associated with the LGR, and the element SHOULD be included so that the associated metadata are available as part of the LGR and cannot become disassociated. The following subsections describe elements that may appear within the "meta" element. The "meta" element can be used to identify the author or relevant contact person, explain the intended usage of the LGR, and provide implementation notes as well as references. Detailed metadata allow the LGR document to become self-documenting -- for example, if rendered in a human-readable format by an appropriate tool. Providing metadata pertaining to the date and version of the LGR is particularly encouraged to make it easier for interoperating consumers to ensure that they are using the correct LGR. With the exception of the "unicode-version" element, the data contained within is not required by software consuming the LGR in order to calculate valid labels or to calculate variants. If present, the "unicode-version" element MUST be used by a consumer of the table to identify that it has the correct Unicode property data to perform operations on the table. This ensures that possible differences in code point properties between editions of the Unicode Standard do not impact the product of calculations utilizing an LGR.>> 4.3.3. The "language" Element Each OPTIONAL "language" element identifies a language or script for which the LGR is intended. The value of the "language" element MUST be a valid language tag as described in [RFC5646]. The tag may refer to a script plus undefined language if the LGR is not intended for a specific language. Example of an LGR for the English language: <language>en</language> If the LGR applies to a script rather than a specific language, the "und" language tag SHOULD be used followed by the relevant script subtag from [RFC5646]. For example, for a Cyrillic script LGR: <language>und-Cyrl</language> If the LGR covers a set of multiple languages or scripts, the "language" element MAY be repeated. However, for cases of a script-specific LGR exhibiting insignificant admixture of code points from other scripts, it is RECOMMENDED to use a single "language" element identifying the predominant script. In the exceptional case of a multi-script LGR where no script is predominant, use Zyyy (Common): <language>und-Zyyy</language>. For that type, the content of the "scope" element MUST be a domain name written relative to the root zone, in presentation format with no trailing dot. However, in the unique case of the DNS root zone, it is represented as ".". <scope type="domain">example.com</scope> There may be multiple "scope" tags used -- for example, to reflect a list of domains to which the LGR is applied. No other values of the "type" attribute are defined by this specification; however, this specification can be used for applications other than domain names. Implementers of LGRs for applications other than domain names SHOULD define the scope extension grammar in an IETF specification or use XML namespaces to distinguish their scoping mechanism distinctly from the base LGR namespace. An explanation of any custom usage of the scope in the "description" element is RECOMMENDED. <scope xmlns=""> ... content per alternate namespace ... </scope>" element to condition the application of the LGR's data and rules. The element has an OPTIONAL "type" attribute, which refers to the Internet media type [RFC2045] of the enclosed data. Typical types would be "text/plain" or "text/html". The attribute SHOULD be a valid media type. If supplied, it will be assumed that the contents are of that media type. If the description lacks a "type" value, it will be assumed to be plain text ("text/plain"). 4.3.6. The "validity-start" and "validity-end" Elements The "validity-start" and "validity-end" elements are OPTIONAL elements that describe the time period from which the contents of the LGR become valid (are used in registry policy) and time when the contents of the LGR cease to be used, respectively. The dates MUST conform to the "full-date" format described in Section 5.6 of [RFC3339]. <validity-start>2014-03-12</validity-start> character properties (Section 6.2.3). The value of a given Unicode character property may change between versions of the Unicode Character Database [UAX44], unless such change has been explicitly disallowed in [Unicode-Stability]. It is RECOMMENDED to only reference properties defined as stable or immutable. As an alternative to referencing the property, the information can be presented explicitly in the LGR. <unicode-version>6.3.0</unicode-version> It is not necessary to include a "unicode-version" element for LGRs that do not make use of Unicode character properties; however, it is RECOMMENDED. 4.3.8. The "references" Element An LGR may define a list of references that containing one or more "reference" elements, each with a unique "id" attribute. It is RECOMMENDED that the "id" attribute be a zero-based integer; however, in addition to digits 0-9, it MAY contain uppercase letters A-Z, as well as a period, hyphen, colon, or underscore. The value of each "reference" element SHOULD be the citation of a standard, dictionary, or other specification in any suitable format. In addition to an "id" attribute, a "reference" element MAY have a "comment" attribute for an optional free-form annotation. <references> <reference id="0">The Unicode Consortium. The Unicode Standard, Version 8.0.0, (Mountain View, CA: The Unicode Consortium, 2015. ISBN 978-1-936213-10-8)< its id as part of an optional "ref" attribute (see Section 5.4.1). The "ref" attribute may be used with many kinds of elements in the "data" or "rules" sections of the LGR, most notably those defining code points, variants, and rules. However, a "ref" attribute may not occur in certain kinds of elements, including references to named character classes or rules. See below for the description of these elements. 5. Code Points and Variants The bulk of an LGR is a description of which set of code points. Collectively, these are known as the repertoire. Discrete permissible code points or code point sequences (see Section 5.1) are declared with a "char" element. Here is a minimal example declaration for a single code point, with the code point value given in the "cp" attribute: <char cp="002D"/> As described below, a full declaration for a "char" element, whether or not it is used for a single code point or for a sequence (see Section 5.1), may have optional child elements defining variants. Both the "char" and "range" elements can take a number of optional attributes for conditional inclusion, commenting, cross-referencing, and character tagging, as described below. Ranges of permissible code points may be declared with a "range" element, as in this minimal example: <range first- The range is inclusive of the first and last code points. Any additional represented according to the standard Unicode convention but without the prefix "U+": they are expressed in uppercase hexadecimal and are zero-padded to a minimum of 4 digits. The rationale for not allowing other encoding formats, including native Unicode encoding in XML, is explored in [UAX42]. The XML conventions used in this format, such as element and attribute names, mirror this document where practical and reasonable to do so. It is RECOMMENDED to list all "char" elements in ascending order of the "cp" attribute. Not doing so makes it unnecessarily difficult for authors and reviewers to check for errors, such as duplications, or to review and compare against listing of code points in other documents and specifications. All "char" elements in the "data" section MUST have distinct "cp" attributes. The "range" elements MUST NOT specify code point ranges that overlap either another range or any single code point "char" elements. An LGR that defines the same code point more than once by any combination of "char" or "range" elements MUST be rejected. some code point is only eligible when preceded or followed by a certain a conditional context using an optional "when" attribute as described below in Section 5.2. Using a conditional context is more flexible because a context is not limited to a specific sequence of code points. In addition, using a context allows the choice of specifying either a prohibited or a required context. 5.2. Conditional Contexts A conditional context is specified by a rule that must be satisfied (or, alternatively, must not be satisfied) for a code point in a given label, often at a particular location in a label. To specify a conditional context, either a "when" or "not-when" attribute may be used. The value of each "when" or "not-when" attribute is a context rule as described below in Section 6.3. This rule can be a rule evaluating the whole label or a parameterized context rule. The context condition is met when the rule specified in the "when" attribute is matched or when the rule in the "not-when" attribute fails to match. It is an error to reference a rule that is not actually defined in the "rules" element. A parameterized context rule (see Section 6.4) defines the context immediately surrounding a given code point; unlike a sequence, the context is not limited to a specific fixed code point but, for example, may designate any member of a certain character class or a code point that has a certain Unicode character property. Given a suitable definition of a parameterized context rule named "follows-virama", this example specifies that a ZERO WIDTH JOINER (U+200D) is restricted to immediately follow any of several code points classified as virama: <char cp="200D" when="follows-virama" /> For a complete example, see Appendix A. In contrast, a whole label rule (see Section 6.3) specifies a condition to be met by the entire label -- for example, that it must contain at least one code point from a given script anywhere in the label. In the following example, no digit from either range may occur in a label that mixes digits from both ranges: <data> <range first- <range first- </data> (See Section 6.3.9 for an example of the "mixed-digits" rule.) The OPTIONAL "when" or "not-when" attributes are mutually exclusive. They MAY be applied to both "char" and "range" elements in the "data" element, including "char" elements defining sequences of code points, as well as to "var" elements (see Section 5.3.5). If a label contains one or more code points that fail to satisfy a conditional context, the label is invalid (see Section 7.5). For variants, the conditional context restricts the definition of the variant to the case where the condition is met. Outside the specified context, a variant is not defined. 5.3.. 5.3.1. Basic Variants Variant code points are specified using one of more "var" elements as children of a "char" element. The target mapping is specified using the "cp" attribute. Other, optional attributes for the "var" element are described below.) might hypothetically be specified as a variant for a LATIN SMALL LETTER O WITH DIAERESIS (U+00F6) as follows: <char cp="00F6"> <var cp="006F 0065"/> </char> The source and target of a variant mapping may both be sequences but not ranges. If the source of one mapping is a prefix sequence of the source for another, both variant mappings will be considered at the same location in the input label when generating permuted variant labels. If poorly designed, an LGR containing such an instance of a prefix relation could generate multiple instances of the same variant label for the same original label, but with potentially different dispositions. Any duplicate variant labels encountered MUST be treated as an error (see Section 8.4). only part of the LGR if spelled out explicitly. Implementations that require an LGR to be symmetric and transitive should verify this mechanically. 5.3.5.) 5.3 7. Whenever these values can represent the desired policy, they SHOULD be used. <char cp="767C"> <var cp="53D1" type="allocatable"/> <var cp="5F42" type="blocked"/> <var cp="9AEA" type="blocked"/> <var cp="9AEE" type="blocked"/> </char> By default, if a variant label contains any instance of one of the variants of type "blocked", the label would be blocked, but if it contained only instances of variants to be allocated, it could be allocated. See the discussion about implied actions in Section 7.6. The XML format for the LGR makes the relation between the values of the "type" attribute on variants and the resulting disposition of variant labels fully explicit. See the discussion in Section 7.2. Making this relation explicit allows a generalization of the "type" attribute from directly reflecting dispositions to a more differentiated intermediate value that is then used in the resolution of label disposition. Instead of the default action of applying the most restrictive disposition to the entire label, such a generalized resolution can be used to achieve additional goals, such as limiting the set of allocatable variant labels or implementing other policies found in existing LGRs (see, for example, Appendix B). Because variant mappings MUST be unique, it is not possible to define the same variant for the same "char" element with different "type" attributes (however, see Section 5.3.5). 5.3 mappings from null sequences are removed in variant label generation (see Section 5.3.2). 5.3.4. Variants with Reflexive Mapping At first glance,, let's assume that the goal is to allocate only those labels that contain a variant that is considered "preferred" in some way. As defined in the example, the code point U+3473 exists both as a variant of U+3447 and as a variant of itself (reflexive mapping). Assuming an original label of "U+3473 U+3447", the permuted variant "U+3473 U+3473" would consist of the reflexive variant of U+3473 followed by a variant of U+3447. Given the variant mappings as defined here, the types for both of the variant mappings used to generate that particular permutation would have the value "preferred": <char cp="3447" ref="0"> <var cp="3473" type="preferred" ref="1 3" /> </char> <char cp="3473" ref="0"> <var cp="3447" type="blocked" ref="1 3" /> <var cp="3473" type="preferred" ref="0" /> </char> Having established the variant types in this way, a set of actions could be defined that return a disposition of "allocatable" or "activated" for a label consisting exclusively of variants with type "preferred", for example. (For details on how to define actions based on variant types, see Section 8.) Another useful convention that uses reflexive variants is described below in Section 7.2.1. 5.3.5. Conditional Variants Fundamentally, variants are mappings between two sequences of code points. However, in some instances, for a variant relationship to exist, some context external to the code point sequence must also. As described in Section 5.2, an OPTIONAL "when" or "not-when" attribute may be given for any "var" element to specify required or prohibited contextual conditions under which the variant is defined. Assuming that> While a "var" element MUST NOT contain multiple conditions (it is only allowed a single "when" or "not-when" attribute), multiple "var" elements using the same mapping MAY be specified with different "when" or "not-when" attributes. The combination of mapping and conditional context defines a unique variant. For each variant label, care must be taken to ensure that at most one of the contextual conditions is met for variants with the same mapping; otherwise, duplicate variant labels would be created for the same input label. Any such duplicate variant labels MUST be treated as an error; see Section 8.4. Two contexts may be complementary, as in the following example, which shows ARABIC LETTER TEH MARBUTA (U+0629) as a variant of ARABIC LETTER HEH (U+0647), but with two different types. <char cp="0647" > <var cp="0629" not- <var cp="0629" when="arabic-final" type="allocatable" /> </char> The intent is that a label that uses U+0629 instead of U+0647 in a final position should be considered essentially the same label and, therefore, allocatable to the same entity, while the same substitution in a non-final position leads to labels that are different, but considered confusable, so that either one, but not both, should be delegatable. For symmetry, the reverse mappings must exist and must agree in their "when" or "not-when" attributes. However, symmetry does not apply to the other attributes. For example, these are potential; therefore, between them they implemented based on the joining contexts for Arabic code points. The mechanism defined here supports other forms of conditional variants that may be required by other scripts. 5.4. Annotations Two attributes, the "ref" and "comment" attributes, can be used to annotate individual elements in the LGR. They are ignored in machine-processing of the LGR. The "ref" attribute is intended for formal annotations and the "comment" attribute for free-form annotations. The latter can be applied more widely. 5.4.1. The "ref" Attribute Reference information MAY optionally be specified by a "ref" attribute consisting of a space-delimited sequence of reference identifiers (see Section 4.3.8). <char cp="5220" ref="0"> <var cp="5220" ref="5"/> <var cp="522A" 4.3.8). It is an error to repeat a reference identifier in the same "ref" attribute. It is RECOMMENDED that identifiers be listed in ascending order. In addition to "char", "range", and "var" elements in the "data" section, a "ref" attribute may be present for a number of element types contained in the "rules" element as described below: actions and literals ("char" inside a rule), as well as for definitions of rules and classes, but not for references to named character classes or rules using the "by-ref" attribute defined below. (The use of the "by-ref" and "ref" attributes is mutually exclusive.) None of the elements in the metadata take a "ref" attribute; to provide additional information, use the "description" element instead. 5, as well as definitions of classes and rules, but not on child elements of the "class" element. Finally, in the metadata, only the "version" and "reference" elements MAY have "comment" attributes (to match the syntax in [RFC3743]). 5 6.2.2) that can then be used in parameterized context or whole label rules (see Section 6.3.2). Each "tag" attribute MAY contain multiple values separated by white space. A tag value is an identifier that may also include certain punctuation marks, such as a colon. Formally, it MUST correspond to the XML 1.0 Nmtoken (Name token) production (see [XML] Section 2.3).. 6. Whole Label and Context Evaluation 6.1. Basic Concepts The "rules" element contains the specification of both context-based and whole label rules. Collectively, these are known as Whole Label Evaluation (WLE) rules (Section 6.3). The "rules" element also contains the character classes (Section 6.2) that they depend on, and any actions (Section 7) that assign dispositions to labels based on rules or variant mappings. A whole label rule is applied to the whole label. It is used to validate both original labels and any variant labels computed from them. A rule implementing a conditional context as discussed in Section 5.2 does not necessarily apply to the whole label but may be specific to the context around a single code point or code point sequence. Certain code points in a label sometimes need to satisfy context-based rules -- for example, for the label to be considered valid, or to satisfy the context for a variant mapping (see the description of the "when" attribute in Section 6.4).: - literal code points or code point sequences - character classes, which define sets of code points to be used for context comparisons - context operators, which define when character classes and literals may appear - nested rules, whether defined in place or invoked by reference Collectively, these are called "match operators" and are listed in Section 6.3.2. An LGR containing rules or match operators that - are incorrectly defined or nested, 2. have invalid attributes, or - have invalid or undefined attribute values MUST be rejected. Note that not all of the constraints defined here are validated by the schema. can be specified in several ways: - by defining the class via matching a tag in the code point data. All characters with the same "tag" attribute are part of the same class; - by referencing a value of one of the Unicode character properties defined in the Unicode Character Database; - by explicitly listing all the code points in the class; or - by defining the class as a set combination of any number of other classes."> 0061 4E00 </class> ... <rule> <class by- </rule> An empty "class" element with a "by-ref" attribute is a reference to an existing named class. The "by-ref" attribute MUST NOT. 6.2.2. Tag-Based Classes The "char" or "range" elements that are child elements of the "data" element MAY contain a "tag" attribute that consists of one or more space-separated tag values; for example: <char cp="0061" tag="letter lower"/> <char cp="4E00" tag="letter"/> a colon. Formally, they MUST correspond to the XML 1.0 Nmtoken production. While a "tag" attribute may contain a list of tag values, the "from-tag" attribute MUST always contain.. Unicode property values MUST be designated via a composite of the attribute name and value as defined for the property value in [UAX42], separated by a colon. Loose matching of property values and names as described in [UAX44] is not appropriate for an XML schema and is not supported; it is likewise not supported in the XML representation [UAX42] of the Unicode Character Database itself. A property-based class MAY be anonymous, or, when defined as an immediate child of the "rules" element, it MAY be named to relate a formal property definition to its usage, such as the use of the value 9 for ccc to designate a virama (or halant) in various scripts.].) All implementations processing LGR files SHOULD provide support for the following minimal set of Unicode properties: - General Category (gc) - Script (sc) - Canonical Combining Class (ccc) - Bidi Class (bc) - Arabic Joining Type (jt) - Indic Syllabic Category (InSC) - Deprecated (Dep) The short name for each property is given in parentheses. If a program that is using an LGR to determine the validity of a label encounters a property that it does not support, it MUST abort with an error. 6.2.4. Explicitly Declared Classes A class of code points may also be declared by listing all code points that are members; not doing so makes it unnecessarily difficult for users to detect errors such as duplicates or to compare and review these classes against other specifications. In a class definition, ranges of code points are represented by a hexadecimal> The contents of a class differ from a repertoire in that the latter MAY contain sequences as elements, while the former MUST NOT. Instead, they closely resemble character classes as found in regular expressions. 6.2.5. Combined Classes Classes may be combined using operators for set complement, union, intersection, difference (elements of the first class that are not in the second), and symmetric difference (elements in either class but not both). any of the match operator elements that allow child elements (see Section 6.3.2) by using the set combination as the outer element. 6.2.1). <rules> <union name="xxxyyy"> <class by- <class by- </union> ... </rules> Because (as for ordinary sets) a combination of classes is itself a class, no matter by what combinations of set operators a combined class is created, a reference to it always uses the "class" element as described in Section 6.2.1. That is, a named class is always referenced via an empty "class" element using the "by-ref" attribute containing the name of the class to be referenced. 6.3. Whole Label and Context Rules Each rule comprises a series of matching operators that must be satisfied in order to determine whether a label meets a given condition. Rules may reference other rules or character classes defined elsewhere in the table. 6.3.1. The "rule" Element A matching rule is defined by a "rule" element, the child elements of which are one of the match operators from Section 6.3.2. In evaluating a rule, each child element is matched in order. "rule" elements MAY be nested inside each other and inside certain match operators. A simple rule to match a label where all characters are members of some class called "preferred-codepoint": <rule name="preferred-label"> <start /> <class by- <end /> </rule> Rules are paired with explicit and implied actions, triggering these actions when a rule matches a label. For example, a simple explicit action for the rule shown above would be: <action disp="allocatable" match="preferred-label" /> The rule in this example would have the effect of setting the policy disposition for a label made up entirely of preferred code points to "allocatable". Explicit actions are further discussed in Section 7 and implicit actions in Section 7.5. Another use of rules is in defining conditional contexts for code points and variants as discussed in Sections 5.2 and 5.3.5. A rule that is an immediate child element of the "rules" element MUST be named using a "name" attribute containing a single identifier string with no spaces. A named rule may be incorporated into another rule by reference and may also be referenced by an "action" element, "when" attribute, or "not-when" attribute. If the "name" attribute is omitted, the rule is anonymous and MUST be nested inside another rule or match operator. 6.3.2. The Match Operators The child elements of a rule are a series of match operators, which are listed here by type and name and with a basic example or two. +------------+-------------+------------------------------------+ | 6.2.5) as well as references to named classes. All match operators shown as empty elements in the Examples column of the table above do not support child elements of their own; otherwise, match operators MAY be nested. In particular, anonymous "rule" elements can be used for grouping. 6.3.3. The "count" Attribute The OPTIONAL "count" attribute, when present, specifies the minimally required or maximal permitted number of times a match operator is used to match input. If the "count" attribute is n the match operator matches the input exactly n times, where n is 1 or greater.. A "count" attribute MUST NOT be applied to any element that contains a "name" attribute but MAY be applied to operators such as "class" that declare anonymous classes (including combined classes) or invoke any predefined classes by reference. The "count" attribute MUST NOT be applied to any "class" element, or element defining a combined class, when it is nested inside a combined class. A "count" attribute MUST NOT be applied to match operators of type "start", "end", "anchor", "look-ahead", or "look-behind" or to any operators, such as "rule" or "choice", that contain a nested instance of them. This limitation applies recursively and irrespective of whether a "rule" element containing these nested instances is declared in place or used by reference. However, the "count" attribute MAY be applied to any other instances of either an anonymous "rule" element or a "choice" element, including those instances nested inside other match operators. It MAY also be applied to the elements "any" and "char", when used as match operators. 6.3.4. The "name" and "by-ref" Attributes Like classes (see Section 6.2.1), rules declared as immediate child elements of the "rules" element MUST be named using a unique "name" attribute, and all other instances MUST NOT be named. Anonymous rules and classes or references complete definition has not been seen. In other words, it is explicitly not possible to define recursive rules or class definitions. The "by-ref" attribute MUST NOT appear in the same element as the "name" attribute or in an element that has any child elements. The example shows several named classes and a named rule referencing some of them by name. <class name="letter" property="gc:L"/> <class name="combining-mark" property="gc:M"/> <class name="digit" property="gc:Nd" /> <rule name="letter-grapheme"> <class by- <class by- </rule> 6.3.5. The "choice" Element The "choice" element is used to represent a list of two or more alternatives: <rule name="ldh"> <choice count="1+"> <class by- <class by- <char cp="002D" comment="literal HYPHEN"/> </choice> </rule> Each child element of a "choice" element represents one alternative. The first matching alternative determines the match for the "choice" element. To express a choice where an alternative itself consists of a sequence of elements, the sequence must be wrapped in an anonymous rule." attribute in addition to the "cp" attribute and OPTIONAL "comment" or "ref" attributes. No other attributes or child elements are permitted. 6.3.7. The "any" Element The "any" element is an empty element that matches any single code point. It MAY have a "count" attribute. For an example, see Section 6.3.9. Unlike a literal, the "any" element MUST NOT have a "ref" attribute. 6.3.8. The "start" and "end" Elements To match the beginning or end of a label, use the "start" or "end" element. An empty label would match this rule: <rule name="empty-label"> <start/> <end/> </rule> Conceptually, whole label" elements, to define a rule that requires that an entire label be well-formed. For this example, that means that it must start with a letter and that the first or last element to be matched, respectively, in matching a rule. "start" and "end" elements are empty elements that do not have a "count" attribute or any other attribute other than "comment". It is an error for any match operator enclosing a nested "start" or "end" element to have a "count" attribute. 6.3.9. Example Context Rule from IDNA Specification This is an example of the WLE rule from [RFC5892] forbidding the mixture of the Arabic-Indic and extended Arabic-Indic digits in the same label. It is implemented as a whole label rule associated with the code point ranges using the "not-when" attribute, which defines an impermissible context. The example also demonstrates several instances of the use of anonymous rules for grouping. <data> <range first- <range first- </data> <rules> <rule name="mixed-digits"> <choice> <rule> <class from- <any count="0+"/> <class from- </rule> <rule> <class from- <any count="0+"/> <class from- </rule> </choice> </rule> </rules> As specified in the example, a label containing a code point from either of the two digit ranges is invalid for any label matching the "mixed-digits" rule, that is, any time that a code point from the other range is also present. Note that invalidating the label is not the same as invalidating the definition of the "range" elements; in particular, the definition of the tag values does not depend on the "when" attribute. 6.4. Parameterized Context or When Rules To recap: When a rule is intended to provide a context for evaluating the validity of a code point or variant mapping, it is invoked by the "when" or "not-when" attributes described in Section 5.2. For "char" and "range" elements, an action implied by a context rule always has a disposition of "invalid" whenever the rule given by the "when" attribute is not matched (see Section 7.5). Conversely, a "not-when" attribute results in a disposition of "invalid" whenever the rule is matched. When a rule is used in this way, it is called a context or "when" rule. The example in the previous section shows a whole label rule used as a context rule, essentially making the whole label the context. The next sections describe several match operators that can be used to provide a more specific specification of a context, allowing a parameterized context rule. See Section 7 for an alternative method of defining an invalid disposition for a label not matching a whole label rule. 6.4.1. The "anchor" Element Such parameterized context rules are rules that contain a special placeholder represented by an "anchor" element. As each When Rule is evaluated, if an "anchor" element is present, it is replaced by a literal corresponding to the "cp" attribute of the element containing the "when" (or "not-when") attribute. The match to the "anchor" element must be at the same position in the label as the code point or variant mapping triggering the When Rule. For example, the Greek lower numeral sign is invalid if not immediately preceding a character in the Greek script. This is most naturally addressed with a parameterized instance and fail for the second. Unlike other rules, rules containing an "anchor" element MUST only be invoked via the "when" or "not-when" attributes on code points or variants; otherwise, their "anchor" elements cannot be evaluated. However, it is possible to invoke rules not containing an "anchor" element from a "when" or "not-when" attribute. (See Section 6.4.3.) The "anchor" element is an empty element, with no attributes permitted except "comment". in a rule. Here is an example of a rule that defines an "initial" context for an Arabic code point: (or context rule) is a named rule that. 6.4.3. Omitting the "anchor" Element If the "anchor" element is omitted, the evaluation of the context rule is not tied to the position of the code point or sequence associated with the "when" attribute. According to [RFC5892], the Katakana middle dot is invalid in any label not containing at least one Japanese character anywhere in the label. Because this requirement is independent of the position of the middle dot, the rule does not require an "anchor" element. be in one of these scripts, but the position of that code point is independent of the location of the middle dot; therefore, no anchor is required. (Note that the Katakana middle dot itself is of script Common, that is, "sc:Zyyy".) 7. The "action" Element The purpose of an action is to assign a disposition to a label in response to being triggered by the label meeting a specified condition. LGR 7.3.) 7.1. The "match" and "not-match" Attributes An OPTIONAL "match" or "not-match" attribute specifies a rule that must be matched or not matched as a condition for triggering an action. Only a single rule may be named as the value of a "match" or "not-match" attribute. Because rules may be composed of other rules, this restriction to a single attribute value does not impose any limitation on the contexts that can trigger an action. An action MUST NOT contain both a "match" and a "not-match" attribute, and the value of either attribute MUST be the name of a previously defined rule; otherwise, the document MUST be rejected. An action without any attributes is triggered by all labels unconditionally. For a very simple LGR, the following action would allocate all labels that match the repertoire: <action disp="allocatable" /> the rules are not affected by the disposition attributes of the variant mappings. To trigger any actions based on these dispositions requires the use of additional optional attributes for actions described next. 7.2. Actions with Variant Type Triggers 7.2.1. The "any-variant", "all-variants", and "only-variants" Attributes An action may contain one of the OPTIONAL attributes "any-variant", "all-variants", or "only-variants" defining triggers based on variant types. The permitted value for these attributes consists of one or more variant type values, separated by spaces. These MAY include type values that are not used in any "var" element in the LGR. When a variant label is generated, these variant type values are compared to the set of type values on the variant mappings used to generate the particular variant label (see Section 8). 5.3.4). <char cp="0078" comment="x"> <var cp="0078" type="allocatable" comment="reflexive" /> <var cp="0079" type="blocked" /> </char> <char cp="0079" comment="y"> <var cp="0078" type="allocatable" /> </char> ... <action disp="blocked" any- <action disp="allocatable" only- <action disp="some-disp" any- In the example above, the label "xx" would have variant labels "xx", "xy", "yx", and "yy". The first action would result in blocking any variant label containing "y", because the variant mapping from "x" to "y" is of type "blocked", triggering the "any-variant" condition. Because in this example "x" has a reflexive variant mapping to itself of type "allocatable", the original label "xx" has a reflexive variant "xx" that would trigger the "only-variants" condition on the second action. A label "yy" would have the variants "xy", "yx", and "xx". Because the variant mapping from "y" to "x" is of type "allocatable" and a mapping from "y" to "y" is not defined, the labels "xy" and "yx" trigger the "any-variant" condition on the third label. The variant "xx", being generated using the mapping from "y" to "x" of type "allocatable",-variant" trigger with reflexive variant mappings (Section 5.3: <char cp="0570" comment="ARMENIAN SMALL LETTER HO"> <var cp="0068" type="blocked" comment="LATIN SMALL LETTER H" /> <var cp="04BB" type="blocked" comment="CYRILLIC SMALL LETTER SHHA" /> < 8). 7.2.2. Example from Tables in the Style of RFC 3743 This section gives an example of using variant type triggers, combined with variants with reflexive mappings (Section 5.3.4), to achieve LGRs that implement tables like those defined according to [RFC3743] where the goal is to allow as variants only labels that consist entirely of simplified or traditional variants, in addition to the original label. This example assumes an LGR where all variants have been given suitable "type" attributes of "blocked", "simplified", "traditional", or "both", similar to the ones discussed in Appendix B. Given such an LGR, the following example actions evaluate the disposition for the variant label: <action disp="blocked" any- <action disp="allocatable" only- <action disp="allocatable" only- <action disp="blocked" all- <action disp="allocatable" /> The first action matches any variant label for which at least one of the code point variants is of type "blocked". 5.3.4). The final two actions rely on the fact that actions are evaluated in sequence and that the first action triggered also defines the final disposition for a variant label (see Section "blocked". There are exceptions where the assumption on reflexive mappings made above does not hold, so this basic scheme needs some refinements to cover all cases. For a more complete example, see Appendix B. 7.3. Recommended Disposition Values The precise nature of the policy action taken in response to a disposition and the name of the corresponding "disp" attributes are only partially defined here. It is strongly RECOMMENDED to use the following dispositions only in their conventional sense. invalid The resulting string is not a valid label. This disposition may be assigned implicitly; see Section 7.5. No variant labels should be generated from a variant mapping with this type. blocked The resulting string is a valid label but should be blocked from registration. This would typically apply for a derived variant that is undesirable due to having no practical use or being confusingly similar to some other label. allocatable The resulting string should be reserved for use by the same operator of the origin string but not automatically allocated for use. activated The resulting string should be activated for use. (This is the same as a Preferred Variant [RFC3743].) valid The resultant string is a valid label. (This is the typical default action if no dispositions are defined.) 7.4. Precedence Actions are applied in the order of their appearance in the file. This defines their relative precedence. The first action triggered by a label defines the disposition for that label. To define the order of precedence, list the actions in the desired order. The conventional order of precedence for the actions defined in Section 7.3 is "invalid", "blocked", "allocatable", "activated", and then "valid". This default precedence is used for the default actions defined in Section 7.6. 7.5. Implied Actions The context rules on code points ("not-when" or "when" rules) carry an implied action with a disposition of "invalid" (not eligible) if a "when" context is not satisfied or a "not-when" context is matched, respectively. These rules are evaluated at the time the code points for a label or its variant labels are checked for validity (see Section 8). In other words, they are evaluated before any of the actions are applied, and with higher precedence. The context rules for variant mappings are evaluated when variants are generated and/or when variant tables are made symmetric and transitive. They have an implied action with a disposition of "invalid", which means that. 7.6. Default Actions If a label does not trigger any of the actions defined explicitly in the LGR, the following implicitly defined default actions are evaluated. They are shown below in their relative order of precedence (see Section 7.4). Default actions have a lower order of precedence than explicit actions (see Section 8.3). The default actions for variant labels are defined as follows. The first set is triggered based on the standard variant type values of "invalid", "blocked", "allocatable", and "activated": <action disp="invalid" any- <action disp="blocked" any- <action disp="allocatable" any- <action disp="activated" all- A final default action sets the disposition to "valid" for any label matching the repertoire for which no other action has been triggered. This "catch-all" action also matches all remaining variant labels from variants that do not have a type value. <action disp="valid" comment="Catch-all if other rules not met"/> Conceptually, the implicitly defined default actions act just like a block of "action" elements that is added (virtually) beyond the last of the user-supplied actions. Any label not processed by the user-supplied actions would thus be processed by the default actions as if they were present in the LGR. As the last default action is a "catch-all", all processing is guaranteed to end with a definite disposition for the label. 8. Processing a Label against an LGR 8.1. Determining Eligibility for a Label In order to test a given label for membership in the LGR, a consumer of the LGR must iterate through each code point within a given label and test that each instance of a code point is a member of the LGR. If any instance of a code point is not a member of the LGR, the label shall be deemed invalid. An individual instance of a code point is deemed a member of the LGR when it is listed using a "char" element, or is part of a range defined with a "range" element, and all necessary conditions in any "when" or "not-when" attributes are correctly satisfied for that instance. Alternatively, an instance of a code point is also deemed a member of the LGR when it forms part of a sequence that corresponds to a sequence listed using a "char" element for which the "cp" attribute defines a sequence, and all necessary conditions in any "when" or "not-when" attributes are correctly satisfied for that instance of the sequence. In determining eligibility, at each position the longest possible sequence of code points is evaluated first. If that sequence matches a sequence defined in the LGR and satisfies any required context at that position, the instances of its constituent code points are deemed members of the LGR and evaluation proceeds with the next code point following the sequence. If the sequence does not match a defined sequence or does not satisfy the required context, successively shorter sequences are evaluated until only a single code point remains. The eligibility of that code point is determined as described above for an individual code point instance. A label must also not trigger any action that results in a disposition of "invalid"; otherwise, it is deemed not eligible. (This step may need to be deferred until variant code point dispositions have been determined.) 8.1.1. Determining Eligibility Using Reflexive Variant Mappings For LGRs that contain reflexive variant mappings (defined in Section 5.3.4), the final evaluation of eligibility for the label must be deferred until variants are generated. In essence, LGRs that use this feature treat the original label as the (identity) variant of itself. For such LGRs, the ordinary determination of eligibility described here is but a first step that generally excludes only a subset of invalid labels. To further check the validity of a label with reflexive mappings, it is not necessary to generate all variant labels. Only a single variant needs to be created, where any reflexive variants are applied for each code point, and the label disposition is evaluated (as described in Section 8.3). A disposition of "invalid" results in the label being not eligible. (In the exceptional case where context rules are present on reflexive mappings, multiple reflexive variants may be defined, but for each original label, at most one of these can be valid at each code position. However, see Section 8.4.) 8 rules are satisfied as follows: - Create each possible permutation of a label by substituting each code point or code point sequence in turn by any defined variant mapping (including any reflexive mappings). - Apply variant mappings with "when" or "not-when" attributes only if the conditions are satisfied; otherwise, they are not defined. - Record each of the "type" values on the variant mappings used in creating a given variant label in a disposition set; for any unmapped code point, record the "type" value of any reflexive variant (see Section 5.3.4). - Determine the disposition for each variant label per Section 8.3. - If the disposition is "invalid", remove the label from the set. - If final evaluation of the disposition for the unpermuted label per Section 8.3 results in a disposition of "invalid", remove all associated variant labels from the set. The number of potential permutations can be very large. In practice, implementations would use suitable optimizations to avoid having to actually create all permutations (see Section 8.5). In determining the permuted set of variant labels in step (1) above, all eligible partitions into sequences must be evaluated. A label "ab" that matches a sequence "ab" defined in the LGR but also matches the sequence of individual code points "a" and "b" (both defined in the LGR) must be permuted using any defined variant mappings for both the sequence "ab" and the code points "a" and "b" individually. 8.3. Determining a Disposition for a Label or Variant Label For a given label (variant or original), its disposition is determined by evaluating, in order of their appearance, all actions for which the label or variant label satisfies the conditions. - For any label that contains code points or sequences not defined in the repertoire, or does not satisfy the context rules on all of its code points and variants, the disposition is "invalid". - For all other labels, the disposition is given by the value of the "disp" attribute for the first action triggered by the label. An action is triggered if all of the following are true: - the label matches the whole label rule given in the "match" attribute for that action; - the label does not match the whole label rule given in the "not-match" attribute for that action; - any of the recorded variant types for a variant label match the types given in the "any-variant" attribute for that action; - all of the recorded variant types for a variant label match the types given in the "all-variants" or "only-variants" attribute given for that action; - in case of an "only-variants" attribute, the label contains only code points that are the target of applied variant mappings; or - the action does not contain any "match", "not-match", "any-variant", "all-variants", or "only-variants" attributes: catch-all. - For any remaining variant label, assign the variant label the disposition using the default actions defined in Section 7.6. For this step, variant types outside the predefined recommended set (see Section 7.3) are ignored. - For any remaining label, set the disposition to "valid". 8.4. Duplicate Variant Labels For a poorly designed LGR, it is possible to generate duplicate variant labels from the same input label, but with different, and potentially conflicting, dispositions. Implementations MUST treat any duplicate variant labels encountered as an error, irrespective of their dispositions. This situation can arise in two ways. One is described in Section 5.3.5 and involves defining the same variant mapping with two context rules that are formally distinct but nevertheless overlap so that they are not mutually exclusive for the same label. The other case involves variants defined for sequences, where one sequence is a prefix of another (see Section 5.3.1). The following shows such an example resulting in conflicting reflexive variants: <char cp="0061"> <var cp="0061" type="allocatable"/> </char> <char cp="0062"/> <char cp="0061 0062"> <var cp="0061 0062" type="blocked"/> </char> A label "ab" would generate the variant labels "{a}{b}" and "{ab}" where the curly braces show the sequence boundaries as they were applied during variant mapping. The result is a duplicate variant label "ab", one based on a variant of type "allocatable" plus an original code point "b" that has no variant, and another one based on a single variant of type "blocked", thus creating two variant labels with conflicting dispositions. In the general case, it is difficult to impossible to prove by mechanical inspection of the LGR that duplicate variant labels will never occur, so implementations have to be prepared to detect this error during variant label generation. The condition is easily avoided by careful design of context rules and special attention to the relation among code point sequences with variants. 8.5. Checking Labels for Collision The obvious method for checking for collision between labels is to generate the fully permuted set of variants for one of them and see whether it contains the other label as a member. As discussed above, this can be prohibitive and is not necessary. Because of symmetry and transitivity, all variant mappings form disjoint sets. In each of these sets, the source and target of each mapping are also variants of the sources and targets of all the other mappings. However, members of two different sets are never variants of each other. If two labels have code points at the same position that are members of two different variant mapping sets, any variant labels of one cannot be variant labels of the other: the sets of their variant labels are likewise disjoint. Instead of generating all permutations to compare all possible variants, it is enough to find out whether code points at the same position belong to the same variant set or not. For that, it is sufficient to substitute an "index" mapping that identifies the set. This index mapping could be, for example, the variant mapping for which the target code point (or sequence) comes first in some sorting order. This index mapping would, in effect, identify the set of variant mappings for that position. To check for collision then means generating a single variant label from the original by substituting the respective "index" value for each code point. This results in an "index label". Two labels collide whenever the index labels for them are the same. 9. Conversion to and from Other Formats Both [RFC3743] and [RFC4290] provide different grammars for IDN tables. The formats in those documents are unable to fully support the increased requirements of contemporary IDN variant policies. This specification is a superset of functionality provided by the older IDN table formats; thus, any table expressed in those formats can be expressed in this new format. Automated conversion can be conducted between tables conformant with the grammar specified in each document. For notes on how to translate a table in the style of RFC 3743, see Appendix B. 10. Media Type Well-formed LGRs that comply with this specification SHOULD be transmitted with a media type of "application/lgr+xml". This media type will signal to an LGR-aware client that the content is designed to be interpreted as an LGR. 11. IANA Considerations IANA has completed the following actions: 11.1. Media Type Registration The media type "application/lgr+xml" has been registered to denote transmission of LGRs that are compliant with this specification, in accordance with [RFC6838]. Type name: application Subtype name: lgr+xml Required parameters: N/A Optional parameters: charset (as for application/xml per [RFC7303]) Security considerations: See the security considerations for application/xml in [RFC7303] and the specific security considerations for Label Generation Rulesets (LGRs) in RFC 7940 Interoperability considerations: As for application/xml per [RFC7303] Published specification: See RFC 7940 Applications that use this media type: Software using LGRs for international identifiers, such as IDNs, including registry applications and client validators. Additional information: Deprecated alias names for this type: N/A Magic number(s): N/A File extension(s): .lgr Macintosh file type code(s): N/A Person & email address to contact for further information: Kim Davies <[email protected]> Asmus Freytag <[email protected]> Intended usage: COMMON Restrictions on usage: N/A Author: Kim Davies <[email protected]> Asmus Freytag <[email protected]> Change controller: IESG Provisional registration? (standards tree only): No 11.2. URN Registration This specification uses a URN to describe the XML namespace, in accordance with [RFC3688]. URI: urn:ietf:params:xml:ns:lgr-1.0 Registrant Contact: See the Authors of this document. XML: None. 11.3. Disposition Registry This document establishes a vocabulary of "Label Generation Ruleset Dispositions", which has been reflected as a new IANA registry. This registry is divided into two subregistries: - Standard Dispositions - This registry lists dispositions that have been defined in published specifications, i.e., the eligibility for such registrations is "Specification Required" [RFC5226]. The initial set of registrations are the five dispositions in this document described in Section 7.3. - Private Dispositions - This registry lists dispositions that have been registered "First Come First Served" [RFC5226] by third parties with the IANA. Such dispositions must take the form "entity:disposition" where the entity is a domain name that uniquely identifies the private user of the namespace. For example, "example.org:reserved" could be a private extension used by the example organization to denote a disposition relating to reserved labels. These extensions are not intended to be interoperable, but registration is designed to minimize potential conflicts. It is strongly recommended that any new dispositions that require interoperability and have applicability beyond a single organization be defined as Standard Dispositions. In order to distinguish them from Private Dispositions, Standard Dispositions MUST NOT contain the ":" character. All disposition names shall be in lowercase ASCII. The IANA registry provides data on the name of the disposition, the intended purposes, and the registrant or defining specification for the disposition. 12. Security Considerations 12.1. LGRs Are Only a Partial Remedy for Problem Space Substantially unrestricted use of non-ASCII characters in security- relevant identifiers such as domain name labels may cause user confusion and invite various types of attacks. In many languages, in particular those using complex or large scripts, an attacker has an opportunity to divert or confuse users as a result of different code points with identical appearance or similar semantics. The use of an LGR provides a partial remedy for these risks by supplying a framework for prohibiting inappropriate code points or sequences from being registered at all and for permitting "variant" code points to be grouped together so that labels containing them may be mutually exclusive or registered only to the same owner. In addition, by being fully machine processable the format may enable automated checks for known weaknesses in label generation rules. However, the use of this format, or compliance with this specification, by itself does not ensure that the LGRs expressed in this format are free of risk. Additional approaches may be considered, depending on the acceptable trade-off between flexibility and risk for a given application. One method of managing risk may involve a case-by-case evaluation of a proposed label in context with already-registered labels -- for example, when reviewing labels for their degree of visual confusability. 12.2. Computational Expense of Complex Tables A naive implementation attempting to generate all variant labels for a given label could lead to the possibility of exhausting the resources on the machine running the LGR processor, potentially causing denial-of-service consequences. For many operations, brute-force generation can be avoided by optimization, and if needed, the number of permuted labels can be estimated more cheaply ahead of time. The implementation of WLE rules, using certain backtracking algorithms, can take exponential time for pathological rules or labels and exhaust stack resources. This can be mitigated by proper implementation and enforcing the restrictions on permissible label length., <>. [UAX42] The Unicode Consortium, "Unicode Character Database in XML", May 2016, <>. [Unicode-Stability] - The Unicode Consortium, "Unicode Encoding Stability Policy, Property Value Stability", April 2015, < stability_policy.html#Property_Value>. [Unicode-Versions] - The Unicode Consortium, "Unicode Version Numbering", June 2016, <>. [XML] Bray, T., Paoli, J., Sperberg-McQueen, M., Maler, E., and F. Yergeau, "Extensible Markup Language (XML) 1.0 (Fifth Edition)", World Wide Web Consortium, November 2008, <>. 13.2. Informative References [ASIA-TABLE] - DotAsia Organisation, ".ASIA ZH IDN Language Table", February 2012, <>. [LGR-PROCEDURE] - Internet Corporation for Assigned Names and Numbers, "Procedure to Develop and Maintain the Label Generation Rules for the Root Zone in Respect of IDNA Labels", December 2012, < draft-lgr-procedure-07dec12-en.pdf>. [RELAX-NG] The Organization for the Advancement of Structured - Information Standards (OASIS), "RELAX NG Compact Syntax", November 2002, < relax-ng/compact-20021121.html>. [RFC3688] Mealling, M., "The IETF XML Registry", BCP 81, RFC 3688, DOI 10.17487/RFC3688, January 2004, <>. 4290] Klensin, J., "Suggested Practices for Registration of Internationalized Domain Names (IDN)", RFC 4290, DOI 10.17487/RFC4290, December 2005, <>. [RFC5226] Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA Considerations Section in RFCs", BCP 26, RFC 5226, DOI 10.17487/RFC5226, May 2008, <>. [RFC5564] El-Sherbiny, A., Farah, M., Oueichek, I., and A. Al-Zoman, "Linguistic Guidelines for the Use of the Arabic Language in Internet Domains", RFC 5564, DOI 10.17487/RFC5564, February, <>. [TDIL-HINDI] - Technology Development for Indian Languages (TDIL) Programme, "Devanagari Script Behaviour for Hindi Ver2.0", < resourceDetails&toolid=1625&lang=en>. [UAX44] The Unicode Consortium, "Unicode Character Database", June 2016, <>. [WLE-RULES] - Internet Corporation for Assigned Names and Numbers, "Whole Label Evaluation (WLE) Rules", August 2016, < attachments/43989034/WLE-Rules.pdf>. Appendix A. Example Tables The following presents a minimal LGR table defining the lowercase LDH (letters, digits, hyphen) repertoire and containing no rules or> In practice, any LGR that includes the hyphen might also contain rules invalidating any labels beginning with a hyphen, ending with a hyphen, and containing consecutive hyphens in the third and fourth positions as required by [RFC5891]. <?xml version="1.0" encoding="utf-8"?> <lgr xmlns="urn:ietf:params:xml:ns:lgr-1.0"> <data> <char cp="002D" not- <range first- <range first- </data> <rules> <rule name="hyphen-minus-disallowed" comment="RFC5891 restrictions on U+002D"> <choice> <rule comment="no leading hyphen"> <look-behind> <start /> </look-behind> <anchor /> </rule> <rule comment="no trailing hyphen"> <anchor /> <look-ahead> <end /> </look-ahead> </rule> <rule comment="no consecutive hyphens in third and fourth positions"> <look-behind> <start /> <any /> <any /> <char cp="002D" comment="hyphen-minus" /> </look-behind> <anchor /> </rule> </choice> </rule> </rules> <> <scope type="domain">example.com</scope> 9.0</reference> <reference id="1" >RFC 5892</reference> <reference id="2" >Big-5: Computer Chinese Glyph and Character Code Mapping Table, Technical Report="blocked" ref="2" /> <var cp="534B" type="allocatable" ref="2" /> </char> <char cp="4E17" ref="0"> <var cp="4E16" type="allocatable" ref="2" /> <var cp="534B" type="allocatable" ref="2" /> </char> <char cp="534B" ref="0"> <var cp="4E16" type="allocatable" ref="2" /> <var cp="4E17" type="blocked" ref="2" /> </char> </data> <!-- Context and whole label rules --> <rules> <!-- Require the given code point to be between two 006C code points --> <rule name="catalan-middle-dot" ref="0"> <look-behind> <char cp="006C" /> </look-behind> <anchor /> <look-ahead> <char cp="006C" /> </look-ahead> </rule> <!--="invalid" match="three-or-more-consonants" /> <action disp="blocked" any- <action disp="allocatable" all- </rules> </lgr> Appendix B. How to Translate Tables Based on RFC 3743 into the XML Format As background, the rules specified in [RFC3743] work as follows: - The original (requested) label is checked to make sure that all the code points are a subset of the repertoire. - If it passes the check, the original label is allocatable. -: - Block all variant labels containing at least one blocked variant. - Allocate all labels that consist entirely of variants that are "simp" or "both". - Also allocate all labels that are entirely "trad" or "both". - Block all surviving labels containing any one of the dispositions "simp" or "trad" or "both", because they are now known to be part of an undesirable mixed simplified/traditional label. -. - [email protected]
http://pike.lysator.liu.se/docs/ietf/rfc/79/rfc7940.xml
CC-MAIN-2021-43
en
refinedweb
Related (hippo, bird, cat, giraffe, duh), some of them share the same voices! Why does that adorable hippo sound like the cat?! The solution I posted in my previous blog is simply use CocosDenshion to manipulate the recorded voice (your voice), to a higher or lower pitch to produce the voice of the animal (chipmunk). But the flaw of tha solution is that if you change the pitch, you are also changing the speed. So if you lower the pitch, you get this really low voice that is being played really sloooow. And I don’t want that I want to change the pitch but not change the speed. So I need a different solution. And the solution is Dirac3. According to its. Basically Dirac allows you to change the pitch of your audio, without speeding it up or slowing it down. Dirac 3 has a free version, called Dirac LE, which you can simply download from their website: Dirac LE is also available iPhone/iPad ARM 6 and 7 compliant (Xcode, iOS 3.2+ and iOS4+). Okay, download Dirac LE, and then let’s get started (oh, I am setting up mine as I write this blog post, as well). According to the “iPhone ReadMe (or suffer).rtf” that came with the zip file, we need to include the vecLib/Accelerate frameworks to your project. Go to Frameworks, right click, add Existing Frameworks, and then look for “Accelerate.framework”, add. Oh, and any file that will contain Dirac calls need to be .mm, instead of .m. And then Add Exsiting Files, add “Dirac.h” and “libDIRAC_iOS4-fat.a” to your project. I will be using the Time Stretching Example as my guide, (it’s also in the zip file). Oh, the zip file also contains a 32 page pdf file explaining Dirac. From the Time Stretching Example, copy the files in ExtAudioFile folder, EAFRead.h, EAFRead.mm, EAFWrite.h, EAFWrite.mm. These are the files Dirac will use to read and write audio files. And then we create a new file, I’m calling it AudioProcessor.mm, take note it’s .mm, because it will be calling Dirac. And basically I just copy pasted most of the code from iPhoneTestAppDelegate of the example. (Guilty for being a copy paster coder). And then edit some stuff, so AudioProcessor.h: #import <Foundation/Foundation.h> #import “EAFRead.h” #import “EAFWrite.h” @interface AudioProcessor : NSObject <AVAudioPlayerDelegate> { AVAudioPlayer *player; float percent; NSURL *inUrl; NSURL *outUrl; EAFRead *reader; EAFWrite *writer; } @property (readonly) EAFRead *reader; @end And then edit some more stuff, AudioProcessor.mm: #include “Dirac.h” #include <stdio.h> #include <sys/time.h> #import <AVFoundation/AVAudioPlayer.h> #import <AVFoundation/AVFoundation.h> #import “AudioProcessor.h” #import “EAFRead.h” #import “EAFWrite.h” double gExecTimeTotal = 0.; void DeallocateAudioBuffer(float **audio, int numChannels) { if (!audio) return; for (long v = 0; v < numChannels; v++) { if (audio[v]) { free(audio[v]); audio[v] = NULL; } } free(audio); audio = NULL; } float **AllocateAudioBuffer(int numChannels, int numFrames) { // Allocate buffer for output float **audio = (float**)malloc(numChannels*sizeof(float*)); if (!audio) return NULL; memset(audio, 0, numChannels*sizeof(float*)); for (long v = 0; v < numChannels; v++) { audio[v] = (float*)malloc(numFrames*sizeof(float)); if (!audio[v]) { DeallocateAudioBuffer(audio, numChannels); return NULL; } else memset(audio[v], 0, numFrames*sizeof(float)); } return audio; } /* This is the callback function that supplies data from the input stream/file whenever needed. It should be implemented in your software by a routine that gets data from the input/buffers. The read requests are *always* consecutive, ie. the routine will never have to supply data out of order. */ long myReadData(float **chdata, long numFrames, void *userData) { // The userData parameter can be used to pass information about the caller (for example, “self”) to // the callback so it can manage its audio streams. if (!chdata) return 0; AudioProcessor *Self = (AudioProcessor*)userData; if (!Self) return 0; // we want to exclude the time it takes to read in the data from disk or memory, so we stop the clock until // we’ve read in the requested amount of data gExecTimeTotal += DiracClockTimeSeconds(); // ……………………….. stop timer …………………………………… OSStatus err = [Self.reader readFloatsConsecutive:numFrames intoArray:chdata]; DiracStartClock(); // ……………………….. start timer …………………………………… return err; } @implementation AudioProcessor @synthesize reader; -(void)playOnMainThread:(id)param { NSError *error = nil; player = [[AVAudioPlayer alloc] initWithContentsOfURL:outUrl error:&error]; if (error) NSLog(@”AVAudioPlayer error %@, %@”, error, [error userInfo]); player.delegate = self; [player play]; } -(void)processThread:(id)param { NSLog(@”processThread”); NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; long numChannels = 1; // DIRAC LE allows mono only float sampleRate = 44100.; // open input file [reader openFileForRead:inUrl sr:sampleRate channels:numChannels]; // create output file (overwrite if exists) [writer openFileForWrite:outUrl sr:sampleRate channels:numChannels wordLength:16 type:kAudioFileAIFFType]; // DIRAC parameters // Here we set our time an pitch manipulation values float time = 1.0; float pitch = 1.0; float formant = 1.0; // First we set up DIRAC to process numChannels of audio at 44.1kHz // N.b.: The fastest option is kDiracLambdaPreview / kDiracQualityPreview, best is kDiracLambda3, kDiracQualityBest // The probably best *default* option for general purpose signals is kDiracLambda3 / kDiracQualityGood void *dirac = DiracCreate(kDiracLambdaPreview, kDiracQualityPreview, numChannels, 44100., &myReadData, (void*)self); // void *dirac = DiracCreate(kDiracLambda3, kDiracQualityBest, numChannels, 44100., &myReadData); if (!dirac) { printf(“!! ERROR !!nntCould not create DIRAC instancentCheck number of channels and sample rate!n”); printf(“ntNote that the free DIRAC LE library supports onlyntone channel per instancennn”); exit(-1); } // Pass the values to our DIRAC instance DiracSetProperty(kDiracPropertyTimeFactor, time, dirac); DiracSetProperty(kDiracPropertyPitchFactor, pitch, dirac); DiracSetProperty(kDiracPropertyFormantFactor, formant, dirac); // upshifting pitch will be slower, so in this case we’ll enable constant CPU pitch shifting if (pitch > 1.0) DiracSetProperty(kDiracPropertyUseConstantCpuPitchShift, 1, dirac); // Print our settings to the console DiracPrintSettings(dirac); NSLog(@”Running DIRAC version %snStarting processing”, DiracVersion()); // Get the number of frames from the file to display our simplistic progress bar SInt64 numf = [reader fileNumFrames]; SInt64 outframes = 0; SInt64 newOutframe = numf*time; long lastPercent = -1; percent = 0; // This is an arbitrary number of frames per call. Change as you see fit long numFrames = 8192; // Allocate buffer for output float **audio = AllocateAudioBuffer(numChannels, numFrames); double bavg = 0; // MAIN PROCESSING LOOP STARTS HERE for(;;) { // Display ASCII style “progress bar” percent = 100.f*(double)outframes / (double)newOutframe; long ipercent = percent; if (lastPercent != percent) { //[self performSelectorOnMainThread:@selector(updateBarOnMainThread:) withObject:self waitUntilDone:NO]; printf(“rProgress: %3i%% [%-40s] “, ipercent, &”||||||||||||||||||||||||||||||||||||||||”[40 – ((ipercent>100)?40:(2*ipercent/5))] ); lastPercent = ipercent; fflush(stdout); } DiracStartClock(); // ……………………….. start timer …………………………………… // Call the DIRAC process function with current time and pitch settings // Returns: the number of frames in audio long ret = DiracProcess(audio, numFrames, dirac); bavg += (numFrames/sampleRate); gExecTimeTotal += DiracClockTimeSeconds(); // ……………………….. stop timer …………………………………… printf(“x realtime = %3.3f : 1 (DSP only), CPU load (peak, DSP+disk): %3.2f%%n”, bavg/gExecTimeTotal, DiracPeakCpuUsagePercent(dirac)); // Process only as many frames as needed long framesToWrite = numFrames; unsigned long nextWrite = outframes + numFrames; if (nextWrite > newOutframe) framesToWrite = numFrames – nextWrite + newOutframe; if (framesToWrite < 0) framesToWrite = 0; // Write the data to the output file [writer writeFloats:framesToWrite fromArray:audio]; // Increase our counter for the progress bar outframes += numFrames; // As soon as we’ve written enough frames we exit the main loop if (ret <= 0) break; } percent = 100; //[self performSelectorOnMainThread:@selector(updateBarOnMainThread:) withObject:self waitUntilDone:NO]; // Free buffer for output DeallocateAudioBuffer(audio, numChannels); // destroy DIRAC instance DiracDestroy( dirac ); // Done! NSLog(@”nDone!”); [reader release]; [writer release]; // important – flushes data to file // start playback on main thread [self performSelectorOnMainThread:@selector(playOnMainThread:) withObject:self waitUntilDone:NO]; [pool release]; } – (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag { } – (void) initAudioProcessor: (NSURL*) filePath { NSLog(@”initAudioProcessor”); //NSString *inputSound = [[[NSBundle mainBundle] pathForResource: @”voice” ofType: @”aif”] retain]; NSString *outputSound = [[[NSHomeDirectory() stringByAppendingString:@”/Documents/”] stringByAppendingString:@”out.aif”] retain]; inUrl = [filePath retain]; outUrl = [[NSURL fileURLWithPath:outputSound] retain]; reader = [[EAFRead alloc] init]; writer = [[EAFWrite alloc] init]; // this thread does the processing [NSThread detachNewThreadSelector:@selector(processThread:) toTarget:self withObject:nil]; } – (void)dealloc { [player release]; [inUrl release]; [outUrl release]; [super dealloc]; } In my AudioController (code in previous blog), create a AudioProcessor object: AudioProcessor *audioProcessor = [[AudioProcessor alloc] init]; [audioProcessor initAudioProcessor: [NSString stringWithFormat: @”%@”, recordedTmpFiles[recordedTmpFileIdx]]]; recordedTmpFiles[recordedTmpFileIdx] is the filepath of the recorded audio. Just adjust float pitch, to change the well, pitch of your voice. For a chipmunky voice, set the pitch to > 1, for a low voice set the pitch to < 1. And that’s it 🙂
https://purplelilgirl.com/2011/03/17/tutorial-other-ways-to-chipmunkify-your-voice/
CC-MAIN-2021-43
en
refinedweb
- Training Library - Google Cloud Platform - Courses - Introduction to Google Cloud Firestore and Datastore Introduction to Google Cloud Firestore and Datastore Contents Firestore and Datastore The course is part of this learning path. Hello and welcome, this is Introduction to Cloud Firestore with App Engine, and in this lesson, we'll be exploring the basic functionality of Cloud Firestore and how it's used with App Engine. By the end of this lesson you should be able to: explain the purpose of Cloud Firestore, explain the relationship between Cloud Datastore and Cloud Firestore, explain when to choose Datastore Mode, and explain how to use Datastore with App Engine. Most applications need a place to store queryable data, and fortunately, there is no shortage of databases to help. However the choice of database is not always a simple one, it depends on multiple factors including but not limited to the type of data being stored, how the data will be consumed, et cetera. Technical limitations are another concern. For example, will the database be able to support the amount of traffic? No matter what kind of database you're using, if it can't support the workload that you intend to use it for, it's not all that useful. Remember, App Engine can keep scaling up to support the demand and that means the database needs to as well, and that's why Google created Cloud Datastore and paired it with App Engine. Now, that doesn't mean it's the only database you can use though it was designed to be a good likely pairing for many apps. So, if you're interested in learning more, then let's get started. Google released Datastore in 2013 to serve as App Engine's database, and for years that is how it has been. Datastore is a highly scalable, fully-managed, no-SQL document database. Supporting both eventual and strong consistency, it supports transactions and it offers a SQL-like query language called GQL. It stores data as properties of an entity with support for multiple data types, and it categorizes entities based on a developer supplied Kind. To make entity lookups perform quickly, entities include a Datastore property named a Key which is a unique ID, and to allow entities to be queried, Datastore allows developers to create indices based on the properties for which we want to filter. I mentioned queries being similar to SQL, they're similar but not exact, and there are some limitations for which I'll include a link App Engine is designed to run web apps and mobile backends which are broad categories with a wide range of storage requirements. There's really just no single option that's going to work for all workloads. If we were to compare a mobile chat application with a mobile news feed, both of them have very different storage needs. The same goes for web applications, let's say a brochureware application is going to have different storage needs than a site such as Twitter. In 2014, Google acquired a realtime database called Firebase. Realtime databases are used for data which is always changing and require processing to happen very quickly. Shortly after that acquisition, Google started building Cloud Firestore. It took the best parts of Firebase, it took the best parts of Datastore, and it smashed them together into a single service. Now, the purpose of Cloud Firestore is to serve as the next generation no-SQL database for Google Cloud. This is where we get a bit tangled. Cloud Firestore provides one of two operating modes that are called Native Mode and Datastore Mode. Using Cloud Firestore in Datastore Mode is the next generation of Cloud Datastore. It supports Datastore's API and it uses Firestore's data storage layer which removes some of the previous Datastore limitations. Now, currently the original Datastore and Firestore in Datastore Mode are two different products, however, Google will be moving users over to Firestore seamlessly over time. Using Firestore in Native Mode is not the next generation of Firebase, at least not publicly, at least not in this moment. Using Cloud Firestore in Native Mode is similar to Firebase, though there are some implementation changes. So with all of this, the logical question is, how do I know which mode to use? Here's some general advice, now, it's not universally applicable, though it is a good starting place. Datastore or Firestore in Datastore Mode are intended for server workloads, meaning a service-side application interacts with a database. Now, this is exactly what App Engine does, so if you're building with App Engine and you need a schema list database, then you'll probably wanna use Firestore in Datastore Mode. Firestore in Native Mode is similar to Firebase, which is basically an application platform and doesn't require developers to create service-side applications. Firestore Native and Firebase both provide STKs which allow read and writes to documents for which users have access and all without a server, and if you need some sort of backend functionality, it also supports running Cloud functions based on event triggers which include, when a document is created, deleted, updated, et cetera. So Firestore in Native Mode is very cool, but it's not the focus of this lesson, so we're not going to go into detail, this lesson is really focused on using Datastore Mode with App Engine, though if you wanna learn more, I'll include some links for further reading. Before moving on, there are some things to commit to memory, each project can use Firestore in one mode only. So if you select Datastore Mode, that's the mode for your project and vice versa with Native Mode. Also, and importantly, if you create an App Engine app inside a project, it's going to automatically choose Datastore Mode, so be careful with that, if you really don't wanna Datastore Mode, don't create an App Engine app in that project first. Alright, with all of this out of the way, how do you actually use Datastore with an App Engine application? Due to the ever-evolving nature of App Engine, there have been different methods for interacting with Datastore which varied by runtime. Currently, Google recommends the use of the Google Cloud Client Libraries. Google provides these libraries for different runtimes, and they allow engineers to interact with Datastore in the language that they're familiar with. If for some reason you're using a language that does not have a supported library, you can always fall back to the Rest API and use that directly, it just requires a bit more effort on your part. Before wrapping up, let's check out a demo of Cloud Datastore in the console. I'm here on the Datastore Dashboard, I've already enabled the Datastore API and I have some sample entities which store some dummy data. Notice here I have a Kind called EmailEvent and there are four entities. This Entities page is where we can see the different entities for a specific Kind which is specified in the dropdown. And if we click on Create it will open a form where we can add a new entity. The Namespace is used to partition data, an example might be to specify a company name which would allow for a multitenant app, and if you don't specify a namespace that's fine, the default is used, just know, they can't be changed once they're set. Datastore uses the concept of a Kind to categorize an entity and that makes it easier to query specific types of entities The Key is used to look up an entity quickly, notice these properties here, these are added by Datastore to make it easier for us to populate this, it knows that this is an EmailEvent Kind so it's given us the properties that exist on some of the other entities. This is just a nice feature from the user interface to make it easier to enter our data, remember, this is a schema list database, so we don't have to define specific properties for even the same Kind, I could just remove all of these and have a totally different property for one EmailEvent than I do for all the rest. I can't imagine a use case in which you'd want to do that, but it is possible. The query language that Datastore supports is called GQL and again, and it resembles SQL up to a certain point, so if we start by typing select star from followed by the name of our Kind, we can see this returns all EmailEvent Kinds with all the properties, though you could also specify properties to return and that way, you can get just the data you need. OK, let's stop here and see how we did. The purpose of Cloud Firestore is to serve as the next generation no-SQL database for Google Cloud. To get there, it took the lessons learned from Datastore and Firestore, probably other stuff internally as well, and put them all into a single product. Cloud Datastore is currently both its own service and a mode of Firestore, where Firestore in Datastore Mode is the latest generation of Datastore and will replace Datastore. Datastore Mode is intended for server workloads and if you're pairing with App Engine, then it's likely a good choice. To interact with Datastore or Datastore Mode in software, we would use the Google Cloud Client Libraries, though we could always use the Rest API directly if we needed to. Alright, that's going to do it for this lesson, I hope this has been helpful, I hope it's filled in a few of the gaps, and I will see you in the next lesson.
https://cloudacademy.com/course/introduction-google-cloud-firestore-datastore-1310/google-cloud-firestore/
CC-MAIN-2021-43
en
refinedweb
How-To: Route messages to different event handlers Introduction Content-based routing is a messaging pattern that utilizes a DSL instead of imperative application code. PubSub routing is an implementation of this pattern that allows developers to use expressions to route CloudEvents based on their contents to different URIs/paths and event handlers in your application. If no route matches, then an optional default route is used. This becomes useful as your applications expands to support multiple event versions, or special cases. Routing can be implemented with code; however, keeping routing rules external from the application can improve portability. This feature is available to both the declarative and programmatic subscription approaches. Enable message routing This is a preview feature. To enable it, add the PubSub.Routing feature entry to your application configuration like so: apiVersion: dapr.io/v1alpha1 kind: Configuration metadata: name: pubsubroutingconfig spec: features: - name: PubSub.Routing enabled: true Learn more about enabling preview features. Declarative subscription For declarative subscriptions, you must use dapr.io/v2alpha1 as the apiVersion. Here is an example of subscriptions.yaml using routing. apiVersion: dapr.io/v2alpha1 kind: Subscription metadata: name: myevent-subscription spec: pubsubname: pubsub topic: inventory routes: rules: - match: event.type == "widget" path: /widgets - match: event.type == "gadget" path: /gadgets default: /products scopes: - app1 - app2 Programmatic subscription Alternatively, the programattic approach varies slightly in that the routes structure is returned instead of route. The JSON structure matches the declarative YAML. import flask from flask import request, jsonify from flask_cors import CORS import json import sys app = flask.Flask(__name__) CORS(app) @app.route('/dapr/subscribe', methods=['GET']) def subscribe(): subscriptions = [ { 'pubsubname': 'pubsub', 'topic': 'inventory', 'routes': { 'rules': [ { 'match': 'event.type == "widget"', 'path': '/widgets' }, { 'match': 'event.type == "gadget"', 'path': '/gadgets' }, ], 'default': '/products' } }] return jsonify(subscriptions) @app.route('/products', methods=['POST']) def ds_subscriber(): print(request.json, flush=True) return json.dumps({'success':True}), 200, {'ContentType':'application/json'} app.run() const express = require('express') const bodyParser = require('body-parser') const app = express() app.use(bodyParser.json({ type: 'application/*+json' })); const port = 3000 app.get('/dapr/subscribe', (req, res) => { res.json([ { pubsubname: "pubsub", topic: "inventory", routes: { rules: [ { match: 'event.type == "widget"', path: '/widgets' }, { match: 'event.type == "gadget"', path: '/gadgets' }, ], default: '/products' } } ]); }) app.post('/products', (req, res) => { console.log(req.body); res.sendStatus(200); }); app.listen(port, () => console.log(`consumer app listening on port ${port}!`)) [Topic("pubsub", "inventory", "event.type ==\"widget\"", 1)] [HttpPost("widgets")] public async Task<ActionResult<Stock>> HandleWidget(Widget widget, [FromServices] DaprClient daprClient) { // Logic return stock; } [Topic("pubsub", "inventory", "event.type ==\"gadget\"", 2)] [HttpPost("gadgets")] public async Task<ActionResult<Stock>> HandleGadget(Gadget gadget, [FromServices] DaprClient daprClient) { // Logic return stock; } [Topic("pubsub", "inventory")] [HttpPost("products")] public async Task<ActionResult<Stock>> HandleProduct(Product product, [FromServices] DaprClient daprClient) { // Logic return stock; } package main import ( "encoding/json" "fmt" "log" "net/http" "github.com/gorilla/mux" ) const appPort = 3000 type subscription struct { PubsubName string `json:"pubsubname"` Topic string `json:"topic"` Metadata map[string]string `json:"metadata,omitempty"` Routes routes `json:"routes"` } type routes struct { Rules []rule `json:"rules,omitempty"` Default string `json:"default,omitempty"` } type rule struct { Match string `json:"match"` Path string `json:"path"` } // This handles /dapr/subscribe func configureSubscribeHandler(w http.ResponseWriter, _ *http.Request) { t := []subscription{ { PubsubName: "pubsub", Topic: "inventory", Routes: routes{ Rules: []rule{ { Match: `event.type == "widget"`, Path: "/widgets", }, { Match: `event.type == "gadget"`, Path: "/gadgets", }, }, Default: "/products", }, }, } w.WriteHeader(http.StatusOK) json.NewEncoder(w).Encode(t) } func main() { router := mux.NewRouter().StrictSlash(true) router.HandleFunc("/dapr/subscribe", configureSubscribeHandler).Methods("GET") log.Fatal(http.ListenAndServe(fmt.Sprintf(":%d", appPort), router)) } <?php require_once __DIR__.'/vendor/autoload.php'; $app = \Dapr\App::create(configure: fn(\DI\ContainerBuilder $builder) => $builder->addDefinitions(['dapr.subscriptions' => [ new \Dapr\PubSub\Subscription(pubsubname: 'pubsub', topic: 'inventory', routes: ( rules: => [ ('match': 'event.type == "widget"', path: '/widgets'), ('match': 'event.type == "gadget"', path: '/gadgets'), ] default: '/products')), ]])); $app->post('/products', function( #[\Dapr\Attributes\FromBody] \Dapr\PubSub\CloudEvent $cloudEvent, \Psr\Log\LoggerInterface $logger ) { $logger->alert('Received event: {event}', ['event' => $cloudEvent]); return ['status' => 'SUCCESS']; } ); $app->start(); Common Expression Language (CEL) In these examples, depending on the type of the event ( event.type), the application will be called on /widgets, /gadgets or /products. The expressions are written as Common Expression Language (CEL) where event represents the cloud event. Any of the attributes from the CloudEvents core specification can be referenced in the expression. Example expressions Match “important” messages has(event.data.important) && event.data.important == true Match deposits greater than $10000 event.type == "deposit" && event.data.amount > 10000 Match multiple versions of a message event.type == "mymessage.v1" event.type == "mymessage.v2" CloudEvent attributes For reference, the following attributes are from the CloudEvents specification. Event Data data As defined by the term Data, CloudEvents MAY include domain-specific information about the occurrence. When present, this information will be encapsulated within data. - Description: The event payload. This specification does not place any restriction on the type of this information. It is encoded into a media format which is specified by the datacontenttypeattribute (e.g. application/json), and adheres to the dataschemaformat when those respective attributes are present. - Constraints: - OPTIONAL LimitationCurrently, it is only possible to access the attributes inside data if it is nested JSON values and not JSON escaped in a string. REQUIRED Attributes The following attributes are REQUIRED to be present in all CloudEvents: id - Type: String - Description: Identifies the event. Producers MUST ensure that source+ idis unique for each distinct event. If a duplicate event is re-sent (e.g. due to a network error) it MAY have the same id. Consumers MAY assume that Events with identical sourceand idare duplicates. - Constraints: - REQUIRED - MUST be a non-empty string - MUST be unique within the scope of the producer - Examples: - An event counter maintained by the producer - A UUID source Type: URI-reference Description: Identifies the context in which an event happened. Often this will include information such as the type of the event source, the organization publishing the event or the process that produced the event. The exact syntax and semantics behind the data encoded in the URI is defined by the event producer. Producers MUST ensure that source+ idis unique for each distinct event. An application MAY assign a unique sourceto each distinct producer, which makes it easy to produce unique IDs since no other producer will have the same source. The application MAY use UUIDs, URNs, DNS authorities or an application-specific scheme to create unique sourceidentifiers. A source MAY include more than one producer. In that case the producers MUST collaborate to ensure that source+ idis unique for each distinct event. Constraints: - REQUIRED - MUST be a non-empty URI-reference - An absolute URI is RECOMMENDED Examples - Internet-wide unique URI with a DNS authority. - - mailto:[email protected] - Universally-unique URN with a UUID: - urn:uuid:6e8bc430-9c3a-11d9-9669-0800200c9a66 - Application-specific identifiers - /cloudevents/spec/pull/123 - /sensors/tn-1234567/alerts - 1-555-123-4567 specversion Type: String Description: The version of the CloudEvents specification which the event uses. This enables the interpretation of the context. Compliant event producers MUST use a value of 1.0when referring to this version of the specification. Currently, this attribute will only have the ‘major’ and ‘minor’ version numbers included in it. This allows for ‘patch’ changes to the specification to be made without changing this property’s value in the serialization. Note: for ‘release candidate’ releases a suffix might be used for testing purposes. Constraints: - REQUIRED - MUST be a non-empty string type - Type: String - Description: This attribute contains a value describing the type of event related to the originating occurrence. Often this attribute is used for routing, observability, policy enforcement, etc. The format of this is producer defined and might include information such as the version of the type- see Versioning of CloudEvents in the Primer for more information. - Constraints: - REQUIRED - MUST be a non-empty string - SHOULD be prefixed with a reverse-DNS name. The prefixed domain dictates the organization which defines the semantics of this event type. - Examples - com.github.pull_request.opened - com.example.object.deleted.v2 OPTIONAL Attributes The following attributes are OPTIONAL to appear in CloudEvents. See the Notational Conventions section for more information on the definition of OPTIONAL. datacontenttype Type: Stringper RFC 2046 Description: Content type of datavalue. This attribute enables datato carry any type of content, whereby format and encoding might differ from that of the chosen event format. For example, an event rendered using the JSON envelope format might carry an XML payload in data, and the consumer is informed by this attribute being set to “application/xml”. The rules for how datacontent is rendered for different datacontenttypevalues are defined in the event format specifications; for example, the JSON event format defines the relationship in section 3.1. For some binary mode protocol bindings, this field is directly mapped to the respective protocol’s content-type metadata property. Normative rules for the binary mode and the content-type metadata mapping can be found in the respective protocol. In some event formats the datacontenttypeattribute MAY be omitted. For example, if a JSON format event has no datacontenttypeattribute, then it is implied that the datais a JSON value conforming to the “application/json” media type. In other words: a JSON-format event with no datacontenttypeis exactly equivalent to one with datacontenttype="application/json". When translating an event message with no datacontenttypeattribute to a different format or protocol binding, the target datacontenttypeSHOULD be set explicitly to the implied datacontenttypeof the source. Constraints: For Media Type examples see IANA Media Types dataschema - Type: URI - Description: Identifies the schema that dataadheres to. Incompatible changes to the schema SHOULD be reflected by a different URI. See Versioning of CloudEvents in the Primer for more information. - Constraints: - OPTIONAL - If present, MUST be a non-empty URI subject Type: String Description: This describes the subject of the event in the context of the event producer (identified by source). In publish-subscribe scenarios, a subscriber will typically subscribe to events emitted by a source, but the sourceidentifier alone might not be sufficient as a qualifier for any specific event if the sourcecontext has internal sub-structure. Identifying the subject of the event in context metadata (opposed to only in the datapayload) is particularly helpful in generic subscription filtering scenarios where middleware is unable to interpret the datacontent. In the above example, the subscriber might only be interested in blobs with names ending with ‘.jpg’ or ‘.jpeg’ and the subjectattribute allows for constructing a simple and efficient string-suffix filter for that subset of events. Constraints: - OPTIONAL - If present, MUST be a non-empty string Example: - A subscriber might register interest for when new blobs are created inside a blob-storage container. In this case, the event sourceidentifies the subscription scope (storage container), the typeidentifies the “blob created” event, and the iduniquely identifies the event instance to distinguish separate occurrences of a same-named blob having been created; the name of the newly created blob is carried in subject: source: subject: mynewfile.jpg time - Type: Timestamp - Description: Timestamp of when the occurrence happened. If the time of the occurrence cannot be determined then this attribute MAY be set to some other time (such as the current time) by the CloudEvents producer, however all producers for the same sourceMUST be consistent in this respect. In other words, either they all use the actual time of the occurrence or they all use the same algorithm to determine the value used. - Constraints: LimitationCurrently, comparisons to time (e.g. before or after “now”) are not supported. Community call demo Watch this video on how to use message routing with pub/sub: Next steps - Try the Pub/Sub routing sample - Learn how to configure Pub/Sub components with multiple namespaces - List of pub/sub components - Read the API reference Feedback Was this page helpful? Glad to hear it! Please tell us how we can improve. Sorry to hear that. Please tell us how we can improve.
https://docs.dapr.io/developing-applications/building-blocks/pubsub/howto-route-messages/
CC-MAIN-2021-43
en
refinedweb
Hi, our app uses Xamarin.Mac and is ready for the AppStore, however we have In-App purchases and had to purhase Xamarin.Mac due to StoreKit support. The problem is we depend on NSHomeDirectory and it's quite hard to find some alternative implementation of getting the sandboxed root path. Our other title that was written with Monobjc is undergoing review process - quite a nice achievement for a completely free alternative technology. How can we get sanboxed root with Xamarin.Mac? Also I was quite surprised that there is no even a single mention of NSHomeDirectory on the Xamarin's website. We don't want to rewrite the code with NSDocument, unless there will be no other way, I hope not though.. Are you looking for this? Environment.GetFolderPath(Environment.SpecialFolder.Personal) No, this seems to give nonsandboxed path, unfortunately. [DllImport(Constants.FoundationLibrary)] public static extern IntPtr NSHomeDirectory(); public static string ContainerDirectory { get { return ((NSString)Runtime.GetNSObject(NSHomeDirectory())).ToString (); } }
https://social.msdn.microsoft.com/Forums/en-US/f9f91048-b7f0-4860-b616-727c27a38b4f/what-is-the-status-of-nshomedirectory-class?forum=xamarinios
CC-MAIN-2021-43
en
refinedweb
FAQ: Converting Add-ins to VSPackage Extensions Note This article applies to Visual Studio 2015. If you're looking for the latest Visual Studio documentation, use the version selector at the top left. We recommend upgrading to Visual Studio 2019. Download it here Add-ins are now deprecated. To make a new Visual Studio extension, you need to create a VSIX extension. Here are the answers to some frequently asked questions about how to convert a Visual Studio add-in to a VSIX extension. Warning Starting in Visual Studio 2015, for C# and Visual Basic projects, you can use the VSIX project and add item templates for menu commands, tool windows, and VSPackages. For more information, see What's New in the Visual Studio 2015 SDK. Important In many cases you can simply transfer your add-in code to a VSIX project with a VSPackage project item. You can get the DTE automation object by calling GetService in the Initialize method. DTE2 dte = (DTE2)GetService(typeof(DTE)); For more information, see How can I run my add-in code in a VSPackage? below. What software do I need to develop VSIX. Where's the extension documentation? Start with Starting to Develop Visual Studio Extensions. Other articles about VSSDK extension development on MSDN are below that one. Can I convert my add-in project to a VSIX project? An add-in project can't be converted directly to a VSIX project because the mechanisms used in VSIX projects are not the same as the ones in add-in projects. The VSIX project template, plus the right project item templates have a lot of code that makes it relatively easy to get up and running as a VSIX extension. How do I start developing VSIX extensions? Here’s how you make a VSIX that has a menu command: To make a VSIX extension that has a menu command Create a VSIX project. (File, New, Project, or type project in the Quick Launch window). In the New Project dialog box, expand Visual C# / Extensibility or Visual Basic / Extensibility and select VSIX Project.) Name the project TestExtension and specify a location for it. Add a Custom Command project item template. (Right-click the project node in the Solution Explorer and select Add / New Item. In the New Project dialog for either Visual C# or Visual Basic, select the Extensibility node and select Custom Command.) Press F5 to build and run the project in debug mode. A second instance of Visual Studio appears. This second instance is called the experimental instance, and it may not have the same settings as the instance of Visual Studio you're using to write code. The first time you run the experimental instance, you will be asked to sign in to VS Online and specify your theme and profile. On the Tools menu (in the experimental instance) you should see a button named My Command name. When you choose this button, a message should appear: Inside TestVSPackagePackage.MenuItemCallback(). How can I run my add-in code in a VSPackage? Add-in code usually runs in one of two ways: Triggered by a menu command (the code is in the IDTCommandTarget.Execmethod) Automatically on startup (the code is in the OnConnectionevent handler.) You can do the same things in a VSPackage. Here's how to add some add-in code in the callback method: To implement a menu command in a VSPackage Create a VSPackage that has a menu command. (For more information, see Creating an Extension with a Menu Command.) Open the file that contains the definition of the VSPackage. (In a C# project, it's <your project name>Package.cs.) Add the following usingstatements to the file: using EnvDTE; using EnvDTE80; Find the MenuItemCallbackmethod. Add a call to GetService to get the DTE2 object: DTE2 dte = (DTE2)GetService(typeof(DTE)); Add the code that your add-in had in its IDTCommandTarget.Execmethod. For example, here is some code that adds a new pane to the Output window and prints "Some Text" in the new pane. private void MenuItemCallback(object sender, EventArgs e) { DTE2 dte = (DTE2) GetService(typeof(DTE)); OutputWindow outputWindow = dte.ToolWindows.OutputWindow; OutputWindowPane outputWindowPane = outputWindow.OutputWindowPanes.Add("A New Pane"); outputWindowPane.OutputString("Some Text"); } Build and run this project. Press F5 or select Start on the Debug toolbar. In the experimental instance of Visual Studio, the Tools menu should have a button named My Command name. When you choose this button, the words Some Text should appear in an Output window pane. (You may have to open the Output window.) You can also have your code run on startup. However, this approach is generally discouraged for VSPackage extensions. If too many extensions try to load when Visual Studio starts, the start time might become noticeably longer. A better practice is to load the VSPackage automatically only when some condition is met (like a solution being opened). This procedure shows how to run add-in code in a VSPackage that loads automatically when a solution is opened: To autoload a VSPackage Create a VSIX project with a Visual Studio Package project item. (For the steps to do this, see How do I start developing VSIX extensions?. Just add the Visual Studio Package project item instead.) Name the VSIX project TestAutoload. Open TestAutoloadPackage.cs. Find the line where the package class is declared: public sealed class <name of your package>Package : Package Above this line is a set of attributes. Add this attribute: [ProvideAutoLoad(UIContextGuids80.SolutionExists)] Set a breakpoint in the Initialize()method and start debugging (F5). In the experimental instance, open a project. The VSPackage should load, and your breakpoint should be hit. You can specify other contexts in which to load your VSPackage by using the fields of UIContextGuids80. For more information, see Loading VSPackages. How can I get the DTE object? If your add-in doesn't display UI—for example, menu commands, toolbar buttons, or tool windows—you may be able to use your code as-is as long as you get the DTE automation object from the VSPackage. Here's how: To get the DTE object from a VSPackage In a VSIX project with a Visual Studio Package item template, look for the <project name>Package.cs file. This is the class that derives from Package; it can help you interact with Visual Studio. In this case, you use its GetService to get the DTE2 object. Add these usingstatements: using EnvDTE; using EnvDTE80; Find the Initializemethod. This method handles the command you specified in the package wizard. Add a call to GetService to get the DTE object: DTE dte = (DTE)GetService(typeof(DTE)); After you have the DTE automation object, you can add the rest of your add-in code to the project. If you need the DTE2 object, you can do the same thing. How do I change menu commands and toolbar buttons in my add-in to the VSPackage style? VSPackage extensions use the .vsct file to create most of the menu commands, toolbars, toolbar buttons, and other UI. The Custom Command project item template gives you the option to create a command on the Tools menu. For more information, see Creating an Extension with a Menu Command. For more information about .vsct files, see How VSPackages Add User Interface Elements. For walkthroughs that show how to use the .vsct file to add menu items, toolbars, and toolbar buttons, see Extending Menus and Commands. How do I add custom tool windows in the VSPackage way? The Custom Tool Window project item template gives you the option to create a tool window. For more information about this project item template, see Creating an Extension with a Tool Window. For information about tool windows, see Extending and Customizing Tool Windows and the articles under it, especially Adding a Tool Window. How do I manage Visual Studio windows in the VSPackage way? If your add-in manages Visual Studio windows, the add-in code should work in a VSPackage. For example, this procedure shows how to add code that manages the Task List to the MenuItemCallback method of the VSPackage. To insert window-management code from an add-in into a VSPackage, here is some code that adds new tasks to the Task List, lists the number of tasks, and then deletes one task. private void MenuItemCallback(object sender, EventArgs e) { DTE2 dte = (DTE2) GetService(typeof(DTE)); TaskList tl = (TaskList)dte.ToolWindows.TaskList; askItem tlItem; // Add a couple of tasks to the Task List. tlItem = tl.TaskItems.Add(" ", " ", "Test task 1.", vsTaskPriority.vsTaskPriorityHigh, vsTaskIcon.vsTaskIconUser, true, "", 10, true, true); tlItem = tl.TaskItems.Add(" ", " ", "Test task 2.", vsTaskPriority.vsTaskPriorityLow, vsTaskIcon.vsTaskIconComment, true, "", 20, true,true); // List the total number of task list items after adding the new task items. System.Windows.Forms.MessageBox.Show("Task Item 1 description: "+tl.TaskItems.Item(2).Description); System.Windows.Forms.MessageBox.Show("Total number of task items: "+tl.TaskItems.Count); // Remove the second task item. The items list in reverse numeric order. System.Windows.Forms.MessageBox.Show("Deleting the second task item"); tl.TaskItems.Item(2).Delete(); System.Windows.Forms.MessageBox.Show("Total number of task items: "+tl.TaskItems.Count); } How do I manage projects and solutions in a VSPackage? If your add-in manages projects and solutions, the add-in code should work in a VSPackage. For example, this procedure shows how to add code that gets the startup project., the following code gets the name of the startup project in a solution. (A multi-project solution must be open when this package runs.) private void MenuItemCallback(object sender, EventArgs e) { DTE2 dte = (DTE2) GetService(typeof(DTE)); SolutionBuild2 sb = (SolutionBuild2)dte.Solution.SolutionBuild; Project startupProj; string msg = ""; foreach (String item in (Array)sb.StartupProjects) { msg += item; } System.Windows.Forms.MessageBox.Show("Solution startup Project: "+msg); startupProj = dte.Solution.Item(msg); System.Windows.Forms.MessageBox.Show("Full name of solution's startup project: "+"/n"+startupProj.FullName); } How do I set keyboard shortcuts in a VSPackage? You use the <KeyBindings> element of the .vsct file. In the following example, the keyboard shortcut for the command idCommand1 is Alt+A, and the keyboard shortcut for the command idCommand2 is Alt+Ctrl+A. Notice the syntax for the key names. <KeyBindings> <KeyBinding guid="MyProjectCmdSet" id="idCommand1" editor="guidVSStd97" key1="A" mod1="ALT" /> <KeyBinding guid="MyProjectCmdSet" id="idCommand2" editor="guidVSStd97" key1="A" mod1="CONTROL" mod2="ALT" /> </KeyBindings> How do I handle automation events in a VSPackage? You handle automation events in a VSPackage in the same way as in your add-in. The following code shows how to handle the OnItemRenamed event. (This example assumes that you've already gotten the DTE object.) Events2 dteEvents = (Events2)dte.Events; dteEvents.ProjectItemsEvents.ItemRenamed += listener1.OnItemRenamed; . . . public void OnItemRenamed(EnvDTE.ProjectItem projItem, string oldName) { string s = "[Event] Renamed " + oldName + " to " + Path.GetFileName(projItem.get_FileNames(1) + " in project " + projItem.ContainingProject.Name; }
https://docs.microsoft.com/en-us/visualstudio/extensibility/faq-converting-add-ins-to-vspackage-extensions?view=vs-2015&redirectedfrom=MSDN
CC-MAIN-2019-43
en
refinedweb
Concurrency and Locking With JPA: Everything You Need to Know Concurrency and Locking With JPA: Everything You Need to Know Learn more about concurrency and locking with JPA. Join the DZone community and get the full member experience.Join For Free Imagine you have a system used by multiple users where each user is trying to modify the same entity concurrently. How do you ensure that the underlying data's integrity is preserved when accessed concurrently? The persistence providers an offer locking strategy to manage concurrency. There are two types of locking: Optimistic locking and Pessimistic locking. Before we deep dive into the locking strategy, let's learn a little bit about ACID transactions. ACID (Atomicity, Consistency, Isolation, Durability) transactions ensure that a database transaction is completed in a timely manner. The relational databases such as MySQL, Postgres, SQL Server, and Oracle are ACID compliant. A database transaction can be broken down into multiple components. ACID-compliant databases ensure that a transaction is persisted and committed, only when all the components of a transaction are succeeded. If any one of the components fails, the transaction will be rolled back and no change will be made. In case of multiple transactions, we need to define the locking strategy to make sure that underlying data's integrity is preserved. Let's look at each locking strategy in this article. Optimistic Locking The persistence provider JPA. @Entity public class User { @Id @Column (name = "ID", nullable = false) @GeneratedValue (strategy = GenerationType.AUTO) private long id; @Version private long version; @OneToMany(mappedBy = "user", cascade = CascadeType.ALL) private List<Order> orders = new ArrayList<>(); .. } Here, each transaction that reads data holds the value of the version property. Before any transaction that modifies the underlying data, it checks the value of version property again. If the value has changed in the meantime an OptimisticLockException will be thrown and transaction will be rolled back. Otherwise, the transaction commits the update and increments the value of version property. Optimistic Lock Modes JPA provided two locking modes in case of Optimistic locking: OPTIMISTIC and OPTIMISTIC_FORCE_INCREMENT. OPTIMISTIC obtains the read lock for the entities with @Version property. OPTIMISTIC_FORCE_INCREMENT obtains the read lock for the entities with @Version property and increments the value of the property. @Service public class UserService { @PersistenceContext private EntityManager entityManager; public void updateUser(final String id) { User user = entityManager.find(User.class, id); entityManager.lock(user, LockModeType.OPTIMISTIC); .. } } public void updateUser(final String id) { User user = entityManager.find(User.class, id); entityManager.lock(user, LockModeType.OPTIMISTIC_FORCE_INCREMENT); .. } Pessimistic Locking In case of pessimistic locking, JPA creates a transaction that obtains a lock on the data until the transaction is completed. This prevents other transactions from making any updates to the entity until the lock is released. Pessimistic locking can be very useful when the data is frequently accessed and modified by multiple transactions. Keep in mind that using pessimistic locking may result in decreased application performance, if the entities are not susceptible to frequent modifications. Pessimistic Lock Modes JPA provides three Pessimistic locking modes: PESSIMISTIC_READ, PESSIMISTIC_FORCE_INCREMENT and PESSIMISTIC_WRITE. PESSIMISTIC_READ obtains a long-term read lock on the data to prevent the data from being updated or deleted. Other transactions may read the data during the lock, but will not be able to modify or delete the data. PESSIMISTIC_FORCE_INCREMENT obtains a long-term read lock on the data to prevent the data from being updated or deleted. Also, increments the value of@Version property. PESSIMISTIC_WRITE obtains a long-term read lock on the data to prevent the data from being read, updated or deleted. User user = entityManager.find(User.class,id); entityManager.lock(user, LockModeType.PESSIMISTIC_WRITE); user.getOrders().forEach(order -> { orderRepository.delete(order); }); .. If a pessimistic lock cannot be obtained, it results in a transaction rollback and PessimisticLockException will be thrown. But, if the locking failure doesn't result in a transaction rollback, a LockTimeoutException will be thrown. Conclusion Depending on your application needs, choose the right locking strategy and apply locking strategy only if necessary. Feel free to let me know if you have any suggestions or comments below. Published at DZone with permission of Swathi Prasad , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/concurrency-and-locking-with-jpa-everything-you-ne
CC-MAIN-2019-43
en
refinedweb