[SOLVED] Painting gradients with QGraphicsItem
Hello everyone,
I want to draw a QGraphicsItem and fill it with a QLinearGradient.
I've subclassed the Item and implemented the paint function for testing like this:
painter->save();
QLinearGradient gradient(m_BoxSize.topLeft(), m_BoxSize.bottomRight());
gradient.setColorAt(0, Qt::green);
gradient.setColorAt(1, Qt::red);
painter->fillPath(shape(), gradient);
painter->strokePath(shape(), QPen(Qt::red, 1, Qt::SolidLine, Qt::RoundCap, Qt::RoundJoin));
painter->restore();
m_BoxSize is a member variable of my Item and set to QRectF(0, 0, 40, 40);
and shape() is implemented like this:
QPainterPath path;
path.addRoundedRect(m_BoxSize, 5, 5);
return path;
When I add such an Item to my Scene, I'm getting a red rounded rect without infill.
When I change painter->fillPath(shape(), gradient); to painter->fillPath(shape(), Qt::red); I'm getting a red rounded rect as expected.
Can someone tell me my mistake?
Thank you in advance and kind regards
AlexRoot
Edit: why aren't the code blocks working?
[edit: Fixed coding tags, use ``` SGaist]
->Edit: Thank you :)
- SGaist Lifetime Qt Champion last edited by
Hi,
Can you share the complete class code ?
Hi,
yes, here it is:
namespace Graphics {
namespace Items {

class ClassItem : public QGraphicsItem
{
public:
    ClassItem(const QColor &color, const QPoint &position, QGraphicsScene *scene);

    virtual QRectF boundingRect() const override;
    virtual QPainterPath shape() const override;
    virtual void paint(QPainter *painter, const QStyleOptionGraphicsItem *item, QWidget *widget) override;

private:
    QColor m_Color;
    QPoint m_Position;
    QRectF m_BoxSize;
    QPainterPath m_ShapePath;
    QLinearGradient m_TitleGradient;
    QGraphicsScene *m_pScene;

public slots:
    void resize(const QRectF &newRect);
};

}
}
and the implementation
Graphics::Items::ClassItem::ClassItem(const QColor &color, const QPoint &position, QGraphicsScene *scene)
    : m_Color(color), m_Position(position), m_pScene(scene)
{
    setFlags(ItemIsSelectable | ItemIsMovable);
    setAcceptHoverEvents(true);
    m_TitleGradient.setCoordinateMode(QLinearGradient::ObjectBoundingMode);
    m_TitleGradient.setColorAt(0, Qt::red);
    m_TitleGradient.setColorAt(1, Qt::blue);
    resize(QRectF(0, 0, 40, 40));
}

void Graphics::Items::ClassItem::resize(const QRectF &newRect)
{
    m_BoxSize = newRect;
    m_ShapePath = QPainterPath();
    m_ShapePath.setFillRule(Qt::WindingFill);
    m_ShapePath.addRoundedRect(newRect, 5, 5);
    m_TitleGradient.setStart(newRect.topLeft());
    m_TitleGradient.setFinalStop(newRect.bottomRight());
}

QPainterPath Graphics::Items::ClassItem::shape() const
{
    return m_ShapePath;
}

void Graphics::Items::ClassItem::paint(QPainter *painter, const QStyleOptionGraphicsItem *, QWidget *)
{
    painter->save();
    painter->fillPath(m_ShapePath, m_TitleGradient);
    painter->strokePath(m_ShapePath, QPen(Qt::red, 1, Qt::SolidLine, Qt::RoundCap, Qt::RoundJoin));
    painter->restore();
}
It looks very different now, I've reimplemented it. My problem persists:
the example code "ignores" the fillPath line but
painter->fillPath(m_ShapePath, Qt::green);
works
Kind regards
AlexRoot
Any ideas on that?
Tanks in advance and have a nice day :)
Check gradient fill mode:
m_TitleGradient.setCoordinateMode(QLinearGradient::ObjectBoundingMode);
// ...
resize(QRectF(0, 0, 40, 40));
ObjectBoundingMode accepts range 0 -> 1.
Check LogicalMode (default mode).
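For comparison, a minimal sketch of the same gradient left in the default logical mode, where the stops are given directly in item coordinates (here assuming the 40x40 box from the earlier posts):

QLinearGradient gradient(QPointF(0, 0), QPointF(40, 40)); // absolute item coordinates
gradient.setCoordinateMode(QLinearGradient::LogicalMode); // the default mode
gradient.setColorAt(0, Qt::red);
gradient.setColorAt(1, Qt::blue);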
Thank you for your answer. I've already tried that but it wasn't the solution. I've figured out that the problem occurs when turning on OpenGL rendering. I guess it has to do with my hybrid graphics system. My main card has some serious problems so I can't run Optirun right now, and I guess my secondary GPU does not support the required features (only OpenGL 3.X).
I'm not sure about that, but as I said: turning off the OpenGL rendering solved that problem for me. Maybe it helps others.
Have a nice day everyone | https://forum.qt.io/topic/59398/solved-painting-gradients-with-qgraphicsitem/?page=1 | CC-MAIN-2019-43 | en | refinedweb |
Do you need to run a process every day at the exact same time, like an alarm? Then Spring's scheduled tasks are for you. They allow you to annotate a method with @Scheduled so that Spring invokes it on the schedule you define.
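As a minimal sketch (the class and method names here are made up for illustration), a fixed-rate task looks like this:

import java.time.LocalTime;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ScheduledTasks {

    // Runs every 5 seconds, measured from the start of the previous invocation.
    @Scheduled(fixedRate = 5000)
    public void reportCurrentTime() {
        System.out.println("The time is now " + LocalTime.now() + "!");
    }
}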
fixedRateString is the same as fixedRate but with a string value instead.
fixedDelay executes the method with a fixed period of milliseconds between the end of one invocation and the start of the next.
fixedDelayString is the same as fixedDelay but with a string value instead. The cron property takes a cron expression for more complex schedules; in such an expression, L represents the last day of the month. If we specify a value in the day-of-week field, we must use ? in the day-of-month field, and vice versa.
W represents the nearest weekday of the month. For example, 15W will trigger on the nearest weekday to the 15th of the month. Finally, scheduling has to be switched on by adding @EnableScheduling to a configuration class:
@SpringBootApplication
@EnableScheduling
public class Application {

    public static void main(final String args[]) {
        SpringApplication.run(Application.class);
    }
}
The little amount of code used in this post can be found on my GitHub.
If you found this post helpful and wish to keep up to date with my new tutorials as I write them, follow me on Twitter at @LankyDanDev.
Published at DZone with permission of Dan Newton , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
I'm really confused with this! I have 2 classes, Club and Membership. In Membership I have the method, getMonth(), and in Club I have joinedMonth() which takes the parameter, 'month' - so a user enters a month and then I want it to return the Membership's which joined in that specific month.
I am trying to call the getMonth() method from class Club, so that I can then go on to compare the integers of the months. But, when I try to call the method, I just get the mentioned "non-static method getMonth() cannot be referenced from a static context".
Basically, what is this and how can I resolve it?
Thank you in advance!
Club:
public class Club {
    private ArrayList<Membership> members;
    private int month;

    /**
     * Constructor for objects of class Club
     */
    public Club() {
        Membership member = new Membership("John", 04, 2010);
    }

    /**
     * Add a new member to the club's list of members.
     * @param member The member object to be added.
     */
    public void join(Membership member) {
        members.add(member);
    }

    /**
     * @return The number of members (Membership objects) in
     * the club.
     */
    public int numberOfMembers() {
        return members.size();
    }

    /**
     * Determine the number of members who joined in the given month
     * @param month The month we are interested in.
     * @return The number of members
     */
    public int joinedMonth(int month) {
        Membership.getMonth();
    }
}
Membership:
public class Membership {
    // The name of the member.
    private String name;
    // The month in which the membership was taken out.
    public int month;
    // The year in which the membership was taken out.
    private int year;

    /**
     * Constructor for objects of class Membership.
     * @param name The name of the member.
     * @param month The month in which they joined. (1 ... 12)
     * @param year The year in which they joined.
     */
    public Membership(String name, int month, int year) throws IllegalArgumentException {
        Membership member = new Membership("Josh", 5, 2011);
        if(month < 1 || month > 12) {
            throw new IllegalArgumentException(
                "Month " + month + " out of range. Must be in the range 1 ... 12");
        }
        this.name = name;
        this.month = month;
        this.year = year;
    }

    /**
     * @return The member's name.
     */
    public String getName() {
        return name;
    }

    /**
     * @return The month in which the member joined.
     * A value in the range 1 ... 12
     */
    public int getMonth() {
        return month;
    }

    /**
     * @return The year in which the member joined.
     */
    public int getYear() {
        return year;
    }

    /**
     * @return A string representation of this membership.
     */
    public String toString() {
        return "Name: " + name + " joined in month " + month + " of " + year;
    }
}
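For reference, a sketch of how joinedMonth could use the Membership instances held in the members list, so that getMonth() is called on an object rather than on the class:

// Hypothetical version: call getMonth() on each instance in the list.
public int joinedMonth(int month) {
    int count = 0;
    for (Membership member : members) {
        if (member.getMonth() == month) {
            count++;
        }
    }
    return count;
}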
Hello, I'm working on a File manager-like structure with NGUI, so far so good.
The way I got my "Folder" set up, is that a folder consists of "Parts" which consists of a Background, Icon and a Label, "Contents" which carry the folder's content, a "FolderContent" could either be a "Folder" or "FILE".
Whenever I "Open" a folder, (in short) I hide the previous folder contents and show the new ones'.
The problem is, when I pick up a folder and move it with the mouse (drag and drop, or just pick and hold), I auto-organize the current folder's contents (reposition them). When a folder gets repositioned (shifted n places to the left/up), its contents also shift with it, so when I open the folder later, I will see that the contents are not positioned correctly.
I can get around that by using one of my methods "OrganizeContents" which does exactly what you think, but that wouldn't be necessary to do each time I open a folder, it's redundant.
Have a look:
Any ideas how to move a gameObject without affecting its children?
Another work-around would be to move the Folder's "Parts" when auto-positioning it, but that would bring inconsistencies in other places.
If I can't do what I'm asking for, any other work-arounds beside the two I mentioned?
Thanks.
Actually, each folder has a List of "FolderContent"s, if that's what you mean. It just makes sense to put the contents of a folder, inside the folder, doesn't it? - It has nothing to do with NGUI, just common sense.
Of course that folder has some content, but this does not mean you have to create it using Unity objects hierarchy. If you gain nothing from it, and additionally it causes problems, then why even bother?
In my approach, you still have proper hierarchy. I'll try to describe it in more detailed way.
Let's assume every folder/file is represented by a Plane object for which you have created a prefab. You also have to create a FileSystemObject script and attach it to this prefab. Basically, FileSystemObject should look like (C#):
public class FileSystemObject
{
public FileSystemObject[] children;
public FileSystemObject[] parent;
public ObjectType type; // this can be an enum {File, Directory}
}
When browsing your data you can dynamically create a number of Plane instances, get their FileSystemObject components and sets their properties. And you have the hierarchy you wished for, but don't have the problems related to position.
Instead of dynamic creation of Planes, it would be much better to reuse some instances created at the start of your application.
As to navigation: when you click on my Plane (or in your case object visually representing the folder), you can retrieve its script and properly instantiate children:
var fso = GetComponent<FileSystemObject>();
foreach(var child in fso.children)
{
// instantiate children
}
I think it should be quite easy to convert your current solution to this approach.
why the array of parents? a folder content has only one parent. storing a ref to the parent was my previous setup, but then I figured I don't need it. Is there a reason for the parents array?
OK thanks I got your point, basically keep everything the same, but don't mess with the game objects' hierarchy, no parenting. gonna have to go now, I'll do it. could you move your comments to an answer? I will accept it once everything's ok.
Answer by ArkaneX · Sep 01, 2013 at 04:15 PM
What is the reason behind storing subfolders and files as children of the folder? I know nothing about NGUI, but if I were to code it in pure Unity, I would probably store subfolders and files as an arrays of GameObjects[] in parent folder script, not as standard GameObject children. If required, I would add another property for keeping reference to parent (null for top level folder):
public GameObject[] folders;
public GameObject[] files;
public GameObject parentFolder;
This way, repositioning parent would not affect children position. You can of course change GameObject in above snippet with proper objects.
EDIT: converted to answer after exchanging a few comments with OP (under question).
That did it just nice! A lot of code has been nuked on the way :) Easy conversion as well like you said.
However I think it's more elegant to use OOP here and not simply an enum to represent a FileSystemObject. There's a lot of benefits to that, for example, a folder doesn't need to have an array of files and folders, just an array (preferably a list) of FileSystemObjects, which could be either a File or a Folder.
Again, no need for the parentFolder, in case you're putting it so that each folder knows its parent when you "GoBack", you could just use a stack, each time you open a folder you just push, when you go back you pop, the parent would be left out as extra luggage in the folder class. Thanks a lot for your help :)
You're right regarding OOP - my solution was just a quick 'how to' example. Glad it was helpful :)
Answer by DESTRUKTORR · Sep 01, 2013 at 04:17 PM
The best method for moving the parent object without moving the child objects would be to save the child object's absolute position (transform.position) prior to moving the parent, then, after calling the move on the parent, set the child's absolute position back to what it was before the move.
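A rough Unity C# sketch of that idea (the class and method names are illustrative only):

using System.Collections.Generic;
using UnityEngine;

public class MoveWithoutChildren : MonoBehaviour
{
    // Move this transform while keeping every direct child at its world position.
    public void MoveKeepingChildren(Vector3 newPosition)
    {
        var childPositions = new List<Vector3>();
        foreach (Transform child in transform)
        {
            childPositions.Add(child.position); // save absolute (world) positions
        }

        transform.position = newPosition; // move the parent

        int i = 0;
        foreach (Transform child in transform)
        {
            child.position = childPositions[i++]; // restore absolute positions
        }
    }
}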
This does sound quite like repositioning them once again, but performance-wise it's better: two O(n) loops with an assignment in each, which saves some math/calculations. I'll benchmark it against the auto-organizing method; if there's a huge difference, I'll choose your solution :)
I'm afraid it's not gonna work in my situation. Thinking about it, it is a good solution but not for me because what would happen if there ware more folders to the right/under 'Vids'? (in the previous example), using your method, I would have to save the contents positions of all those folders and then re-assign them again... :/
Why do it in the loop? The user will only notice that their position is changed after the next frame is loaded. Just ensure that it's moved back to its original position after everything else has moved XD. One way or another, it's only a reassignment of a Vector3 variable, which is little more than 3 floats (12 bytes, which, in the grand scheme of things, is really not much, considering we are beginning to see megabytes, or millions of bytes, as being laughably small amounts of data), and it's an O(n) efficiency, as the only level of variation is the first layer of children.
However, like ArkaneX said, it might just be easier if you simply didn't have the objects parented to one another, lol. The primary purpose, by and large, of making a game object a child of another is to ensure that they move, together, and sometimes to allow scripts to more easily access components of their children/parents (or, on occasion, to organize the hierarchy a bit better, but that's really not all that usual, nor should it take precedence over efficiency, IMO).
Thanks for clearing out when to make objects childs of other objects. I obviously mis-used parenting.
Author: Ray Johnson <[email protected]> Author: Donal K. Fellows <[email protected]> Author: Mark Janssen <[email protected]> State: Draft Type: Informative Vote: Pending Created: 14-Jul-2009 Post-History:
Abstract
This document describes a set of conventions that it is suggested people use when writing Tcl code. It is substantially based on the Tcl/Tk Engineering Manual [247].
NOTE
A transcription of the original version (dated August 22, 1997) of this file into PDF is available online at - Donal K. Fellows.
Introduction
This document describes conventions to follow when writing Tcl code.
Executable files:
The most common method for creating executable applications on UNIX platforms is the infamous #! mechanism built into most shells. Unfortunately, the most common approach of just giving a path to wish is not recommended. Don't do:
#! /usr/local/tclsh8.0 -f
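A commonly recommended alternative (a sketch; it assumes tclsh can be found on the user's PATH) is to start the file as a shell script that re-executes itself under tclsh, so the interpreter's location is not hard-coded:

#!/bin/sh
# the next line restarts using tclsh \
exec tclsh "$0" "$@"

The trailing backslash makes the comment continue onto the exec line for Tcl, so Tcl skips it, while the shell executes it.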
The Macintosh platform doesn't really have a notion of an executable Tcl file. One of the reasons for this is that, unlike UNIX or Windows, you can only run one instance of an application at a time. So instead of calling wish with a specific script to load, we must create a copy of the wish application that is tied to our script.
The easiest way to do this is to use the application Drag&Drop Tclets or the SpecTcl GUI builder which can do this work for you. You can also do this by hand by putting the start-up script into a TEXT resource and name it tclshrc - which ensures it gets sourced on start-up. This can be done with ResEdit (a tool provided by Apple) or other tools that manipulate resources. Additional scripts can also be placed in TEXT resource to make the application completely contained.
Packages and namespaces
T.
Package names
Each package must have a short, unique name. It is also suggested (but not required) that you register your name on the NIST Identifier Collaboration Service (NICS). It is located at:
Version numbers
Each package has a version number (e.g., 1.0).
Package namespaces.
Structure
There.
How to organize a code file
Each.
The file header
The first part of a code file is referred to as the header. It contains overall information that is relevant throughout the file. It consists of everything but the definitions of the file's procedures. The header typically has four parts, as shown below:
            / #
            /
Package     | package require specTable
Definition  | package provide specMenu 1.0
            | namespace eval specMenu {
            |     namespace export addMenu
            |     array set menuData {one two three}
            |     ...
            \ }
Abstract: the header opens with a comment block giving an overview of the file, followed by the package definition (both are shown in the example). Follow the syntax of the example as closely as possible. The file fileHead.tcl [not available] provides a template for a header page.
Multi-file packages
Some.
Procedure headers
After the header you will have one or more procedures. Each procedure will begin with a procedure header that gives overall documentation for the procedure, followed by the declaration and body for the procedure. See below for an example.
Arguments: this part of the header describes each of the procedure's arguments; the behavior of an unspecified argument should be mentioned. Comments for all of the arguments should line up on the same tab stop.
Results: The last part of the header describes the value returned by the procedure. The type and the intended use of the result should be described. This section should also mention any side effects that are worth noting.
The file tclProcHead [not available] contains a template for a procedure header which should be used as a base for all new Tcl commands. Follow the syntax of the above example exactly (same indentation, double-dash after the procedure name, etc.).
Procedure declarations
The procedure declaration should also follow exactly the syntax in the example above.
Parameter order
Procedure parameters should be ordered according to the following rules:
Parameters should normally appear in the order in, in/out, out, except where overridden by the rules below.
If an argument is actually a sub-command for the command than it should be the first argument of the command. For example:
proc graph::tree {subCmd args} {
    switch $subCmd {
        add {
            eval add_node $args
        }
        draw {...
If there is a group of procedures, all of which operate on an argument of a particular type, such as a file path or widget path, the argument should be the first argument to each of the procedures (or after the sub-command argument).
Procedure bodies
The body of a procedure follows the declaration. See Section 6 for the coding conventions that govern procedure bodies. The curly braces enclosing the body should be on different lines, as shown in the examples above, even if the body of the procedure is empty.
Naming conventions
Choosing good names is one of the most important aspects of writing readable code.
General considerations:
Are you consistent? Use the same name to refer to the same thing everywhere. For example, within the code for handling standard bindings in Tk widgets, a standard name w is always used to refer to the window associated with the current event..
Is the name so generic that it doesn't convey any information? The variable str from the previous paragraph is an example of this; changing its name to src makes the name less generic and hence conveys more information.
Basic syntax rules
Below are some specific rules governing the syntax of names. Please follow the rules exactly, since they make it possible to determine certain properties of a variable just from its name.
In multi-word names, the first letter of each trailing word is capitalized. Do not use underscores or dashes as separators between the words of a name.
set numWindows 0
Variables that hold Tcl code that will be evaled should have names ending in Script.
proc log::eval {logScript} {
    if {$Log::logOn} {
        set result [catch {eval $logScript} msg]
        ...
Variables that hold a partial Tcl command that must have additional arguments appended before being a valid script should have names ending in Cmd.
foreach scrollCmd $listScrollCmds {
    eval $scrollCmd $args
}
Low-level coding conventions
This section describes several low-level syntactic rules for writing Tcl code. These rules help to ensure that all of the Tcl code looks the same, and they prohibit a few confusing coding constructs.
Indents are 4 spaces
Each new level of indentation adds four spaces.
Code comments occupy full lines
Comments that document code should occupy full lines, rather than being tacked onto the ends of lines containing code. The reason for this is that side-by-side comments are hard to see, particularly if neighboring statements are long enough to overlap the side-by-side comments. Also it is easy to place comments in a place that could cause errors. Comments must have exactly the structure shown in the example below, with a blank line above and below the comment. The leading blank line can be omitted if the comment is at the beginning of a block, as is the case in the second comment in the example below.

# Note that there is a blank line below it to separate it
# more strongly from the code.

if {$tcl_platform(platform) == "macintosh"} {
    return
}

foreach dir $dirList {
    # If the source succeeds then we are done.
    # Note there is no blank line above the comment;
    # the indentation change is visible enough.
    if {![catch {source [file join $dir file.tcl]}]} {
        break
    }
}
Continuation lines are indented 8 spaces
You should indent each continuation line of a long command by eight additional spaces, so that it cannot be confused with the start of a new statement.
Only one command per line
You should only have one Tcl command per line on the page. Do not use the semi-colon character to place multiple commands on the same line. This makes the code easier to read and helps with debugging.
Curly braces: { goes at the end of a line
Openalways
but.
Parenthesize expressions
Use parentheses around each subexpression in an expression to make it absolutely clear what is the evaluation order of the expression (a reader of your code should not need to remember Tcl's precedence rules). For example, don't type
if {$x > 22 && $y <= 47} ...
Instead, type this:
if {($x > 22) && ($y <= 47)} ...
Always use the return statement
You.
Switch statements
If statements
Never { ... }
Documenting code.
Document things with wide impact
The.
Don't just repeat what's in the code
The most common mistake I see in documentation (besides it not being there at all) is that it repeats what is already obvious from the code, such as this trivial (but exasperatingly common) example:
# Increment i.
incr i
Documentation.
Document each thing in exactly one place
Systems.
Write clean code
The.
Document as you go.
Document tricky situations
If behavior, but this isn't always possible.
Testing
One.
Basics
Tests
To.
Organizing tests
Organize.
Coverage
When.
Fixing bugs
Whenever.
Tricky features
I.
Test independence
Trytest.
Miscellaneous
Porting issues
Writingenv.
Changes files
Each package should contain a file named changes that keeps a log of all significant changes made to the package. The changes file provides a way for users to find out what's new in each new release, what bugs have been fixed, and what compatibility problems might be introduced. A typical entry might record, for example, that support for the "fill" justify mode was removed from the TK_CONFIG_JUSTIFY configuration option, noting that none of the built-in widgets ever supported this mode anyway.
(The Tcl and Tk core additionally uses a ChangeLog file that has a much higher detail within it. This has the advantage of having more tooling support, but tends to be so verbose that the shorter summaries in the changes file are still written up by the core maintainers before each release.)
The original version of this document is copyright (C) 1997 Sun Microsystems, Inc. Revisions to reflect current community best-practice are public domain. | https://core.tcl-lang.org/tips/doc/trunk/tip/352.md | CC-MAIN-2019-43 | en | refinedweb |
Spring Data JPA Auditing: Automatically Saving the Good Stuff
Auditing provides valuable information, but it can be a nightmare to implement. Fortunately, through Spring Data JPA, you can persist the columns you need.
In any business application, auditing simply means tracking and logging every change we do in our persisted records, which simply means tracking every insert, update, and delete operation and storing it.
Auditing helps us in maintaining history records, which can later help us in tracking user activities. If implemented properly, auditing can also provide us with functionality similar to version control systems.
Here in this article, I will discuss how we can configure JPA to automatically persist the CreatedBy, CreatedDate, LastModifiedBy, and LastModifiedDate columns for any entity.
I will walk you through the necessary steps and code that you will need to include in your project to automatically update these properties. We will use Spring Boot, Spring Data JPA, and MySQL to demonstrate this, so we will need to add the corresponding dependencies to the project. As a working example, the file table stores the name and content of the file, and say we also want to store who created and modified any file at any given time. So, the goal is to keep track of when the file was created, by whom, and when it was last modified, and by whom.
We will need to add the name, content, createdBy, createdDate, lastModifiedBy, and lastModifiedDate properties to our File entity and, to make it more appropriate, we can move the createdBy, createdDate, lastModifiedBy, and lastModifiedDate properties to a base class, Auditable, and annotate this base class with @MappedSuperclass. Later, we can use the Auditable class in other audited entities.
You will also need to write getters, setters, constructors, toString, and equals along with these fields. However, you should take a look at Project Lombok: The Boilerplate Code Extractor, if you want to auto-generate these things.
Both classes will look like this:
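A minimal sketch of the Auditable base class, assuming only the fields described above (getters and setters omitted):

@MappedSuperclass
@EntityListeners(AuditingEntityListener.class)
public abstract class Auditable<U> {

    @CreatedBy
    protected U createdBy;

    @CreatedDate
    @Temporal(TemporalType.TIMESTAMP)
    protected Date createdDate;

    @LastModifiedBy
    protected U lastModifiedBy;

    @LastModifiedDate
    @Temporal(TemporalType.TIMESTAMP)
    protected Date lastModifiedDate;
}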
@Entity
public class File extends Auditable<String> {

    @Id
    @GeneratedValue
    private int id;

    private String name;

    private String content;
}
As you can see above, I have used the @CreatedBy, @CreatedDate, @LastModifiedBy, and @LastModifiedDate annotation on their respective fields.
The Spring Data JPA approach abstracts working with JPA callbacks and provides us these fancy annotations to automatically save and update auditing entities.
Using the AuditingEntityListener Class With @EntityListeners
Spring Data JPA provides a JPA entity listener class, AuditingEntityListener, which contains the callback methods (annotated with the @PrePersist and @PreUpdate annotations) used to set these properties when the entity is persisted or updated. But how will JPA recognize what to store in them?
To tell JPA about currently logged-in users, we will need to provide an implementation of AuditorAware and override the getCurrentAuditor() method. And inside getCurrentAuditor(), we will need to fetch a currently logged-in user.
As of now, I have provided a hard-coded user, but if you are using Spring Security, then use it to find the currently logged-in user.
public class AuditorAwareImpl implements AuditorAware<String> {

    @Override
    public String getCurrentAuditor() {
        return "Naresh";
    }
}
Enable JPA Auditing by Using @EnableJpaAuditing
We will need to create a bean of type AuditorAware and will also need to enable JPA auditing by specifying @EnableJpaAuditing on one of our configuration classes. After that, whenever we save or update a file object, the CreatedBy, CreatedDate, LastModifiedBy, and LastModifiedDate properties will automatically get saved.
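A configuration class along these lines (the class and bean names are illustrative) ties the two together:

@Configuration
@EnableJpaAuditing(auditorAwareRef = "auditorAware")
public class JpaAuditingConfiguration {

    @Bean
    public AuditorAware<String> auditorAware() {
        return new AuditorAwareImpl();
    }
}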
In the next article, JPA Auditing: Persisting Audit Logs Automatically using EntityListeners, I will discuss how we can use JPA EntityListeners to create audit logs and generate history records for every insert, update, and delete operation.
You can find the complete code on this GitHub repository, and please feel free to give feedback.
#include <engineplugin.h>
Store the information about the plugin in a structure so that we can freely add features without invalidating existing plugins.
Definition at line 120 of file engineplugin.h.
List of all engine's DM flags or NULL if none.
Definition at line 125 of file engineplugin.h.
Controls behavior of "Create Game" dialog.
If true then "Create Game" dialog will build flags pages out of the allDMFlags list. If false then plugin either doesn't want to have the flags pages created or will provide the pages on its own.
Default: true.
Definition at line 170 of file engineplugin.h.
Default port on which servers for given engine are hosted.
Definition at line 134 of file engineplugin.h.
Factory of executable retriever objects.
By default this is a simple instance of GameExeFactory. If custom behavior is needed, plugins shouldn't overwrite the class or the contents of the pointer, but instead public setter methods should be used to set appropriate strategies. Refer to GameExeFactory doc for more details.
Definition at line 181 of file engineplugin.h.
All available game modes for the engine or NULL if none.
Definition at line 137 of file engineplugin.h.
Returns a list of modifiers.
Modifiers are used and displayed in Create Game dialog. If an empty list (or NULL) is returned, Modifier combo will be disabled.
Definition at line 145 of file engineplugin.h.
icon of the engine
Definition at line 148 of file engineplugin.h. | http://doomseeker.drdteam.org/docs/doomseeker_1.0/classEnginePlugin_1_1Data.php | CC-MAIN-2019-43 | en | refinedweb |
Joe Schaefer wrote:
> Geoffrey Young <[email protected]> writes:
>
> [...]
>
>
>>I really don't see how it can be any other way - I absolutely,
>>positively do not want to deal with questions about how prior beta
>>versions mix with later beta versions and, eventually, the official
>>2.0.
>
>
> So then, the proposed branch is a regression over trunk?
if you mean incompatible, yes. and that was obviously going to be the case
if things are renamed.
> Hmm, that'd surely get a veto vote from me (assuming we do
> allow vetoes).
I really don't see how it can be any other way. the point of this entire
exercise was to come to a decision so we could officially bless 2.0 and move
on. not to be a pain about it, but the API was officially not the official
API until 2.0 is released - that's why it's taking so long. as such, I
don't think we have any obligation to support prior 1.99 versions. what
we're doing here is figuring out what the official 2.0 will look like.
taken another way, we didn't worry about prior 1.99 versions each time we
added pools as an argument, took pools away, made things methods or
functions - it was part of the evolution of the software. granted this is
bigger, but it's the same thing. couple this with users complaining because
they installed 1.99_20 last month and are upgrading and things aren't what
they are supposed to be and you have a support nightmare. at least this is
a clean separation starting now.
>
> Since we're apparaently new to all this formal voting stuff,
> how about we first have a vote to adopt this tried-and-true document:
>
>
I'd really hate to see mod_perl start to operate that way.
> I think we MUST have
> a process which allows everyone to express their votes.
I wasn't suggesting otherwise. but this is entirely different than vetoing
the addition of some api. we have the current namespace and the proposed
namespace. perhaps there is a third option, but I doubt it since nobody was
able to figure one out. however, a decision needs to be made. I, at least,
will not play around with this branch forever, or even for another month -
we need to figure out the way mod_perl will look from now on, do it, and
move on.
--Geoff
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected] | http://mail-archives.apache.org/mod_mbox/perl-dev/200503.mbox/%[email protected]%3E | CC-MAIN-2019-43 | en | refinedweb |
I’m working on a project at the moment where I need to be able to poll an API periodically and I’m building the application using React. I hadn’t had a chance to play with React Hooks yet so I took this as an opportunity to learn a bit about them and see how to solve something that I would normally have done with class-based components and state, but do it with Hooks.
When I was getting started I kept hitting problems as either the Hook wasn’t updating state, or it was being overly aggressive in setting up timers, to the point where I’d have dozens running at the same time.
After doing some research I came across a post by Dan Abramov on how to implement a Hook to work with setInterval. Dan does a great job of explaining the approach that needs to be taken and the reasons for particular approaches, so go ahead and read it before continuing on in my post as I won't do it justice.
Initially, I started using this Hook from Dan as it did what I needed to do. Unfortunately, I found that the API I was hitting had an inconsistency in response time, which resulted in an explosion of concurrent requests, and I was thrashing the server, not a good idea! But this was to be expected using setInterval; it doesn't wait until the last response is completed before starting another interval timer. Instead I should be using setTimeout in a recursive way, like so:
const callback = () => {
    console.log("I was called!");
    setTimeout(callback, 1000);
};
callback();
In this example the console is written to approximately once every second, but if for some reason it took longer than basically instantly to write to the console (say, you had a breakpoint) a new timer isn’t started, meaning there’ll only ever be one pending invocation.
This is a much better way to do polling than using setInterval.
Implementing Recursive setTimeout with React Hooks
With React I've created a custom hook like Dan's useInterval:
import React, { useEffect, useRef } from "react";

function useRecursiveTimeout<T>(
  callback: () => Promise<T> | (() => void),
  delay: number | null
) {
  const savedCallback = useRef(callback);

  // Remember the latest callback.
  useEffect(() => {
    savedCallback.current = callback;
  }, [callback]);

  // Set up the timeout loop.
  useEffect(() => {
    let id: NodeJS.Timeout;
    function tick() {
      const ret = savedCallback.current();
      if (ret instanceof Promise) {
        ret.then(() => {
          if (delay !== null) {
            id = setTimeout(tick, delay);
          }
        });
      } else {
        if (delay !== null) {
          id = setTimeout(tick, delay);
        }
      }
    }
    if (delay !== null) {
      id = setTimeout(tick, delay);
      return () => id && clearTimeout(id);
    }
  }, [delay]);
}

export default useRecursiveTimeout;
The way this works is that the tick function will invoke the callback provided (which is the function to recursively call) and then schedule it with setTimeout. Once the callback completes the return value is checked to see if it is a Promise, and if it is, wait for the Promise to complete before scheduling the next iteration, otherwise it'll schedule it. This means that it can be used in both a synchronous and asynchronous manner:
useRecursiveTimeout(() => {
    console.log("I was called recursively, and synchronously");
}, 1000);

useRecursiveTimeout(async () => {
    await fetch("");
    console.log("Fetch called!");
}, 1000);
Here’s a demo:
Conclusion
Hooks are pretty cool, but it can be a bit tricky to integrate them with some APIs in JavaScript, such as working with timers. Hopefully this example with setTimeout is useful for you; feel free to copy the code or put it on npm yourself.
Thanks to all.
The image is dynamically created by user action, not created as an image file.
I implemented an action for the src and it is working fine.
Thanks for the help.
-Yoga
-----Original Message-----
From: Craig R. McClanahan [mailto:[email protected]]
Sent: Tuesday, October 28, 2003 7:47 AM
To: Struts Users Mailing List
Subject: Re: specifying image source as jpg stream
Max Cooper wrote:
>You may want to write a separate servlet to serve the image data. That
>allows you to implement getLastModified() and allow proper browser-caching
>support, which can significantly increase the speed of your pages if the
>user is likely to view the images more than once. We did this with an Action
>first and since we had caching turned off, it reloaded the images every
>time. Switching to a separate servlet where we implemented getLastModified()
>was perceptably faster.
>
>Perhaps Struts should allow Action-implementers to implement some kind of
>getLastModified() method for this reason. Or at least to turn caching on and
>off at the Action (or action-mapping) level. getLastModified() is really
>useful if you have the image data (or document data, etc.) stored in a db.
>
>
>
Controlling this stuff at the per-Action level is a nice idea. If
you're using an Action to create dynamic output already (such as when
you directly stream the binary output and then return null), it's quite
easy to do today -- your Action will be able to see the
"If-Modified-Since" header that the browser sends, and then can decide
to return a status 304 (NOT MODIFIED) if your current database stuff is
not more recent.
Something along the lines of this in your Action.execute() method should
do the trick:
// When was our database data last modified?
long dataModifiedDate = ... timestamp when database last modified ...
// Have we sent to this user previously?
long modifiedSince = request.getDateHeader("If-Modified-Since");
if (modifiedSince > -1) { // i.e. it was actually specified
if (dataModifiedDate <= modifiedSince) {
response.sendError(HttpServletResponse.SC_NOT_MODIFIED);
return (null);
}
}
// Set the timestamp so the browser can send back If-Modified-Since
response.setDateHeader("Date", dataModifiedDate);
// Now write the actual content type and data
response.setContentType("image/jpg");
ServletOutputStream stream = response.getOutputStream();
... write out the bytes ...
// Return null to tell Struts the response is complete
return (null);
>-Max
>
>
> | http://mail-archives.us.apache.org/mod_mbox/struts-user/200311.mbox/%3C78BE7700906EE7438678F9C9152DDEA1017A7D19@bkorex01.corp.mphasis.com%3E | CC-MAIN-2019-43 | en | refinedweb |
#include <gtk/gtk.h>

GtkHScrollbar;

GtkWidget* gtk_hscrollbar_new (GtkAdjustment *adjustment);
GObject
 +----GInitiallyUnowned
       +----GtkObject
             +----GtkWidget
                   +----GtkRange
                         +----GtkScrollbar
                               +----GtkHScrollbar
GtkHScrollbar implements GtkBuildable and AtkImplementorIface.
The GtkHScrollbar widget is a widget arranged horizontally creating a scrollbar. See GtkScrollbar for details on scrollbars. GtkAdjustment pointers may be added to handle the adjustment of the scrollbar or it may be left NULL in which case one will be created for you. See GtkAdjustment for details.
typedef struct _GtkHScrollbar GtkHScrollbar;
The GtkHScrollbar struct contains private data and should be accessed using the functions below.
GtkWidget* gtk_hscrollbar_new (GtkAdjustment *adjustment);
Creates a new horizontal scrollbar.
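A minimal sketch of creating one inside a top-level window (the adjustment values below are arbitrary):

#include <gtk/gtk.h>

int main (int argc, char *argv[])
{
    gtk_init (&argc, &argv);

    GtkWidget *window = gtk_window_new (GTK_WINDOW_TOPLEVEL);

    /* value, lower, upper, step_increment, page_increment, page_size */
    GtkAdjustment *adjustment = GTK_ADJUSTMENT (
        gtk_adjustment_new (0.0, 0.0, 100.0, 1.0, 10.0, 10.0));

    GtkWidget *scrollbar = gtk_hscrollbar_new (adjustment);
    gtk_container_add (GTK_CONTAINER (window), scrollbar);

    gtk_widget_show_all (window);
    gtk_main ();
    return 0;
}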
See also: GtkScrollbar, GtkScrolledWindow
Although it's more usual to use the managed method Marshal.Copy, you can in fact use CopyMemory, which is an alias for the API function RtlMoveMemory, from C#.
You can even use it to copy one managed array to another as the following simple example shows:
using System;
using System.Runtime.InteropServices;

class Test
{
    [DllImport("kernel32.dll", EntryPoint="RtlMoveMemory")]
    static extern void CopyMemory(double[] Destination, double[] Source, uint Length);

    static void Main()
    {
        double[] Source = new double[5]{1.0, 2.0, 3.0, 4.0, 5.0};
        double[] Destination = new double[5]; // all 0.0 initially
        CopyMemory(Destination, Source, 40);
        foreach(double d in Destination)
        {
            Console.WriteLine(d);
        }
        Console.ReadLine();
    }
}
I am currently using the CopyMemory as shown above, however, the variable which is passed to the function I have now deciphered is to the first variable in the array from the 3rd party program.
In VB, by using CopyMemory (Options(0), narray, 40) it takes 40 bytes from where [narray] starts in memory and copies that 40 bytes to the beginning of the [Options] array, populating the array with the values from the 3rd party program.
I have attempted this in C# and only the first entry of the [Options] array gets populated. I am passing the [narray] variable as "ref" but when I get the IntPtr of the variable it seems to be to the variable in the function not from the 3rd party program. It seems to be passing as "ref" as I can set the value and the 3rd party program knows that it has been changed.
I am using:
public static int UserFunction (ref double narray)
IntPtr myarrpointer;
IntPtr arrpasspointer;
double[] myarray = new double[5]
GCHandle myGC = GCHandle.Alloc(myarray, GCHandleType.Pinned);
GCHandle inGC = GCHandle.Alloc(narray, GCHandleType.Pinned);
myarrpointer = myGC.AddrOfPinnedObject();
arrpasspointer = inGC.AddrOfPinnedObject();
CopyMemory (myarrpointer, arrpasspointer, 40);
Can you just explain how you're getting 'narray' in the first place?
It sounds to me as though it should be an IntPtr to an unmanaged double array of five elements but you seem to have it as a managed scalar double.
If it's a managed scalar double passed by reference, then CopyMemory will copy only one element to the array, namely the double contained in narray itself.
Thanks for the post Alan,
narray is passed to the routine from a 3rd party program. I had the same thought as you that it would be the InPtr to the variable but narray does actually contain the 1st value of the array which is passed, and when the value is changed in C# it changes in the 3rd party program.
In VB it worked by this value being passed by reference and then CopyMemory just taking the 40 bytes from the memory location and passing it into the array.
It must still work in the same way as the 3rd party program which calls the function is the same - does the "ref" command in C# work the same as "ByRef" in VB or is there another one? In VB, I could pass the variables by name to CopyMemory is there a way of doing that in C#.
'ref' in C# does work the same way as 'ByRef' in VB but the difference is that you're going through the p/invoke layer. So the double value that comes back is copied to managed memory and is isolated from the rest of the array which is still on the unmanaged heap.
If you change the signature of the 3rd party function from 'ref double narray' to 'IntPtr narray' then it should still work as we're dealing with four byte pointers in both cases.
All you need to do then is to use the IntPtr to copy the unmanaged array to the managed array. Rather than CopyMemory, I'd use this overload of Marshal.Copy:
So the code would simply be:
double[] myarray = new double[5];
Marshal.Copy(narray, myarray, 0, 5);
Thanks Alan, that worked great - you are most knowledgable.
Could you possibly offer me your wisdom on the problem below?
I need to pass a string from 3rd party program and maniuplate it and send it back. It is defined as UserInstruction(string myString); which C# can use but any changes are not echoed in the 3rd party program. I have used the "ref" command but this causes a crash. Any ideas?
The problem there, Darren, is that .NET strings are immutable.
However, luckily, we do have the StringBuilder type which is mutable and can be used to return strings when calling unmanaged code. As it's already a reference type it should just be passed by value:
UserInstruction(StringBuilder myStringBuilder);
If you know what the size of the string is going to be then it's a good idea to initialize the StringBuilder with the appropriate capacity. For example if the string is 100 characters long:
StringBuilder sb = new StringBuilder(100);
and then pass 'sb' to the unmanaged function.
You can, of course, get the string by simply appying the ToString() method to the StringBuilder object.
Setting 'myString' to the changed string won't actually send it back to the 3rd party program because myString is pointing to a location in managed memory rather than unmanaged memory.
You'll need to call the function again to do this or, if this isn't feasible (because of what the function does), then you'll need to pass an IntPtr rather than a string to get the unmanaged memory address which you can then manipulate using Marshal class methods.
In fact, if you pass your own string to start with, you'll need to allocate some memory on the unmanaged heap and then pass an IntPtr to that.
If you can't call the function again, can you let me know what type of string (ansi, unicode or BSTR) the 3rd party program is expecting? If it's designed to be callable from VB, then it's probably the third of these but we'll need to know as the Marshal class has different methods for each one.
As far as the unmanaged function is concerned, the string parameter is just a pointer to where the actual string is stored so I don't think it will be a problem if you declare the function to receive an IntPtr rather than a string.
What I then had in mind was something like this (off top of my head):
// fill this out to maximum number of characters needed
string s = "THIS IS MY NEW STRING";
// convert managed string to unmanaged BSTR
IntPtr ip = Marshal.StringToBSTR(s);
// pass pointer to BSTR to unmanaged function
UserInstruction(ip);
// presumably function now manipulates this string
// recover as managed string
s = Marshal.PtrToStringBSTR(ip);
// manipulate again, switch to lower case let's say
s = s.ToLower();
// copy the characters of s back to unmanaged memory
Marshal.Copy(s.ToCharArray(), 0, ip, s.Length);
// give unmanaged function time to read manipulated string
// and then release pointer
Marshal.FreeBSTR(ip);
Alan, I have tried the method and after tweaking it seems to work in some capacity. I passed the IntPtr into the function and then attempted to use Marshal.PtrToStringBSTR(mynewptr), but this crashed the program, so I used Marshal.PtrToStringUni(mynewptr) and this gave me access to the string. I then changed its value and passed it back to the program using Marshal.Copy(myString.ToCharArray(), 0, mynewptr, myString.Length), which has worked; the 3rd party program can see that it has changed. However, for some reason there seems to be a null character after each letter, i.e. "I have changed" = Hex (49 00 20 00 68 00 ... etc.) - do you know why this is?
Unicode characters require two bytes per character.
As the ASCII characters (codes 32 - 127) only require one byte this means that the other byte will be a null ('\0').
If the 3rd party dll is using unicode characters, this shouldn't be a problem. However, if it's using ASCII (i.e. single byte characters), we could strip out the nulls (by converting the managed characters to bytes) before sending the string back.
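A rough sketch of that idea, reusing the pointer from the earlier posts and assuming the receiving program really does expect single-byte characters (requires using System.Text;):

// Convert the managed string to one-byte-per-character ASCII, including a terminator
byte[] ascii = Encoding.ASCII.GetBytes(s + '\0');
// Copy the ASCII bytes (no interleaved nulls) back to the unmanaged buffer
Marshal.Copy(ascii, 0, ip, ascii.Length);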
Type: Posts; User: pokajy
ok ok I solved.
61440 means 2#0000_0000_0000_0000_1111_0000_0000_0000
10000 means 2#0000_0000_0000_0000_0010_0111_0001_0000
if
expression = 10000&61440;
then.. expression =...
There is just one part left in the code that I did not understand. Code1 is also working like this: I changed the source, and the code is still working.
#include "apdefap.h"
long _main(char*...
for example, at code1,
var1 = (unsigned long)GetTagDoubleStateQC (lpszObjectName,&status[0],&quality[0]);
var1 is the source; it changes automatically from 0 to 32768. When var1 = 10000, index...
ok you're right 2kaud. people may think it's not safe. here are the codes.
Code1
#include "apdefap.h"
long _main(char* lpszPictureName, char* lpszObjectName, char* lpszPropertyName)
{
...
Hello guys. Recently I was away from the forum. I'm not a C programmer and I rarely work with code.
I need some help about one subject. if you can help, I'll be very glad.
There is one code,...
This C# project is connecting to a programmable logic controller over Ethernet. Information is coming from the PLC.
The information that I see in the textbox is generally the pressure of an oxygen tank,...
Hi, I'm a beginner.
I find the main problem is storing data with a 500 ms update, and drawing trends with it.
A 500 ms update over 1 hour means 2x60x60 = 7200 values.
Yes
it's floating point. it's...
Hi
I'm in trouble with creating trends.
I'll read one TextBox's value and at the end of 1 hour, I'll draw a trend into a word file, I'll save and print it.
(in trend info update must be about...
yes I saw several reporting examples. Generally people are using csv. It's only me who is abnormal.
Well at the beginning I said that I was a beginner. So don't be angry with strange questions.
...
I find excel very cumbersome.
Same table in Txt or rtf 10kb, in excel maybe 100kb.
If it was creating 2-3 files. it's ok. 90kb is nothing, but It'll be kinda reporting and it'll report many...
Hi
I'm a beginner and if I'm asking strange things sorry about it.
I read some information from an SQL database and display it in my project.
I used SqlDataAdapter, DataSet, and DataGridView.
... | http://forums.codeguru.com/search.php?s=c7f14d470e322216cb2d582b028a4eab&searchid=4871661 | CC-MAIN-2014-35 | en | refinedweb |
NAME
jail, jail_get, jail_set, jail_remove, jail_attach - create and manage system jails
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <sys/param.h>
#include <sys/jail.h>

int jail(struct jail *jail);
int jail_attach(int jid);
int jail_remove(int jid);

#include <sys/uio.h>

int jail_get(struct iovec *iov, u_int niov, int flags);
int jail_set(struct iovec *iov, u_int niov, int flags);

DESCRIPTION
This is equivalent to the jail_set() system call (see below), with the parameters path, host.hostname, name, ip4.addr, and ip6.addr, and with the JAIL_ATTACH flag.
The jail_set() system call creates a new jail, or modifies an existing one, and optionally locks the current process in it. Jail parameters are passed as an array of name-value pairs in the array iov, containing niov elements. Parameter names are a null-terminated string, and values may be strings, integers, or other arbitrary data. Some parameters are boolean, and do not have a value (their length is zero) but are set by the name alone with or without a "no" prefix, e.g. persist or nopersist. Any parameters not set will be given default values, generally based on the current environment.
Jails have a set of core parameters, and modules can add their own jail parameters. The current set of available parameters, and their formats, can be retrieved via the security.jail.param sysctl MIB entry. Notable parameters include those mentioned in the jail() description above, as well as jid and name, which identify the jail being created or modified. See jail(8) for more information on the core jail parameters.
The flags argument consists of one or more of the following flags:
JAIL_CREATE   Create a new jail. If a jid or name parameter exists, it must not refer to an existing jail.
JAIL_UPDATE   Modify an existing jail. One of the jid or name parameters must exist, and must refer to an existing jail. If both JAIL_CREATE and JAIL_UPDATE are set, a jail will be created if it does not yet exist, and modified if it does exist.
JAIL_ATTACH   In addition to creating or modifying the jail, attach the current process to it, as with the jail_attach() system call.
JAIL_DYING    Allow setting a jail that is in the process of being removed.
The jail_get() system call retrieves jail parameters, using the same name-value list as jail_set() in the iov and niov arguments. The jail to read can be specified by either jid or name by including those parameters in the list. If they are included but are not intended to be the search key, they should be cleared (zero and the empty string respectively). The special parameter lastjid can be used to retrieve a list of all jails: it will fetch the jail with the jid above and closest to the passed value. The first jail (usually but not always jid 1) can be found by passing a lastjid of zero.
The flags argument consists of one or more of the following flags:
JAIL_DYING    Allow getting a jail that is in the process of being removed.
The jail_attach() system call attaches the current process to an existing jail, identified by jid.
The jail_remove() system call removes the jail identified by jid. It will kill all processes belonging to the jail, and remove any children of that jail.
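As a rough sketch of the name-value convention (the parameter choices here are arbitrary), each parameter occupies two iovec entries, one for the name and one for the value; a persistent jail named "testjail" could be created like this:

#include <sys/param.h>
#include <sys/jail.h>
#include <sys/uio.h>
#include <string.h>

int
create_test_jail(void)
{
    struct iovec iov[6];

    /* name = "testjail" (lengths include the terminating NUL) */
    iov[0].iov_base = "name";
    iov[0].iov_len = sizeof("name");
    iov[1].iov_base = "testjail";
    iov[1].iov_len = sizeof("testjail");

    /* path = "/" */
    iov[2].iov_base = "path";
    iov[2].iov_len = sizeof("path");
    iov[3].iov_base = "/";
    iov[3].iov_len = sizeof("/");

    /* boolean parameter: name only, zero-length value */
    iov[4].iov_base = "persist";
    iov[4].iov_len = sizeof("persist");
    iov[5].iov_base = NULL;
    iov[5].iov_len = 0;

    /* returns the new jail's JID, or -1 on error */
    return jail_set(iov, 6, JAIL_CREATE);
}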
RETURN VALUES
If successful, jail(), jail_set(), and jail_get() return a non-negative integer, termed the jail identifier (JID). They return -1 on failure, and set errno to indicate the error. The jail_attach() and jail_remove() functions return the value 0 if successful; otherwise -1 is returned and errno is set to indicate the error.
ERRORS
The jail() system call will fail if:
[EPERM] This process is not allowed to create a jail, either because it is not the super-user, or because it would exceed the jail's children.max limit.
[EFAULT] jail points to an address outside the allocated address space of the process.
[EINVAL] The version number of the argument is not correct.
[EAGAIN] No free JID could be found.
The jail_set() system call will fail if:
[EPERM] This process is not allowed to create a jail, either because it is not the super-user, or because it would exceed the jail's children.max limit.
[EPERM] A jail parameter was set to a less restrictive value than the current environment.
[EFAULT] Iov, or one of the addresses contained within it, points to an address outside the allocated address space of the process.
[ENOENT] The jail referred to by a jid or name parameter does not exist, and the JAIL_CREATE flag is not set.
[ENOENT] The jail referred to by a jid is not accessible by the process, because the process is in a different jail.
[EEXIST] The jail referred to by a jid or name parameter exists, and the JAIL_UPDATE flag is not set.
[EINVAL] A supplied parameter is the wrong size.
[EINVAL] A supplied parameter is out of range.
[EINVAL] A supplied string parameter is not null-terminated.
[EINVAL] A supplied parameter name does not match any known parameters.
[EINVAL] One of the JAIL_CREATE or JAIL_UPDATE flags is not set.
[ENAMETOOLONG] A supplied string parameter is longer than allowed.
[EAGAIN] There are no jail IDs left.
The jail_get() system call will fail if:
[EFAULT] Iov, or one of the addresses contained within it, points to an address outside the allocated address space of the process.
[ENOENT] The jail referred to by a jid or name parameter does not exist.
[ENOENT] The jail referred to by a jid is not accessible by the process, because the process is in a different jail.
[ENOENT] The lastjid parameter is greater than the highest current jail ID.
[EINVAL] A supplied parameter is the wrong size.
[EINVAL] A supplied parameter name does not match any known parameters.
The jail_attach() and jail_remove() system calls will fail if:
[EINVAL] The jail specified by jid does not exist.
Further, jail(), jail_set(), and jail_attach() call chroot(2) internally, so they can fail for all the same reasons. Please consult the chroot(2) manual page for details.
SEE ALSO
chdir(2), chroot(2), jail(8)
HISTORY
The jail() system call appeared in FreeBSD 4.0. The jail_attach() system call appeared in FreeBSD 5.1. The jail_set(), jail_get(), and jail_remove() system calls appeared in FreeBSD 8.0.
AUTHORS
The jail feature was written by Poul-Henning Kamp for R&D Associates “” who contributed it to FreeBSD. James Gritton added the extensible jail parameters and hierarchical jails. | http://manpages.ubuntu.com/manpages/maverick/man2/jail.2freebsd.html | CC-MAIN-2014-35 | en | refinedweb |
Development From Below
Inter-Disciplinary Research on Rural Development: The Experience of the Rural Economy Research Unit in Northern Nigeria
by David W. Norman, Rural Economy Research Unit and Agricultural Economics Department, Ahmadu Bello University, Zaria, Nigeria
OLC Paper No. 6, Overseas Liaison Committee, American Council on Education, April 1974

American Council on Education
The American Council on Education is an organization whose membership includes over 90% of all doctorate-granting institutions in the United States, 80% of all bachelors and masters-granting institutions, 38% of all regularly-accredited two-year institutions, 191 educational associations and 57 affiliates. Since its founding in 1918, the Council has been a center for cooperation and coordination of efforts to improve American education at all levels.
The Council investigates educational problems of general interest; stimulates experimental activities by institutions and groups of institutions; remains in constant contact with pending legislation affecting educational matters; acts in a liaison capacity between educational institutions and agencies of the Federal Government; and, through its publications, makes available to educators and the general public, widely used handbooks, informational reports, and volumes of critical analyses of social and educational problems. The Council operates through its permanent staff, commissions and special committees. Its president, Roger Heyns, was formerly Chancellor of the University of California at Berkeley and Professor of Psychology and Education at the University of Michigan. The Council is housed at the National Center for Higher Education at One Dupont Circle in Washington, D.C.

Overseas Liaison Committee
The Overseas Liaison Committee of the American Council on Education is a specialized organization of scholars founded to promote communication between the American academic community and higher education in Africa, the Caribbean, Latin America, Asia, and the Pacific. OLC is a working committee of twenty-one university scholars and administrators selected for their specialized knowledge of higher education, for their willingness to devote time to program design and execution, and to represent the varied structure of American higher education. They are given administrative and program support by the Secretariat in Washington. Funding for OLC programs is provided by grants from the Carnegie Corporation of New York and the Ford Foundation, a contract with the U.S. Agency for International Development, and grants for special projects from public and private agencies such as the International Bank for Reconstruction and Development.

OLC Members 1973/74
COUNCIL
Glenn H. Beck, Vice President for Agriculture, Kansas State University; Roy S. Bryce-Laporte, Research Sociologist and Director, Research Institute for Immigration and Ethnic Studies, Center for the Study of Man, Smithsonian Institution; John A. Carpenter, Professor of Social Foundations and Director of the Center for International Education, University of Southern California; *James Carter, M.D., Director, Maternal and Child Health/Family Planning, Training and Research Center, Meharry Medical College; Harlan Cleveland, President, University of Hawaii; Robert L. Clodius, Professor of Agricultural Economics, University of Wisconsin; *Rafael L. Cortada, Vice President, Hostos Community College of the City University of New York; L. Gray Cowan, Dean, Graduate School of Public Affairs, State University of New York, Albany; Alfredo G. de los Santos, Jr., President, El Paso Community College; Cleveland Dennard, President, Washington Technical Institute; *James Dixon, President, Antioch College; *Carl Keith Eicher, Professor of Agricultural Economics, Michigan State University, CHAIRMAN; John W. Hanson, Professor of International Education and African Studies, Michigan State University; Roger W. Heyns, ex officio, President, American Council on Education; Michael M. Horowitz, Professor of Anthropology, State University of New York, Binghamton; Willard R. Johnson, Associate Professor of Political Science, Massachusetts Institute of Technology; *Arthur Lewis, Chairman, Department of Curriculum and Instruction, University of Florida; Selma J. Mushkin,
Professor of Economics and Director, Public Services Laboratory, Georgetown University; *Inez Smith Reid, Associate Professor of Political Science, Barnard College, and Executive Director, Black Women's Community Development Foundation; Wayne A. Schutjer, Associate Professor of Agricultural Economics and Rural Sociology, Pennsylvania State University (effective September, 1974); Rupert Seals, Dean, School of Agriculture and Home Economics, Florida A and M University; James Turner, Associate Professor and Director, Africana Studies and Research Center, Cornell University.
*Members of the Executive Committee 1973/74

PANEL OF CONSULTANTS
Karl W. Bigelow, Professor Emeritus, Teachers College, Columbia University; Paul Gordon Clark, Professor of Economics, Williams College; Philip H. Coombs, Vice-Chairman, International Council for Educational Development; C. W. de Kiewiet, President Emeritus, University of Rochester; Frederick Harbison, Professor of Economics and Public Affairs and Roger William Straus Professor in Human Relations, Princeton University; Eldon L. Johnson, Vice President, University of Illinois; John S. McNown, Albert P. Learned Professor of Civil Engineering, University of Kansas; Glen L. Taggart, President, Utah State University

Preface
Although he is only 34 years old, Dr. David Norman has already made a major impact on agricultural economics research in Africa, and on a legion of students whom he has "touched". He is a quiet and dedicated scholar who has been working at Ahmadu Bello University over the past nine years. After David Norman completed his Ph.D. in Agricultural Economics at Oregon State University in 1965, he joined Ahmadu Bello University and launched the Rural Economy Research Unit (RERU). RERU has utilized an inter-disciplinary approach to the organization and conduct of village studies. When Dr. Norman and his colleagues laid out the research program for RERU, they noted that policy prescriptions on how to increase the output on small farms in the Northern States of Nigeria were not supported by socio-economic research findings. The results of RERU's village level studies now provide a solid underpinning for policy prescriptions for small farmers in northern Nigeria. Also, RERU's findings have helped redirect the priorities of technical agricultural researchers at the Institute of Agricultural Research at Ahmadu Bello University and have encouraged researchers to supplement their experiment station testing with research at the farm level. Finally, RERU's publications are now standard references on how to organize socio-economic research in rural areas of Africa. Dr. Norman's approach to research from the bottom up is consistent with the theme Development From Below of the Ethiopia field trip/workshop on Rural Development for which this paper was prepared in October, 1973. The participants in the field trip/workshop strongly urged the bilingual publication of Dr. Norman's paper. The OLC is honoured to publish Dr. Norman's paper and to call attention to his numerous publications which are cited in the bibliography of the paper. Carl Keith Eicher, Chairman, Overseas Liaison Committee, American Council on Education

Table of Contents
I. INTRODUCTION; II. BACKGROUND TO THE INSTITUTE OF AGRICULTURAL RESEARCH AND RERU; III. DETERMINING RESEARCH PRIORITIES IN THE INSTITUTE OF AGRICULTURAL RESEARCH; IV. RESEARCH PROGRAM OF RERU; V. FOUR PHASES OF RERU'S RESEARCH PROGRAM; VI.
OVERVIEW OF RERU'S MAJOR FINDINGS: PROBLEMS FACED BY FARMERS IN THE NORTHERN STATES OF NIGERIA; VII. STRATEGIES TO IMPROVE AGRICULTURAL INCOMES; VIII. MEASURES TO INCREASE THE RATE OF ADOPTION OF IMPROVED TECHNOLOGY; APPENDIX A: RESEARCH PROGRAMME OF RERU; BIBLIOGRAPHY

"The hallmarks of an interdisciplinary study are that it seems overpriced, it shows that everything depends on everything else, (and that) nobody really understands what it says!" -P.A. Morrison, Rand Corporation.

I. INTRODUCTION*
The terms interdisciplinary approach and rural development are frequently used in the developing world. The phrase rural development can be, and in fact is, defined in many ways and can include all aspects of rural life. In a meeting in West Africa rural development was defined as the "process whereby a series of quantitative and qualitative changes brought about within a given rural population result in improved living conditions for the population through an increased production capacity" (UNESCO, 1970).
*This paper was originally prepared for the Development From Below Field Trip/Workshop which was held in Ethiopia from October 12-20, 1973. The permission of the Director of IAR to publish this paper is gratefully acknowledged.
Discussion in this paper is limited to agriculture although it is appreciated that its development is influenced to a great extent by other facets of rural development, e.g., infrastructural development (roads, health and educational facilities), and non-agricultural employment opportunities in rural areas (Byerlee and Eicher, 1972). Rural development is a complex process involving the solution of technical, ecological, economic and social (human) problems with limited administrative, financial, and manpower resources. Rural development projects have often been carried out in Africa in a milieu in which knowledge about how to solve problems has been absent or at best limited or ill-conceived (Baldwin, 1957). Implementation of integrated rural development involves many disciplines, while the knowledge required to solve the problems of development also crosses discipline boundaries. These disciplines often have "symbiotic" relationships with each other. Therefore the tendency for each discipline to work in isolation from others is increasingly giving way to a cooperative approach. There is a move away from a multi-disciplinary approach (which implies researchers from more than one discipline who do not necessarily communicate with one another) to one involving an inter-disciplinary emphasis (which implies greater integration of disciplines through joint projects). It is unfortunately probably true to say that inter-disciplinary approaches have generally been more successful in the implementation stage than in the knowledge accumulation (research) stage of rural development. This is possibly due in part to the fact that, unlike the individuals involved in implementation who are faced with the day to day realities of rural development, the research worker, who is often academically orientated in a single discipline, makes great efforts to preserve what he considers the "integrity" or "supreme relevance" of his discipline, which is not "softened" by his contact with the practical realities. There has been an increasing number of pleas for an inter-disciplinary approach to research in the developing world. Lipton (1969, 1970)
Lipton (1969, 1970) 6 has forcefully argued for an inter-disciplinary approach because of the inability of conventional economic theory, based on a profit maximisation goal, to adequately explain the behaviour of traditional farmers in the developing world. Two possible explanations for this are: first, the profit maximisation goal may be conditional on a strategy which has both economic and non-economic connotations, i.e., security which can be interpreted as producing sufficient food for the family on the farm without recourse to the market (Norman, 1967-72); or secondly, the profit maximisation goal may be ignored and the crop not grown for essentially non-economic reasons, e.g., the refusal of some Moslem farmers to grow a profitable crop--tobacco--in parts of northern Nigeria due to religious scruples. Supporting Lipton, Roling (1966) has convincingly argued on theoretical grounds for an inter-disciplinary approach to the village studies on the part of social scientists, particularly economists and sociologists. He also notes that rural sociology is very important in the early stages of development prior to the emergence of "economic man" (Blair, 1971). In general economists are being accepted as having an important role to play in the developing world and more agricultural re- search institutions are including them on their staff. However the role of the rural sociologist or social anthropologist has been more difficult to sell. Roling (1966) implies and De Wilde (1967) states that this has in part been due to the reluctance of many such individuals to focus on relevant research, e.g., on those facets of human behaviour of particular For example even in the international research centres, i.e.,CIMMYT, IITA, IRRI, CIAT, etc., economics is more strongly represented than sociology. relevance to agricultural innovation (UNESCO, 1970) and defining the priorities of the traditional system (Collinson, 1968).1 De Wilde (1967) also mentions the importance of the inter-disciplinary approach not only between different disciplines in the social sciences but also between the social sciences and the technical sciences involved in agriculture. The objective of this paper is: (a) To describe the evolution of the inter-disciplinary research programme of the Rural Economy Research Unit (RERU) of Ahmadu Bello University in the northern part of Nigeria. (b) To examine some of the problems of farmers in the northern states of Nigeria, which have become apparent through the research work undertaken by RERU. (c) To discuss in the light of (b) the types of programmes that could result in improving agricultural incomes under the present administrative and financial constraints in the area. II. BACKGROUND TO THE INSTITUTE OF AGRICULTURAL RESEARCH AND RERU Research work by technical scientists on agricultural problems was initiated by the Department of Agriculture in the northern part of Nigeria in 1924. In 1957 this research became the responsibility of the Research and Specialist Division of the Ministry of Agriculture of the Northern Region of Nigeria. The Institute for Agricultural Research and Special Services (IAR) was established when this division was transferred from the Ministry of Agriculture to the Ahmadu Bello University lSee also Mosher (1964) (ABU) in October, 1962. The Institute is responsible for carrying out agricultural research for the six northern states in cooperation with the Ministry of Natural Resources in each state. 
Administratively the research arm of IAR is divided into a number of departments which are further sub- divided into sections on the basis of discipline. Many members of the departments have split teaching (in the Faculty of Agriculture) and research (in IAR) appointments. Such an arrangement permits the complimentary effects of teaching and research to be exploited while at the same time ensuring that research funds are available to academics to do research relevant to the needs of the country.1 In addition IAR has a distinct extension arm, the Extension and Research Liaison Division (ERLS) which serves as a link between the research staff and the extension workers in the Ministry of Natural Resources in each of the six northern states. In terms of size the IAR now has a senior staff establishment of 220 positions and an annual budget of over N3,000,000.2 In terms of the social scientists in IAR,the first were appointed under the auspices of RERU in 1965, with the initial support coming from a Ford 1Some academics resent the idea of "directed research" as an infringement on academic freedom. However this author believes that the developing world cannot afford the luxury to finance work not relevant to development problems. Whenever possible encouragement should be given to using the intellectual talent and available financial resources to work on priority research problems. It is unfortunate that in some academic circles, such talents are not fully utilized due to lack of finances for supporting research. This constitutes a big advantage of the administrative set up at IAR, ABU. 2That is approximately US $4,500,000. Foundation grant.1 Since then financial support for social science research has increased substantially under the auspices of the Agricultural Economics Department which has continued most of the research work initiated by RERU. At present 10.5 percent of the research senior staff positions in IAR are in the social science area, while social science research accounts for 8.3 percent of IAR's research budget. III. DETERMINING RESEARCH PRIORITIES IN THE INSTITUTE OF AGRICULTURAL RESEARCH One of the biggest problems has been, and still is, how to decide what factors agricultural research should focus on, i.e, what is the most relevant research in terms of encouraging rapid agricultural development. The following steps have been taken by the IAR,to develop a relevant research program and a close working relationship with the government: (a) The membership of the Board of Governors of the IAR,which is chaired by the Vice Chancellor of ABU,is dominated by prominent agriculturalists working in Ministries of Natural Resources in the states. The Board established broad policy guidelines for research and has the final word in approving the estimates and research programme. (b) The Professional and Academic Board of the IAR, whose chairman is the Director of the IAR, draws up the detailed research programme within the guidelines given by the Board of Governors. It-consists of department and section heads of IAR plus the provost of agriculture, deputy directors, and staff representatives. 1Such a time discrepancy between the appointment of the first technical and social scientists, i.e., in this case 41 years, is alas typical of most agricultural research institutes in Africa. 10 (c) Financial estimates drawn up by the various departments of the IAR are approved by the Professional and Academic Board before being trans- mitted to the Board of Governors. 
The research programme is drawn up by a number of sub-committees of the Professional and Academic Board which are mainly organised on a crop basis. Membership of these sub-committees is open to anyone who is involved in research considered by the committee in question. Representatives of RERU and the Extension and Research Liaison Division (ERLS) are represented on all these sub-committees. As well as encouraging an inter-disciplinary approach to problems and initiating plans for the research programme, these sub-committees act as a first step in assessing the suitability of proposed recommendations which must eventually be approved by the Professional and Academic Board before being disseminated to farmers through the ERLS. IV. RESEARCH PROGRAMME OF RERU RERU and later the Agricultural Economics Department have used an inter-disciplinary approach in their research programme which draws on the disciplines of rural sociology, geography and agricultural economics. Two basic underlying factors have been taken into consideration in determining RERU's research programme: (a) Rural development programmes in the northern states in general have emphasized working with the farmer within his traditional setting __ -~ ~---~----- -- IIC-- rather than moving him to irrigation schemes, settlement schemes, etc. Voluntary participation and working largely within the traditional setting necessitates research that seeks to obtain an understanding of the problems and constraints faced by farmers at the village level. (b) The desirability of deriving a micro rather than a macro-oriented 11 research programme. The reasons for the micro emphasis were: i. There is a paucity of accurate data at the village (micro) level in the northern states. ii. Expertise at the macro level is available at other Nigerian socio-economic research institutions. iii. The work of technical researchers and extension specialists at IAR can best be complimented by such village or micro level studies. There is, of course, nothing new about advocating such micro-oriented studies. Many research workers have strongly urged them to be under- taken (Bunting, 1970; Eicher, 1968; Belshaw and Hall, 1968) in order to help determine what changes should be introduced and how they should be introduced. V. FOUR PHASES OF RERU'S RESEARCH PROGRAM RERU has adopted a basic work plan of village studies which consists of four phases.1 These are: (a) Positive phase, i.e., determining what farmers are doing. (b) Hypothesis testing phase, i.e., determining why farmers do things in the way they do. (c) Normative phase, i.e., determining what farmers ought to do. (d) Policy phase, i.e., determining how the,changes suggested under phase (c) should be brought about. Thistfay) also involve a consideration of phase (b) to determine whether the suggested policy is in conflict with the farmers' reasons for doing things in the traditional way. lit is of course appreciated that there is likely to be considerable overlapping in terms of timing between the four phases, but conceptually it has proved to be a useful division. 12 Much of RERU's research during the 1965-71 period concentrated on the positive and hypothesis testing phases. With this foundation derived from the "basic studies", emphasis is now shifting more and more towards "change studies" which concentrate particularly on the normative and policy phases. 
Conceptually the types of research work carried out by RERU and the degree of inter-disciplinary work involved can be considered as follows.1 Basic studies These studies seeking to describe, explain and understand the agricultural environment have concentrated to a great extent on very detailed village studies in five different areas of the northern states. Inter-disciplinary research work has been confined to cooperation among social science disciplines, i.e., geography, rural sociology and agricultural economics. Not all the work has been inter-disciplinary in nature although initial demographic and land utilization analysis was usually done cooperatively and efforts were made to ensure that research done by different disciplines fitted into the aims of the RERU research programme. Change studies These studies seek to assess the potential value of the technology that is being produced by the research workers and to assess the value 1A brief summary of the actual studies undertaken by RERU appears in Appendix A while a list of publications emanating from that work is available elsewhere (RERU, 1973). of the various programmes that have been used and are to be used in introducing change. The research programme of change studies can be divided into three broad groups: (a) Assessment, at the farmers' level, of the recommendations put out by IAR to determine their technical feasibility, economic profitability, and social acceptability. This approach is usually single crop enterprise in orientation, e.g., cotton, maize, while emphasis is laid on investi- gations at the farmers' level rather than on the experimental station. One of several reasons for this is the false picture given of the value of the recommendation under experimental conditions where managerial levels are so much higher (Table 1) than found under village farming conditions. Far example, Table 1 reveals that maize yields of farmers are 322 Ibs. per acre as compared with 8000 Ibs. per acre under IAR experimental station results. Although research is only just commencing in this area there are already promising results: i. The inter-disciplinary nature of the research is proving to be very valuable. As well as cooperation between the social and technical disciplines at IAR, government has been willing to provide financial assistance and extension workers for the projects thereby confirming the relevance of this work in assisting their agricultural programs. ii. These studies are getting the technical scientists off the experimental stations onto farmers' fields where they can see with their own eyes the'problems aced by and the strategies' employed by the farmers. This could have a long run impact in the determination of even more relevant research priorities. Table 1. Examples of inputs, yields and net returns per acre of crops under different conditions in the North Central State of Nigeriaa Indigenous Demonstration RERU working Experimental Crop practices plotsc with farmers station Sorghum: Yield (lbl.) 701 991 1097 3000 Costs (N) 0.40 2.87 2.81 11.48 Net return (N) 17.84 / -/ 22.90 '7/ 25.71 .,/1// 66.52 ,'-?// Hours 134 154 154 June July hours 47 53 53 *V ** ,* Maize: .4 .1 Yield (Ibs.) Costs (N) Net return (N) Hours June July hours 322 0.44 7.29 Groundnuts: Yield (Ibs.) Costs (N) Net return (N) Hours 524 1.41 15.87 217 /',. 2136 3.34 47.92 367 133 3512 8000 14.31 17.13 69.98 -t7174.87 552 298 I - /I1 I 869 2.28 26.40 /1I5- 247 933 1.94 28.85 I- ~ 247 1500 7.52 41.81 I. 
L June July hours 101 107 107 Cotton: /- : Yield (Ibs.) 190 457 438 746 1300 Costs (N) 0.09 7.05 7.48 7.48 15.52 Net return (N) 6.75 ?-,' 9.40 1.3'2 ':' 8.29 19.38 .r/ 31.28 ., 1- Hours 138 94 206 305 June July hours 28 58 69 .. ... ;c :/=7 :> '' ")/ ,/ * Millet/Sorghum:6 Yield (Ibs.) Costs (N) Net return (N) Hours June July hours ML 320 SG 685 3.83 26.34 247 59 Not available a. Blanks in the table indicate information is not available. Costs and net returns exclude labour costs. Fertiliser is costed at subsidized prices. Prices of products used represent those prevailing in 1966-67. They are now much higher for cash crops. b. Used as indigenous practices in Table 2. Maize was not used since it is not a common crop. Other crop enterprises not listed in the table were also used. Most of these were crop mixtures. c. Used as improved technology in Table 2(b). These figures were obtained from demonstration plots carried out on farmers fields by extension workers in North Central State. d. Used as improved technology in Table 2(a). e. These estimates were obtained from discussions with technical scientists at IAR and represent what is average on the experiment station. f. One Naira (N) is approximately equal to $1.50 (US). g. Recently research workers at IAR have been looking at some crop mixtures under experimental conditions. Much of this work undertaken by Andrews, De Wolf, Kassam, and Baker has still to be published. ,) /2 3 ;// /- 1 s"Li "., ~: = f iii. The doubtful validity of recommendations based purely on experimental station results has increasingly been recognized by the Professional and Academic Board of IAR which has now approved of the idea in principle that whenever possible and where relevant,potential "recommendations" should be tested at the farmers' level before being finalized. (b) Assessment of government programmes to introduce change among farmers. With reference to mechanisation De Wilde (1967) has noted the tendency to repeat mistakes because there is no proper and easily accessible recording and analysis of past experience. The same criticism can be applied to many other government programmes which often have 2 little idea of the benefit/cost ratios involved. RERU is commencing a number of such studies which will involve a considerable amount of cooperation from government in terms of provision of information. To date little difficulty has been experienced in this regard but it is anticipated that government may be reluctant to release financial information. (c) Assessment and evaluation of different ways of introducing change. This study which is the proposed culmination of much of RERU's work will seek to determine the best operational way to bring about betterment of incomes from rain-fed agriculture when faced with the administrative, financial and manpower constraints experienced by government. The project which will involve knowledge accumulation through implementation lit is recognized however, that this must not result in undue delay in finalising the recommendation. 20ne could argue that assessment of such programmes should be done by planning units in government. Unfortunately these are poorly developed in the northern states at the present time. 16 will involve both social, i.e., extension, rural sociology and agricultural economics, and technical scientists, and also government which will provide the field extension workers. 
In summary, the inter-disciplinary nature of RERU's research programme is much more evident in the "change studies" which involve several social and technical disciplines, than in the "basic studies" which are confined to social science cooperation. In addition it has been easier to obtain financial and manpower support from government for the "change studies", in which they can soon see definite results, than it has been for the "basic studies". Finally RERU, is now beginning to be involved in the implementation stage of governmental projects. For example, RERU is represented on the Rural Development Bureau Committee2 of North Central State. VI. OVERVIEW OF RERU'S MAJOR FINDINGS: PROBLEMS FACED BY FARMERS IN THE NORTHERN STATES OF NIGERIA Before being able to determine ways of helping the farmer improve his income it is important that his problems are understood so that IThere is little doubt that it is often easier to work with disciplines that are completely different from ones own, e.g., the human element in agricultural economics compared with its absence in entomology, than one which is closely allied, e.g., the human element in rural sociology and agricultural economics. Presumably this is because allied disciplines often overlap and have different ways of looking at the same thing, while disciplines which are completely different look at different things. It is therefore even more essential that people of allied disciplines working together have an appreciation of each other's discipline and are also compatible in terms of personality. 2This consists of representatives of several ministries in North Central State; it is concerned with bringing about a coordinated approach to rural development. strategies can be designed to overcome them. The studies carried out by RERU have helped highlight some of the problems farmers face in the northern parts of Nigeria.1 It is impossible to consider these in detail but a few can be summarised under four main headings which are inter- related and cannot be considered in isolation. (1) Low investment in Traditional Agriculture Investment in traditional agriculture tends to be low for two main reasons: first, the supply of funds for investment is small, since savings from the farmers' low incomes are minimal, credit from institutional sources has in the last few years been almost non-existent, and credit from local moneylenders is costly (Vigo, 1965); second, the returns from - investment are low, partly because many forms of capital goods can be formed directly from labour, e.g., land improvements, hand tools, etc. and partly because the low level of technology greatly reduces the productivity of capital goods, e.g., investment in fertilizer without better seeds or management, compared to the returns in a technologically advanced agriculture. The result of the low investment means the level of technology remains low and few inputs are purchased. The problem of low returns from investment can be partially overcome with adequate extension contact and a "package deal" approach to the adoption of improved technology. The problem of increasing the supply of investment funds from savings is, initially at least, difficult. It is appreciated that many of these problems were already known but these studies have given empirical support for those which were previously based on "conventional wisdom" statements. 
A consumption study undertaken by Simmons (1973) has verified that savings are low.1 This problem is accentuated by the recent tendency for the traditionally preferred complex family units (gandaye) breaking up into simple family units (iyali) with more young decision-makers, who may be more open to change, but are less able to provide the necessary savings, due to young family responsibilities (Buntjer, 1970; Goddard, 1969; Hedges, 1963). A credit programme may therefore be essential to encourage greater investment in agriculture. (2) Land and labour allocation Since capital inputs are very low in traditional agriculture, production is mainly limited by the amount and quality of land available and the amount of labour provided by the farming family. The land tenure system is often cited as being a critical bottleneck to initiating change in traditional agriculture. In most parts of Nigeria, land is legally a communal asset, and individuals only possess usufructuary rights to that land. However, it is apparent that inherited land is considered to be very secure (Goddard, 1972). Therefore, it is unlikely that the land tenure system itself is a critical constraint on the willingness of farmers to invest in improve- ments in the land. However, under the present system land cannot be used as collateral and, as a result, farmers cannot usually obtain loans from commercial organizations. This makes it difficult for government lending agencies to take any punitive action for default iThis is implied by comparing expenditure patterns derived by Simmons with incomes estimated in other studies using the same farmers (Norman, 1967-1972). in payment. One cannot help but think that the lack of a viable credit system for small farmers and low potential returns from investment are more critical constraints on the expansion of agricultural output than the land tenure system. There are, of course, other problems in the existing land tenure system such as rigidity in the distribution and use of land and frag- mentation of farm holdings. However, in general, farmers in the northern states find that the amount of land their family can cultivate is not limited by the availability of land but rather by the labour they can supply to cultivate it (Ogunfowora, 1972; Norman, 1970). Since little hired labour is employed, the labour supply is essentially from family sources. Capital goods which could substitute for labour, i.e., herbicides, oxen, etc., are very seldom used because of the lack of technical know-how and the unavailability of funds. The unavailability of capital to purchase new types of technology such as improved seed, fertilizer, etc., is likely to be even more critical in a few parts of northern Nigeria, e.g., .Kano State where high population densities have caused land to be more limiting than labour. Increasing agricultural production on the extensive margin, i.e., through increasing acreage, is no longer possible.1 Instead future increases in agricultural production in such areas can only be achieved through 1Helleiner (1966) has noted that this has been the traditional way Nigerian farmers have responded. Buntjer (1973) has obtained empirical evidence that farmers responded to higher cotton prices in this manner rather than adopting the improved technology available for growing cotton. increasing the productivity of land.1 Any substantial increases in land productivity can only be brought about by new technology, most forms of which cost money. 
Where such new technology is not available, e.g., parts of North West State, there is no option but for individuals to migrate seasonally (Goddard, 1973) and then permanently out of the area. (3) Seasonal Labour Constraints The pronounced seasonal variation in rainfall means that agricultural activity in the northern states reaches a distinctive peak during the weeding period in June and July. There is little activity during the dry season (November to April) when only low lying land (fadama) can be cultivated. The amount of upland (gona) a family can handle during the June-July period determines to a great extent their level of agricultural activity during the rest of the year. The restricted agricultural activity during the dry season means farming families often supplement their incomes with rural non-farm jobs, e.g., traditional crafts, services, etc. Ready cash is most available after the cash crops have been sold, i.e., mainly December and January. Because of the slackening of work activities during the dry season, most of the cash is spent and little is left for purchasing improved inputs, e.g., seed, fertilizer at the beginning of the rainy season, i.e., April and May, and for hiring labour during the weeding bottleneck period, i.e., June and July. 1Boserup (1965) has hypothesised that population pressure is very important in the adoption of land intensification types of improved technology. S- 21 y (4) Low Incomes and Risk Aversion The above three problems (which is by no means an exhaustive list) lead to low farm incomes. Often it is assumed in economic theory that people wish to maximise profits. However, where incomes are low and spent largely on consumption,farmers are unlikely to take risks. Farmers in the northern states of Nigeria give priority to the provision of family food requirements and are cautious about introducing new crops and patterns of production. The goal of most farmers in the northern states is one of profit maximisation subject to a risk constraint.2 (5) Implications of these problems Some of the implications are as follows: (a) Research workers should bear in mind the following when determining their research priorities. i. For many farmers labour, particularly seasonal, rather than land is the major constraint on increases in production. This supports research which seeks to break the weeding bottleneck in June and July (e.g., herbicides, oxen) and innovations which do not require greatly increased labour inputs, particularly during that period. 1As far as the farmer is concerned there is an element of risk attached to any change from the traditional ways of doing things which have ensured his survival (Wharton, 1969). 2Under certain circumstances these two goals may not be in conflict. For example, there is some evidence that growing crops in mixtures under indigenous technological conditions is consistent with these goals (Norman, 1973). 22 ii. Because of low incomes and limited managerial capacity1 it follows that innovations which are very profitable, dependable and cheap (because the margin of their incomes over subsistence levels is very small) are most likely to be adopted (Wharton, 1969). This supports the idea of research that will fulfill these conditions. Unfortunately as Jones (1960) and Eicher (1968) have emphasized, "single trait" innovations are rare and consequently research workers are usually pushed towards advocating the more complex "package" type of approach (Milliken and Hapgood, 1967). 
(b) Government agencies concerned with rural development in the northern states should bear in mind: i. The problems that prevent the small farmer from increasing his income are many and complex. There is, because of limited resources and administrative capacity, no hope of the government being able to solve all the problems. The challenge facing the government is to decide which are the major constraints on raising incomes and then to devise viable policies and programs to overcome these constraints. ii. When improved technology is available, government needs to concentrate on three broad priorities: policies to convince and encourage farmers to change; programmes to Little can be taught the farmer on how to improve his farming operations under indigenous conditions; however he is not familiar with improved technology and requires extension assistance. ensure that the farmers will be able to purchase the inputs to bring about change; and programmes to deliver the inputs in sufficient quantities at the right time and in the right place. VII. STRATEGIES TO IMPROVE AGRICULTURAL INCOMES The remarks at the end of the previous section implied that the adoption of new technology was the only way to improve incomes from agriculture. Before considering this possibility in more detail it is necessary to establish that this is indeed the main approach that should be emphasized. Wharton (1968) has observed that in general two broad approaches can be used in bringing about agricultural development. These are: (a) Those which rely upon making fuller use of existing unrealised opportunities and the elimination of existing economic inefficiencies. (b) Those which involve marked changes in one or several of the factors held largely constant under (a), e.g., new technology, changes in infra- structure, changes in demand, changes in prices or the terms of trade between the agriculture and non-agriculture sectors, changes in the motivation of people and changes in institutions.1 RERU has undertaken some preliminary analysis on assessing the potential for increasing agricultural incomes in four different ways. Two of these fall into category (a) above and two into category (b). 1Agricultural development may in fact involve a combination of these two approaches. For example, as was emphasized earlier, the develop- ment of the infrastructure, i.e., roads and railways in Nigeria created previously unrealised opportunities for farmers in producing export products (Eicher, 1967). 24 (1) Reallocation of resources presently committed to production. The results of linear programming studies (Norman, 1970) in Table 2 indicate there is little potential for increasing incomes in this manner. For example, net returns are N185.61 in model B as compared with N173.36 in model A.1 Although the validity of such a conclusion is challenged by Lipton (1968) it does support a similar conclusion derived by Hopper (1965) that farmers are efficient under traditional conditions. (2) The utilisation of more inputs under indigenous technological conditions. There is a greater potential for increasing incomes using this approach, i.e., compare net returns in model C with model B in Table 2. However, it can be argued that this is only a relevant solution as long as land)continues to be a less limiting input than labour.2 When land becomes truly limiting, increases in income will have to come from the use of improved technology which increases'output per acre, e.g., improved seeds, fertilizer, etc. 
At the present time the type of tech- nology that would be most relevant would be that which increases the output ,per unit of labour. However, this type of technology is either not well developed for the environment in which the farmers work, e.g., herbicides, or is very expensive, e.g., oxen and tractors, and is therefore not likely One Naire (N) is equal to $1.50. 2This can be deduced from the results in Table 2 in which all models have fallow land. 3This is likely to happen because of the high population growth rates and the inability of the non-agricultural sector to absorb much of the increase in population. Table 2. Results of Linear Programming Models Using Different Levels of Technology In The Zaria Area of Northern Nigeria Linear Programming Models Indigenous technology Improved technology average Labour Labour Labour Labour farm restriction (1) restriction (2) restriction (1) restriction (2) A B C D E Land availability (acres): Upland 8.1 8.1 8.1 8.1 8.1 Lowland 1.0 1.0 1.0 1 1.0 Results: Total labour used (hours) 1753 1543 1822 1588 1996 Months in which no surplus labour is available Apr, June,July May, June Apr, June,July May, June Nov, Jan. July, Nov. Nov, Jan. July, Nov. Cultivated acres: Indigenous technology Sole crops (lowland) 0.8 0.5 1.0 0.6 1.0 Sole crops (upland) 1.2 0.9 1.8 0.0 0.0 Mixtures (upland) 5.3 4.7 4.5 4.6 5.5 Improved technology Sole crop (upland): Sorghum 0.8 1.8 Total (acres) 7.3 6.1 7.3 6.0 8.3 Net return (N) 173.36 185.61 218.17 187.60 222.67 Percent increase in net return over B 17.83 1.01 20.00 Are food needs satisfied? Yes Yes Yes Yes Yes 1 and 2The monthly labour distribution reflected the degree of agricultural activity. With labour restriction (1) the labour availability was the same as that actually used on the average farm. With labour restriction (2) labour availability in each month was the non-family labour actually hired in each month on the typical farm, i.e., A, plus the time actually spent by family members on farm A during June, the peak labour month for family labour. In other words it assumes that in any month family members will be prepared to spend as much time on farm work as they do during June. Source: (Norman, 1970) to be commonly adopted. One interesting point to note from the results in Table 2 is that using more of the traditional resources (land and labour) does require the employment of greater monetary resources, primarily for hiring labour. This implies that even in the absence of improved tech- nology credit may play an important role in facilitating the enlargement of the farm business.1 (3) Adjustment of prices. Profits and therefore incomes can be increased under ceteris paribus conditions by raising the prices received by farmers for their products and/or by reducing the costs of the inputs. (a) Product prices. In Nigeria at the present time food crop marketing is not controlled while export cash crops are marketed through marketing boards. In the case of food crops it would seem price support programmes would only be justified if the present marketing system exploits the farmer and if such a programme would not turn the terms of trade against the urban sector, which benefits greatly from relatively cheap food. Hays (1973) has undertaken a study which indicates that farmers receive 69 percent of the final retail price for their grain. He found the relatively wide margin between the producer and consumer, was however, not due to exploitation, but rather to the length of the marketing chain, i.e., up to seven middlemen. 
The question is could a government controlled system do any better? The history of the Nigerian Marketing Boards suggests that this is doubtful. Rather the solution as Hays suggests is to allow the free market to operate but to try and increase the farmers' share of the 1This however, does not necessarily suggest that institutional sources of credit should be advocated for this purpose, in the absence of improved technology. final retail price by continuing to improve communications, e.g., by building roads and disseminating price information through the news media, introducing standard volume or weight measures and encouraging farmers to organise in groups, e.g., as cooperatives, in order to eliminate some of the links in the marketing chain. Unlike food crops there is a stronger case that can be made for government intervention in the marketing of export cash crops. Initially the Nigerian Marketing Boards were established to help the farmer but slowly developed into taxing institutions by giving farmers low prices for their products and using the resulting trading surpluses for development rather than stabili- zation purposes (Helleiner, 1966; Olayide, 1971). In the last two or three years, with the apparent stagnation of the agricultural sector, the fallacy of this has been recognized and the producer prices for these cash crops have been raised substantially. Analysis presented elsewhere (Ogunfowora, 1972; Norman, 1970) indicates that substantial increases in the incomes of farmers could be achieved by a 50 percent increase in export cash crop prices, in spite of the fact that sufficient food supplies for the family would still be produced. (b) Input prices. It is of course not easy to control the prices of certain inputs, e.g., labour, land, interest charged by moneylenders, etc. In discussing reducing prices of inputs,one therefore usually thinks of elements of improved technology, e.g., fertilizer, herbicides, etc. It has been suggested (RERU, 1972) that a preferable approach to that 1That is rent since farmers only hold usufructuary rights to the land. suggested in (a) above would be not to raise the prices of cash (export) crops too much but to use more of the trading surpluses of the marketing boards in the subsidization of the cost of the new technology, e.g., fertilizer. This strategy would have at least two clear advantages. First, as well as having the advantage of encouraging the adoption of the new tech- nology by reducing the cost per unit, it would provide a more certain way of ensuring that at least some of the increase in incomes that would result, would not be spent on consumption. Secondly, the recent increases in cash (export) crop prices could divert farmers away from food crop production and thus contribute to further increases in food prices which would increase pressure for higher wages in urban areas. The main modern technological input is fertilizer. This is not crop specific in its application and therefore an approach of continuing to heavily subsidise inputs could benefit both food crops and cash crops and also areas where no cash crop, which comes under the jurisdiction of the marketing board, is grown. Govern- mental subsidies for fertilizer, seed dressing, herbicides, etc., could provide a firm basis for increasing farm incomes and production. Although as yet no analysis has been done on the impact on farmers' incomes of such a policy it has been shown that the demand for fertilizer could greatly increase (Ogunfowora and Norman, 1973). 
(4) Adoption of improved technology. It is of course obvious that relevant)improved technology has to be available before it can be adopted. Is there improved technology available? According to the information in Table 1 the answer is definitely yes but the analytical results in Table 2 indicate this conclusion is not so definite.' Although model D does indicate some payoff to the improved technology this is not so striking as would be expected. Model C utilising greater amounts of traditional inputs under indigenous technological conditions indicated potentially a much greater increase in income. However, it is also apparent that the biggest payoff comes with a combination of greater capital, a higher land availability and labour utilisation base, and the improved tech- nology, i.e., model E. Once again as in (2) above the importance of monetary resources is emphasised both for the purchase of the extra labour required and the improved technology. The apparent inconsistency between the relative values of the improved technology as expressed in Tables 1 and 2 can be resolved by an assessment of its relevance to the farms programmed in Table 2. Two and possibly three points which raise doubts about the validity of the improved technology are as follows: (a) The farms in Table 2 face a labour rather than land constraint. De Wilde (1967) has bemoaned the failure of people to recognize the potential significance of the seasonal labour constraint1 while Collinson (1968) has stressed the importance in such situations for the technology to fit in with seasonal labour requirements of other crops grown by the farmer.2 Thus land intensive technology will likely only appear in optimal farm plans (based on a profit maximisation goal), if it is very profitable (relatively to other crops), and/or it does not require much higher labour inputs (particularly during the labour bottleneck period) than other 1This is a difficulty for technical scientists to grasp who are accustomed to thinking in terms of the return per acre. 2After all the farmer is interested in maximising his income, subject to a security constraint, not from one crop but from the farm business as a whole. crops. The problem of this becomes particularly apparent in the crop enterprises recorded in Table 1. Cotton is a good example of a crop which according to recommendations should be planted earlier than is done traditionally. This immediately brings it into conflict with the weeding bottleneck for food crops in June and July (Norman, Hayward and Hallam, 1973). In spite of its much greater profitability under improved technological conditions it is therefore not competitive with other possible crop enterprises as can be seen by the fact that it does not appear in any of the optimal farm plans, in Table 2. (b) It is seen from the results in Table 2 that crop mixtures (growing several crops on the same field at the same time) are very significant in all the farm plans. One of the reasons for their popularity is that they permit a great deal of flexibility in the timing of farming operations and therefore can help alleviate the demands of the weeding bottleneck period in June and July (Norman, 1973). 
As a result they tend to be relatively more remunerative than sole (single) crops under indigenous conditions in terms of labour expended during this period.1 This means that sole crops under improved technological conditions have also to compete for adoption with a more serious competitor than sole crops under indigenous conditions, i.e., crop mixtures under indigenous conditions. In addition to the problem of seasonal timing of resources, particularly labour, sole crops under improved technological conditions often suffer from the problem of a lack of flexibility in the timing of the operations which can cause 1And also in fact, in terms of land. farmers difficulties in their adoption. The relevance of the improved technology could be greatly enhanced by research workers devoting some resources to making the improved technology more competitive and flexible by incorporating it into crop mixtures.I (c) Another important factor in determining the relevancy of the improved technology is not only its profitability but the variability in that return. The nature of the programming exercise given in Table 2 has not permitted this to be taken into account but because of the low incomes there is no doubt that this will be an important factor in determining the farmers attitude to its adoption. Here again crop mixtures are in a strong competitive position since the variability in their return in 2 value terms is lower than that of sole crops. Therefore, in conclusion, there is no doubt that for small farms relevant improved technology exists, but for the middle sized farms (Table 2), as opposed to the larger farms which can justify oxen, there are some limitations to the improved technology available at present. However, the time is rapidly approaching as population increases when lit is of course, a very complicated subject and it is questionable just how much in the way of research resources should be devoted to it in Terms of the potential payoff. The Institute for Agricultural Research, Ahmadu Bello University, has never explicitly stated that the recommended practices are only applicable to sole (single) crops. Unfortunately, however, since the recommended practices arose from experiments which of necessity were carried out on sole crops, the interpretation that they are only valid for crops grown under such conditions, has tended to be implicitly assumed. This impression has been further encouraged by the fact that the demonstration plots undertaken by the Ministries of Natural Resources in the northern states are undertaken on sole crops. 2This is mainly because different crop species are not equally affected by variations in weather, insects, diseases and prices. this technology will become relevant to larger numbers of farmers.1 The remaining part of the paper discusses in more detail the conditions necessary to ensure that improved technology is adopted and to suggest what levels of improved technology should be offered. VIII. MEASURES TO INCREASE THE RATE OF ADOPTION OF IMPROVED TECHNOLOGY Earlier in the paper it was mentioned that, providing the relevant technology is available, government programmes for bringing about agricultural development should concentrate on three factors: programmes designed to convince and encourage farmers to change; programmes to ensure that the farmers will be able to purchase the inputs necessary to bring about the change; and programmes to deliver the inputs in sufficient quantities to the right place at the right time. 
These programmes may seem obvious but it is very rare that all these operate together in one place.2 (1) Convincing the farmer. Many factors contribute to convincing the farmers of the desirability and practicality of increasing their well-being. They must of course want to change (Bailey, 1966) and since they are rational individuals (Blair, 1971) there must be good reasons for convincing them to change.3 The myth that traditional farmers have some target level of income and IMuch of the technology available at present could become much more relevant as soon as suitable herbicides have been recommended. 2For example one northern state did not have any fertilizer to distribute in 1973. 3In French-speaking countries in Africa the "animation rurale" approach has often been used in awakening farmers' receptivity to change and encouraging them to exercise some initiative in bringing about this change (UN, 1971). 33 have no desire to obtain more has long been exploded. Like anyone else, they will consider the effort (cost) of obtaining extra income in relation to the satisfaction (benefits) that income gives. If the probability that the potential benefits outweigh the costs is high,then they will be interested in making efforts to obtain the extra income. The higher the level of profitability of the innovation and the lower its variability, the greater will be the chance of a relevant innovation being adopted (Wharton, 1968). The extension worker plays a key role in convincing the farmer, by demonstration, of the potential profitability and dependability of the improved technology.2 The significance of the extension worker cannot be overemphasised, even after the demonstration phase, since it is he who has to provide the managerial expertise with reference to the improved technology, in order to ensure that farmers obtain the full benefits from its adoption.3 (2) Ability of farmers to purchase the improved technology. Even if the farmer is convinced of the value of the modern technology he may not adopt it because it is too expensive and he does not have the cash when it is necessary to purchase it. It is likely that 1This can be helped to some extent by appropriate pricing policies on the output or preferably input side (see page 27). 2Wharton (1968) has stated that it is the farmer's subjective evaluation of profitability and dependability of the improved technology that will influence his decision whether or not to adopt it and not any ofjective measurements done by someone else. This implies the importance of demonstrating the improved technology on the farmer's own field. 3This discussion has assumed that convincing the farmer to adopt the improved technology is purely a production problem. However, it is appreciated that if farmers do adopt it and production does increase there may well be very soon a reduction in profitability and therefore incentives because of a marketing problem. 34 it would be easier to convince farmers to adopt it if it is cheaper. This could be done as was suggested earlier by larger input subsidies rather than by raising the prices of the cash crops and thereby reducing the trading surpluses of the marketing boards. However, although larger input subsidies will make their utili- zation relatively more profitable as far as the farmer is concerned, it will not eliminate the need for cash. This is obvious from the cost of non-labour inputs given in Table 1. 
Considering that a farming family's average income is in the region of N200 to N400,it is not difficult to justify the contention that some form of institutional credit is required. The Nigerian Federal Government has recognized this in the current Develop- ment Plan by setting up a National Agricultural Credit Bank specifically for this purpose (Federal Republic of Nigeria, 1970). Although previous government credit programs have suffered from the low rate of repayment, it is hoped that future loans to farmers will be administered in such a way as to reduce the rate of default. Whether or not cooperatives or some other institutional arrangement is desirable has not yet been firmly established.1 (3) Availability of inputs. Finally, although the farmers may be convinced of the value of the modern technology, and, with the provision of credit, may be in a position to purchase the inputs, it is essential that the inputs are available in 1RERU is at the present time undertaking three studies in this area. Unfortunately cooperatives in several of the northern states are located in different ministries from the Ministry of Natural Resources, and there is some opposition to the idea of using them for this purpose. Also the past history of cooperatives in the northern states has not been good. 35 the area in convenient sized units at the right time. This may seem an obvious statement but because of the dispersed nature of the agricultural sector such a seemingly simple operation can cause enormous logistical problems. Government agencies still are primarily responsible for the input distribution system. However, because of the many other functions that such agencies are expected to perform, it has been suggested that consideration should be given to the possibility that at least part of this should be transferred to private organizations (CSNRD, 1969). There is a great deal to be said in favour of this approach especially since safeguards can be introduced by government agencies to prevent exploitation of farmers by commercial enterprises. The Need for Dual Recommendations As was noted in the proceeding section it is rare for all the three programmes to be in one place at one time. Indeed it is at the present time beyond the financial and administrative resources of govern- ment to undertake this approach everywhere. In view of this the following strategy is suggested. (a) Top priority should be given to ensuring that improved inputs are available to farmers everywhere.1 (b) The IAR should provide two levels of recommendations for improved technology (Norman, 1973). i. The intermediate level of recommendations would be advocated when there are a large number of farmers per extension worker, 1Also pricing policy for these inputs should be reviewed in the light of comments on page 27. i.e., 2,000-3,000:1, as is the case at the present time in most northern states. The recommendations would be based on a level of input at which average value product is at a maximum.1 Since managerial expertise with reference to the improved technology is not readily available (extension worker concentration is low) the risk of adopting the improved technology is correspondingly high. Therefore the levels of recommended improved inputs are relatively low.2 The potential profitability of the improved inputs will therefore not be as high as the advanced level of recommenda- tions, but in the absence of the extension input risks will be reduced. ii. 
The advanced level of recommendations would be advocated when there are fewer farmers per extension worker, e.g., less than 500:1 and,in the case of the land intensive technology presently available,where population densities are high and therefore farms are small.3 These recommendations would be based on a level of input at which marginal value product equals marginal factor cost, i.e., the optimum quantity of 1This is analagous to the highest return per unit of outlay approach suggested by Collinson (1968). 2It is likely that in such a situation farmers will apply the improved inputs in a crop mixture framework, a system they are confident of. This in fact is what often happens now. As a result IAR has issued advice but not recommendations (proved experimentally) on use of improved inputs in crop mixtures. 3The technology at present available is more suited to such farms. Also such farmers, because of a land constraint,will be more open to change, since they need to adopt the improved technology in order to continue to obtain a livelihood from agriculture. 37 input in terms of profit maximisation. The greater concentration of extension staff will enable them to provide the managerial expertise (concerning the improved technology) to the farmers thereby helping them to reap the full benefits of the technology while at the same time reducing risks and variation in the return. (c) That credit programmes would only be introduced in areas in which the advanced levels of recommendations are being extended. APPENDIX A. RESEARCH PROGRAMME OF RERU A summary of the research programme since the inception of RERU is as follows: BASIC STUDIES Village studies Until recently the main area of emphasis has been in studies carried out in a total of 13 villages in four of the six Northern States, i.e., North West (1967-70), North Central (1966-72), North East (1967-68) and Kwara (1969-70) To assess the impact of urban areas on the factor and product markets, two or three villages were picked in each area differing in ease of communication with the main city in that area. Research in each area has in general followed a similar pattern with the initial population enumeration and field mapping (using aerial photographs) being done cooperatively by all three disciplines. Each discipline has then undertaken responsibility for a specific study. Geography has con- centrated on constructing land use maps in each of the study villages in the three far northern states and has investigated the problems of densely populated areas with specific reference to the three villages in the Sokoto area (North West State) and the resulting tendency towards seasonal migration. Research by rural sociologists has emphasized the determination of factors influencing the readiness to change with specific reference to the study villages in the Zaria area (North Central State) in the hope that 1Work in fact was undertaken in five different areas. At the request of the State Government two different areas were picked in Kwara State. this will give some idea as to how future changes should be introduced. In one of the Sokoto study villages (North West State) a study has been made of the traditional lines of authority and communication to under- stand how these could increase the effectiveness of extension programmes. The main thrust of the agricultural economics work has been in undertaking farm management surveys in all the study areas. 
However, more recently in the Zaria study villages (North Central State) these have been comple- mented by a consumption study, a grain and legume marketing study and a utilisation of credit study. Other studies A number of other basic studies have also been undertaken as a result of specific requests from government, specific interests of research workers, etc. Those undertaken include: in geography an examination of Zaria (North Central State) as an urban centre on the surrounding rural area and a study to determine land utilization in relation to soil and land types in the Hadejia Flood Plain (Kano State); in rural sociology the influence of occupation (farming versus wage earning) of men on the life patterns of women and children; and in agricultural economics a small study of fruit and vegetable marketing in the Sokoto area (North West State), and a study of the marketing of cowpeas in various parts of North West and Benue Plateau States. CHANGE STUDIES The research programme with regard to the change studies is at present being concentrated in three broad groups. Much of the field work of projects listed in this part of the summary is still underway. Evaluation of government programmes In geography this includes a study of the changes in agricultural land use that are accompanying the development of the Kadawa Pilot Irrigation Scheme (Kano State), in rural sociology these include studies on the effectiveness of the Farm Institutes and reasons why many of the trainees leave agriculture (Kwara State), innovative farmers in North Central State, responsiveness of farmers to rises in cotton prices (North Central State), the functioning of Farmers Advisory Committees (Benue Plateau State) and the introduction and adoption of dry-season tomato production (North Central State). Studies in the area of agricultural economics include the analysis of the yields of crops from the demonstration plots carried by the state Ministries of Natural Resources, studies of the problems of cooperatives (Kwara State) and the impact of credit and marketing cooperatives on farmers incomes (Kano and North Eastern States). Rural sociology and extension have cooperated in a study to determine the factors responsible for the farmers adoption of cotton growing recommendations in the Gombe area (North East State) while extension, agricultural economics and Ministry of Natural Resources North Central State are cooperating on a study of two different farmer credit programmes, high and low levels of improved inputs and extension inputs. Assessment of recommendations at the farmers' level All the projects involved in this category have involved more than one discipline. Projects have included the undertaking of observation plots on crop mixtures (rural sociology and agricultural economics), observation plots on sole cropped maize (rural sociology and agricultural economics), assessment of recommendations for cotton growing at the farmers level (entomology, agricultural economics and North Central State Ministry of Natural Resources), assessment of recommendations for cotton, sorghum and maize growing at the farmers level with a credit component (entomology, extension, agricultural economics and North Central State Ministry of Natural Resources). 
Guided introduction of change This project which is just commencing and is a logical development of RERU's work will last at least five years and will test different combinations of three variables, i.e., different extension methods, credit versus no credit programme, and assurance of availability of improved inputs versus no such assurance will be tested in different villages. The plan is to test only those types of programmes that would be reasonably feasible for governments to adopt within the next 10 years. An inter-disciplinary approach is to be used including technical and social science disciplines and North Central State Government. BIBLIOGRAPHY 1. Bailey, F.G., (1966). The peasant view of the bad life. Communication Series No. 20. Brighton, Institute of Development Studies, University of Sussex. 2. Baldwin, K.D.S. (1957). The Niger Agricultural Project. London, Blackwell. 3. Belshaw, D.G.R. and M. Hall (1969). Economic and technical co-ordination in agricultural development: the case for operational research. Agricultural Research Priorities for Economic Development in Africa, Vol. III. National Academy of Science, Washington, D.C. 4. Blair, H.W., (1971). The green revolution and "economic man": some lessons for community development in South Asia. Pacific Affairs, 44(3). 5. Boserup, E., (1965). The Conditions of Agricultural Growth. London, Allen and Unwin. 6. Bunting, A.H., (1970). Change in Agriculture. London, Duckworth. 7. Buntjer, B.J., (1970). The changing structure of gandu. (Published in M.J. Mortimore (Ed.), Zaria and its Region: a West African Savannah City and its Environs. Department of Geography Occasional Paper No. 4, Zaria, A.B.U.). 8. Buntjer, B.J., (1973). Personal communication. 9. Byerlee, D. and C.K. Eicher (1972). Rural Employment, Migration and Economic Development: Theoretical Issues and Empirical Evidence From Africa. African Rural Employment Study. Rural Employment Paper No. 1. East Lansing, Michigan State University. 10. Collinson, M.P., (1968). The evaluation of innovations for peasant farming. East African Journal of Rural Development, 1(2). 11. Consortium for the Study of Nigerian Rural Development (1969). Strategies and Recommendations for Nigerian Rural Development. Department of Agricultural Economics, Michigan State University, East Lansing, Michigan. 12. Degraff, H., (1951). Some problems involved in transferring technology to underdeveloped areas. Journal of Farm Economics, 33. 13. De Wilde, J.C. et al., (1967). Agricultural Development in Tropical Africa, Volume 1, the Synthesis. The Johns Hopkins Press. 14. Eicher, C.K., (1967). Long term agricultural development in Nigeria. Journal of Farm Economics, 45(5). 15. Eicher, C.K., (1970). Research on Agricultural Development in Five English-Speaking Countries in West Africa. New York, Agricultural Development Council. 16. Federal Republic of Nigeria, (1970). Second National Development Plan, 1970-74. Lagos, Federal Ministry of Information. 17. Goddard, A.D., (1969). Are Hausa-Fulani families breaking up? Samaru Agricultural Newsletter, 11(3). 18. Goddard, A.D., (1972). Land tenure, land holding and agricultural development in the central Sokoto close-settled zone. Savanna, 1(1). 19. Hays, H.M. The organisation of the staple food grain marketing system in northern Nigeria: a study of efficiency of the rural- urban link. Manhattan, Kansas State University, 1973. (Ph.D. dissertation). 20. Hedges, T.R., (1963). Farm Management Decisions. Englewood Cliffs, Prentice-Hall. 21. Helleiner, G.K., (1967). 
Peasant Agriculture, Government, and Economic Development. Homewood, Irwin. 22. Helleiner, G.K., (1966). Marketing boards and domestic stabilization in Nigeria. Review of Economics and Statistics, 48(1). 23. Hopper, W.D., (1965). Allocation efficiency in a traditional Indian agriculture. Journal of Farm Economics, 47(3), 611-624. 24. Jones, W.O., (1960). Economic man in Africa. Food Research Institute Studies, 1. 25. Lipton, M., (1968). A game against nature (2 parts). The Listener, 79. 26. Lipton, M., (1968). The theory of the optimising peasant. The Journal of Developing Studies, 4(3). 27. Lipton, M., (1970). Interdisciplinary studies in less developed countries. Joint Reprint Series No. 35. Brighton, Institute of Development Studies, University of Sussex. 28. McMeekan, C.P., (1965). What kind of agricultural research? Finance and Development, 2(2). 29. McRobie, G., (1968). New prospects for India's villages. Asian Review, 1(2). 30. Mellor, J.W., (1970). The Economics of Agricultural Development. Ithaca, Cornell University Press. 31. Millikan, M.F. and D. Hapgood., (1967). No Easy Harvest. Boston, Little, Brown and Company. 32. Mosher, A.T., (1964). The sociologist in agricultural development. Rural Sociology, 29(1). 33. Norman, D.W., (1967-72). An economic study of three villages in Zaria province. Parts 1 to 3. Samaru Miscellaneous Paper Nos. 19, 23, 37, and 36. Samaru, Institute for Agricultural Research, Ahmadu Bello University. 34. Norman, D.W., (1970). Initiating change in traditional agriculture. Invited paper read at 1970 Nigerian Agricultural Society Conference. Proceedings of the Agricultural Society of Nigeria, 7. ""- 35. Norman, D.W., (1973). Intercropping of annual crops under indigenous conditions in the northern part of Nigeria. (Published in I.M. Ofori (Ed.), Factors of Agricultural Growth in West Africa, Legon, published by ISSER, University of Ghana and printed by Presbyterian Press, Accra). 36. Norman, D.W., (1973). Modern technology: its relationship to risk, managerial ability, and level of extension input. Samaru Agricultural Newsletter, 15(1). 37. Norman, D.W., Hayward, J.A. and H.R. Hallam (1973). An Assessment of the Effectiveness of Improved Cotton Growing Recommendations as Applied by Farmers in North Central State, Nigeria, 1971. Samaru, Institute for Agricultural Research, Ahmadu Bello University. (Mimeographed). - 38. Norman, D.W. and E.B. Simmons, (1973). Determination of relevant research priorities for farm development in West Africa. (Published in I.M. Ofori (Ed.), Factors of Agricultural Growth in West Africa, Legon, published by ISSER, University of Ghana). 39. Ogunfowora, 0., (1972). Derived resource demand, product supply and farm policy in the North Central State of Nigeria. Unpublished Ph.D. dissertation, Iowa State University. 40. Ogunfowora, B. and D.W. Norman, (1973). Farm-firm normative fertilizer demand response in the North Central State of Nigeria. Journal of Agricultural Economics, 24(2). 41. Olayide, S.O., (1971). Effects of the marketing boards on the output and income of primary producers. Paper given at International Conference on the Marketing Board System, 29th March 3rd April, 1971. 42. Rural Economy Research Unit, (1972). Farm income levels in the northern states of Nigeria. Information requested by the Salaries and Wages Review Commission of 1970. Samaru Miscellaneous Paper No. 35. Ahmadu Bello University. 43. Rural Economy Research Unit and Agricultural Economics Department, (1973). Progress Report No. 7. 
Institute for Agricultural Research, Ahmadu Bello University. 44. Roling, N.G., (1966). Towards the inter-disciplinary integration of economic theory and rural sociology. Sociologia Ruralis, 6(2). ,/ 45. Schultz, T.W., (1964). Economic growth from traditional agriculture. (Published in A.H. Moseman (Ed.), Agricultural Sciences for the Developing Countries. Washington, AAAS). 46. Simmons, (1973). Consumption survey in three villages in Zaria Province. Vol. 2, Rural expenditures. Institute for Agricultural Research, Ahmadu Bello University. 47. Todaro, M.P., (1971). Income expectations, rural-urban migration and employment in Africa. International Labour Review. 104. 48. United Nations Department of Economic and Social Affairs, (1971). Popular Participation in Development: Emerging Trends in Community Development. New York, United Nations. 49. United Nations Educational, Scientific and Cultural Organization (1970). Report on the Meeting of Experts on the Development of Rural Life and Institutions in West Africa, 22nd 31st July, 1970, Accra, Ghana. 50. Vigo, A.H.S., (1965). A Survey of Agricultural Credit in the Northern Region of Nigeria. Kaduna, Northern Region Ministry of Agriculture (Mimeographed). 51. Wharton, C.R., (1968). Risk, uncertainty and the subsistence farmer. Paper read at the Joint Session, American Economic Association and Association for Comparative Economics, December 1968. Chicago. Other OLC Publications The Reorganization of Higher Education in Zaire, by William Rideout, March 1974, OLC Paper No. 5. Nigerian Universities in the 70's, by A. Babatunde Fafunwa, April 1974, OLC Paper No. 4. Reflections on the Comilla Rural Development Projects, by Akhter Hameed Khan, March 1974, OLC Paper No. 3. Education Sector Planning for Development of Nation-Wide Learning Systems, by Frederick H. Harbison, November 1973, OLC Paper No. 2. Experiences in Rural Development: A Selected, Annotated Bibliography of Planning, Implementing, and Evaluating Rural Development in Africa, by Tekola Dejene and Scott E. Smith, August 1973. OLC Paper No. 1. Le Developpement Rural: Realisations et Evaluation, Bibliographie annotee de textes choisis sur la planification, la mise en oeuvre et I'dvaluation du developpement rural en Afrique, par Tekola Dejene et Scott E. Smith, Aoft 1973. Cahier OLC No. 1. International Directory for Educational Liaison, January 1973. $5.00 in U. S., Canada, and Europe; other countries, no charge. Repertoire International de Liaison en Matiere d'Enseignement, August 1973. "Enhancing the Contribution of Formal Education in Africa: Primary Schools, Secondary Schools and Teacher Training Insti- tutions," by John W. Hanson. "A Human Resource Approach in the Development of African Nations," by Frederick H. Harbison. "The Emergent African University: An Interpretation," by C. W. de Kiewiet. Single copies of OLC papers may be obtained without charge by writing to: Overseas Liaison Committee American Council on Education One Dupont Circle Washington, D. C. 20036 U.S.A. Contact Us | Permissions | Preferences | Technical Aspects | Statistics | Internal | Privacy Policy © 2004 - 2010 University of Florida George A. Smathers Libraries.All rights reserved. Acceptable Use, Copyright, and Disclaimer Statement Last updated October 10, 2010 - - mvs | http://ufdc.ufl.edu/UF00054833/00001 | CC-MAIN-2014-35 | en | refinedweb |
Originally posted by sonir shah:

Question ID: 988380923984

What will the following code print when run?

public class Test
{
    static String s = "";

    public static void m0(int a, int b)
    {
        s += a;
        m2();
        m1(b);
    }

    public static void m1(int i)
    {
        s += i;
    }

    public static void m2()
    {
        throw new NullPointerException("aa");
    }

    public static void m()
    {
        m0(1, 2);
        m1(3);
    }

    public static void main(String args[])
    {
        try
        {
            m();
        }
        catch (Exception e) { }
        System.out.println(s);
    }
}

Answer: 1

Can anyone explain with reasons?
Originally posted by sonir shah: But how is m1() related to m2() method. both are completely different methods. my question still remains unanswered .i.e why only have we considered the m0() method instead of m1() for applying the value of 's'?? sonir | http://www.coderanch.com/t/235733/java-programmer-SCJP/certification/Jq-ID | CC-MAIN-2014-35 | en | refinedweb |
Software products slated for the iOS market have to be frugal in their use of memory. iOS devices such as the iPhone and iPad have limited physical memory, much less than their flash storage. Using Xcode can help design frugal code and subject source files to static analysis. You can also use its Instruments tool to track down memory problems at runtime. In this article, I discuss some common memory problems and how they can affect a typical iOS app. I show you how to detect these problems with the aforementioned tools and some ways of fixing them.
You will need a working knowledge of ANSI-C, Objective-C, and Xcode. The sample project needs version 3.x (or newer) of the Xcode development suite.
Types of Memory Problems
Most memory problems are one of four types. The first type is the dangling pointer. This is an object or data pointer that still refers to a deallocated memory block. The block may still have valid data, but the data can "disappear" at some point in time. Attempts to access a dangling pointer may lead to a segmentation fault (EXC_BAD_ACCESS or SIGSEGV). A dangling pointer should not be confused with a NULL, which is defined as ((void *) 0).
Consider the snippet in Listing One. Here, class FooProblem has a single property (objcString) and a single action method. The action method, demoDanglingPointer:, initializes objcString to an empty NSString (line 18). Then, in a separate code block, it creates an instance of NSMutableString (line 20). It sends the instance a release message (line 22), but also assigns that same instance to objcString (line 25). Once the code block exits, deallocation occurs and objcString is left holding a dangling pointer.
Listing One
// -- FooProblem.h
@interface FooProblem : NSObject
{
    // -- properties:demo
    NSString *objcString;
}

// -- methods:demo:actions
- (IBAction) demoDanglingPointer:(id)aSrc;
@end

// -- FooProblem.m
@implementation FooProblem

- (IBAction) demoDanglingPointer:(id)aSrc
{
    NSString *tBar;

    objcString = [NSString string];
    {
        tBar = [[NSMutableString alloc] initWithString:@"foobar"];
        //... do something else
        [tBar release];
    }
    //..do something else
    objcString = tBar;
}
@end
Now consider the snippet in Listing Two. This variant of FooProblem declares a C struct named FooLink (lines 1-6). In its demoDanglingPointer: action, it declares the local variable tTest and assigns the latter the output from the method danglingPOSIX() (line 26). The danglingPOSIX() method creates a local instance of FooLink (tBar) inside a code block (lines 38-41). It assigns values to the struct fields and sets the local tFoo to tBar (line 42). But just before the code block exits, danglingPOSIX() disposes of tBar with a call to free() (line 44) and returns the pointer held by tFoo (line 48). As a result, the local tTest now has a dangling pointer.
Listing Two
typedef struct Foo { char *fFoo; unsigned int fBar; struct Foo *fNext; } FooLink; // -- FooProblem.h @interface FooProblem : NSObject { // -- properties //... } // -- methods:demo:actions - (IBAction) demoDanglingPointer:(id)aSrc; - (FooLink *)danglingPOSIX; @end // -- FooProblem.m @implementation FooProblem - (IBAction) demoDanglingPointer:(id)aSrc { FooLink *tTest; tTest = [self danglingPOSIX]; // ...do something else } - (FooLink *)danglingPOSIX { FooLink *tFoo; // initialise the output result tFoo = nil; { FooLink *tBar; tBar = malloc(sizeof(FooLink)); tBar->fFoo = "foobar"; tBar->fBar = 1234; tFoo = tBar; //... do something else free(tBar); } // return the string result return (tFoo); } @end
The next type of memory problem is the double free. This occurs when a code routine tries to dispose of an object or structure that has already been disposed of. Disposal need not happen in succession, so long as it affects the same pointer. A double free also leads to a segmentation fault, followed by a crash.
Listing Three is a classic example of a double free. The action method demoDoubleFree: creates an instance of FooLink and populates its fields (lines 3-5). It sends the instance to the method doubleFreePOSIX:, which updates the two fields (lines 14-15). But then doubleFreePOSIX: disposes of the FooLink instance with a call to free() (line 18). When control returns to demoDoubleFree:, that method also disposes of the same FooLink structure using free() (line 8).
Listing Three
- (IBAction) demoDoubleFree:(id)aSrc
{
    tFoo = malloc(sizeof(FooLink));
    tFoo->fFoo = "Foobar";
    tFoo->fBar = 12345;
    [self doubleFreePOSIX:tFoo];
    free(tFoo);
}

- (void)doubleFreePOSIX:(FooLink *)aFoo
{
    //...do something else
    aFoo->fFoo = "BarFoo";
    aFoo->fBar = aFoo->fBar + 123;
    //... do something else
    free(aFoo);
}
Listing Four shows another double free example. The action method demoDoubleFree: creates and adds an NSNumber instance to the mutable array property objcArray (lines 23-24). Then it invokes the method doubleFreeObjC. This method parses the array property and uses its entries to create an NSString object (lines 39-40). Later, it sends a release message to each entry (line 44). If the entry is the NSNumber object, a double free error occurs. This is because the NSNumber object was marked for autorelease. An explicit release interferes with the autorelease pool's attempt to dispose of the object.
Listing Four
// -- FooProblem.h @interface FooProblem : NSObject { // -- properties NSMutableArray *objcArray; } // -- methods:demo:actions - (IBAction) demoDoubleFree:(id)aSrc; // -- methods:demo:utilities - (void)doubleFreeObjC; @end // -- FooProblem.m @implementation FooProblem // ...truncated for length - (IBAction) demoDoubleFree:(id)aSrc { NSNumber *tNum; tNum = [NSNumber numberWithLong:rand()]; [objcArray addObject:tNum]; //...do something else [self doubleFreeObjC]; } - (void)doubleFreeObjC { NSString *tText; id tObj; tText = [NSString string]; for (tObj in objcArray) { tText = [tText stringByAppendingFormat:@"/@", tObj]; //...do something else [tObj release]; } } @end
A memory leak is another typical memory problem. It occurs where a routine fails to dispose of its objects or structures. The failure may be due to an error or exception, or it may be due to poor code design. A continuous memory leak can lead to a low-memory condition. At best, it could cause iOS to terminate the offending app. At worst, it could force users to perform a hard reset.
Listing Five is one example of a memory leak. This variant of FooProblem creates an empty NSMutableArray instance and assigns it to the property objcArray (lines 17-18). Its dealloc method, however, does not send a release message to objcArray. The result is, of course, a memory leak, even if the array object remains empty.
Listing Five
// -- FooProblem.h
@interface FooProblem : NSObject
{
    // -- properties
    NSMutableArray *objcArray;
}
@end

// -- FooProblem.m
@implementation FooProblem

- (id)init
{
    if (self = [super init])
    {
        // prepare the following properties
        objcArray = [[NSMutableArray alloc] initWithCapacity:1];
        return (self);
    }
    else
        // the parent has failed to initialise
        return (nil);
}

//...other methods go here

- (void)dealloc
{
    //...do something else
    // pass the message to the parent
    [super dealloc];
}
@end
08 February 2013 18:00 [Source: ICIS news]
HOUSTON (ICIS)--Here is Friday’s midday
CRUDE: Mar WTI: $96.13/bbl, up 30 cents; Mar Brent: $118.92/bbl, up $1.68
NYMEX WTI crude futures rose, tracking a rally in global stock markets in response to a string of upbeat economic indicators. Strong Chinese export data and a rise in oil imports signalled a steady rise in economic activity. Brent continued to outperform its American counterpart as the market remained sensitive to the risk of wider supply disruptions. WTI topped out at $96.57/bbl before retreating,
RBOB: Mar: $3.0608/gal, up 6.09 cents/gal
Reformulated blendstock for oxygen blending (RBOB) gasoline futures prices rebounded after falling on Thursday. Higher crude futures gave support.
NATURAL GAS Mar: $3.290/MMBtu, up 0.5 cent
NYMEX natural gas futures edged upward through Friday’s morning session, boosted by improving near-term demand prospects as winter storms sweep through the
ETHANE: higher at 25 cents/gal
Ethane spot prices traded slightly higher as it continued to track energy commodities.
AROMATICS: benzene wider at $4.75-4.81/gal
Prompt benzene spot prices moved to a slightly wider range early in the day, sources said. Bids were flat, while offers were up by a penny compared with $4.75-4.80/gal FOB (free on board) the previous session.
OLEFINS: ethylene steady at 62.5-63.0, RGP higher at 73.5 cents/lb
February ethylene bids and offers were absent in the market, while bids for March ethylene were heard at 61.0 cents/lb, lower than deals done the previous day at 62.5 | http://www.icis.com/Articles/2013/02/08/9639567/noon-snapshot-americas-market-summary.html | CC-MAIN-2014-35 | en | refinedweb |
Use cases
To route audio from loudspeaker to earpiece/headset.
To route audio from earpiece to loudspeaker(even when headset is connected).
Example code
Header files
#include <AudioOutput.h> // for audio routing control
#include <MAudioOutputObserver.h> // for audio routing observers
#include <MdaAudioSamplePlayer.h> // for playing audio
Libraries Used
audiooutputrouting.lib // for routing audio
mediaclientaudio.lib // for playing audio
When we play music files using CMdaAudioPlayerUtility class, the audio will be routed to loudspeaker by default. When the headsets are connected, the audio is routed from loudspeaker to headsets by default.
The CAudioOutput class in AudioOutput.h is used by an audio application to inform the Audio subsystem where the audio needs to be routed for output when playing. This class should only be used if the default audio routing is not sufficient for the audio client.
The following example code shows how to control audio routing:
Play audio file with a CMdaAudioPlayerUtility instance. Playing audio files using CMdaAudioPlayerUtility is explained here.
The audio is played to the loudspeaker by default if headsets are not connected. Once the audio file has started playing, you can create the CAudioOutput instance (by passing the player utility to it).
RegisterObserverL() allows clients to register to be notified when the default audio output changes; when the specified observer no longer wants to be updated, call UnregisterObserver().
Then call SetAudioOutputL(), passing EPrivate as the TAudioOutputPreference parameter, to route audio to the earpiece.
// To route audio to private speaker(earpiece)
CAudioOutput::TAudioOutputPreference myOutputPref = CAudioOutput::EPrivate;
iMySound->SetRoutingL(myOutputPref);
void CMySound::SetRoutingL(CAudioOutput::TAudioOutputPreference& aAudioOutput)
{
iAudioOutput = CAudioOutput::NewL(*iMyAudioPlayerUtility);
iAudioOutput->RegisterObserverL(*this);
iAudioOutput->SetAudioOutputL(aAudioOutput);
}
// callback function
void CMySound::DefaultAudioOutputChanged( CAudioOutput& aAudioOutput,
CAudioOutput::TAudioOutputPreference NewDefault )
{
// Audio routing changed, write your code here
CEikonEnv::InfoWinL(_L("In Callback function"),_L("AudioOutput Routed"));
}
// To route audio to public speaker(loudspeaker),even when headset is connected
CAudioOutput::TAudioOutputPreference myOutputPref = CAudioOutput::EPublic;
iMySound->SetRoutingL(myOutputPref);
// To route audio to both speakers(loudspeaker & earpiece)
CAudioOutput::TAudioOutputPreference myOutputPref = CAudioOutput::EAll;
iMySound->SetRoutingL(myOutputPref);
Following are the various TAudioOutputPreference enums defined in AudioOutput.h, which can be passed to the SetAudioOutputL() API:
The output audio can be routed as desired by choosing the proper TAudioOutputPreference parameter (as per your routing option) for the SetAudioOutputL() API.
enum TAudioOutputPreference
{
ENoPreference, /// Used to indicate that the playing audio can be routed to
/// any speaker. This is the default value for audio.
EAll, /// Used to indicate that the playing audio should be routed
/// to all speakers.
ENoOutput, /// Used to indicate that the playing audio should not be routed
/// to any output.
EPrivate, /// Used to indicate that the playing audio should be routed to
/// the default private speaker. A private speaker is one that
/// can only be heard by one person.
EPublic /// Used to indicate that the playing audio should be routed to
/// the default public speaker. A public speaker is one that can
/// be heard by multiple people.
};
Example project
File:Control Audio Routing.zip | http://developer.nokia.com/community/wiki/index.php?title=Audio_Routing_API&oldid=130913 | CC-MAIN-2014-35 | en | refinedweb |
Issues
ZF-10639: i18n Issues in the official docs..
Description… << Headline contains htmlentities ( "& auml ;" )
I know it's trivial, but it also comes up on Google ("zend framework namespace") ...
Greetings, "hadean" (IRC)
Posted by Matthew Weier O'Phinney (matthew) on 2010-11-08T11:54:26.000+0000
Fixed on the site, and for upcoming releases. | http://framework.zend.com/issues/browse/ZF-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel | CC-MAIN-2014-35 | en | refinedweb |
At this point in the series, we have covered the most important infrastructure components; however, our business domain consists of a single entity, which doesn't help to explain how to resolve some common scenarios when designing parent-child relationships across different application layers. In this chapter, we introduce a new entity to the model so we can describe how the above-mentioned cases might be resolved.
At the end of this chapter, in the appendix section, we also discuss the following topics:
Up to this point, our model consisted of a single class: Customer. We are adding a new entity named Address; this is a simple entity that contains customer address details:
public class Address :EntityBase
{
01 protected Address(){}
02 public virtual Customer Customer { get; private set; }
public virtual string Street { get; private set; }
public virtual string City { get; private set; }
public virtual string PostCode { get; private set; }
public virtual string Country { get; private set; }
03 public static Address Create(IRepositoryLocator locator, AddressDto operation)
{
var customer = locator.GetById<Customer>(operation.CustomerId);
var instance = new Address
{
...
};
locator.Save(instance);
return instance;
}
public virtual void Update(IRepositoryLocator locator, AddressDto operation)
{
UpdateValidate(locator, operation);
...
locator.Update(this);
}
private void UpdateValidate(IRepositoryLocator locator, AddressDto operation)
{
return;
}
}
The entity holds a reference to its Customer (line 02), so we have a one-to-many relationship. As we did with the Customer class, we hide the constructor (line 01) so that the Create static method (line 03) must be invoked when a new instance is required.
The Customer class needs some refactoring to accommodate the new Address class. A couple of important points: the Customer class will be responsible for the creation and deletion of Address instances, and the collection of addresses is not exposed directly, to ensure the collection is well managed; see also how ISet needs to be used because of NHibernate:
01 public virtual ReadOnlyCollection<Address> Addresses()
{
if (AddressSet == null) return null;
return new ReadOnlyCollection<Address>(AddressSet.ToArray());
}
02 public virtual Address AddAddress(IRepositoryLocator locator,
AddressDto operation)
{
AddAddressValidate(locator, operation);
var address = Address.Create(locator, operation);
AddressSet.Add(address);
return address;
}
03 public virtual void DeleteAddress(IRepositoryLocator locator, long addressId)
{
DeleteAddressValidate(locator, addressId);
var address = AddressSet.Single(a => a.Id.Equals(addressId));
AddressSet.Remove(address);
locator.Remove(address);
}
Instead, the collection is exposed by cloning the collection into a ReadOnlyCollection (line 01). If a new address needs to be added to the customer, the AddAddress method must be used (line 02); the same applies when an address is to be removed (line 03).
As a result of the above changes, the domain model is as follows:
The following changes need to be added to the NHibernate mapping file:
<hibernate-mapping
<class name="Customer" table="Customer">
...
01 <set name ="AddressSet" fetch="subselect">
<key column="Customer_ID"
foreign-
<one-to-many
</set>
</class>
<class name="Address" table="Address">
<id name="Id" type="Int64" unsaved-
<generator class="native" />
</id>
02 <many-to-one
<property name="Street" length="50" not-
<property name="City" length="50" not-
<property name="PostCode" length="10" not-
<property name="Country" length="50" not-
</class>
</hibernate-mapping>
In the Customer mapping, the private AddressSet collection is declared as a one-to-many collection of Address instances; we indicate that the Customer_ID field in the Address table is used as the link (line 01). In the Address mapping section, we also declare the Customer reference to use the same column name (line 02). This approach permits to navigate from the child back to the parent.
Let's demonstrate how easy it is to propagate the changes to our database; if we create a new test:
and the configuration is set so the test is run using the NHibernate mode, then the test will generate the new schema for us, isn't that nice? Just remember to change the test App.config file:
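For illustration, a test of roughly this shape is enough to trigger the schema generation; the DTO field names, the Locator helper and the fixture plumbing shown here are assumptions rather than the article's actual test code:

using NUnit.Framework;

[TestFixture]
public class AddressPersistenceTests
{
    [Test]
    public void Can_add_an_address_to_a_customer()
    {
        // Locator is assumed to be an IRepositoryLocator supplied by the test base class
        var customer = Customer.Create(Locator,
            new CustomerDto { FirstName = "John", LastName = "Doe" });

        customer.AddAddress(Locator, new AddressDto
        {
            CustomerId = customer.Id,
            Street = "1 High Street",
            City = "London",
            PostCode = "N1 1AA",
            Country = "UK"
        });

        Assert.AreEqual(1, customer.Addresses().Count);
    }
}

With the test App.config pointing Spring at the NHibernate configuration, running any such repository-backed test regenerates the Customer and Address tables.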
You may want to open a connection to the database to see the new schema:
We are planning to modify the user interface so the following screens will be available:
We need to provide a new service so we can create, retrieve, and update an Address instance:
Adding a new service requires the following:
IContractLocator
AddressServiceProxy
AddressServiceAdapter
AddressWcfService
The implementation of the above classes is straightforward as they are in fact very similar to the implementations for the Customer service; you may want to get the source code for further details.
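As a rough sketch of the shape those classes take (the operation names, the AddressDto.Id property and the use of AutoMapper here are assumptions based on the Customer service pattern, not the actual source):

[ServiceContract]
public interface IAddressService
{
    [OperationContract]
    AddressDto CreateAddress(AddressDto address);

    [OperationContract]
    AddressDto UpdateAddress(AddressDto address);

    [OperationContract]
    void DeleteAddress(long customerId, long addressId);
}

public class AddressServiceAdapter : IAddressService
{
    // the IRepositoryLocator is assumed to be injected by the container
    private readonly IRepositoryLocator Locator;

    public AddressServiceAdapter(IRepositoryLocator locator) { Locator = locator; }

    public AddressDto CreateAddress(AddressDto address)
    {
        var customer = Locator.GetById<Customer>(address.CustomerId);
        var created = customer.AddAddress(Locator, address);
        return Mapper.Map<Address, AddressDto>(created);
    }

    public AddressDto UpdateAddress(AddressDto address)
    {
        var entity = Locator.GetById<Address>(address.Id);
        entity.Update(Locator, address);
        return Mapper.Map<Address, AddressDto>(entity);
    }

    public void DeleteAddress(long customerId, long addressId)
    {
        Locator.GetById<Customer>(customerId).DeleteAddress(Locator, addressId);
    }
}

The AddressServiceProxy on the client side simply wraps the WCF channel behind the same IAddressService interface, mirroring the Customer proxy.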
On the server side, we need to amend eDirectory.WcfService to add the new Address service to the list of endpoints:
<configuration>
...
<system.serviceModel>
<services>
...
<service name="eDirectory.WcfService.AddressWcfService"
behaviorConfiguration="eDirectory.WcfServiceBehaviour">
<endpoint address="AddressServices" binding="basicHttpBinding"
bindingConfiguration="eDirectoryBasicHttpEndpointBinding"
contract="eDirectory.Common.ServiceContract.IAddressService" />
</service>
</services>
...
</system.serviceModel>
...
</configuration>
Besides implementing the new classes AddressServiceAdapter and AddressServiceProxy, we have added a new set of Views with their respective Models and ViewModels:
Among the model classes, the one that needs to be mentioned is AgendaModel:
class AgendaModel
{
public IList<CustomerDto> CustomerList { get; set; }
public CustomerDto SelectedCustomer { get; set; }
public AddressDto SelectedAddress { get; set; }
}
Notice that the model provides class holders for the selected grid rows; this works in both ways, which is very nice. The only thing to do in the View is to set the binding correctly:
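As a rough illustration (the control type and the binding paths are assumptions; the point is the two-way binding of the selected item):

<ListView ItemsSource="{Binding Model.CustomerList}"
          SelectedItem="{Binding Model.SelectedCustomer, Mode=TwoWay}" />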
It may not be obvious, but when the list of clients is retrieved from the server, each customer DTO contains a collection of addresses. You may implement a more chatty design where the address collection is only retrieved when the customer is selected. Also, the Customer reference in the Address class translates into the DTO implementation in storing the CustomerId instead; if you don't take this approach, the serialization of your DTOs would be a nightmare, to say the least:
There is another interesting aspect on the AgendaViewModel, that is the way we manage the action buttons using the RelayCommand class. In this case, if a customer instance contains an address, the user needs to delete all addresses before the Delete button for the customer is enabled. This is achieved easily by implementing a predicate in the RelayCommand constructor using the above mentioned selected holder:
private RelayCommand DeleteCustomerCommandInstance;
public RelayCommand DeleteCustomerCommand
{
get
{
if (DeleteCustomerCommandInstance != null)
return DeleteCustomerCommandInstance;
DeleteCustomerCommandInstance =
new RelayCommand(a => DeleteCustomer(Model.SelectedCustomer.Id),
p => Model.SelectedCustomer != null &&
Model.SelectedCustomer.Addresses.Count == 0);
return DeleteCustomerCommandInstance;
}
}
The XAML declaration is a piece of cake:
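A hypothetical sketch of that declaration (content and layout are assumptions; the command binding is the relevant part):

<Button Content="Delete"
        Command="{Binding DeleteCustomerCommand}" />

Because RelayCommand surfaces the predicate through ICommand.CanExecute, WPF enables and disables the button automatically as the selected customer and its address count change.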
Another aspect implemented is something that we have not had a chance to see before; this is how the ViewModel and Services use the selected customer DTO to enhance the user experience; for example, when a new customer instance is created, we need to ensure that the new customer instance is the one selected in the grid once the user is back to the main screen. We resolve this requirement as follows:
public RelayCommand CreateCustomerCommand
{
get
{
if (CreateCustomerCmdInstance != null)
return CreateCustomerCmdInstance;
01 CreateCustomerCmdInstance =
new RelayCommand(a => OpenCustomerDetail(null));
return CreateCustomerCmdInstance;
}
}
private void OpenCustomerDetail(CustomerDto customerDto)
{
var customerDetailViewModel = new CustomerDetailViewModel(customerDto);
02 var result = customerDetailViewModel.ShowDialog();
03 if (result.HasValue && result.Value)
Model.SelectedCustomer = customerDetailViewModel.Model.Customer;
04 Refresh();
}
private void Refresh()
{
long? customerId = Model !=null && Model.SelectedCustomer != null ?
Model.SelectedCustomer.Id : (long?) null;
long? addressId = Model != null && Model.SelectedAddress != null ?
Model.SelectedAddress.Id : (long?)null;
var result = CustomerServiceInstance.FindAll();
Model = new AgendaModel { CustomerList = result.Customers };
if(customerId.HasValue)
{
05 Model.SelectedCustomer =
Model.CustomerList.FirstOrDefault(c => c.Id.Equals(customerId));
...
}
RaisePropertyChanged(() => Model);
}
There is a little bit of code above, but bear with us for a second; the CreateCustomerCommand delegates to the OpenCustomerDetail method (line 01), this method calls the customer detail screen and if a new customer instance is created, it sets the SelectedCustomer property in the Model (lines 02 and 03). Then the Refresh method is called which invokes the CustomerServiceInstance.FindAll() and sets the Model.SelectedCustomer (line 05) to the value it had before the service was called.
Parent-child relationships are common in all applications; in this chapter, we discussed how relatively easy it is to implement those across all our application layers. We have discussed how to model our entities so collections are well managed. In summary, the parent is fully responsible for the creation and deletion of child instances. It is a good example of how our entities are moving away from just being simple CRUD data classes to more complex entities that implement business behavior.
We also discussed the NHibernate implementation and how easy it is at this point of the project creating new tests that automatically manage the new database schema, an aspect that proves to be invaluable. We also covered some MVVM techniques to leverage some common scenarios on the client side, like enable/disable action buttons using the predicates on the RelayCommand; once more, it was demonstrated how much value can be achieved by providing a rich model implementation to the XAML Views, reducing the amount of code-behind as a result of the XAML binding capabilities.
In the next chapter, we will discuss how easy it is to deploy our application to Microsoft Azure.
For those that are new to the series or those who are not sure yet how to get the eDirectory application running, the following section describes the steps to quickly get the application running. eDirectory is an application that can be run in-memory or against a SQL Server database using an NHibernate repository implementation. Here, we discuss how to get the client running in a very easy manner: in-process in-memory mode.
In the first place, you need to verify that the client App.Config is set properly so SpringConfigFile is set to use the InMemoryConfiguration.xml file:
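The setting is a plain appSettings entry along these lines (the exact value and path are assumptions):

<appSettings>
    <add key="SpringConfigFile" value="InMemoryConfiguration.xml" />
</appSettings>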
Ensure that the eDirectory.WPF application is set to be the start up one:
Change the configuration to the in-memory instance in Visual Studio:
Now the only thing you need is to start the application: F5 or CTRL+F5:
There are couple things done in this chapter on the WPF side that are worth a brief discussion. WCF by default terminates the client application when the first View that was created is closed. In this version of eDirectory, it is required to ask the user which View must be open. Once the user presses the OK button, the original screen must be closed; if nothing is done, the application terminates at that point. An easy way of stopping this behavior is to indicate to WPF that the application itself will look after its shutdown:
public partial class App : Application
{
public App()
{
01 ShutdownMode = ShutdownMode.OnExplicitShutdown;
}
private void BootStrapper(object sender, StartupEventArgs e)
{
var boot = new eDirectoryBootStrapper();
boot.Run();
02 Shutdown();
}
}
When the App instance is created, it is indicated that the application will shutdown manually (line 01), which takes place after the Run method returns (line 02).
The second beauty is a customized enum converter that is used by the Selector View that permits matching a radio-button to a specific enum value. The converter is:
public class EnumMatchToBooleanConverter : IValueConverter
{
01 public object Convert(object value, Type targetType,
object parameter, CultureInfo culture)
{
if (value == null || parameter == null) return false;
string checkValue = value.ToString();
string targetValue = parameter.ToString();
return checkValue.Equals(targetValue,
StringComparison.InvariantCultureIgnoreCase);
}
02 public object ConvertBack(object value, Type targetType,
object parameter, CultureInfo culture)
{
if (value == null || parameter == null) return null;
bool useValue = (bool)value;
string targetValue = parameter.ToString();
return useValue ? Enum.Parse(targetType, targetValue) : null;
}
}
The Convert method is used to see if the radio-button must be set given an enumeration value; the method assumes that the radio-button is to be set if the parameter matches the passed value. ConvertBack returns null if the radio-button is not set; if it is set, it returns the enum value set in the XAML.
The XAML is as follows:
The converter is declared as a resource named enumConverter and then used in the radio-button declaration; an enum value is assigned to each; CurrentOption is a ViewTypeEnum property declared on the ViewModel that is correctly set without any other additional code. Nice!
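A sketch of that markup (the namespace prefix, enum values and control content are assumptions):

<UserControl.Resources>
    <converters:EnumMatchToBooleanConverter x:
</UserControl.Resources>

<RadioButton Content="Customer maintenance"
             IsChecked="{Binding CurrentOption, Mode=TwoWay,
                         Converter={StaticResource enumConverter},
                         ConverterParameter=Customer}" />
<RadioButton Content="Agenda"
             IsChecked="{Binding CurrentOption, Mode=TwoWay,
                         Converter={StaticResource enumConverter},
                         ConverterParameter=Agenda}" />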
In this chapter, we decided to introduce AutoMapper. This is an object-to-object mapper, and it is ideal for use when dealing with domain entities and DTOs. You may want to have a look at the CodePlex project for further details.
It is quite easy to use AutoMapper. In the first place, we create the mappings, then we install them and then the mappings can be used. In the eDirectory.Domain project, a new class is added that declares the mappings:
Two mappings are defined, the mapping from Customer to CustomerDto is the interesting one. This one maps the DTO Addresses collection to a function that delegates into the other AutoMapper mapping to map the Addresses collection in the entity to a collection of AddressDto instances.
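A sketch of what that mapping class might contain (the class name and the exact member configuration are assumptions):

public static class DtoMappings
{
    public static void Install()
    {
        Mapper.CreateMap<Address, AddressDto>()
              .ForMember(d => d.CustomerId, o => o.MapFrom(s => s.Customer.Id));

        // the Addresses member is converted by the Address -> AddressDto map declared above
        Mapper.CreateMap<Customer, CustomerDto>()
              .ForMember(d => d.Addresses, o => o.MapFrom(s => s.Addresses()));
    }
}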
Then when the WCF service is started, the static Install method is invoked:
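For example, a call along these lines at host start-up is all that is needed (the Global.asax hook and the DtoMappings class name are assumptions; a self-hosted service would make the same call before opening the ServiceHost):

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // install the AutoMapper maps once, before the first service call needs them
        DtoMappings.Install();
    }
}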
You can also leverage the Spring.Net capabilities to initialise the static method by just declaring the class in the configuration file; this is the approach used when we execute the application in in-memory mode; this is another nice example of the Spring.Net capabilities:
An example of how the eDirectory solution uses the AutoMapper mapping is found in the Customer service. | http://www.codeproject.com/Articles/137791/WCF-by-Example-Chapter-XIII-Business-Domain-Parent?fid=1601525&df=90&mpp=25&sort=Position&spc=None&tid=3726631 | CC-MAIN-2014-35 | en | refinedweb |
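By way of illustration (the method name and the Locator field are assumptions), once the maps are installed the Customer service adapter can convert the whole aggregate, addresses included, with a single call:

public CustomerDto GetCustomer(long customerId)
{
    var customer = Locator.GetById<Customer>(customerId);
    // the Customer -> CustomerDto map also converts the Addresses collection via the nested map
    return Mapper.Map<Customer, CustomerDto>(customer);
}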
, = getContents >>= print . maximum . prods . input:
nums = ... -- put the numbers in a list

problem_13 = take 10 . show . sum $ nums
4]]
Alternate solution, illustrating use of strict folding:
import Data.List

problem_14 = j 1000000
  where
    f :: Int -> Integer -> Int
    f k 1 = k
    f k n = f (k+1) $ if even n then div n 2 else 3*n + 1
    g x y = if snd x < snd y then y else x
    h x n = g x (n, f 1 n)
    j n = fst $ foldl' h (1,1) [2..n-1]
Faster solution, using an Array to memoize length of sequences :
import Data.Array
import Data.List

syrs n = a
  where
    a = listArray (1,n) $ 0:[1 + syr n x | x <- [2..n]]
    syr n x = if x' <= n then a ! x' else 1 + syr n x'
      where x' = if even x then x `div` 2 else 3 * x + 1

main = print $ $ 2^1000
7
10 ] | http://www.haskell.org/haskellwiki/index.php?title=Euler_problems/11_to_20&oldid=15946 | CC-MAIN-2014-35 | en | refinedweb |
13 April 2012 08:17 [Source: ICIS news]
SINGAPORE (ICIS)--
The plant was slated to resume operations on 26 March following the completion of a planned maintenance but this was delayed in view of the temporary suspension of operations at its upstream ethylene plant at the
The company operates a two-line ethylene plant with a total production capacity of about 695,000 tonnes/year.
“We can produce some quantity of VAM for April but we can’t confirm the production volume because of the limitation of ethylene supply,” he said.
It is not known how long the low operating rates at the VAM plant will be maintained, but there has been no need to import VAM to make up for the shortfall in output, he added.
The VAM plant was previously operating at 80% of capacity amid slow market conditions before it was taken offline for maintenance on 17 February.
Other major VAM producers in the Asian market include US | http://www.icis.com/Articles/2012/04/13/9549957/japans-showa-denko-operating-oita-vam-plant-at-low-rates.html | CC-MAIN-2014-35 | en | refinedweb |
Agenda
See also: IRC log
[Agenda planning. . .]
NM: Let's try issue HttpRedirections-57
JR:
... Going through the history---first two points are the origin of this
... 1) 303s aren't supposed to be cached -- bug in 2616 -- fixed in HTTPbis
DC: Let's endorse that fix
LM: Not sure about that -- not prepared to endorse -- abstain
NM: This becomes relevant because we encouraged people to use 303
JR: Any reason not cache 303 responses?
LM: No
NM: draft RESOLUTION: TAG endorses the proposed change to HTTPbis to allow caching of 303 responses
DC: Specific proposal is where?
<jar>
<DanC_> is this OK? "A 303 response SHOULD NOT be cached unless it is indicated as
<DanC_> cacheable by Cache-Control or Expires header fields."
JR: This is different from 307. . .
DC: I think the HTTP spec. is usually neutral wrt caching
JR: OK, we need to explore this further -- the difference from 307 is worrying
<noahm> I heard DC say HTTP was neutral in the absence of cache-control or expires header
<DanC_> ACTION: jonathan to research 303 caching change in HTTPbis [recorded in]
<trackbot> Created ACTION-347 - Research 303 caching change in HTTPbis [on Jonathan Rees - due 2009-12-16].
JR: Sub-issue 2) There's a need
for a non-3xx response, in order that the original URI stays in
the status bar
... Unlike 302, 303 or 307, where the target goes in the address bar
<DanC_> (researching the bug...)
JR: This is described as a security concern
<DanC_> (many/most purl users want the purl bookmarked, not the redirected addressed)
TBL: But we really don't want that for e.g. 307, because it's only a temporary redirect, so people shouldn't e.g. bookmark it
LM: The single result display in
the address bar is insufficient for what we want to tell the
user
... Doing UI design is inappropriate for us. . .
JR: I agree, that's why I want to lose this part of the issue
LM: The principle we can endorse
is that the URI you see should be a URI you can use to get you
what you see
... Going further to say it should be a long-term, bookmarkable, etc. URI is a bit fuzzier
NM: WebArch says use one URI for
a resource
... even when they're not going away, it can be a problem, for example when example.com redirects to example-1.com or example-2.com for load balancing
JR: What should I do
<jar> For all practical purposes it's impossible to get a purl.org URI into your bookmarks list
DC: Let's find out why Mozilla decline to address the PURL folks' request to fix this, so that you could bookmark PURLs
TBL: Flight of fancy on 303x, 303y, 303z. . .
<DanC_> "304622 min -- All nobody RESO INVA Adding a live bookmark via feedview uses the location of the feed rather than the location given in the referring page's link element; redirects, PURLs don't work "
<DanC_> maybe this is the bug
<noahm> proposed ACTION: Jonathan to research reasons why browser providers (e.g. Mozilla) aren't willing to meet requests (e.g. from purl) to switch address bar URL following successful redirect
<noahm> ACTION: Jonathan to research reasons why browser providers (e.g. Mozilla) aren't willing to meet requests (e.g. from purl) to switch address bar URL following successful redirect [recorded in]
<trackbot> Created ACTION-348 - Research reasons why browser providers (e.g. Mozilla) aren't willing to meet requests (e.g. from purl) to switch address bar URL following successful redirect [on Jonathan Rees - due 2009-12-16].
<jar> or to not switch
JR: 3) Rhys Lewis was working on
a finding wrt httpRange-14, but that work stopped when the SWEO
note Cool URIs for the SemWeb was published
... I think that work should be picked up and made into a finding
... which would replace/elaborate the email message which currently stands as the resolution of httpRange-14
... That was the context for ISSUE-57 at its inception
... Additional points that have been added, are my points 4--6
... Latest news: AWWSW task force has reported:
... A number of forms for this work, of which I'm the main editor
... helped along by our discussion at the last f2f
... A lot of text to introduce one key definition:
... for the phrase "corresponds to", which comes from the definition of the 200 response code, in 2616 and HTTPbis
LM: I wouldn't take this too seriously -- we didn't when we wrote it
JR: We agree entirely. It's the practice which matters to actually pin this down
LM: I note that this story works/should work pretty much for ftp: as well
JR: Wrt WebArch, 'representation'
corresponds to 'entity' or 'content entity'
... and 'represents' corresponds to 'corresponds to'
<DanC_> LMM: the HTML spec uses 'resource' for what HTTP calls entity. I filed a bug; we'll see...
LM: Note that the correspondence is at a particular instant
JR: Yes, at a particular time
LM: And in a particular context
JR: It's hard to pare things down
to the point where we could focus
... So there's now a bunch of stuff which has been moved off the table
... Section HTTP Exchanges summarizes what we all know about GET requests
DC: hmm... in pt 5, "preferably"? the server decides which resource the name refers to...
JAR: but an intermediary might get confused
DC: ah... "preferably" makes more sense for intermediaries
TBL: 304? 307?
JR: Yes, step 6 pbly should be
clarified wrt responses other than 200
... [works through the RDF formalization]
TBL: Why did you avoid 'representation'
JR: Because people objected to giving a URI to something called 'representation' a URI
TBL: All I was concerned is to distinguish the original resource, identified by its URI, and the 'resource' which is some representation of that resource, which also may have a URI, but is not the same
JR: Right
... correspondence is a 4-place rel'n between resource, a content entity, a start time and an end time
HST: Context is richer than just time
LM: Accept headers
TBL: But there's still something core
JR: I try to work breadth first
HST: I didn't mean Accept Headers, but rather deixis, e.g.
DC: or
JR: On to section "What this semantics is careful not to say"
<masinter>
<masinter> vs
LM: Server response is a speech act
JR: Precisely -- let's look at
some more recent slides
... How do you prove correctness of an HTTP proxy, cache, API or theory
<DanC_> Potatoes don't say anything
<DanC_> bug in "Content negotiation" slide: speaks_for should be corresponds_to
slide21 should have corresponds_to instead of speaks_for in conneg slide (21?)
<jar> TOPLAS 1993 ?
<DanC_> (I think of it as BAN logic)
JR: Now make use of Abadi,
Burrows, Lampson and Plotkin logic (ABLP)
... originally for crypto
... and access control
<DanC_> (a larch formalization based on a 1989 SRC Research Report )
LM: What's good about this is precisely that it qualifies everything with the principal who/which/that says it
JR: Crucial observation -- HTTP defines corresponds_to as follows:
"example.com controls { corresponds_to E}"
JR: The domain of "says" is
principals, Non-principals don't say anything
... Not all resources are principals
NM: Break for 15 minutes
<jar> There are two versions of ABLP, the DEC SRC TR from 1991, and the TOPLAS paper from 93 or 94
<jar> not to be confused with the earlier BAN paper from 1990, which overlaps in content
NM: Resumed
JR: [Gets to slide 12, reconstruction of httpRange-14]
NM: So this is stronger than the original conclusion?
JR: Yes
... The original 'resolution' simply constrained the range of the corresponds_to relation
... but it didn't actually address the original problem
NM: Elaborating the "image conneg example": URI identifies a photo. Conneg used to retrieve either jpeg or gif. They agree up to a point in conveying the photo, but not completely, does the theory allow/explain that?
JR: This theory as it stands isn't articulated enough to determine the relationship between corresponds_to and speaks_for
NM: Good progress here, wrt
httpRange-14
... Note that we're OK, mostly, when we ask for, say, the Declaration of Independence, and what we get back has some advertising in a sidebar
... and I think this can address that
LM: I think this is very good stuff. I hope we can use it to clarify what is meant by Origin
LM: The whole CORS, confused deputy, etc. debate is hampered by a lack of clear definition of precisely this kind of thing: what is an origin, a deputy, etc.
LM: Linking SemWeb and Security would be a great thing, possibly a win for both sides
NM: Great idea -- specific action?
DC: I'd like to write this up in a different editorial style
<timbl> Have we finished JAR's slide set?
JR: Sure
<timbl> ah
JR: Connects with CAPdesk, DARPA-funded DARPAbrowser
<noahm> The chair would very much like for Dan to propose an action for himself.
<DanC_> . ACTION Dan write up speaks_for applied to httpRedirections and httpRange using motivating examples
<noahm> Thank you!
<DanC_> ACTION Dan write up speaks_for applied to httpRedirections and httpRange using motivating examples
<trackbot> Created ACTION-349 - Write up speaks_for applied to httpRedirections and httpRange using motivating examples [on Dan Connolly - due 2009-12-16].
<johnk> Pointing out Miller et al's Horton paper:
<johnk> re: "delegating responsibility in digital systems"
<jar> JAR is babbling about Mark Miller's previous work: DARPAbrowser and CAPdesk (w.r.t our discussion of 307 and what's in the browser URI bar, etc. )
TBL: Slides done, can we try to
find a replacement for 'speaks_for'
... We have a URI, we get a 200
... Using 'speaks_for' as the relationship which relates content to the resource
... but if R is a person, the content can't 'speak_for' a person
<DanC_> contexts in which the term gets used "a secure channel from Bob speaks for bob"
TBL: that is, an entity speaking for the agent
<masinter> you get a 200 from a server, where the server speaks for the person
JR: In the old days we sent
letters, and my letter did 'speak_for' me
... No resource speaks for me, it doesn't say that
<DanC_> (it's clear to me that offline witing is going to be more efficient than group discussion, but if Tim has a clear example, I'm interested to capture it.)
<DanC_> i identifies Pat Hayes
<DanC_> 2. 200 from resource identified by i
Slide 9 appears to back Tim
<DanC_> conjecture: 200 response speaks for Pat
HST: Stipulate that we have a URI
for Pat Hayes
... Then your slides appear to say that if I get a ContentEntity from GETting that URI
... that it a) corresponds_to Pat and therefore, per the 'Controversial Axiom', that it speaks_for Pat
JAR: would give us a reason to ask Pat not to assert such things, because it breaks our theory
JR: Ah -- the ContAx isn't
licensed by any existing spec.
... I think it's useful to explain a lot of WebArch
TBL: So if it is, we have a reductio wrt Pat saying what he says about that URI
<DanC_> phpht
JR: Oh, yes, and, the ContAx
should include server says that E speaks for R
... not E speaks for R directly
AM: Looking at R doesn't say any s, then E doesn't (mustn't) say any s
JR: This is meant just to be a restatement of the positive direction
AM: This says E's only role is to say what R says
JR: Yes, that's the ContAx
JAR: yes, advertising conflicts
DC: I'm getting useful input, not guaranteed to end up in the same place
LM: Please try to include Origin
DC: Not sure how, but I'll at least try.
HT: I think perhaps there are too many levels at which entities say things. It's clear to me that an XML document says some things, because of the semantics of XML. I.e. the infoset.
TBL: I dispute that it says those things.
DC: I understand both positions.
JAR: Me too.
HT: I'm being intentionally obtuse in part to get to talking about a 3rd party, which is the interpreter of the message. We often think of this as a human observing a screen, can also be listening to audio.
HT: It's that which ultimately says things.
JAR: Similar to the crypto case, in which the interpreters have to be part of the proof system.
<masinter> A potato says "help i'm a potato" ?
<DanC_> (the dispute between TBL and HT is issue ISSUE-28 fragmentInXML-28; odd that tracker considers it closed when it's plain that the TAG doesn't have consensus.)
TBL: When it's RDF, what it says is what the triples it produces say
<DanC_> (the resolution in tracker sides with Tim)
HT: Isn't that analagous to my statement that what an XML document "says" is first order the Infoset, and then 2nd order the interpretation of those.
TBL: No, I'm talking about the interpretation of the graph.
HT: Ah.
HT: What I [originally] scribed is wrong when I attributed to TBL "what it says is the triples it produces"; should have scribed "what it says is what the triples it produces say"
NM: good progress here, great
work JR
... DC is going to try to restate/elaborate
<DanC_> action-201?
<trackbot> ACTION-201 -- Jonathan Rees to report on status of AWWSW discussions -- due 2009-12-01 -- PENDINGREVIEW
<trackbot>
<DanC_> . action-201 due 15 Mar 2010
[procedural discussion]
<DanC_> action-201 due 15 Mar 2010
<trackbot> ACTION-201 Report on status of AWWSW discussions due date now 15 Mar 2010
TBL: I'd like to see some interaction with the Tabulator work
<DanC_> ACTION-116 due 31 Dec 2009
<trackbot> ACTION-116 Align the tabulator internal vocabulary with the vocabulary in the rules, getting changes to either as needed. due date now 31 Dec 2009
<noah> ACTION-201 Due 2 March 2010
<trackbot> ACTION-201 Report on status of AWWSW discussions due date now 2 March 2010
LM: Could we have used a Link Header in a 404 response?
JR: Yes
LM: But not a link in the body of 404 document itself?
DC: No
LM: But I like the idea of having links in the body, because you can have lots of them
<noah>
<noah> This is in relation to ACTION-303
AM: Doesn't this allow me to just support an earlier version?
<Zakim> noah, you wanted to talk about problems with >requiring< future proofing
HST: The 'earliest appropriate'
sentence is meant to rule that out.
... Maybe that needs to be stronger
NM: I have a long history of
interest in this
... I like this as a goal for many circumstances
... But there are cases where it doesn't work
... The XML 1.1 experience is illustrative in this case
... So we shouldn't require this kind of future-proofing of references
... Specifically in terms of systems which are involved in communication
<DanC_> +1 "should future-proof" is too strong. The simple case of citing a frozen spec is fine in many cases
<Zakim> johnk, you wanted to wonder whether it is confusing to combine conformance and referencing behaviour in one statement
<noah> Seeing where you're going, Henry, unless new editions >never< allow for new content, I think my concern stands.
JK: Conformant implementations? Should that be separated from what is referenced? Trying to pack too much in?
<noah> Or maybe I'm not guessing right as to what your concern/suggestion will be.
JK: How references are written is different from what is a conformant implementation
<Zakim> DanC_, you wanted to ask for a reminder of a specific case we're particularly interested in... it was somewhere in the HTML 5 references, yes?
DC: There was a specific case wrt the HTML 5
<masinter> think IETF tradition is to make the 'future proofing' more part of general policy than being specific in each draft. A1 references B1. When B2 updates B1, implementations of A1 may or may not follow B2
HT: As it stands, there are only stubs in the HTML 5 references.
DC:HT: No.
HT: Last I looked. E.g. following link from content-sniffing you got something that just said content sniffing.
<DanC_>
<noah> We pause to read HTML 5 references section....
HT: Ah, it's better than it was.
DC: So if we pushed on any of these, we would pbly find the editor would have a reason
HT: E.g. the text in the references says "[CSS] Cascading Style Sheets Level 2 Revision 1, B. Bos, T. Celik, I. Hickson, H. Lie. W3C, April 2009.", but links the undated copy.
HST: So what does it mean for an implementor? Specifically, implementors 5 years from now have to figure out what was meant. We're trying to fix that.
<Zakim> TBL, you wanted to point out that anyone using this language assumes there is a contract with future working groups to maintain the operability of the referencing spec, when
TBL: If you propose we use the
present and the future -- why not earlier ones?
... As for the future, that depends on the sort of WG and the sort of spec.
... If the group doesn't commit to back compatibility, you can't rely on it
<masinter> Is the distinction between "edition" and "version" important?
TBL: You might try to negotiate a
commitment from the WG that they won't change. . .
... Or you might just require people to check
<masinter> Can distinction between "technical specification" and "applicability statement" be useful? "applicability statement" calls out specific dated versions, while general "technical specification" doesn't? Two documents, one of which updates.
TBL: So it's not clear that we can go with what you propose
LM: I like the difference between
edition and version
... We used to differentiate between applicability statements and language specs.
... So you would only have to update the appl. statement
<Zakim> ht, you wanted to reply to Noah wrt editions vs. versions
LM: Alternatively, you could have policy outside the doc. altogether
NM: You haven't addressed my concern, because it wasn't lack of back-compat that broke the XML 1.1 situation
HT: The response to Noah and Tim is to say "yes, all those criticisms apply to unrestricted blank checks" (leaving aside for a sec refs to older versions), by relying on the W3C Policy for Edtions (stepping gently around XML 1.1/10 5th edition in particular), is precisely because it makes this plausible.
NM: Do new editions allow new content?
HT: Yes.
NM: Then I still have a problem. See problems deploying XML 1.0 5th edition. A sometimes inappropriate (depending on the specs) expectation is created that implementations that haven't been updated will support new content sourced by those that have been.
JR: Conformance to a spec. that has a variable in it is intrinsically vague
<Zakim> jar, you wanted to consider classes of comforming implementations (conforming to various combinations of specs)
JR: So there's a time-sensitivity wrt the answer to "does this conform?"
<Zakim> noah, you wanted to mention that there can be issues with 3rd party specs.
NM: TBL mentioned SOAP in passing
[AM leaves]
NM: SOAP wasn't sure about
supporting XML 1.1
... It depended on the Infoset, and we weren't sure that even if we went to XML 1.1, the Infoset would have been well-future-proofed enough for it all to hold together
... So in some ways, my willingness to future-proof my references depends on other specs also being well future-proofed
<Zakim> johnk, you wanted to ask how can we apply henry'd text to the specific issue noted?
HST: Yes, we have a real case of this with XML 1.0 5e and XML NS 3e
JK: Addressing dated prose in conjunction with an undated URI is separate from future-proofing?
LM: My assumption is that the dated ref. is normative
<jar> If dated spec A normatively cites undated spec B, and artifact Z conforms to A - what does that mean? Maybe: (1) it conforms to A(B(t)) for some t, or (2) it conforms to A(B(t)) for all t, or (3) if conforms to A(B(t)) for t >= now
DC: Hidden URIs are less significant
<DanC_> (editorially I like including the full, dated URI in a citation, but I much prefer using the document title as the link text.)
HST: Jonathan attempted
to answer John. I agree as far as it goes but want to go
further. You're right, I was trying to address two problems: 1)
dated vs. undated refs conflict, and BTW some peoples' styles
to make the URI explict...
... there are many variations on that 2) usually, all that people tend to say is by grouping into normative and non-normative. It's rare for the conformance section to clarify what is meant by making a reference normative.
<noah> FWIW, Dan, though it's clunky, I tend to feel that making both live links, to the same URI, is the least bad approach.
<jar> the normative reference speaks for the spec that refers to it
<DanC_> (oh... and I don't like "available at"; I consider the semantics "identified by", and I leave it implicit)
<DanC_>
<noah> Queue is open only for next steps discussion
DC: I asked the HTML 5 editor to
add 'work in progress' to links to documents which identify
themselves as work in progress
... The response was 'busywork'
NM: I don't think this can go further unless my concerns and maybe TBL's are addressed
<DanC_> (aha! found some work I did in this area: 'formally defining W3C's namespace change policy options w.r.t. recent TAG versioning terminology' )
JR: I thought restricting to editions was good enough
TBL: I had missed that HST meant to constrain to editions, that satisfies me
<noah> What I have in mind is something along the lines of:
<noah> The TAG believes that this is good practice in many cases, but not in all. We recognize that, particularly in cases where no assurance is given that future editions won't support use of new (I.e. previously invalid) content, the advice given here may be impractical.
<DanC_> I think the short para HT proposed is "too clever by half"; it'll only be an effective communication if it recapitulates critical parts of the edition policy
<DanC_> also, I want to make it clear that it's not the only "template" we endorse by providing more than one template; e.g. another one for really frozen, dated specs
<jar> whether in practice the "edition" process as specified and executed is sufficient to protect investment is something I'm not qualified to answer. it sounds as if it would be, as specified, if followed, but haven't checked...
<DanC_> close action-303
<trackbot> ACTION-303 Draft text on writing references closed
<DanC_> close action-304
<trackbot> ACTION-304 Write up issue around normative references to particular versions of specs closed
<scribe> ACTION: Henry to revise based on feedback on www-tag and the feedback from TAG f2f 2009-12-09 discussion [recorded in]
<trackbot> Created ACTION-350 - Revise based on feedback on www-tag and the feedback from TAG f2f 2009-12-09 discussion [on Henry S. Thompson - due 2009-12-16].
<johnk>
<DanC_> Miller et. al.
NM: Adjourned for lunch.
Note: the lines below, up to the announcement that the meeting is "resuming", are in response to informal requests that were made during breaks for information about certain recent Microsoft announcements. These were not discussed during the formal meeting sessions.
<timbl>
<noah> Tim, if you're interested in Microsoft's Dallas, it was introduced at their developer's conference a couple of weeks ago. You can go to the transcript of the keynote at and look for the word "Dallas". The video of the keynote, with demos, is at
<noah> You can use the transcript to find the right place in the video.
NM: Resuming.
<masinter> I believe the TAG asked me to review widget:
<masinter> I did so
<masinter> the webapps working group replied
<masinter> i answered their replies this morning
<masinter> if the TAG would like to review the correspondence and chime in later, then we don't need to take up meeting time here. If you'd like, I can go over what I think the open issues are. Opinions?
<masinter>
<masinter> see "Comment on Widget IRI" messages
(still working on the agenda)
<noah>
<timbl>
close action-311
<trackbot> ACTION-311 Schedule discussion of a persistent domain name policy promotion closed
timbl: Above link is old, but
background
... Argument against using http: URIs as names, is that DNS doesn't socially support you. The domain name is rented, not owned.
... One proposal, if it's broken, fix it.
... DNS was controlled by IETF, ICANN, and it being up for rent was assumed a good idea
... now the dangers are becoming known.
... All the white house pages disappeared when the administration changed (e.g.)
danc: (asks about how that example bears...)
<masinter> points to again
timbl: Many companies put up things that people would like to find later
danc: There is a third-party business around finding things like that
<DanC_> (I don't see how domains would help in either of the supposedly-motivating cases timbl just gave)
timbl: Anyhow. One way to tackle is
to make a new TLD that has different rules
... You might use it for archivable web pages , under a set of rules
... concerning transfer of rights to other entities so that pages can continue to stay live
<masinter> points to and previous version
timbl: there might be a pot of $ to
pay for this
... Problem is to design a social system, maybe as a DNS play, or by setting up a consortium
<masinter> points to whitehouse.gov
timbl: Suggesting that to help make this happen, the TAG could write a finding advocating it
ashok: These would be *unalterable* pages?
timbl: To be determined
ashok: Can you then sell something in this archive space?
timbl: What transfers is responsibility - not any right to change
jar: It's a contract with the public
<Zakim> ht, you wanted to suggest a workshop
ht: There are many design points. We
could spend time talking about alternatives...
... I wonder is for the TAG to host a workshop before we write a finding, to scare up a representation of the interested parties
... a new TLD is a problem for existing URIs that are supposed to have persistent resolution
... but might be worth paying the cost
... Another way to go is to talk ICANN into a process around existing domains & persistence
<DanC_> (ah.. that would be better... a way for any domain to get permanent status, sorta like 501(c)3 )
ht: Can we get theorists, library
community, other constituencies together to talk
... How about a workshop?
<DanC_> +1 workshop
<masinter> points out talks from previous 1999 workshop on Internet Scale naming
<noah> Wondering whether cost/logistics would work out for workshop proposal. If so, seems appealing, but not sure whether we can get
<Zakim> DanC_, you wanted to note that it's not any more broken that it could/should be. New domains are not going to get companies to keep their product manuals online or stop the
danc: Tim's examples didn't motivate
a TLD for me...
... Giving more visible to best practices is a good idea though
... There's a running business that does endowed web publication
ht: I haven't found any reference to DNS insurance
danc: There are journals like PLoS
that charge authors because they agree to host the content in
perpetuity
... you pay once, it's there forever
noah: (pokes fun at this)
<ht>
danc: The White House doesn't have the URI persistence ethic
<masinter> points to "This American Life" story about a cyrogenics firm which promised perpetual freezing:
masinter: Points to 1999 workshop "problems URIs don't solve"
<masinter> points to again
masinter: Organizations split. They
merge. They go out of business. Sub-sites move. Countries
disappear.
... In perpetuity has to be around content, not just names
... People will look to organizations like archive.org for long-term resolvable names
<timbl> ./me quickly runs a script to change all the links in all his HTML to point to an internet archive version of the URL just in case
masinter: Getting a guarantee is not the same thing as getting a credible guarantee
<masinter> points to and previous version
<masinter>
lm: would like advice on how to
progress with these two projects
... duri = dated URI, guarantees persistent reference, but resolution may be tricky
<timbl> I wonder whether "that described by" is one word in Latin
lm: still puzzled about this approach
danc: Use cases?
lm: tdb: has an optional date... actually two of them, when the resource was read, and when it was interpreted
danc: I've never seen a situation where the complexity of duri: is required
<Zakim> noah, you wanted to say that the TLD with persistent assignment seems very appealing, restricting the owner's ability to alter the pages doesn't. Seems best approached as an
danc: The URI scheme space is high price real estate, so better to do as an RDF property for those who are happy to use RDF
noah: Something was said about locking down the content, Tim hesitated
timbl: source code repository with version control
<masinter> 1999 workshop:
noah: What about perpetual ownership
of name - should be orthogonal to an obligation to preserve
... Preservation of content should be more granular
ashok: Who will host all this stuff? Not a private company, which can go away.
timbl: A consortium of libraries.
ht: Replication is the only assurance
of permanence
... This is a huge design space.
<Zakim> johnk, you wanted to ask what is the incentive for someone to use duri and if not sufficient incentive, and not all using them, wouldn't we still have the problems described by
johnk: This is a social problem. Not
sure we can solve this. All of the institutions and agreements go
away.
... Not sure this is web architecture
timbl: We need to kick it from the technical into the social
<Zakim> ht, you wanted to mention transparency
<masinter> points to
johnk: There's no technical solution here
<DanC_> yes, lockss is great work in this space
<noah> Heads up: before Dan goes, I want to remind everyone that we should switch to generic resources within 5+ mins
ht: Footnote: The motivation for
things like tdb: and wpn: was transparency, so that you can tell by
looking at a URI that it named a non-information-resource (not sure
i still believe that)
... One component is a board of trustees with the power to wind it all up (e.g. if there were no web, at some future time)
<masinter> points to for long-term archiving also (and see references)
ht: The digital curation people worry about: Where do the resources come from to carry resources forward (e.g. archaic disks)
timbl: lots of ways for accessibility to fail
<Zakim> DanC_, you wanted to push back: why should IBM get "ibm.com" in perpetuity without giving back to the commons/community a persistence promise (e.g. re content of homepage) and to
ht: Aim for June?
<noah> suggest phrasing, "perhaps in June"
<DanC_> ACTION Henry to look into a workshop on persistence... perhaps the June 2010 timeframe
<trackbot> Created ACTION-351 - Look into a workshop on persistence... perhaps the June 2010 timeframe [on Henry S. Thompson - due 2009-12-16].
<Zakim> masinter, you wanted to ask who gets "att.com" when AT&T is broken up into baby bells, lucent, etc.
lm: recommends references in the long term archiving paper (see above)
<DanC_> ... esp the references
<noah> NM: To be clear, I think persistence of name assignment should be attacked (mostly) separately from encouraging providers of content to provide that content in perpetuity and/or to make it immutable.
<DanC_> action-312?
<trackbot> ACTION-312 -- Jonathan Rees to find a path thru the specs that I think contradicts Dan's reading of webarch -- due 2009-12-01 -- PENDINGREVIEW
<trackbot>
<DanC_>
JAR:The email I sent on Monday was sort of "camouflaged"
JAR: In a sense, some people are trying to say, 'I can prove I need URNs'
JAR: I was trying to set that down more rigorously.
JAR: I want to relate it to the formalism I've been building.
<DanC_> close action-312
<trackbot> ACTION-312 Find a path thru the specs that I think contradicts Dan's reading of webarch closed
<DanC_> action-121 due 15 Mar 2010
<trackbot> ACTION-121 HT to draft TAG input to review of draft ARK RFC due date now 15 Mar 2010
<DanC_> action-121 due 2 Mar 2010
<trackbot> ACTION-121 HT to draft TAG input to review of draft ARK RFC due date now 2 Mar 2010
<DanC_> action-33 due 20 Dec
<trackbot> ACTION-33 revise naming challenges story in response to Dec 2008 F2F discussion due date now 20 Dec
<noah>
masinter: I drafted replacement
text
... "how to use conneg" explanation for HTTPbis
<masinter>
danc: Don't see any text about how the representations relate to one another
<noah> BTW, the "problems" with the tag-weekly.html version of the agenda seem to be due to slow response by W3C servers. The tag-weekly.html version now appears to match the dated version.
<masinter>
masinter: sentence about server's purposes needs to be added. re-open action
danc: This is what the speaks_for
slide in the presentation is about... if representations
contradict, it's incoherent
... How about striking "for its purposes"
lm: "for the purposes of this communication"
<DanC_> +1
+1
noah: (making another point about
attribution)
... determining, for the purposes of this communication, which representations...
<noah> Note that the supplier of representations (or choices) has the responsibility of determining, for purposes of this communication, which representations might be considered to be the "same".
<noah> I don't like "considered to be the same".
<DanC_> how about: considered to give the same information
noah: The spec already says entity
corresponds to resource
... Two representations each have the responsibility to correspond to.
... so nothing else needs to be said.
<masinter> change "might be considered 'the same'" to "might be considered to represent the same information'
DanC: That's the bug we're trying to fix.
<noah> Not convinced.
noah: Saying "corresponds to" is enough
<masinter> the proposed text in uses "represent"
johnk: You're saying two things. Do we want to make the second statement, that the conneg reps have to sufficiently resemble one another (or something similar)?
<noah> There is already an obligation that each representation correspond. It will tend to be the case that multiple representations of a (an immutable) resource will tend to have interpretations that are in some ways similar, perhaps extremely similar, but the archicture should not rule out, e.g. a B&W gif and a color jpeg of very different resolution.
lm: Different ways to represent "the
same information" (quoting lm's email 763)
... I infelicitously said "same representations" when I should have said "represent the same information"
noah: There are enough weasel
words
... good that we're talking about representing the same information
<noah> I.e. to make me happy
lm: And the server has responsibility.
<DanC_> action-231?
<trackbot> ACTION-231 -- Larry Masinter to draft replacement for \"how to use conneg\" stuff in HTTP spec -- due 2009-11-18 -- OPEN
<trackbot>
<DanC_> action-231 due next week
<trackbot> ACTION-231 Draft replacement for \"how to use conneg\" stuff in HTTP spec due date now next week
(consensus around give or represent the same information)
?
break.
noah: Let's see if we can get
organized for a more comprehensive approach, or find a whole that's
greater than the sum of the parts
... The TOC is broader in the topic coverage than it might be
... maybe look at the form of our products in this area
ashok: From what we spoke about
yesterday, it seemed there were many differences between various
people think about web apps
... I thought: web app = you are working with several communicating components
... but maybe some people thought it was an app running on a server [with sessions]
<Zakim> DanC_, you wanted to project the web app product next to the outline, and to suggest (a) invited presentations or other get-togethers and (b) looking at relevant wikipedia pages
ashok: In the first case, authorization etc are big issues. In 2nd case, security issues go away
danc: I was looking at PhoneGap and
Native Client [see previous action]
... Inviting any of those folks to talk to us would be a good thing
... Let's look at wikipedia pages related to security, web apps, widgets, etc
... The idea is to inform the developer community; a lot of people end up at wikipedia
... Maybe contributing to wp might be a way to help
... (brainstorming)
<Zakim> jar, you wanted to ask for / suggest criteria etc
JAR: I agree with Ashok's comment about Web applications, and assumed we were talking about the distributed case.
JAR: I assumed it involved The Common Man in the Street (TCMITS).
JAR: Regarding the TOC, it was a brain dump, first developed by the group together, and then refined by me. What I'm missing are criteria. Some sort of structure or philosophy that would guide us.
<noahm> NM muses: maybe the criteria include: 1) architectural issues you would not get right based on what's been set out for the Web of documents and 2) clarifying points of confusion Goal: show that it's, in the end, one consistent, scalable architecture integrating documents and apps.
JAR: Consider, e.g., why a specific programming language wasn't chosen for the Web. It was deemed desirable to have competition there. Maybe there's a winner now (Javascript.) Anyway, what do we want to make the same, and what different?
noah: We don't talk about how you use oracle, that's an implementation detail
<DanC_> (I dunno how conscious it was that javascript happened when it happened... there was talk of active content back in 1990. tcl and such. not to mention display postscript.)
<johnk> well, and you have XSLT with XML and CSS too I guess
noah: Things like cross-origin
security, how to use URIs right - those things are in scope
... What happens inside server is not in scope
... typed possible criteria into IRC (above)
... clarify confusion around e.g. AJAX, or say how to apply old story in new situations
... to what extent is google maps one application, vs. a very large number of maps? ... more than just a document
<Zakim> noahm, you wanted to respond to ashok
timbl: Even though mapping software allows you to display many overlays, this is always done in code. But with calendars - you can control calendar view, how they're stacked / displayed - that's richer than what you can do with maps
<DanC_> (hmm... I wonder if KML is sufficient.)
<noahm> I think that talking about proper use of URIs when you're composing layers might be interesting
<DanC_> (... to get maps to work, like calendars, in various clients)
<noahm> Ah, when Tim says music, he's thinking more iTunes than Sibelius
<noahm>
timbl: Music: iTunes maybe - other applications - multidimensional access / view. Key point is you're looking at more than one document at a time
timbl: When you pull in the data you have to be clever. E.g. you're looking for photos tagged x. Client would do a query to get the photos of interest
<DanC_> (it's really a drag that the Zakim queue isn't a UI feature, e.g. integrated with the list of names in the channel. So many times I'm this close >< to writing an ajax-based front end to Zakim/tracker/rrsagent)
[?]
<Zakim> johnk, you wanted to say that it was part of web arch in 1990
johnk: Want to push back on jar's
idea that webarch didn't address programming / application
layer
... For last TAG meeting I tried to draw a parallel between local web browser vs. javascript ... original web arch did deal with this...
timbl: For example, you could have
faceted browsing using forms
... javascript model just moves data/code onto the client
johnk: Phone's IP address isn't
public, but a server [once it knows address] can call back to the
phone to perform actions
... would like to address that applications are distributed in some way [holds up piece of paper]
johnk: Here are some models. 1.
server & client, server assembles a widget, client GETs widget,
does a software install
... interesting thing is 2 trust decisions. 1. Install? 2. Run?
... side case: What is difference between this and native client, or plugin?
... again you have 2 trust decisions, except that (maybe) app is given more power
ashok: Model: app stays on server ---
johnk: I'm not done. Case 2. For
example, in iGoogle (?), Google says all this content is sanctioned
by Google
... Client does a GET, trust decision is: Install + run? (as one decision)
... ashok: How different from widget case?
... Both in one step.
noah: (something about cookies vs.
user ids)
... Reserve the word "install" for ...
johnk: Case 3: Site A has a document,
with content that calls out to site B (Fedex and airline)
... Fedex has document that calls out to airline
... (2nd example) Amazon is in control, compiles the content
... Cross-site case. there are trust decisions in both directions
danc: Line from amazon to fedex - ?
johnk: Not saying this is deployed in a reasonable way, just observing
Case 4: Client accesses both Amazon and Fedex
scribe: the client does the mashup
danc: e.g. tabulator
... We're trying to get a feel for case 4
timbl: Tabulator is a browser extension
danc: What's a good example?
timbl: If you look up me, it pulls up information from wikipedia
danc: No, where the *user* chose both sites?
timbl: What people have we seen?
danc: The interesting difference is that in case 4, the user chooses the sources to be combined. It's not one server referring the user to another.
timbl: Consider two people on twitter, each with a bunch of tweets.
<DanC_> (might have been nice if tim had drawn a separate thingy rather than erasing 4. oh well.)
timbl: Storage of the data is
separate from the...
... Suppose tweets are to be readable by my friends
... when someone pulls in tweets, it's because they're in the group
... tabulator code is completely trusted by C. Runs with user's identity
<DanC_> (hmm... this speaks_for exercise might be an interesting way to look closely at OpenID phishing risks... and to explore my intuition that OAuth is sorta kerberos-shaped)
johnk: The user has to decide to download the twitter app, and ...?
timbl: No, it's in the cloud
(scribe not quite getting it)
timbl: Separate decisions about where to store their data, vs. [something about the app]
johnk: (End of 4 cases as diagrammed on piece of paper and then on the whiteboard)
johnk: web server provider / consumer issues coming out of SOAP work
ashok: There are several trust decisions... made by the *user* explicitly
johnk: brainstorming...
... The site is also making some decisions for you
<DanC_> . ACTION: John integrate whiteboard drawings into a prose document about ways to distribute applications
ashok: In case 2, where igoogle pulls in stuff for you, there's the question of state
johnk: Yes, in all 4 cases
<DanC_> ACTION: John integrate whiteboard drawings into a prose document about ways to distribute applications [recorded in]
<trackbot> Created ACTION-352 - Integrate whiteboard drawings into a prose document about ways to distribute applications [on John Kemp - due 2009-12-16].
<Zakim> noahm, you wanted to ask about use of core mechanisms like URIs in the Tim use case
noah: Tim's use case was about making maps much better. You go out and say 'tell me about this area'
<Zakim> DanC_, you wanted to look at the list of install-time capabilities/permissions in the W3C widgets spec and to note seems to have no
(timbl recessing himself)
danc: List of install capabilities in
widget spec - seems dangerous to standardize this
... "This is xxx and it wants to look at your contacts list"
(timbl back)
danc: Can't find an actual starter
list of particular permissions / capabilities - seems good to not
standardize, but seems bad because not tested
... Lets you sprinkle open dust on your distributed system
noah: We're no worse off. Let the market deal with it
johnk: Symbian has a specific list of caps that the OS gives you
<DanC_> I'm fairly satisfied with using URI space as a marketplace of features, if it works out that way
masinter: Issue of versioning APIs,
registries comes up repeatedly
... the problem becomes much worse regarding what might be available on the device
<DanC_> but yeah... if everybody pretends to support hundred-pound-gorrila.com/featurex , then that sucks
masinter: "are you a Symbian phone"? is the wrong question. "do you support geolocation?"
noah: If you have an ordinary web
page, it asks, can I call the geoloc API?
... or, in the install process, the question gets asked at install time
... phonegap either does or doesn't give you a good answer
danc: The premise of the w3c widget
spec is that you could have a w3c widget store
... The 100-pound gorilla phenomenon is still a risk
... ... little guys will be disenfranchised
<Zakim> noahm, you wanted to ask about use of core mechanisms like URIs in the Tim use case
lm: If you want to name it with the name of the implementation, it's hard to extend, or you run into trademark problems
danc: It's in CR (widget packaging & config)
noah: If they want to write a great iphone app this is a dumb way to do it
lm: The failure hasn't happened because the 2 years haven't passed (you name a capability by the implementation, and there's no extensibility story, then within 2 years you'll have kludges)
jar: +1 to LM
<noahm> I'm not convinced we're seeing that problem is happening. Yet.
danc: The spec says, URIs go here
<noahm> I'm sympathetic to watching for this trouble happening; I'm unenthusiastic about getting the TAG all geared up about this until we see trouble brewing.
danc: The install time ritual says, this app wants to look at x, y, z
<johnk> +1 to Noah
danc: The spec only says put URIs
here
... Maybe there will be a marketplace... but maybe the gorilla gets in there, and everyone else has to pretend to be the gorilla
noah: It's not the user-agent string case
danc: No, not interestingly different
<noahm> I'm not convinced it's underspecified.
lm: If there's part of a spec that's underspecified, and that part need specifications for interoperability, we (TAG) could say so
<johnk> I think the basis for the widget spec is exactly _for_ interoperability
timbl: Expecting that probably , there will be the equivalent of a mime type registry
<noahm> I think there will be much more diversity here than for mime types.
timbl: current frame, focal length, lots of profiles to talk about... w3c may get involved
noah: The tough thing is there's lots
of innovation going on... would have been bad for standardization
to rule out multitouch
... the fact that it's a URI is good
<DanC_> (given that the players in this space seem to be acting in good faith, I'm ok to accept the 100-pound-gorrilla name-mangling risk; I'm OK to hope for a healthy market)
lm: I don't want a solution, I just want to ask the question: What is the migration path e.g. from one pointer to two?
danc: Maybe people will come to W3C to get a URI?
lm: We'd like to see, if they have a solution, let's get it documented better. If not, let's work on one.
<DanC_> Larry, if you want an action, you can pretty much always assign yourself one. or you can nominate somebody.
<Zakim> noahm, you wanted to ask about use of core mechanisms like URIs in the Tim use case and to talk about innovation vs. standardizatoin in this space and to ask about use of core
lm: Not sure I want to engage widget folks again
noah: The maps could be more
sophisticated... (that's what Tim was saying...) telling a story
about naming and identity is important. Is there agreement on when to
mint a URI, how much client/server AJAX flexibility is, who knows
what the URIs are. Very interesting area to work.
... TAG story: identity, interaction, formats
danc: Identity per noah is a big story
(scribe hears "semantics" when noah says "identity")
noah: Portals ...
<DanC_> DanC: it's interesting to me in that it includes/subsumes the concern I have about "proposal to make ajax crawlable". If success can be less than the whole thing, I'm all for it.
<Zakim> timbl, you wanted to say that for that class of application (map, iTune, document mgt, iPhoto, calendar, timelines, etc) there typically are *not* URIs for the total view.
<noahm> Are not and should not be, or are not but there should be?
timbl: Noah asked, do people make up
URIs for the views?
... Not in general.
... If so, the URIs get big.
... Tabulator students took a sparql query to encode a view.
... When URIs get too big, they invent a data format.
(jar promises to be brief)
<Zakim> jar, you wanted to talk about the US civil war and to talk about sparql-over-GET + tinyurl
JAR: In nearly every part of this discussion, I see us dancing around, meaning, inference, and contracts.
JAR: Want to encourage people to look at OWL, which is the W3C technology in the inference space (and it's very nice)
DC: there's a consortium of URL shortening companies
noah: I said, identification is something we could profitably work on
jar: 'Identification' is meaningless without meaning / inference
(discussion of agenda)
jar: re OWL, e.g. a specification induces a class of conforming entities. that's DL. one of many possible applications.
<noahm> . ACTION: Noah to do just a bit of work framing some issues around identification for Ajax apps (remembering the merged maps use case) Due 20 January 2009
johnk: Approach of starting with 3 pillars of webarch is good
<noahm> ACTION: Noah to do just a bit of work framing some issues around identification for Ajax apps (remembering the merged maps use case) Due 20 January 2009 [recorded in]
<trackbot> Created ACTION-353 - Do just a bit of work framing some issues around identification for Ajax apps (remembering the merged maps use case) Due 20 January 2009 [on Noah Mendelsohn - due 2009-12-16].
jar: spec / interface naming /v ersioning is one good focus, security is another
danc: Minions, please check client side storage design and look for architectural issues
ashok: web databases?
danc: yes
<DanC_> . ACTION ashok review client side storage apis (web simple storage etc.), looking for architectural issues or other critical problems... or interesting design features the TAG should know about
<DanC_> ACTION ashok review client side storage apis (web simple storage etc.), looking for architectural issues or other critical problems... or interesting design features the TAG should know about
<trackbot> Created ACTION-354 - Review client side storage apis (web simple storage etc.), looking for architectural issues or other critical problems... or interesting design features the TAG should know about [on Ashok Malhotra - due 2009-12-16].
johnk: I could try to map AWWW section on interaction to parts of webapps TOC that seem related
noah: Interesting, but how about look at interaction story in webapp & findings, and ask: could I tell the Ajax story?
johnk: Yes, I was trying to be more specific, but that's the idea
<noahm> . ACTION john to explore the degree to which AWWW and associated findings tell the interaction story for Web Applications
<noahm> ACTION john to explore the degree to which AWWW and associated findings tell the interaction story for Web Applications due: 2 Feb 2010
<trackbot> Created ACTION-355 - Explore the degree to which AWWW and associated findings tell the interaction story for Web Applications due: 2 Feb 2010 [on John Kemp - due 2009-12-16].
<noahm> ACTION-355 = john to explore the degree to which AWWW and associated findings tell the interaction story for Web Applications
<noahm> ACTION-355: john to explore the degree to which AWWW and associated findings tell the interaction story for Web Applications
<trackbot> ACTION-355 Explore the degree to which AWWW and associated findings tell the interaction story for Web Applications due: 2 Feb 2010 notes added
<DanC_> action-355 due 2 feb 2010
<trackbot> ACTION-355 Explore the degree to which AWWW and associated findings tell the interaction story for Web Applications due: 2 Feb 2010 due date now 2 feb 2010
lm: Do we have an exit strategy for
ISSUE-50?
... The goal of Henry's action is to close the issue, right?
all: yes
Adjourned until 0900 2009-12-10 | http://www.w3.org/2001/tag/2009/12/09-minutes.html | CC-MAIN-2014-35 | en | refinedweb |
Economic Analysis of Toilet Seat Position
kdawson posted more than 7 years ago | from the why-is-this-so-hard? dept.
What's the big deal.. (0)
Anonymous Coward | more than 7 years ago | (#19370251)
Re:What's the big deal.. (2, Insightful)
WilliamSChips (793741) | more than 7 years ago | (#19370285)
Re:What's the big deal.. (5, Funny)
Wayne247 (183933) | more than 7 years ago | (#19370367)
Re:What's the big deal.. (5, Interesting)
markdavis (642305) | more than 7 years ago | (#19370555)
Re:What's the big deal.. (5, Funny)
complete loony (663508) | more than 7 years ago | (#19370657)
Re:What's the big deal.. (2, Insightful)
jcorno (889560) | more than 7 years ago | (#19371009)
... sit on a wet seat, so the next woman has to hover, too. Don't ask me why they can't put the seat up. I'm guessing it's a matter of principle.
sit down to piss (0)
Anonymous Coward | more than 7 years ago | (#19370577)
Re:What's the big deal.. (0)
Anonymous Coward | more than 7 years ago | (#19370767)
Many studies have shown that sitting ensures the bladder is emptied more completely, leaving less residual urine that might cause medical problems.
Also, I heard once that urinating while sitting is less demanding on the prostate, making it less likely to cause problems in the long term.
Besides, men should always try to please women, any way they can...
Re:What's the big deal.. (4, Funny)
DuncanE (35734) | more than 7 years ago | (#19370371)
WE took the time to lift it UP. THEY can take the time to put it DOWN.
(Yes, I'm married and whipped, so this will only ever be posted on Slashdot. I'm never actually going to say it out loud.)
Re:What's the big deal.. (1)
KiloByte (825081) | more than 7 years ago | (#19370471)
And no, neither she nor her flatmate heeds it when I complain. Drat. Damn female chauvinist pigs... I should sue them for gender discrimination or something.
Re:What's the big deal.. (1)
WilliamSChips (793741) | more than 7 years ago | (#19370813)
Re:What's the big deal.. (-1, Troll)
Anonymous Coward | more than 7 years ago | (#19370859)
Re:What's the big deal.. (1)
Orkie (899576) | more than 7 years ago | (#19371033)
Re:What's the big deal.. (1)
Fred_A (10934) | more than 7 years ago | (#19370477)
(note: I'm not in the US.)
Re:What's the big deal.. (1)
Fred_A (10934) | more than 7 years ago | (#19370491)
What about the lid? (4, Insightful)
nurb432 (527695) | more than 7 years ago | (#19370255)
Re:What about the lid? (4, Interesting)
Anonymous Coward | more than 7 years ago | (#19370301)
Re:What about the lid? (4, Funny)
Timesprout (579035) | more than 7 years ago | (#19370669)
Re:What about the lid? (4, Interesting)
jc42 (318812) | more than 7 years ago | (#19370785)
Re:What about the lid? (5, Insightful)
Purity Of Essence (1007601) | more than 7 years ago | (#19370949)
Missing options... (1)
DrYak (748999) | more than 7 years ago | (#19370461)
Oh, wait! This isn't a poll.
Re:What about the lid? (1)
smitty_one_each (243267) | more than 7 years ago | (#19370473)
Buried, page 798.
Essential earmarks.
Re:What about the lid? (5, Funny)
bl8n8r (649187) | more than 7 years ago | (#19370497)
Depends entirely on the artwork (3, Funny)
Opportunist (166417) | more than 7 years ago | (#19370529)
Re:Depends entirely on the artwork (2, Funny)
DJ Rubbie (621940) | more than 7 years ago | (#19370761)
Re:What about the lid? (1)
whimmel (189969) | more than 7 years ago | (#19371161)
I prefer the lid closed mainly because I have a shelf full of toiletries and a towel rack hanging over the toilet. If one of those items were to fall, I don't want to have to fish it out of the toilet.
I used to room with a female who insisted that I close the shower curtain. I listened to her argument, agreed, and learned quickly to close it when I was finished.
She (to this day in her new place) leaves the seat down and lid up. It makes me think she just gets up and walks away from the toilet when she's done. Gross!
Each time I'd find the toilet in that state, I'd yank the shower curtain open with as much noise as possible. She would then come running up the stairs yelling "SORRY!", lower the toilet lid and then put the shower curtain back. It apparently didn't teach her but it made me feel better.
Re:What about the lid? (5, Informative)
_vSyncBomb (50710) | more than 7 years ago | (#19370609)
Re:What about the lid? (2, Informative)
travdaddy (527149) | more than 7 years ago | (#19370781)
Re:What about the lid? (2, Funny)
Anonymous Coward | more than 7 years ago | (#19370935)
Re:What about the lid? (1)
Dachannien (617929) | more than 7 years ago | (#19371031)
I demand new and interesting ways to have a shit (1)
TheEmptySet (1060334) | more than 7 years ago | (#19370259)
Re:I demand new and interesting ways to have a shit (4, Informative)
grahamlee (522375) | more than 7 years ago | (#19370971)
You insensitive clod... (3, Funny)
Anonymous Coward | more than 7 years ago | (#19370265)
(I'm lots of fun at the office, too... those silk plants sure look real)
Re:You insensitive clod... (0)
Anonymous Coward | more than 7 years ago | (#19371127)
Way. Too. Much. Time (1)
segedunum (883035) | more than 7 years ago | (#19370269)
A technological approach (1)
wfberg (24378) | more than 7 years ago | (#19370305)
... vastly more likely to close it. Also, men are more likely to close the door for fear of exposing themselves (or malodorous fumes) when they would be facing the door while using the toilet, rather than standing with their back to the door - which is the most likely orientation when urinating. Closing the door for toilet-seat-down operation can be reinforced in males by only providing access to printed matter (such as a newspaper) with the door closed (and hence, the toilet seat down).
The toilet seat might be operated electronically, or even mechanically, so this system could even be used during power outages or in developing nations. It would require only the bare minimum of training for all participants.
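As an aside, the door-interlock rule described in the comment above amounts to a tiny state machine, and can be sketched in a few lines. The sketch below is purely illustrative: the DoorState, SeatState and SeatInterlock names are invented here, no real sensor or actuator API is assumed, and the only behaviour encoded is "closing the door forces the seat down; raising the seat is only possible while the door is open".

```python
# Hypothetical sketch of the door-interlock idea described above.
# None of these classes correspond to real hardware APIs; the door sensor
# and seat actuator are stand-ins for whatever electronics or mechanical
# linkage would actually be used.

from enum import Enum


class DoorState(Enum):
    OPEN = "open"
    CLOSED = "closed"


class SeatState(Enum):
    UP = "up"
    DOWN = "down"


class SeatInterlock:
    """Lower the seat whenever the bathroom door is closed."""

    def __init__(self) -> None:
        self.door = DoorState.OPEN
        self.seat = SeatState.DOWN

    def on_door_event(self, new_state: DoorState) -> None:
        self.door = new_state
        if new_state is DoorState.CLOSED:
            # The core rule from the comment: a closed door implies seat down.
            self.seat = SeatState.DOWN

    def request_seat_up(self) -> bool:
        # Raising the seat is only permitted while the door is open,
        # i.e. in the "standing" configuration described above.
        if self.door is DoorState.OPEN:
            self.seat = SeatState.UP
            return True
        return False


if __name__ == "__main__":
    interlock = SeatInterlock()
    interlock.request_seat_up()               # door open: allowed
    interlock.on_door_event(DoorState.CLOSED)
    assert interlock.seat is SeatState.DOWN   # door closed: seat forced down
```

Treating the door sensor as the single source of truth keeps the mechanism passive from the users' point of view, which is presumably the appeal of the proposal: nobody has to remember anything.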
Re:A technological approach (0)
Anonymous Coward | more than 7 years ago | (#19370377)
Re:A technological approach (0)
Anonymous Coward | more than 7 years ago | (#19370401)
to the unfruitfully endowed males, (0)
Anonymous Coward | more than 7 years ago | (#19370427)
One would imagine such a sensor would be a source of great embarrassment...
I'm not referring to myself of course.... *returns to shopping for a Porsche through tears*.
Re:A technological approach (0)
Anonymous Coward | more than 7 years ago | (#19370959)
Are you serious? Wow. Let's spend time and money on research and sensors and AI and installation and repair so that a ROBOT can do the simplest tasks for us.
While we're at it, let's spend thousands more dollars so robots can wipe our butts and brush our teeth. (Not the same ones, mind you.)
If this is a question of being too squeamish to touch the toilet seat, maybe what you need is to carry around some latex gloves.
Re:A technological approach (1)
ElleyKitten (715519) | more than 7 years ago | (#19371157)
Solve your problem (2, Insightful)
WormholeFiend (674934) | more than 7 years ago | (#19370309)
Among its various additional benefits, squatting really helps pushing out number-two's.
Until you shit all over your shoes. (1, Informative)
Anonymous Coward | more than 7 years ago | (#19370983)
Or... (2, Insightful)
msauve (701917) | more than 7 years ago | (#19371153)
Academic detachment (3, Funny)
antifoidulus (807088) | more than 7 years ago | (#19370319)
Re:Academic detachment (2, Informative)
Whiney Mac Fanboy (963289) | more than 7 years ago | (#19370495)
I fear that I have to point out that a hole in the ground is the traditional toilet for all cultures.
Re:Academic detachment (0)
Anonymous Coward | more than 7 years ago | (#19370559)
What about people who have dogs? (0)
Anonymous Coward | more than 7 years ago | (#19370321)
Otherwise that sloppy kiss from our dog is a bit gross.
Problem solved (0)
Anonymous Coward | more than 7 years ago | (#19370329)
from TFA: (1)
dominious (1077089) | more than 7 years ago | (#19370363)
Sit down to pee!!! (0)
Anonymous Coward | more than 7 years ago | (#19370365)
Simple solution in my house (1)
vinniedkator (659693) | more than 7 years ago | (#19370379)
Pathetic (0)
Anonymous Coward | more than 7 years ago | (#19370399)
shucks (0)
Anonymous Coward | more than 7 years ago | (#19370421)
Drat it all Shakespeare, I was so hoping to be satisfied in the end...
The Unconsidered Factor (2, Insightful)
Enonu (129798) | more than 7 years ago | (#19370459)
Re:The Unconsidered Factor (1)
DFENS619 (1008187) | more than 7 years ago | (#19370611)
Re:The Unconsidered Factor (1)
kalirion (728907) | more than 7 years ago | (#19370975)
and the solution is .... (1)
3seas (184403) | more than 7 years ago | (#19370463)
but be warned
Re:and the solution is .... (1)
coinreturn (617535) | more than 7 years ago | (#19370567)
But if the seat's on fire, perhaps you should piss on it.
Re:and the solution is .... (0)
Anonymous Coward | more than 7 years ago | (#19370765)
Easy solution (4, Funny)
Anonymous Coward | more than 7 years ago | (#19370467)
Remove the toilet seat.
No toilet seat, no arguments, no problem.
Re:Easy solution (5, Funny)
Anonymous Coward | more than 7 years ago | (#19370499)
Re:Easy solution (1)
dramenbejs (817956) | more than 7 years ago | (#19370727)
Re:Easy solution (1)
nsupathy (515587) | more than 7 years ago | (#19370883)
What the... (1)
Mystery00 (1100379) | more than 7 years ago | (#19370479)
This toilet seat thing is a pet peeve of mine... (5, Insightful)
808140 (808140) | more than 7 years ago | (#19370483)
Re:This toilet seat thing is a pet peeve of mine... (1)
808140 (808140) | more than 7 years ago | (#19370519)
Somehow I reversed the emphasized. Sorry, should have previewed.
Re:This toilet seat thing is a pet peeve of mine.. (2, Insightful)
wonkavader (605434) | more than 7 years ago | (#19370591)
When we look for a job, many of us use the Dilbert principle. If there are a few Dilbert cartoons on the cubes, work there. If there are a lot or none, don't. (None means that management won't allow them, and people are scared, too many means the company is seriously pooched.) This is a rule. No matter how nice things look, if it doesn't pass the Dilbert test, we don't take it.
The toilet seat thing seems just as useful and important or more so. If she doesn't immediately see that there shouldn't be an issue there, run.
Re:This toilet seat thing is a pet peeve of mine.. (1)
coinreturn (617535) | more than 7 years ago | (#19370601)
Amen. Amazingly, when I explained this to my wife, she agreed and the issue completely disappeared. Showing not only that she's not a selfish twit, but that she can be convinced by logical reasoning.
Re:This toilet seat thing is a pet peeve of mine.. (1)
garett_spencley (193892) | more than 7 years ago | (#19370619) asses dry
Re:This toilet seat thing is a pet peeve of mine.. (1)
complete loony (663508) | more than 7 years ago | (#19370715)
Re:This toilet seat thing is a pet peeve of mine.. (2, Funny)
Lumpy (12016) | more than 7 years ago | (#19370823)
Re:This toilet seat thing is a pet peeve of mine.. (2, Funny)
Charcharodon (611187) | more than 7 years ago | (#19371001)
...bizarre bounces or ricochets it would take, would land in the toilet from anywhere in the bathroom. Once exposed to this, it takes a long time for it to get it out of your life, much like a neurotic woman. I had never noticed it before because of the much stronger force, known as female OCD, altered the natural laws of space and time in my household. Once I resigned myself to leaving the toilet set back down, things stopped landing in the toilet, though they tried their damndest to do, and instead started landing in the trash can. Now I have to get into the habit of taking out the trash and putting a bag in the can.
A guy can't fucking win.
Re:This toilet seat thing is a pet peeve of mine.. (1)
neolith (110650) | more than 7 years ago | (#19371101)
This has got to be some holdover from cavemen days (with apologies to the GEICO guys). Guys sitting down to pee, when they can, is the next leap forward in the evolution of civilization.
Re:This toilet seat thing is a pet peeve of mine.. (3, Funny)
billcopc (196330) | more than 7 years ago | (#19371103)
Eugenics starts in the bathroom!
Re:This toilet seat thing is a pet peeve of mine.. (1)
Jeff DeMaagd (2015) | more than 7 years ago | (#19371121).
Oh dear... (1)
Z00L00K (682162) | more than 7 years ago | (#19370509)
...something heavy there... Sometimes it may be a good idea to flush before and after...
Irony (1)
Realistic_Dragon (655151) | more than 7 years ago | (#19370513)
A rather simple algo, ladies (1)
Opportunist (166417) | more than 7 years ago | (#19370553)
lid->lower();
sit();
pee();
Re:A rather simple algo.. for both sexes (1, Funny)
Anonymous Coward | more than 7 years ago | (#19370707)
{
if (gender==MALE)
goto_sink();
else
sit();
}
pee();
Just hope I got that right. Last thing you need is a nasty buffer overflow
Re:A rather simple algo, ladies (1)
WilliamSChips (793741) | more than 7 years ago | (#19370897)
Re:A rather simple algo, ladies (0)
Anonymous Coward | more than 7 years ago | (#19371133)
Proxy... (0)
Anonymous Coward | more than 7 years ago | (#19370621)
If the toilet seat issue is solved, you'll go on to fighting about the toothpaste (why do you always have to squeeze from the middle instead of from behind and rolling it up).
--melot
WHY? (-1, Offtopic)
mpweasel (539631) | more than 7 years ago | (#19370643)
I'd mark myself offtopic, but that's not allowed.
Very odd indeed (0)
Anonymous Coward | more than 7 years ago | (#19370693)
Assumptions too strong? (1)
Asgerix (1035824) | more than 7 years ago | (#19370719)
For example, the author assumes that John (an appropriate name, btw) visits the toilet as often as Marsha. In my experience, females visit the toilet more frequently than males.
Another thing: It is assumed that John only performs one of the two actions (#1 and #2) when he goes to the toilet. This is not really a problem though; if he has to do both, he would probably do both sitting down, and therefore we could adjust the probability p (of doing #1) to exclude these visits. The author ought to have mentioned this, though.
Re:Assumptions too strong? (1)
muftak (636261) | more than 7 years ago | (#19370841)
Cost of forgetting to change the seat position (1)
bongk (251028) | more than 7 years ago | (#19370777)
There is also the inverse, where John forgets to raise the toilet seat before #1, often for the same reasons as above. Again the probability is lower and the cost (either of needing to clean the toilet seat or of yelling from Marsha when she sits on a wet seat) is greater than the costs of changing the seat position.
In any case in my house the game includes a 5 year old boy who generally waits till the last second and then runs into the bathroom doing the potty dance, and doesn't remember to raise the toilet seat for #1. The resulting mess I think now even has my "Marsha" raising the toilet seat after use in anticipation of this activity.
Re:Cost of forgetting to change the seat position (1)
Baron_Yam (643147) | more than 7 years ago | (#19370833).
Optimal solution (0)
Anonymous Coward | more than 7 years ago | (#19370905)
this is an easy one (0)
Anonymous Coward | more than 7 years ago | (#19370915)
Upbringing (1)
pilsner.urquell (734632) | more than 7 years ago | (#19370923)
Must be a slow news day!
another solution (1)
TooFarGone (841076) | more than 7 years ago | (#19370985)
Gay (0)
Anonymous Coward | more than 7 years ago | (#19371065)
Solution: Just install two bathrooms... (1)
chiraz90210 (961309) | more than 7 years ago | (#19371075)
I made a deal with my girlfriend (1)
Schmye Bubbula (692253) | more than 7 years ago | (#19371077)
Solution (2, Insightful)
ChameleonDave (1041178) | more than 7 years ago | (#19371083)
Re:Solution (1)
llZENll (545605) | more than 7 years ago | (#19371129)
Cost of cleaning is missing (0)
Anonymous Coward | more than 7 years ago | (#19371163)
Leave the toilet cover down. problem solved (2, Interesting)
Lt.Hawkins (17467) | more than 7 years ago | (#19371169)
Problem solved. Also keeps pets out of the toilet. | http://beta.slashdot.org/story/85789 | CC-MAIN-2014-35 | en | refinedweb |
Introduction
Background
Support for different devices
Using the code
Implementing a Windows API Raw Input handler
Registering raw input devices
Retrieving and processing raw input
Retrieving the list of input devices
Getting information on specific devices
Reading device information from the Registry
Conclusion
Sources
There was a time when you were lucky if a PC had so much as a mouse, but today, it is common to have a wide variety of Human Interface Devices (HIDs) ranging from game controllers to touch screens. In particular, users can connect more than one keyboard to their PCs. However, the usual keyboard programming methods in the .NET Framework offer no way to differentiate the input from different keyboards. Any application handling KeyPress events will receive the input from all connected keyboards as if they were a single device.
Windows XP and above now support a "raw input" API which allows programs to handle the input from any connected human interface devices directly. Intercepting this information and filtering it for keyboards enables an application to identify which device triggered the message. For example, this could allow two different windows to respond to input from different keyboards.
This article and the enclosed code demonstrate how to handle raw input in order to process keystrokes and identify which device they come from. The Rawinput.dll file in the attached zip contains the raw input API wrapper; copy this dll to your own project and follow the instructions in "Using the code" if you want to use it without running the sample application.
I recently published an article on implementing a low-level keyboard hook in C#[^] using the SetWindowsHookEx and related methods from user32.dll. While looking for a solution to handle multiple keyboards, Steve Messer[^] came across my article and we discussed whether my code could be adapted to his needs. In fact, it turned out that it couldn't, and that the Raw Input API was the solution.
Unfortunately, there are very few keyboard-related Raw Input samples online, so when Steve had finished a working sample of his code, I offered to write this article so that future .NET developers faced with this problem wouldn't have to look far to find the solution. While I have made minor adjustments to the code, it is primarily Steve's work and I thank him for sharing it. Note: as of March 2007, you can also download Steve's WPF sample illustrating the use of WndProc in Windows Vista. However, this article only describes the Windows XP source code.
Please note that this will only work on Windows XP or later in a non-Terminal Server environment, and the attached sample projects are for Visual Studio 2005. The latest update was developed using Visual Studio 2012 on Windows 8 64 bit. I do not believe the update has any dependencies on anything higher than .Net 2.0 so you can simply add the files to any version of VS that you might have.
The attached code is a generic solution that mostly mirrors the sample code given on MSDN. Different devices will work in different ways, and you may need to amend the code to suit the keyboards you are using. Unfortunately, we won't always be able to help with device-specific queries, as we won't have the same devices you have. Steve Messer has tested the code with different keyboards, however, and is confident that it will work with most devices provided they are correctly installed.
1. Import the RawInput_dll namespace
using RawInput_dll;
2. Instantiate a RawInput object
The RawInput class's constructor takes one argument, which is the handle to the current window.
RawInput rawinput = new RawInput(Handle);
rawinput.KeyPressed += OnKeyPressed;
_rawinput.CaptureOnlyIfTopMostWindow = true; // Otherwise default behavior is to capture always
_rawinput.AddMessageFilter(); // Adding a message filter will cause keypresses to be handled
protected override void WndProc(ref Message message)
{
switch(message.Msg)
{
case Win32.WM_INPUT:
_keyboardDriver.ProcessRawInput(message.LParam);
break;
}
base.WndProc(ref message);
}
The rest of this article describes how to handle "raw input" from a C# application, as illustrated by the RawInput and RawKeyboard classes in the sample application.
MSDN identifies "raw input" [^] as being the raw data supplied by an interface device. In the case of a keyboard, this data is normally intercepted by Windows and translated into the information provided by Key events in the .NET Framework. For example, the Windows manager translates the device-specific data about keystrokes into virtual keys.
However, the normal Windows manager doesn't provide any information about which device received the keystroke; it just bundles events from all keyboards into one category and behaves as if there were just one keyboard.
This is where the Raw Input API is useful. It allows an application to receive data directly from the device, with minimal intervention from Windows. Part of the information it provides is the identity of the device that triggered the event.
The user32.dll in Windows XP, Vista, and Windows 8 contains the following methods for handling raw input:
RegisterRawInputDevices
GetRawInputData
GetRawInputDeviceList
GetRawInputDeviceInfo
The following sections give an overview of how these four methods are used to process raw data from keyboards.
By default, no application receives raw input. The first step is therefore to register the input devices that will be providing the desired raw data, and associate them with the window that will be handling this data.
To do this, the RegisterRawInputDevices method is imported from user32.dll, together with a managed version of the RAWINPUTDEVICE structure, which the wrapper defines as follows:
[StructLayout(LayoutKind.Sequential)]
internal struct RawInputDevice
{
internal HidUsagePage UsagePage;
internal HidUsage Usage;
internal RawInputDeviceFlags Flags;
internal IntPtr Target;
public override string ToString()
{
return string.Format("{0}/{1}, flags: {2}, target: {3}", UsagePage, Usage, Flags, Target);
}
}
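The function itself is imported with the standard P/Invoke signature; the exact declaration in the wrapper may differ slightly, but it is essentially:
[DllImport("User32.dll", SetLastError = true)]
internal static extern bool RegisterRawInputDevices(RawInputDevice[] pRawInputDevices, uint uiNumDevices, uint cbSize);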
Each RAWINPUTDEVICE structure added to the array contains information on a type of device which interests the application. For example, it is possible to register keyboards and telephony devices. The structure uses the following information: usUsagePage (the HID usage page of the device, 0x01 for generic desktop controls), usUsage (the usage ID within that page, 0x06 for keyboards), dwFlags (behaviour flags such as RIDEV_INPUTSINK) and hwndTarget (the handle of the window that is to receive the WM_INPUT messages).
In this case, we are only interested in keyboards, so the array only has one member and is set up as follows:
RAWINPUTDEVICE[] rid = new RAWINPUTDEVICE[1];
rid[0].usUsagePage = 0x01;
rid[0].usUsage = 0x06;
rid[0].dwFlags = RIDEV_INPUTSINK;
rid[0].hwndTarget = hwnd;
Here, the code only defines the RIDEV_INPUTSINK flag, which means that the window will always receive the input messages, even if it no longer has the focus. This will enable two windows to respond to events from different keyboards, even though at least one of them won't be active.
With the array ready to be used, the method can be called to register the window's interest in any devices which identify themselves as keyboards:
RegisterRawInputDevices(rid, (uint)rid.Length, (uint)Marshal.SizeOf(rid[0]))
Once the type of device has been registered this way, the application can begin to process the data using the GetRawInputData method described in the next section.
When the type of device is registered, the application begins to receive raw input. Whenever a registered device is used, Windows generates a WM_INPUT message containing the unprocessed data from the device.
Each window whose handle is associated with a registered device as described in the previous section must therefore check the messages it receives and take appropriate action when a WM_INPUT one is detected. In the sample application, the form's WndProc method is overridden for this purpose:
protected override void WndProc(ref Message message)
{
switch (message.Msg)
{
case Win32.WM_INPUT:
{
// Should never get here if you are using PreMessageFiltering
_keyboardDriver.ProcessRawInput(message.LParam);
}
break;
}
base.WndProc(ref message);
}
The LParam of the WM_INPUT message holds a handle to the raw data, which is read with GetRawInputData, imported from user32.dll:
[DllImport("User32.dll")]
internal static extern int GetRawInputData(IntPtr hRawInput, DataCommand command, [Out] IntPtr pData, ref uint size, int sizeHeader);
int dwSize = 0;
Win32.GetRawInputData( hdevice, DataCommand.RID_INPUT, IntPtr.Zero, ref dwSize, Marshal.SizeOf(typeof(Rawinputheader)));
This first call, with a null buffer, simply fills dwSize with the number of bytes required. The call is then repeated with a buffer of that size to retrieve the data:
if( Win32.GetRawInputData(hdevice, DataCommand.RID_INPUT, out _rawBuffer, ref dwSize, Marshal.SizeOf(typeof(RAWINPUTHEADER))) == dwSize)
//do something with the data
As mentioned above, the WM_INPUT message contains raw data encapsulated in a RAWINPUT structure. As with the RAWINPUTDEVICE structure described in the previous section, this structure is redefined in the RawInput dll as follows.
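What follows is a rough sketch of a typical C# declaration rather than the wrapper's exact code; the field names match the raw.keyboard.Message and raw.keyboard.vkey usage below, the 16-byte offset of the keyboard block assumes a 32-bit build, and System.Runtime.InteropServices is required:
[StructLayout(LayoutKind.Sequential)]
internal struct Rawinputheader
{
public uint dwType; // RIM_TYPEMOUSE, RIM_TYPEKEYBOARD or RIM_TYPEHID
public uint dwSize; // size, in bytes, of the whole RAWINPUT packet
public IntPtr hDevice; // handle of the device that generated the input
public IntPtr wParam; // RIM_INPUT or RIM_INPUTSINK
}
[StructLayout(LayoutKind.Sequential)]
internal struct Rawkeyboard
{
public ushort Makecode;
public ushort Flags;
public ushort Reserved;
public ushort vkey; // the virtual key code
public uint Message; // WM_KEYDOWN, WM_KEYUP, WM_SYSKEYDOWN...
public uint ExtraInformation;
}
[StructLayout(LayoutKind.Explicit)]
internal struct RawInputData
{
[FieldOffset(0)] public Rawinputheader header;
[FieldOffset(16)] public Rawkeyboard keyboard;
}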
The next step is to filter the message to see if it is a key down event. This could just as easily be a check for a key up event; the point here is to filter the messages so that the same keystroke isn't processed for both key down and key up events.
private const int WM_KEYDOWN = 0x0100;
private const int WM_SYSKEYDOWN = 0x0104;
...
if (raw.keyboard.Message == WM_KEYDOWN ||
raw.keyboard.Message == WM_SYSKEYDOWN)
{
//Do something like...
int vkey = raw.keyboard.vkey;
MessageBox.Show(vkey.ToString());
}
At this point, the RawKeyboard class retrieves further information about the message and the device that triggered it, and raises its custom KeyPressed event. The following sections describe how to get information on the devices.
Although this step isn't required to handle raw input, the list of input devices can be useful. The sample application retrieves a list of devices, filters it for keyboards, and then returns the number of keyboards. This is part of the information returned by the InputEventArgs in the RawKeyboard:
The GetRawInputDeviceList function fills an array of RAWINPUTDEVICELIST structures (one entry per device) through its pRawInputDeviceList parameter, returns the number of devices through uiNumDevices, and takes the size of a single RAWINPUTDEVICELIST structure as its final argument.
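As a sketch, the corresponding import looks much like the other two user32.dll imports used here:
[DllImport("User32.dll")]
internal static extern uint GetRawInputDeviceList(IntPtr pRawInputDeviceList, ref uint uiNumDevices, uint cbSize);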
In order to ensure that the first and second arguments are correctly configured when the list of devices is required, the method should be set up in three stages.
First, it should be called with pRawInputDeviceList set to IntPtr.Zero. This will ensure that the variable in the second argument (deviceCount here) is filled with the correct number of devices. The result of this call should be checked, as an error means that the code can proceed no further.
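A sketch of the first two stages, using the same variable names as the loop that follows (dwSize is the size of one RAWINPUTDEVICELIST structure):
uint deviceCount = 0;
int dwSize = Marshal.SizeOf(typeof(RAWINPUTDEVICELIST));

// Stage 1: pass IntPtr.Zero so that only deviceCount is filled in.
GetRawInputDeviceList(IntPtr.Zero, ref deviceCount, (uint)dwSize);

// Stage 2: allocate enough unmanaged memory and retrieve the actual list.
IntPtr pRawInputDeviceList = Marshal.AllocHGlobal(dwSize * (int)deviceCount);
GetRawInputDeviceList(pRawInputDeviceList, ref deviceCount, (uint)dwSize);

// Stage 3: iterate over the returned structures, as shown below.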
for( int i = 0; i < deviceCount; i++ )
{
RAWINPUTDEVICELIST rid = (RAWINPUTDEVICELIST)Marshal.PtrToStructure(
new IntPtr(( pRawInputDeviceList.ToInt32() + ( dwSize * i ))),
typeof( RAWINPUTDEVICELIST ));
//do something with the information (see section on GetRawInputDeviceInfo)
}
When any subsequent processing is completed, the memory should be deallocated.
Marshal.FreeHGlobal( pRawInputDeviceList );
Once GetRawInputDeviceList has been used to retrieve an array of RAWINPUTDEVICELIST structures as well as the number of items in the array, it is possible to use GetRawInputDeviceInfo to retrieve specific information on each device.
First, the method is imported from user32.dll:
[DllImport("User32.dll")]
extern static uint GetRawInputDeviceInfo(IntPtr hDevice, uint uiCommand, IntPtr pData, ref uint pcbSize);
Its arguments are as follows:
hDevice - the handle of the device, taken from the RAWINPUTDEVICELIST structure returned by GetRawInputDeviceList.
uiCommand - the kind of information requested: RIDI_PREPARSEDDATA (pre-parsed HID data), RIDI_DEVICENAME (the device name, used below) or RIDI_DEVICEINFO (an RID_DEVICE_INFO structure, whose cbSize member must be set before the call).
pData - a pointer to the buffer that receives the requested information.
pcbSize - the size of the buffer, updated by the call to the size actually required.
The example code uses a for loop to iterate through the available devices as indicated by the deviceCount variable. At the start of each loop, a RAWINPUTDEVICELIST structure called rid is filled with the information on the current device (see GetRawInputDeviceList section above).
In order to ensure that enough memory is allocated to store the desired information, the GetRawInputDeviceInfo method should first be called with pData set to IntPtr.Zero. The handle in the hDevice parameter is provided by the rid structure containing information on the current device in the loop.
uint pcbSize = 0;
GetRawInputDeviceInfo( rid.hDevice, RIDI_DEVICENAME, IntPtr.Zero, ref pcbSize );
In this example, the purpose is to find out the device name, which will be used to look up information on the device in the Registry.
Following this call, the value of pcbSize will correspond to the number of characters needed to store the device name. Once the code has checked that pcbSize is greater than 0, the appropriate amount of memory can be allocated.
IntPtr pData = Marshal.AllocHGlobal( (int)pcbSize );
And the method can be called again, this time to fill the allocated memory with the device name. The data can then be converted into a C# string for ease of use.
string deviceName;
GetRawInputDeviceInfo( rid.hDevice, RIDI_DEVICENAME, pData, ref pcbSize );
deviceName = (string)Marshal.PtrToStringAnsi( pData );
The rest of the code then retrieves information about the device and checks the Registry to retrieve device information.
Following the above code, deviceName will have a value similar to the following:
\??\ACPI#PNP0303#3&13c0b0c5&0#{class GUID}
// remove the \??\
item = item.Substring( 4 );
string[] split = item.Split( '#' );
string id_01 = split[0]; // ACPI (Class code)
string id_02 = split[1]; // PNP0303 (SubClass code)
string id_03 = split[2]; // 3&13c0b0c5&0 (Protocol code)
// The final part is the class GUID and is not needed here
The Class code, SubClass code and Protocol retrieved this way correspond to the device's path under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet, so the next stage is to open that key:
RegistryKey OurKey = Registry.LocalMachine;
string findme = string.Format(@"System\CurrentControlSet\Enum\{0}\{1}\{2}", id_01, id_02, id_03 );
The information we are interested in is the device's friendly description:
OurKey = OurKey.OpenSubKey(findme);
string deviceDesc = (string)OurKey.GetValue( "DeviceDesc" );
Two further details of the updated sample are worth noting. 1. Pre-message filtering. The PreMessageFilter class allows the WM_INPUT messages to be handled before they are dispatched to the form's WndProc:
private class PreMessageFilter : IMessageFilter
{
public bool PreFilterMessage(ref Message m)
{
if (m.Msg != Win32.WM_INPUT)
{
// Allow any non WM_INPUT message to pass through
return false;
}
return _keyboardDriver.ProcessRawInput(m.LParam);
}
}
It is activated by the following code in the RawInput class
public void AddMessageFilter()
{
if (null != _filter) return;
_filter = new PreMessageFilter();
Application.AddMessageFilter(_filter);
}
2. Capture keypresses if top-most window. The wParam of the _rawBuffer.header tells us if the input occurred while the application was in the foreground.
public static bool InputInForeground(IntPtr wparam)
{
return wparam.ToInt32() == RIM_INPUT;
}
Although the .NET Framework offers methods for most common purposes, the Raw Input API offers a more flexible approach to device data. The enclosed code and the explanations in this article will hopefully prove a useful starting point for anyone looking to handle multiple keyboards in an XP or Vista based application.
This article gives an overview of the different steps required to implement the Raw Input API. For further information on handling raw input, see the Raw Input documentation on MSDN.
Introduction
Purpose
This document is addressed to the software designers who are very familiar with the IBM Cognos BI product and who would like to use the IBM Cognos BI Software Development Kit (SDK) component. The IBM Cognos Software Development Kit contains many samples but they may be intimidating for someone just starting out with the SDK because they are somewhat complex and are not stand-alone as they make use of classes that are defined in other directories. This document offers a simpler approach in that it contains only one source file with a self-contained SDK application.
Applicability
This.
What is the IBM Cognos Software Development Kit?
Uses for the IBM Cognos Software Development Kit
The IBM Cognos Software Development Kit is an additional install component which allows the programmatic execution of,
- Most of the tasks that can be performed through the IBM Cognos BI user interface (UI)
- The automation of repetitive non-UI tasks, such as scheduling large number of reports, changing permissions for many reports, etc
- Integrating IBM Cognos BI into other applications.
Examples of IBM Cognos Software Development Kit specific tasks include,
- Display all IBM Cognos BI users and their access to reports
- Assign a new owner to all reports for which the original owner was deleted
- After a package is republished, update the reports and queries belonging to the package.
- Display a list of IBM Cognos BI reports in a custom web page
However, the IBM Cognos Software Development Kit cannot be used for tasks that are typically associated with configuration and packaged user interfaces. For example, the SDK cannot modify IBM Cognos BI user interface components such as,
- Portal Pages
- Logon Pages
- Any user interface widgets
- Studios in IBM Cognos Connection
- Branding
The API of the IBM Cognos Software Development Kit
The main component of IBM Cognos Software Development Kit is the Application Programming Interface (API) known as the BI Bus API. IBM Cognos Software Development Kit API supports specific programming languages through the use of toolkits. The supported programming languages are,
- Java
- .NET Framework languages
IBM Cognos Software Development Kit API can be used either in stand-alone programs or in web pages, such as ASPs or JSPs.
In addition to the toolkits, there are other IBM Cognos Software Development Kit components which will not be presented in this document. The other components are,
- URL interface which is used to automate actions by passing commands and parameters over HTTP in order to integrate with other applications
- Framework Manager (FM) SDK which is used to model metadata and publish packages
- Cognos Mashup Services (CMS) which exposes a REST and WSDL/SOAP interface to IBM Cognos BI outputs so that this output can be easily consumed by other applications.
The IBM Cognos Software Development Kit documentation
The IBM Cognos Software Development Kit documentation is divided into the following components (the number of pages are from the IBM Cognos 10 document set),
- IBM Cognos Software Development Kit Getting Started (44 pages)
- IBM Cognos Software Development Kit Installation and configuration guide (15 pages)
- IBM Cognos Software Development Kit Developer Guide (3565 pages)
- IBM Cognos Custom Authentication Provider Developer Guide (41 pages)
- IBM Cognos Framework Manager Developer Guide (172 pages)
- IBM Cognos Mashup Service Developer Guide (275 pages)
Since the IBM Cognos Software Development Kit documentation is quite large, the new user is encouraged to read about the essentials of the SDK in the developerWorks article “IBM Cognos Proven Practices: Approach to the IBM Cognos SDK”, which can be found at the following URL,
How to install IBM Cognos Software Development Kit
The IBM Cognos Software Development Kit must be installed on a computer where any server component of IBM Cognos BI is already installed. The SDK installation procedure is similar to the IBM Cognos BI Server installation procedure.
The remainder of this document assumes that the IBM Cognos Software Development Kit has been installed and an IBM Cognos BI Server is available. It is recommended, if possible, that a single server install be used when just starting out with the SDK. A Windows or Linux workstation-based install of IBM Cognos BI that uses the IBM Cognos Content Store and the deployment of the sample IBM Cognos PowerCubes makes an excellent self contained learning environment.
How IBM Cognos Software Development Kit works
The IBM Cognos Software Development Kit API contains classes which correspond to IBM Cognos BI services such as the Content Manager Service, the Report Service or the Monitor Service. The services that are available on an IBM Cognos BI server are specified in IBM Cognos Configuration and can be seen in under the Configuration tab in IBM Cognos Administration.
Illustration 1: IBM Cognos 10 Services listed in the IBM Cognos Configuration
Illustration 2: IBM Cognos 10 Services listed in IBM Cognos Administration
An IBM Cognos Software Development Kit program can run on a computer which does not have the IBM Cognos BI server installed. However the IBM Cognos Software Development Kit libraries are required to be installed on this computer. The SDK libraries are located at:
- <cognos-directory>\sdk\java\lib for Java; axisCognosClient.jar is the main Java SDK library and the rest of the jar files are to support axisCognosClient.jar and the samples that are installed with the SDK
- <cognos-directory>\sdk\csharp\lib for the .NET languages
An IBM Cognos Software Development Kit application will generally supply a set of credentials to the IBM Cognos BI server and the application will be bound and limited to these credentials. This means that the SDK application must be authenticated to perform operations in the same way a user needs to logon to the IBM Cognos Connection. This ensures secure access to IBM Cognos content.
The Structure of an IBM Cognos Software Development Kit Program
A typical IBM Cognos Software Development Kit program has four primary sections.
- Connect to the IBM Cognos BI service(s)
- Logon to IBM Cognos BI
- Execute tasks
- Logoff from IBM Cognos BI
This document contains a sample SDK application written in Java and contains methods that correspond to these primary sections.
Step 1: Connect to the IBM Cognos BI server
Method name : connectToCognos
The IBM Cognos Software Development Kit program must first connect to an
IBM Cognos BI Dispatcher using the URL of the IBM Cognos BI server. This
URL is defined in IBM Cognos Configuration and is the value of the
“Dispatcher URI for external applications” field. The URL has the form.
Illustration 3: IBM Cognos Configuration showing the “Dispatcher URI for external applications” field
Once connected to an IBM Cognos BI Dispatcher, the IBM Cognos Software Development Kit application requests an IBM Cognos BI service. Unless the application is going to use the Anonymous credentials, the application must request the Content Manager service because that is the service that handles the logon/logoff processes.
Step 2: Logon to the IBM Cognos BI server
Method name : logonToCognos
Logon is done through the Content Manager service. IBM Cognos BI supports both authenticated and anonymous user access. In this step, if the anonymous access is disabled, then the SDK application must logon using a namespace ID, a user name and its associated password. The namespace ID is the value of the Namespace ID field in IBM Cognos Configuration.
Illustration 4: The "Namespace ID" field in IBM Cognos Configuration
The namespace ID, user name and password are supplied in an XML string known as a credential. The credential takes the form of,
<credential>
<namespace>namespaceID</namespace>
<username>user</username>
<password>pwd</password>
</credential>
In the sample Java code, note the 3 lines following the call to the
logon() method. These lines are required to retrieve and
store information related to the authenticated session into the cmService
variable. In IBM Cognos 8 this was handled automatically but in IBM Cognos
10, this must be done programmatically as some of the session related
information is dynamic and needs to be refreshed before calling an IBM
Cognos 10 service. For more information on this, see the Managing Service
Headers topic in the chapter titled Coding Practices and Troubleshooting
in the IBM Cognos 10 SDK Developers guide.
Step 3: Execute application specific tasks
Method name : executeTasks
This step performs the specific tasks of the application. The code example shows how to display the list of packages in the “Public Folders”. The Content Manager Service performs a query with the "/content//package" search path and extracts the “searchPath” and the “defaultName” properties. The search path syntax is described in Section 4 below. The query returns an array of BaseClass objects, which is then printed. If the IBM Cognos BI samples have been installed, the output will look similar to,
GO Data Warehouse (analysis) - /content/folder[@name='Samples']/folder[@name='Models'] /package[@name='GO Data Warehouse (analysis)']
GO Data Warehouse (query) - /content/folder[@name='Samples']/folder[@name='Models'] /package[@name='GO Data Warehouse (query)']
GO Sales (analysis) - /content/folder[@name='Samples']/folder[@name='Models'] /package[@name='GO Sales (analysis)']
GO Sales (query) - /content/folder[@name='Samples']/folder[@name='Models'] /package[@name='GO Sales (query)']
Sales and Marketing (conformed) - /content/folder[@name='Samples']/folder[@name='Models'] /package[@name='Sales and Marketing (conformed)']
Step 4: Logoff from the IBM Cognos BI server
Method name : logoffFromCognos
The logoff is done through the Content Manager service. If an SDK application terminates without logging off, the resources that were allocated for the IBM Cognos 10 session that was established when the SDK application logged on will remain allocated until the session times out.
The Java code example
The following code is contained in the ZIP file attachment that accompanies this document. Unzip to the <c10_install>/sdk/java directory and a folder called SDKExample will be created. From the SDKExample folder, you can use the build.bat/build.sh scripts to build the application and you can use the run.bat/run.sh scripts to run the application.
Before building and running this example, there needs to be some editing of the Java source file, the build script and the run script.
- In the file SDKExample.java the variables nameSpaceID, userName and password must be modified to contain values that will work in your environment
- In the build and run scripts, the JAVA_HOME variable must be set to the path to the JDK used, with the minimum JDK version being 1.5 (also known as JDK 5.0). In the build.sh and run.sh scripts it may also be necessary to set the CRN_HOME environment variable.
import java.net.URL;
import javax.xml.namespace.QName;
import org.apache.axis.client.Stub;
import org.apache.axis.message.SOAPHeaderElement;
import com.cognos.developer.schemas.bibus._3.BaseClass;
import com.cognos.developer.schemas.bibus._3.BiBusHeader;
import com.cognos.developer.schemas.bibus._3.ContentManagerService_PortType;
import com.cognos.developer.schemas.bibus._3.ContentManagerService_ServiceLocator;
import com.cognos.developer.schemas.bibus._3.PropEnum;
import com.cognos.developer.schemas.bibus._3.QueryOptions;
import com.cognos.developer.schemas.bibus._3.SearchPathMultipleObject;
import com.cognos.developer.schemas.bibus._3.Sort;
import com.cognos.developer.schemas.bibus._3.XmlEncodedXML;

public class SDKExample {
    private static String dispatcherURL = "";
    private static String nameSpaceID = "NSID";
    private static String userName = "user";
    private static String password = "pwd";
    private ContentManagerService_PortType cmService = null;

    public static void main(String args[]) {
        SDKExample mainClass = new SDKExample(); // instantiate the class
        // Step 1: Connect to the Cognos services
        mainClass.connectToCognos(dispatcherURL);
        // Step 2: Logon to Cognos
        mainClass.logonToCognos(nameSpaceID, userName, password);
        // Step 3: Execute tasks
        mainClass.executeTasks();
        // Step 4: Logoff from Cognos
        mainClass.logoffFromCognos();
    }

    // Step 1: Connect to the Cognos services
    private void connectToCognos(String dispatcherURL) {
        ContentManagerService_ServiceLocator cmServiceLocator = new ContentManagerService_ServiceLocator();
        try {
            URL url = new URL(dispatcherURL);
            cmService = cmServiceLocator.getcontentManagerService(url);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // Step 2: Logon to Cognos
    private void logonToCognos(String nsID, String user, String pswd) {
        StringBuffer credentialXML = new StringBuffer();
        credentialXML.append("<credential>");
        credentialXML.append("<namespace>").append(nsID).append("</namespace>");
        credentialXML.append("<username>").append(user).append("</username>");
        credentialXML.append("<password>").append(pswd).append("</password>");
        credentialXML.append("</credential>");
        String encodedCredentials = credentialXML.toString();
        XmlEncodedXML xmlCredentials = new XmlEncodedXML();
        xmlCredentials.set_value(encodedCredentials);
        try {
            cmService.logon(xmlCredentials, null);
            SOAPHeaderElement temp = ((Stub)cmService).getResponseHeader("", "biBusHeader");
            BiBusHeader CMbibus = (BiBusHeader)temp.getValueAsType(new QName("", "biBusHeader"));
            ((Stub)cmService).setHeader("", "biBusHeader", CMbibus);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    // Step 3: Execute tasks
    private void executeTasks() {
        PropEnum props[] = new PropEnum[] { PropEnum.searchPath, PropEnum.defaultName };
        BaseClass bc[] = null;
        String searchPath = "/content//package";
        try {
            SearchPathMultipleObject spMulti = new SearchPathMultipleObject(searchPath);
            bc = cmService.query(spMulti, props, new Sort[] {}, new QueryOptions());
        } catch (Exception e) {
            e.printStackTrace();
            return;
        }
        System.out.println("PACKAGES:\n");
        if (bc != null) {
            for (int i = 0; i < bc.length; i++) {
                System.out.println(bc[i].getDefaultName().getValue() + " - " + bc[i].getSearchPath().getValue());
            }
        }
    }

    // Step 4: Logoff from Cognos
    private void logoffFromCognos() {
        try {
            cmService.logoff();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
Search Paths
A search path is used to specify the location of objects in the IBM Cognos BI Content Store hierarchy. A search path can specify a path to a single Content Store object or it can make use of expressions and wildcard characters to retrieve more that one Content Store object
The search path syntax is similar to a path in an operating system such as DOS or UNIX. More specifically, it resembles XPath which is a query language for selecting nodes from an XML document.
The syntax of a search path is described in detail in Appendix A of the IBM Cognos Software Development Kit Developer Guide.
Search path for one object
Example: A path taken from the IBM Cognos Connection UI, which locates the specific 'Budget vs. Actual’ report in the 'Report Studio Report Samples’ folder :
/content/folder[@name='Samples']/folder[@name='Models'] /package[@name='GO Data Warehouse (analysis)'] /folder[@name='Report Studio Report Samples'] /report[@name='Budget vs. Actual']
To get the search path for a report, click on the “Set properties” icon for the report in IBM Cognos Connection. In the “Set properties” dialog, click on the “View the search path, ID and URL“ link. A dialog will pop up, which contains the Search Path as the first field.
Illustration 5: The "Set Properties" icon in IBM Cognos Connection
Illustration 6: The search path of an IBM Cognos BI Content Store object as seen using IBM Cognos Connection to view the object's properties
Search path for multiple objects
The wildcard character (*) indicates all the objects under the specified root. For example, to search all the objects in the 'Report Studio Report Samples’ folder, use the search path :
/content/folder[@name='Samples']/folder[@name='Models'] /package[@name='GO Data Warehouse (analysis)'] /folder[@name='Report Studio Report Samples']/*
When a path starts with two slashes (//), all objects in the content store that fulfill the specified criteria are selected. For example,
- //folder – will return all Folder objects in the Content Store
- //report – will return all Report objects in the Content Store
When a path contains “//”, all descendants of the current object that fulfill the specified criteria are selected. For example, to select and return all the reports in the 'Report Studio Report Samples’ folder, use the search path :
/content/folder[@name='Samples']/folder[@name='Models'] /package[@name='GO Data Warehouse (analysis)'] /folder[@name='Report Studio Report Samples']//report
How to Run the IBM Cognos Software Development Kit Samples
The IBM Cognos Software Development Kit has two different toolkits: Java and .NET. The toolkits are located in the directory <cognos-directory>/sdk under the following folder names :
- java for the Java toolkit
- csharp for the .NET toolkit
Each of the toolkit directories contains many subdirectories with samples. The functionality of each sample is described in the .html file under the same directory. Comments in the source files describe the main purpose of each sample including a summary of which SDK methods are used. The steps to use the samples in the toolkits are described in the IBM Cognos Software Development Kit Getting Started guide.
To run a Java sample you have to build the Java sample first and then run it.
- The script files for Windows are build.bat and run.bat
- The script files for UNIX are build.sh and run.sh
- A JDK must be installed on the computer
- In each script file, the values of the two variables JAVA_HOME and CRN_HOME should be updated with the actual values for the computer
- It is also possible to build all the samples at once using build-samples.bat or build-samples.sh scripts located at <cognos-directory>\sdk\java
To run the SDK samples with .NET, you must have version 2.0 or 3.0 of the .NET Framework installed. To modify or rebuild the C# .NET samples, you must have a C# development environment installed, such as Visual Studio 2005 or the .NET Framework Software Development Kit (SDK) v2.0. In each sample, the build.bat script included with the sample code shows one way of building the application using the Visual Studio .NET compiler.. | http://www.ibm.com/developerworks/data/library/cognos/development/how_to/page565.html | CC-MAIN-2014-35 | en | refinedweb |
[code = java]
import static java.lang.System.out;
import java.util.Scanner;
public class PasswordChecker {
public static void main (String args []) {
Scanner PasswordInput = new Scanner (System.in);
String Password = PasswordInput.next ();
out.println("You typed >>>"+Password+"<<<");
out.println();
if (Password == "Nathan") {
out.println("The word you typed is stored ");
out.println("in the same place as the ");
out.println("real password!");
out.println("You must be a hacker!");
} else {
out.println("The word you typed is ");
out.println("not stored in the same place ");
out.println("as the real password!");
out.println("But that is okay!");
}
out.println ();
if (Password.equals ("Nathan")) {
out.println("The word you typed has the same ");
out.println("characters as the real password");
out.println("You can use our precious System");
} else {
out.println("The word you typed does not ");
out.println("have the same characters as ");
out.println("the password. You cannot ");
out.println("use our precious System");
}
}
}
[/code] | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/30161-if-statement-not-working-even-when-condition-true-else-statement-working-though-printingthethread.html | CC-MAIN-2014-35 | en | refinedweb |
Results 1 to 3 of 3
- Join Date
- Jul 2012
- 9
How to get the number of elements in a SystemV queue
I'm trying to create a function that takes a pointer to a task_struct process as input and returns the number of elements in a SystemV queue related to it (I have several tasks that created SystemV queues).
int getQueueElements(struct task_struct *p){
struct ipc_namespace *ns;
struct ipc_ids *ipc;
int qNum;
if(p){
ns = p->nsproxy->ipc_ns;
ipc = ns->ids[IPC_MSG_IDS];
qNum = ipc->in_use;
if(ipc->seq_max != -1)return qNum;
}
return(-1);
}
I have written the function above but it lets me know only the number of IPC objects currently in use, and not the number of messages on the queue related to the process p.
Thanks in advance for any suggestion about.
Rob
- Join Date
- Jul 2012
- 9
I forgot to specify that I'm working on an optimized scheduling policy for certain real-time processes based on SystemV queues for IPC, and I need to know the queues level to perform some operations.
- Join Date
- Jul 2012
- 9
Maybe I have solved adding a new field in the task_struct, in order to store the SysV queue unique Key during the queue creation.
In this way I should obtain all queues information (if the process uses queues) from the task_struct of a process... | http://www.linuxforums.org/forum/kernel/190640-how-get-number-elements-systemv-queue.html | CC-MAIN-2018-22 | en | refinedweb |
Our.
With just a single call to our API, a developer can easily add nudity-detection, within a given level of confidence, to their application or data processing pipeline. This allows for easy flagging of inappropriate images, without the need to build or integrate any complex image-processing libraries or machine-learning architecture.
SAMPLE INPUT
import Algorithmia

input = "_IMAGE_URL_"
client = Algorithmia.client('_API_KEY_')
algo = client.algo('sfw/NudityDetectioni2v/0.2.6')
print algo.pipe(input)
SAMPLE OUTPUT
{ "nude": true, "confidence": 0.93 } | https://demos.algorithmia.com/isitnude/ | CC-MAIN-2018-22 | en | refinedweb |
This title does not express what I mean quite well, I apologize, but it is difficult for me to express it better, because I don't quite understand what's going on due to lack of OOP knowledge and experience.
I am building a basic game, which is going to have the player run around a board with a 'hero' sprite, being chased by a 'badGuy' sprite. Because the two sprites share 5-6 methods, I decided to make a super class 'Sprite' and two classes 'Hero extends Sprite' and 'BadGuy extends Sprite'. Now for all those super methods, including stuff like:
getX(); getY(); getBounds(); render();
to work I need the super class to track the location of 'Hero' and 'badGuy'. So I implemented 'Sprite' like this:
package game.sprites;
import javafx.scene.shape.Rectangle;
import javax.swing.*;
import java.awt.*;
public class Sprite {
public static int x;
public static int y;
private int imageWidth;
private int imageHeight;
public Image image;
public Sprite(int x, int y) {
Sprite.x = x;
Sprite.y = y;
}
public static void render(Graphics g, Image image) {
g.drawImage(image, x, y, null);
}
public Image loadImage(String filePath) {...}
public void getImageDimensions() {...}
public Rectangle getBounds() {
return new Rectangle(x, y, imageWidth, imageHeight);
}
public Image getImage() {
return image;
}
public int getX() {
return x;
}
public int getY() {
return y;
}
}
The problem kicks in when I want to give different starting coordinates to 'Hero' and 'BadGuy' objects. Currently if I set them different, the second call of 'Sprite' overrides the first and both start at the same spot (which would be very frustrating if your goal is to run from 'badGuy').
'Hero' and 'BadGuy' are currently initialized this way:
public class BadGuy extends Sprite {
public BadGuy() {
super(x, y);
initBadGuy();
}
public void initBadGuy() {
loadImage("resources/craft.gif");
getImageDimensions();
x = 860; // Hero x = 20;
y = 560; // Hero y = 20;
}
So what I tried to do is make the subclasses override Sprite's x and y. But I googled it and I understand that this is very bad idea and thus it is not possible. So my question is something like: How can I make 'Sprite' inherit subclass 'x' and 'y' variables and perform the necessary methods when the certain subclass is called.
Now that I look at it - both the constructor and init<>() are identical for the subclasses, so maybe they can be implemented in 'Sprite' instead? Just a thought, but I'm getting quite confused already, so no idea.
Thanks.
You are getting this problem because x and y are declared as static fields in your Sprite class.
From JLS 8.3.1.1. static Fields
If a field is declared static, there exists exactly one incarnation of the field, no matter how many instances (possibly zero) of the class may eventually be created. A static field, sometimes called a class variable, is incarnated when the class is initialized (§12.4).
Use following code:
Change your Sprite Class like below:
public class Sprite { public int x; public int y; .... }
BadGuy class:
public class BadGuy extends Sprite { public BadGuy(int x, int y) { super(x, y); ... } .... }
Hero class:
public class Hero extends Sprite { public Hero(int x, int y) { super(x, y); ... } .... }
From Main class do following: //From where you want to create Object of both classes
public static void main(String[] args){ Hero hero = new Hero(20,20); BadGuy badGuy= new BadGuy(860,560); } | http://www.dlxedu.com/askdetail/3/c37706442f64633a3eed8b4b9436f9d1.html | CC-MAIN-2018-22 | en | refinedweb |
Timer.Alarm callback is blocked by caller
Hi,
I am experiencing an issue in the way the Timer.Alarm functionality appears to be working on my project, in that if I setup an alarm to fire a callback in X seconds, and the calling process then works for Y seconds (sleep, loop etc) and Y>X, the Alarm callback won't fire until the caller has finished.
For example (and yes I know this is not the best way but for illustrative purposes):
def timer_callback(src):
    Timer.Alarm(timer_callback, 5)
    t = utime.gmtime()
    print("{0:02}:{1:02}:{2:02} - Timer Callback".format(t[3], t[4], t[5]))
    utime.sleep(10)
So this sets the callback to trigger every 5 seconds, prints out the current time and sleeps for 10 seconds
>>> alarm_test.timer_callback(None)
00:01:17 - Timer Callback
00:01:22 - Timer Callback
00:01:32 - Timer Callback
00:01:42 - Timer Callback
00:01:52 - Timer Callback
00:02:02 - Timer Callback
00:02:12 - Timer Callback
So from this you can see that a Timer callback that sets an Alarm will block the timer until its processing has finished (as you can see with the output happening every 10 seconds, not 5). Is this the intended behavior? I would expect the alarm to trigger regardless of what the calling program is doing like an interrupt.
Just for the sake of it I even implemented another function just to set the alarm and call it in a new thread
def setalarm_thread(fn, alarmtime):
    Timer.Alarm(fn, alarmtime)

def timer_callback(src):
    _thread.start_new_thread(setalarm_thread, (timer_callback, 5))
    t = utime.gmtime()
    print("{0:02}:{1:02}:{2:02} - Timer Callback".format(t[3], t[4], t[5]))
    utime.sleep(10)
And the result is the exact same, the Alarm callback is blocked until the calling function has finished.
So is this the expected behavior? Or am I doing something silly here? Any help would be greatly appreciated.
@robert-hh Thanks for that insight. My issue was that (unlike my example) the callback would trigger a separate function that included an additional but non-related alarm call back that would get 'stalled' from the original callback.
Having worked out that as long as an alarm is created from the 'main' thread it will not be blocked by work from the caller, I have adjusted my program to use a mix of sensor alarms and secondary "worker" threads which are triggered by, but not run from, the alarm callback. Not sure if it is the best way but it seems to allow things to run smoothly without blocking.
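For anyone finding this later, a minimal sketch of that kind of split (assuming the usual machine.Timer API on Pycom firmware; untested on hardware): the callback only raises a flag, and the slow work runs in the main loop.

from machine import Timer
import utime

work_pending = False

def timer_cb(alarm):
    # keep the callback as short as an ISR: just note that work is due
    global work_pending
    work_pending = True

alarm = Timer.Alarm(timer_cb, 5, periodic=True)

while True:
    if work_pending:
        work_pending = False
        t = utime.gmtime()
        print("{0:02}:{1:02}:{2:02} - doing the slow work".format(t[3], t[4], t[5]))
        utime.sleep(10)  # the long job no longer delays the alarm callback itself
    utime.sleep_ms(50)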
@nathanh two things:
a) usually, the timer alarm is not enabled in the callback, at least not for periodic callbacks. You can do that on one_shot callbacks.
b) Callbacks are comfortable variants of an Interrupt Service Routine (ISR). Like ISR's, they are meant to finish as soon as possible. Until they return, further calls to it are blocked. That is the intended behavior. So having a long sleep in a callback is clearly a DON'T. Having the callback working longer than the expected time between interrupts is a design problem. | https://forum.pycom.io/topic/2505/timer-alarm-callback-is-blocked-by-caller | CC-MAIN-2018-22 | en | refinedweb |
As per Microsoft, .NET can now be described in one line, and that is as follows.
Great, this is absolutely right. ..
1.Performance
It is now much faster than Asp.Net Core 1.x, more than 20% faster than the previous version. You can check this on techempower.com as the following URL shows. Just search for aspnetcore on that page and you will get the result.
2.Minimum Code
We now need to write fewer lines of code to achieve the same tasks. Just for example, authentication is now easy with a minimum of code. When we talk about the Program.cs class, Asp.Net Core 2.0 needs far fewer lines of code in the Main method compared to the previous version. With earlier versions of Asp.Net Core, we needed to set up everything in the Main method, like the web server “Kestrel”, the current directory, and if you would like to use IIS then you needed to integrate IIS as well. But with Asp.Net Core 2.0, we don’t need to take care of these things; they are handled by the CreateDefaultBuilder method, which sets up everything automatically.
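For example, the whole Program.cs generated by a 2.0 project template is roughly the following (a sketch; it assumes the usual Startup class):

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    // CreateDefaultBuilder wires up Kestrel, IIS integration, the content root,
    // configuration (appsettings.json) and logging automatically.
    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}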
3.Razor Page
Asp.Net Core 2.0 has introduced Razor Pages to create dynamic pages in a web application. Using Razor Pages, we can create simple and robust applications using Razor features like Layout Pages, Tag Helpers, Partial Pages and Templates, and Asp.Net features like a code-behind page, directives etc. Razor Pages follow the standard MVC pattern. Here we use different types of directives like @page, @model, @namespace, @using etc. on the view page, and the respective code-behind page inherits from the PageModel class, which is the base class.
A Razor page is simply a view with an associated code-behind class which inherits from the PageModel class, an abstract class in “Microsoft.AspNetCore.Mvc.RazorPages“. It doesn’t use a controller for the view [.cshtml page] as we do in MVC; the code-behind works like a controller itself. These pages [.cshtml] are placed inside the Pages folder.
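A minimal example (names are illustrative): a page under the Pages folder and its code-behind class.

Pages/Hello.cshtml:

@page
@model HelloModel
<h2>Hello from a Razor Page at @Model.Now</h2>

Pages/Hello.cshtml.cs:

using System;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class HelloModel : PageModel
{
    public DateTime Now { get; private set; }

    public void OnGet()
    {
        Now = DateTime.Now;
    }
}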
Choose Web Application as a template when you would like to create Razor Pages application in Asp.Net Core 2.0.
4.Meta Packages and Runtime Store
Asp.Net Core 2.0 comes with the “Microsoft.AspNetCore.All” package, which is nothing but a metapackage for all the dependencies which are required when creating an Asp.Net Core 2.0 application. It means once you include this, you don’t need to include or depend on any other packages. That is because “Microsoft.AspNetCore.All” uses the .NET Core Runtime Store, which contains all the runtime packages required for Asp.Net Core development.
Here you can see only one reference is added and that is “Microsoft.AspNetCore.All” with version 2.0.5. So, this meta package will take care for all other packages required on runtime using Runtime Store.
You don’t need to add any other packages from outside; all is here with meta package and don’t need to take care of multiple packages with different version, here only have one version 2.0.5 or 2.x.x.
When you expand this reference section, you will find all the related packages are already referred with this meta package as following image shown.
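In the .csproj file this shows up as a single reference, something like:

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.5" />
</ItemGroup>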
5..Net Standard 2.0
The .Net Standard is a group of APIs which are supported by the .Net Framework. Compared to the previous version, .Net Standard 2.0 roughly doubles the number of supported APIs; around 3200+ APIs are now supported by .Net Standard 2.0.
Leaving aside the exception cases, .Net Standard 2.0 supports about 70% of the APIs which are being used or can be used with the .Net Framework.
Just for example, .Net Standard didn’t support Logging feature using Log4Net, so we are not able to use it with Asp.Net Core, but with .Net Standard 2.0, this is in. We can now use lots of feature which are part of .Net Framework but we were not using it in Asp.Net Core with .Net Standard 1.x. We can use .Net Framework along with .Net Standard 2.0.
So, now onwards we can use all related APIs with .Net Standard 2.0.
For more about read following article;
6.SPA Template
Asp.Net Core 2.0 comes with new SPA templates which can be used with the latest version of Angular 4, React.js, and Knockout.js with Redux. By default, the Angular 4 template is implemented with all the required pages, and the React one is the same. When we create an application using a SPA template, all the required packages are automatically installed using NPM. You don’t need to take care of the Angular packages or TypeScript packages; it will install them and give you a ready-made project from which you can start your coding.
7.HTTP.sys
The packages "Microsoft.AspNetCore.Server.WebListener" and "Microsoft.Net.Http.Server" have now been merged into a single package, Microsoft.AspNetCore.Server.HttpSys, and the namespace has been updated to Microsoft.AspNetCore.Server.HttpSys to match. So, from now on, rather than referencing two packages we only need to reference one.
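Opting into the HTTP.sys server is done on the host builder; a minimal sketch (Windows only, and the option value shown is purely illustrative):

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseHttpSys(options =>
        {
            // HttpSysOptions lets you tune queue length, connections, authentication, etc.
            options.MaxConnections = 100;
        })
        .UseStartup<Startup>()
        .Build();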
8.Razor View Engine with Roslyn
Asp.Net Core 2.0 now supports the Roslyn compiler and C# 7.1 features. So, we can now get the benefit of the Roslyn compiler in Asp.Net Core MVC applications with the Razor View Engine.
9.Visual Basic Support
With this new release of .Net Core 2.0, Visual Basic is now one of the .Net Core programming languages. We can now create different types of applications using Visual Basic code as well.
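For example, the CLI can scaffold a VB project directly (illustrative command):

dotnet new console -lang VB -o VbConsoleApp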
10.Output from Asp.Net Core Web Server
In the Output window we can now trace our application using the "Asp.Net Core Web Server" option. It shows how the application starts up and gets rendered in the browser, so every message from startup through rendering is available there.
Conclusion
So, today we have learned about top 10 features of Asp.Net Core 2.0. | http://www.mukeshkumar.net/articles/dotnetcore/10-new-features-of-asp-net-core-2-0 | CC-MAIN-2018-22 | en | refinedweb |
Serial Programming/Modems and AT Commands/S-Registers
S-Registers
S0: Ring to Answer After
Defines the number of ring bursts before the modem automatically answers an incoming call. When set to zero, auto-answer is disabled.
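For example, a typical Hayes-style exchange (exact responses vary by modem):

ATS0=2      set the modem to answer after two ring bursts
OK
ATS0?       query the current value of S0
002
OK
ATS0=0      disable auto-answer
OK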
S1: Ring Count
(Read only) Counts the number of ring bursts received. Reset to zero after 8 seconds of no ring.
S2: Escape Sequence Character
Specifies the ASCII value of the escape sequence character. The default value is 43 (the "+" character).
S3: Carriage Return Character
Specifies the ASCII value of the carriage return (CR) character. The carriage return terminates command lines and result codes.
S4: Line Feed Character
Specifies the ASCII value for the line feed (LF) character. The line feed character follows a carriage return at the end of long-form result codes. Short-form result codes are sent without line feeds.
S5: Backspace Character
Specifies the ASCII value for the backspace (BS) character that you can use to edit the command line.
S6: Wait Before Blind Dialing
On most Hayes-compatible modems this register sets the number of seconds the modem waits after going off-hook before it starts to dial when dial tone detection is disabled (blind dialing).
S7: Wait for Carrier After Dial
On most Hayes-compatible modems this register sets the number of seconds the modem waits for a carrier after dialing before it gives up and hangs up.
S8: Pause Time for Dial Delay Modifier
This register contains the pause time of the (,) dial modifier used in the dial string. Consecutive commas will invalidate the modem's approval if the total pause period exceeds 12 seconds.
S9: Carrier Detect Response Time
This register contains the time period that a received carrier signal must be present for the modem to recognise it and turn on the DCD signal.
S10: Delay Between Lost Carrier and Hang Up
On most Hayes-compatible modems this register sets how long the carrier must be lost before the modem hangs up, so that brief carrier interruptions do not drop the call.
S11: DTMF Tone Duration
This register contains the time period of the duration and inter-digital pause of the DTMF dialling tones. | https://en.m.wikibooks.org/wiki/Serial_Programming/Modems_and_AT_Commands/S-Registers | CC-MAIN-2018-22 | en | refinedweb |
Let's start out with a simple micro benchmark:
using System;
using System.Threading;

class Program
{
    public static void Main()
    {
        int start = Environment.TickCount;
        double[] d = new double[1000];
        for (int i = 0; i < 1000000; i++)
        {
            for (int j = 0; j < d.Length; j++)
            {
                d[j] = (double)(3.0 * d[j]);
            }
        }
        int end = Environment.TickCount;
        Console.WriteLine(end - start);
    }
}
On my system this takes about 7 seconds when run in optimized mode (i.e. not in the debugger).
Here's the optimized x86 code generated by the 2.0 CLR JIT for the body of the inner loop:
fld qword ptr [ecx+edx*8+8] ; d[j]
fmul dword ptr ds:[007B1230h] ; * 3.0
fstp qword ptr [esp] ; (double)
fld qword ptr [esp] ; (double)
fstp qword ptr [ecx+edx*8+8] ; d[j] =
The first thing that jumps out is that the double cast takes two x87 instructions, a store and a load. Part of the reason the cast is expensive is because the value has to leave the FPU and go to main memory and back into the FPU. In this particular case it turns out to be very expensive, because esp happens to be not 8 byte aligned.
Making a seemingly unrelated change can make the micro benchmark much faster, just adding the following two lines at the top of the Main method will make the loop run in about 2.3 seconds on my system:
double dv = 0.0;
Interlocked.CompareExchange(ref dv, dv, dv);
The reason for this performance improvement becomes clear when we look at the method prologue in the new situation:
push ebp
mov ebp,esp
and esp,0FFFFFFF8h
push edi
push esi
push ebx
sub esp,14h
This results in an 8 byte aligned esp pointer. As a result the fstp/fld instructions will run much faster. It looks like a "bug" in the JIT that it doesn't align the stack in the first scenario.
Of course, the much more obvious question is: Why does the cast generate code at all, isn't a double already a double?
Before answering this question, let's first look at another minor change to the micro benchmark. Let's remove the Interlocked.CompareExchange() again and change the inner loop body to the following:
double v = 3.0 * d[j];
d[j] = (double)v;
With this change, the loop now takes just 1 second on my system. When we look at the x86 code generated by the JIT, it becomes obvious why:
fld qword ptr [ecx+edx*8+8]
fmul dword ptr ds:[002A1170h]
fstp qword ptr [ecx+edx*8+8]
The redundant fstp/fld instructions are gone.
Back to the question of why the cast isn't always optimized away. The reason for this lies in the fact that the x87 FPU internally uses an extended 80 bit representation for floating point numbers. When you explicitly cast to a double, the ECMA CLI specification requires that this results in a conversion from the internal representation into the IEEE 64 bit representation. Of course, in this scenario we're already storing the value in memory, so this necessarily implies a conversion to the 64 bit representation, making the extra fstp/fld unnecessary.
Finally, in x64 mode all three variations of the benchmark take 1 second on my system. This is because the x64 CLR JIT uses SSE instructions that internally work on the IEEE 64 bit representation of doubles, so the cast is optimized away in all situations here.
For completeness, here's the code generated by the x64 JIT for the inner loop body:
movsd xmm0,mmword ptr [rcx]
mulsd xmm0,mmword ptr [000000C0h]
movsd mmword ptr [rcx],xmm0
I made another 0.34 update, since 0.36 is probably still a ways off.
Changes: | http://weblog.ikvm.net/default.aspx?date=2007-08-07 | CC-MAIN-2018-22 | en | refinedweb |
I am trying to run this script that is meant to interact with a device through the parallel port in my computer (Running 64 bit Windows 7). When I tried using pyparallel as shown below:
import parallel
p=parallel.Parallel()
I get this error:
Traceback (most recent call last):
File "C:\Python27\Scripts\SFL.py", line 7, in <module>
p=parallel.Parallel()
File "C:\Python27\lib\site-packages\parallel\parallelwin32.py", line 74, in __init__
self.ctrlReg = _pyparallel.inp(self.ctrlRegAdr)
WindowsError: exception: priviledged instruction
I'm pretty sure this means that as a user, I don't have the permissions to specify inputs and outputs to and from the parallel port. Is this correct? If so, is there a fix?
Any help would be GREATLY APPRECIATED as I am a programming noob. | http://www.python-forum.org/viewtopic.php?p=3847 | CC-MAIN-2015-40 | en | refinedweb |
Lang
Hi .Again me.. - Java Beginners
://
Thanks. I am sending running code...Hi .Again me.. Hi Friend......
can u pls send me some code on JPanel..
JPanel shoul have
1pic 1RadioButton
..
Like a Voter List
Java util package Examples
Java Util Package - Utility Package of Java
Java Util Package - Utility Package of Java
Java Utility package is one of the most commonly used packages in the java
program. The Utility Package of Java consist
java again - Date Calendar
java again I can't combine your source code yesterday, can you help me again. My problem is how we get result jtextfield2 from if jtexfield1 we enter(jTextfield keypressed) then the result out to jTextfield2,
This my jFrame
hi again - Java Beginners
/java/thread/thread-creation.shtml
code after changing..
import java.io.
matching database again - Java Beginners
matching database again Dear experts,
So happy I get through this ask a question page again. Thank God.
I want to say "A BIG THANK YOU" for helping me about the matching codes.
It is working now after fine tuning
Read data again - Java Beginners
Read data again sir,
i still hav a problem,first your code will be change like this :
in netbeans out message error 5. Can you help me again. My database like my question before.Can you fix and find the problem in my code
Read data again - Java Beginners
Read data again Hey,
i want to ask again about how to read data from txt,
My DB:
kd_div varchar(15),
nm_div varchar(30),
dep varchar(25),
jab varchar(35),
cab varchar(15),
ket varchar(30)
My data in txt file is://i
doesnt run again - Java Beginners
the soltion
Hi
I am sending u again the code, this code run in my
Hi..Again Doubt .. - Java Beginners
Hi..Again Doubt .. Thank u for ur Very Good Response...Really great..
i have completed that..
If i click the RadioButton,,ActionListenr should get call. It should add to the MS Acess table..Plz check this out....
hope u ill
call frame again - Java Beginners
read from jbutton1 in FrameA to FrameB,then i write "JAVA" in Jtextfield1(FrameB),then i click jbutton1 in FrameB. "JAVA" is a word i'am write in Jtexfield1
java util date - Time Zone throwing illegal argument exception
java util date - Time Zone throwing illegal argument exception Sample Code
String timestamp1 = "Wed Mar 02 00:00:54 PST 2011";
Date d = new Date...());
The date object is not getting created for IST time zone. Java
Read data again - Java Beginners
help again plz sorry - Java Beginners
help again plz sorry Thanks for giving me thread code
but i have a question
this code is comletelly right
and i want to make it runs much faster....
Thanks
util
Plz chk it and reply again - Java Beginners
Java util date
Java util date
The class Date in "java.util" package represents... to
string and string to date.
Read more at:
http:/
Drop Down Reloads again in IE..How to prevent this?
Drop Down Reloads again in IE..How to prevent this? Hi i was using two drop down box..One for Displaying date followed by another for Dispalying...? Its purely JavaScript and HTML page..Im not uing this concept in Java or any
Script on the page used too much memory. Reload to enable script again.
Script on the page used too much memory. Reload to enable script again. Using a java script to generate the dynamic report. If page open the full... to enable script again". After getting this error other pages also not working
again with xml - XML
again with xml hi all
i am a beginner in xml so pls give me the details regarding the methods used in it.
wat will return the methods... it is used.
pls post some example code for it..
thanks in advance hello
java - Java Interview Questions
information :
Thanks...java Can unreachable object become reachable again? Hi friend,
Yes,an unreachable object may become reachable again.
The garbage
Associate a value with an object
with an object in Java util.
Here, you
will know how to associate the value... of the several extentions
to the java programming language i.e. the "...;}
}
Download this example
this code will be problem it display the error again send jsp for registration form
this code will be problem it display the error again send jsp for registration form I AM ENTERING THE DETAILS OFTER IT DISPLAY THE ERROR PLEASE...;/option>
<option value="C#">C#</option>
<option value="Java
i written the program in the files but in adding whole file is writing once again - Java Beginners
Inheritance Example In Java
Inheritance Example In Java
In this section we will read about the Inheritance using a simple example.
Inheritance is an OOPs feature that allows to inherit...
the inheritance feature in Java programming. This example will demonstrate you
java - Java Beginners
java write a programme to to implement queues using list interface Hi Friend,
Please visit the following link:
Thanks
java - Applet
://
Thanks...java what is the use of java.utl Hi Friend,
The java
java persistence example
java persistence example java persistence example
stack and queue - Java Beginners
://
Hope...stack and queue write two different program in java
1.) stack
2
Java Client Application example
Java Client Application example Java Client Application example
STACK&QUEUE - Java Interview Questions
://
Hope that it will be helpful for you
Java - Java Interview Questions
://
Thank you for posting
Example of HashMap class in java
Example of HashMap class in java.
The HashMap is a class in java collection framwork. It stores values in the
form of key/value pair. It is not synchronized
Java set example
Java set example
In this section you will learn about set interface in java. In java set is a
collection that cannot contain duplicate element. The set... collection.
Example of java set interface.
import java.util.Iterator;
import to show class exception in java
Example to show class exception in java
In this Tutorial we are describing the example to show the use of
class exception in java .Exceptions are the condition
Java XStream
Java XStream
XStream is a simple library used to serialize the objects to XML and back
again into the objects.
Features of the XStream APIs
XStream provides better
java - Java Interview Questions
more information to visit....... an explanation with simple real time example
Hi friend,
Some points
Java Example Update Method - Java Beginners
Java Example Update Method I wants simple java example for overriding update method in applet .
please give me that example - Java Interview Questions
to :
Thanks
Static Method in java with realtime Example
Static Method in java with realtime Example could you please make me clear with Static Method in java with real-time Example
array example - Java Beginners
i cannot solve this example final Keyword Example
Java final Keyword Example
In this section we will read about the final... in Java in various different context. In this example we will create... Example : In this example we will create a
Java class into which we
enters an invalid value, they should be prompted again. For example:
Grade.... In this example, user input is in dark red,
prompts are in navy blue, and all other... a flexible range of values for the user's inputted answer. For example,
above you'll
Switch Statement example in Java
also provides you an example with complete
java source code for understanding... Switch Statement example in Java
This is very simple Java program
Database Connectivity Example In Java
Database Connectivity Example In Java
In this section we will read about how... the
SQL queries. In this example we will create a Java class into which we... and execute the above example you will get the output
as follows :
Then again
Working With File,Java Input,Java Input Output,Java Inputstream,Java io
Tutorial,Java io package,Java io example
;
Lets see an example that checks the existence of
a specified file...:\nisha>java CreateFile1
New file "myfile.txt" has been created... again then after
checking the existence of the file, it will not be created and you
Java Hello World code example
Java Hello World code example Hi,
Here is my code of Hello World program in Java:
public class HelloWorld {
public static void main...");
}
}
Thanks
Deepak Kumar Hi,
Learn Java
There and Back Again
There and Back Again
The weblog of Joshua Eichorn, AJAX, PHP and Open Source
Read full Description
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/93962 | CC-MAIN-2015-40 | en | refinedweb |
There are two separate mechanisms for administering security within the Solaris operating environment: WBEM ACL (access control list) based and Solaris RBAC (role-based access control).
The classes defined in the Solaris_Acl1.0.mof file are used to implement ACL-based security. This provides a default authorization scheme for the Solaris WBEM Services, and applies to all CIM operations. This feature is specific to the Solaris WBEM Services.
Instances of the Solaris_Acl1.0.mof classes determine the default authorizations assigned to a WBEM user and/or namespace. Provider programs, however, are allowed to override this scheme for CIM operations relating to instance manipulation; the Sun Solaris providers use the RBAC scheme to do this.
You can use the Sun WBEM User Manager (/usr/sadm/bin/wbemadmin) to add users to existing ACLs with either read or write permissions. See "Using the Sun WBEM User Manager to Set Access Control". You can also write WBEM applications using the Solaris_Acl1.0.mof classes to set access control. See "Using the APIs to Set Access Control".
The classes defined in the Solaris_Users1.0.mof file are used to implement Solaris RBAC security for defining user roles and privileges, via the Solaris Management Console (SMC) tool. The SMC tool lets you add users to existing roles and grant RBAC rights to existing users. (RBAC rights are managed within the SMC tool.) See "Solaris Management Console Tool".
The CIM Object Manager validates a user's login information for the machine on which the CIM Object Manager is running. A validated user is granted some form of controlled access to the entire Common Information Model (CIM) Schema. The CIM Object Manager does not provide security for system resources such as individual classes and instances. However, the CIM Object Manager does allow control of global permissions on namespace and access control on a per-user basis.
The following security features protect access to CIM objects on a WBEM-enabled system:
Authentication - The process of verifying the identity of a user, device, or other entity in a computer system, often as a prerequisite to allowing access to the resources in a system.
Authorization - The granting to a user, program, or process the right of access.
Replay protection - The CIM Object Manager protects against a client picking up and sending another client's message to the server by validating a session key.
A client cannot copy another client's last message sent to a CIM Object Manager. The CIM Object Manager uses a MAC for each message, based on a negotiated session key, to guarantee that all communication in the client-server session is with the same client that initiated the session and participated in the client-server authentication.
A MAC is a token parameter added to a remote call which contains security information used to authenticate that single message. It is used to confirm that the message came from the client that was originally authenticated for the session, and that the message is not being replayed from some other client. This type of mechanism is used in WBEM for RMI messages. The session key negotiated in the user authentication exchange is used to encrypt the security information in the message's MAC token.
Note that no digital signing of messages is performed.
Once the CIM Object Manager has authenticated the user's identity, that identity can be used to verify whether the user should be allowed to execute the application or any of its tasks. The CIM Object Manager supports capability-based authorization, which allows a privileged user to assign read and write access to specific users. These authorizations are added to existing Solaris user accounts.
The SMC tool lets you add users to existing roles and grant RBAC rights to existing users. (RBAC rights are managed within the SMC tool.)
Change to the location of the SMC invocation command by typing the following:
# cd /usr/sbin
Start SMC by typing the following command:
# smc
Double-click on "This Computer" (or single-click the expand/compress icon next to it) in the left-hand Navigation panel to expand the tree beneath it. Do the same for "System Configuration", and you will see the Users icon underneath.
Click on the Users icon to start the application.
For more information on the , see the man page smc(1M). | http://docs.oracle.com/cd/E19455-01/806-6468/6jfdjss8r/index.html | CC-MAIN-2015-40 | en | refinedweb |
got an error while compile this program manually.
mapping.findForward("errors.jsp");
}
}
i set both servlet,struts jar files and i got an error in saveErrors()
error
Heading
cannot find...i got an error while compile this program manually. import AAUtil but gives error on compile.
Hi Friend,
Please visit
i am getting the problem when i am downloading the pdf file from oracle 10g database - Struts
i am getting the problem when i am downloading the pdf file from oracle 10g... into datbase and download the pdf file from database. but when i created the pdf file... but it is not downloading .Please help to me.i am getting the below error when downloading
Hi
Hi The thing is I tried this by seeing this code itself.But I;m facing a problem with the code.Please help me in solving me the issue.
HTTP Status... an internal error () that prevented it from fulfilling this request.
exception
i got an exception while accept to a jsp
i got an exception while accept to a jsp type Exception report
description The server encountered an internal error... in a file.later i changed it to ANSII problem is resolved...
System.out.println(j+" is greater than "+i);
}
}
Hi Friend...
int i = Integer.parseInt(args[0]);
int j = Integer.parseInt(args[1]);
if
Hi... - Struts
Hi... Hello,
I want to chat facility in roseindia java expert please tell me the process and when available experts please tell me Firstly you open the browser and type the following url in the address bar
java i/o - Java Beginners
gets closed.
when i open the program again and enter text in this case previous texts get replaced with new text.
i tried my best but got failed plz tell me...java i/o Dear sir,
i wrote a program where program asks "Enter your error - Struts
is
I THINK EVERY THING IS RIGHT BUT THE ERROR IS COMING I TRIED BY GIVING INPUT...java struts error
my jsp page is
post the problem
Hi
Hi Hi All,
I am new to roseindia. I want to learn struts. I do not know anything in struts. What exactly is struts and where do we use it. Please help me. Thanks in advance.
Regards,
Deepak
hi
hi I have connected mysql with jsp in linux and i have used JDBC connectivity but when i run the program, its not working the program is displaying need some help I've got my java code and am having difficulty to spot what errors there are is someone able to help
import java.util.Scanner;
public class Post {
public static void main(String[] args) {
Scanner sc
I have a doubt regarding action - Struts
I have a doubt regarding action hi,
I have doubt regarding struts,I got response through jsp and once again it redirecting to action. If anybody knows pls respond me.
with regards,
Teju Hi friend
hi!
hi! how can i write aprogram in java by using scanner when asking... to to enter, like(int,double,float,String,....)
thanx for answering....
Hi...);
System.out.print("Enter integer: ");
int i=input.next have got this code but am not totally understanding what the errors. Could someone Please help. Thanks in advance!
import java.util.Random;
import java.util.Scanner;
private static int nextInt() {
public classboss - I-Report - Struts
Struts - Jboss - I-Report Hi i am a beginner in Java programming and in my application i wanted to generate a report (based on database) using Struts, Jboss , I Report..
hi.. I want upload the image using jsp. When i browse the file then pass that file to another jsp it was going on perfect. But when i read...);
for(int i=0;i<arr.length-1;i++) {
newstr=newstr.. Hi,
I am new in struts please help me what data write in this file ans necessary also...
struts-tiles.tld,struts-beans.tld,struts........its very urgent Hi Soniya,
I am sending you a link. This link
error in program when trying to load image in java - Java Beginners
error in program when trying to load image in java Hi, I'm trying... below is the codes I used to create a logo. Once I tried to run the program I got that error message above saying uncaught error fetching image. I don't know.. cant find any compile time error but there is runtime error.
i cant find any compile time error but there is runtime error. ...){
System.out.println("Error:connection not created");
}
Hi,
I tried the following updated code and it is working. Please copy and run
Hi.... - Java Beginners
Hi.... Hi Friends
when i compile jsp file then got the error "code to large for try statement" I am inserted 177 data please give me solution and let me know what is the error its very urgent Hi Ragini
Struts - Struts
Struts Hi,
I m getting Error when runing struts application.
i have already define path in web.xml
i m sending --
ActionServlet...
ActionServlet
*.do
but i m getting
runtime error - Java Beginners
runtime error I created a sample java source ,compiled it succesfully using command prompt but when i tried to run i got the folowing error
" Exception in thread "main" java.lang.UnsupportedClassVersionError"
I have set
Hi
Hi I want import txt fayl java.please say me...
Hi,
Please clarify your problem!
Thanks
hi see and give me reply as soon as possible
hi see and give me reply as soon as possible Hi Friend,
I got path,but it will again ask path error
first i was gave index.jsp.It is displayed and it is not going to next level.
HTTP Status 404 - /struts/hello
type
when i hit enter it represent action like click a button
when i hit enter it represent action like click a button Hi,
Plz provide a program in html like I want 2 submit buttons,when i hit the enter button it will represent the action like click the 1st submit i want to develop a online bit by bit examination process as part of my project in this i am stuck at how to store multiple choice questions options and correct option for the question.this is the first project i am doing
Hi.. - Java Beginners
Hi.. Hi,
I got some error please let me know what is the error
integrity constraint (HPCLUSER.FAFORM24GJ2_FK) violated - parent key
its very urgent Hi Ragini
can u please send your complete source
hi..
hi.. <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional...=browsefile.split("/");
System.out.println("ARR is="+arr);
for(int i=0;i<arr.length-1;i++) {
newstr=newstr
The server encountered internal error() - Struts
the source of the error.
Hi friend,
For solving the problem visit...The server encountered internal error() Hello,
I'm facing the problem in struts application.
Here is my web.xml
MYAPP
how can i make a session invalidate in java when browser is closed
how can i make a session invalidate in java when browser is closed how can i make a session invalidate in java when browser is closed
...
Hi deepak,
Thanks,
But it happens after sometime like which we have
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://roseindia.net/tutorialhelp/comment/8364 | CC-MAIN-2015-40 | en | refinedweb |
Details
Description
It would be useful to be able to register handlers after SolrCore has been initialized. It is also useful to be able to ask what handlers are registered and to where. This patch adds the following functions to SolrCore:
SolrRequestHandler registerRequestHandler(String handlerName, SolrRequestHandler handler);
Collection<SolrRequestHandler> getRequestHandlers(Class<? extends SolrRequestHandler> clazz);
It also guarantees that request handlers will be initialized with an argument saying what path it is registered to. RequestHandlerBase gets a bean for the registered path.
While discussing this, Yonik suggested making it possible to defer initialization of some handlers that will be infrequently used. I added the 'LazyRequestHandlerWrapper' (if taking this out makes the patch any easier to commit - it can get its own issue)
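As a usage sketch based on the description above (method and class names are taken from the patch description rather than the final committed API, and the handler class and path are made up):

// Register a handler at runtime; per the discussion below, the caller is
// responsible for initializing the handler before registering it.
SolrCore core = SolrCore.getSolrCore();
SolrRequestHandler handler = new MyCustomRequestHandler();   // hypothetical handler class
handler.init(new NamedList());
core.registerRequestHandler("/my/custom", handler);          // can serve requests immediately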
check:
changed this so that LazyRequestHandlerWrapper is not a public class. That seems cleaner as it is not something that should be used directly
Updated in response to Hoss' comments:
1. gets rid of the get by class thing
2. adds Map<> getRequestHandlers()
3. gets rid of the extra param to init()
two outstanding questions from the email discussion...
1) yonik seemed to be concerned about synchronization issues involved with getting a request handler by name now that they can be registered dynamically on the fly ... he should probably clarify that.
2) if we want request handlers to be able to find out what name(s) they are registered with (now that anyone can call core.registerRequestHandler I might give the exact same identical instance multiple names) they should be able to do that from the init method ... since we aren't changing the API of the init method, we should probably make sure that registering a handler causes...
handler = clazz.newInstance();
this.register( name, handler );
handler.init( args );
...to happen in that order.
i would even argue that when registering multiple handlers (ie: from the config) we may want the pseudocode to be...
foreach (handlerconfig){ handler = clazz.newInstance(); this.register( name, handler ); }
foreach (key in registry){ lookup(key).init( args ); }
...so that all handlernames are defined before any init methods are called.
#2, smart! yes
This update loads handlers in the style suggested by Hoss.
It makes sure everything is instantiated and registered before calling init() on any handlers registered in solrconfig.xml
It calls init() on handlers in the order they were defined.
The only open issue is if SolrCore.getRequestHandler() should be synchronized. I can't think of any potential problems if it isn't but i could be missing something.
I'll let whoever commits this decide if it should be synchronized or not.
w.r.t. synchronization of getRequestHandler(), I was just thinking ahead to when registerRequestHandler() may be called after the constructor for SolrCore.
Registration before initialization is interesting, but again, this only works easily if registerRequestHandler() is restricted to the SolrCore constructor. If this were to change in the future, it would expose un-initialized handlers to requests.
The API in this patch lets you call SolrCore.registerRequestHandler( path, handler ) at any point. If you use this api, you are responsible to make sure the handler is initialized - this may or may not require calling init( NamedList ) depending on the handler implementation.
The "Registration before initialization" is only save for SolrCore to use within its constructor. RequestHandlers.registerHandlers( NodeList nodes ) - is package private and only called from the SolrCore constructor.
I still don't see how synchronization becomes an issue - unless someone is trying a bizarro use case where someone registers a handler within SolrRequestHandler.handleRequest() and expects that exactly the next request use the newly registered handler.
In my use case, I have a custom filter that extends SolrRequestDispatcher. This filter initializes solr normally, then inspects what was registered and automatically sets up the rest of the environment.
If you modify the map in one thread and read from it in another thread, that requires synchronization to work correctly (even if it's a different entry being accessed).
But what is the failure mode?
Suppose, in thread A, I call:
SolrCore.getSolrCore().registerRequestHandler( "/path/to/handler", handler );
then 5 seconds later in thread D, I call:
SolrCore.getSolrCore().getRequestHandler( "/path/to/handler" )
Can we be sure the new handler will be returned? Is there any chance of anything exploding? Is it only in the microseconds around touching the map that things are undefined?
If it is a graceful error (null or whatever was there before), i don't think this case needs to be synchronized. If something else could happen, it should be.
> But what is the failure mode?
Any number of modes of failure.... it's very tough to predict them (I think you'd have to be Doug Lea).
1)
thread #1 does map.put("/path/to/handler", handler)
thread #2 iterates over the map and gets a ConcurrentModificationException
2)
thread #1 does map.put("/path/to/handler", handler)
thread #2 does map.put("/path/to/handler2", handler2)
a) If they hash to the same bucket, one could overwrite the other
b) one or both could cause resize() to be invoked... ouch! many different modes of failure there
3)
thread #1 does map.put("/path/to/handler", handler) causing resize to be called()
thread #2 does a map.get("/myexistinghandler") and it gets back null
I'd agree with you if the only mode of failure was to get back null for the current object being put in the map (since it's a race between threads anyway, null is a valid view - one should synchronize externally in that case anyway). But, any insert can mess up all other reads.
synchronized it is!
Rather than synchronizing each function call, I'm using a synchronized map:
private final Map<String, SolrRequestHandler> handlers = Collections.synchronizedMap(
new HashMap<String,SolrRequestHandler>() );
related note i'm typing before i forget...
in SOLR-81 i tried to call SolrCore.getSolrCore.getDataDir() in the init method of a requestHandler and got an infinite loop. I can't remember if this type of situation would be prevented by this patch or not ... if it isn't that doesn't mean this patch shouldn't be committed, it just means we should probably open a separate bug to try and detect/prevent/error in that situation.
yes, that situation is handled by this patch. This was one of my primary reasons for writing it!
This patch lets you call SolrCore.getCore() and inspect the schema/index/whatever. Without it, you need to do some sort of lazy loading after the first request.
A couple of comments...
- For lazy loading, you don't even want to load the class if it's not used (loaded classes take up resources, and there may be optional jars that will cause errors).
- it really seems like init() must be called before any calls to handleRequest. To ensure this, I don't think we can do the registration in between. This isn't just a hypothetical problem... think about when a new web page is published that causes a new type of request to start hitting an existing Solr server... 10s to 100s of requests per second for a new handler all of a sudden. The likelihood becomes very high that another request will cause handleRequest() to be called before or concurrently with init().
> - For lazy loading, you don't even want to load the class if it's not used (loaded classes take up
> resources, and there may be optional jars that will cause errors).
Ok - I'm a little nervous about that because I like things to fail loudly at startup rather than wait to tell you they were configured incorrectly (SOLR-179). But if you are using lazy loading, that is probably the behavior you would expect.
I'll change it so that the LazyRequestHandlerWrapper stores the string value for the class name rather than the Class itself.
>
> - it really seems like init() must be called before any calls to handleRequest.
>
yes, init() must be called before any call to handleRequest() - absolutely
Correct me if I have the lifecycle wrong, but I think it is ok:
1. SolrDispatchFilter.init() calls SolrCore.getSolrCore()
2. SolrCore.getSolrCore() calls SolrCore constructor
3. SolrCore constructor initializes schema, listeners, index and writers
4. then calls reqHandlers.initHandlersFromConfig( SolrConfig.config )
this function:
a. creates each handler for solrconfig.xml and registers it
b. calls init() on each handler - (since register was called first, each handler knows what else exists, but it may or may not be initialized)
5. initialize searcher / updateHandler
6. SolrDispatchFilter.init() finishes and solr starts accepting requests.
All handlers call init() before they could possibly receive any requests. No requests can hit solr during the limbo period (a-b), It is only in the "unstable" state in the SolrCore constructor - I think the benefits of handlers knowing what else is registered during their init() method is worth the slightly awkward construction.
The public interface:
SolrCore.register( String handlerName, SolrRequestHandler handler )
assumes that the handler is properly initialized. As soon as this is called it can immediately start accepting requests. I will make the javadoc more clear on this point.
The only potentially dangerous function is (4) initHandlersFromConfig. This is a package private function that definitely should not be called anywhere else. Calling this function twice is not normal; if someone does it, they are asking for trouble.
1. Changed the LazyRequestHandlerWrapper to hang on to a string rather than a class, so it does not access the class until it is needed. (saves memory, but delays errors)
2. Added more explicit documentation
initHandlersFromConfig still registers all handlers before initializing them - i am confident this is ok unless it is called outside of the solr core constructor.
> (saves memory, but delays errors)
Delays errors can also be a feature (if things need to be configured first, or jars need to be dropped in the right spot, etc).
I think getWrappedHandler() needs to be synchronized or else
- multiple instances could be instantiated
- an instantiated instance could be handed back to a different thread before or during the handler's init()
- general spookiness even after init() finishes due to lack of synchronization (initialized data won't necessarily be seen correctly in a different thread)
One line change adding synchronized to:
public synchronized SolrRequestHandler getWrappedHandler()
thanks yonik
- - - - - - - - - -
>> (saves memory, but delays errors)
> Delays errors can also be a feature (if things need to be configured first, or jars need to be dropped in the right spot, etc).
>
I'm convinced. With SOLR-179 you can configure things to stop after errors - if you want some things to stop while others continue, you can make them lazy loaded.
Committed. Thanks Ryan!
If you all are more comfortable with
Collection<SolrRequestHandler> getRequestHandlers()
then:
Collection<SolrRequestHandler> getRequestHandlers(Class<? extends SolrRequestHandler> clazz)
that is an easy change. Likewise we can postpone the Lazy bit if it makes anything easier.
I included tests for everything i think is testable about these changes, and added nice javadocs. | https://issues.apache.org/jira/browse/SOLR-182?focusedCommentId=12478561&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-40 | en | refinedweb |
basket-client 0.3.7
Installation
$ pip install basket-client
Usage
Do you want to subscribe people to Mozilla’s newsletters? All you need to do is:
import basket

basket.subscribe('<email>', '<newsletter>', <kwargs>)
You can pass additional fields as keyword arguments, such as format and country. For a list of available fields and newsletters, see the basket documentation.
Are you checking to see if a user was successfully subscribed? You can use the lookup_user method like so:
import basket

basket.lookup_user(email='<email>', api_key='<api_key>')
And it will return full details about the user. <api_key> is a special token that grants you admin access to the data. Check with the mozilla.org developers to get it.
Settings
- BASKET_URL
- URL to basket server, e.g.:
- BASKET_API_KEY
- The API Key granted to you by the mozilla.org developers so that you can use the lookup_user method with an email address.
- BASKET_TIMEOUT
- The number of seconds basket client should wait before giving up on the request. Default: 10
If you’re using Django you can simply add these settings to your settings.py file. Otherwise basket-client will look for these values in an environment variable of the same name.
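For Django, that means something like the following in settings.py (the URL and key shown here are placeholders, not real values):

# settings.py
BASKET_URL = 'https://basket.mozilla.org'   # example server URL
BASKET_API_KEY = 'your-api-key-here'
BASKET_TIMEOUT = 5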
Tests
To run tests:
$ python setup.py test
Change Log
v0.3.7
- Add the lookup_user function.
- Add the BASKET_API_KEY setting.
- Add the BASKET_TIMEOUT setting.
v0.3.6
- Add the confirm function.
v0.3.5
- Add tests
v0.3.4
- Fix issue with calling subscribe with an iterable of newsletters.
- Add request function to those exposed by the basket module.
v0.3.3
- Add get_newsletters API method for information on currently available newsletters.
- Handle Timeout exceptions from requests.
-.7.xml | https://pypi.python.org/pypi/basket-client/0.3.7 | CC-MAIN-2015-40 | en | refinedweb |
J2SE Web Services
By mkuchtiak on Oct 08, 2006
To create a web service you don't need to have an application server (or web server) installed. This is an illustration of how a simple web service can be created in a NetBeans 5.5 Java Project. I was inspired by Robert Eckstein and Rajiv Mordani's article about JAX-WS 2.0 with the Java SE 6 Platform and Petr Blaha's blog: Developing Web services for Mustang in Netbeans.
Now in Netbeans5.5 there is a JAX-WS 2.0 technology integrated, which
enables you to create and consume web services in Netbeans projects.
What is actually supported is :
- Web Service creation in Web application and EJB Project
- Web Service consumption (WS Client) in Web Application, EJB Project, Enterprise Client Application and raw Java Application.
What's is not supported yet is Web
Service Creation in raw Java
Application (J2SE Project).
This is a simple workaround showing how this feature can be achieved in NetBeans 5.5.
Steps to create RPC/literal Web Service in Java Application.
- Start Netbeans5.5, create a new Java Application project and name it GeometricalWS.
- In Project Customizer add JAX-WS2.0 Library (this step is necessary only if you run Netbeans IDE on JDK1.5 platform) :
- Create a simple Java class and name it CircleFunctions. Also specify the Package Name for the class: geometricalws
- Using the @WebService annotation we'll convert this java class to a simple web service. Copy the following code into the editor window:
package geometricalws;
import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;
@WebService(name="Circle", serviceName="CircleService", portName="CirclePort")
@SOAPBinding(style=SOAPBinding.Style.RPC)
public class CircleFunctions {
@WebMethod(operationName="area")
public double getArea(@WebParam(name="r")double r) {
return java.lang.Math.PI * (r * r);
}
@WebMethod(operationName="circumference")
public double getCircumference(@WebParam(name="r")double r) {
return 2 * java.lang.Math.PI * r;
}
}
(Notice that besides the @WebService annotation, the @WebMethod and @WebParam annotations are used to annotate the web service operations and their parameters. These annotations influence the way the WSDL file is created from this class.)
- Using the javax.xml.ws.Endpoint:publish method the web service can be deployed to a simple web server provided by jax-ws runtime. Modify the Main class in the following way:
package geometricalws;
import javax.xml.ws.Endpoint;
public class Main {
public static void main(String[] args) {
String wsAddress = "";
Endpoint.publish(
wsAddress,
new CircleFunctions());
System.out.println("Web service was published successfuly.\n"+
"WSDL URL: "+wsAddress+"?WSDL");
// this is the way to make the local web server running until the process is killed
while (true) {
try {
Thread.sleep(2000);
} catch (InterruptedException ex) {}
}
}
}
(Note that the URL address for the web service and an instance of the CircleFunctions class are specified in the publish method.)
- Run the application (using the Run Project action on the project node). The message in the output window notifies you that the web service was published successfully:
Web service was published successfuly.
WSDL URL:
- To ensure the web service was really published open your browser and type the URL of web service description file (wsdl file) to the url text field: The wsdl file should appear in browser window.
This is an example of how the SOAP communication for this web service looks on the wire:
SOAP Request:
<?xml version="1.0" encoding="UTF-8"?>SOAP Response:
<soapenv:Envelope xmlns:
<soapenv:Body>
<ans:area xmlns:
<r>10.0</r>
</ans:area>
</soapenv:Body>
</soapenv:Envelope>
<?xml version="1.0" encoding="UTF-8"?>This is the excelent article comparing various (SOAP) binding styles:
<soapenv:Envelope xmlns:
<soapenv:Body>
<ans:areaResponse xmlns:
<return>314.1592653589793</return>
</ans:areaResponse>
</soapenv:Body>
</soapenv:Envelope>
Steps to create a Client:
Creating a web service client is very simple using NetBeans 5.5:
- Create another Java Application project and name it ClientProject.
- Create a new Web Service client and type the URL of the wsdl file into the WSDL URL text field, then set the package name for the client artifacts (java classes that will be generated from the wsdl):
Press <Finish> button. The Web Service Reference node: CircleFunctions should be created.
- Open Main.java in editor and Drag&Drop the "area" operation node from the ProjectsView to editor. A piece of code should be generated inside the main method.
Then, set the value for WS operation argument (r). See the picture:
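Since the screenshot is not reproduced here, the generated client code looks roughly like this (the class and method names follow from the annotations used above: CircleService, getCirclePort(), area(); treat the exact shape as an approximation of what NetBeans generates):

public static void main(String[] args) {
    try { // Call Web Service Operation
        geometricalws.CircleService service = new geometricalws.CircleService();
        geometricalws.Circle port = service.getCirclePort();
        double r = 10.0;                         // value for the WS operation argument
        double result = port.area(r);
        System.out.println("Result = " + result);
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}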
- Run the Project. The following message should appear in the Output Window:
Result = 314.1592653589793
Posted by Şiirler on November 01, 2010 at 02:49 PM CET # | https://blogs.oracle.com/milan/entry/j2se_web_services | CC-MAIN-2015-40 | en | refinedweb |
Opened 5 years ago
Closed 5 years ago
#13606 closed (duplicate)
admin raw_id_fields fail to check against non-numerical input
Description
Inputting a non-numerical value in a foreign key field using raw_input_fields produces a ValueError exception in django admin.
E.g write "wer" to any foreign key field, which has been declared with raw_input_fields fails:
Django Version: 1.2.1 Exception Type: ValueError Exception Value: invalid literal for int() with base 10: 'wer'
Using Django 1.2-beta I was able to fix this bug by simply appending these two lines to
django/contrib/admin/widgets.py label_for_value()-function:
def label_for_value(self, value): key = self.rel.get_related_field().name try: obj = self.rel.to._default_manager.using(self.db).get(**{key: value}) except self.rel.to.DoesNotExist: return '' # simple fix except ValueError: return '' # end of fix return ' <strong>%s</strong>' % escape(truncate_words(obj, 14))
Now in Django 1.2.1 this fix unfortunately doesn't work.
Change History (4)
comment:1 follow-up: ↓ 2 Changed 5 years ago by kmtracey
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to duplicate
- Status changed from new to closed
comment:2 in reply to: ↑ 1 Changed 5 years ago by petrikuittinen@…
comment:3 Changed 5 years ago by anonymous
- Has patch set
- Needs tests set
- Resolution duplicate deleted
- Status changed from closed to reopened
I know this a duplicate ticket, BUT the previous ticket for was Django version 1.1.1.
Django 1.2+ needs to have patches in two files to get this issue fixed.
changes to django/forms/models.py from line 984::4 Changed 5 years ago by kmtracey
- Resolution set to duplicate
- Status changed from reopened to closed
The previous ticket (#13149) is still open; the place to post information on what is necessary to get it fixed in current trunk code is there. It's one problem, there should be only one ticket for tracking getting it fixed. The version for that ticket being set to 1.1 just indicates that the problem has been around at least since 1.1, it does not mean the fix will be specific to 1.1. For any ticket the fix will be made to current trunk and backported to the current release branch (if necessary). If the existing patches on that ticket no longer work, then it should be marked patch needs improvement, or better yet a new patch should be uploaded that fixes the problem for current code. A second ticket to track the same problem is not helpful.
Isn't this #13149? We only need one ticket to track getting it fixed. | https://code.djangoproject.com/ticket/13606 | CC-MAIN-2015-40 | en | refinedweb |
ICredentialPolicy Interface
Defines the credential policy to be used for resource requests that are made using WebRequest and its derived classes.
Assembly: System (in System.dll)
The credential policy determines whether to send credentials when sending a WebRequest for a network resource, such as the content of a Web page. If credentials are sent, servers that require client authentication can attempt to authenticate the client when the request is received instead of sending a response that indicates that the client's credentials are required. While this saves a round trip to the server, this performance gain must be balanced against the security risk inherent in sending credentials across the network. When the destination server does not require client authentication, it is best not to send credentials.
Use the AuthenticationManager.CredentialPolicy property to set an ICredentialPolicy policy. The IAuthenticationModule that handles authentication for the request will invoke the ShouldSendCredential method before performing the authentication. If the method returns false, authentication is not performed.
An ICredentialPolicy policy affects all instances of WebRequest with non-null credentials in the current application domain. The policy cannot be overridden on individual requests.
Legacy Code Example
The following code example shows an implementation of this interface that permits credentials to be sent only for requests that target specific hosts.
public class SelectedHostsCredentialPolicy: ICredentialPolicy
{
    public SelectedHostsCredentialPolicy()
    { }

    public virtual bool ShouldSendCredential(Uri challengeUri, WebRequest request,
        NetworkCredential credential, IAuthenticationModule authModule)
    {
        Console.WriteLine("Checking custom credential policy.");
        if (request.RequestUri.Host == "" || challengeUri.IsLoopback == true)
            return true;
        return false;
    }
}
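To put such a policy into effect you assign it to the process-wide AuthenticationManager before issuing requests; a brief sketch (the request URL is a placeholder):

// Install the policy once, e.g. at application start-up.
AuthenticationManager.CredentialPolicy = new SelectedHostsCredentialPolicy();

// Requests made with credentials are now subject to the policy.
WebRequest request = WebRequest.Create("http://example.com/resource");
request.Credentials = CredentialCache.DefaultCredentials;
using (WebResponse response = request.GetResponse())
{
    Console.WriteLine(((HttpWebResponse)response).StatusCode);
}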
Available since 2.0 | https://msdn.microsoft.com/en-us/library/system.net.icredentialpolicy(v=vs.110).aspx | CC-MAIN-2015-40 | en | refinedweb |
Adventures in Single-Sign-On: Cross Domain Script Request
Consider a scenario where a user authenticates with ADFS (or an equivalent identity provider (IdP)) when accessing one domain (call it A) and then, from this page, a request is made to a second domain (B) to download a set of application scripts that would then interact with a set of REST based web services in the B domain. We'd like to have SSO so that claims provided to A are available to B and so that the application scripts downloaded can subsequently make requests with an authentication cookie.
Roughly speaking, the scenario looks like this:
Depiction of the scenario
It was straightforward enough to set up the authentication with ADFS using WIF 4.5 for each of A and B following the MSDN "How To"; I had each of the applications separately working with the same ADFS instance, but the cross domain script request from A to B at step 5 for the script file generated an HTTP redirect sequence (302) that resulted in an XHTML form from ADFS with Javascript that attempts to execute an HTTP POST for the last leg of the authentication. This was good news because it meant that ADFS recognized the user session and tried to issue another token for the user in the other domain without requiring a login.
However, this obviously posed a problem as, even though it appeared as if it were working, the request for the script could not succeed because of the text/html response from ADFS.
Here's what the page in A looks like in this case:
<html>
  <body>
    ...
    <script type="text/javascript" src=""></script>
  </body>
</html>
This obviously fails because the HTML content returned from the redirect to ADFS cannot be consumed.
I scratched my head for a bit and dug into the documentation for ADFS, trawled online discussion boards, and tinkered with various configurations trying to figure this out with no luck. Many examples online discuss this scenario by making a web service call from the backend of one application to another using bearer tokens or WIF ActAs delegation, but these were ultimately not suited for what I wanted to accomplish as I didn't want to have to write out any tokens into the page (for example, adding a URL parameter to the app.js request), make a backend request for the resource, or use a proxy.
(I suspect that using the HTTP GET binding for SAML would work, but for the life of me, I can't figure out how to set this up on ADFS...)
In a flash of insight, it occurred to me that if I used a hidden iframe to load another page in B, I would then have a cookie in session to make the request for the app.js!
Here's what the page in A looks like:
<script type="text/javascript">
  function loadOtherStuff() {
    var script = document.createElement('script');
    script.setAttribute('type', 'text/javascript');
    script.setAttribute('src', '');
    document.body.appendChild(script);
  }
</script>
<iframe src="" style="display: none" onload="javascript:loadOtherStuff()"></iframe>
Using the iframe, the HTTP 302 redirect is allowed to complete and ADFS is able to set the authentication cookie without requiring a separate sign on since it's using the same IdP, certificate, and issuer thumbprint. Once the cookie is set for the domain, then subsequent browser requests in the parent document to the B domain will carry along the cookie!
The request for appscript.js is intercepted by an IHttpHandler and authentication can be performed to check for the user claims before returning any content. This then allows us to stream back the client-side application scripts and templates via AMD through a single entry point (e.g. appscript.js?app=App1 or a redirect to establish a root path depending on how you choose to organize your files).
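A minimal sketch of what that handler could look like (the type name and details here are hypothetical; the real implementation would map the requested app to actual script content):

public class AppScriptHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Only serve the application scripts to authenticated users.
        if (context.User == null || !context.User.Identity.IsAuthenticated)
        {
            context.Response.StatusCode = 401;
            return;
        }

        context.Response.ContentType = "text/javascript";
        // Stream back the AMD entry point for the requested app (details omitted).
        context.Response.Write("/* application scripts for " +
            context.Request.QueryString["app"] + " */");
    }
}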
Any XHR requests made subsequently still require proper configuration of CORS on the calling side:
$.ajax({
  url: '',
  type: 'GET',
  crossDomain: true,
  xhrFields: { withCredentials: true },
  success: function(result) {
    window.alert('HERE');
    console.log('RETRIEVED');
    console.log(result);
  }
});
And on the service side:
<!--// Needed to allow cross domain request.
     configuration/system.webServer/httpProtocol //-->
<httpProtocol>
  <customHeaders>
    <add name="Access-Control-Allow-Origin" value="" />
    <add name="Access-Control-Allow-Credentials" value="true" />
    <add name="Access-Control-Allow-Headers" value="accept,content-type,cookie" />
    <add name="Access-Control-Allow-Methods" value="POST,GET,OPTIONS" />
  </customHeaders>
</httpProtocol>

<!--// Allow CORS pre-flight
     configuration/system.webServer/security //-->
<security>
  <requestFiltering allowDoubleEscaping="true">
    <verbs>
      <add verb="OPTIONS" allowed="true" />
    </verbs>
  </requestFiltering>
</security>

<!--// Handle CORS pre-flight request
     configuration/system.webServer/modules //-->
<add name="CorsOptionsModule" type="WifApiSample1.CorsOptionsModule" />
The options handler module is a simple class that responds to OPTION requests and also dynamically adds a header to the response:
/// <summary>
/// <c>HttpModule</c> to support CORS.
/// </summary>
public class CorsOptionsModule : IHttpModule
{
    #region IHttpModule Members

    public void Dispose()
    {
        // clean-up code here.
    }

    public void Init(HttpApplication context)
    {
        context.BeginRequest += HandleRequest;
        context.EndRequest += HandleEndRequest;
    }

    private void HandleEndRequest(object sender, EventArgs e)
    {
        string origin = HttpContext.Current.Request.Headers["Origin"];
        if (string.IsNullOrEmpty(origin))
        {
            return;
        }

        if (HttpContext.Current.Request.HttpMethod == "POST"
            && HttpContext.Current.Request.Url.OriginalString.IndexOf(".svc") < 0)
        {
            HttpContext.Current.Response.AddHeader("Access-Control-Allow-Origin", origin);
        }
    }

    private void HandleRequest(object sender, EventArgs e)
    {
        if (HttpContext.Current.Request.HttpMethod == "OPTIONS")
        {
            HttpContext.Current.Response.End();
        }
    }

    #endregion
}
The end result is that single-sign-on is established across two domains for browser to REST API calls using simple HTML-based trickery (only tested in FF!).
Adding OpenID Connect authentication via OWIN starts with installing the NuGet package:
install-package microsoft.owin.security.openidconnect
Next, in the default Startup.Auth.cs file generated by the project template, you will need to add some additional code.
First, add this line:
app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
Then, add this:
app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
{
    ClientId = "138C1130-4B29-4101-9C84-D8E0D34D222A",
    Authority = "",
    PostLogoutRedirectUri = "",
    Description = new AuthenticationDescription
    {
        AuthenticationType = "OpenIdConnect",
        Caption = "Azure OpenId Connect"
    },
    TokenValidationParameters = new TokenValidationParameters
    {
        // If you don't add this, you get IDX10205
        ValidateIssuer = false
    }
});
For reference, the ValidateIssuer implementation that raises IDX10205 looks like this:

public static string ValidateIssuer(string issuer, SecurityToken securityToken, TokenValidationParameters validationParameters)
{
    if (validationParameters == null)
    {
        throw new ArgumentNullException("validationParameters");
    }

    if (!validationParameters.ValidateIssuer)
    {
        return issuer;
    }

    if (string.IsNullOrWhiteSpace(issuer))
    {
        throw new SecurityTokenInvalidIssuerException(
            string.Format(CultureInfo.InvariantCulture, ErrorMessages.IDX10211));
    }

    // Throw if all possible places to validate against are null or empty
    if (string.IsNullOrWhiteSpace(validationParameters.ValidIssuer)
        && (validationParameters.ValidIssuers == null))
    {
        throw new SecurityTokenInvalidIssuerException(
            string.Format(CultureInfo.InvariantCulture, ErrorMessages.IDX10204));
    }

    if (string.Equals(validationParameters.ValidIssuer, issuer, StringComparison.Ordinal))
    {
        return issuer;
    }

    if (null != validationParameters.ValidIssuers)
    {
        foreach (string str in validationParameters.ValidIssuers)
        {
            if (string.Equals(str, issuer, StringComparison.Ordinal))
            {
                return issuer;
            }
        }
    }

    throw new SecurityTokenInvalidIssuerException(
        string.Format(CultureInfo.InvariantCulture, ErrorMessages.IDX10205,
            issuer,
            validationParameters.ValidIssuer ?? "null",
            Utility.SerializeAsSingleCommaDelimitedString(validationParameters.ValidIssuers)));
}
One of the More Creative Ways to Advertise Career Opportunities
As seen on Soundcloud.com as I was examining why the page wasn't loading...
Indoor Rock Climbing – Try It!
One of my recently discovered activities that I'm falling in love with is indoor rock climbing (though I suppose I may try outdoor rock climbing and bouldering one day, too).
In a weird way, it's the ultimate thinking person's type of sport: physically demanding, but mentally challenging as well. Climbers like to talk in terminology like "problems", "projects", and "solutions", and it's an entirely accurate and applicable way to describe what climbing is all about. If you walk into a bouldering area in a gym, you will see climbers just sitting around, planning routes, thinking about how to position their bodies to make the right move and attacking routes over and over again. Difficult routes demand that you plan and think about how you can make your way up a vertical face while expending the least amount of energy.
It's odd, but I also think that it's a very "romantic" (or "bromantic"?) activity because you'll have the most fun climbing with someone else. There is a lot of communication and trust involved when one person is controlling the safety and well-being of another person suspended 40 feet in the air. For that same reason, it's a great team building activity for companies because to climb, you need to be able to work together, communicate, and have trust in your partners.
To get started, you can look on Google Maps to find some nearby rock climbing gyms and just call and take a class. I took my first class at Rockreation in Costa Mesa, CA, where you have to schedule ahead and the classes are far more formalized, but there are also places like Rockville back home in NJ, where the classes are much more informal and you can just show up and take a short intro class.
Most intro classes will teach you the basic elements of indoor climbing:
- Using harnesses and shoes
- Basic double-figure-8 knot tying
- Belaying
- Basic safety including verbal commands and communications.
But in looking through some videos, I've found that there is a LOT more to learn, and I've developed an even deeper appreciation for it. Take a look for yourself:
Five Fundamentals of Indoor Rock Climbing
How to Grip Indoor Climbing Holds
Footwork for Climbing
Five Advanced Bouldering Techniques
What I hope that you can get from this is that there is a real art to this that is beautiful to watch in action. In that last video, the Bat Hang at 1:45 is a thing of beauty. Seasoned climbers make it look easy, but it really takes a lot of practice, experience, and creativity to move around like Cliff Simanski does in the video.
Charlotte and Sandra working a wall.
I've also learned that I've been wrecking my forearms because I've basically been muscling my way up the walls with my upper body strength alone. A strong grip and upper body are certainly beneficial for climbing, but you need far more than that to advance in the sport.
In a sense, rock climbing has a lot in common with dance or gymnastics: it demands creative body movement, flexibility, balance, body awareness, and spatial awareness (maybe even more so because your life is on the line in some cases).
It's a great activity for kids of all ages (Charlotte is 3.5 years old) to enjoy.
An Alternate Meaning for FOCKED
Eric Brechner came up with one of my favorite acronyms of all time in software development: FOCKED.
Courtesy of Wikipedia
And Amazon:
In order to interact with Oracle Communications Services Gatekeeper, applications use either SOAP-based, RESTful, or native interfaces. Those applications using SOAP-based interfaces must manipulate the SOAP messages that they use to make requests in certain specialized ways. They must add specific information to the SOAP header, and, if they are using, for example, Multimedia Messaging, they must send their message payload as a SOAP attachment. Applications using the native interfaces use the normal native interface mechanisms, which are not covered in this document.
The following chapter presents a high-level description of the SOAP mechanisms and how they function to manage the interaction between Services Gatekeeper and the application.
The mechanisms for dealing with these requirements programmatically depend on the environment in which the application is being developed.
Note:Clients created using Axis 1.2 or older will not work with some communication services. Developers should use Axis 1.4 or newer if they wish to use Axis.
For examples using the Oracle WebLogic Server environment to accomplish these sorts of tasks, see "Managing SOAP headers and SOAP attachments programmatically".
If your application is using a SOAP-based facade (set of interfaces) to interact with Services Gatekeeper, there are four types of elements you may need to add to your application's SOAP messages to Services Gatekeeper.
In order to secure Services Gatekeeper and the telecom networks to which it provides access, applications are usually required to provide authentication information in every SOAP request which the application submits. Services Gatekeeper leverages the WebLogic Server Web Services Security framework to process this information.
Note:WS Security provides three separate modes of providing security between a Web Service client application and the Web Service itself for message level security - Authentication, Digital Signatures, and Encryption. For an overview of securing web services, see "Securing and Administering WebLogic Web Services" in Oracle Fusion Middleware Security and Administrator's Guide for Web Services at:
Services Gatekeeper supports three authentication types:
The type of token that the particular Services Gatekeeper operator requires is indicated in the Policy section of the WSDL files that the operator makes available for each application-facing interface it supports. In the following WSDL fragment, for example, the required form of authentication, indicated by the <wssp:Identity> element, is Username Token.
Example 2-1 WSDL fragment showing Policy
<s0:Policy s1:
<wssp:Identity> <wssp:SupportedTokens> <wssp:SecurityToken <wssp:UsePassword </wssp:SecurityToken> <wssp:SecurityToken </wssp:SupportedTokens> </wssp:Identity>
</s0:Policy> <wsp:UsingPolicy n1:
Note:If the WSDL also has a <wssp:Integrity> element, digital signing is required (WebLogic Server provides WS-Policy: sign.xml). If it has a <wssp:Confidentiality> element, encryption is required (WebLogic Server provides WS-Policy: encrypt.xml).
Below are examples of the three types of authentication that can be used with Services Gatekeeper.
In the Username Token mechanism, which is specified by the use of the <wsse:UsernameToken> element in the header, authentication is based on a username, specified in the <wsse:Username> element and a password, specified in the <wsse:Password> element.
Two types of passwords are possible, indicated by the Type attribute in the Password element:
PasswordText indicates the password is in clear text format.
PasswordDigest indicates that the sent value is a Base64 encoded, SHA-1 hash of the UTF8 encoded password.
There are two more optional elements in Username Token, introduced to provide a countermeasure for replay attacks:
<wsse:Nonce>, a random value that the application creates.
<wsu:Created>, a timestamp.
If either or both the Nonce and Created elements are present, the Password Digest is computed as: Password_Digest = Base64(SHA-1(nonce+created+password))
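As a rough Java sketch of that formula (illustrative only, not code from this guide; note that the nonce here is the raw decoded octets rather than their Base64 text):

import java.security.MessageDigest;
import javax.xml.bind.DatatypeConverter;

public class PasswordDigestSketch {

    // Computes Base64(SHA-1(nonce + created + password)) as described above.
    // 'nonce' is the decoded octets sent in <wsse:Nonce>; 'created' is the
    // literal timestamp string sent in <wsu:Created>.
    public static String passwordDigest(byte[] nonce, String created, String password)
            throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(nonce);
        sha1.update(created.getBytes("UTF-8"));
        sha1.update(password.getBytes("UTF-8"));
        return DatatypeConverter.printBase64Binary(sha1.digest());
    }
}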
When the application sends a SOAP message using Username Token, the WSEE implementation in Services Gatekeeper evaluates the username using the associated authentication provider. The authentication provider connects to the Services Gatekeeper database and authenticates the username and the password. In the database, passwords are stored as MD5 hashed representations of the actual password.
Example 2-2 Example of a WSSE: Username Token SOAP header element
<wsse:UsernameToken wsu:
  <wsse:Username>myUsername</wsse:Username>
  <wsse:Password>myPassword</wsse:Password>
  <wsse:Nonce> ... </wsse:Nonce>
  <wsu:Created> ... </wsu:Created>
</wsse:UsernameToken>
The UserName is equivalent to the application instance ID. The Password part is the password associated with this UserName when the application credentials was provisioned in Services Gatekeeper.
For more information on Username Token, see
In the X.509 Token mechanism, the application's identity is authenticated by the use of an X.509 digital certificate.
Typically a certificate binds the certificate holder's public key with a set of attributes linked to the holder's real world identity – for example the individual's name, organization and so on. The certificate also contains a validity period in the form of two date and time fields, specifying the beginning and end of the interval during which the certificate is recognized.
The entire certificate is (digitally) signed with the key of the issuing authority. Verifying this signature guarantees
that the certificate was indeed issued by the authority in question
that the contents of the certificate have not been forged, or tampered with in any way since it was issued
For more information on X.509 Token, see
The default identity assertion provider in Services Gatekeeper verifies the authenticity of X.509 tokens and maps them to valid Services Gatekeeper users.
Note:While it is possible to use the out-of-the-box keystore configuration in Services Gatekeeper for testing purposes, these should not be used for production systems. The digital certificates in these out-of-the-box keystores are only signed by a demonstration certificate authority. For information on configuring keystores for production systems, see "Configuring Identity and Trust" in Oracle Fusion Middleware Securing Oracle WebLogic Server at:
The x.509 certificate common name (CN) for an application must be the same as the account UserName, which is the string that was referred to as the applicationInstanceGroupId in previous versions of Services Gatekeeper. This is provided by the operator when the account is provisioned.
Example 2-3 Example of a WSSE: X.509 Certificate SOAP header element
<wsse:Security xmlns: <wsse:BinarySecurityToken wsu: MIIEZzCCA9CgAwIBAgIQEmtJZc0… </wsse:BinarySecurityToken> <ds:Signature xmlns: <ds:SignedInfo> <ds:Reference…</ds:Reference> <ds:Reference…</ds:Reference> </ds:SignedInfo> <ds:SignatureValue>HFLP…</ds:SignatureValue> <ds:KeyInfo> <wsse:SecurityTokenReference> <wsse:Reference </wsse:SecurityTokenReference> </ds:KeyInfo> </ds:Signature> </wsse:Security>
Using WebLogic Server's WSSE implementation, Services Gatekeeper supports SAML versions 1.0 and 1.1. The versions are similar. For an overview of the differences between the versions, see
In SAML, a third party, the Asserting Party, provides the identity information for a Subject that wishes to access the services of a Relying Party. This information is carried in an Assertion. In the SAML Token type of Authentication, the Assertion (or a reference to an Assertion) is provided inside the <WSSE:Security> header in the SOAP message. The Relying Party (which in this case is Services Gatekeeper, using the WebLogic Security framework) then evaluates the trustworthiness of the assertion, using one of two confirmation methods.
Holder-of-Key
Sender-Voucher
For more information on these confirmation methods, see "SAML Token Profile Support" in Oracle Fusion Middleware Understanding Security for Oracle WebLogic Server at:
Example 2-4 Example of a WSSE: SAML Token SOAP header element
<wsse:Security> <saml:Assertion <saml:Conditions <saml:AuthenticationStatement <saml:Subject> <saml:NameIdentifier> <SecurityDomain>""</SecurityDomain> <Name>"cn=localhost,co=bea,ou=sales"</Name> </saml:NameIdentifier> </saml:Subject> </saml:AuthenticationStatement> </saml:Assertion> ... </wsse:Security>
Services Gatekeeper can be configured to run in session mode or sessionless mode. In session mode, an application must establish a session using the Session Manager Web Service before it is allowed to run traffic through Services Gatekeeper. The session allows Services Gatekeeper to keep track of all of the traffic sent by a particular application for the duration of the session, which lasts until the session times out, based on an operator-set interval, or until the application closes the session. The session is good for an entire Services Gatekeeper domain, across clusters, and covers all communication services to which the application has contractual access.
In sessionless mode, the application is not required to establish a session.
An application establishes a session in Services Gatekeeper by invoking the getSession() operation on the Session Manager Web Service. This is the only request that does not require a SessionID. In the response to this operation, a string representing the Session ID is returned to the client, and an Services Gatekeeper session, identified by the ID, is established. The session is valid until either the session is terminated by the application or an operator-established time period has elapsed. The SessionID must appear in the wlng:Session element in the header of every subsequent SOAP request.
In some cases the service that an application provides to its end-users may involve accessing multiple Services Gatekeeper communication services. For example, a mobile user might send an SMS to an application asking for the pizza place nearest to his current location. The application then makes a Terminal Location request to find the user's current location, looks up the address of the closest pizza place, and then sends the user an MMS with all the appropriate information. Three Services Gatekeeper communication services are involved in executing what for the application is a single service. In order to be able to correlate the three communication service requests, Services Gatekeeper uses a Service Correlation ID, or SCID. This is a string that is captured in all the CDRs and EDRs generated by Services Gatekeeper. The CDRs and EDRs can then be orchestrated in order to provide special treatment for a given chain of service invocations, by, for example, applying charging to the chain as a whole rather than to the individual invocations.
The SCID is not provided by Services Gatekeeper. When the chain of services is initiated by an application-initiated request, the application must provide, and ensure the uniqueness of, the SCID within the chain of service invocations.
Note:In certain circumstances, it is also possible for a custom service correlation service to supply the SCID, in which case it is the custom service's responsibility to ensure the uniqueness of the SCID.
When the chain of services is initiated by a network-triggered request, Services Gatekeeper calls an external interface to get the SCID. This interface must be implemented by an external system. No utility or integration is provided out-of-the box; this must be a part of a system integration project. It is the responsibility of the external system to provide, and ensure the uniqueness of, the SCID.
The SCID is passed between Services Gatekeeper and the application through an additional SOAP header element, the SCID element. Because not every application requires the service correlation facility, this is an optional element.
When the scid element is used, it should be on the same level as the session element in the SOAP header.
Parameter tunneling is a feature that allows an application to send additional parameters to Services Gatekeeper and lets a plug-in use these parameters. This feature makes it possible for an application to tunnel parameters that are not defined in the application-facing interface, and it can be seen as an extension to that interface.
See the appropriate sections in the Communication Service Guide for descriptions of the tunneled parameters that are applicable to your communication service.
The application sends the tunneled parameters in the SOAP header of a Web Services request.
The parameters are defined using key-value pairs encapsulated by the tag <xparams>. The xparams tag can include one or more <param> tags. Each <param> tag has a key attribute that identifies the parameter and a value attribute that defines the value of the parameter. In the example below, the application tunnels the parameter aParameterName and assigns it the value aParameterValue.
Example 2-7 SOAP header with a tunneled parameter.
<soapenv:Header>
  ...
  <xparams>
    <param key="aParameterName" value="aParameterValue" />
  </xparams>
  ...
</soapenv:Header>
Depending on the plug-in the request reaches, the parameter is fetched and used in the request towards the network node.
In some communication services, the request payload is sent as a SOAP attachment. Example 2-8 below shows a Multimedia Messaging sendMessage operation that contains an attachment carrying a JPEG image.
Example 2-8 Example of a SOAP message with attachment (full content is not shown)
POST /parlayx21/multimedia_messaging/SendMessage HTTP/1.1 Content-Type: multipart/related; <soapenv:Header> <ns1:Security ns1: </ns1:Security> </soapenv:Header> <soapenv:Body> <sendMessage xmlns= ""> <addresses>tel:234</addresses> <senderAddress>tel:567</senderAddress> <subject>Default Subject Text</subject> <priority>Normal</priority> <charging> <description xmlns="">Default</description> <currency xmlns="">USD</currency> <amount xmlns="">1.99</amount> <code xmlns="">Example_Contract_Code_1234</code> </charging> </sendMessage> </soapenv:Body> </soapenv:Envelope> ------=_Part_0_2633821.1170785251635 Content-Type: image/jpeg Content-Transfer-Encoding: binary Content-Id: <9FFD47E472683C870ADE632711438CC3>???? JFIF ?? C#%$""!&+7/&)4)!"0A149;>>>%.DIC<H7=>;?? C;("(;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;?? ? w" ?? ?? 7 !1AQ"aq2???#?BRr?3Cb????? ?? ' !1"AQ2Raq???? ? ??{?????>?"7B?7!1???????Z e{????ax??5??CC??-Du? ??X?)Y!??=R@??g?????T??c????f?Wc??eCi?l?????5s??\E???6I??(?x?^???=??d?#?itoi?{;? ??G....... ------=_Part_0_2633821.1170785251635--
This section illustrates how to manage the Services Gatekeeper required SOAP headers and SOAP attachments when you are using WebLogic Server and WebLogic Server tools to generate stubs for your Web Services clients. If you are using a different environment, the steps you need to take to accomplish these tasks will be different.
For an overview of using Oracle Fusion Middleware to create Web Service clients, see "Oracle Fusion Middleware" in Oracle Fusion Middleware Introducing Web Services at:
The following examples show particularly the use of a SOAP message handler.
These examples show the use of a single message handler to add both SOAP Headers and SOAP attachments.
The WebLogic Server environment relies heavily on using supplied Ant tasks. In Example 2-9, a supplied Ant task, clientgen, is added to the standard build.xml file. A handler configuration file, SOAPHandlerConfig.xml, is added as the value for the handlerChainFile attribute. SOAPHandlerConfig.xml is shown in Example 2-10.
Example 2-9 Snippet from build.xml
<clientgen wsdl="${wsdl-file}" destDir="${class-dir}" handlerChainFile="SOAPHandlerConfig.xml" packageName="com.bea.wlcp.wlng.test" autoDetectWrapped="false" generatePolicyMethods="true" />
The configuration file for the message handler contains the handler-name and the associated handler-class. The handler class, TestClientHandler, is described in Example 2-11.
Example 2-10 SOAPHandlerConfig.xml
<weblogic-wsee-clientHandlerChain
  <handler>
    <j2ee:handler-name>clienthandler1</j2ee:handler-name>
    <j2ee:handler-class>
      com.bea.wlcp.wlng.client.TestClientHandler
    </j2ee:handler-class>
  </handler>
</weblogic-wsee-clientHandlerChain>
TestClientHandler provides the following functionality:
Adds a Session ID to the SOAP header (see "Session Management"). The session ID is hardcoded into the member variable sessionId.
Adds a service correlation ID to the SOAP header. See "Service Correlation" for more information.
Adds a SOAP attachment in the form of a MIME message with content-type text/plain. See "SOAP attachments" for more information.
Example 2-11 TestClientHandler
package com.bea.wlcp.wlng.client;

import javax.xml.rpc.handler.Handler;
import javax.xml.rpc.handler.HandlerInfo;
import javax.xml.rpc.handler.MessageContext;
import javax.xml.rpc.handler.soap.SOAPMessageContext;
import javax.xml.soap.*;
import javax.xml.namespace.QName;

public class TestClientHandler implements Handler {

    public String sessionId = "myID";
    public String SCID = "mySCId";
    public String contenttype = "text/plain";
    public String content = "The content";

    public boolean handleRequest(MessageContext ctx) {
        if (ctx instanceof SOAPMessageContext) {
            try {
                SOAPMessageContext soapCtx = (SOAPMessageContext) ctx;
                SOAPMessage soapmsg = soapCtx.getMessage();
                SOAPHeader header = soapCtx.getMessage().getSOAPHeader();
                SOAPEnvelope envelope = soapCtx.getMessage().getSOAPPart().getEnvelope();

                // Begin: Add session ID
                Name headerElementName = envelope.createName("session", "", "");
                SOAPHeaderElement headerElement = header.addHeaderElement(headerElementName);
                headerElement.setMustUnderstand(false);
                headerElement.addNamespaceDeclaration("soap", "");
                SOAPElement sessionIdElement = headerElement.addChildElement("SessionId");
                sessionIdElement.addTextNode(sessionId);
                // End: Add session ID

                // Begin: Add Combined Services ID
                headerElementName = envelope.createName("SCID", "", "");
                headerElement = header.addHeaderElement(headerElementName);
                headerElement.setMustUnderstand(false);
                headerElement.addNamespaceDeclaration("soap", "");
                SOAPElement scidElement = headerElement.addChildElement("SCID");
                scidElement.addTextNode(SCID);
                // End: Add Combined Services ID

                // Begin: Add SOAP attachment
                AttachmentPart part = soapmsg.createAttachmentPart();
                part.setContent(content, contenttype);
                soapmsg.addAttachmentPart(part);
                // End: Add SOAP attachment
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        return true;
    }

    public boolean handleResponse(MessageContext ctx) {
        return true;
    }

    public boolean handleFault(MessageContext ctx) {
        return true;
    }

    public void init(HandlerInfo config) {
    }

    public void destroy() {
    }

    public QName[] getHeaders() {
        return null;
    }
}
2009/11/2 <address@hidden>:
> Hi,
>
> On Wed, Oct 28, 2009 at 10:53:56PM +0100, Bas Wijnen wrote:
>> I've used Python for some larger projects. It's a very nice language,
>> but I wouldn't consider it higher level than C++.
>
> Err... You are the first person I ever saw making such a claim.
>
> (I don't actually know Python, so I don't really have an opinion of my
> own... What I heard from other people made me place it as much more
> high-level than C++ though.)

I would call a language with any of closures, stronger typing, namespaces, or automatic memory management higher level than C++. Consistent dynamic dispatch, a metaclass model, and high-performance built-in data types are also a bonus. As a *bare minimum*, though, built-in automatic memory management makes a big difference. I can't imagine doing application development without it.

Disclaimer: pypy is the vehicle for my current research.

>> The nice thing about templates is that they really are very low-level.
>> This means they can actually be usable for kernel programming. No
>> library support or magic compiler-generated code is required for them.
>> They do exactly what you expect. But they are still capable of things
>> that become extremely ugly in C (you can choose from huge #define
>> blocks or lots of code duplication).
> I actually consider generic programming with the C preprocessor pretty
> fun -- though not exactly helping readability ;-)
>
> However, I'm not talking about runtime features, nor explicit code
> generation. I'm talking about the type inference implemented by some
> languages (mostly functional ones), allowing the compiler to
> automatically instance any function for the types it is actually used
> on, without any explicit template handling.

Polyinstantiation has been a big topic for bitc. See: "Parametric types, explicit unboxing, and separate compilation do not get along."

I suppose that you can reasonably rule out separate compilation as a problem when writing kernels, and be content with some link-time compilation otherwise (or I suppose these are the same thing). But I think that this is the point Barry was trying to make: C++ templates do all that work for you, at compile time.

Whether one approach is /better/ than another is hard to say. *I* think link-time polyinstantiation is a shade nicer, particularly in the case that you modify an important class's innards (possibly including, in C++, private attributes) and find yourself recompiling all of its dependencies. Resolving external structure (including vtable) offsets at link-time probably isn't that expensive, and makes library usage seem much more manageable.

William Leslie
Showing
1
results of 1
----- Original Message -----
From: Roger Haase <crosseyedpenguin@...>
To: Discussion of Webware for Python including feedback and proposals. <webware-discuss@...>
Cc:
Sent: Saturday, July 23, 2011 4:57 AM
Subject: Re: [Webware-discuss] Webware times out on WebFaction Hosting service
----- Original Message -----
From: Christoph Zwerschke <cito@...>
To: Discussion of Webware for Python including feedback and proposals. <webware-discuss@...>
Cc:
Sent: Saturday, July 23, 2011 2:20 AM
Subject: Re: [Webware-discuss] Webware times out on WebFaction Hosting service
Am 23.07.2011 07:05 schrieb Roger Haase:
> I have added a WebFaction "Custom app (listening on port)" to obtain
> a unique port number. Then I modified my wsgiadapter to point at
> 184.172.207.73:41759 and modified the app serverconfiguration to
> listen at the same address.
Which Webware version are you using? Webware normally connects to Apache
via mod_webkit; only the last beta supports mod_wsgi.
-- Christoph
------------------------------------------
I have been running Webware 1.1b1 with the mod_wsgi adapter since May, 2010 on Ubuntu 10.04 and Ubuntu 11.04. The Python version is 2.7.1 on Webfaction.
I will start over with a clean installation of Webware and give the mod_webkit adapter a try if the problems with wsgi continue.
Chuck and Christof, thanks for your thoughts.
Roger Haase
-----------------------------------------
I have some progress. I think the WSGIAdapter.py has a bug -- I could not use a non-standard port on Ubuntu 11.04. I copied a few lines of code from CGIAdapter.py to read adapter.address and pass on the host and port. My Patch:
"""
diff -r 326014bd2b8f WebKit/Adapters/WSGIAdapter.py
--- a/WebKit/Adapters/WSGIAdapter.py Sun Jul 24 14:37:54 2011 -0700
+++ b/WebKit/Adapters/WSGIAdapter.py Sun Jul 24 14:52:42 2011 -0700
@@ .
@@ -55,6 +55,11 @@
def __call__(self, environ, start_response):
"""The actual WSGI application."""
err = StdErr(environ.get('wsgi.errors', None))
+ host, port = open(os.path.join(self._webKitDir,
+ 'adapter.address')).read().split(':', 1)
+ if os.name == 'nt' and not host:
+ host = 'localhost' # MS Windows doesn't like a blank host name
+ port = int(port)
try:
inp = environ.get('wsgi.input', None)
if inp is not None:
@@ -73,7 +78,7 @@
environ = dict(item for item in environ.iteritems()
if isinstance(item[1], (bool, int, long, float,
str, unicode, tuple, list, set, frozenset, dict)))
- response = self.getChunksFromAppServer(environ, inp or '')
+ response = self.getChunksFromAppServer(environ, inp or '', host=host, port=port)
header = []
for chunk in response:
if header is None:
"""
The above works for me on Ubuntu 11.04. I am not sure if the additional code should be moved down within the try statement.
I am still working on resolving the WebFaction problem. With the patch applied, I now get "bad marshall data" errors. This is my first experience with web hosts. There are about 170 other users on my server and nginx is used to give each user their own apache conf file. I suspect nginx has some kind of attachment to my "custom port" that is causing the problem, but am still looking.
Roger Haase
Published by the College of Tropical Agriculture and Human Resources (CTAHR) and issued in furtherance of Cooperative Extension work, Acts of May 8 and June 30, 1914, in cooperation with the U.S. Department of Agriculture. Charles W. Laughlin, Director and.
Cooperative Extension Service
AgriBusiness
Dec. 1998
AB-12
Rev. 6/99
Economics of Ginger Root Production in Hawaii
Kent Fleming1 and Dwight Sato2
1Department of Horticulture
2Cooperative Extension Service, Hilo
This publication examines the economics of producing ginger root (Zingiber officinale Roscoe) in Hawaii's major ginger growing area, the eastern half of the Big Island. The economic analysis is based on a computer spreadsheet budget for managing a ginger root enterprise and uses information gathered from knowledgeable growers and packers and from research and extension faculty and publications of the College of Tropical Agriculture and Human Resources (CTAHR), University of Hawaii at Manoa. The production data used in the model are typical for a small ginger root farm in the late 1990s. However, the economic model is flexible, including over 100 variables, any of which can be changed by the user to accommodate individual ginger root farming situations.
This budget has a wide range of uses, but it is primarily intended as a management tool for growers of edible ginger. Growers who enter their own farm data will find the model useful for
• developing an end-of-the-year economic business analysis of their ginger root enterprise,
• projecting next year's income under various cost-structure, production, and marketing scenarios,
• considering the economic impact of business environment changes (e.g., regulatory or wage rate changes),
• determining the economic benefit of adopting new technology, and
• planning new or expanded operations.
Assumptions
The first step in determining profitability is to establish
some overall production and economic assumptions. The
farm in this example is five acres. For horticultural rea-
sons, ginger is usually grown in a rotation system in
which one year of ginger production is followed by three
years in which the land is not used for ginger. There-
fore, the annual ginger root
crop comes from only 25%
of the land. Some growers
simply move to new rented
land each year. The model accommodates either sys-
tem. The average cost of hand labor is assumed to be $6
per hour, with machine labor at $8 plus 33% in “ben-
efits” (e.g., FICA, etc.). Payment for the crop is received
two months after delivery. The desired rate of return on
equity capital is 6%, and the bank interest rate is 9% for
debt capital and 10% for working capital.
Gross income
It is assumed that the example ginger farm sells 90% of
its marketable production as mature ginger root, with
about 80% selling as Grade A. Packers report that the
proportion of Grade A has been slightly but steadily in-
creasing over the years. “Young ginger,” a specialty prod-
uct of limited demand, accounts for 5% of the marketed
production sold. The season price averages about 50%
higher than the Grade A price, but the yield is signifi-
cantly lower (Nishina et al., p. 3). (The production costs
might be slightly lower, although in this study they are
assumed to be the same regardless of grade.) Nishina et
al. reported that growers normally keep back about 5%
(assuming a 1:20 “seed”: crop ratio) of their production
for the next season’s planting, although one grower in-
terviewed reported retaining 10% of one season’s pro-
duction for the next season’s “seed.” This grower plants
more densely and obtains a higher yield. In this study
we follow the 5% described by Nishina et al.
Mature ginger root yields vary substantially from
year to year, primarily because of plant disease incidence.
Since 1980 the yields have ranged from a high of 50,000
pounds per acre of marketable ginger root (1997/98 sea-
son) to a low of 27,500 (1993). The Hawaii Agricultural
Statistics Service (HASS) bases its 1998 Outlook Re-
port on “the most recent 3-year average of 47,300 pounds
per [harvested] acre” (HASS, p. 3). Our example uses a
most-recent-5-year weighted average yield of 46,200
pounds per harvested acre. All growers interviewed be-
lieved that their marketable yields, and those of other
growers they knew, were greater than those reported by
HASS. The marketable yield figure used in this study
should be viewed as a conservative estimate. Growers
should enter the yield that they believe reflects their situ-
ation.
The price per pound received by growers and used
in this study is the weighted average price received for
all grades of ginger root marketed throughout the sea-
son. The HASS reported price is the Grade A price, the
major but not the sole component of the weighted aver-
age price. The weighted average price will be close to
but usually lower than the Grade A price. This fact per-
haps accounts for the growers’ common observation that
they never receive a price quite as high as that reported
by HASS. As with the annual yields, the Grade A prices
have fluctuated considerably since 1980, ranging from
a low of 40¢ per pound (1997) to a high of 92.3¢. The
most recent 5-year weighted average Grade A price is
68.1¢ per pound. (HASS does not project Grade A prices,
although using its method for estimating yield, its price
estimate would be about 67.3¢ per pound.) In light of
both the 1997/98 year’s exceptionally low Grade A price
and the feelings of packers that the industry will not
again experience the recent high prices, the estimated
Grade A price used in our model is adjusted downward
by 20% to a more conservative 54.5¢ per pound. Given
the marketing pattern of the example farm, the weighted
average price comes out to be 53.4¢ per pound. The re-
sulting gross income is $24,674 per harvested acre or
$30,843 for the whole ginger enterprise.
Operating costs
Operating costs are all the costs directly associated with
growing and harvesting the ginger crop. All costs are
expressed as costs per harvested acre and per farm and
as a percentage of gross income. The various percent-
ages of gross income can be viewed as the number of
cents from each dollar generated by ginger sales that
are spent on a particular operating expense. For example,
9.3¢ of every dollar of revenue is spent on methyl bro-
mide and plastic sheeting. This item is a major compo-
nent of the land preparation cost. In this example farm,
the land preparation activity is the single largest grow-
ing cost, constituting 13.5% of the total growing expen-
diture. Land preparation costs are likely to increase fur-
ther as the proposed deadline for the elimination of me-
thyl bromide approaches.
Total growing costs take one-third of the gross rev-
enue; harvesting activities absorb another quarter. Hired
labor is the single most significant operating input, con-
suming over one-quarter of the gross income. Labor is
about evenly divided between growing and harvesting
activities. The example farm uses a custom operator to
provide the machinery operations associated with land
preparation and planting. If he did not, the itemized la-
bor cost would be higher (as would his machinery own-
ership costs). Overall, $23,026, three-quarters of the
gross income from this example ginger farm, is expended
on total operating costs.
This budget includes two overhead costs that are
often overlooked. The first is the cost of working capi-
tal (often an operating loan). The second is the cost of
retaining ownership of an already delivered crop, as
opposed to being paid for it upon delivery to the buyer.
Ginger growers typically wait one to three months for
payment. In the example farm, payment is deferred two
months, reducing the net price 1.7% (0.9¢ per pound).
This deferred payment is a hidden cost of marketing,
but in effect it functions like a commission. If one’s cost
of operating capital was 12% and payment was not re-
ceived for three months, the financial impact would be
doubled.
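To spell out the arithmetic behind those figures (a worked example using the assumptions stated earlier, i.e., a 10% working-capital rate and a two-month wait for payment): 0.10 × (2 ÷ 12) ≈ 0.017, or about 1.7% of the crop's value, and 0.017 × 53.4¢ ≈ 0.9¢ per pound. At a 12% rate and a three-month wait, 0.12 × (3 ÷ 12) = 0.030, or 3.0%, roughly double the base case.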
Gross margin
The gross margin is the gross income minus the total
operating (or “variable”) costs. Therefore the gross mar-
gin for the whole enterprise is $7,475. It represents the
total amount available to pay the ownership (or “fixed”)
costs of production. Gross margin resembles another
frequently used term, “return over cash costs.” It is what
farmers popularly refer to as their “profit,” because it is
close to the return to their management and investment
(if there is no debt associated with the farming opera-
*The “capital recovery charge” method consists of calculat-
ing an annual loan payment, using the historic cost minus the
salvage value as the principle, the “life” as the term, and the
average cost of capital as the interest rate. To this amount is
added the cost of holding the asset’s salvage value, using the
owner’s opportunity cost or desired return on capital. If the
asset is already fully depreciated (i.e., the capital has already
been recovered), enter zero for historic cost.
**If one were to set the “desired return on owner equity” (in
the assumptions section above) to zero, the indicated “return
to management” would in fact be the frequently used “man-
agement and investment income” (M.I.I.), the return to the
owner/manager for his or her management and capital invest-
ment.
tion). If one were to deduct depreciation and rent, farm
gross margin would approximate “taxable income.”
Gross margin is a good measure for comparing the
economic and productive efficiency of similar sized
farms. More importantly, it represents the bare minimum
that a farm must generate in order to stay in business.
(Even if a farm were to lose money overall, a positive
gross margin would enable it to continue to operate, at
least in the short run.) But gross margin is not a good
measure of a farm’s true profitability or long-term eco-
nomic viability.
Ownership costs
These costs are the annualized costs for those produc-
tive resources that last longer than the annual produc-
tion cycle. For example, because capital items last more
than one production cycle, they have to be amortized
over their “useful lives.” In the economic analysis, a
“capital recovery charge” is calculated for all capital
items. This charge is an estimate of what it costs the
producer to own the capital assets for one year.* The
example farm’s total annualized capital cost is $6,554,
just over one-fifth of the farm’s gross income. It would
be higher if custom machinery services were not uti-
lized, because additional machinery would need to be
owned.
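As a purely hypothetical illustration of the capital recovery charge described in the footnote above (these are made-up figures, not the example farm's): a machine bought for $10,000 with a $2,000 salvage value, an 8-year life, and a 9% cost of capital has an annualized payment of $8,000 × 0.09 ÷ (1 − 1.09^-8) ≈ $1,445; adding the cost of holding the salvage value at the 6% desired return on equity ($2,000 × 0.06 = $120) gives a capital recovery charge of roughly $1,565 per year.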
“The bottom line”
Total cost includes all cash costs and all opportunity
costs. Any return above total cost is economic profit.
Because economic profit considers all costs, a manager
would understandably be satisfied with his or her busi-
ness’ performance if economic profit were zero or
greater. Economic profit is the single best measure of
true profitability. Economic profit serves as a “market
signal” to indicate how attractive the enterprise is for
potential investors and for potential new entrants into
the industry.
The only problem with the economic profit concept
is that it may be confusing to hear that one should be
satisfied with an “economic profit of zero,” or it may be
intuitively difficult to grasp the meaning of a “negative
economic profit.” Perhaps a more easily understood
“bottom line” term is “return to management.” In a typi-
cal year, this example ginger farm manager receives a
return (before income taxes) of $1,742 for his or her
managerial efforts,** that is, 5.6% of the gross income.
Because this return to the management resource is
slightly greater than the resource’s value (using the “rule
of thumb” for the value of management, 5% of the gross
income, which in the example farm would be $1,542),
we can say the business is in fact profitable. (Of course,
this farm manager also would receive additional com-
pensation for any of the manual farm labor which he or
she provided.).
Risk
Our model’s particular production scenario appears
marginally adequate. However, the ginger market in-
cludes considerable foreign competition. Prices have
generally been good for ginger root, but the 1997/98
average price of ginger dropped to 40¢ per pound, an
all-time low. Despite excellent yields, the price was be-
low the break-even point, and generally ginger farming
was not economically profitable. In addition to abruptly
fluctuating prices, ginger root is relatively susceptible
to serious disease problems (Nishina et al.), providing
an ever-present possibility for a cultural problem to
sharply reduce yields. In 1993, for example, the aver-
age yield dropped to 27,500 pounds per acre.
Risk is inherent in all of agriculture, but the ginger
root industry appears to be more exposed to risk than
many other Hawaii agricultural endeavors. A review of
the HASS summary of prices and yields reveals consid-
erable ginger root price and yield volatility with rela-
tively little correlation between the two variables. The
Economics of ginger root production in Hawaii—cost-and-returns spreadsheet
This research was funded by the County of Hawaii, Department of Research and Development, and the University of Hawaii at Manoa, College of Tropical Agriculture and Human Resources. Mention of specific products or practices does not imply an endorsement by these agencies or a recommendation in preference to other products or practices.
This chapter provides considerations for upgrading various Oracle Service Bus configuration artifacts to Oracle Service Bus 11g Release 1 (11.1.1.7.0).
It includes the following topics:
Upgrade Considerations for AquaLogic Service Bus 2.6 Users
AquaLogic Service Bus 3.0 Upgrade Considerations
Oracle Service Bus 10g Upgrade Considerations
Changes to Carriage Return Handling in XML in 11g Release 1 (11.1.1.6.0) and Later
Read the following sections if you are using AquaLogic Service Bus 2.6 configurations:
Integrated Development Environment
Displaying References from Alerts to Alert Destinations
Details Sent to Alert Destinations
Import-Export Alert Rule Changes
Session-Aware Access Control Management of Proxy Services
Transport SDK and Transport Provider Changes
Many of the design time features available in the AquaLogic Service Bus Console are available in Oracle Enterprise Pack for Eclipse, which is the Oracle Service Bus integrated development environment (IDE).
If you want to use the IDE instead of the Console, you can import an AquaLogic Service Bus 2.6 configuration JAR directly into the 11g Release 1 (11.1.1.7.0) IDE. For information about importing a JAR file into the 11g Release 1 (11.1.1.7.0) IDE, see the "Importing Resources" section in the Oracle Fusion Middleware Administrator's Guide for Oracle Service Bus.
Note:
When you import any configuration JAR into the 11g Release 1 (11.1.1.7.0) IDE, the operational and administrative settings are removed. To retain these settings, first import the configuration JAR file into the console, then export it from the console and import the exported file into the 11g Release 1 (11.1.1.7.0) IDE, as described above. When you later move the configuration from the IDE to the console, enable Preserve operational settings so that the operational settings imported in the first step are preserved.
For information about exporting JAR files, see Task 2: Exporting Security Configurations.
The Service Level Agreements (SLA) alert rules features in AquaLogic Service Bus 3.0 and later differ slightly from previous releases. These changes do not affect the run-time evaluation or how alerts are issued. However, you may notice the following changes:
In AquaLogic Service Bus 2.6, alert rule resources were created as separate resources and individually maintained. In Oracle Service Bus 11g Release 1 (11.1.1.6.0), alert rules are part of the service definition. Because Alert Rules are part of the service definition and are no longer resources themselves, this affects their display in the References and Referenced By pages in the AquaLogic Service Bus Console. For information about viewing references, see the "View References Page" section in the Oracle Fusion Middleware Administrator's Guide for Oracle Service Bus.
You may notice minor changes in the content of the alerts as related to various destinations. For information about alert destinations, see the "Alert Destinations" section in the Oracle Fusion Middleware Administrator's Guide for Oracle Service Bus.
Starting in AquaLogic Service Bus 3.0, if an alert rule is renamed, then for the alerts issued in the past and under the old name, the console will no longer provide access to the rule definition on the Alert Summary and Extended SLA Alert History pages.
Unlike in AquaLogic Service Bus 2.6 and earlier versions, where you could see distinct entries for alert rules in the AquaLogic Service Bus Console, references to alert destinations through alert rules from proxy and business services are now maintained and displayed as a single reference. For example, in a proxy service, if multiple alert rules and multiple pipeline alert actions use the same alert destination, only one entry for the alert destination is displayed in the Referenced By page for that alert destination. For more information, see the "Alert Destinations" section in the Oracle Fusion Middleware Administrator's Guide for Oracle Service Bus.
Since Alerts are no longer separate resources in AquaLogic Service Bus 3.0 and later releases, the way references from alerts to alert destinations are displayed is different from AquaLogic Service Bus 3.0 and later releases. Two pages in the Console are affected. First, the Referenced By field in the Alert Destination page no longer displays the Alert Rule that is referencing the destination. Instead, the Reference By field displays the service that contains the alert. A side effect of this is that if a service has multiple alerts (SLA alerts or pipeline alerts) that reference the same Alert destination, the associated service is listed only once in the Referenced By field. Second, the Alert Rule page no longer contains the Reference information. Instead, the Service Summary page for the associated service contains the Alert Destinations referenced in the References field. For more information, see the "Alert Destinations" section in the Oracle Fusion Middleware Administrator's Guide for Oracle Service Bus.
When an SLA alert is issued to configured destinations, the alert details include a Rule ID as in AquaLogic Service Bus 2.6 and before. However, in AquaLogic Service Bus 3.0 and later releases, the value of the Rule ID is set to a combination of the global name of the parent service and the name of the alert rule. For the SNMP trap, the Rule ID is truncated to 64 characters, as in AquaLogic Service Bus 2.6 and earlier.
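A small illustrative sketch of that behavior (the exact separator Oracle Service Bus uses between the two names is not specified here, so the one below is an assumption):

// Illustrative only: build a Rule ID from the parent service's global name
// and the alert rule name, truncating to 64 characters for the SNMP trap.
public class RuleIdSketch {

    static String ruleId(String serviceGlobalName, String alertRuleName) {
        return serviceGlobalName + "/" + alertRuleName; // separator is an assumption
    }

    static String snmpRuleId(String serviceGlobalName, String alertRuleName) {
        String id = ruleId(serviceGlobalName, alertRuleName);
        return id.length() <= 64 ? id : id.substring(0, 64);
    }
}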
Unlike AquaLogic Service Bus 2.6, which merged alerts with any existing alerts during import, AquaLogic Service Bus 3.0 and later releases provides users with preserve-overwrite semantics. This feature allows you to either keep all existing alerts or overwrite them, regardless of whether the alerts have the same name or not.
The security realm must be configured before completing the steps described in this section.
The exported JARs from AquaLogic Service Bus 2.6 do not contain any access control policies. Before importing a configuration JAR from the AquaLogic Service Bus 2.6 release, Oracle Service Bus 11g Release 1 (11.1.1.7.0) uses a pre-import processing step to perform an in-place upgrade of the JAR to 11g Release 1 (11.1.1.7.0). To perform the in-place upgrade, the pre-processor queries all manageable authorization providers for each proxy service and retrieves a list of applicable access control policies. It then inserts those policies into the service definition of the proxy service. This is done on a best-effort basis.
A manageable authorization provider is an authorization provider that implements the PolicyEditorMBean interface. Such providers expose read-write APIs that allow the Oracle WebLogic Server and the Oracle AquaLogic Service Bus console to add, modify, or delete policies stored in them.
For transport level and default message-level policies, the system queries only those providers that expose the PolicyEditorMBean to retrieve any applicable policies, and inserts these policies into the service definition.
For operational message-level policies, the system can query providers that have implemented the PolicyListerMBean. For providers that have not implemented the PolicyListerMBean interface, the operation-level policies are not retrieved.
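Conceptually, the pre-import step distinguishes providers by the management interfaces their MBeans implement. The following is only a simplified sketch (it assumes the standard WebLogic package for these interfaces and omits the actual policy-retrieval calls, which are internal to Oracle Service Bus):

// Simplified, illustrative sketch of the capability check described above.
import weblogic.management.security.authorization.PolicyEditorMBean;
import weblogic.management.security.authorization.PolicyListerMBean;

public class ProviderCapabilityCheck {

    // Transport-level and default message-level policies can be retrieved
    // only from manageable providers.
    static boolean isManageable(Object providerMBean) {
        return providerMBean instanceof PolicyEditorMBean;
    }

    // Operation-level message policies additionally require a provider
    // that can list its policies.
    static boolean canListPolicies(Object providerMBean) {
        return providerMBean instanceof PolicyListerMBean;
    }
}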
After the in-place upgrade finishes, the import process proceeds as if the configuration JAR is of type 11g Release 1 (11.1.1.7.0), including the access control policies retrieved from the authorization providers. Table 3-1 explains various combinations of the applicable parameters and the outcome of the import process. Note that the outcome is the result of the import process and does not represent anything done after the configuration is imported.
If an authorization provider does not exist in the target Oracle Service Bus 11g Release 1 (11.1.1.7.0) system, the import process ignores the imported ACLs for the authorization provider and displays a warning. In this case, you can discard the session, or undo the import task, and then add the authorization providers to the server and re-import. Alternatively, you can do a dummy update operation of security parameters in the Oracle Service Bus Console, and the system will auto-correct any conflicts that it can on best-effort basis. These changes are atomic and reversible if you discard the session.
For more information about updating the security parameters, see the "Message Level Security Configuration" section in the Oracle Fusion Middleware Administrator's Guide for Oracle Service Bus.
The following changes may impact the Oracle Service Bus configuration:
Message Retry Count for Business Service Configuration
Duplicate URIs for a Business Service are Removed
Application Errors Retries
Transport Configuration in the Design Environment
In AquaLogic Service Bus 2.6, the message retry count applies to the list of URIs for a business service. In Oracle Service Bus 11g Release 1 (11.1.1.7.0), the retry count applies to the individual URL endpoints. The upgrade process maintains the 2.6 behavior as follows:
new_retries = N-1 + old_retries*N
where N is the total number of URIs and old_retries is the 2.6 retry count.
For example, suppose that in AquaLogic Service Bus 2.6, you have three URLs configured for the business service and a retry count of one. With the 2.6 retry mechanism all three URLs are tried. Then after the retry delay, all three URLs are retried again. To obtain the same behavior in 3.0 and later releases, the retry count is changed to five, which is obtained by applying the formula: (3 -1) + (1*3) = 5. The net effect is exactly the same: all three URLs are tried once (using two of the five retries), then after the retry delay, the three URLs are tried once more (using the last three of the five retries).
If only a single URL is configured, the old behavior and the new behavior are the same; the retry count does not change during the upgrade.
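The conversion can be sketched in a few lines of Java (illustrative only; the actual adjustment is performed internally by the import process):

// Illustrative sketch: convert an AquaLogic Service Bus 2.6 retry count
// (applied to the URI list as a whole) to the Oracle Service Bus value
// (applied per endpoint URI).
public class RetryCountUpgrade {

    static int upgradeRetries(int uriCount, int oldRetries) {
        return (uriCount - 1) + oldRetries * uriCount;
    }

    public static void main(String[] args) {
        System.out.println(upgradeRetries(3, 1)); // 5, matching the example above
        System.out.println(upgradeRetries(1, 1)); // 1: with a single URI the count is unchanged
    }
}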
For more information, see the "Business Services: Creating and Managing" section in the Oracle Fusion Middleware Administrator's Guide for Oracle Service Bus.
When importing business services into Oracle Service Bus 11g Release 1 (11.1.1.7.0), the import process removes the duplicate URIs in the 2.6 configurations. If the URIs use randomly weighted load balancing algorithms and the weights are set, the weights are adjusted accordingly. For example, if the business service is configured with the following URIs and weights:
URI_A 1
URI_B 3
URI_A 1
When the business service is upgraded into 11g Release 1 (11.1.1.7.0), the URI set is modified as follows:
URI_A 2
URI_B 3
For Business services configured with other algorithms, the upgrade removes the duplicate URIs and no other changes are made.
For more information about setting the parameters for the load balancing algorithm, see the "Business Services: Creating and Managing" section in the Oracle Fusion Middleware Administrator's Guide for Oracle Service Bus.
In case of delivery failure when sending outbound requests, Oracle Service Bus allows you to specify whether to retry endpoint URIs for application errors, such as a SOAP fault. This does not affect retries for communication errors. This new option is available on the Transport Configuration page for business services. For more information, see the "Transport Configuration Page" section in the Oracle Fusion Middleware Administrator's Guide for Oracle Service Bus. After the number of retries is exceeded, an error is issued.
To maintain the 2.6 behavior, a new tag is added with the default value set to true.
<xs:element
This tag is only added to transport endpoints whose provider configuration has the declare-application-errors flag set to true.
To simplify switching between HTTP and HTTPS in the AquaLogic Service Bus 3.0 and later releases console, the HTTPS transport configuration has been removed and its functionality has been added to the 3.0 inbound HTTP transport provider. The 11g Release 1 (11.1.1.7.0) HTTP Transport Configuration page contains a check box to enable HTTPS.
A new element, use-https, is added to the schema of the HTTP Transport inbound properties.
<xs:element
Any existing HTTPS Transport configurations are upgraded to HTTP transport with this flag set to true.
Note:
This functionality is only applicable to HTTP proxy services.
In earlier releases, AquaLogic Service Bus transports were configured only through the AquaLogic Service Bus Console. However, starting in ALSB 3.0, the transports can be designed on Eclipse. For more information, see "Developing Oracle Service Bus Transports for Workshop for WebLogic" in the Oracle Fusion Middleware Transport SDK User Guide for Oracle Service Bus.
Read the following sections if you are using AquaLogic Service Bus 3.0:
Upgrading an AquaLogic Service Bus 3.0 Workspace in Oracle Enterprise Pack for Eclipse
JNDI Service Account Deprecated for Java Message Service Business Service
Pipeline Action ID Upgrade
Pipeline Monitoring Level
To upgrade an AquaLogic Service Bus 3.0 workspace to Oracle Service Bus Oracle Enterprise Pack for Eclipse, perform the following steps:
Note:
After you upgrade the workspace, you can no longer open it in WorkSpace Studio 1.1 for ALSB 3.0.
Start WorkSpace Studio 1.1 in ALSB 3.0 and open the workspace you are upgrading.
Close the ALSB perspective and all editors.
Close all projects.
Close WorkSpace Studio 1.1.
Back up your workspace.
Start Oracle Enterprise Pack for Eclipse in 11g Release 1 (11.1.1.6.0), and open the workspace you are upgrading.
Wait for upgrade to start. When you open an AquaLogic Service Bus 3.0 workspace in Oracle Enterprise Pack for Eclipse for Oracle Service Bus 11g Release 1 (11.1.1.7.0), it will take a few moments for the upgrade process to start. Do not edit the workspace until the upgrade completes and the confirmation dialog appears.
After upgrading projects, open the Oracle Service Bus perspective and continue working with the newly upgraded projects and artifacts.
In Oracle Service Bus 11g Release 1 (11.1.1.7.0), the jndi-service-account is deprecated. After upgrading to 11g Release 1 (11.1.1.7.0), the Oracle Service Bus Java Message Service (JMS) business service uses the jms-service-account for both JMS and JNDI purposes.
The following table shows how the Oracle Service Bus JMS business service migrates JNDI and JMS accounts.
After upgrading to 11g Release 1 (11.1.1.7.0), all the actions in a pipeline are assigned a unique ID.
In 11g Release 1 (11.1.1.7.0), proxy services use a new monitoring flag to control the level of statistics collected. The three levels are:
Service - coarse grained statistics.
Pipeline - the same level at which statistics were gathered in AquaLogic Service Bus 3.0.
Action - fine grained statistics.
The upgrade sets the monitoring flag to a value of Pipeline.
As of Oracle Service Bus 11g Release 1 (11.1.1.7.0), validation rules in the following areas have been made more strict and may result in design time or run time errors:
Enhanced WSDL and service validation: after upgrading from previous releases to Oracle Service Bus 11g Release 1 (11.1.1.7.0), one may get conflict messages, such as:
[OSB Kernel:398022]No corresponding mapping was found in the service definition for the WSDL operation: OPERATION-NAME
[OSB Kernel:398034]Two operations expect the same incoming message, you must use a selector different than message body
In this case, update your WSDL or service as the error message indicates.
Enhanced Split-Join validation: AquaLogic Service Bus 3.0 allowed an insert action in a Split-Join to insert into an uninitialized variable. In Oracle Service Bus 11g Release 1 (11.1.1.7.0), such an insert action will fail with the following error:
Variable 'VARIABLE-NAME' is being referenced before it has been initialized with a value.: Fault [{}uninitializedVariable]
If your Split-Join works in AquaLogic Service Bus 3.0 and fails in Oracle Service Bus 11g Release 1 (11.1.1.7.0) with the above error, then modify the Split-Join to initialize the variable before insert.
Enhanced JMS proxy and business service URI validation: prior to Oracle Service Bus 11g Release 1 (11.1.1.7.0), you could enter a JMS proxy and business service URI without a host and port. In Oracle Service Bus 11g Release 1 (11.1.1.7.0), Oracle Service Bus applies the following validation:
For a JMS proxy service URI, you may omit the host and port. Doing so will display a confirmation dialog prior to commit.
For a JMS business service URI, you must always specify the host and port.
You must read the following upgrade considerations if you are using Oracle Service Bus 10g Release 3 Maintenance Pack 1 (10.3.1) or Oracle Service Bus 10g Release 3 (10.3):
JCA Endpoint Configuration
JMS Business Service Configuration
Global Operational Settings
The upgrade will preserve the existing behavior as follows:
An Alert Log Enabled flag is added to the Alert destinations, and its default value is true.
SLA definitions and Pipeline Alert definitions that do not have an alert destination associated with them will be upgraded as follows:
A new alert destination is created as part of the upgrade process. It is created under the same project as the service being upgraded, and it will have the name AlertDestinationForLogging. This alert destination will have only alert logging turned on. If there are multiple services under the same project that are subject to this upgrade logic, they will share the same alert destination.
SLA and pipeline alert definitions with no alert destinations are upgraded, so that they now reference this new alert destination.
JCA endpoint configuration will be upgraded to use the new configuration. A WSDL with JCA WSIF extensions will be upgraded to Oracle Service Bus 11g adapter artifacts; that is, a JCA file, an abstract WSDL, and a concrete WSDL provide the interface for upgrading a JCA 10g WSDL to an 11g WSDL and JCA file. Other upgrades will be as follows:
JCA upgrader will scan for the imported Oracle Service Bus config.jar file, and then the JCA WSDL (Oracle SOA 10g WSDL with WSIF JCA extension) and the JCA services in the config.jar file will be upgraded.
Note:
Only JCA WSDLs that are associated with JCA services will be upgraded. If a JCA WSDL is not used by any JCA service, it will not be upgraded.
JCA WSDL in the imported config.jar file will be upgraded. A JCA resource and an abstract WSDL will be generated from the JCA WSDL. A concrete WSDL will be generated from the abstract WSDL and the JCA resource. The concrete WSDL will be generated based on the following rules:
Target namespace should be the same as the abstract WSDL.
JCA (concrete) WSDL in 10g with the WSIF JCA binding will be upgraded to a JCA resource containing the JCA binding, abstract WSDL containing the abstract part of the 10g WSDL, and a concrete WSDL containing the soap binding based on portType defined in the abstract WSDL. The JCA service based on the JCA WSDL in 10g will be upgraded to be based on the concrete WSDL containing the soap binding and will also have a reference to the JCA resource.
The binding section will contain a SOAP 1.1 binding.
The SOAP 1.1 binding style is document.
The SOAP 1.1 binding section will contain all operations from the abstract WSDL.
For each binding operation, a SOAP 1.1 operation element will be generated with soapAction attribute set to the operation name.
For each binding operation's input and output element, there will be a SOAP 1.1 body element generated with attribute use set to literal.
To support a specific AQ use case that has a header message defined in the abstract WSDL, a SOAP header element will be generated for the operation's input binding section. The assumption is that the AQ abstract WSDL will contain a WSDL message named "Header_msg" and that the message contains a single part named "Header".
A service section will be created with a port for the generated binding. The generated port will contain a SOAP 1.1 address element with the location attribute set to:
jca://<adapter_connection_factory_jndi>
Note:
If a non-JCA (e.g., HTTP, JMS) service type proxy/business service in Oracle Service Bus 10g is based on a WSDL that has a JCA binding, and if the same WSDL is not used by a JCA service type proxy/business service, then the WSDL will not be upgraded and hence there will be a conflict on import. You must manually fix the WSDL to have a non-JCA binding, such as a SOAP binding, to overcome the conflict.
Example 3-1 JCA Concrete WSDL
<?xml version="1.0"?>
<definitions name="db-inbound-concrete" targetNamespace="" xmlns:
  <import namespace="" location="inbound.wsdl"/>
  <binding name="db-inbound_binding" type="tns:db-inbound_ptt">
    <soap:binding
    <operation name="receive">
      <soap:operation
      <input name="receive">
        <soap:body
      </input>
    </operation>
  </binding>
  <service name="db-inbound">
    <port name="db-inbound_pt" binding="tns:db-inbound_binding">
      <soap:address
    </port>
  </service>
</definitions>
The existing JCA WSDL in the config jar will be updated with the concrete WSDL.
Each JCA Service in the config jar will be updated with a change in JCA transport specific configuration. A reference to the generated JCA resource will be added to the JCA transport specific configuration section. All other properties defined in the endpoint configuration will remain the same. The JCA service will still contain the same dependency on the same WSDL and binding, although the content of the WSDL is updated to contain the generated concrete WSDL.
TopLink Mapping File Content
TopLink mapping file content will be extracted from the endpoint configuration. An XML resource will be created, and the TopLink mapping file content extracted from the endpoint configuration will be stored as the content of that XML resource. The name of the XML resource will be the TopLink mapping file name specified in the activation/interaction spec property of the endpoint configuration, without the ".xml" file extension. A dependency on the generated TopLink mapping file XML resource will be added to the JCA file resource for this JCA service.
AQ Adapter WSDL with Header Upgrade
Prior to SOA 11g, the Header element definition in the AQ adapter WSDL is defined with a namespace that is the same as the target namespace of the WSDL itself. In SOA 11g, the AQ adapter has changed the target namespace for the Header element in the WSDL to a new, fixed namespace. The OSB JCA transport upgrader will automatically modify the target namespace for the Header element in the AQ adapter WSDL to match the new fixed target namespace.
The JCA adapter headers used in 10g are now available in the NormalizedMessage properties. The NormalizedMessage header properties are used in 11g when the AQ adapter needs to have a payload header. If a payload header is required, then the queue header and payload header will be transmitted through NormalizedMessage headers.
Due to the change in header support, you must upgrade some Oracle Service Bus 10g configuration manually. Specifically, if the pipeline is accessing message header types that are not present in 11g, the configuration will have to be upgraded to access the header through the Transport Header.
JMS business services using the request/response pattern (both MessageID-based and CorrelationID-based correlation) will be upgraded to use the new configuration. The existing Is Response Required flag is enhanced with the Response Queues property option. You can specify None, One per Request URI, or One for all Request URIs as the value for the Response Queues property option. The following table shows how the existing values are upgraded.
When the Is Response Required value is set to true, then the same value is used in Oracle Service Bus 11g, and the Response Queues option is set to One for all Request URIs.
When the Is Response Required value is set to true in the pre-upgrade services, the Response Queues option is set to One for all Request URIs, with a new Response URI configuration for each target. In a standalone domain, this target will be a single server. The existing connection factory and the response JNDI names are merged into the new Response URI configuration. The Response URI is created from the available host/port information of the first service URI of the business service, appended with the connection factory and the response queue JNDI name. The host/port information is always retrieved from the first service URI. The first service URI also contains a connection factory, which is used if the configured connection factory for the response is empty.
The format of Response URI is as follows:
'jms://<ip>:<port>,<ip>:<port>/<connection-factory-jndi-name>/<response-queue-jndi-name>'
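For example, with two managed servers and made-up JNDI names, a generated Response URI might look like jms://10.0.0.11:7001,10.0.0.12:7001/weblogic.jms.ConnectionFactory/MyResponseQueue (the hosts, ports, and JNDI names here are illustrative only).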
When you import a Global Operational Settings resource that does not contain a result-caching element, a new result-caching element is added with the default value true.
UDDI related elements have been moved from the ServiceEntry element in Services.xsd to a new element, uddiConfiguration. This element is used for business services that are imported from UDDI.
After the upgrade, during synchronization, if the WSDL exists with the old name, then this name will be used. However, if the WSDL does not exist, then a new WSDL will be created with the new naming conventions.
The service key as well as the service name for the service will now be stored as part of the uddiConfiguration element.
An update was made to Apache XMLBeans, which corrects the handling of carriage return characters in an XML document. The update was made because XMLBeans did not properly escape carriage return characters (\r). If a document contained the escaped character reference, it was correctly unescaped to \r when parsed into XMLBeans; however, \r was not escaped when it was streamed back out. This resulted in an invalid XML document, because an unescaped \r is invalid in XML. After the fix to XMLBeans, \r is correctly escaped to its character reference.
If you upgrade to version 11g Release 1 (11.1.1.6.0) or later and currently have a workaround for the above issue in place, you need to modify how return characters are handled. For example, if the workaround was to escape the character reference a second time so that it would be streamed back out in escaped form, you need to remove the second escape to avoid validation errors.
I am extremely new to programming and I have an issue trying to use a random number. I am not looking for the direct answers, but maybe some guidance. I am working on an exercise that is supposed to help me learn the language, but I am struggling with trying to create a random number that I need to call upon later in the program to create an average. I am supposed to create a program that allows a user to enter two dice sides. Then the system randomly rolls each die multiple times and at the end averages the numbers rolled for each die. My program has a lot of comments so I can review it and know what function each line performs. Any assistance is greatly appreciated.
:confused:
Thanks,
Anthony
import java.util.Random;
import java.util.Scanner;
public class Dice {
public static void main(String[] args) {
//int is the Primitive Data type and Diceone and Dicetwo are the Variables I created
int Diceone, Dicetwo;
/**int is the Primitive Data type and the DiceonefirstRoll is the variable for holding the number entered by the user*/
int DicetwofirstRoll, Dicetwosecondroll, DicetwothirdRoll;
//Calls the scanner class
Scanner scan = new Scanner (System.in);
//Sets Diceone to be the next integer entered via the keyboard
System.out.print ("How many sides does die 1 have?");
Diceone = scan.nextInt();
//Sets Diceotwo to be the next integer entered via the keyboard
System.out.print ("How many sides does die 2 have?");
Dicetwo = scan.nextInt();
//Closes the scanner for input
scan.close ();
//Do I need to call the random class like I do with the scanner in line 16?
//Creates random dice roll
System.out.print ("Die 1 first roll=");
Random DiceonefirstRoll= new Random(Diceone) + 1;
/**Trying make this so i can call it later. I am trying to have the # entered from Diceone and +1 due to not being able to receive a 0 on a die*/ | http://www.javaprogrammingforums.com/%20object-oriented-programming/32471-question-about-randoms-trying-use-printingthethread.html | CC-MAIN-2015-40 | en | refinedweb |
With ASP.NET Core there are various attributes that instruct the framework where to expect data as part of an HTTP request - whether the body, header, query-string, etc.
With C#, attributes make decorating API endpoints expressive, readable, declarative and simple. These attributes are very powerful! They allow aliasing, route-templating and strong-typing through data binding; however, knowing which are best suited for each HTTP verb is vital.
In this article, we'll explore how to correctly define APIs and the routes they signify. Furthermore, we will learn about these framework attributes.
As a precursor to this article, one is expected to be familiar with modern C#, REST, Web API and HTTP.
ASP.NET Core has HTTP attributes for seven of the eight HTTP verbs listed in the Internet Engineering Task Force (IETF) RFC-7231 Document. The HTTP TRACE verb is the only exclusion in the framework. Below lists the HTTP verb attributes that are provided:
Likewise, the framework also provides a RouteAttribute. This will be detailed shortly.
In addition to the HTTP verb attributes, we will discuss the action signature level attributes. Where the HTTP verb attributes are applied to the action, these attributes are used within the parameter list. The list of attributes we will discuss are listed below:
Imagine if you will, that we are building out an ordering system. We have an order model that represents an order. We need to create a RESTful Web API that allows consumers to create, read, update and delete orders – this is commonly referred to as CRUD.
ASP.NET Core provides a powerful Route attribute. This can be used to define a top-level route at the controller class – doing so leaves a common route that actions can expand upon. For example consider the following:
[Route("api/[Controller]")]
public class OrdersController : Controller
{
[HttpGet("{id}")]
public Task<Order> Get([FromRoute] int id)
=> _orderService.GetOrderAsync(id);
The Route attribute is given a template of "api/[Controller]". The "[Controller]" is a special naming convention that acts as a placeholder for the controller in context, i.e.; "Orders". Focusing our attention on the HttpGet we can see that we are providing a template argument of "{id}". This will make the HTTP Get route resemble "api/orders/1" – where the id is a variable.
Let us consider an HTTP GET request.
In our collection of orders, each order has a unique identifier. We can walk up to the collection and ask for it by "id". Typical with RESTful best practices, this can be retrieved via its route, for example "api/orders/1". The action that handles this request could be written as such:
[
HttpGet("api/orders/{id}") // api/orders/7
]
public Task<Order> Get(
[FromRoute] int id,
[FromServices] IOrderService orderService)
=> orderService.GetOrderAsync(id);
Note how easy it was to author an endpoint, we simply decorate the controller’s action with an HttpGet attribute.
This attribute will instruct the ASP.NET Core framework to treat this action as a handler of the HTTP GET verb and handle the routing. We supply an endpoint template as an argument to the attribute. The template serves as the route the framework will use to match on for incoming requests. Within this template, the {id} value corresponds to the portion of the route that is the "id" parameter.
This is a Task<Order> returning method, implying that the body of the method will represent an asynchronous operation that eventually yields an Order object once awaited. The method has two arguments, both of which leverage attributes.
First the FromRoute attribute tells the framework to look in the route (URL) for an "id" value and provide that as the id argument. Then the FromServices attribute – this resolves our IOrderService implementation. This attribute asks our dependency injection container for the corresponding implementation of the IOrderService. The implementation is provided as the orderService argument.
We then expressively define our intended method body as the order services’ GetOrderAsync function and pass to it the corresponding identifier.
We could have just as easily authored this to utilize the FromQuery attribute instead. This would then instruct the framework to anticipate a query-string with a name of "identifier" and corresponding integer value. The value is then passed into the action as the id parameters argument. Everything else is the same.
However, the most common approach is the aforementioned FromRoute usage – where the identifier is part of the URI.
[
HttpGet("api/orders") // api/orders?identifier=7
]
public Task<Order> Get(
[FromQuery(Name = "identifier")] int id,
[FromServices] IOrderService orderService)
=> orderService.GetOrderAsync(id);
Notice how easy it is to alias the parameter?
We simply assign the Name property equal to the string "identifier" of the FromQuery attribute. This instructs the framework to look for a name that matches that in the query-string. If we were to omit this argument, then the name is assumed to be the name used as the actions parameter, "id". In other words, if we have a URL as "api/orders?id=17" the framework will not assign our “id” variable the number 17 as it is explicitly looking for a query-string with a name "identifier".
Continuing with our ordering system, we will need to expose some functionality for consumers of our API to create orders.
Enter the HTTP POST request.
The syntax for writing this is seemingly identical to the aforementioned HTTP GET endpoints we just worked on. But rather than returning a resource, we will utilize an IActionResult. This interface has a large set of subclasses within the framework that are accessible via the Controller class. Since we inherit from Controller, we can leverage some of the conveniences exposed such as the StatusCode method.
With an HTTP GET, the request is for a resource; whereas an HTTP POST is a request to create a resource and the corresponding response is the status result of the POST request.
[
HttpPost("api/orders")
]
public async Task<IActionResult> Post([FromBody] Order order)
=> (await _orderService.CreateOrderAsync(order))
? (IActionResult)Created($"api/orders/{order.Id}", order) // HTTP 201
: StatusCode(500); // HTTP 500
We use the HttpPost attribute, providing the template argument.
This time we do not need an "{id}" in our template as we are being given the entire order object via the body of the HTTP POST request. Additionally, we will need to use the async keyword to enable the use of the await keyword within the method body.
We have a Task<IActionResult> that represents our asynchronous operation. The order parameter is decorated with the [FromBody] attribute. This attribute instructs the framework to pick the order out from the body of the HTTP POST request, deserialize it into our strongly-typed C# Order class object and provide it as the argument to this action.
The method body is an expression. Instead of asking for our order service to be provided via the FromServices attribute like we have demonstrated in our HTTP GET actions, we have a class-scope instance we can use. It is typically favorable to use constructor injection and assign a class-scope instance variable to avoid redundancies.
We delegate the create operation to the order services' invocation of CreateOrderAsync, giving it the order. The service returns a bool indicating success or failure. If the call is successful, we'll return an HTTP status code of 201, Created. If the call fails, we will return an HTTP status code of 500, Internal Server Error.
Instead of using the FromBody one could just as easily use the FromForm attribute to decorate our order parameter. This would treat the HTTP POST request differently in that our order argument no longer comes from the body, but everything else would stay the same. The other attributes are not really applicable with an HTTP POST and you should avoid trying to use them.
[
HttpPost("api/orders")
]
public async Task<IActionResult> Post([FromForm] Order order)
=> (await _orderService.CreateOrderAsync(order))
? Ok() // HTTP 200
: StatusCode(500);
Although this bends HTTP conformity, it's not uncommon to see APIs that return an HTTP status code 200, Ok, on success. I do not condone it.
By convention if a new resource is created, in this case an order, you should return a 201. If the server is unable to create the resource immediately, you could return a 202, accepted. The base controller class exposes the Ok(), Created() and Accepted() methods as a convenience to the developer.
Now that we're able to create and read orders, we will need to be able to update them.
The HTTP PUT verb is intended to be idempotent. This means that if an HTTP PUT request occurs, any subsequent HTTP PUT request with the same payload would result in the same response. In other words, multiple identical HTTP PUT requests are harmless and the resource is only impacted on the first request.
The HTTP PUT verb is very similar to the HTTP POST verb in that the ASP.NET Core attributes that pair together, are the same. Again, we will either leverage the FromBody or FromForm attributes. Consider the following:
[
HttpPut("api/orders/{id}")
]
public async Task<IActionResult> Put([FromRoute] int id, [FromBody] Order order)
=> (await _orderService.UpdateOrderAsync(id, order))
? Ok()
: StatusCode(500);
We start with the HttpPut attribute, supplying a template that is actually identical to the HTTP GET's. As you will notice, we are taking in the {id} of the order that is being updated. The FromRoute attribute provides the id argument.
The FromBody attribute is what will deserialize the HTTP PUT request body as our C# Order instance into the order parameter. We express our operation as the invocation to the order services’ UpdateOrderAsync function, passing along the id and order. Finally, based on whether we are able to successfully update the order – we return either an HTTP status code of 200 or 500 for failures to update.
The return HTTP status code of 301, Moved Permanently, should also be a consideration. If we were to add some additional logic to our underlying order service, we could check the given "id" against the order attempting to be updated. If the "id" doesn't correspond to the given order, it might be appropriate to return a RedirectPermanent (301), passing in the new URL where the order can be found.
The last operation on our agenda is the delete operation and this is exposed via an action that handles the HTTP DELETE request.
There is a lot of debate about whether an HTTP DELETE should be idempotent or not. I lean towards it not being idempotent as the initial request actually deletes the resource and subsequent requests would actually return an HTTP status code of 204, No Content.
From the perspective of the route template, we look to REST for inspiration and follow its suggested patterns.
The HTTP DELETE verb is similar to the HTTP GET in that we will use the {id} as part of the route and invoke the delete call on the collection of orders. This will delete the order for the given id.
[
HttpDelete("api/orders/{id}")
]
public async Task<IActionResult> Delete([FromRoute] int id)
=> (await _orderService.DeleteOrderAsync(id))
? (IActionResult)Ok()
: NoContent();
While it is true that using the FromQuery with an HTTP DELETE request is possible, it is unconventional and ill-advised. It is best to stick with the FromRoute attribute.
The ASP.NET Core framework makes authoring RESTful Web APIs simple and expressive. The power of the attributes allow your C# code to be decorated in a manner consistent with declarative programming paradigms. The controller actions' are self-documenting and constraints are easily legible. As a C# developer – reading an action is rather straight-forward and the code itself is elegant.
In conclusion and in accordance with RESTful best practices, the following table depicts which ASP.NET Core attributes complement each other the best.
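In summary, based on the pairings described throughout this article:
- HTTP GET pairs naturally with FromRoute or FromQuery (and FromHeader for supplemental values).
- HTTP POST pairs naturally with FromBody or FromForm.
- HTTP PUT pairs naturally with FromRoute for the identifier, plus FromBody or FromForm for the payload.
- HTTP DELETE pairs naturally with FromRoute.
- FromServices applies to any verb, since it resolves dependencies rather than request data.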
1) The FromQuery attribute can be used to take an identifier as an HTTP DELETE request argument, but it is not as simple as leveraging the FromRoute attribute instead.
2) The FromHeader attribute can be used for an additional parameter as part of an HTTP GET request; however, it is not very common – instead use FromRoute or FromQuery.
XMonad.Actions.UpdateFocus
Description
Updates the focus on mouse move in unfocused windows.
Synopsis
- focusOnMouseMove :: Event -> X All
- adjustEventInput :: X ()
- focusUnderPointer :: X ()
Usage
To make the focus update on mouse movement within an unfocused window, add the following to your ~/.xmonad/xmonad.hs:
import XMonad.Actions.UpdateFocus

xmonad $ def {
  ..
  startupHook = adjustEventInput
  handleEventHook = focusOnMouseMove
  ..
}
This module is probably only useful when focusFollowsMouse is set to True (the default).
focusOnMouseMove :: Event -> X All Source #
Changes the focus if the mouse is moved within an unfocused window.
adjustEventInput :: X () Source #
Adjusts the event mask to pick up pointer movements.
focusUnderPointer :: X () Source #
Focus the window under the mouse pointer, unless we're currently changing focus with the mouse or dragging. This is the inverse to XMonad.Actions.UpdatePointer: instead of moving the mouse pointer to match the focus, we change the focus to match the mouse pointer.
This is meant to be used together with updatePointer in individual key bindings. Bindings that change focus should invoke updatePointer at the end; bindings that switch workspaces or change layouts should call focusUnderPointer at the end. Neither should go to logHook, as that would override the other.
This is more finicky to set up than focusOnMouseMove, but ensures that focus is updated immediately, without having to touch the mouse.
How to populate a ListModel without for loop
Hello everyone.
I'm working on a Qt/QML software for data visualization that is meant to be quasi-real time.
In the software I'm working on, I populate a ListModel object as follows:
main.qml
Test_Data { id: surfaceData }

// some code

function populate_model(x, y, my_data) {
    for (var i = 0; i < my_data.length; i++)
        surfaceData.model.append({"row": y[i], "col": x[i], "value": my_data[i]});
}
Test_Data.qml
import QtQuick 2.5

Item {
    property bool isempty: true
    property alias model: dataModel

    ListModel {
        id: dataModel
    }
}
mainwindow.cpp
QObject *obj = my_widget->rootObject();
QMetaObject::invokeMethod(obj, "populate_model",
                          Q_ARG(QVariant, QVariant::fromValue(array_x)),
                          Q_ARG(QVariant, QVariant::fromValue(array_y)),
                          Q_ARG(QVariant, QVariant::fromValue(array_data)));
where my_widget is a QQuickWidget and array_x, array_y and array_data are std vectors.
In short, I pass three arrays to the QML function and populate the ListModel with that for loop.
The problem is that the arrays are generally very big (hundreds of thousands of elements), and populating such a model list takes about one second.
Is it possible to avoid the for loop with the appends to make the populating process faster?
@Davide87 Perhaps you get the solution here:
Thank you for your answer Bernd. However that topic is not actually much of help for me :-(
However, I'm thinking about leaving the QML idea and try another way to solve my problem.
Thank you again :-)
@Davide87 If you want better performance you need to use C++.
QAbstractListModel could do the job I think. You can use these "smart models" if you don't want to implement your own model.
Thank you daljit. I'll try it. | https://forum.qt.io/topic/91593/how-to-populate-a-listmodel-without-for-loop | CC-MAIN-2022-27 | en | refinedweb |
XMonad.Prompt.OrgMode
Description
A prompt for interacting with org-mode. This can be seen as an org-specific version of XMonad.Prompt.AppendFile, allowing for more interesting interactions with that particular file type.
It can be used to quickly save TODOs, NOTEs, and the like with the additional capability to schedule/deadline a task, or use the system's clipboard (really: the primary selection) as the contents of the note.
Synopsis
- orgPrompt :: XPConfig -> String -> FilePath -> X ()
- orgPromptPrimary :: XPConfig -> String -> FilePath -> X ()
- data ClipboardSupport
- data OrgMode
Usage
You can use this module by importing it, along with XMonad.Prompt, in your xmonad.hs:
import XMonad.Prompt
import XMonad.Prompt.OrgMode (orgPrompt)
and adding an appropriate keybinding. For example, using syntax from XMonad.Util.EZConfig:
, ("M-C-o", orgPrompt def "TODO" "/home/me/org/todos.org")
This would create notes of the form * TODO my-message in the specified file.
You can also enter a relative path; in that case the file path will be prepended with $HOME or an equivalent directory. I.e. instead of the above you can write
, ("M-C-o", orgPrompt def "TODO" "org/todos.org") -- also possible: "~/org/todos.org"
There is also some scheduling and deadline functionality present. This may be initiated by entering +s or +d—separated by at least one whitespace character on either side—into the prompt, respectively. Then, one may enter a date and (optionally) a time of day. Any of the following are valid dates, where brackets indicate optionality:
- tod[ay]
- tom[orrow]
- any weekday
- any date of the form DD [MM] [YYYY]
In the last case, the missing month and year will be filled out with the current month and year.
For weekdays, we also disambiguate as early as possible; a simple w will suffice to mean Wednesday, but s will not be enough to say Sunday. You can, however, also write the full word without any troubles. Weekdays always schedule into the future; e.g., if today is Monday and you schedule something for Monday, you will actually schedule it for the next Monday (the one in seven days).

The time is specified in the HH:MM format. The minutes may be omitted, in which case we assume a full hour is specified.
A few examples are probably in order. Suppose we have bound the key above, pressed it, and are now confronted with a prompt:
- hello +s today would create a TODO note with the header hello and would schedule that for today's date.
- hello +s today 12 schedules the note for today at 12:00.
- hello +s today 12:30 schedules it for today at 12:30.
- hello +d today 12:30 works just like above, but creates a deadline.
- hello +s thu would schedule the note for next Thursday.
- hello +s 11 would schedule it for the 11th of this month and this year.
- hello +s 11 jan 2013 would schedule the note for the 11th of January 2013.
Note that, due to ambiguity concerns, years below 25 result in undefined parsing behaviour. Otherwise, what should message +s 11 jan 13 resolve to—the 11th of January at 13:00, or the 11th of January in the year 13?
There's also the possibility to take what's currently in the primary selection and paste that as the content of the created note. This is especially useful when you want to quickly save a URL for later and return to whatever you were doing before. See the orgPromptPrimary prompt for that.
Prompts
orgPromptPrimary :: XPConfig -> String -> FilePath -> X () Source #
Like orgPrompt, but additionally make use of the primary selection. If it is a URL, then use an org-style link [[primary-selection][entered message]] as the heading. Otherwise, use the primary selection as the content of the note.

The prompt will display a little + PS in the window after the type of note.
Types
data ClipboardSupport Source #
Whether we should use a clipboard and which one to use.
Constructors | https://xmonad.github.io/xmonad-docs/xmonad-contrib-0.17.0.9/XMonad-Prompt-OrgMode.html | CC-MAIN-2022-27 | en | refinedweb |
LandCoverNet is a global annual land cover classification training dataset with labels for the multi-spectral satellite imagery from Sentinel-1, Sentinel-2 and Landsat-8 missions in 2018. LandCoverNet North America contains data across North America, which accounts for ~13% of the global dataset. Each pixel is identified as one of the seven land cover classes based on its annual time series. These classes are water, natural bare ground, artificial bare ground, woody vegetation, cultivated vegetation, (semi) natural vegetation, and permanent snow/ice.
There are a total of 1561 image chips of 256 x 256 pixels in LandCoverNet North America V1.0 spanning 40.
Radiant Earth Foundation (2022) LandCoverNet North America: A Geographically Diverse Land Cover Classification Training Dataset, Version 1.0, Radiant MLHub.
from radiant_mlhub import Dataset

ds = Dataset.fetch('ref_landcovernet_na_v1')
for c in ds.collections:
    print(c.id)
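To pull the imagery and labels down locally, the same client also provides a download helper; the call below is a sketch of typical usage, and the output directory name is an arbitrary choice:

from radiant_mlhub import Dataset

ds = Dataset.fetch('ref_landcovernet_na_v1')
ds.download(output_dir='./landcovernet_na')  # fetches the dataset archives into the given folder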
Python Client quick-start guide | https://mlhub.earth/data/ref_landcovernet_na_v1 | CC-MAIN-2022-27 | en | refinedweb |
Twitter is such a treasure trove of interesting information. Recently, I’ve been using Twitter sentiment to predict NFL games. We can also see if we can use Twitter sentiment to predict the stock market. In this post, we’re going to go over how you can get all of the text from the last 100 Tweets for a search term using Python.
In this post we’ll cover:
- Creating Twitter Request Headers
- Creating a Function to Search Twitter
- Configuring the Twitter Search Parameters
- Parsing the Response
- Testing the Twitter Search Function
To get started you’re going to need a Twitter API key by signing up for a developer account on Twitter. You’ll also need to download the `requests` module with the line below in your terminal.
pip install requests
Create Twitter Request Headers
The first thing we need to do is establish how we’re going to send our request to Twitter. Twitter provides some Python libraries, but none that have good documentation or are well maintained. Let’s just do it the old fashioned way.
First, we’ll import `requests` to send HTTP requests and `json` to parse the response. We also need the Bearer Token, which we should have gotten via the Twitter API earlier. We’ll set up the search endpoint, which will be the “recent” search endpoint. Then, we’ll set up the headers. The only headers we need is the `Authorization` header which will pass in the Bearer Token.
import requests
import json
from twitter_config import bearertoken

search_recent_endpoint = "https://api.twitter.com/2/tweets/search/recent"
headers = {
    "Authorization": f"Bearer {bearertoken}"
}
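If you'd rather not keep the token in a separate twitter_config module, one common alternative is to read it from an environment variable instead; the variable name here is just an assumption:

import os

bearertoken = os.environ["TWITTER_BEARER_TOKEN"]  # assumed environment variable name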
Create Twitter Search Function
After we’ve set up the headers, we need to create a function to search Twitter. In our example, we’ll create a function that searches Twitter for the last 100 tweets that are a) in English, b) don’t have links, and c) are not retweets.
Configure Twitter Search Parameters
Our Twitter `search` function will take one parameter, the search term. We expect the search term to be a string. The first thing we’ll do is configure our params for the search request. The params will construct a query for the search `term` and specify that we want only English results that don’t have links nor are retweets. After setting up the query, we will also specify that we want a maximum of 100 results.
# automatically builds a search query from the requested term
# looks for english tweets with no links that are not retweets
# returns the tweets
def search(term: str):
    params = {
        "query": f'{term} lang:en -has:links -is:retweet',
        'max_results': 100
    }
Parse Response
Next, we’ll use the headers and the params we just set up to send a GET response to the endpoint. We’ll use the `json` module to parse the response we get. Remember that we want to return all the Tweets as a string. The next thing we’ll do is get all the Tweets from the `data` key. Then we’ll join all of the `text` from each of the Tweets with a period and a new line. Finally, we’ll return that string.
    response = requests.get(url=search_recent_endpoint, headers=headers, params=params)
    res = json.loads(response.text)
    tweets = res["data"]
    text = ".\n".join([tweet["text"] for tweet in tweets])
    return text
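One caveat worth noting: when no Tweets match the query, the response body may not contain a "data" key at all, so a slightly more defensive variant of the last two lines (same behavior whenever results exist) could be:

    tweets = res.get("data", [])
    text = ".\n".join([tweet["text"] for tweet in tweets])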
Full Code
Here’s the full code for the function to search Twitter and return the text from the Tweets.
# automatically builds a search query from the requested term
# looks for english tweets with no links that are not retweets
# returns the tweets
def search(term: str):
    params = {
        "query": f'{term} lang:en -has:links -is:retweet',
        'max_results': 100
    }
    response = requests.get(url=search_recent_endpoint, headers=headers, params=params)
    res = json.loads(response.text)
    tweets = res["data"]
    text = ".\n".join([tweet["text"] for tweet in tweets])
    return text
Test Twitter Search Functionality
Since it’s December 26, which is Boxing Day in Canada, but the day after Christmas in the US, we’ll search Twitter for Christmas.
print(search("christmas"))
The results will show us the 100 most recent tweets people are making that contain the word "Christmas".
There might be times when you have to remove a specific node (or multiple nodes) from service, either temporarily or permanently. This might include cases of troubleshooting nodes that are in a bad state, or retiring nodes after an update to the AMI so that all nodes are using the new AMI.
This topic describes how to temporarily prevent new workloads from being assigned to a node, as well as how to safely remove workloads from a node so that it can be permanently retired.
The kubectl cordon <node> command will prevent any additional pods from being scheduled onto the node, without disrupting any of the pods currently running on it. For example, let's say a new node in your cluster has come up with some problems, and you want to cordon it before launching any new runs to ensure they will not land on that node. The procedure might look like this:
$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-24-46.us-east-2.compute.internal   Ready    <none>   51m   v1.14.7-eks-1861c5
ip-192-168-3-110.us-east-2.compute.internal   Ready    <none>   12d   v1.14.7-eks-1861c5

$ kubectl cordon ip-192-168-24-46.us-east-2.compute.internal
node/ip-192-168-24-46.us-east-2.compute.internal cordoned

$ kubectl get nodes
NAME                                          STATUS                     ROLES    AGE   VERSION
ip-192-168-24-46.us-east-2.compute.internal   Ready,SchedulingDisabled   <none>   53m   v1.14.7-eks-1861c5
ip-192-168-3-110.us-east-2.compute.internal   Ready                      <none>   12d   v1.14.7-eks-1861c5
Notice the SchedulingDisabled status on the cordoned node.
You can undo this and return the node to service with the command kubectl uncordon <node>.
Identify user workloads
Before removing a node from service permanently, you must ensure there are no workloads still running on it that should not be disrupted. For example, you might see the following workloads running on a node (notice the specification of the compute namespace with -n and wide output to include the node hosting the pod with -o):
$ kubectl get po -n domino-compute -o wide | grep ip-192-168-24-46.us-east-2.compute.internal
run-5e66acf26437fe0008ca1a88-f95mk                2/2   Running   0   23m   192.168.4.206    ip-192-168-24-46.us-east-2.compute.internal   <none>   <none>
run-5e66ad066437fe0008ca1a8f-629p9                3/3   Running   0   24m   192.168.28.87    ip-192-168-24-46.us-east-2.compute.internal   <none>   <none>
run-5e66b65e9c330f0008f70ab8-85f4f5f58c-m46j7     3/3   Running   0   51m   192.168.23.128   ip-192-168-24-46.us-east-2.compute.internal   <none>   <none>
model-5e66ad4a9c330f0008f709e4-86bd9597b7-59fd9   2/2   Running   0   54m   192.168.28.1     ip-192-168-24-46.us-east-2.compute.internal   <none>   <none>
domino-build-5e67c9299c330f0008f70ad1             1/1   Running   0   3s    192.168.13.131   ip-192-168-24-46.us-east-2.compute.internal   <none>   <none>
Different types of workloads must be treated differently. You can see the details of a particular workload with kubectl describe po run-5e66acf26437fe0008ca1a88-f95mk -n domino-compute. The labels section of the describe output is particularly useful to distinguish the type of workload, as each of the workloads named run-… will have a label like dominodatalab.com/workload-type=<type of workload>. The previous example contains one each of the major user workloads:
- run-5e66acf26437fe0008ca1a88-f95mk is a Batch Job, with label dominodatalab.com/workload-type=Batch. It will stop running on its own once it is finished and disappear from the list of active workloads.
- run-5e66ad066437fe0008ca1a8f-629p9 is a Workspace, with label dominodatalab.com/workload-type=Workspace. It will keep running until the user who launched it shuts it down. You have the option of contacting users to shut down their workspaces, waiting a day or two in the expectation they will shut them down naturally, or removing the node with the workspaces still running. (The last option is not recommended unless you are certain there is no un-synced work in any of the workspaces and have communicated with the users about the interruption.)
- run-5e66b65e9c330f0008f70ab8-85f4f5f58c-m46j7 is an App, with label dominodatalab.com/workload-type=App. It is a long-running process, and is governed by a Kubernetes deployment. It will be recreated automatically if you destroy the node hosting it, but will experience whatever downtime is required for a new pod to be created and scheduled on another node. See below for methods to proactively move the pod and reduce downtime.
- model-5e66ad4a9c330f0008f709e4-86bd9597b7-59fd9 is a Model API. It does not have a dominodatalab.com/workload-type label, and instead is easily identifiable by the pod name. It is also a long-running process, similar to an App, with similar concerns. See below for methods to proactively move the pod and reduce downtime.
- domino-build-5e67c9299c330f0008f70ad1 is a Compute Environment. It will finish on its own and go into a Completed state.
Manage long-running workloads
For the long-running workloads governed by a Kubernetes deployment, you can proactively move the pods off of the cordoned node by running a command like this:
$ kubectl rollout restart deploy model-5e66ad4a9c330f0008f709e4 -n domino-compute
Notice the name of the deployment is the same as the first part of the name of the pod in the above section. You can see a list of all deployments in the compute namespace by running
kubectl get deploy -n domino-compute.
Whether the associated app or model experiences any downtime will depend on the update strategy of the deployment. For the two example workloads above in a test deployment, one App and one Model API, we have the following (describe output filtered here for brevity):
$ kubectl describe deploy run-5e66b65e9c330f0008f70ab8 -n domino-compute | grep -i "strategy\|replicas:"
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
RollingUpdateStrategy:  1 max unavailable, 1 max surge

$ kubectl describe deploy model-5e66ad4a9c330f0008f709e4 -n domino-compute | grep -i "strategy\|replicas:"
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
RollingUpdateStrategy:  0 max unavailable, 25% max surge
The App in this case would experience some downtime, since the old pod will be terminated immediately (1 max unavailable with only 1 pod currently running). The model will not experience any downtime since the termination of the old pod will be forced to wait until a new pod is available (0 max unavailable). If desired, you can edit the deployments to change these settings and avoid downtime.
Manage older versions of Kubernetes
Earlier versions of kubernetes do not have the kubectl rollout restart command, but a similar effect can be achieved by "patching" the deployment with a throwaway annotation like this:
$ kubectl patch deploy run-5e66b65e9c330f0008f70ab8 -n domino-compute -p '{"spec":{"template":{"metadata":{"annotations":{"migration_date":"'$(date +%Y%m%d)'"}}}}}'
The patching process will respect the same update strategies as the above restart command.
In cases where you have to retire many nodes, it can be useful to loop over many nodes and/or workload pods in a single command. Customizing the output format of kubectl commands, appropriate filtering, and combining with xargs makes this possible.
For example, to cordon all nodes in the default node pool, you can run the following:
$ kubectl get nodes -l dominodatalab.com/node-pool=default -o custom-columns=:.metadata.name --no-headers | xargs kubectl cordon
To view only apps running on a particular node, you can filter using the labels discussed previously:
$ kubectl get pods -n domino-compute -o wide -l dominodatalab.com/workload-type=App | grep <node-name>
To do a rolling restart of all model pods (over all nodes), you can run:
$ kubectl get deploy -n domino-compute -o custom-columns=:.metadata.name --no-headers | grep model | xargs kubectl rollout restart -n domino-compute deploy
When constructing such commands for larger maintenance, always run the first part of the command by itself to verify that the list of names being passed to xargs and to the final kubectl command are what you expect.
I have simple code to write new branches to an existing TTree, but the resulting root file seems to contain two copies of the same ttree. Can anyone see why in the code below?
The problem I’m trying to solve is that I have an existing TTree that contains a few arrays of known length. I’m writing a function to loop through all entries in the TTree, and then loop over all elements in the array to find elements that pass certain cuts and place those in a new branch.
I’m still thinking how I can simplify the code below, or if it could be made faster.
Code here:
[code]
from ROOT import TFile, TTree  # Import any ROOT class you want
from array import array        # used to make Float_t array ROOT wants
import sys

ttreeName = "NTuples/Analysis"  # TTree name in all files
listOfFiles = ["testing.root"]

for fileName in listOfFiles:
    file = TFile(fileName, "update")  # Open TFile
    if file.IsZombie():
        print "Error opening %s, exiting..." % fileName
        sys.exit(0)
    print "Opened %s, looking for %s..." % (fileName, ttreeName)

    ttree = TTree()  # Create empty TTree, and
    try:             # try to get TTree from file.
        file.GetObject(ttreeName, ttree)
    except:
        print "Error: %s not found in %s, exiting..." % (ttreeName, fileName)
        sys.exit(0)
    print "found."

    # Add those variables into the TTree
    print "Adding new branches:\n  ",
    listOfNewBranches = []

    newJetPt = array( 'f', [0] )
    listOfNewBranches.append( ttree.Branch("passjetPt", newJetPt, "passjetPt/F") )

    newJetEta = array( 'f', [0] )
    listOfNewBranches.append( ttree.Branch("passjetEta", newJetEta, "passjetEta/F") )

    newJetPhi = array( 'f', [0] )
    listOfNewBranches.append( ttree.Branch("passjetPhi", newJetPhi, "passjetPhi/F") )

    newJetEmEnergyFraction = array( 'f', [0] )
    listOfNewBranches.append( ttree.Branch("passjetEmEnergyFraction", newJetEmEnergyFraction, "passjetEmEnergyFraction/F") )

    newJetFHPD = array( 'f', [0] )
    listOfNewBranches.append( ttree.Branch("passjetFHPD", newJetFHPD, "passjetFHPD/F") )

    # Loop over all the entries
    numOfEvents = ttree.GetEntries()
    for n in xrange(numOfEvents):
        newJetPt[0] = 0.0
        newJetEta[0] = 0.0
        newJetPhi[0] = 0.0
        newJetEmEnergyFraction[0] = 0.0
        newJetFHPD[0] = 0.0

        ttree.GetEntry(n)
        for i in 0, 1, 2, 3:  # Loop over the top 3 jets until we find one passing cuts
            if ttree.jetPt[i] < 5.0:
                break
            if (ttree.emEnergyFraction[i] > 0.01) and (ttree.fHPD[i] < 0.98):
                # Found a jet that passes cuts
                newJetPt[0] = ttree.jetPt[i]
                newJetEta[0] = ttree.jetEta[i]
                newJetPhi[0] = ttree.jetPhi[i]
                newJetEmEnergyFraction[0] = ttree.emEnergyFraction[i]
                newJetFHPD[0] = ttree.fHPD[i]
                break

        # Fill new branches
        for newBranch in sorted(listOfNewBranches):
            newBranch.Fill()

    file.Write()
    file.Close()
[/code]
Introduction:
- The chapter will explain the interceptor “ExecAndWait” in Struts 2 with an example program.
Execute and Wait Interceptor:
- As the name suggests, the execute and wait interceptor is used to make the user wait (on a waiting page) while an action executes.
- The application is developed in such a way that a waiting page is displayed before the next page is shown. So the sequence of execution is:
First.jsp -> wait.jsp -> next.jsp (or other JSP page)
- In our application, we are developing a login form for employees. The first page will have form elements for inputting user credentials, and will display the next page only if the credentials are verified. Now, since the process of authentication might take a little longer, we can display a custom waiting page to inform the user about anything. That's where the exec and wait interceptor comes into the picture.
- So the first.jsp can be coded as:
//> Execute and Wait Demo - Employee Login Application </title> </head> <body> <h1> Employee Login System </h1> <hr/> <s:form <s:label <s:textfield <br/> <s:label<s:textfield <br/> <s:submit/> </s:form> </body> </html>
- Now the next page will simply print the user's name on successful login.
- So, next.jsp can be written as follows:
<%-- Document : next Created on : Nov 20, 2014, 12:50:20 PM Author : Admin --%> <%@page <title> Employee Page </title> </head> <body> <h1> Login success.. </h1> <hr/> <h2> Welcome <s:property </h2> </body> </html>
- The action class will only contain getter and setter methods for all the variables used in our first.jsp. The additional thing to be taken care of in the action class is to provide a timed wait (for displaying the waiting page's message) through a timed loop (any loop – for, while or do-while) or simply by making the current thread sleep for some milliseconds.
- In our example, we are making the program's thread sleep for 5 seconds (i.e., 5000 milliseconds) after the user's credentials are verified.
- So Myaction_1.java will be coded as:
// Myaction_1.java
/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */
package action_class;

import com.opensymphony.xwork2.ActionSupport;

/**
 *
 * @author Admin
 */
public class Myaction_1 extends ActionSupport {

    private String name, pwd;

    public void setName(String name) {
        this.name = name;
    }

    public void setPwd(String pwd) {
        this.pwd = pwd;
    }

    public String getName() {
        return name;
    }

    public String getPwd() {
        return pwd;
    }

    public String execute() throws Exception {
        // Compare strings with equals(); == would compare object references
        if ("abc".equals(name) && "abc".equals(pwd)) {
            Thread.sleep(5000);
            return SUCCESS;
        } else {
            return "failure";
        }
    }
}
- Also the waiting page is to be developed additionally so that it can display any useful message to the user.
- Again, this page must be refreshed so that it does not continue to display the same message again and again. This refreshing of a page can be done by meta tags of HTML.
- That is,
<meta http-equiv="refresh" content="3">
- The meta tag of HTML will automatically refresh the page as per the seconds specified in the content attribute of the meta tag.
- So wait.jsp can be coded as:
// wait.jsp <%-- Document : wait Created on : Dec 3, 2014, 3:30:59 PM Author : Admin --%> <%@page <title> Employee Login System </title> </head> <body> <h1> Employee Login System </h1> <hr/> <h2> Please wait while we verify your credentials.. </h2> </body> </html>
- Now to combine our JSP’s with action class, we need to configure our struts.xml.
// struts.xml
<!DOCTYPE struts PUBLIC
    "-//Apache Software Foundation//DTD Struts Configuration 2.0//EN"
    "http://struts.apache.org/dtds/struts-2.0.dtd">
<struts>
    <constant name = "struts.devMode" value = "true"></constant>
    <package name = "default" extends = "struts-default">
        <action name = "click" class = "action_class.Myaction_1">
            <interceptor-ref
                <param name = "delay"> 5000 </param>
                <param name = "delaySleepInterval"> 500 </param>
            </interceptor-ref>
            <result name = "wait"> wait.jsp </result>
            <result name = "success"> next.jsp </result>
            <result name = "failure"> first.jsp </result>
        </action>
    </package>
</struts>
- The execute and wait interceptor is configured with the following syntax:
<interceptor-ref
<param name = "delay"> Number of milliseconds of initial delay before the wait page is shown </param>
<param name = "delaySleepInterval"> Number of milliseconds to wait between checks on the background process </param>
<param name = "threadPriority"> Integer priority of the thread </param>
</interceptor-ref>
- The parameters for the execute and wait interceptor are:
- delay – the initial delay (in milliseconds) before the waiting page (wait.jsp in our example) is displayed
- delaySleepInterval – the interval (in milliseconds) at which the framework checks whether the background process has finished before forwarding to next.jsp
- threadPriority – integral value of thread priority
- These parameters are not mandatory to use. In our application, as we have only one process of execution (that is, only one execute method), we are not required to configure the threadPriority parameter.
Our application will run as follows:
Figure: first run of application
Figure: waiting page (wait.jsp) is executed when authentication is under process. This page will automatically get refreshed after every 3 seconds.
| https://wideskills.com/struts/execandwait-interceptor | CC-MAIN-2022-27 | en | refinedweb |
XMonad.Layout.BinaryColumn
Description
Provides Column layout that places all windows in one column. Each window is half the height of the previous, except for the last pair of windows.
- Note: Originally based on Column, with the following changes:
- Adding/removing windows doesn't resize all other windows. (last window pair exception).
- Minimum window height option.
Synopsis
- data BinaryColumn a = BinaryColumn Float Int
Usage
This module defines a layout named BinaryColumn. It places all windows in one column. Window heights are calculated to prevent windows from resizing whenever a window is added or removed. This is done by keeping the last two windows in the stack the same height.
You can use this module by adding the following in your
xmonad.hs:
import XMonad.Layout.BinaryColumn
Then add layouts to your layoutHook:
myLayoutHook = BinaryColumn 1.0 32 ||| ...
The first value causes the master window to take exactly half of the screen, the second ensures that windows are no less than 32 pixels tall.
Shrink/Expand can be used to adjust the first value by increments of 0.1.
- 2.0 uses all space for the master window (minus the space for windows which get their fixed height).
- 0.0 gives an evenly spaced grid. Negative values reverse the sizes so the last window in the stack becomes larger.
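A minimal configuration sketch is shown below; it only assumes the stock xmonad defaults, under which the standard mod-h and mod-l bindings send the Shrink and Expand messages that step this value:

import XMonad
import XMonad.Layout.BinaryColumn

main :: IO ()
main = xmonad def
  { layoutHook = BinaryColumn 1.0 32 ||| Full
  }
  -- With the default key bindings, mod-h (Shrink) and mod-l (Expand)
  -- adjust the first BinaryColumn parameter by 0.1 per press.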
data BinaryColumn a Source #
Constructors | https://xmonad.github.io/xmonad-docs/xmonad-contrib-0.17.0.9/XMonad-Layout-BinaryColumn.html | CC-MAIN-2022-27 | en | refinedweb |
Since 2014, Generative Adversarial Networks (GANs) have been taking over the field of deep learning and neural networks due to the immense potential these architectures possess. While the initial GANs were able to produce decent results, they were often found to fail when trying to perform more difficult computations. Hence, several variations of these GANs have been proposed to ensure that we are able to achieve the best results possible. In our previous articles, we have covered such versions of GANs to solve different types of projects, and in this article, we will also do the same.
In this article, we will cover one of the types of generative adversarial networks (GANs) in Wasserstein GAN (WGANs). We will understand the working of these WGAN generators and discriminator structures as well as dwell on the details for their implementation. We will look into its implementation with the gradient penalty approach, and, finally, construct a project with the following architecture from scratch. The entire project can be implemented on the Gradient platform available on Paperspace. For viewers who want to train the project, I would recommend the viewers to check out the website and implement the project alongside.
Introduction:
Generative Adversarial Networks (GANs) are a tremendous accomplishment in the world of artificial intelligence and deep learning. Since their original introduction, they have been consistently used in the development of spectacular projects. While these GANs, with their competing generator and discriminator models, are able to achieve massive success, there were several cases of failure of these networks.
Two of the most common reasons were due to either a convergence failure or a mode collapse. In convergence failure, the model failed to produce optimal or good quality results. In the case of a mode collapse, the model failed to produce unique images repeating a similar pattern or quality. Hence, to solve some of these issues or to combat numerous types of problems, there were gradually many variations and versions developed for GANs.
While we have discussed the concept of DCGANs in some of our previous articles, in this blog, we will focus on the WGAN networks for combating such issues. WGAN offers higher stability to the training model in comparison to simple GAN architectures. The loss function utilized in WGAN also gives us a termination criterion for evaluating the model. While it may sometimes take slightly longer to train, it is one of the better options to achieve more efficient results. Let us understand the concept of these WGANs in a bit more detail in the next section.
Understanding WGANs:
The idea behind the working of Generative Adversarial Networks (GANs) is to utilize two primary probability distributions. One of the main entities is the probability distribution of the generator (Pg), which refers to the distribution of the output of the generator model. The other essential entity is the probability distribution of the real images (Pr). The objective of Generative Adversarial Networks is to ensure that both of these probability distributions are close to each other, so that the generated output is highly realistic and of high quality.
For calculating the distance between these probability distributions, mathematical statistics in machine learning proposes three primary methods, namely the Kullback–Leibler divergence, the Jensen–Shannon divergence, and the Wasserstein distance. The Jensen–Shannon divergence (also the typical GAN loss) was initially the more utilized mechanism in simple GAN networks.
However, this method has issues while working with gradients that can lead to unstable training. Hence, we make use of the Wasserstein distance to fix such recurring issues. The representation for the mathematical formula is as shown below. Refer to the following research paper for further reading and information.
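In its Kantorovich–Rubinstein dual form, the critic's objective can be written as:

$$ W(P_r, P_g) \;=\; \max_{\|D\|_L \le 1} \; \mathbb{E}_{x \sim P_r}[D(x)] \;-\; \mathbb{E}_{\tilde{x} \sim P_g}[D(\tilde{x})] $$

where the maximum is taken over all 1-Lipschitz critic functions D.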
In the above equation, the max value represents the constraint on the discriminator. In the WGAN architecture, the discriminator is referred to as the critic. One of the reasons for this convention is that there is no sigmoid activation function to limit the values to 0 or 1, which means real or fake. Instead, the WGAN discriminator networks return a value in a range, which allows it to act less strictly as a critic.
The first part of the equation represents the real data, while the second half represents the generator data. The discriminator (or the critic) in the above equation aims to maximize the distance between the real data and the generated data because it wants to be able to successfully distinguish the data accordingly. The generator network aims to minimize the distance between the real data and generated data because it wants the generated data to be as real as possible.
Learning the details for the implementation of WGANs:
For the original implementation of the WGAN network, I would recommend checking out the following research paper. It describes the implementation of the architectural build in detail. The critic adds a meaningful metric for the desired computation for problems related to GAN and also improves the training stability.
However, one of the main disadvantages of the initial research paper, which uses a method of weight clipping, was found to be that this method did not always work as optimally as expected. When the weight clipping was sufficiently large, it led to longer training times as the critic took a lot of time to adjust to the expected weights. When the weight clipping was small, it led to vanishing gradients, especially in cases of a large number of layers, no batch normalization, or problems related to RNNs.
Hence, there was a need for a slight improvement in the training mechanism of WGAN. One of the best methods introduced to combat these issues was in the following research paper which tackled this problem with the use of the gradient penalty method. This research paper help in improving the training of the WGAN. Let us look at an image of the algorithm that is proposed for achieving the required task.
The WGAN uses a gradient penalty approach to effectively solve the previous issues of this network. The WGAN-GP method proposes an alternative to weight clipping to ensure smooth training. Instead of clipping the weights, the authors proposed a "gradient penalty" by adding a loss term that keeps the L2 norm of the discriminator gradients close to 1 (Source). The algorithm above defines some of the basic parameters that we must consider while utilizing this approach.
The lambda defines the gradient penalty coefficient, while the n-critic refers to the number of critic iteration per generator iteration. The alpha and beta values refer to the constraints of the Adam optimizer. The approach proposes that we make use of an interpolation image alongside the generated image before adding the loss function with gradient penalty as it helps to satisfy the Lipschitz constraint. The algorithm is run until we are able to achieve a satisfactory convergence on the required data. Let us now look at the practical implementation of this WGAN with the gradient penalty method for constructing the MNIST project.
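Written out, the critic loss with the gradient penalty term proposed in that paper takes the form:

$$ L \;=\; \mathbb{E}_{\tilde{x} \sim P_g}[D(\tilde{x})] \;-\; \mathbb{E}_{x \sim P_r}[D(x)] \;+\; \lambda \, \mathbb{E}_{\hat{x}}\big[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2\big] $$

where the interpolated sample x̂ is drawn uniformly along straight lines between pairs of real and generated images, and λ is the gradient penalty coefficient mentioned above.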
Construct a project with WGANs:
In this section of the article, we will develop the WGAN networks from our understanding of the method of functioning and details of implementation. We will ensure that we use a gradient penalty methodology while training the WGAN network. For the construction of this project, we will utilize the following reference link from the official Keras website, from which a majority of the code has been considered.
If you are working within Gradient, I suggest you create a Notebook using the TensorFlow runtime. This will set up your environment in a docker container with TensorFlow and Keras installed.
Importing the essential libraries:
We will make use of the TensorFlow and Keras deep learning frameworks for constructing the WGAN architecture. If you are not too familiar with these libraries, I will recommend referring to my previous articles that cover these two topics extensively. The viewers can check out the TensorFlow article from this link and the Keras blog from the following link. These two libraries should be sufficient for the construction of most of the tasks in this project. We will also import numpy for some array computations and matplotlib for some visualizations if required.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
import numpy as np
Defining Parameters and Loading Data:
In this section, we will define some of the basic parameters, define a few blocks of neural networks to reuse throughout the project, namely the conv block and the upsample block and load the MNIST data accordingly. Let us first define some of the parameters, such as the image size of the MNIST data, which is 28 x 28 x 1 because each image has a height and width of 28 and has one channel, which means it is a grayscale image. Let us also define a base batch size and a noise dimension which the generator can utilize for the generation of the desired number of 'digit' images.
IMG_SHAPE = (28, 28, 1)
BATCH_SIZE = 512
noise_dim = 128
In the next step, we will load the MNIST data, which is directly accessible from the free example datasets bundled with TensorFlow and Keras. The loader returns the train images, train labels, test images, and test labels (60,000 training images and 10,000 test images). Finally, we will normalize these images so that the training model can easily compute the values in a fixed range. Below is the code block for performing these actions.
MNIST_DATA = keras.datasets.mnist

(train_images, train_labels), (test_images, test_labels) = MNIST_DATA.load_data()
print(f"Number of examples: {len(train_images)}")
print(f"Shape of the images in the dataset: {train_images.shape[1:]}")

train_images = train_images.reshape(train_images.shape[0], *IMG_SHAPE).astype("float32")
train_images = (train_images - 127.5) / 127.5
In the next code snippet, we will define the convolutional block, which we will mostly utilize for the construction of the discriminator architecture for it to act as a critic for the generated images. The convolutional block function will take in some of the basic parameters for the convolution 2D layer as well as some other parameters, namely batch normalization, and dropout. As described in the research paper, some of the layers of the discriminator critic model make use of a batch normalization or dropout layer. Hence, we can choose to add either of the two layers to be followed after a convolutional layer if required. The code snippet below represents the function for the convolutional block.
def conv_block(x, filters, activation, kernel_size=(3, 3), strides=(1, 1), padding="same",
               use_bias=True, use_bn=False, use_dropout=False, drop_value=0.5):
    x = layers.Conv2D(filters, kernel_size, strides=strides, padding=padding, use_bias=use_bias)(x)
    if use_bn:
        x = layers.BatchNormalization()(x)
    x = activation(x)
    if use_dropout:
        x = layers.Dropout(drop_value)(x)
    return x
Similarly, we will also construct another function for the upsample block, which we will mostly utilize throughout the computation of the generator architecture of the WGAN structure. We will define some of the basic parameters and an option if we want to include the batch normalization layer or the dropout layer. Note that each upsample block is followed by a conventional convolutional layer as well. The batch normalization or dropout layer may be added after these two layers if required. Check out the below code for creating the upsample block.
def upsample_block(x, filters, activation, kernel_size=(3, 3), strides=(1, 1), up_size=(2, 2),
                   padding="same", use_bn=False, use_bias=True, use_dropout=False, drop_value=0.3):
    x = layers.UpSampling2D(up_size)(x)
    x = layers.Conv2D(filters, kernel_size, strides=strides, padding=padding, use_bias=use_bias)(x)
    if use_bn:
        x = layers.BatchNormalization()(x)
    if activation:
        x = activation(x)
    if use_dropout:
        x = layers.Dropout(drop_value)(x)
    return x
In the next couple of sections, we will utilize both the convolutional block and the upsample blocks to construct the generator and discriminator architecture. Let us proceed to look at how to build the generator model and the discriminator model accordingly to create an overall highly effective WGAN architecture to solve the MNIST project.
Constructing The Generator Architecture:
With the help of the previously defined upsample block function, we can proceed to construct our generator model for this project. We will first define some basic requirements, such as the noise input with the latent dimension that we previously assigned. We will follow this noise up with a fully connected layer, a batch normalization layer, and a Leaky ReLU. Before we pass the output to the upsample blocks, we need to reshape it accordingly.
We will then pass the reshaped noise output into a series of upsampling blocks. Once we pass the output through three upsample blocks, we achieve a final shape of 32 x 32 in the height and width dimension. But we know that the shape of the MNIST dataset is in the form of 28x28. To achieve this data, we will use the Cropping 2D layer for achieving the required shape. Finally, we will finish the construction of the generator architecture by calling the model function.
def get_generator_model():
    noise = layers.Input(shape=(noise_dim,))
    x = layers.Dense(4 * 4 * 256, use_bias=False)(noise)
    x = layers.BatchNormalization()(x)
    x = layers.LeakyReLU(0.2)(x)

    x = layers.Reshape((4, 4, 256))(x)
    x = upsample_block(x, 128, layers.LeakyReLU(0.2), strides=(1, 1), use_bias=False,
                       use_bn=True, padding="same", use_dropout=False)
    x = upsample_block(x, 64, layers.LeakyReLU(0.2), strides=(1, 1), use_bias=False,
                       use_bn=True, padding="same", use_dropout=False)
    x = upsample_block(x, 1, layers.Activation("tanh"), strides=(1, 1), use_bias=False, use_bn=True)
    x = layers.Cropping2D((2, 2))(x)

    g_model = keras.models.Model(noise, x, name="generator")
    return g_model


g_model = get_generator_model()
g_model.summary()
Constructing The Discriminator Architecture:
Now that we have completed the construction of the generator architecture, we can proceed to create the discriminator network (more commonly known as the critic in WGANs). The first step we will perform in the discriminator model for performing the project of MNIST data generation is to adjust the shape accordingly. Since the dimensions of 28 x 28 lead to an odd dimension after a couple of strides, it is best to convert the image size into the dimension of 32 x 32 because it provides an even dimension after performing the striding operation.
Once we add the zero-padding layer, we can continue to develop the critic architecture as desired. We will then proceed to add a series of convolutional blocks as described in our previous function. Note the layers that may or may not use a batch normalization or dropout layer. After four convolutional blocks, we will pass the output through a flatten layer, a dropout layer, and finally, a dense layer. Note that the dense layer does not utilize a sigmoid activation function, unlike other discriminators in simple GAN networks. Finally, call the model to create the critic network.
def get_discriminator_model():
    img_input = layers.Input(shape=IMG_SHAPE)
    x = layers.ZeroPadding2D((2, 2))(img_input)
    x = conv_block(x, 64, kernel_size=(5, 5), strides=(2, 2), use_bn=False, use_bias=True,
                   activation=layers.LeakyReLU(0.2), use_dropout=False, drop_value=0.3)
    x = conv_block(x, 128, kernel_size=(5, 5), strides=(2, 2), use_bn=False, use_bias=True,
                   activation=layers.LeakyReLU(0.2), use_dropout=True, drop_value=0.3)
    x = conv_block(x, 256, kernel_size=(5, 5), strides=(2, 2), use_bn=False, use_bias=True,
                   activation=layers.LeakyReLU(0.2), use_dropout=True, drop_value=0.3)
    x = conv_block(x, 512, kernel_size=(5, 5), strides=(2, 2), use_bn=False, use_bias=True,
                   activation=layers.LeakyReLU(0.2), use_dropout=False, drop_value=0.3)

    x = layers.Flatten()(x)
    x = layers.Dropout(0.2)(x)
    x = layers.Dense(1)(x)

    d_model = keras.models.Model(img_input, x, name="discriminator")
    return d_model


d_model = get_discriminator_model()
d_model.summary()
Creating the overall WGAN model:
Our next step is to define the overall Wasserstein GAN network. We will divide this WGAN building structure into three blocks. In the first code block, we will define all the parameters that we will utilize throughout the class in various functions. Check the code snippet below to gain an understanding of the different parameters that we will utilize. Note that all the functions are to be placed inside the WGAN class.
class WGAN(keras.Model):
    def __init__(self, discriminator, generator, latent_dim, discriminator_extra_steps=3, gp_weight=10.0):
        super(WGAN, self).__init__()
        self.discriminator = discriminator
        self.generator = generator
        self.latent_dim = latent_dim
        self.d_steps = discriminator_extra_steps
        self.gp_weight = gp_weight

    def compile(self, d_optimizer, g_optimizer, d_loss_fn, g_loss_fn):
        super(WGAN, self).compile()
        self.d_optimizer = d_optimizer
        self.g_optimizer = g_optimizer
        self.d_loss_fn = d_loss_fn
        self.g_loss_fn = g_loss_fn
In the next function, we will create the gradient penalty method that we have discussed in the previous section. Note that the gradient penalty loss is calculated on an interpolated image and added to the discriminator loss as discussed in the algorithm of the previous section. This method allows us to achieve faster convergence and higher stability while training. It also enables us to achieve a better assignment of weights. Check the below code for the implementation of the gradient penalty.
    def gradient_penalty(self, batch_size, real_images, fake_images):
        # Get the interpolated image
        alpha = tf.random.normal([batch_size, 1, 1, 1], 0.0, 1.0)
        diff = fake_images - real_images
        interpolated = real_images + alpha * diff

        with tf.GradientTape() as gp_tape:
            gp_tape.watch(interpolated)
            pred = self.discriminator(interpolated, training=True)

        grads = gp_tape.gradient(pred, [interpolated])[0]
        norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))
        gp = tf.reduce_mean((norm - 1.0) ** 2)
        return gp
In the next and final function, we will define the training step for the WGAN architecture, similar to the algorithm specified in the previous section. We will first train the critic for the specified number of extra steps and obtain the discriminator loss, computing the gradient penalty inside the critic's gradient tape. The gradient penalty is multiplied by a constant weight factor and added to the critic loss. We will then train the generator and obtain the generator loss. Finally, we will return the generator and critic losses accordingly. The code snippet below defines how these actions can be performed.
    def train_step(self, real_images):
        if isinstance(real_images, tuple):
            real_images = real_images[0]

        batch_size = tf.shape(real_images)[0]

        for i in range(self.d_steps):
            # Get the latent vector
            random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))

            with tf.GradientTape() as tape:
                # Generate fake images from the latent vector
                fake_images = self.generator(random_latent_vectors, training=True)
                # Get the logits for the fake images
                fake_logits = self.discriminator(fake_images, training=True)
                # Get the logits for the real images
                real_logits = self.discriminator(real_images, training=True)

                # Calculate the discriminator loss using the fake and real image logits
                d_cost = self.d_loss_fn(real_img=real_logits, fake_img=fake_logits)
                # Calculate the gradient penalty
                gp = self.gradient_penalty(batch_size, real_images, fake_images)
                # Add the gradient penalty to the original discriminator loss
                d_loss = d_cost + gp * self.gp_weight

            # Get the gradients w.r.t the discriminator loss
            d_gradient = tape.gradient(d_loss, self.discriminator.trainable_variables)
            # Update the weights of the discriminator using the discriminator optimizer
            self.d_optimizer.apply_gradients(zip(d_gradient, self.discriminator.trainable_variables))

        # Train the generator
        random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))
        with tf.GradientTape() as tape:
            # Generate fake images using the generator
            generated_images = self.generator(random_latent_vectors, training=True)
            # Get the discriminator logits for fake images
            gen_img_logits = self.discriminator(generated_images, training=True)
            # Calculate the generator loss
            g_loss = self.g_loss_fn(gen_img_logits)

        # Get the gradients w.r.t the generator loss
        gen_gradient = tape.gradient(g_loss, self.generator.trainable_variables)
        # Update the weights of the generator using the generator optimizer
        self.g_optimizer.apply_gradients(zip(gen_gradient, self.generator.trainable_variables))

        return {"d_loss": d_loss, "g_loss": g_loss}
Training the model:
The final step of developing the WGAN architecture and solving our project is to train it effectively and achieve the desired result. We will divide this section into a few functions. In the first function, we will proceed to create the custom callback for the WGAN model. Using this custom callback that we create, we can save the generated images periodically. The code snippet below shows how you can create your own custom callbacks to perform a specific operation.
class GANMonitor(keras.callbacks.Callback):
    def __init__(self, num_img=6, latent_dim=128):
        self.num_img = num_img
        self.latent_dim = latent_dim

    def on_epoch_end(self, epoch, logs=None):
        random_latent_vectors = tf.random.normal(shape=(self.num_img, self.latent_dim))
        generated_images = self.model.generator(random_latent_vectors)
        generated_images = (generated_images * 127.5) + 127.5

        for i in range(self.num_img):
            img = generated_images[i].numpy()
            img = keras.preprocessing.image.array_to_img(img)
            img.save("generated_img_{i}_{epoch}.png".format(i=i, epoch=epoch))
In the next step, we will create some of the essential parameters required for analyzing and solving our problem. We will define the optimizers for both the generator and the discriminator. We can utilize the Adam optimizer with the suggested hyperparameters in the research paper's algorithm that we studied in the previous section. We will then also proceed to create the generator and discriminator losses that we can monitor accordingly. These losses have some meaning, unlike the simple GAN architectures that we have developed in previous articles.
generator_optimizer = keras.optimizers.Adam(
    learning_rate=0.0002, beta_1=0.5, beta_2=0.9)

discriminator_optimizer = keras.optimizers.Adam(
    learning_rate=0.0002, beta_1=0.5, beta_2=0.9)


def discriminator_loss(real_img, fake_img):
    real_loss = tf.reduce_mean(real_img)
    fake_loss = tf.reduce_mean(fake_img)
    return fake_loss - real_loss


def generator_loss(fake_img):
    return -tf.reduce_mean(fake_img)
Finally, we will call and instantiate all the requirements for the model. We will train our model for a total of 20 epochs. The viewers can choose to train more if they desire to do so. We will define the WGAN architecture, create the callback, and compile the model with all the associated parameters. Finally, we will proceed to fit the model, which will enable us to train the WGAN network and generate images for the MNIST project.
epochs = 20

# Instantiate the custom defined Keras callback.
cbk = GANMonitor(num_img=3, latent_dim=noise_dim)

# Instantiate the WGAN model.
wgan = WGAN(discriminator=d_model, generator=g_model, latent_dim=noise_dim,
            discriminator_extra_steps=3,)

# Compile the WGAN model.
wgan.compile(d_optimizer=discriminator_optimizer, g_optimizer=generator_optimizer,
             g_loss_fn=generator_loss, d_loss_fn=discriminator_loss,)

# Start training the model.
wgan.fit(train_images, batch_size=BATCH_SIZE, epochs=epochs, callbacks=[cbk])
After training the WGAN model for a limited number of epochs, I was still able to achieve a decent result on the MNIST dataset. Below are the image representations of some of the good data that I was able to generate through the following model architecture. After training for some more epochs, the generator should be able to effectively generate much better quality of images. If you have the time and resources, it is recommended to run the following program for a bit more time to obtain highly efficient results. The Gradient platform provided by Paperspace is one of the best options for running such deep learning programs to achieve the best results on your training.
Conclusion:
Generative Adversarial Networks are solving some highly difficult problems in the modern era. Wasserstein GAN is a significant improvement to the simple GAN architecture helping it to combat issues such as convergence failure or a mode collapse. While arguably it may sometimes take a slightly longer time to train, with the best resources, you will always notice that the following model will obtain high-quality results with a guarantee.
In this article, we understood the theoretical working procedure of Wasserstein Generative Adversarial Networks (WGANs) and why they work more effectively in comparison to simple GAN network architectures. We also understood the implementation details of the WGAN network before proceeding to construct a WGAN network for performing the task of MNIST. We used the concept of gradient penalty alongside the WGAN network for producing highly efficient results. It is recommended that the viewers try the procedural run of the same for a higher number of epochs and perform other experiments as well. Check out the Gradient platform of Paperspace for the productive reconstruction of the project.
In future articles, we will uncover a lot more varieties and versions of generative adversarial networks that are constantly being developed to achieve great results. We will also understand concepts of reinforcement learning and develop projects with them. Until then, keep discovering and exploring the world of deep learning!
Add speed and simplicity to your Machine Learning workflow today | https://blog.paperspace.com/wgans/ | CC-MAIN-2022-27 | en | refinedweb |
Hi,
I have a Python job that calls Julia for some computation on my datasets. Right now, passing data back and forth between Julia and Python is a bottleneck – my current process is to save the Pandas DataFrame as a feather file to disk, and load it from Julia.
I know PyJulia can pass numpy arrays with 0 copying, and I'm trying to figure out if it's possible to do the same thing with an Arrow Table. However, a table gets passed as a PyObject, and the Arrow library in Julia doesn't seem to be able to convert it, even on the latest version.
import pandas as pd
import pyarrow

df = pd.DataFrame({"id": [1, 2], "name": ["bob", "sam"]})
table = pyarrow.Table.from_pandas(df)

import julia
jl = julia.Julia(compiled_modules=False)
from julia import Arrow
from julia import Base

Base.length([1, 2, 3])
>> 3

Base.length(table)
>> 2

Base.typeof(table)
>> <PyCall.jlwrap PyObject>

Arrow.Table(table)
RuntimeError: <PyCall.jlwrap (in a Julia function called from Python) JULIA: MethodError: no method matching Arrow.Table(::PyObject)
Is there a better way to pass datasets between the two? | https://discourse.julialang.org/t/passing-an-arrow-table-from-python-to-julia/56104 | CC-MAIN-2022-27 | en | refinedweb |
To set a field value, the OnixS::FIX::Message::set method must be used. If there is no field of a given tag number available in the message, this member creates an association. If the field already exists, this member updates its value with the new value. Also, there are Set methods for each standard type (Int32, Int64, Decimal, etc.). Additionally, there are the OnixS::FIX::Message::setV methods. With these methods there can later be a need to serialize such values; however, if you perform a large amount of typed set/get operations with the message object on the critical path, then these methods are the best choice from the performance point of view.
The OnixS::FIX::Message class supports the Short String Optimization (SSO). If the size of the string presentation of a field value is less than the size of the internal field value buffer (15 bytes for 32-bit and 23 bytes for 64-bit builds), then such a value is stored directly in this buffer without any new memory allocation. Therefore, setting short string values is faster because no dynamic memory allocation is performed. Also, subsequent access to a field with a short string can work faster due to better data locality; however, please note that SSO is not applied if a long string was set to the field earlier.
To get a field value, the OnixS::FIX::Message::get method must be used.
To check the field presence, the OnixS::FIX::Message::contain method must be used.
To remove a field, the OnixS::FIX::Message::erase method must be used.
To iterate over all fields of a FIX message you can use the OnixS::FIX::FieldSet::ConstIterator class. This iterator does not iterate over fields from a repeating group. To iterate over fields of the repeating group you need to get the corresponding OnixS::FIX::GroupInstance object from the repeating group, please see the FIX Repeating Groups.
The following example demonstrates the basic operations over message fields.
In each sub-namespace, which corresponds to certain FIX version (like OnixS::FIX::FIX40), the OnixS::FIX::FIX40::Tags namespace is defined. This namespace contains the constants for all-known tag numbers. Additionally, there are the OnixS::FIX::FIX40::Values and OnixS::FIX::FIX40::TypedValues namespaces. These namespaces contain the constants for all-known tag values. The OnixS::FIX::FIX40::Values namespace contains only string constant values, the OnixS::FIX::FIX40::TypedValues namespace contains typed constant values (string, char, int) in accordance with FIX protocol types. Both namespaces contain the same values, however, typed values can be more convenient in some cases, for example, you can compare directly your internal char or int variables with corresponding typed constant without the conversion from string to char, int.
Also, there is the OnixS::FIX::FIX50_SP2_Latest namespace. It contains tag numbers and values for the latest extension pack (e.g., EP264).
The usage of all mentioned constants makes the source code more readable. | https://ref.onixs.biz/cpp-fix-engine-guide/group__manipulating-fix-message-fields.html | CC-MAIN-2022-27 | en | refinedweb |
Today's lab focuses on formatting and file processing as well as finding (and fixing!) errors.
Python has many built-in functions for working with strings and files (string methods). In this lab, we will manipulate files using various string methods.
Let's start with the program printfile.py from the book's website. Try running it. When it asks for a file, type in printfile.py. What does it print out?
Next, let's try it on the file allcaps.txt. Let's change the print line to
print(data.lower())What happens? Why?
To print to a file (instead of to the screen) is very easy:
Let's do that for the printfile.py program:
def main():
    fname = input("Enter filename: ")
    infile = open(fname,"r")
    outfile = open("out.txt","w")
    data = infile.read()
    print(data, file=outfile)

Run the program. Where did it send the output? Modify this program so that all the output is in lowercase and test it on the text file above.
In python, there's often many different ways to write the same program. Let's write a program that prints the lines of a file using a for loop:
def main():
    print("This program prints a file.")

    # get the input file name
    infileName = input("Please enter name of input file: ")

    # open the files:
    infile = open(infileName, "r")

    # read each line of the input file and print it
    for line in infile:
        print(line)

main()

What happens when you run this program? Why is it doublespacing the output? When you read a line from a file, it ends with an enter or `newline' character (often represented as `\n'). We can solve this in several different ways. One way to keep the last character of a line from being printed is to use the slice operator to truncate the string:
print(line[:-1], file = outfile)
The slice, line[:-1], says that you would like the string that consists of all the characters in line up to but not including the last character (since the -1 index always refers to the last character of a string, no matter how long the string is).
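For example, in the interactive shell:

>>> line = "HELLO\n"
>>> print(line[:-1])
HELLO
>>> len(line), len(line[:-1])
(6, 5)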
Modify this second program to print out the file all lowercase and singlespaced, and test it.
# errors.py is based on dateconvert2.py from Chapter 5 of the Zelle textbook
#   Converts day month and year numbers into two date formats

def main()
    # get the day month and year
    day, month year = eval(input("Please enter(!
If you finish early, you may work on the programming problems. | https://stjohn.github.io/teaching/cmp/cmp230/f14/lab6.html | CC-MAIN-2022-27 | en | refinedweb |
Contacts module: Access profile picture
- lukaskollmer
Hi,
Is there a way to use the contacts module to get and edit profile pictures for iOS contacts?
I'm sure I'd be able to do that with objc_util, but first I'd like to see whether it can be done using the built-in contacts module.
- Webmaster4o
I'm not sure you'd be able to with objc_util. I don't know if apps are allowed to have this kind of write access to contacts.
The contacts module currently doesn't expose that functionality (still on my todo list).
It's definitely possible to do this with objc_util/ctypes, but I don't think it would be possible to mix and match this approach with the contacts module. You can't get at the pointers to the underlying data structures in contacts, so if you want to edit a profile picture of a person using objc_util, you also have to find the right person object using that, and can't use the simpler contacts interface.
I'll see if I can come up with an example, not completely sure how much code is required right now.
Okay, here's some sample code that shows how to use objc_util with the Contacts framework to:
- Find a person by name
- Access its picture, and convert it to a ui.Image to show it in the console
- Set a new picture (picked from photos here, but you could of course also get this elsewhere)
- Save the changes
Before you run the example, set the CONTACT_NAME variable (near the top) to a name of someone in your contacts (or create a contact named "John Doe").
from objc_util import *
from ctypes import string_at
import contacts
import ui
import photos

# CHANGE THIS:
CONTACT_NAME = 'John Doe'

# Easier to do the authorization with the contacts module (this also authorizes the Contacts.framework):
if not contacts.is_authorized():
    # This implicitly shows the permission dialog, if necessary:
    contacts.get_all_groups()

# Load classes we need from Contacts.framework:
NSBundle.bundleWithPath_('/System/Library/Frameworks/Contacts.framework').load()
CNContactStore = ObjCClass('CNContactStore')
CNContact = ObjCClass('CNContact')
CNSaveRequest = ObjCClass('CNSaveRequest')

def main():
    store = CNContactStore.alloc().init().autorelease()
    # Find the first contact that matches the name.
    pred = CNContact.predicateForContactsMatchingName_(CONTACT_NAME)
    fetch_keys = ['imageDataAvailable', 'imageData']
    people = store.unifiedContactsMatchingPredicate_keysToFetch_error_(pred, fetch_keys, None)
    if not people:
        print 'No person found with the name "%s"' % (CONTACT_NAME,)
        return
    p = people[0]
    has_image = p.imageDataAvailable()
    if has_image:
        # Show the existing picture of the contact:
        img_data = p.imageData()
        img_data_str = string_at(img_data.bytes(), img_data.length())
        img = ui.Image.from_data(img_data_str)
        img.show()
    # Pick a new image from photos:
    new_img_data = photos.pick_image(raw_data=True)
    if new_img_data:
        # Note: objc_util automatically converts bytearray to NSData
        new_img_bytes = bytearray(new_img_data)
        # Create a mutable copy of the fetched contact...
        mutable_contact = p.mutableCopy().autorelease()
        # Assign new image data...
        mutable_contact.imageData = new_img_bytes
        # Create a save request for the contact, and execute it...
        save_req = CNSaveRequest.new().autorelease()
        save_req.updateContact_(mutable_contact)
        store.executeSaveRequest_error_(save_req, None)

if __name__ == '__main__':
    main()
- lukaskollmer | https://forum.omz-software.com/topic/2734/contacts-module-access-profile-picture/5 | CC-MAIN-2022-27 | en | refinedweb |
22. UV Sensor(EF05021)
22.1. Introduction
It is able to measure the total UV intensity of the sunlight.
22.2. Characteristic
Designed in RJ11 connections, easy to plug.
22.3. Specification
22.4. Outlook
22.5. Quick to Start
22.5.1. Materials Required and Diagram
Connect the UV sensor to J1 port and the OLED to the IIC port in the Nezha expansion board as the picture shows.
22.6. MakeCode Programming
22.6.1. Step 1
22.6.2. Step 2
22.6.3. Code as below:
22.6.4. Link
Link:
You may also download it directly below:
22.6.5. Result
The detected value from the UV sensor displays on the OLED screen.
22.7. Python Programming
22.7.1. Step 1
Download the package and unzip it: PlanetX_MicroPython
Go to Python editor
We need to add enum.py and uvlevel.py for programming. Click “Load/Save” and then click “Show Files (1)” to see more choices, click “Add file” to add enum.py and uvlevel.py from the unzipped package of PlanetX_MicroPython.
22.7.2. Step 2
22.7.3. Reference
from microbit import *
from enum import *
from uvlevel import *

uvlevel = UVLEVEL(J1)

while True:
    display.scroll(int(uvlevel.get_uvlevel()))
22.7.4. Result
The detected value from the UV sensor displays on the micro:bit. | https://www.elecfreaks.com/learn-en/microbitplanetX/Plant_X_EF05021.html | CC-MAIN-2022-27 | en | refinedweb |
Durability Testing of Stents Using Sensitivity-Based Methods in R
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
The current industry protocol for durability testing of vascular stents and frames involves testing many implants simultaneously at a range of different stimulus magnitudes (typically strain or stress). The test levels are spread out like a grid across the stimulus range of interest. Each implant is tested to failure or run-out at its specified level and a model is fit to the data using methods similar to those described in Meeker and Escobar.1 An example of a completed durability test of this type is shown below.2 Of interest to the designer is the distribution of stimulus levels that would be expected to cause a failure at a specified cycle count like 400 million.
This methodology works well if you have a large budget and is accepted by FDA. However, blanketing the parameter space with enough samples to fit a robust model can be very costly and requires enough lab equipment to do many tests in parallel. I recently learned that there is another way that has the potential to significantly reduce test burden and allows for some unique advantages. I will attempt to describe it below.
Sensitivity testing represents a different paradigm in which samples are tested sequentially rather than in parallel. In such testing, the outcome of the first test informs the stimulus level of the second test and so on. When an implant fails at a given stimulus, the next part is tested at a lower level. When it passes, the stimulus on the next is increased. In this way, the latent distribution of part strength is converged upon with each additional test providing improved precision.
The key questions that must be answered to execute a sequential sensitivity test:
- “How to specify the test levels for subsequent runs in a principled and efficient way?”
- “How to estimate the underlying failure strength distribution when the threshold for failure is not directly assessed on each run?”
Neyer was the first to present a D-Optimal answer to the first question.3 Implementation of the algorithm for determining sequential levels that minimize the test burden and converge on the true underlying strength distribution quickly is not trivial and historically has remained unavailable outside of pricey commercial software.4 Fortunately, generous and diligent researchers have recently created an open source tool in R called “gonogo” which can execute sequential testing according to the D-optimal Neyer algorithm (or its improved direct descendants).5
In the remainder of this post I will show how to use gonogo to determine the underlying strength distribution of hypothetical vascular implant and visualize uncertainty about the estimate.
- Load Libraries
- Accessing gonogo Functions
- Simulating A Sequential Sensitivity Experiment
- Confidence, Reliability, and Visualization
- Conclusion and Thanks
- Session Info
Load Libraries
library(tidyverse)
library(gt)
library(plotly)
library(ggrepel)
library(here)
library(gonogo)
library(ggalt)
Accessing gonogo Functions
The functions in gonogo can be accessed in 2 ways:
I will be using the code sourced from Jeff’s site as the functions are more readily editable.
source(here("gonogoRiley.R"))
Simulating A Sequential Sensitivity Experiment
There are some nice built-in simulation features in gonogo using the gonogoSim function. For our walkthrough though we’ll simulate the data manually to have better visibility of what’s happening under the hood.
Suppose we have a part budget of n=25 stents from manufacturing and our goal is to estimate the underlying distribution of stress amplitude that causes fatigue fracture. First, let’s simulate the failure strength of the 25 parts using a known distribution (which is only available in the simulation). We’ll assume failure stress amplitude for these parts is normally distributed with a mean of 600 MPa and standard deviation of 13 MPa.
Simulate the Strength of Each Part
set.seed(805)

stent_strength_tbl <- tibble(part_strength_mpa = rnorm(n = 25, mean = 600, sd = 13))

stent_strength_tbl %>% gt_preview()
See Test Levels and Input Test Results
The gonogo() function will initiate the test plan for sequential testing that prescribes test levels based on the outcome of each run. This operation happens in the console as shown in the screenshot below.
We’ll use the Neyer option by setting the “test” argument equal to 2. One key feature of sequential testing is that high and low initial guesses for the mean must be provided at the onset of testing along with an initial guess for the standard deviation. These guesses help set the scale. We’ll provide vague guesses that would be reasonable given our domain knowledge of cobalt-based alloys and their fatigue properties.
If we were really running the test on physical parts, we'd want to set up a test at the prescribed stimulus level, then report the result to the console by inputting 2 numbers: the level you actually ran the test at (should be the same as what was recommended) and the outcome (1 for failure or 0 for survival), separated by a single space. After the results from the first test are entered, the recommended level of the next test is presented. When all runs are complete, the results are stored in the assigned object.
# z <- gonogo(mlo = 500, mhi = 700, sg = 10, test = 2, reso = 1) #input results in console
The resulting object z is a list that has a lot of information. The following code will pull the run history and clean it up nicely with tools and formatting from gt package.
z$d0 %>%
  gt() %>%
  cols_align("center") %>%
  opt_row_striping(row_striping = TRUE) %>%
  tab_header(
    title = "Run History of Sensitivity Test of n=25 Stents",
    subtitle = "X and EX are Stimulus Levels, Y = Outcome (1 = part failed, 0 = part survived)"
  ) %>%
  cols_width(everything() ~ px(100)) %>%
  tab_style(
    style = list(
      cell_text(weight = "bold")
    ),
    locations = cells_column_labels(columns = everything())
  )
Gonogo has some really convenient built-in visualization tools. The run history can be plotted using gonogo’s ptest function with argument plt=1.
ptest(z, plt = 1)
## NULL
The values we really care about are the estimates of the mean and sd of the latent failure strength distribution. Numerical MLE’s for these parameters can be quickly extracted as follows:
z$musig

## [1] 600.383131   7.894673
We see that these are quite close to the true parameters of mean = 600 MPa, sd = 13 MPa! That really impressive for only 25 parts and vague initial guesses.
I like to occasionally look under the hood and see how the outputs are generated. I don’t always understand it but it helps me learn. In this case, the calculations to extract an estimate are pretty simple. gonogo is using a glm with probit link function. The results are replicated here with manual calculations.
xglm <- glm(Y ~ X, family = binomial(link = probit), data = z$d0)
ab <- as.vector(xglm$coef)

muhat <- -ab[1] / ab[2]
sighat <- 1 / ab[2]

tibble(
  true_parameter = c(600, 13),
  gonogo_estimate = c(muhat, sighat)
) %>%
  mutate(gonogo_estimate = gonogo_estimate %>% round(1)) %>%
  gt()
Confidence, Reliability, and Visualization
We just identified the MLE’s for the latent strength distribution. But anytime we’re working with medical devices we have to be conservative and respect the uncertainty in our methods. We don’t want to use the mean of the distribution to describe a design. Instead, we identify a lower quantile of interest and then put a confidence interval on that to be extra conservative.6 gonogo has some great tools and functions for helping us achieve these goals. I prefer to use the tidyverse framework for my data analysis and visualization so I combine the built in functions from gonogo with some helper functions that I wrote to keep things tidy. I present them here with some description of what they do but not much explanation of how they work. Sorry.
I will note however that gonogo provides options for obtaining multiple forms of confidence intervals: Fisher Matrix, GLM, and Likelihood-Based. For additional information on how it calculates each type, refer to THIS DOCUMENTATION. In the tests I ran, GLM and LR based CIs performed almost the same; FM was different. I use GLM below for my purposes.
In the following work, I assume we care about the 95% confidence bound on the .10 quantile. gonogo calculates all intervals as 2-sided, so to get the 95% confidence bound (1-sided) we request the 90% 2-sided limits.
Extract a tibble of 2-sided, pointwise confidence limits about all quantiles
This first function takes the output from the gonogo function and calculates 2-sided .90 confidence bounds on the various quantiles. The lims() function is from gonogo.
# helper function
conf_limits_fcn <- function(tbl, conf = .9) {
  conf_limits_tbl_full <- lims(ctyp = 2, dat = tbl, conf = conf, P = seq(from = .01, to = .99, by = .01)) %>%
    as_tibble() %>%
    rename(
      ciq_lower = V1,
      estimate_q = V2,
      ciq_upper = V3,
      ip_lower = V4,
      estimate_p = V5,
      cip_upper = V6
    )
}

rileysim_conf_tbl <- conf_limits_fcn(z$d0)

rileysim_conf_tbl %>% gt_preview()
Extract a point estimate for the lower 2-sided confidence limit for the .10 quantile.
We present results to FDA using the terminology “confidence and reliability”. By extracting the lower end of the 2-sided 90% confidence band on the .10 quantile, we obtain an estimate of the 95/90 tolerance bound i.e. a bound above which we would expect 90% or more of the population to lie, with 95% confidence. This would be a conservative estimate of the endurance limit for this stent design.
lower_conf_fcn <- function(tbl, percentile) {
  lower_scalar <- tbl %>%
    mutate(qprobs = qnorm(estimate_p)) %>%
    arrange(qprobs) %>%
    filter(estimate_p == percentile) %>%
    mutate(ciq_lower = ciq_lower %>% round(1)) %>%
    pluck(1)

  lower_qnorm <- tbl %>%
    mutate(qprobs = qnorm(estimate_p)) %>%
    arrange(qprobs) %>%
    filter(estimate_p == percentile) %>%
    pluck(7)

  lower_tbl <- tibble(
    estimate_q = lower_scalar,
    estimate_p = percentile,
    qprobs = lower_qnorm
  )

  return(lower_tbl)
}

lower_conf_fcn(rileysim_conf_tbl, percentile = .1) %>% pluck(1)

## [1] 580.2
Plot CDF
Now some plotting functions to help visualize things. First, we use our confidence limit dataframe from above to construct a cdf of failure strength. With some clever use of geom_label_repel() and str_glue() we can embed our data right onto the plot for easy communication. The estimated endurance limit from our test sequence is 580 MPa.
plot_cdf_fcn <- function(tbl, percentile = .1, units = "MPa") {
  tbl %>%
    ggplot(aes(x = ciq_lower, y = estimate_p)) +
    geom_line(size = 1.3) +
    geom_line(aes(x = ciq_upper, y = estimate_p), size = 1.3) +
    geom_line(aes(x = estimate_q, y = estimate_p), linetype = 2) +
    geom_ribbon(aes(xmin = ciq_lower, xmax = ciq_upper), alpha = .1, fill = "purple") +
    geom_point(data = lower_conf_fcn(tbl, percentile = percentile), aes(estimate_q, estimate_p), color = "purple", size = 2) +
    geom_label_repel(
      data = lower_conf_fcn(tbl, percentile = percentile),
      aes(estimate_q, estimate_p),
      label = str_glue("1-sided, 95% Confidence Bound:\n{lower_conf_fcn(rileysim_conf_tbl, percentile = .1) %>% pluck(1)} {units}"),
      fill = "#8bd646ff",
      color = "black",
      segment.color = "black",
      segment.size = .5,
      nudge_y = .13,
      nudge_x = -25
    ) +
    geom_segment(data = tbl %>% filter(estimate_p == percentile), aes(x = ciq_lower, xend = ciq_upper, y = estimate_p, yend = estimate_p), linetype = 3) +
    theme_bw() +
    labs(
      x = "Stress (MPa)",
      y = "Percentile",
      caption = "Dotted line is estimate of failure strength distribution\nConfidence bands are 2-sided 90% (i.e. 1-sided 95 for lower)",
      title = "Sensitivity Testing of Stent Durability",
      subtitle = str_glue("placeholder")
    )
}

a <- plot_cdf_fcn(tbl = rileysim_conf_tbl, percentile = .1)

a + labs(subtitle = str_glue("Estimates: Mean = {muhat %>% round(1)} MPa, Sigma = {sighat %>% round(1)} MPa"))
Normal Probability Plot
Engineers in my industry have an unhealthy obsession to normal probability plots. Personally I don’t like them, but Minitab presents them by default and engineers in my industry will look for data in this format. As part of this project I taught myself how to convert distributional data from a CFD into a normal probability plot. I show the helper function here and the result.
norm_probability_plt_fcn <- function(tbl, percentile, units = "MPa") {
  conf_tbl <- tbl %>%
    mutate(qprobs = qnorm(estimate_p)) %>%
    arrange(qprobs)

  probs_of_interest <- c(0.01, 0.05, seq(0.1, 0.9, by = 0.1), 0.95, 0.99)

  probs_of_interest_tbl <- conf_tbl %>%
    filter(estimate_p %in% probs_of_interest)

  conf_tbl %>%
    ggplot() +
    geom_line(aes(x = ciq_lower, y = qprobs), size = 1.3) +
    geom_line(aes(x = ciq_upper, y = qprobs), size = 1.3) +
    geom_line(aes(x = estimate_q, y = qprobs), linetype = 2) +
    geom_ribbon(aes(xmin = ciq_lower, xmax = ciq_upper, y = qprobs), alpha = .1, fill = "purple") +
    geom_point(data = lower_conf_fcn(tbl, percentile = percentile), aes(estimate_q, qprobs), color = "purple", size = 2) +
    geom_label_repel(
      data = lower_conf_fcn(tbl, percentile = percentile),
      aes(estimate_q, qprobs),
      label = str_glue("1-sided, 95% Confidence Bound:\n{lower_conf_fcn(rileysim_conf_tbl, percentile = .1) %>% pluck(1)} {units}"),
      fill = "#8bd646ff",
      color = "black",
      segment.color = "black",
      segment.size = .5,
      nudge_y = 1,
      nudge_x = -20
    ) +
    geom_segment(data = conf_tbl %>% filter(estimate_p == percentile), aes(x = ciq_lower, xend = ciq_upper, y = qprobs, yend = qprobs), linetype = 3) +
    scale_y_continuous(limits = range(conf_tbl$qprobs), breaks = probs_of_interest_tbl$qprobs, labels = 100 * probs_of_interest_tbl$estimate_p) +
    theme_bw() +
    labs(
      x = "Stress (MPa)",
      y = "Percentile",
      caption = "Dotted line is estimate of failure strength distribution\nConfidence bands are 2-sided 90%",
      title = "Sensitivity Testing of Stent Durability",
      subtitle = str_glue("TBD_2")
    )
}

b <- norm_probability_plt_fcn(rileysim_conf_tbl, percentile = .1)

b + labs(subtitle = str_glue("Estimates: Mean = {muhat %>% round(1)} MPa, Sigma = {sighat %>% round(1)} MPa"))
Conclusion and Thanks
There you have it! A way to determine failure strength distributions using principled, sequential testing techniques guided by gonogo. If you’ve made it this far I really appreciate your attention. I would like to thank Paul Roediger for producing this amazing toolkit and for graciously offering your time to help me troubleshoot and work through some challenges on this project earlier this year. I’ll always be grateful to the open source community for making these things possible.
Session Info
sessionInfo()

## R version 4.0.3 (2020-10-10)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 10 x64 (build 18363)
##
## [1] ggalt_0.4.0     gonogo_0.1.0    here_1.0.0      ggrepel_0.8.2
## [5] plotly_4.9.2.1  gt_0.2.2        forcats_0.5.0   stringr_1.4.0
## [9] dplyr_1.0.7     purrr_0.3.4     readr_1.4.0     tidyr_1.1.3
## [13] tibble_3.1.4    ggplot2_3.3.5   tidyverse_1.3.0
##
## loaded via a namespace (and not attached):
## [1] fs_1.5.0           lubridate_1.7.9.2  ash_1.0-15         RColorBrewer_1.1-2
## [5] httr_1.4.2         rprojroot_2.0.2    tools_4.0.3        backports_1.2.0
## [9] bslib_0.3.0        utf8_1.2.2         R6_2.5.1           KernSmooth_2.23-18
## [13] DBI_1.1.0          lazyeval_0.2.2     colorspace_2.0-2   withr_2.4.2
## [17] tidyselect_1.1.1   compiler_4.0.3     extrafontdb_1.0    cli_3.0.1
## [21] rvest_0.3.6        xml2_1.3.2         labeling_0.4.2     bookdown_0.21
## [25] sass_0.4.0         scales_1.1.1       checkmate_2.0.0    proj4_1.0-10
## [29] digest_0.6.28      rmarkdown_2.11     pkgconfig_2.0.3    htmltools_0.5.2
## [33] extrafont_0.17     dbplyr_2.0.0       fastmap_1.1.0      highr_0.9
## [37] maps_3.3.0         htmlwidgets_1.5.4  rlang_0.4.11       readxl_1.3.1
## [41] rstudioapi_0.13    jquerylib_0.1.4    generics_0.1.0     farver_2.1.0
## [45] jsonlite_1.7.2     magrittr_2.0.1     Rcpp_1.0.7         munsell_0.5.0
## [49] fansi_0.5.0        lifecycle_1.0.1    stringi_1.7.4      yaml_2.2.1
## [53] MASS_7.3-53        grid_4.0.3         crayon_1.4.1       haven_2.3.1
## [57] hms_0.5.3          knitr_1.34         pillar_1.6.2       reprex_0.3.0
## [61] glue_1.4.2         evaluate_0.14      blogdown_0.15      data.table_1.14.0
## [65] modelr_0.1.8       vctrs_0.3.8        Rttf2pt1_1.3.8     cellranger_1.1.0
## [69] gtable_0.3.0       assertthat_0.2.1   xfun_0.26          broom_0.7.8
## [73] viridisLite_0.4.0
Neyer provided commercial software called Sentest to facilitate practitioners who wanted to use D-optimal methods without developing their own software.
Since we only care about a 1-sided confidence interval, the 1-sided confidence bound on a specified quantile becomes analogous to a tolerance interval.
What
Think of a situation where you get an exception and want to print a custom message in your logs, so that the whole team can understand it.
There can also be situations where you want to simply swallow the exception and let your test carry on with the rest of the execution.
How to handle exceptions: the syntax for multiple catch blocks looks like the following:
try{
    //Some code
}catch(ExceptionType1 e1){
    //Code for Handling the Exception 1
}catch(ExceptionType2 e2){
    //Code for Handling the Exception 2
}
Throw/Throws: a method can catch an exception, handle it, and then re-throw it to the calling code; the throws keyword in the method signature declares this.
// Method Signature
public static void anyFunction() throws Exception{
    try{
        // write your code here
    }catch (Exception e){
        // Do whatever you wish to do here
        // Now throw the exception back to the system
        throw(e);
    }
}
Multiple Exceptions: Multiple exceptions can be handled in the throws clause.
public static void anyFunction() throws ExceptionType1, ExceptionType2{
    try{
        //Some code
    }catch(ExceptionType1 e1){
        //Code for Handling the Exception 1
    }catch(ExceptionType2 e2){
        //Code for Handling the Exception 2
    }
}
Finally: The finally keyword is used to create a block of code that follows a try block. A finally block of code always executes, whether or not an exception has occurred.
try{
    //Protected code
}catch(ExceptionType1 e1){
    //Catch block
}catch(ExceptionType2 e2){
    //Catch block
}catch(ExceptionType3 e3){
    //Catch block
}finally{
    //The finally block always executes.
}
I have declared a 'public static boolean bResult;' variable in my main driver script and I set its value to 'true'. The idea behind it is to change the value from 'true' to 'false' at the time of any exception. The exception will be caught with the try catch block and the statement to change the value from 'true' to 'false' will reside in that block. For example:
public static void openBrowser(String object){
    try{
        Log.info("Opening Browser");
        driver = new FirefoxDriver();
    }catch(Exception e){
        Log.info("Not able to open Browser --- " + e.getMessage());
        DriverScript.bResult = false;
    }
}
It means the bResult value will be set to 'false' only in case of an exception; otherwise it will always remain 'true'.
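For illustration, this is a minimal sketch of how the main driver loop could consult the flag after each keyword and stop the current test case; the method and constant names here (execute_Actions, Col_ActionKeyword, Sheet_TestSteps) are hypothetical and not part of the listings in this article:
// Hypothetical driver-side check; assumes the DriverScript.bResult flag shown above
private static void execute_TestCase(int iTotalTestSteps) throws Exception {
    for (int iTestStep = 1; iTestStep < iTotalTestSteps; iTestStep++) {
        String sActionKeyword = ExcelUtils.getCellData(iTestStep, Constants.Col_ActionKeyword, Constants.Sheet_TestSteps);
        execute_Actions(sActionKeyword);
        if (!DriverScript.bResult) {
            Log.error("Test step failed: " + sActionKeyword);
            break;  // stop executing the remaining steps of this test case
        }
    }
}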
package config; import java.util.concurrent.TimeUnit; import static executionEngine.DriverScript.OR; import org.openqa.selenium.By; import org.openqa.selenium.WebDriver; import org.openqa.selenium.firefox.FirefoxDriver; import executionEngine.DriverScript; import utility.Log; public class ActionKeywords { public static WebDriver driver; public static void openBrowser(String object){ try{ Log.info("Opening Browser"); driver=new FirefoxDriver(); //This block will execute only in case of an exception }catch(Exception e){ //This is to print the logs - Method Name & Error description/stack Log.info("Not able to open Browser --- " + e.getMessage()); //Set the value of result variable to false DriverScript.bResult = false; } } public static void navigate(String object){ try{ Log.info("Navigating to URL"); driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS); driver.get(Constants.URL); }catch(Exception e){ Log.info("Not able to navigate --- " + e.getMessage()); DriverScript.bResult = false; } } public static void click(String object){ try{ Log.info("Clicking on Webelement "+ object); driver.findElement(By.xpath(OR.getProperty(object))).click(); }catch(Exception e){ Log.error("Not able to click --- " + e.getMessage()); DriverScript.bResult = false; } } public static void input_UserName(String object){ try{ Log.info("Entering the text in UserName"); driver.findElement(By.xpath(OR.getProperty(object))).sendKeys(Constants.UserName); }catch(Exception e){ Log.error("Not able to Enter UserName --- " + e.getMessage()); DriverScript.bResult = false; } } public static void input_Password(String object){ try{ Log.info("Entering the text in Password"); driver.findElement(By.xpath(OR.getProperty(object))).sendKeys(Constants.Password); }catch(Exception e){ Log.error("Not able to Enter Password --- " + e.getMessage()); DriverScript.bResult = false; } } public static void waitFor(String object) throws Exception{ try{ Log.info("Wait for 5 seconds"); Thread.sleep(5000); }catch(Exception e){ Log.error("Not able to Wait --- " + e.getMessage()); DriverScript.bResult = false; } } public static void closeBrowser(String object){ try{ Log.info("Closing the browser"); driver.quit(); }catch(Exception e){ Log.error("Not able to Close the Browser --- " + e.getMessage()); DriverScript.bResult = false; } } }
Excel Utils Class:
package utility; import java.io.FileInputStream; import java.io.FileOutputStream; import org.apache.poi.xssf.usermodel.XSSFSheet; import org.apache.poi.xssf.usermodel.XSSFWorkbook; import org.apache.poi.xssf.usermodel.XSSFRow; import config.Constants; import executionEngine.DriverScript; public class ExcelUtils { private static XSSFSheet ExcelWSheet; private static XSSFWorkbook ExcelWBook; private static org.apache.poi.ss.usermodel.Cell Cell; private static XSSFRow Row; public static void setExcelFile(String Path) throws Exception { try { FileInputStream ExcelFile = new FileInputStream(Path); ExcelWBook = new XSSFWorkbook(ExcelFile); } catch (Exception e){ Log.error("Class Utils | Method setExcelFile | Exception desc : "+e.getMessage()); DriverScript.bResult = false; } } public static String getCellData(int RowNum, int ColNum, String SheetName ) throws Exception{ try{ ExcelWSheet = ExcelWBook.getSheet(SheetName); Cell = ExcelWSheet.getRow(RowNum).getCell(ColNum); String CellData = Cell.getStringCellValue(); return CellData; }catch (Exception e){ Log.error("Class Utils | Method getCellData | Exception desc : "+e.getMessage()); DriverScript.bResult = false; return""; } } public static int getRowCount(String SheetName){ int iNumber=0; try { ExcelWSheet = ExcelWBook.getSheet(SheetName); iNumber=ExcelWSheet.getLastRowNum()+1; } catch (Exception e){ Log.error("Class Utils | Method getRowCount | Exception desc : "+e.getMessage()); DriverScript.bResult = false; } return iNumber; } public static int getRowContains(String sTestCaseName, int colNum,String SheetName) throws Exception{ int iRowNum=0; try { //ExcelWSheet = ExcelWBook.getSheet(SheetName); int rowCount = ExcelUtils.getRowCount(SheetName); for (; iRowNum<rowCount; iRowNum++){ if (ExcelUtils.getCellData(iRowNum,colNum,SheetName).equalsIgnoreCase(sTestCaseName)){ break; } } } catch (Exception e){ Log.error("Class Utils | Method getRowContains | Exception desc : "+e.getMessage()); DriverScript.bResult = false; } return iRowNum; } public static int getTestStepsCount(String SheetName, String sTestCaseID, int iTestCaseStart) throws Exception{ try { for(int i=iTestCaseStart;i<=ExcelUtils.getRowCount(SheetName);i++){ if(!sTestCaseID.equals(ExcelUtils.getCellData(i, Constants.Col_TestCaseID, SheetName))){ int number = i; return number; } } ExcelWSheet = ExcelWBook.getSheet(SheetName); int number=ExcelWSheet.getLastRowNum()+1; return number; } catch (Exception e){ Log.error("Class Utils | Method getRowContains | Exception desc : "+e.getMessage()); DriverScript.bResult = false; return 0; } } }
This is all for exception handling in the framework. In the next chapter, on Test Result Reporting, we will learn how to report the test case as failed when the value of bResult is 'false'.
ST_Distance
ST_Distance takes two geometry columns and returns a double column. The output column represents the planar distance between the two input geometries. For multipoints, lines, and polygons, the distance is calculated from the nearest point between the geometries. The result will be in the same units as the input geometry data. For example, if your input geometries are in a spatial reference that uses meters, the result values will be in meters.
If the two geometry columns are in different spatial references, the function will automatically transform the second geometry into the spatial reference of the first.
If your input geometries are in a geographic coordinate system, use ST_GeodesicDistance to calculate distance.
For more details, go to the GeoAnalytics Engine API reference for distance.
This function implements the OpenGIS Simple Features Implementation Specification for SQL 1.2.1.
Examples
from geoanalytics.sql import functions as ST, Point

data = [
    (Point(-176, -15), Point(-176, -15)),
    (Point(-176, -15), Point(-176, -20)),
    (Point(-176, -15), Point(-175, -15))
]

df = spark.createDataFrame(data, ["point1", "point2"])

df.select(ST.distance("point1", "point2").alias("distance")).show()
+--------+
|distance|
+--------+
|     0.0|
|     5.0|
|     1.0|
+--------+
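Because ST_Distance returns an ordinary double column, you can use it anywhere a numeric expression is allowed. As a small follow-up sketch building on the DataFrame created above, you could keep only the rows whose points are less than 2 units apart:
# Filter to point pairs that are within 2 units of each other
near_df = df.where(ST.distance("point1", "point2") < 2)
near_df.show()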
Spark view engine is a neat little web utility that allows you to write code like this:
<viewdata products="IEnumerable[[Product]]"/> <ul if="products.Any()"> <li each="var p in products">${p.Name}</li> </ul> <else> <p>No products available</p> </else>
If your eye skipped over it because it looks just like html I wouldn’t blame you. But look again. Loops and conditions are nicely embedded in html. Spark really does let the html dominate (which can be a good thing). These are but a few of all the nice features. If you have them, spend a couple of minutes browsing the spark documentation to learn what’s more.
To learn a bit about the inner workings of this wonderful tool I set out to add WebForms support. The spark codebase has a very testable design and can be coerced to help out in a WebForms control without modification. The end result looks like this:
<spark:View <ul if="Container.GetProducts().Any()"> <li each="var p in Container.GetProducts()">${p.Name}</li> </ul> <else> <p>No products available</p> </else> </spark:View>
To use it download
and unzip into your bin folder.
Register the control in web.config:
<pages>
  <controls>
    <add tagPrefix="spark" namespace="Spark.Web.Forms" assembly="Spark.Web.Forms"/>
  </controls>
</pages>
Add something useful to iterate over to your page or user control:
public string[] Values; protected void Page_Load(object sender, EventArgs e) { Values = new string[] { "Simple", "As", "Pancake" }; }
And add the spark control:
<spark:View <ul> <li each="var c in Page.Values">${c}</li> </ul> </spark:View>
Some features are definitely missing (and they might require modification of the spark code itself):
Its good to see the new rendering engines coming out. This looks promising but so does Razor - who will win :)!
Is it possible to use Razor with WebForms?
Awesomesauce Cristian!
@Magnus On Scot Gu's blog he says to "watch this space" when talking about Razor with classic web forms. Not sure when that will be though!
Thanks David! Maybe I'll go with Spark for now then, it is indeed nice to be able to do loops etc, and to do it without butt ugly syntax.
@Magnus, is there a particular feature from Razor that you're missing in sparks?
@Stefan, no just curious :)
For one client, we actually allowed them to have Spark code in an EPIServer XHTML property. When the page rendered, a Page Adapter executed the Spark markup, so they could alter the HTML output down to the single word or sentence based on external criteria.
Awesome work! I would really love it if you would continue improving this as I think this could be extremely useful, especially combined with MVP.
Dependency Injection for Quartz.NET in .NET Core
Bohdan Stupak
The post was originally published at codeproject
Introduction
Quartz.NET is a handy library that allows you to schedule recurring tasks by implementing the IJob interface. The limitation is that, by default, it supports only a parameterless job constructor, which complicates injecting external services into the job, e.g. for implementing the repository pattern. In this article, we'll take a look at how we can tackle this problem using the standard .NET Core DI container.
The whole project referred in the article is provided inside the following Github repository. In order to better follow the code in the article, you might want to take a look at it.
Project Overview
Let's take a look at the initial solution structure.
The project
QuartzDI.Demo.External.DemoService represents some external dependency we have no control of. For the sake of simplicity, it does quite a humble job.
The project
QuartzDI.Demo is our working project which contains simple Quartz.NET job.
public class DemoJob : IJob { private const string Url = ""; public static IDemoService DemoService { get; set; } public Task Execute(IJobExecutionContext context) { DemoService.DoTask(Url); return Task.CompletedTask; } }
which is set up in a straightforward way:
var props = new NameValueCollection { { "quartz.serializer.type", "binary" } }; var factory = new StdSchedulerFactory(props); var sched = await factory.GetScheduler(); await sched.Start(); var job = JobBuilder.Create<DemoJob>() .WithIdentity("myJob", "group1") .Build(); var trigger = TriggerBuilder.Create() .WithIdentity("myTrigger", "group1") .StartNow() .WithSimpleSchedule(x => x .WithIntervalInSeconds(5) .RepeatForever()) .Build(); await sched.ScheduleJob(job, trigger);
We provide our external service via job's
static property
DemoJob.DemoService = new DemoService();
As the project is a console application, during the course of the article, we'll have to manually install all needed infrastructure and will be able to build more thorough understanding what actually .NET Core brings to our table.
At this point, our project is up and running. And what is most important it is dead simple which is great. But we pay for that simplicity with a cost of application inflexibility which is fine if we want to leave it as a small tool. But that's often not a case for production systems. So let's tweak it a bit to make it more flexible.
Creating a Configuration File
One of the inflexibilities is that we hard-code URL we call into a
DemoJob. Ideally, we would like to change it and also change it depending on our environment. .NET Core comes with appsettings.json mechanism for that matter.
In order to start working with .NET Core configuration mechanism, we have to install a couple of Nuget packages:
Microsoft.Extensions.Configuration Microsoft.Extensions.Configuration.FileExtensions Microsoft.Extensions.Configuration.Json
Let's create a file with such name and extract our URL there:
{ "connection": { "Url": "" } }
Now we can extract our value from the config file as follows:
var builder = new ConfigurationBuilder() .SetBasePath(Directory.GetCurrentDirectory()) .AddJsonFile("appsettings.json", true, true); var configuration = builder.Build(); var connectionSection = configuration.GetSection("connection"); DemoJob.Url = connectionSection["Url"];
Note that to make it happen, we had to change Url from constant to property.
public static string Url { get; set; }
Using Constructor Injection
Injecting a service via a static property is fine for a simple project, but for a bigger one it carries several disadvantages: the job might be invoked before the service has been provided (and thus fail), and the dependency can be changed during the object's lifetime, which makes the job harder to reason about. To address these issues, we should employ constructor injection.
Although there is nothing wrong with Pure Dependency Injection and some people argue that you should strive for it in this article, we'll use built-in .NET Core DI container which comes with a Nuget package
Microsoft.Extensions.DependencyInjection.
Now we specify service we depend on inside constructor arguments:
private readonly IDemoService _demoService; public DemoJob(IDemoService demoService) { _demoService = demoService; }
In order to invoke a parameterful constructor of the job, Quartz.NET provides IJobFactory interface. Here's our implementation:
public class DemoJobFactory : IJobFactory { private readonly IServiceProvider _serviceProvider; public DemoJobFactory(IServiceProvider serviceProvider) { _serviceProvider = serviceProvider; } public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler) { return _serviceProvider.GetService<DemoJob>(); } public void ReturnJob(IJob job) { var disposable = job as IDisposable; disposable?.Dispose(); } }
Let's register our dependencies:
var serviceCollection = new ServiceCollection(); serviceCollection.AddScoped<DemoJob>(); serviceCollection.AddScoped<IDemoService, DemoService>(); var serviceProvider = serviceCollection.BuildServiceProvider();
The final piece of the puzzle is to make Quartz.NET use our factory. IScheduler exposes a JobFactory property for exactly this purpose.
sched.JobFactory = new DemoJobFactory(serviceProvider);
Using Options Pattern
Now we can pull the same trick with configuration options. Again, our routine starts with a Nuget package. This time
Microsoft.Extensions.Options.
Let's create a strongly typed definition for configuration options:
public class DemoJobOptions { public string Url { get; set; } }
Now we populate them as follows:
serviceCollection.AddOptions(); serviceCollection.Configure<DemoJobOptions>(options => { options.Url = connectionSection["Url"]; });
And inject them into the constructor. Note that we inject IOptions, not the options instance directly.
public DemoJob(IDemoService demoService, IOptions<DemoJobOptions> options) { _demoService = demoService; _options = options.Value; }
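To complete the picture, the job's Execute method can now combine the injected service with the strongly typed options. This is a minimal sketch consistent with the earlier DemoJob; the exact body depends on your service:
public class DemoJob : IJob
{
    private readonly IDemoService _demoService;
    private readonly DemoJobOptions _options;

    public DemoJob(IDemoService demoService, IOptions<DemoJobOptions> options)
    {
        _demoService = demoService;
        _options = options.Value;
    }

    public Task Execute(IJobExecutionContext context)
    {
        // Use the configured URL instead of a hard-coded constant
        _demoService.DoTask(_options.Url);
        return Task.CompletedTask;
    }
}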
Conclusion
In this article, we've seen how we can leverage .NET Core functionality to make our use of Quartz.NET more flexible.
Great tutorial. This actually gives me a much better understanding of all of the magic going on when configuring an Asp.NetCore Startup class.
Hi Jeremy :)
I'm glad it helped.
In the last blog post, I wrote about the challenges of writing an integration test for a Spring command line application. One of the solutions for this issue discussed in the blog post was to use the
@IntegrationTest annotations to inject Java system properties and use that to run the application instead of the normal command line arguments. This blog post describes how to perform this.
The first step is to rewrite our test to use the
@IntegrationTest annotations. This will result in a test that looks as follows:
@RunWith(SpringJUnit4ClassRunner.class) @SpringApplicationConfiguration(classes = {Application.class}) @IntegrationTest(value = "input:expectedOutput") public class ApplicationIntegrationTest { @Autowired private Application application; @Rule public OutputCapture outputCapture = new OutputCapture(); @Test public void shouldGenerateResultFiles() throws Exception { application.run(); assertTrue(outputCapture.toString().contains("expectedOutput")); } }
At this point, it is worth taking a look at what the
@IntegrationTest annotation causes Spring to do. This is a meta-annotation 1 that specifies a number of test listeners including
IntegrationTestPropertiesListener.
@Documented @Inherited @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.TYPE) @TestExecutionListeners(listeners = { IntegrationTestProperties @interface IntegrationTest { String[] value() default {}; }
IntegrationTestPropertiesListener is an implementation of
TestExecutionListener which is a mechanism used by Spring to react to test execution events like
beforeTestClass,
prepareTestInstance,
beforeTestMethod,
beforeTestMethod and
afterTestClass.
For the purposes of what we are trying to achieve, the listener we are really interested in is the
beforeTestClass of
IntegrationTestPropertiesListener.
@Override public void prepareTestInstance(TestContext testContext) throws Exception { Class<?> testClass = testContext.getTestClass(); AnnotationAttributes annotationAttributes = AnnotatedElementUtils .getMergedAnnotationAttributes(testClass, IntegrationTest.class.getName()); if (annotationAttributes != null) { addPropertySourceProperties(testContext, annotationAttributes.getStringArray("value")); } }
As we can see, the value of the
value element 2 of our
@IntegrationTest gets injected to the configuration of the test context.
Now that we have understood and used the
@IntegrationTest annotation to push in configuration, it is time to make our application consume this configuration.
@Configuration @EnableAutoConfiguration @ComponentScan public class Application implements CommandLineRunner { @Autowired DataService dataService; @Value("${input}") String input; public static void main(String[] args) { SpringApplication.run(Application.class, args); } @Override public void run(String... args) { dataService.perform(input); } }
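With the named property in place, the value can be supplied as a standard Spring Boot style command line argument when launching the application; for example (the jar name here is illustrative):
java -jar application.jar --input=expectedOutput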
An added benefit of this approach is that it forces us to use named parameter arguments when running the application, as opposed to position based command line arguments. This of course does not solve the problem of testing if you absolutely have to use position based arguments for your application. It would be nice if Spring provided a mechanism to inject the command line arguments in a test before SpringApplicationContextLoader 3 took over. But I suspect that this is not a common enough use case of Spring that users have asked the Spring team to implement it.
- Meta annotations are Spring annotations that can modify and act up on other annotations. For an example of customizing behavior using meta annotations, see this blog post. [return]
- The tendency of programmers to name the default element of an annotation
valueis one of my least favorite aspects of Java annotations. In most cases, there is another name that conveys the intent of the element better. I plan to write my thoughts about this in a blog post soon. [return]
- See the previous blog post to see how
SpringApplicationContextLoader executes the application without arguments. [return]
In this article I will show some ways how to create user controls with OnChanged event . The examples are written using C# for Windows applications.
We often need to know whether there were any changes on our form or not. And if there were such changes, some process is carried out: just a message, some property has to be changed to "false", etc. There are no problems if we use "standard" controls (controls from the Toolbox: TextBox, ComboBox, etc), because every such control has an OnChanged event (I mean TextChanged, SelectedIndexChanged, etc.) and we can catch every change. But... a more interesting situation can arise.
For example: we use UserControl1 that contains four TextBoxes, three ComboBoxes and UserControl2 that, in its turn, contains two TextBoxes and UserControl3 that, in its turn..., and so on; we have to catch any change on our form and when the number of changes of the UserControl2 will be equaled 2 we should "switch off" (enabled = false) this control.... In this case we have to "supply" the controls with event OnChanged.
First of all let's divide our OnChanged events(which we are going to create) into two big parts: the events that have no event data and events that have.
The first case: The events that have no event data.
In this case in the code page of your user control you have to declare a delegate for OnChanged event and an event itself :
public delegate void Changed_EventHandler(object sender,EventArgs e);
public event Changed_EventHandler C_Changed;
But .NET has already done part of this work: there is a built-in delegate for events that carry no event data, the EventHandler delegate. Using it, you can replace (if you want) the two lines of the above code with one:
public event EventHandler C_Changed;
Now you can add the next code
if (C_Changed != null) {C_Changed(this, e);}
to every OnChanged event of your control (again, saying "OnChanged event" I mean TextChanged, SelectedIndexChanged, etc.) and to force the control "to notice" each change occurring on it.
But a more elegant way is to use a method, named OnC_Changed, that raises the event in every case you need. With this approach you can easily call some method doOnChanged that executes the process logic for the C_Changed event:
protected virtual void OnC_Changed ( EventArgs e )
{
if (C_Changed != null)
//Invokes the delegates.
C_Changed(this, e);
doOnChanged();
}
And now let's pass to practice and create some test project that has to have a Windows form with a user control "UC_TreeText". The last one consists of : textbox "textBox_C1", textbox "textbox _C2" with ReadOnly property set to "true" (just to see total of changes of the UC_TreeTex control), user control "UC_TwoText". In its turn the UC_TwoText control consists of : textbox "textBox_C1" , textbox "textBox_C2", textbox "textBox_C3" with ReadOnly property set to "true" (just to see total of changes of the UC_TwoText control).
The process logic at changes of the UC_TwoText control is the following: at each change of the control color of the textBox_C3 control should vary and we should see total of changes of the UC_TwoText control. The process logic at changes of the UC_TreeText control is the following: at each change of the control we should see total of changes of the UC_TreeText control.
Our solution (named "WinProjects_Test") includes (by analogy) two projects: Output Type of the first project is "Windows Application" (named "OnChanged_UC") and Output Type of the second is "Class Library" (named "OnChanged_UC_Controls"). The second project includes two User Controls: "UC_TreeText" and "UC_TwoText". The first project includes only a Windows form: "Form1".
The sequence of creation of our projects is the following.
After creating the solution with two projects add to the UC_TwoText control three textbox controls (with the names specified above). Double-click textBox_C1 and textBox_C2 to add TextChanged events and then add to the code page of the UC_TwoText control the next code:
#region "forEventChanged"
public event EventHandler C_Changed;
private int NumChanges = 0; //the number of changes

// The protected OnC_Changed method raises the event by invoking
// the delegates. In this case the sender is always this.
protected virtual void OnC_Changed(EventArgs e)
{
    if (C_Changed != null)
    {
        C_Changed(this, e);
    }
    doOnChanged();
}

//The method doOnChanged executes the process logic
//for "C_Changed" event.
private void doOnChanged()
{
    //increase value of NumChanges by one
    ++NumChanges;
    //changes BackColor to another color
    if (textBox_C3.BackColor != Color.Yellow)
    {
        textBox_C3.BackColor = Color.Yellow;
    }
    else
    {
        textBox_C3.BackColor = Color.LightGreen;
    }
    //write down and show the number of changes
    textBox_C3.Text = NumChanges.ToString();
}
#endregion

private void textBox_C1_TextChanged(object sender, System.EventArgs e)
{
    OnC_Changed(e);
}

private void textBox_C2_TextChanged(object sender, System.EventArgs e)
{
    OnC_Changed(e);
}
Now add to the UC_TreeText control two textbox controls (with the names specified above) and the UC_TwoText control. Double-click textBox_C1; then right-click on the UC_TwoText and select Properties, click "Events" button, double-click on "C_Changed". Now the UC_TreeText.cs code page are ready for adding a few lines of the following code:
#region "forEventChanged"
public event EventHandler C_Changed;
private int NumChanges = 0; //the number of changes

protected virtual void OnC_Changed(EventArgs e)
{
    if (C_Changed != null)
    {
        //Invokes the delegates.
        C_Changed(this, e);
    }
    doOnChanged();
}

//The method doOnChanged executes the process logic for the C_Changed event.
private void doOnChanged()
{
    //increase value of NumChanges by one
    ++NumChanges;
    //write down and show the number of changes
    textBox_C2.Text = NumChanges.ToString();
}
#endregion

private void textBox_C1_TextChanged(object sender, System.EventArgs e)
{
    OnC_Changed(e);
}

private void uC_TwoText1_C_Changed(object sender, System.EventArgs e)
{
    OnC_Changed(e);
}
Inside of the OnChanged_UC project from "My User Controls" tab drag and drop on the Form1 our UC_TreeText control.
That is all for the first part and you may test the project. Just start the project and be convinced that all our tasks (see above) are carried out.
The second case: The events that have event data.
Now we have to have some class that holds event data. In order to "facilitate" process to declare delegate for our OnChanged event and to create the event data class I recommend the following steps. Within the same namespace of the project of our user controls two classes have to be contained : a class for delegate declaration (public class OnChangedEvent) and a class that contains the data for the OnChanged event and derives from System.EventsArgs (public class C_EventArgs : EventArgs) .
To the OnChangedEvent class add the delegate declaration:
// Delegate declaration.
public delegate void Changed_EventHandler(object sender,C_EventArgs e);
The C_EventArgs class may have a few constructors, properties, methods and has to define some logic processes that corresponds to the certain changes of the certain control.
Now, to create OnChanged event in some user control of your project just add to the code page of this control:
// The event member that is of type Changed_EventHandler
//(class OnChangedEvent.cs).
public event OnChangedEvent.Changed_EventHandler C_Changed;
To raise the event you use a method similar to the method considered above with at least one differences: you have to use "C_EventArgs":
protected virtual void OnC_Changed(C_EventArgs e)
{
    //you can set some properties here : e.[property_1];
    if (C_Changed != null)
    {
        C_Changed(this, e);
    }
    doOnChanged();
}
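For completeness, here is a short sketch of how a consumer (for example the host form) could subscribe to this event and read the event data; the handler name is illustrative:
// Subscribe (for example in Form1's constructor or designer-generated code)
uC_TwoText1.C_Changed += uC_TwoText1_C_Changed;

private void uC_TwoText1_C_Changed(object sender, C_EventArgs e)
{
    // e.C_Changes holds the maximal allowed number of changes for the control
    this.Text = "Allowed changes: " + e.C_Changes.ToString();
}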
Now let's move on to practice and change our test project a little.
One of the logical processes for our control, that we are going to use for our OnChanged event, is the following : after the certain quantity of changes of the control the last one has to change its "Enabled" property to "false" and to inform us on that (some message). Of course you can add some more processes, but for our example enough one process (and accordingly one "Constructor"). We are going to use this process only for the UC_TwoText control.
Add to the UC_TwoText project class "OnChangedEvent.cs". Add to the code page class C_EventArgs and then add code for our logical process:
public class OnChangedEvent
public OnChangedEvent()
//
// TODO: Add constructor logic here
// Delegate declaration.
public delegate void Changed_EventHandler(object sender,C_EventArgs e);
//=========================================================
// Class that contains the data for
// the C_Changed event. Derives from System.EventArgs.
public class C_EventArgs : EventArgs
{
private readonly int i_Changes = 0;
private int i_NumChanges = 0;
private Control o_Control ;
//Constructor.
public C_EventArgs(Control oControl, int iChanges)
this.i_Changes = iChanges;
o_Control = oControl ;
// The C_Changes property returns the maximal number of changes ,
// which can be made for the current control.
public int C_Changes
{
get
{
return i_Changes;
}
//The C_NumChanges property allows to set the number of changes
//( for the current control ) when the C_Changed event is generated.
//Here there is some logic process: to show a message and to change
//"Enable" property to "false" when the number of changes equel
//the maximal allowed number of changes .
public int C_NumChanges
set
{
i_NumChanges = value;
if (i_Changes <= i_NumChanges + 1 &&
o_Control != null)
{
MessageBox.Show (
" Enough changes for this control!","OnChanged_UC");
o_Control.Enabled = false;
}
}
Now change the code page of the UC_TwoText control :
public event OnChangedEvent.Changed_EventHandler C_Changed;
protected virtual void OnC_Changed(C_EventArgs e)
e.C_NumChanges = NumChanges ;
C_Changed(this,e);
//increase value of NumChanges by one
//changes BackColor to another color
if (textBox_C3.BackColor != Color.Yellow)
textBox_C3.BackColor = Color.Yellow ;
else
//after five changes the control will be "swiched off"
OnC_Changed(new C_EventArgs(this,5));
//after five changes the control will be "swiched off"
OnC_Changed(new C_EventArgs(this,5));
That is all for the second part and you may test the project. Just start the project and confirm that all our tasks (see above for the second case) are carried out: every change to one of our three textboxes is followed by the corresponding logical process: writing down the total number of changes for every user control, changing color, and so on... and "switching off" one of our user controls after a certain number of changes (we have chosen 5; see Figure 1, Figure 2).
Figure 1.
Figure 2.
CONCLUSION
I hope that this article (with the detailed description of the test project ) will help you to choose the most suitable way when you need to "catch" changes, which occur on your user controls, and to "react" to this change according to necessary logic.
Good luck in programming !
This topic teaches you best practices for using Jenkins with Google Kubernetes Engine. To implement this solution, see setting up Jenkins on Kubernetes Engine. Running Jenkins on Kubernetes Engine provides the following benefits:
When your build process uses containers, one virtual host can run jobs against different operating systems.
Kubernetes Engine provides ephemeral build executors, allowing each build to run in a clean environment that's identical to the builds before it.
As part of the ephemerality of the build executors, the Kubernetes Engine cluster is only utilized when builds are actively running, leaving resources available for other cluster tasks such as batch processing jobs.
Build executors launch in seconds.
Kubernetes Engine leverages the Google global load balancer to route web traffic to your instance. The load balancer handles SSL termination, and provides a global IP address that routes users to your web front end on one of the fastest paths from the point of presence closest to your users through the Google backbone network.
For a deep dive into Jenkins on Kubernetes Engine, watch the Next 2018 talk on YouTube:
Deploying the Jenkins master with Helm
Use Helm to deploy Jenkins from the Charts repository. Helm is a package manager you can use to configure and deploy Kubernetes apps.
For guidance on how to configure Jenkins for Kubernetes Engine, see Configuring Jenkins for Kubernetes Engine.
The following image describes the architecture for deploying Jenkins in a multi-node Kubernetes cluster.
Deploy the Jenkins master into a separate namespace in the Kubernetes cluster. Namespaces allow for creating quotas for the Jenkins deployment as well as logically separating Jenkins from other deployments within the cluster.
Creating Jenkins services
Jenkins provides two services that the cluster needs access to. Deploy these services separately so they can be individually managed and named.
An externally-exposed NodePort service on port 8080 that allows pods and external users to access the Jenkins user interface. This type of service can be load balanced by an HTTP load balancer.
An internal, private ClusterIP service on port 50000 that the Jenkins executors use to communicate with the Jenkins master from inside the cluster.
The following sections show sample service definitions.
--- kind: Service apiVersion: v1 metadata: name: jenkins-ui namespace: jenkins spec: type: NodePort selector: app: master ports: - protocol: TCP port: 8080 targetPort: 8080 name: ui
--- kind: Service apiVersion: v1 metadata: name: jenkins-discovery namespace: jenkins spec: selector: app: master ports: - protocol: TCP port: 50000 targetPort: 50000 name: slaves
Creating the Jenkins deployment
Deploy the Jenkins master as a deployment with a replica count of 1. This ensures that there is a single Jenkins master running in the cluster at all times. If the Jenkins master pod dies or the node that it is running on shuts down, Kubernetes restarts the pod elsewhere in the cluster.
It's important to set requests and limits as part of the Helm deployment, so that the container is guaranteed a certain amount CPU and memory resources inside the cluster before being scheduled. Otherwise, your master could go down due to CPU or memory starvation.
The Jenkins home volume stores XML configuration files and plugin JAR files
that make up your configuration. This data is stored on a Persistent Disk
managed by the GKE cluster and will persist data across restarts of Jenkins. To
change the size of the persistent disk edit the
Persistence.Size value when installing
Jenkins with Helm.
Connecting to Jenkins
Once the Jenkins pod has been created you can create a load balancer endpoint to connect to it from outside of Cloud Platform. Consider the following best practices.
Use a Kubernetes ingress resource for an easy-to-configure L7 load balancer with SSL termination.
Provide SSL certs to the load balancer using Kubernetes secrets. Use tls.cert and tls.key values, and reference the values in your ingress resource configuration.
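As an illustration only (the resource names, secret name, and ports below are placeholders rather than values from this guide), an ingress that terminates SSL with a certificate stored in a Kubernetes secret might look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  namespace: jenkins
spec:
  tls:
  - secretName: jenkins-tls   # secret holding the certificate and private key
  backend:
    serviceName: jenkins-ui
    servicePort: 8080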
Configuring Jenkins
Securing Jenkins
After you connect to Jenkins for the first time, it's important to immediately secure Jenkins. You can follow the Jenkins standard security setup tutorial for a simple procedure that leverages an internal user database. This setup doesn't require additional infrastructure and provides the ability to lock out anonymous users.
Installing plugins
You can install the following plugins to enhance the interactions between Jenkins and Kubernetes Engine.
The Kubernetes plugin enables using Kubernetes service accounts for authentication, and creating labeled executor configurations with different base images. The plugin creates a pod when an executor is required and destroys the pod when a job ends.
The Google Authenticated Source plugin enables using your service account credentials when accessing Cloud Platform services such as Cloud Source Repositories.
To add additional plugins using the Helm chart, edit the list of plugins in the values file that you pass to the Helm install or upgrade commands.
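For example, with the stable Jenkins chart the plugin list typically lives under the master section of the values file; the exact key name and plugin versions below are illustrative and may differ between chart versions:
master:
  installPlugins:
    - kubernetes:1.18.2
    - workflow-aggregator:2.6
    - google-source-plugin:0.3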
Customizing the Jenkins Agent Docker image
When creating a pod template, you can either provide an existing Docker image, or you can create a custom image that has most of your build-time dependencies installed. Using a custom image can decrease overall build time and create more consistent build environments.
Your custom Docker image must install and configure the Jenkins JNLP slave agent. The JNLP agent is software that communicates with the Jenkins master to coordinate running your Jenkins jobs and reporting job status.
One option is to add
FROM jenkins/jnlp-slave to your image configuration.
For example, if your application build process depends on the Go runtime, you
can create the following Dockerfile to extend the existing image with your own
dependencies and build artifacts.
FROM jenkins/jnlp-slave RUN apt-get update && apt-get install -y golang
Then, build and upload the image to your project's Container Registry repository by running the following commands.
docker build -t gcr.io/[PROJECT]/my-jenkins-image .
gcloud docker -- push gcr.io/[PROJECT]/my-jenkins-image
When creating a pod template, you can now set the Docker image field to the
following string, where
[PROJECT] is replaced with your project name and
[IMAGE_NAME] is replaced with the image name.
gcr.io/[PROJECT]/[IMAGE_NAME]
The above example ensures that the Go language runtime is pre-installed when your Jenkins job starts.
Building Docker Images in Jenkins
Cloud Build can be used from within your Jenkins jobs to build Docker images without
needing to host your own Docker daemon. Your Jenkins job must have service account
credentials available that have been granted the
cloudbuild.builds.editor role.
For an example Jenkins Pipeline file, see this GitHub repository.
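As a hedged sketch (the project ID and image name are placeholders, and it assumes the gcloud CLI is available on the build agent), a pipeline stage could shell out to Cloud Build like this:
pipeline {
  agent any
  stages {
    stage('Build image with Cloud Build') {
      steps {
        // Runs the container build remotely on Cloud Build, not on the Jenkins agent
        sh 'gcloud builds submit --tag gcr.io/my-project/my-app .'
      }
    }
  }
}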
Kaniko is another option for users looking to build containers inside their clusters. Kaniko does not require a Docker daemon to be present in order to build and push images to a remote registry.
For an example of using Kaniko in Jenkins, see this GitHub repository.
What's next
See the setting up Jenkins on Kubernetes Engine tutorial.
Learn about how to set up continuous deployment to Kubernetes Engine using Jenkins.
Try out other Google Cloud Platform features for yourself. Have a look at our tutorials.
You can simply loop through all the instances using a for loop.
Here is a simple program that you can use after configuring your IAM credentials with the AWS CLI.
import boto3
ec2 = boto3.resource('ec2')
for instance in ec2.instances.all():
    print(instance.id, instance.state)
Hope this helps.
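If you only need instances in a particular state, you can also filter on the server side instead of looping over everything. For example (assuming your credentials are already configured):
import boto3

ec2 = boto3.resource('ec2')

# Only fetch instances that are currently running
running = ec2.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])

for instance in running:
    print(instance.id, instance.instance_type, instance.state['Name'])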
Hey! You can try something like this:
for region in `aws ec2 describe-regions --output text | cut -f3`
do
echo -e "\nListing Instances in region:'$region'..."
aws ec2 describe-instances --region $region
done
Here, it loops through all the regions, printing the ec2 instances that exist in them.
The following will print the instance status in all the regions.
for region in `aws ec2 describe-regions --output text | cut -f3`
do
echo -e "\nInstances status in region: '$region'"
aws ec2 describe-instance-status --include-all-instances
done
A Flutter plugin to call native contacts view on Android and iOS devices.
To use this plugin, add native_contact as a dependency in your pubspec.yaml file.
// Import package import 'package:native_contact/native_contact.dart'; // Add a contact await NativeContact.addNewContact(contact);
example/README.md
Demonstrates how to use the native_contact plugin.
For help getting started with Flutter, view our online documentation.
Add this to your package's pubspec.yaml file:
dependencies: native_contact: ^0.0.2
You can install packages from the command line:
with Flutter:
$ flutter pub get
Alternatively, your editor might support
flutter pub get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:native_contact/native_contact.dart';
We analyzed this package on Jul 15, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: Flutter
References Flutter, and has no conflicting libraries.
Document public APIs. (-0.35 points)
44 out of 45 API elements have no dartdoc comment.Providing good documentation for libraries, classes, functions, and other API elements improves code readability and helps developers find and use your API.
Format
lib/native_contact.dart.
Run
flutter format to format
lib/native_contact.dart.
Package is pre-v0.1 release. (-10 points)
While nothing is inherently wrong with versions of
0.0.*, it might mean that the author is still experimenting with the general direction of the API.
Quickstart: Use your own notebook server to get started with Azure Machine Learning
Use your own Python environment and Jupyter Notebook Server to get started with Azure Machine Learning service. For a quickstart with no SDK installation, see Quickstart: Use a cloud-based notebook server to get started with Azure Machine Learning.
This quickstart shows how you can use the Azure Machine Learning service workspace to keep track of your machine learning experiments. You will run Python code that logs values into the workspace.
If you don’t have an Azure subscription, create a free account before you begin. Try the free or paid version of Azure Machine Learning service today.
Prerequisites
- A Python 3.6 notebook server with the Azure Machine Learning SDK installed
- An Azure Machine Learning service workspace
- A workspace configuration file (.azureml/config.json).
Get all these prerequisites from Create an Azure Machine Learning service workspace.
Use the workspace
Create a script or start a notebook in the same directory as your workspace configuration file (.azureml/config.json).
Attach to workspace
This code reads information from the configuration file to attach to your workspace.
from azureml.core import Workspace ws = Workspace.from_config()
Log values
Run this code that uses the basic APIs of the SDK to track experiment runs.
- Create an experiment in the workspace.
- Log a single value into the experiment.
- Log a list of values into the experiment.
from azureml.core import Experiment # Create a new experiment in your workspace. exp = Experiment(workspace=ws, name='myexp') # Start a run and start the logging service. run = exp.start_logging() # Log a single number. run.log('my magic number', 42) # Log a list (Fibonacci numbers). run.log_list('my list', [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]) # Finish the run. run.complete()
View logged results
When the run finishes, you can view the experiment run in the Azure portal. To print a URL that navigates to the results for the last run, use the following code:
print(run.get_portal_url())
This code returns a link you can use to view the logged values in the Azure portal in your browser.
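Besides viewing results in the portal, you can read the logged values back programmatically from the same run object, for example:
# Retrieve the metrics logged to this run as a dictionary
metrics = run.get_metrics()
print(metrics)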
Clean up resources
Important
You can use the resources you've created here as prerequisites to other Machine Learning tutorials and how-to articles.
If you don't plan to use the resources that you created in this article, delete them to avoid incurring any charges.
ws.delete(delete_dependent_resources=True)
Next steps
In this article, you created the resources you need to experiment with and deploy models. You ran code in a notebook, and you explored the run history for the code in your workspace in the cloud.
You can also explore more advanced examples on GitHub or view the SDK user guide.
Deep Sleep Summary
@iotmaker I am measuring the same power consumption ... 620 uA with expansion board and 520 uA without it.
Yes, in the meanwhile I read dozens of posts. The question is why after reset I don't get the GPIO pin that triggered the wakeup?
Does the deep sleep shield fix this GPIO wake-up bug?
Actually, if the LoPy works fine after this reset, what else does the deep sleep shield improve?
@Colateral wake from deep sleep is actually similar to a reset: as the CPU and its RAM are not powered during deep sleep, you can't just resume at the next Python instruction, as it restarts from scratch.
Differentiating a wake up from deep sleep from other reasons (regular power on, reset...) is still a bit difficult, the subject has already been discussed in other threads.
Hello,
I'm using LopY with Expansion board. SW upgraded to ver 1.7.8.b1
I tried deepsleep functionality simple as below.
import machine

machine.pin_deepsleep_wakeup(['P18', 'P15'], machine.WAKEUP_ALL_LOW, True)
machine.deepsleep(15*1000)
print("\n\n Wakeup", machine.wake_reason())
The board is going to sleep and indeed is waking up on P18 dumping the following output .9010,len:12 ho 0 tail 12 room 4 load:0x3fff9020,len:388 load:0x40078000,len:11584 load:0x4009fc00,len:848 entry 0x4009fd9c
** I don't understand why the LoPy was reset after this wakeup **
In order to dump the reason, I also added in main.py the following
print("\n\n Wakeup main reason", machine.wake_reason())
and has no output
Wakeup main reason (1, [])
** I believe there is a bug: no GPIO is mentioned in that list **
@PeterBaugh From the battery positive lead in series to the VIn of the expansion board
- PeterBaugh last edited by
@iotmaker See this post for the correct orientation of the deep sleep schield
Well turns out that if you put it backwards like the Official documentation
you will fry it..
i put the other shield for my other sipy the correct way and now it works. 620uA
Still the current with the sipy is 89mA
just doing this
from deepsleep import DeepSleep
import pycom

pycom.heartbeat(False)
ds = DeepSleep()

while True:
    ds.go_to_sleep(60)  # go to sleep for 60 seconds
    print("Woke up")
Great summary. I agree with @RobTuDelft: many people and projects are waiting on this to settle so some definitive conclusion from pycom would be great. They're doing such a great job but we need to know what they've done or what they're working on.
- jmarcelino last edited by
@Emmanuel-Goudot
The Pysense problem was fixed yesterday
- Emmanuel Goudot last edited by
For Pytrack, could include (if I am right)
- GPS/Accel powered during deep sleep: No (explain low 20µA deep sleep current)
For Pysense, something is obviously powerd which consume 2.5mA, but as electric schematics is not avail, it is difficult to say...
- PeterBaugh last edited by
Hi There,
How are you measuring the current in deep sleep? Also do you know of a way of keeping a PIN or PINs high when going into deepsleep, I dont think this is possible because of the board reseting when coming out of deep sleep.
Thanks
- RobTuDelft last edited by
Nice summary and this topic indeed deserves some clarification/overview.
Without deep sleep shield I think the device will only run for a day or so. At least that is my experience. It also depends on the devices you connect of course, because they stay powered in deep sleep.
- Type:
Bug
- Status: Resolved
- Priority:
Medium
- Resolution: Done
-
- Labels:
- Environment:
Swift 3 in Xcode 8.0 beta 3
I guess currently ObjC class extensions don't support stored type properties. But the error message is very confusing and without any location information:
a declaration cannot be both 'final' and 'dynamic'
import Foundation

extension Int {
    // Swift extensions support stored type properties.
    static let a = 1
}

class Foo: NSObject {}

extension Foo {
    static let a = 1
}
Details
- Type:
Bug
- Status: Closed
- Priority:
Minor
- Resolution: Fixed
- Affects Version/s: 2.3.4.1, 2.3.7
-
- Component/s: Core Actions
- Labels:None
- Environment:
JDK: 1.6.0_37
AS: Tomcat 7.0.22
Description
Sample code:
import com.opensymphony.xwork2.ActionSupport;

public class HelloAction extends ActionSupport {

    private String pName;

    public String execute() {
        return SUCCESS;
    }

    public String getpName() {
        return pName;
    }

    public void setpName(String pName) {
        this.pName = pName;
    }
}
I can't set the param's value using the URL '{contextPath}/hello.action?pName=111'. According to the JavaBeans spec (see String java.beans.Introspector.decapitalize(String name)), the property name for this getter/setter pair should be "pName".
A service mesh is made up of proxies deployed locally alongside each service instance, which control network traffic between their local instance and other services on the network. These proxies "see" all the traffic that runs through them, and in addition to securing that traffic, they can also collect data about it. Starting with version 1.5, Consul Connect is able to configure Envoy proxies to collect layer 7 metrics including HTTP status codes and request latency, along with many others, and export those to monitoring tools like Prometheus.
In this guide, you will deploy a basic metrics collection and visualization pipeline on a Kubernetes cluster using the official Helm charts for Consul, Prometheus, and Grafana. This pipeline will collect and display metrics from a demo application.
Tip: While this guide shows you how to deploy a metrics pipeline on Kubernetes, all the technologies the guide uses are platform agnostic; Kubernetes is not necessary to collect and visualize layer 7 metrics with Consul Connect.
Learning Objectives:
- Configure Consul Connect with metrics using Helm
- Install Prometheus and Grafana using Helm
- Install and start the demo application
- Collect metrics
» Prerequisites
If you already have a Kubernetes cluster with Helm and kubectl up and running, you can start on the demo right away. If not, set up a Kubernetes cluster using your favorite method that supports persistent volume claims, or install and start Minikube. If you do use Minikube, you may want to start it with a little bit of extra memory.
$ minikube start --memory 4096
You will also need to install kubectl, and both install and initialize Helm by following their official instructions.
If you already had Helm installed, check that you have up
to date versions of the Grafana, Prometheus, and Consul charts. You can update
all your charts to the latest versions by running
helm repo update.
Clone the GitHub repository that contains the configuration files you'll use while following this guide, and change directories into it. We'll refer to this directory as your working directory, and you'll run the rest of the commands in this guide from inside it.
$ git clone $ cd consul-k8s-l7-obs-guide
» Deploy Consul Connect Using Helm
Once you have set up the prerequisites, you're ready to install Consul. Start by cloning the official Consul Helm chart into your working directory.
$ git clone
Open the file in your working directory called
consul-values.yaml. This file
will configure the Consul Helm chart to:
- specify a name for your Consul datacenter
- enable the Consul web UI
- enable secure communication between pods with Connect
- configure the Consul settings necessary for layer 7 metrics collection
- specify that this Consul cluster should run one server
- enable metrics collection on servers and agents so that you can monitor the Consul cluster itself
You can override many of the values in Consul's values file using annotations on
specific services. For example, later in the guide you will override the
centralized configuration of
defaultProtocol.
# name your datacenter
global:
  datacenter: dc1

server:
  # use 1 server
  replicas: 1
  bootstrapExpect: 1
  disruptionBudget:
    enabled: true
    maxUnavailable: 0

client:
  enabled: true
  # enable grpc on your client to support consul connect
  grpc: true

ui:
  enabled: true

connectInject:
  enabled: true
  # inject an envoy sidecar into every new pod,
  # except for those with annotations that prevent injection
  default: true
  # these settings enable L7 metrics collection and are new in 1.5
  centralConfig:
    enabled: true
    # set the default protocol (can be overwritten with annotations)
    defaultProtocol: 'http'
    # tell envoy where to send metrics
    proxyDefaults: |
      {
        "envoy_dogstatsd_url": "udp://127.0.0.1:9125"
      }
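For example, an individual service can override the default protocol through annotations on its pod template; the annotation names below are illustrative and their exact form may vary between consul-k8s versions:
# Pod template metadata for a service that speaks gRPC instead of the default HTTP
metadata:
  annotations:
    "consul.hashicorp.com/connect-inject": "true"
    "consul.hashicorp.com/connect-service-protocol": "grpc"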
Warning: By default, the chart will install an insecure configuration of Consul. This provides a less complicated out-of-box experience for new users but is not appropriate for a production setup. Make sure that your Kubernetes cluster is properly secured to prevent unwanted access to Consul, or that you understand and enable the recommended Consul security features. Currently, some of these features are not supported in the Helm chart and require additional manual configuration.
Now install Consul in your Kubernetes cluster and give Kubernetes a name for your Consul installation. The output will be a list of all the Kubernetes resources created (abbreviated in the code snippet).
$ helm install -f consul-values.yaml --name l7-guide ./consul-helm NAME: consul LAST DEPLOYED: Wed May 1 16:02:40 2019 NAMESPACE: default STATUS: DEPLOYED RESOURCES:
Check that Consul is running in your Kubernetes cluster via the Kubernetes dashboard or CLI. If you are using Minikube, the below command will run in your current terminal window and automatically open the dashboard in your browser.
$ minikube dashboard
Open a new terminal tab to let the dashboard run in the current one, and change
directories back into
consul-k8s-l7-obs-guide. Next, forward the port for the
Consul UI to localhost:8500 and navigate to it in your browser. Once you run the
below command it will continue to run in your current terminal window for as
long as it is forwarding the port.
$ kubectl port-forward l7-guide-consul-server-0 8500:8500 Forwarding from 127.0.0.1:8500 -> 8500 Forwarding from [::1]:8500 -> 8500 Handling connection for 8500
Let the consul dashboard port forwarding run and open a new terminal tab to the
consul-k8s-l7-obs-guide directory.
» Deploy the Metrics Pipeline
In this guide, you will use Prometheus and Grafana to collect and visualize metrics. Consul Connect can integrate with a variety of other metrics tooling as well.
» Deploy Prometheus with Helm
You'll follow a similar process as you did with Consul to install Prometheus via
Helm. First, open the file named
prometheus-values.yaml that configures the
Prometheus Helm chart.
The file specifies how often Prometheus should scrape for metrics, and which endpoints it should scrape from. By default, Prometheus scrapes all the endpoints that Kubernetes knows about, even if those endpoints don't expose Prometheus metrics. To prevent Prometheus from scraping these endpoints unnecessarily, the values file includes some relabel configurations.
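Concretely, a rule of that kind usually looks something like the following sketch (illustrative only, not necessarily the exact contents of prometheus-values.yaml): it keeps only pods that opt in through the prometheus.io/scrape annotation.
scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # keep only pods that opt in via the prometheus.io/scrape annotation
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true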
Install the official Prometheus Helm chart using the values in
prometheus-values.yaml.
$ helm install -f prometheus-values.yaml --name prometheus stable/prometheus
NAME: prometheus
LAST DEPLOYED: Wed May 1 16:09:48 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
The output above has been abbreviated; you will see all the Kubernetes resources that the Helm chart created. Once Prometheus has come up, you should be able to see your new services on the Minikube dashboard and in the Consul UI. This might take a short while.
» Deploy Grafana with Helm
Installing Grafana will follow a similar process. Open and look
through the file named
grafana-values.yaml. It configures Grafana to use Prometheus as its datasource.
Use the official Helm chart to install Grafana with your values file.
$ helm install -f grafana-values.yaml --name grafana stable/grafana
NAME: grafana
LAST DEPLOYED: Wed May 1 16:57:11 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
...
NOTES:
1. Get your 'admin' user password by running:
   kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
2. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:
   grafana.default.svc.cluster.local
   Get the Grafana URL to visit by running these commands in the same shell:
   export POD_NAME=$(kubectl get pods --namespace default -l "app=grafana,release=grafana" -o jsonpath="{.items[0].metadata.name}")
   kubectl --namespace default port-forward $POD_NAME 3000
3. Login with the password from step 1 and the username: admin
Again, the above output has been abbreviated. At the bottom of your terminal output are shell-specific instructions to access your Grafana UI and log in, displayed as a numbered list. Accessing Grafana involves:
- Getting the secret that serves as your Grafana password
- Forwarding the Grafana UI to localhost:3000, which will not succeed until Grafana is running
- Visiting the UI and logging in
Once you have logged into the Grafana UI, hover over the dashboards icon (four squares in the left hand menu) and then click the "manage" option. This will take you to a page that gives you some choices about how to upload Grafana dashboards. Click the black "Import" button on the right hand side of the screen.
Open the file called
overview-dashboard.json and copy the contents into the
json window of the Grafana UI. Click through the rest of the options, and you
will end up with a blank dashboard, waiting for data to display.
» Deploy a Demo Application on Kubernetes
Now that your monitoring pipeline is set up, deploy a demo application that will generate data. We will be using Emojify, an application that recognizes faces in an image and pastes emojis over them. The application consists of a few different services and automatically generates traffic and HTTP error codes.
All the files defining Emojify are in the
app directory. Open
app/cache.yml
and take a look at the file. Most of the services that make up Emojify communicate
over HTTP, but the cache service uses gRPC. In the annotations section of the
file you'll see where
consul.hashicorp.com/connect-service-protocol specifies
gRPC, overriding the
defaultProtocol of HTTP that we centrally configured in
Consul's value file.
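For reference, that override is simply a pod annotation sitting next to the usual Connect injection annotation. A rough sketch (the values here are illustrative rather than copied from app/cache.yml):
template:
  metadata:
    annotations:
      "consul.hashicorp.com/connect-inject": "true"
      "consul.hashicorp.com/connect-service-protocol": "grpc"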
At the bottom of each file defining part of the Emojify app, notice the block
defining a
prometheus-statsd pod. These pods translate the metrics that Envoy
exposes to a format that Prometheus can scrape. They won't be necessary anymore
once Consul Connect becomes compatible with Envoy 1.10. Apply the configuration
to deploy Emojify into your cluster.
$ kubectl apply -f app
Emojify will take a little while to deploy. Once it's running you can check that it's healthy by taking a look at your Kubernetes dashboard or Consul UI. Next, visit the Emojify UI. This will be located at the IP address of the host where the ingress server is running, at port 30000. If you're using Minikube you can find the UI with the following command.
$ minikube service emojify-ingress --url
Test the application by emojifying a picture. You can do this by pasting the following URL into the URL bar and clicking the submit button. (We provide a demo URL because Emojify can be picky about processing some image URLs if they don't link directly to the actual picture.)
Now that you know the application is working, start generating automatic load so that you will have some interesting metrics to look at.
$ kubectl apply -f traffic.yaml
» Collect Application Metrics
Envoy exposes a huge number of metrics, but you will probably only want to monitor or alert on a subset of them. Which metrics are important to monitor will depend on your application. For this getting-started guide we have preconfigured an Emojify-specific Grafana dashboard with a couple of basic metrics, but you should systematically consider what others you will need to collect as you move from testing into production.
» Review Dashboard Metrics
Now that you have metrics flowing through your pipeline, navigate back to your
Grafana dashboard at
localhost:3000. The top row of the dashboard displays
general metrics about the Emojify application as a whole, including request and
error rates. Although changes in these metrics can reflect application health
issues once you understand their baseline levels, they don't provide enough
information to diagnose specific issues.
The following rows of the dashboard report on some of the specific services that make up the emojify application: the website, API, and cache services. The website and API services show request count and response time, while the cache reports on request count and methods.
» Clean up
If you've been using Minikube, you can tear down your environment by running
minikube delete.
If you want to get rid of the configuration files and Consul Helm chart, recursively remove the
consul-k8s-l7-obs-guide directory.
$ cd ..
$ rm -rf consul-k8s-l7-obs-guide
» Summary
In this guide, you set up layer 7 metrics collection and visualization in a Minikube cluster using Consul Connect, Prometheus, and Grafana, all deployed via Helm charts.
Runtime Arguments
How is it that Microsoft people do not seem to understand that they need to evolve and improve their API over time? Some transformations, such as those triggered by the &, ||, && operators, split command lines into several parts.
java Echo "Drink Hot Java"
Drink Hot Java
Parsing Numeric Command-Line Arguments
If an application needs to support a numeric command-line argument, it must convert a String argument that represents a number, such as "34", to a numeric value.
Return Value: None. For example, a program might allow the user to specify verbose mode--that is, specify that the application display a lot of trace information--with the command line argument -verbose. Test program For exposition's sake, we'll be using this small program to generate the example output below: #include
Java Command Line Arguments Example
This is the way I am doing it:
VirtualMachineManager manager = Bootstrap.virtualMachineManager();
LaunchingConnector connector = manager.defaultConnector();
Map arguments = connector.defaultArguments();
((Connector.Argument)arguments.get("options")).setValue(userVMArgs);
((Connector.Argument)arguments.get("main")).setValue(cmdLine);
Here userVMArgs is the classpath of my main class.
- Another note... "a" == "a" will work in many cases, because Strings are special in Java, but 99.99999999999999% of the time you want to use .equals.
- Arguments: Argument - Supplies the argument to encode.
- It's also able to print help messages detailing the options available for a command line tool." Commons CLI supports different types of options: POSIX like options (ie.
- Should not two backslashes be transformed into four backslashes?
- The arguments passed from the console can be received in the java program and it can be used as an input.
The most common indirection is a trip through the venerable cmd.exe, which we encounter when we use the system function, construct a script for later execution, or write a makefile for nmake. That's why ^ works as the line continuation character: it tells cmd to copy a subsequent newline as itself instead of regarding that newline as a command terminator.
I don't know for sure what the It in It's used to accept input from the command prompt while running the program. Command Line Arguments In Java Eclipse class A{ public static void main(String args[]){ for(int i=0;i
For example, the UNIX command that prints the contents of a directory--the ls utility program--accepts arguments that determine which file attributes to print and the order in which the files are Java Vm Arguments Environment: Arbitrary. --*/ { // // Unless we're told otherwise, don't quote unless we actually // need to do so --- hopefully avoid problems if programs won't // parse quotes properly The short, sweet answer to the question posed is: public static void main( String[] args ) { if( args.length > 0 && args[0].equals( "a" ) ) { // first argument is How do we construct an argument string understood by CommandLineToArgvW?
Command Line Arguments In Java Eclipse
share|improve this answer answered Apr 4 '09 at 0:56 TofuBeer 43k990137 add a comment| up vote 0 down vote Command line arguments are stored as strings in the String array(String[] args) website here Browse other questions tagged java command-line arguments or ask your own question. Java Command Line Arguments Example tar -zxvf foo.tar.gz) GNU like long options (ie. How To Take Command Line Arguments In Java more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed
says: July 29, 2014 at 9:01 pm This all quoting and unquoting is insane and inconsistent.
Note to Applet Programmers: The runtime system only passes command line arguments to Java applications--applets use parameters instead. If you pass an argument to _spawnlp() that contains a space, you will need to escape it yourself.
After step 1, then if and only if the command line produced will be interpreted by cmd, prefix each shell metacharacter (or each character) with a ^ character.
Echoing Command-Line Arguments
The Echo example displays each of its command-line arguments on a line by itself:
public class Echo {
    public static void main (String[] args) {
        for (String s: args) {
            System.out.println(s);
        }
    }
}
This is because The Space Character Separates Command Line Arguments. The arguments passed can be anything. Since "a" from the command line and "a" in the source for your program are allocated in two different places, == cannot be used.
Reply extrarius says: April 5, 2013 at 7:14 am For CommandLineToArgvW, a backslash only needs to be escaped if it is followed by another backslash or a double quote (because those Use Of Command Line Arguments In Java CommandLine - Supplies the command line to which we append the encoded argument string. mean?
passed at the time of running the java program.
There is no way to escape those, or the surrounding quotes, in a way that both bypasses cmd's special quote treatment and succeeds at starting the program. Let us re-write above example once again where we will print program name and we also pass a command line argument by putting inside double quotes − #include
Conventions There are several conventions that you should observe when accepting and processing command line arguments with a Java application. These arguments could be filenames, debugging flags, hostnames, or any other kind of information: the point is that we are to take a string and make sure some child program receives exactly the same string. This convention is a good one because it provides a way to encode any command line argument as part of a command line string without losing information. Command line arguments are the inputs that the program accepts from the command prompt while it is running.
Do not: Simply add quotes around command line arguments without any further processing. whoami, of course, can be replaced by any number of harmful commands.
share|improve this answer answered Apr 4 '09 at 0:16 Joachim Sauer 188k36399508 add a comment| up vote 1 down vote Try to pass value a and compare using the equals method The problem is that there is no ArgvToCommandLineW. I don't know for sure what the This in This essentially speeds up the program execution when the program depends on user input. Complaints?
Creating a new node style with three circles More up-to-date alternative for "avoiding something like the plague"? In the off chance you used something like: if(argv[0] == "a") then it does not work because == compares the location of the two objects (physical equality) rather than the contents For example, suppose a Java application called Sort sorts lines in a file. The correct solution We've seen that properly quoting an arbitrary command line argument is non-trivial, and that doing it incorrectly causes subtle and maddening problems.
Adding quotation marks is insufficient
Having seen the problems with the previous approach, one might suggest that we heed the advice provided in the C runtime documentation and simply surround arguments with quotation marks.
Wiki: PythonPython Coding and Syntax Reference
by Oliver; Dec. 22, 2013
Introduction
This is a collection of miscellaneous Python syntax for reference. I'm new to Python—the language responsible for introducing the adjective Pythonic into English—but I've already discovered many of its selling points:
- the Python shell, invoked by simply typing python on the command line
- the ability to print out arrays and hashes with a simple print() statement (no loops!)
- wide usage as a backend web programming language (e.g., Django)
- regex functionality as good as Perl's
- good math libraries, good plotting libraries, good libraries all around
- the ability to easily make modules and have them double as stand-alone scripts
- an awesome package manager, pip
- Jupyter, a notebook GUI a bit like RStudio or Matlab
An important goal of the Python developers is making Python fun to use.It's also conquered a giant swath of territory: scientists like it for NumPy, SciPy, and Pandas; web people like it for Django and Flask; teachers like it because it's perfect for beginners; et cetera. Adding to its appeal, the official documentation is comprehensive and elegant.
This wiki—so you know what to expect—is more a reminder to myself than a carefully crafted article. Note that there are two versions of Python, Python 2.x and Python 3.x, which the docs call "the first ever intentionally backwards incompatible Python release." I assume Python 2 here, but I'll use a python3 tag when I want to explicitly discuss a Python 3 feature or point out a difference. For a good (free!) professional tutorial, see the book Dive Into Python (Python 2) or Dive Into Python 3 (Python 3).
The Python ShellThe first lesson of Python is that you can open up a Python shell (i.e., a program that interprets Python commands) on the command line simply by typing:
$ python(where the $ denotes the ordinary bash prompt). Screenshot:
As you can see, the python prompt is typically triple angle brackets:
>>>
Data TypesHere are some, but not all, of the Python data types:
- int
- float
- str
- list [ ]
- tuple ( )
- dict { }
- set
- bool
>>> x = ['a', 'b', 'c', 1, 2, 3] >>> x ['a', 'b', 'c', 1, 2, 3]The indices of the list employ zero-based counting:
>>> x[0] 'a' >>> x[5] 3 >>> x[-1] # negative indicies count backwards 3We can grab ranges, too. In python, the list range:
x:ysignifies the range from x to y, not including y. For example:
>>> x[1:2] ['b'] >>> x[:3] # from the beginning up to (but not including) 3 ['a', 'b', 'c'] >>> x[3:] # from 3 to the end [1, 2, 3]Note the same thing works on a plain old string:
>>>>> s[0:4] 'test'Now, let's define a tuple:
>>> y = ('a', 'b', 'c', 1, 2, 3) >>> y ('a', 'b', 'c', 1, 2, 3)For these basic range operations, it behaves similarly to our list x:
>>> y[0] 'a' >>> y[:3] ('a', 'b', 'c')However, because tuples are immutable (see the Stackoverflow discussion here), x and y behave differently when it comes to changing elements:
>>> x[0] = 'z' # x is a list >>> x ['z', 'b', 'c', 1, 2, 3]
>>> y[0] = 'z' # y is a tuple, so this doesn't work Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'tuple' object does not support item assignmentFinally, let's define a dict:
>>> z = {'a': 1, 'b': 2, 'c': 3} >>> z {'a': 1, 'c': 3, 'b': 2}We can input a key to access a value:
>>> z['a'] 1 >>> z['b'] 2Also note you can cast variables from one type to another:
>>> z = {'a': 1, 'b': 2, 'c': 3} >>> z {'a': 1, 'c': 3, 'b': 2} >>> str(z) "{'a': 1, 'c': 3, 'b': 2}" >>> list(z) ['a', 'c', 'b'] >>> set(z) {'a', 'c', 'b'}Forget what variable type you're using? You can always call Python's type() function. E.g., here:
>>> type(z) <type 'dict'>
>>> print 'joe'
joe
>>> print('joe')
joe
python3
However, in Python 3 print() is a proper function and thus accepts only this syntax:
>>> print('joe')
joe
Therefore, it's a good idea to always use parentheses whichever version of Python you're using. Check out What's New In Python 3.0 for more differences.
A wonderfully convenient feature of Python is that it can handle printing objects of any datatype. E.g.:
>>> z = {'a': 1, 'b': 2, 'c': 3} >>> print(z) {'a': 1, 'c': 3, 'b': 2}Want to print to stderr, not stdout, in your script? That's:
import sys sys.stderr.write('Error\n')
Data Types, Continued: Objects in PythonIn the section on Data Types, we defined a list, a tuple, and a dict:
>>> x
['a', 'b', 'c', 1, 2, 3]
>>> y
('a', 'b', 'c', 1, 2, 3)
>>> z
{'a': 1, 'c': 3, 'b': 2}
Python is a modern object-oriented programming language and, as such, x is an instance (or object, if you like) of the list class; y is an instance of the tuple class; and z is an instance of the dict class. Dive Into Python tells us about objects in Python.
Still, this begs the question: what is an object?
This is so important that I'm going to repeat it in case you missed it the first few times: everything in Python is an object. Strings are objects. Lists are objects. Functions are objects. Even modules are objects.
To see the attributes of, say, z, we can call the function dir() on it:
>>> dir(z)
['_', 'viewitems', 'viewkeys', 'viewvalues']
[A double underscore before and after a name indicates a] special method name.Stackoverflow elaborates:
Note that names with double leading and trailing underscores are essentially reserved for Python itself: "Never invent such names; only use them as documented."So these are internal functions we can ignore for now. The other methods, however, are useful for us. Python uses the dot syntax, so you can access the attribute of an object as:
object.attributeLet's try some of the methods which dir() printed out. The keys() method prints a list of the dict's keys:
>>> z.keys() ['a', 'c', 'b']The values() method prints a list of the dict's values:
>>> z.values() [1, 3, 2]And items() prints a list of (key, value) tuples:
>>> z.items() [('a', 1), ('c', 3), ('b', 2)]Look what happens if we call z.items instead of z.items():
>>> z.items <built-in method items of dict object at 0x7fa213d42400>The note says that items is a method of our object, and we also get a reference to the object or, if you like, its id. 0x7fa213d42400 is the object z's memory address (in hexadecimal). You can also see this with the built-in function id():
>>> id(z) 140334094099456As a sanity check, verify that 140334094099456 == 0x7fa213d42400:
>>> int('0x7fa213d42400', 0) 140334094099456As Dive Into Python mentioned, we can examine our method's __doc__ attribute:
>>> print(z.items.__doc__) D.items() -> list of D's (key, value) pairs, as 2-tuplesThis is a good illustration of how the dot syntax conveniently allows us to chain things together, accessing an object's attribute's attribute, ad infinitum.
We'll see below that we define functions in Python using the keyword def. If we define a simple function describe:
>>> def describe(x): print x.__doc__then:
>>> describe(z.items) D.items() -> list of D's (key, value) pairs, as 2-tuples
>>> describe(z.items()) list() -> new empty list list(iterable) -> new list initialized from iterable's itemsIn the first case, we're getting the doc string of the items function; in the second case, calling items returns a list so we're getting the doc string of a list. Python has two builtin functions, type and help, which you can try calling on z.items and z.items() as an exercise.
Into the Weeds on Python ObjectsThere's a great stackoverflow post that gets into the weeds on Python objects:
And Python has a very peculiar idea of what classes are, borrowed from the Smalltalk language.(Source: What is a metaclass in Python?).:e.g.:
- you can assign it to a variable
- you can copy it
- you can add attributes to it
- you can pass it as a function parameter
>>>>
I encourage you to read the complete post! The article also gives a little bit of Python trivia about the type() function::
Well, type has a completely different ability, it can also create classes on the fly. type can take the description of a class as parameters, and return a class.
It works like this, fwiw:
>>> myClass = type('myClass', (), {'name': 'Oliver'}) >>> myClass <class '__main__.myClass'> >>> x = myClass() >>> x <__main__.myClass object at 0x109f2f610> >>> x.name 'Oliver'
Conditional Logic
For example:
a = 0
b = 1
if a:
    print ('A')
elif b:
    print ('B')
else:
    print ('C')
# output is B
A unique feature of Python is that there are no curly brackets { } to demarcate blocks of code and define scope. Instead, indentation serves this purpose. The conventional unit of indentation in Python is a half-tab (4 spaces), although you can use a different number of spaces as long as you're consistent. Also note that, unlike many programming languages, you don't need a semi-colon at the end of a line (although you can still use one to combine two lines:
>>> print('hello'); print('hello')
hello
hello
)
Loops
Basic for loop to print 1 through 3:
>>> for i in [1, 2, 3]: print(i)
1
2
3
or:
>>> for i in range(1,3+1): print(i)
1
2
3
You can loop over more complicated data structures, like a list of tuples:
>>> for i in [('a', 1), ('c', 3), ('b', 2)]: print(i) ('a', 1) ('c', 3) ('b', 2)If we loop with two variables, we get:
>>> for i,j in [('a', 1), ('c', 3), ('b', 2)]: print(i) a c bOr we can print out both:
for i,j in [('a', 1), ('c', 3), ('b', 2)]: print('i = ' + i + ', j = ' + str(j))This yields:
i = a, j = 1 i = c, j = 3 i = b, j = 2We get the same result if we have a dict z, such that:
>>> z = {'a': 1, 'c': 3, 'b': 2}and our loop is:
>>> for i,j in z.items(): print('i = ' + i + ', j = ' + str(j))And if we have two tuples, we can zip them together and get the same result again:
>>> x = ('a', 'c', 'b') >>> y = (1, 3, 2)
>>> for i,j in zip(x,y): print('i = ' + i + ', j = ' + str(j))Oftentimes, we want to loop through a list and print the index as well as the value. We can use the built-in function enumerate to accomplish this:
x = ['a', 'b', 'c']
for i,j in enumerate(x):
    print('index: ' + str(i) + ', element: ' + j)
This gives us:
index: 0, element: a index: 1, element: b index: 2, element: cRead about zip, enumerate and the other built-in Python functions here:
>>> help(zip) >>> help(enumerate)Python has a syntactically compact way of doing loops called list comprehension we will see below.
break, continueObserve the following 2 scripts and the output they produce.
script1.py:
#!/usr/bin/env python

for i in range(1, 5 + 1):
    if i == 3:
        break
    print(i)

$ ./script1.py
1
2
script2.py:
#!/usr/bin/env python

for i in range(1, 5 + 1):
    if i == 3:
        continue
    print(i)

$ ./script2.py
1
2
4
5
This illustrates the difference between break and continue: break exits the whole loop, while continue merely exits the current iteration.
File I/O
Reading a file:
with open('myfile', 'r') as f:
    contents = f.read()
Reading a file line by line:
with open('myfile', 'r') as f:
    for line in f:
        print(line),
The with syntax takes care of closing the file object automatically. Note: the comma suppresses the default newline appended by print.
Often, you want the file name to be passed in by the user:
import sys

with open(sys.argv[1], 'r') as f:
    for line in f:
        print(line),
Take input from std:in and write to a file:
with open('myfile', 'w') as f:
    for line in sys.stdin:
        f.write(line)
Read every row of std:in into a list:
# read file into list
contents = sys.stdin.read().split('\n')
Note we can save ourselves an indent by reading from and/or writing to multiple files at once like so:
with open('file1.txt', 'w') as f, open('file2.txt', 'w') as g:
    # do something ...
    f.write('write something\n')
    g.write('write something else\n')
Number ManipulationSuppose:
>>> counter = 1To increment:
>>> counter += 1 >>> counter 2In Python:
counter++ # this doesn't existdoes not exist.
Integer operations return integers in Python 2, so note the difference between these two expressions:
>>> 1/2 0
>>> 1./2 0.5python3
Note in Python 3, this behavior changes and 1/2 yields 0.5:
>>> 1/2 0.5You can still get Python 2 style division with:
>>> 1//2 0In both Python 2 and 3, use the ** operator to exponentiate:
>>> 3**2 9 >>> 3**3 27Square root:
>>> 2**(1./2) 1.4142135623730951Trigonometry (use radians):
>>> import math >>> math.cos(0) 1.0 >>> math.sin(math.pi/2) 1.0This uses the math module, which is built in to Python.
Scipy and numpy are popular science and math libraries, which you have to install on your own. For example, to compute 52 choose 5, the number of 5 card combinations from a 52 card deck, it's:
>>> from scipy import special >>> special.binom(52, 5) 2598960.0
String Manipulation
Split a string on tab, returning a list:
>>> a = 'hello\tkitty'
>>> print(a)
hello   kitty
>>> a.split('\t')
['hello', 'kitty']
Join a list on comma, returning a string:
>>> b = ['hello', 'kitty']
>>> ", ".join(b)
'hello, kitty'
Do a Perl-style chomp—i.e., strip a newline character off of the end of a string:
>>> c = 'hello\n'
>>> print(c)
hello

>>> print(c.rstrip('\n'))
hello
Define a multi-line string:
mystr='''This is a multiline string'''
The .format() method
As we've already seen, one way to concatenate strings is to use a plus sign:
a = 'Hello '
b = 'kitty'
print(a + b)
# output is 'Hello kitty'
When you want to throw some variables into a string, a more professional way to do this is to take advantage of the string object's built-in format() method. E.g.:
>>> a = 'Hello'
>>> b = 'kitty'
>>> "{} {}".format(a, b)
'Hello kitty'
>>> "{0} {1}".format(a, b)
'Hello kitty'
>>> "{word1} {word2}".format(word1=a, word2=b)
'Hello kitty'
>>> "{word1} {word2}".format(word1='Hello', word2='kitty')
'Hello kitty'
If you have a dict object, you can use string's format() method like this:
>>> d = {'friend1': 'Kim', 'friend2': 'Joe'} >>> "Hello {friend1}, Goodnight {friend2}".format(**d) 'Hello Kim, Goodnight Joe'As this page says:
The special syntax ** before the dictionary indicates that the dictionary is not to be treated as a single actual parameter. Instead keyword arguments for all the entries in the dictionary effectively appear in its place.
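The same ** unpacking works with any function that takes keyword arguments, not just format(). A tiny sketch:
>>> def greet(friend1, friend2):
...     return "Hello {}, Goodnight {}".format(friend1, friend2)
...
>>> d = {'friend1': 'Kim', 'friend2': 'Joe'}
>>> greet(**d)   # same as greet(friend1='Kim', friend2='Joe')
'Hello Kim, Goodnight Joe'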
UnicodeIn Python 2, ASCII is the default character encoding. If you want to use the richer unicode character set (utf-8), you have to prefix your string with a "u". Let's try printing the Chinese and Japanese character for "cat", which is 猫:
>>> print('cat = \u732B') cat = \u732B >>> print(u'cat = \u732B') cat = 猫python3
In Python 3, however, unicode encoding is default:
>>> print('cat = \u732B') cat = 猫
List Operations
extend and appendextend and append are two methods to add elements to your list. Suppose we have a list x such that:
>>> x = ['a', 'b', 'c']If we're adding a string to the list, these two methods do the same thing:
>>> x.extend('d') >>> x ['a', 'b', 'c', 'd'] >>> x.append('e') >>> x ['a', 'b', 'c', 'd', 'e']However, passing a list as an argument reveals the difference between the two. extend merges while append appends:
>>> x.extend(['f', 'g']) >>> x ['a', 'b', 'c', 'd', 'e', 'f', 'g']
>>> x.append(['h', 'i']) >>> x ['a', 'b', 'c', 'd', 'e', 'f', 'g', ['h', 'i']]pop() returns the last element of the list:
>>> x.pop() ['h', 'i'] >>> x ['a', 'b', 'c', 'd', 'e', 'f', 'g']You can also pass pop an index:
['a', 'b', 'c', 'd', 'e', 'f', 'g'] >>> x.pop(0) 'a' >>> x ['b', 'c', 'd', 'e', 'f', 'g']In addition to extend, you can join two lists by using +:
>>> ['x', 'y'] + ['q', 'r'] ['x', 'y', 'q', 'r']
rangeIn Python 2, range is a function that returns a list:
>>> range(1,11) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]xrange constructs an xrange object that behaves similarly but avoids storing a list in memory. These are identical:
for i in range(1,11): print(i)
for i in xrange(1,11): print(i)though the later is more efficient if the range is large.
python3
In Python 3, range returns an iterable range object, not a list. The benefit is that, like Python 2 xrange (which doesn't exist in Python 3), it does not need to create a list in memory and can generate the next needed value for your iteration on the fly.
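If you do need an actual list in Python 3, just cast the range object:
>>> range(1, 11)
range(1, 11)
>>> list(range(1, 11))
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]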
Copying a ListCopy the list x into the new variable y:
>>> y = list(x) >>> y ['b', 'c', 'd', 'e', 'f', 'g']The following does not make a fresh copy of x:
>>> z = xObserve:
>>> x ['b', 'c', 'd', 'e', 'f', 'g'] >>> z = x >>> z[0] = 5 >>> z [5, 'c', 'd', 'e', 'f', 'g'] >>> x [5, 'c', 'd', 'e', 'f', 'g']We see that changing z also changes x. What's going on? There's a good explanation here:
In Python variables are just tags attached to objects ... If we do: b = a. We didn’t copy the list referenced by a. We just created a new tag b and attached it to the list pointed [to] by a.We can understand this better if we use the id function:
>>> y = list(x) >>> z = x >>> id(x) 4462945920 >>> id(z) # z has the same id as x 4462945920 >>> id(y) # y doesn't because it's a new copy 4462979192
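Two other common ways to get a genuine copy are slicing and the copy module (the latter can also copy nested lists):
>>> import copy
>>> y2 = x[:]               # slice copy, same effect as list(x)
>>> z2 = copy.deepcopy(x)   # deep copy: nested lists are copied too
>>> id(y2) == id(x), id(z2) == id(x)
(False, False)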
Example 1Convert a list of strings into a list of indices:
>>> x = ['a', 'b', 'c'] >>> x = range(len(x)) >>> x [0, 1, 2]
Example 2Let's suppose we have a string x such that:
>>> print(x)
0   2
0.1 1
0.2 0
0.3 0
0.4 0
0.5 0
0.6 0
0.7 0
0.8 0
0.9 0
1   0
How can we get each column into a separate list? Python's split works on any whitespace so it evaporates both tabs and newlines:
>>> print(x.split()) ['0', '2', '0.1', '1', '0.2', '0', '0.3', '0', '0.4', '0', '0.5', '0', '0.6', \ '0', '0.7', '0', '0.8', '0', '0.9', '0', '1', '0']As described in this stackoverflow post, the next step falls under the rubric of list slicing, according to the syntax:
some_list[start:stop:step]As we've seen, if we leave out the stop part, the range defaults to the end:
>>> print(x.split()[0::2]) ['0', '0.1', '0.2', '0.3', '0.4', '0.5', '0.6', '0.7', '0.8', '0.9', '1'] >>> print(x.split()[1::2]) ['2', '1', '0', '0', '0', '0', '0', '0', '0', '0', '0']
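The step can also be negative, which gives a compact way to reverse a list:
>>> x = ['a', 'b', 'c', 1, 2, 3]
>>> x[::2]     # every other element
['a', 'c', 2]
>>> x[::-1]    # step backwards: a reversed copy
[3, 2, 1, 'c', 'b', 'a']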
Dict OperationsDeclare an empty dict:
d = {}Now let's look at some simple operations. Let:
d = {'a': 1, 'b': 2, 'c': 3, 'd': 4}Basic dict operations we've already seen:
>>> for key in d: print key a c b d
>>> d.keys() ['a', 'c', 'b', 'd']
>>> d.values() [1, 3, 2, 4]
>>> d.items() [('a', 1), ('c', 3), ('b', 2), ('d', 4)]
>>> for key,value in d.items(): print(key, value) ('a', 1) ('c', 3) ('b', 2) ('d', 4)python3
In Python 3, the dict methods keys(), values(), and items() don't return list objects but instead—for the purposes of efficiency—return iterable "view objects". This change does not concern us much because we can still iterate over them and cast them as lists if necessary (read more: What are Python dictionary view objects?).
In both Python 2 and 3, to get a list of sorted keys:
>>> d {'a': 1, 'c': 3, 'b': 2, 'd': 4} >>> sorted(d) ['a', 'b', 'c', 'd']Check for the existence of a key:
if 'b' in d: print(d['b']) else: print('not found') # output is: 2Python throws an error if the key doesn't exist:
>>> d['e'] Traceback (most recent call last): File "<stdin>", line 1, in <module> KeyError: 'e'A better and more succinct style to prevent this sort of key-not-found error is to use the dict object's get() method:
>>> d.get('b', 'notfound') 2 >>> d.get('e', 'notfound') 'notfound'To delete the key b and its value, you can use:
>>> del d['b']If you want to protect yourself against key-not-found errors, you can use the pop() method. To delete the key c and return its value or None if the key doesn't exist, use:
>>> d.pop('c', None) 3
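A close relative of get() is setdefault(), which returns the existing value if the key is present and otherwise inserts the default and returns it:
>>> d = {'a': 1, 'd': 4}
>>> d.setdefault('a', 0)   # key exists: returns the current value
1
>>> d.setdefault('e', 0)   # key missing: inserts it with the default and returns it
0
>>> d['e']
0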
Multi-dimensional DictsIn Python, you can create a multi-dimensional or multi-tiered dict which, as you can read on StackOverflow, is a "a dictionary where the values are themselves also dictionaries." The best way to do this is to use defaultdict from Python's collections. Suppose we have a text file, testfile.txt:
1 2 3
234 dfg wre
x4 few 4k
Our goal is to slurp this up into a dictionary such that the first two columns represent keys and the last one is the value—e.g.,
d['1']['2'] = '3'for the first row and so on.
Observe the following the script, testme.py, first using a regular old dict:
#!/usr/bin/env python

import sys

d = {}

with open(sys.argv[1], 'r') as f:
    for line in f:
        (c1, c2, c3) = line.split()
        d[c1][c2] = c3

print(d)
Running this script yields an error:
$ ./testme.py testfile.txt
Traceback (most recent call last):
  File "./testme.py", line 12, in <module>
    d[c1][c2] = c3
KeyError: '1'
because Python is upset we haven't initialized the multi-dimensional dict properly. Now let's use a defaultdict instead of an ordinary one:
#!/usr/bin/env python

import sys
from collections import defaultdict

d = defaultdict(dict)

with open(sys.argv[1], 'r') as f:
    for line in f:
        (c1, c2, c3) = line.split()
        d[c1][c2] = c3

print(d)
Now it works like a charm:
$ ./testme.py testfile.txt
defaultdict(<type 'dict'>, {'1': {'2': '3'}, '234': {'dfg': 'wre'}, 'x4': {'few': '4k'}})
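defaultdict is also handy for plain one-level dicts; defaultdict(int) saves you the key-existence check when counting things. A small sketch:
from collections import defaultdict

counts = defaultdict(int)   # missing keys start at 0
for letter in ['a', 'b', 'a', 'c', 'a']:
    counts[letter] += 1

print(counts['a'])   # 3
print(counts['b'])   # 1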
Defining Functions in PythonTo define a function use the keyword def, as in:
def myfunction():
    '''This function prints hello world'''
    print('hello world')
The first line of the function is a string which seems to be just floating there. This is called a docstring, and the Python website explains it here:
A docstring is a string literal that occurs as the first statement in a module, function, class, or method definition. Such a docstring becomes the __doc__ special attribute of that object.So if we call the help function, we get:
>>> help(myfunction) Help on function myfunction in module __main__: myfunction() This function prints hello worldWe'll also see this string if we examine the attribute myfunction.__doc__. While you don't have to write a docstring, it's best to use one in the interest of having well-documented code.
Functions, of course, can return a value as well as just doing something:
>>> def myfunction(x): return x + 1 >>> j = myfunction(3) >>> j 4Suppose we define a function to print the mean and variance of a list in a script called example.py:
import numpy as np

def print_basic_stats(x):
    '''Print the mean and variance of a list of numbers'''
    print('The mean is ' + str(np.mean(x)))
    print('The variance is ' + str(np.var(x)))
(numpy is a package with math & science functions) We can access this function on the Python command line as follows:
>>> import example >>> example.print_basic_stats([2,3,4]) The mean is 3.0 The variance is 0.666666666667We can access it in another script, say example2.py, in much the same way:
import example example.print_basic_stats([2,3,4])There's one wrinkle here. I'm tacitly assuming that I opened up the Python interpreter in the directory where example.py resides. If I don't, I'll get a "No module named example" error. We'll have a similar problem if example2.py and example.py are in different directories. How can we ensure Python finds our example.py script? To answer this question, we have to know all the paths Python searches. We can get this information using the sys module:
>>> import sys >>> sys.pathIf you execute this command, you'll see a list of various directories. As the docs say:
[sys.path is a] list of strings that specifies the search path for modules. Initialized from the environment variable PYTHONPATH, plus an installation-dependent default.So, if example.py is in the directory /some/path/python_examples, we can add this to Python's search path in example2.py as follows:
import sys sys.path.append('/some/path/python_examples') import example example.print_basic_stats([2,3,4])Now import example will work in all circumstances. As an aside, you'll notice that after you do this import a file called example.pyc will be created in the directory where example.py resides. This is your program compiled into bytecode and you can read about it on this Stackoverflow link.
List Comprehension, Anonymous (Lambda) Functions, MapList comprehension is a quick, Pythonic way to manipulate a list without invoking the full machinery of a for loop. Suppose we want to square the elements of the list a and save the result in another list, b. With a traditional for loop, that's:
>>> a = [1, 2, 3] >>> b = [] >>> for i in a: b.append(i*i) >>> b [1, 4, 9]With list comprehension, we can do it like this:
>>> a = [1, 2, 3] >>> b = [i*i for i in a] >>> b [1, 4, 9]You can read this as the list of i2 elements produced by the iteration:
for i in aAnother way to manipulate lists in Python is with the map fuction. In the last section, we saw how to define functions. Let's make a function to square a number:
>>> def squarefunction(x): return x*x >>> squarefunction(2) 4 >>> squarefunction(3) 9We can define a function without explicitly giving it a name using a lambda function. This is also known as an anonymous function. The simple square function is:
>>> lambda x: x*x <function <lambda> at 0x108bb1d70>What's neat is that we can save this function in a variable (again, without ever having given it a name):
>>> y = lambda x: x*x >>> y(2) 4 >>> y(3) 9The punchline is that these are all equivalent ways to square each element of our list:
>>> map(y, [1, 2, 3]) [1, 4, 9]
>>> map(lambda x: x*x, [1, 2, 3]) [1, 4, 9]
>>> map(squarefunction, [1, 2, 3]) [1, 4, 9]
>>> [i*i for i in [1, 2, 3]] [1, 4, 9]Question: How would you produce the following string with list comprehension:
my/path/1 my/path/2 my/path/3 my/path/4 my/path/5?
Answer:
" ".join(["my/path/" + str(j) for j in range(1,6)])
Example 1Here's an example of creating a subset list according to whether or not the orginal list's elements contain some string:
mystr = "1,2:3,2:4" # create a list from our string: mylist = mystr.split(","); # mylist is ['1', '2:3', '2:4']Now suppose we want our new list to contain only elements of the original list which contain a colon:
mylist_subset = [s for s in mylist if ":" in s] # mylist_subset is ['2:3', '2:4']
Example 2Here's another example combining list comprehension and lambda funtions, courtesy of my friend Ohad:
>>> from scipy import log2 >>> H = lambda x: [p*log2(p) for p in x if p>0] >>> H([1, 2, 3]) [0.0, 2.0, 4.7548875021634682] >>> H2 = lambda x: -sum([p*log2(p) for p in x if p>0]) >>> H2([1, 2, 3]) -6.7548875021634682
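Note that H2 computes the Shannon entropy only when its argument is a probability distribution (non-negative values summing to 1); with such an input the result makes more sense:
>>> H2([0.5, 0.25, 0.25])
1.5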
RegexRegex (regular expression) reminder, which I stole somewhere off the internet:
# \d     [0-9]            Any digit
# \D     [^0-9]           Any character not a digit
# \w     [0-9a-zA-Z_]     Any "word character"
# \W     [^0-9a-zA-Z_]    Any character not a word character
# \s     [ \t\n\r\f]      whitespace (space, tab, newline, carriage return, form feed)
# \S     [^ \t\n\r\f]     Any non-whitespace character
# *      Match 0 or more times
# +      Match 1 or more times
# ?      Match 1 or 0 times
# {n}    Match exactly n times
# {n,}   Match at least n times
# {n,m}  Match at least n but not more than m times
Python has the ability to grab bits of a regular expression and store them in a variable. For example:
(?P<my_variable>\w+)
would store anything that matched it (a string of "word" characters of at least length 1) in the variable my_variable. re is the module that deals with regular expression operations in Python.
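Here's a quick sketch of a named group in action (the group names are just for illustration):
>>> import re
>>> m = re.search(r'(?P<key>\w+)=(?P<value>\w+)', 'P1_F=44')
>>> m.group('key')
'P1_F'
>>> m.group('value')
'44'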
Example 1 (bioinformatics): grabbing sub-strings out of a string:
import re

line = 'gene_id "XLOC_033544"; transcript_id "TCONS_00092538";'
match = re.search(r'gene_id "(\S+)"; transcript_id "(\S+)";', line)
geneid = match.group(1)
tranid = match.group(2)
print(geneid, tranid)
# output is: ('XLOC_033544', 'TCONS_00092538')
Example 2 (bioinformatics): printing elements of a certain pattern:
>>> import re >>>>> for i in mystr.split(";"): ... if (re.search(r'(\w+)=(\w+)', i)): print(i) ... P1_F=44 P2_F=42 i=xyzThis prints the semi-colon delimited elements that fit the pattern blob=blob.
OOP Python (Object Oriented Programming in Python)As we've seen, Python is all about object-oriented-ness. For example:
x = 5instantiates x as a member of the integer class, and we can see all its attributes by calling dir(x).
Let's create our own super-simple "Circle" class—representing a circle—to see how Python's OOP machinery works. We'll put it in a file called cir.py:
#!/usr/bin/env python

import math

class Circle:
    '''A Circle Object'''

    def __init__(self, myradius):
        self.radius = myradius

    def getradius(self):
        return self.radius

    def getcircumference(self):
        return 2*math.pi*self.radius

    def getarea(self):
        return math.pi*self.radius*self.radius

    def setradius(self, r):
        self.radius = r
        print("You've set the radius to " + str(r))
Assuming cir.py is in Python's search path, we run:
>>> from cir import Circleto import our class into the python shell. Now that we have access to our Circle class, we can make a Circle object:
>>> c = Circle(1.0)We can call various get and set methods on our object:
>>> c.getarea() 3.141592653589793 >>> c.setradius(4.0) You've set the radius to 4.0 >>> c.getarea() 50.26548245743669We can print out the object, as is:
>>> print(c) <__main__.Circle instance at 0x106b1bc20>And we can see all the methods we're allowed to call on an object of type Circle using the dir() method:
>>> dir(c) ['__doc__', '__init__', '__module__', 'getarea', 'getcircumference', 'getradius', 'radius', 'setradius']
Plotting with matplotlibMatplotlib is an excellent tool for plotting. To use it import pylab:
import pylab

pylab.xlabel('x ax')
pylab.ylabel('y ax')
pylab.title('My Plot')
pylab.plot([1,2,3,4,5,6],[2,4,7,3,0,2])
pylab.savefig("/my/path/test.png")
Produces:
Importing Modules in Python and the NamespaceWhat's the difference between:
import pylaband:
from pylab import *?
The later floods every function from pylab into the namespace, so we can just type:
plot([1,2,3,4,5,6],[2,4,7,3,0,2])while the former requires us to use:
pylab.plot([1,2,3,4,5,6],[2,4,7,3,0,2])Needless to say, this is the safer choice because it won't risk collisions with homemade functions we've created. If we just want to use a specific function from a module, we can import it as:
from pylab import plotThis allows us to call plot rather than pylab.plot. Still, I prefer the verbose way because it makes your code more readable.
__name____name__ is a special variable in Python. In many Python scripts, you see the line:
if __name__ == "__main__":near the end. Look at the following three scripts, each organizing code in a different style.
In script1.py, we just spill our code out into the script:
#!/usr/bin/env python

print('print __name__: ')
print(__name__)
In script2.py, we sequester our code into a function and execute it if __name__ == "__main__":
#!/usr/bin/env python

def myfunction():
    """example"""
    print('print __name__: ')
    print(__name__)

if __name__ == "__main__":
    myfunction()
In script3.py, we only have the function:
#!/usr/bin/env python

def myfunction():
    """example"""
    print('print __name__: ')
    print(__name__)
$ ./script1.py print __name__: __main__
$ ./script2.py print __name__: __main__script3.py is just a function definition, so it does nothing:
$ ./script3.pyAlthough this produces no output, we could import this script into another script and use the function there.
What are the relative merits of script1.py vs script2.py? They both do the same thing, but script2.py is more modular—its guts are packaged into a function.
Now consider this script, which I'll call wrapper.py:
#!/usr/bin/env python import script2 script2.myfunction()Running this yields:
$ ./wrapper.py print __name__: script2We see that the variable __name__ changes its value, according to whether we're running script2.py as a stand-alone or importing it into another script. When we run the script from the command line, __name__ equals __main__. Otherwise, it doesn't. At last we see the advantage of a construction like the one in script2.py: the function myfunction() will only run if we execute this script from the command line. This gives us the convenient ability to use a script both as a stand-alone and as an imported module.
Reading ArgumentsLet myscript.py be:
#!/usr/bin/env python import sys print(sys.argv[0]) print(sys.argv[1])then:
$ ./myscript.py test ./myscript.py testsys.argv[0] is the name of the script itself; sys.argv[1] is the first user-passed argument, and so on.
Exit script if no arguments:
if (len(sys.argv) == 1): exit(0)
Example Reading Arguments: Using argparseThe argparse module provides a convenient way to get arguments into your python script. Here's the syntax, following the doc's example:
#!/usr/bin/env python

import argparse

# -------------------------------------

def main():
    '''Main block'''
    args = get_arg()
    if args.verbose:
        print("verbosity turned on")
    if args.vcf:
        print(args.vcf)
    if args.sample:
        print(args.sample)

def get_arg():
    '''Get Arguments'''
    parser = argparse.ArgumentParser(description="run pipeline")
    parser.add_argument("-v", "--verbose", action="store_true", help="verbose mode")
    parser.add_argument("-f", "--vcf", help="vcf input file")
    parser.add_argument("-s", "--sample", type=int, help="sample index")
    args = parser.parse_args()
    return args

# -------------------------------------

if __name__ == "__main__":
    main()
Another nice feature of argparse lets you take input either as an argument or as piped in from std:in:
parser.add_argument("-i", "--input", type=argparse.FileType('r'), default=sys.stdin, help="input file")
System CommandsRun a system command:
import subprocess

cmd = "ls " + "../"
proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# it's already started running with the Popen call;
# this ensures it finishes before we move on
proc.wait()
# print return code
print(proc.returncode)
# print stdout stderr tuple
proc.communicate()
For example:
>>> import subprocess
>>> cmd = "ls nonexistentfile"
>>> proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
>>> proc.wait()
2
>>> print(proc.returncode)
2
>>> proc.communicate()
('', 'ls: cannot access nonexistentfile: No such file or directory\n')
Making your Own Python PackagesLet's say we want to make a package to run system commands. We'll make a directory:
$ tree runsys/ runsys/ |-- __init__.py |-- __init__.pyc |-- runsys.py `-- runsys.pycwhere runsys/runsys.py is:
#!/usr/bin/env python

import subprocess

def run_cmd(cmd, bool_verbose, bool_getstdout):
    """Run system cmd"""

    # echo command
    if (bool_verbose):
        print(cmd)

    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    proc.wait()

    # return stdout
    if (bool_getstdout):
        return proc.communicate()[0].rstrip()
Now in some other Python script we could do this:
from runsys.runsys import run_cmd cmd="mkdir -p tmp" run_cmd(cmd, 1, 0)
Useful Commands from the os ModuleThe os module gives you the functionality of some basic unix commands in Python. Get it:
import osCheck for the existence of myfile:
os.path.isfile("myfile")Get the abs path of myfile:
os.path.abspath("myfile")Check if myfile is non-zero:
if ( os.path.getsize("myfile") > 0 ): ...Get the cwd:
cwdir = os.getcwd()Get the directory where your script itself resides:
script_dir = os.path.dirname(__file__)If we want to make sure we get an absolute file path which reads through symbolic links, we could even do this:
script_dir = os.path.dirname(os.path.realpath(__file__))Get the name of your script:
os.path.basename(__file__)
Hooking Python up to an Sqlite Database
Import the sqlite library:
import sqlite3
See the docs: Wiki: MySQL and SQLite.
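As a taste of what that looks like, here is a minimal sketch (the database file and table are made up for illustration):
import sqlite3

conn = sqlite3.connect('test.db')   # creates the file if it doesn't exist
c = conn.cursor()
c.execute('CREATE TABLE IF NOT EXISTS people (name text, age integer)')
c.execute('INSERT INTO people VALUES (?, ?)', ('Joe', 30))
conn.commit()
for row in c.execute('SELECT * FROM people'):
    print(row)
conn.close()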
Installing Python Packages with PipYou can use pip to install python modules. First, you need get pip itself. Install it, as described in the docs (assuming root access):
$ wget $ sudo python get-pip.pyTo install packages, the syntax is simple:
$ sudo pip install django $ sudo pip install numpyIf you don't have root access, you can install the modules locally with the --user flag. For example, to use pylab, install matplotlib:
$ pip install --user matplotlibThis installs stuff in the directory:
$HOME/.localSee all of your installed modules:
$ pip freeze
virtualenvFrom 100 Useful Unix Commands - virtualenv is a command line tool to keep a series of packages isolated in a virtual enviroment like aIt's common practice to annotate your project's python dependencies in a requirements.txt file:
(venv) $ pip freeze > requirements.txtThen somebody who, say, clones your project on Github can simply run:
$ virtualenv venv $ source venv/bin/activate (venv) $ pip install -r requirements.txtto get the dependencies.
If you were doing a Django project, everytime you wanted to start coding, the first order of business would be to turn on virtualenv and the last would be to turn it off. To exit virtualenv, type:
(venv) $ deactivate
python3
If you're using Python 3, this stackoverflow post tells us how to get a python3 virtual environment:
$ sudo pip install --upgrade virtualenv # ensure virtualenv is up to date $ virtualenv -p python3 venv
Python and WebdevOne of the many awesome things about Python is its prevalence on the web, as a language popular in backend frameworks like Django and Flask. I have some unpolished notes on Django and Flask here and here.
Simple Python CGI ScriptPrint out environmental variables:
#!/usr/bin/env python

import os,sys

print "Content-Type: text/html\n"
print("<html>")
print("hello world")
print("<br>")
print("<br>")
keys = os.environ.keys()
keys.sort()
for k in keys:
    print(k)
    print(" ")
    print(os.environ[k])
    print("<br>")
print("</html>")
In the web browser, this will output stuff like:
hello world ADDRFAM inet CONTENT_LENGTH CONTENT_TYPE DAEMON /usr/bin/uwsgi DOCUMENT_ROOT /path/to/html/root GATEWAY_INTERFACE CGI/1.1 HTTP_ACCEPT text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 HTTP_ACCEPT_ENCODING gzip, deflate HTTP_ACCEPT_LANGUAGE en-us HTTP_CONNECTION keep-alive HTTP_COOKIE __unam=ac294eb-13ee2414bd8-5d900a97-6; HTTP_HOST myuniversity.edu HTTP_USER_AGENT Mozilla/5.0 (Macintosh; Intel Mac OS X 10_5_8) \ AppleWebKit/534.50.2 (KHTML, like Gecko) Version/5.0.6 Safari/533.22.3 HTTP_X_FORWARDED_FOR 156.145.29.38 PATH /usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin PIDFILE /var/run/uwsgi.pid
The IPython Shell and IPython Notebook GUIIPython is an awesome program, which provides a much better python shell with stuff like auto-complete, nice help, easy functionality for system commands and more.
To invoke the IPython shell:
$ ipythonOne way to get IPython is in the Anaconda Python Distribution which is described on the Anaconda website as a "Completely free enterprise-ready Python distribution for large-scale data processing, predictive analytics, and scientific computing."
Another nice IPython feature is the ability to run a Python shell in your web browser:
$ ipython notebookRead about it here: Jupyter.
Example Problem: Re-format a Text File of DataHere's the problem, borrowing from The Unix Intro. Take a file, example_data.txt, that looks like this:
,height,weight,salary,age
1,106,111,111300,62
2,124,91,79740,40
3,127,176,15500,46
And make it look like this:
1 height 106
2 height 124
3 height 127
1 weight 111
2 weight 91
3 weight 176
1 salary 111300
2 salary 79740
3 salary 15500
1 age 62
2 age 40
3 age 46
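Here is one way to do it, sketched as a small script that reads the file name from the command line and prints tab-separated output:
#!/usr/bin/env python

import sys

with open(sys.argv[1], 'r') as f:
    header = f.readline().rstrip().split(',')[1:]   # ['height', 'weight', 'salary', 'age']
    rows = [line.rstrip().split(',') for line in f if line.strip()]

# print one column at a time: id, column name, value
for j, field in enumerate(header):
    for row in rows:
        print('\t'.join([row[0], field, row[j + 1]]))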
Example Problem with Nested Dicts: Making a Multi-Dimensional HashI was recently given the following problem:
Consider a function incr_dict, which takes two arguments, which behaves like this in Python:
>>> dct = {} >>> incr_dict(dct, ('a', 'b', 'c')) >>> dct {'a': {'b': {'c': 1}}} >>> incr_dict(dct, ('a', 'b', 'c')) >>> dct {'a': {'b': {'c': 2}}} >>> incr_dict(dct, ('a', 'b', 'f')) >>> dct {'a': {'b': {'c': 2, 'f': 1}}} >>> incr_dict(dct, ('a', 'r', 'f')) >>> dct {'a': {'r': {'f': 1}, 'b': {'c': 2, 'f': 1}}} >>> incr_dict(dct, ('a', 'z')) >>> dct {'a': {'r': {'f': 1}, 'b': {'c': 2,'f': 1}, 'z': 1}}
incr_dict(dct, ('a', 'b', 'c')) is conceptually like:
dct['a']['b']['c'] += 1
except that it creates any necessary intermediate and leaf nodes.
Here's my solution, after reading this Stackoverflow post:
debug = 0  # boolean (0 = quiet, 1 = verbose)

# from
def getFromDict(dataDict, mapList):
    '''get a given value from a nested dictionary from keys (provided as a list)'''
    for k in mapList:
        dataDict = dataDict[k]
    return dataDict

# from
def setInDict(dataDict, mapList, value):
    '''set a given value in a nested dictionary for keys (provided as a list)'''
    for k in mapList[:-1]:
        dataDict = dataDict[k]
    dataDict[mapList[-1]] = value

def incr_dict(dataDict, mapList):
    '''increment a given value in a nested dictionary for keys (provided as a list)
    or, if entry doesnt exist, create and set to 1'''
    if (debug):
        print("starting list " + str(mapList))
        print("starting dict " + str(dataDict))
    # got to 1 before end
    for k in mapList[:-1]:
        if (debug):
            print("list elt " + str(k))
            print("pre dict " + str(dataDict))
        # if key in dataDict, change dataDict to point to inner dict
        if k in dataDict:
            dataDict = dataDict[k]
        # else create key and make value empty dict, then change dataDict to point to empty dict
        else:
            dataDict[k] = {}
            dataDict = dataDict[k]
        if (debug):
            print("post dict " + str(dataDict))
    # now deal w last elt
    # if exists, increment
    if mapList[-1] in dataDict:
        dataDict[mapList[-1]] += 1
    # else create and set value to 1
    else:
        dataDict[mapList[-1]] = 1

def main():
    # initialize empty dict
    dct = {}
    incr_dict(dct, ('a', 'b', 'c'))
    print("result")
    print(dct)
    incr_dict(dct, ('a', 'b', 'c'))
    print("result")
    print(dct)
    incr_dict(dct, ('a', 'b', 'f'))
    print("result")
    print(dct)
    incr_dict(dct, ('a', 'r', 'f'))
    print("result")
    print(dct)
    incr_dict(dct, ('a', 'z'))
    print("result")
    print(dct)
    # test long tuple
    incr_dict(dct, ('a', 'x', 'f', 'b', 'c', 'd', 'e', 'f', 'g', 'h'))
    print("result")
    print(dct)
    incr_dict(dct, ('a', 'x', 'f', 'b', 'c', 'd', 'e', 'f', 'g', 'i'))
    print("result")
    print(dct)

if __name__ == '__main__':
    main()
Easter EggsRead The Zen of Python by Tim Peters:
>>> import thisOpen up this XKCD cartoon (Python 3+):
>>> import antigravity
Continue of Bridges Construction Site (1/3): Asynchronous – synchronous bridge.
———————————————-
Hello again, colleagues!
Let’s continue to review solutions for the problem of integration in heterogeneous interfaces landscape using bridges.
Uff.. most complex part of this post is finished! 🙂
——————-
See also:
Bridges Construction Site (1/3): Asynchronous – synchronous bridge.
Bridges Construction Site (3/3): SAP PI bridges – “exotic” and recommendations.
——————-
- 3. Sync-Async Bridge and its modules – RequestOnewayBean, WaitResponseBean and NotifyResponseBean.
- 3.1 Synchronous-asynchronous bridge with modules in the Sender Communication Channel.
- 3.2 Example of synchronous – asynchronous bridge with modules in Sender Communication Channel.
- 3.3 Synchronous – asynchronous bridge with modules in Receiver Communication Channel.
- 3.4 Example : Synchronous – asynchronous bridge with modules in Receiver Communication Channel.
3. Sync-Async Bridge and its modules – RequestOnewayBean, WaitResponseBean and NotifyResponseBean.
Let’s think about a new problem: we are running a process in one system and want to send some information to an external system. It’s simple. But we also need immediate confirmation of successful reception (or an error message).
Our external system works in an asynchronous manner only, i.e. it takes the input through one service (it can be a file, asynchronous SOAP, SQL, HTTP, etc.) and sends an acknowledgment through another service after a while.
Let me remind you the scheme of such interfaces:
Figure 1 : Synchronous – Asynchronous Bridge
The sync-async bridge uses the following modules:
• RequestOnewayBean is responsible for converting synchronous message to asynchronous.
• WaitResponseBean – leaves the synchronous communication channel open and waits for the response. When an asynchronous response is received, it converts it to a synchronous message and sends it back to the original system. If the answer does not arrive within the specified time, it sends an error message to the source system.
• NotifyResponseBean is used instead of the standard adapter module in the asynchronous response sender channel to transfer the response message directly to WaitResponseBean ( bypassing the processing in the messaging subsystem of PI).
RequestOnewayBean and WaitResponseBean can be used in a sender communication channel or in a receiver communication channel.
RequestOnewayBean must be inserted before the standard adapter module (usually CallSAPAdapter); WaitResponseBean – after.
To use the modules in a Communication Channel, go to the «Module» tab and enter the following values:

«Module Key» can be anything – as long as it is unique and the keys are hard to confuse – it will help us later when determining the parameters of the modules.

Further settings depend on the placement of the modules: in a sender communication channel or in a receiver communication channel.
But we have one more problem connected to synchronous-asynchronous bridge.
Imagine that several messages are going through the bridge simultaneously. So, at some point, several requests are waiting for responses. A response arrives from the target system – but how do we know which of the pending channels must be used to return that specific answer?
Figure 2 : the task of determining the recipient of asynchronous response
Here comes a new entity – a correlation. A correlation is an identifier, the "key", which maintains the correspondence between the response and the waiting request.

The term "correlation" can also be found in the Sync-Async Bridge in ccBPM – there a correlation may use not just one but a whole set of identifiers.

So the task is to "remember" some ID while sending the asynchronous message and to check this ID when receiving the answer:
Figure 3: Determination of the recipient using the asynchronous response correlation
The correlationID parameter (defined in the PI message header) is used to correlate a response from the asynchronous system. The value in correlationID must be equal to the message GUID of the synchronous request message. This parameter belongs to the group of Adapter-Specific Message Attributes and can be changed using the standard module DynamicConfigurationBean.

Another example in the standard SAP documentation («Configuring the Async / Sync Bridge Using the JMS Adapter») correlates messages using standard settings of the JMS communication channels (the "Set PI Conversation ID" parameter).

In other kinds of adapters a similar standard parameter is missing and you have to make some tweaks – in particular, save the message GUID and fill the correlationID parameter using PI or with the help of the target system. In the example below (3.2), you can find one of the ways to work with correlations in the file adapter.
3.1 Synchronous-asynchronous bridge with modules in the Sender Communication Channel.
If we use Sender Communication Channel for bridge’s modules, the scheme of message processing would be next:
Figure 4: The logic of the synchronous-asynchronous bridge with modules in the Sender CC.
- Receiving a synchronous message from the external system via the communication channel and passing it to the RequestOnewayBean module. The module switches the message type from synchronous to asynchronous by changing the message header.
- Depending on the passThrough parameter, the module passes the message down the chain of modules or sends it directly to the message processing subsystem of PI (skipping step 3).
- The message is processed by the standard adapter module and transferred to the PI message processing subsystem.
- The request is sent to the asynchronous communication channel.
- Asynchronous call to the external system.
- The response from the external system arrives at the asynchronous sender communication channel.
- The response is passed to NotifyResponseBean, which transmits it directly to WaitResponseBean, bypassing the PI messaging subsystem.
- WaitResponseBean changes the message type from asynchronous to synchronous and returns it to the source system.
Module parameters for RequestOnewayBean when placing it in the Sender Communication Channel:
Module parameters for WaitResponseBean when placing it in the Sender Communication Channel:
In this configuration, the NotifyResponseBean module must be located in the receiving communication channel for the response from the asynchronous system.
The module must be used instead of the standard adapter module.
NotifyResponseBean has the following module parameters:
Once we set up all the necessary modules, we need to configure the request and response correlation.

To do so, we need to keep the message GUID somehow and pass it to the target system. The system, in turn, should return a response with the GUID in some part of the message. When the answer arrives, we need to fill the parameter in the message header responsible for the correlation – correlationId (Dynamic Configuration header section) – with the stored GUID value.
Solution can be different for each of the adapter types. I’ll show you one of possible file adapter solution in the example below.
After setting up the correlation the only thing to do is to create routing rules linking source synchronous interface to the target asynchronous interface.
For the routing of the response it is sufficient to create a Sender Agreement only – everything else will be completed by the NotifyResponseBean module.
3.2 Example of synchronous – asynchronous bridge with modules in Sender Communication Channel.
Let’s take a look to the specific example.
Business case: we have an external client who gives us information about new books available in the store using a synchronous web service.

PI should receive this information and send it to our database. This DB is pretty old, so its interface is based on file exchange. The status of the operation is also returned as a file, so PI should pick it up and return it to the client.
Figure 5: Example diagram.
To implement this interface, we need to create the following development objects:
Figure 6: Development objects in the Integration Repository
Please note the operation interfaces mapping – it is synchronous, but "one-way" – it works in one direction only. To create it we need to use a "trick" with a temporary change of the receiver interface type, as was done in the mapping from the previous article about asynchronous-synchronous bridges.
After that, create all necessary routing objects in the Integration Builder:
Figure 7: Configuration objects
The question may arise – how can the routing of the response be done with a Sender Agreement only?

You can configure a full routing rule for the answer or set up response routing via an Integrated Configuration. But in this particular case it does not make sense – the answer will still be transferred to the NotifyResponseBean module and from there directly to the waiting WaitResponseBean module, bypassing the message processing subsystem of PI.
Now let’s put the modules into place.
Synchronous SOAP communication channel:
Figure 8: Synchronous SOAP Sender communication channel
Please don’t forget to set Quality Of Service = Best Effort, ie identify the channel as synchronous.
Then add modules RequestOnewayBean and WaitResponseBean:
Fig. 9: Modules in SOAP Sender synchronous communication channel
Then set up a file communication channel to submit a request to the database:
Fig.: 10: Asynchronous File communication channel for DB request.
Please note the configuration of file name:
The filename is formed from a variable that holds the Message ID value. Filename = message GUID without any extension.

Why do this? This way, in our example, we store the GUID of the request message to use for correlation later. In reality, you can do it the same way, or you can pass the GUID in the message body. The main thing is that the target system must get it, save it, and return it to PI in an accessible manner along with the response.
Next step: set up a file communication channel for the response:
Fig. 11: File communication channel to get a response from the target system.
We assume that the system produces a response in another local directory on the PI-server as an XML-file.
The name of the file must be the same GUID as request.
Modules configuration:
Fig.: 12: Modules in asynchronous file sender comm. channel
The standard DynamicConfigurationBean module writes the file name to the correlationId parameter in the PI message header – i.e. it fills the correlation for further processing of the response by the bridge.

We use the NotifyResponseBean module instead of the standard adapter module CallSAPAdapter. NotifyResponseBean forwards the response directly to the waiting WaitResponseBean module. If nobody expects an answer with that correlation value, an error is written to the log.
Well, that’s all settings now, time to check our bridge.
Generate a WSDL for the web service (use the context menu on the Integrated Configuration -> Display WSDL). Then use the WSDL to form and send a SOAP request with any suitable SOAP tool (I used the freeware tool SOAPUI).
Fig.: 13: Sending a synchronous request from SOAPUI
After sending, a file appears in the directory with the message GUID as its name and the request data inside.
Fig.: 14: File request received by the target system
Now let’s simulate the recipient DB system.
Edit file – put the response data in it.
Fig.: 15: File with response
Move it into the directory books/status. If we were quick enough, the file will be processed by the PI and the response in SOAPUI will look like this:
Fig.: 16: Response successfully received by SOAP-client
Synchronous-asynchronous bridge successfully built.
All fine, but ..
Just imagine: our customer came and asked us to add one more field to the answer – timestamp.
We don’t have a response mapping, and the target system also “can’t” provide the response in the new format.
What should we do?
Fig.17: Additional development objects in Integration Repository
Also define the mapping, which will convert the response from the BS_ExtDB system into the new response format for BS_WebClient.
Figure 18: Add timestamp to the response
Now configure the routing in Integration Directory:
• create a new “pseudo” File communication channel CC_Dummy_file_receiver and move bridge’s modules DynamicConfigurationBean and NotifyResponseBean there, with all configuration parameters;
• restore the module CallSAPAdapter in communication channel SS_ExtDB_FileSender;
• remove Sender Agreement for SS_ExtDB_FileSender;
• create and configure the Integrated Configuration for response routing from BS_ExtDB to BS_WebClient.
Fig.: 19: Additional configuration objects
Check our bridge as before and we must get the right answer:
Fig.: 20: Request and response with timestamp.
Ok, the customer is satisfied now, and our synchronous-asynchronous bridge works perfectly.
Congratulations! 🙂
3.3 Synchronous – asynchronous bridge with modules in Receiver Communication Channel.
There is nothing specific in this version of the bridge compared to the previous one. There are the same three modules: two of them in the asynchronous Receiver Communication Channel, transmitting the request to the target system; one more in the asynchronous Sender Communication Channel, processing the response.
Figure 21 : The logic of the synchronous-asynchronous bridge with modules in the receiver communication channel.
- Receiving the synchronous request from the external system.
- Transferring the message to the PI messaging system.
- Message processing in PI and transfer to a synchronous communication channel – to the RequestOnewayBean module. The module switches the message processing type from synchronous to asynchronous by changing the message header.
- The message goes to the standard adapter module.
- Asynchronous call to the external system.
- Asynchronous response from the external system. NotifyResponseBean processes it instead of the standard adapter module.
- The message is transferred to the WaitResponseBean module, and the correlation is checked.
- The module switches the message processing type from asynchronous to synchronous and transfers the message to the PI messaging subsystem.
- The response message goes to the synchronous communication channel.
- The response is returned to the external system.
The module options are independent of the placement.

Please note: the SAP help contains different information – there are more options for the modules when they are placed in a Receiver Channel. I have not found any meaningful application of these parameters.
RequestOnewayBean parameters:
WaitResponseBean parameters:
NotifyResponseBean parameters:
You should also take care of the correlation.
3.4 Example : Synchronous – asynchronous bridge with modules in Receiver Communication Channel.
Let’s rebuild our example from 3.2.
Figure 22: Development objects in Integration Repository
This time we need two synchronous interfaces – SI_WebClient_SaveBook_sync and SI_ExtDB_SaveBook_sync – as well as a mapping between them, OM_SaveBook_WebClient_to_ExtDB. We also need the asynchronous interface SI_ExtDB_Status (it should be based on the same message type as the response in SI_ExtDB_SaveBook_sync) – we will use it for the response from the asynchronous system.
Configure routing in Integration Directory as follows:
Fig.: 23: Configuration settings, Integration Directory
Remove all modules from SOAP sender communication channel and move them to file receiver communication channel:
Fig.: 24: asynchronous file receiver communication channel for request.
Fig.: 25: asynchronous file receiver communication channel for response.
Reconfigure the Integrated Configuration – we have new communication channels for sender and receiver, a new interface and mapping; the only thing kept from the previous example is the target system.

To get the file with the response, we only need the Sender Agreement connected to the CC_ExtDB_File_Sender_WithModule channel and the SI_ExtDB_Status interface.

Start a SOAP test tool (SOAPUI in my case) and send a test request. You should get a file with the request in the file system.
Replace its content with the answer, move it to the folder with responses, wait for PI processing – and you should get following picture:
Fig.: 26: bridge at work – request and response in SOAPUI.
If your picture looks the same – congratulations! You have just built live sync-async bridge! 🙂
—————————————————————–
See also:
Bridges Construction Site (1/3): Asynchronous – synchronous bridge.
Bridges Construction Site (3/3): SAP PI bridges – “exotic” and recommendations.
—————————————————————–
With best regards,
Alexey Petrov
Thanks Alexey petrov . well explained . Thanks for sharing …!!!
Hi Alexey,
Nice Blog!! I have a query, appreciate if you could help.
Is there anyway we could set correlation in response async message if Receiver is ABAP Proxy in SAP ECC. I know SAP ECC is synchronous system, but for PPO we went with approach of having ECC as asynchronous system. Our scenario is SOAP sender (sync) – ABAP Proxy(SOAP) as Receiver.
Thanks,
Miten
Hi Miten,
try to go this way:
1) Get message ID in inbound proxy
or–xi-message-header
2) Save it somethere.
3) Pass it back via client proxy – use additional field in the message structure cause it’s not possible to change message header in client proxy.
4) Use mapping or Dynamic Configuration Bean to put saved ID into correlationId.
With best regards,
Alex
Hi Alexey,
We mapped Message id to Inbound Proxy. In Response mapping we assigned Message id to Adapter attribute and then used Dynamic Configuration Bean in Receiver Agreement of Response.
It works well. Thank you.
Regards,
Miten
Hi Miten,
you are welcome!
Thanks for sharing the result. 🙂
With best regards,
Alex
Hi Alexey,
excellent series of articles, thank you.
Please add the link to the third blog in this one:
Bridges Construction Site (1/3): Asynchronous – synchronous bridge.
Best regards,
Andy.
Hi Andy,
thanks for the advice!
I’ve added cross-links to all 3 posts.
With best regards,
Alex
Alexey petrov
Hello, very interesting your article but I have a question another way to get the correlation ID for the sender channel.
Thanks
Hello Erick,
the main task is to save message GUID and fill CorrelationID parameter in the message header. This can be done in the sender channel too.
Most adapters (except JMS) have no standard tools for it.
So this would be custom solution – with standard module DynamicConfigurationBean, own module, message mapping or something else.
In this particular case with file adapter – if I need to use another way – I would think about saving the GUID into the message body.
With best regards,
Alex
Hello Alexey Pitroff.
Please could you tell me how you would use the message ID in the message body, this point is not clear to me. Please explain to me your example.
Thanks.
Hello Alexey,
I also use Sync Async Bridge and did the trick for the OM. Sync SOAP Sender, Async IDoc Receiver (and some hidden stuffs to do the correlation), but I’m having problems with SAP PI’s processing, cause it forces to do a Response Mapping. Other than that, the Sync SI, when transported, does not overwrite the Async settings because of conflicts (SAP Note: 1580750). Did you manage to solve that issue?
This is already resolved.
Great information Alexey Pitroff.
I wonder if you anyone can shed some light on how to make this work with
SOAP ABAP Proxy (Sync) – > PI -> SOAP Webservice (Async)
I’m using the Get Weather operation at
This all works great synchronously.
I’ve followed the steps above but most blogs all use the FILE adapter as it’s easier to demonstrate but with a SOAP receiver the message get sent but because I’ve used the sync/async pattern no response comes back and it times out.
I suppose the issue is SOAP async means there’s no response unless there is some sort of callback mechanism that could point it back to a SOAP Sender….but that’s me thinking out loud.
Can you or anyone give some guidance on how to realise this scenario…get the response form the SOAP service that works synchronously to work asynchronously as part of the sync/async pattern?
Thanks.
Hi Peter,
Good day! Is there another SOAP Web Service that responds with your ‘Request Message’?
Request Message = ABAP Proxy to SOAP
Response Message = SOAP to ABAP Proxy
If yes then you should set an Async – Sync Bridge with your request flow’s receiver channel, set the Sender Interface and Namespace of the message expected to be received there. After that, create an ICo or Classical Configuration using those sender interface and namespace with Virtual Receiver – your virtual receiver system should be the sender system of request flow.
If not, I don’t think you need to use S/A bridge since its designed to have the response flow. Check with your ABAP team if they need a response and that’s when you’ll need Sync / Async Brdige, if not, change the Service Interface to Async and let your ABAP team regenerate the proxy on their side.
Best Regards,
Nica
Thanks Genica Bocalan for the prompt response!
” Is there another SOAP Web Service that responds with your ‘Request Message’?” – No there isn’t sadly. I do like your design though. If we did this though you’d never get the response back to the ABAP system would you as it would be an async proxy?
“Check with your ABAP team if they need a response and that’s when you’ll need Sync / Async Bridge” – Yes the response from the service is required as it needs to be acted upon.
So I’m still stuck with how to get the response back from the service and I’m wondering if the service itself doesn’t fit the scenario. Would a true async web service have some call back mechanism where it would know to direct a response to a PI Sender channel??
Thanks,
Peter
Hello Peter,
There will be no response if the receiver was an Async SOAP interface (it has HTTP 200 but only a blank soap envelope) but you can do something like this link below.
The only problem if you’re going to align this reference with your design is, whenever PI splits the message into more than 1 message (which will be like Sender Proxy to Receiver SOAP and Receiver File setup), the message ID changes which will also be the problem of closing the S/A Bridge – because you need to send back the ‘correlation ID’ for PI to identify that response message was the pair of a request message. So better create a file somewhere in PI’s file directory in the mapping part of your build, don’t add another target message in the ICo.
This is case to case, you can use this for ‘acknowledgement’ only, but if you need the receiver’s response, then they should design another interface for that, or better yet, send the response back to the original SOAP interface.
This is not the best practice and this poses performance issues, so if you still can change the design, propose Async Proxies. Async Outbound Request and Async Inbound Response. In PI’s request design, Async Outbound Proxy to Async SOAP and Async Inbound Proxy.
SYNC ASYN Bridge but the receiver is IDOC
How do you differentiate an instance member from a class member having the same name in Swift 3? What's usually working before now produces an error in Xcode 8 beta 5:
"static member 'textColor' cannot be used on instance of type UITag"
public class UITag : UILabel {
static var textColor = UIColor.white
override public init(frame: CGRect) {
super.init(frame: frame)
textColor = UITag.textColor /* error: static member cannot be used on instance of type UITag */
text = " not set "
}
}
This is a strange error, and we could discuss whether it is a compiler bug that it is actually allowed to shadow a non-static variable with a static variable. However, note that it is definitely bad code to have two properties with the same name, one static and one not static, because the last one will shadow the previous one. Probably
defaultTextColor would be a better name.
A simple workaround is to use:
super.textColor = ...
This project is being moved to github. In the process, it is being expanded by a merge with two other libaries: libagf and ctraj since the both of them use the "libpetey" library as a dependency. You can find the new project here:
Note: the NEWS file concatenates changes for both revision 289 and 312 so that they both fall under revision 312. Note that there are very few changes in r. 312. The NEWS file will be corrected in subsequent releases.
New in this version:
- improvements in sparse_calc sparse matrix calculator
- refinements to supernewton root-finder
- namespaces
Check the NEWS file for more details.
This version has been released in conjunction with a new version of ctraj. The date calculator is now included in the makefile and installation and has been refined somewhat. It is needed in ctraj. The sparse matrix calculator has been vastly expanded and refined. It is extremely useful for working with output from the ctraj tracer simulation. The Gaspard-Rice simulator can also output sparse matrices representing the Laplace equation corresponding to the system.
Numerous bug fixes with this latest revision. A bare-bones sparse calculator program is now working, though it needs a load of testing and many important features are missing.
New in this version:
- subroutine to parse command options that's easier to use than getopt
- utilities for eigenvalue decomposition of sparse matrices
- subroutine to read in whole ascii files
- and much more...
libpetey has been moved from the libagf distribution to msci. It has also been migrated to svn. To better support the ctraj project, it now includes libraries for sparse matrices and specialized datasets.
Chapter 15 - Developing: Applications - Migrating Python
On This Page
Introduction and Goals
Scenario 1: Interoperating Python on UNIX with SQL Server
Scenario 2: Port the Python Application to Win32
Introduction and Goals
This chapter contains a detailed discussion of changes that must be made to existing Python applications to work with Microsoft® SQL Server. At the conclusion of this chapter, the Python application should be able to connect successfully to the SQL Server database that was migrated from Oracle. The solution can then be tested.

The migration strategies considered include:

Port to the Microsoft Win32® Platform

Quick port using the Microsoft Windows® Services for UNIX 3.5 platform
Python is a portable, platform-independent, and general-purpose language with support for writing database client applications. Database capabilities are modularized, and they can be augmented through the use of additional APIs.
Python database modules that are based on the Database API (DB-API) specification can be used to access relational databases, including SQL Server and Oracle. As long as the database module used to access the Oracle database adheres to the DB-API specification, porting to SQL Server is straightforward and can be done with minimal changes. If the existing database drivers do not meet DB-API specifications, the driver will need to be replaced and configured.
Because of the cross-platform capabilities and the use of modular database drivers, some of the migration strategies are more feasible than others. For example, because Python can be ported to Windows, there is no need to rewrite the application in the .NET framework or for the Win32 environment. Also, because the application can run within the Windows environment, a quick port using Windows Services for UNIX is not necessary in most cases.
Based on the available migration strategies, two scenarios can be developed to migrate the Python application. These scenarios include:
Scenario 1: Interoperating Python on UNIX with SQL Server
If the business requirements do not include eliminating the UNIX environment, an interoperation strategy can be implemented quickly. Few changes need to be made to the source code, and installing a new driver allows the Python application to connect to a SQL Server database. Interoperation can also be used as an interim step, if the migration is performed in phases.
Scenario 2: Port the Python Application to Win32
Python applications can also be ported to run natively on the Windows platform. As with interoperation, few changes need to be made to the source code. These changes are usually related to connectivity issues.
Note: If your Python applications use UNIX system calls extensively (such as frequently calling exec), consider the quick port using Windows Services for UNIX. The following components are needed to port Python applications to SFU/SQL Server:
A port of Python for interix has been made available by Interop Systems and can be downloaded from.
A connectivity driver to the SQL Server database. This is provided by the port of FreeTDS on Interix and is downloadable from Interop Systems at. FreeTDS provides two connectivity options for the Python applications to connect to the SQL Server database. One is a library called CTlib, The other is an ODBC driver.
If you choose the CTLib option for your database connectivity driver, you will need the Sybase Connectivity Module for your Python application to make the appropriate calls to CTLib. The source code for this Sybase Connectivity Module can be obtained from Object Craft at. This solution is not available pre-compiled from Interop Systems but is a simple port for a UNIX developer.

If you choose ODBC for your database connectivity, you will need an additional interface for the Python application to make ODBC calls. This is available as a commercial package called mxODBC and can be obtained from.
If you use the ODBC driver, you will also need an ODBC driver manager. Two different ODBC driver managers iODBC and unixODBC are available for Windows Services for UNIX from.
This technology option (porting Python applications to SFU), however, has not been fully tested as a part of development of this solution and therefore has not been detailed further.
Scenario 1: Interoperating Python on UNIX with SQL Server
Two common modules used for connecting a Python application to an Oracle database in the UNIX environment are:
DCOracle2
mxODBC
DCOracle2 () does not support SQL Server, but mxODBC () supports the ODBC interface and can be used with SQL Server. If the existing application uses DCOracle2, this interface will need to be replaced to allow connectivity with SQL Server.
Because ODBC is not database-specific, only minor modifications are needed to connect to the migrated database.
Case 1: Interoperating Using the mxODBC Module
To interoperate a Python-based application using DCOracle2 module with SQL Server, follow these steps:
Note that these steps assume that a DCOracle2 module will be replaced with the mxODBC module.
Install the ODBC driver.
In order for mxODBC to connect to SQL Server, an ODBC driver manager must be installed. Two available driver managers are:
iODBC, available at.
DataDirect ODBC for UNIX Platforms, available at. For detailed installation instructions, refer to.
Install the mxODBC module based on the Python version being used.
For installation instructions, refer to.
Configure the ODBC driver to work with mxODBC through the driver manager.
Create a SQL Server data source.
To function, ODBC needs a data source to connect to the database. The Data Source Name (DSN) is generally defined in an odbc.ini file that is used by the driver manager. The DSN is used by the driver manager to load the ODBC driver.
iODBC offers a graphical user interface to set up the DSN. Complete instructions are available at.
The DSN can also be configured by manually modifying the odbc.ini file. The following example file uses a SQL Server DSN.
[ODBC Data Sources]
SS_HR_DB=Sample MS SQLServer

[SS_HR_DB]
Driver=/opt/odbc/lib/mxODBC.so
Description=SQL Server 2000
Database=hrapp
LogonID=daveb
Password=cougar
Address=win2kdbp1,1433
Test the connectivity.
iODBC contains a utility named odbctest which can be used to test DSN entries and interact with the database by connecting and issuing queries directly without any code.
The mxODBC package consists of a test script that can also be used to verify the database connectivity. To perform the test, execute the following command:
python mx/ODBC/Misc/test.py
Modify the application to use mxODBC instead of the existing DCOracle2 API.
Note The examples in this section use the iODBC driver. Syntax may vary slightly based on the ODBC driver that is used in your solution.
There are several, minor changes that may need to be made to the existing Python application to allow connectivity with the SQL Server database. These common issues that should be modified include:
Import statements
The import statement, generally found at the head of the application code, should be modified to include the new database module.
If the application currently uses the DCOracle2 module, the import statement is as follows:
import DCOracle2
This entry should be modified to allow for the mxODBC module. The import statement should be changed as follows:
import mx.ODBC.iODBC
Connection objects
The connection object will also need to be modified to point to the new data source. The current entry should be similar to the following example:
db = DCOracle2.Connect('scott/tiger@orahrdb')
This entry should be modified to reflect the new data source. In the following example, the DriverConnect() API is used to pass additional information to SQL Server.
db = mx.ODBC.iODBC.DriverConnect('DSN=sqlserverDSN;UID=scott;PWD=tiger')
This iODBC string will have to be modified based on the driver manager in use.
There are other connection methods, such as ODBC() and connect(), which are also supported by mxODBC. The advantage with DriverConnect() is that it allows more configuration information, such as the log file name, to be passed as part of the connection string.
Cursor Execution
Change the cursor object execute() and executemany() method calls to use a question mark (?) for parameters instead of numeric parameters (:1) or named parameters (:column_name). The following example should be similar to your existing Python code being used with Oracle.
c = db.cursor()
custId = "ALFKI"
c.execute("Select * from customers where customerID = :1", custId)
...
id = "123"
name = "Bill"
desc = "CoffeeShop"
c.execute("insert into categories " + \
          " (categoryid, categoryName, description)" + \
          "values (:cid,:cname,:cdesc)" \
          , cid=id, cname=name, cdesc=desc \
          )
The code should be modified for use with SQL Server. In the following code, note that numeric and named parameters have been replaced.
c = db.cursor()
custId = "ALFKI"
c.execute("Select * from customers where customerID = ?", (custId,))
...
id = "123"
name = "Bill"
desc = "CoffeeShop"
c.execute("insert into categories" + \
          " (categoryid, categoryName, description)" + \
          "values (?,?,?)" \
          , (id, name, desc) \
          )
Cursors — multiple executions
The executemany() function is used with DCOracle2 to perform multiple inserts with a single call by passing a list of parameters. For example,
cursor.executemany("insert into inventoryinfo values (:1,:2)",\ [('Bill','Dallas'),('Bob','Atlanta'),('David','Chicago'), \ ('Ann','Miami'),('Fin','Detroit'),('Paul','Dallas'),\ ('Scott','Dallas'),('Mike','Dallas'),('Leigh','Dallas')])
To use multiple cursors on a connection to SQL Server, use the setconnectoption() method to set SQL.CURSOR_TYPE to SQL.CURSOR_DYNAMIC as shown below, and replace the parameter specification style in the executemany() call:
# Connect to the database
db = mx.ODBC.iODBC.DriverConnect('DSN=sqlserverDSN;UID=scott;PWD=tiger')
# Set SQL Server connection option
db.setconnectoption(SQL.CURSOR_TYPE, SQL.CURSOR_DYNAMIC)
cursor = db.cursor()
cursor.executemany("insert into inventoryinfo values (?,?)", \
    [('Bill','Dallas'),('Bob','Atlanta'),('David','Chicago'), \
     ('Ann','Miami'),('Fin','Detroit'),('Paul','Dallas'), \
     ('Scott','Dallas'),('Mike','Dallas'),('Leigh','Dallas')])
Stored Procedures
The DB API 2.0 callProc() method for executing stored procedures is not implemented in mxODBC. To overcome this deficiency, a stored procedure can be executed similar to a SQL statement using one of the execute methods. However, a requirement in this case is that the stored procedure returns some result. All calls to callProc() need to change to use cursor.execute, as shown in the following example:
c.execute("{call proc_name(?,?)}", (1,2)})
Change all embedded SQL statements to T-SQL.
This is a step common to all migrations. Refer to Chapter 11, "Developing: Applications — Migrating Oracle SQL and PL/SQL," for a detailed discussion on modifying Oracle SQL to be SQL Server-compliant.
Scenario 2: Port the Python Application to Win32
Because Python is a cross-platform language, porting applications from a UNIX environment to Windows is not much more complex than the steps required for interoperation. The API used in the discussion on porting to the Windows environment is mxODBC. The strategy, steps, and actual changes required are very similar to those discussed for interoperation. Zope offers an ADO adapter for SQL Server on the Windows platform which can give better performance than the use of ODBC. Details on this can be obtained from.
Case 1: Porting a Python Application using mxODBC
Most of the following steps are covered in more detail in the "Case 1: Interoperating Using the mxODBC Module" section discussed in scenario 1.
To port the application to Windows, follow these steps:
Download Python for Windows from.
Install Python on Windows.
Add the Python folder to the Windows CLASSPATH environment variable.
Install the appropriate mxODBC for the version of Python installed.
Create an ODBC DSN for the target database.
Create an ODBC data source to connect to SQL Server. After the DSN is created, a Python application using mxODBC can connect to SQL Server and run queries.
Transport source code to the target server.
Update the application code.
If your application uses DCOracle2, then the application changes required will be similar to those discussed in the "Scenario 1: Interoperating Python on UNIX with SQL Server" section earlier in this chapter. Some variations based on the Windows environment include the import statement and the connection object as shown in the following examples.
The import statement should be changed from:
import DCOracle2
to
import mx.ODBC.Windows
The connection object must be initialized as in the following example:
db = mx.ODBC.Windows.DriverConnect('DSN=sqlserverDS')
Note This DSN uses Windows Authentication Mode to connect to SQL Server.
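Putting the pieces together, a minimal end-to-end check of the ported application might look like the sketch below; the DSN name comes from the example above, while the table and columns are assumptions used purely for illustration:

import mx.ODBC.Windows

# DSN 'sqlserverDS' is assumed to exist and to use Windows Authentication
db = mx.ODBC.Windows.DriverConnect('DSN=sqlserverDS')
c = db.cursor()
c.execute("select customerid, companyname from customers")
for customerid, companyname in c.fetchall():
    print(customerid, companyname)
c.close()
db.close()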
Change all embedded SQL statements to T-SQL.
This is a step common to all migrations. Refer to Chapter 11, "Developing: Applications — Migrating Oracle SQL and PL/SQL" for a detailed discussion on modifying Oracle SQL to be SQL Server-compliant.
Github user clebertsuconic commented on a diff in the pull request:
--- Diff: artemis-commons/src/main/java/org/apache/activemq/artemis/utils/critical/CriticalMeasure.java ---
@@ -17,36 +17,48 @@
package org.apache.activemq.artemis.utils.critical;
-import org.jboss.logging.Logger;
+import java.util.concurrent.atomic.AtomicLongFieldUpdater;
public class CriticalMeasure {
- private static final Logger logger = Logger.getLogger(CriticalMeasure.class);
+ //uses updaters to avoid creates many AtomicLong instances
+ private static final AtomicLongFieldUpdater<CriticalMeasure> TIME_ENTER_UPDATER
= AtomicLongFieldUpdater.newUpdater(CriticalMeasure.class, "timeEnter");
+ private static final AtomicLongFieldUpdater<CriticalMeasure> TIME_LEFT_UPDATER
= AtomicLongFieldUpdater.newUpdater(CriticalMeasure.class, "timeLeft");
private volatile long timeEnter;
--- End diff --
timeEnter shouldn't be defaulted to MInvalue? I didn't see any updates on that PR
---
Morning,
[XmlElement("foo", IsNullable=true)] // I believe Bet that this onl
I.
If I've an XML which has nested namespaces, how should I deserialize it into an object? It's not throwing an exception but the values are getting lost.
XmlSerializer xmlSerializer = new XmlSerializer
Hi
Hi,
I'd like to have the date and time from server.
DateTime.Now Is ok?
Must be something else?.
This is the first post in a series exploring the Arrange Act Assert pattern and how to apply it to Python tests.
The goal of the series is to present a recognisable and reusable test template following the Arrange Act Assert pattern of testing. In addition, I aim to present strategies for test writing and refactoring which I’ve developed over the last couple of years, both on my own projects and within teams.
In this part I will introduce the Arrange Act Assert pattern and discuss its constituent parts.
What is Arrange Act Assert?
The “Arrange-Act-Assert” (also AAA and 3A) pattern of testing was observed and named by Bill Wake in 2001. I first came across it in Kent Beck’s book “Test Driven Development: By Example” and I spoke about it at PyConUK 2016.
The pattern focuses each test on a single action. The advantage of this focus is that it clearly separates the arrangement of the System Under Test (SUT) and the assertions that are made on it after the action.
On multiple projects I’ve worked on I’ve experienced organised and “clean” code in the main codebase, but disorganisation and inconsistency in the test suite. However when AAA is applied, I’ve found it helps by unifying and clarifying the structure of tests which helps make the test suite much more understandable and manageable.
TL;DR: The shape of an AAA test
Here is a test that I was working on recently that follows the AAA pattern. I’ve extracted it from Vim and blocked out the code with the colour that Vim assigns.
Hopefully in this rough image you will see three sections to the test separated by an empty line:
- First there is the test definition, docstring and Arrangement.
- Empty line.
- In the middle, there is a single line of code - this is the most important part: The Act.
- Empty line.
- Finally there are the Assertions. You can see that the Assert block code lines all start with the orange / brown colour - that is because the Python keyword assert is marked with this colour in Vim with my current configuration.
Follow this pattern across your tests and your suite will be much improved.
Background
I’ll now go into detail on each of these parts using Pytest and a toy test example - a simple happy-path test for Python’s builtin list.reverse function.
I’ve made the following assumptions:
- We all love PEP008, so we want tests to pass flake8 linting.
- PEP020, The Zen of Python, is also something we work towards - I will use some of it’s “mantras” when I justify some of the suggestions in this guide.
- Simplicity trumps performance. We want a test suite that is easy to maintain and manage and can pay for that with some performance loss. I’ve assumed this is a reasonable trade off because the tests are run much less frequently than the SUT in production.
This post is only an introduction to the AAA pattern. Where certain topics will be covered in more detail in future posts in this series, I have marked them with a footnote.
Definition
The definition of the test function.
Example
def test_reverse():
Guidelines
- Name your function something descriptive because the function name will be shown when the test fails in Pytest output.
- Good test method names can make docstrings redundant in simple tests (thanks Adam!).
Docstring
An optional short single line statement about the behaviour under test.
Example
""" list.reverse inverts the order of items in a list, in place """
Guidelines
Docstrings are not part of the AAA pattern. Consider if your test needs one or if you are best to omit it for simplicity.
If you do include a docstring, then I recommend that you:
Follow the existing Docstring style of your project so that the tests are consistent with the code base you are testing.
Keep the language positive - state clearly what the expected behaviour is. Positive docstrings read similar to:
X does Y when Z
Or…
Given Z, then X does Y
Be cautious when using any uncertain language in the docstring and follow the mantra “Explicit is better than implicit” (PEP20)
Words like “should” and “if” introduce uncertainty. For example:
X should do Y if Z
In this case the reader could be left with questions. Is X doing it right at the moment? Is this a TODO note? Is this a test for an expected failure?
In a similar vein, avoid the future tense.
X will do Y when Z
Again, this reads like a TODO.
Arrange
The block of code that sets up the conditions for the test action.
Example
There’s not much work to do in this example to build a list, so the arrangement block is just one line.
greek = ['alpha', 'beta', 'gamma', 'delta']
Guidelines
- Use a single block of code with no empty lines.
- Do not use assert in the Arrange block. If you need to make an assertion about your arrangement, then this is a smell that your arrangement is too complicated and should be extracted to a fixture or setup function and tested in its own right [1].
- Only prepare non-deterministic results not available after action [2].
- The arrange section should not require comments. If you have a large arrangement in your tests which is complex enough to require detailed comments then consider:
Act
The line of code where the Action is taken on the SUT.
Example
result = greek.reverse()
Guidelines
Start every Action line with result =.
This makes it easier to distinguish test actions and means you can avoid the hardest job in programming: naming. When every result is called result, then you do not need to waste brain power wondering if it should be item = or response = etc. An added benefit is that you can find test actions easily with a tool like grep.
Even when there is no result from the action, capture it with result = and then assert result is None. In this way, the SUT’s behaviour is pinned.
If you struggle to write a single line action, then consider extracting some of that code into your arrangement.
The action can be wrapped in with ... raises for expected exceptions. In this case your action will be two lines surrounded by empty lines.
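For example, a test for an expected failure might look like this minimal sketch (the scenario is illustrative and not from the post's worked example):

import pytest

def test_reverse_rejects_argument():
    """ list.reverse raises TypeError when passed an argument """
    greek = ['alpha', 'beta', 'gamma']

    with pytest.raises(TypeError):
        greek.reverse('backwards')

    assert greek == ['alpha', 'beta', 'gamma']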
Assert
The block of code that performs the assertions on the state of the SUT after the action.
Example
assert result is None
assert greek == ['delta', 'gamma', 'beta', 'alpha']
Guidelines
- Use a single block of code with no empty lines.
- First test result, then side effects.
- Limit the actions that you make in this block. Ideally, no actions should happen, but that is not always possible.
- Use simple blocks of assertions. If you find that you are repeatedly writing the same code to extract information from the SUT and perform assertions on it, then consider extracting an assertion helper [4].
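As an illustration (not from the original post), such a helper can be as small as a plain function holding the repeated assertions:

def assert_reversed(original, result):
    """ Assertion helper: result has the same items as original, in reverse order """
    assert len(result) == len(original)
    assert result == list(reversed(original))

Extracting the helper keeps each test's Assert block down to one or two readable lines.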
The final test
Here’s the example test in full:
def test_reverse():
    """ list.reverse inverts the order of items in a list, in place """
    greek = ['alpha', 'beta', 'gamma', 'delta']

    result = greek.reverse()

    assert result is None
    assert greek == ['delta', 'gamma', 'beta', 'alpha']
I hope that this introduction has been helpful and you will return for the next post in the series.
Next in this series
I have not been able to cover all the common cases in the guide above. The following are planned topics for follow up posts:
Links will appear above when I complete these follow up posts.
Don’t miss out: subscribe and receive an email when I post the next part of this series.
Hello everyone,
I have some sensors that keep a list of objects with OnCollision and OnSeparation. However, when one body is disposed, the OnSeparation event is not raised in the sensor. Is that reasonable? I thought after disposing, OnSeparations would be invoked since
there is indeed a separation.
Could it be possible to modify the Dispose method in Body so OnSeparation can be raised for those touching objects in the ContactList?
public class Body
{
    #region otherstuff
    ....
    #endregion

    public void Dispose()
    {
        if (!IsDisposed)
        {
            ContactEdge contactEdge = ContactList;
            Contact contact;
            while (contactEdge != null && (contact = contactEdge.Contact) != null)
            {
                //Report the separation to both participants:
                if (contact.FixtureA != null && contact.FixtureA.OnSeparation != null)
                    contact.FixtureA.OnSeparation(contact.FixtureA, contact.FixtureB);

                //Reverse the order of the reported fixtures. The first fixture is always the one that the
                //user subscribed to.
                if (contact.FixtureB != null && contact.FixtureB.OnSeparation != null)
                    contact.FixtureB.OnSeparation(contact.FixtureB, contact.FixtureA);

                contactEdge = contactEdge.Next;
            }

            World.RemoveBody(this);
            IsDisposed = true;
            GC.SuppressFinalize(this);
        }
    }
}
I suppose that's okay as long as the Dispose method is not executed from World.Step(...). Can you agree with me?
Provide Port Binding Information for Nova Live Migration¶
Nova Live Migration consists of 3 stages:
- pre_live_migration - executed before migration starts; the migration target is already known, but the instance still resides on the source.
- live_migration_operation - migration itself which consists of 2 substages:
- before the VM is running on the migration target compute.
- after VM is running on the destination compute, but migration is still in progress (post copy migration).
- post_live_migration - executed after migration; source VM does not exist anymore. Used for finalizing the migration.
Today, port binding occurs on the target compute host during migration, considered now in the post_live_migration stage. This, unfortunately, is too late in the process, as there are cases where Nova requires this information in the pre_live_migration stage. We simply cannot move the port binding to the pre_live_migration stage, as the original port binding would be deleted, causing issues because the instance is still active on the original host.
Problem Description¶
For live migration improvements, it is required that Neutron allows port binding on the target compute host during the pre_live_migration stage, without removing the original port binding on the source compute.
The proposal is to have a port binding on the source AND target compute hosts where only one port binding is active. After instance migration is complete, the target port binding is activated and the source port binding is deactivated, but not deleted. The original port binding on the source compute host is kept for a possible move of the instance back to the source, and will only be removed during the post_live_migration stage.
The issues can be divided in 2 categories:
#1 Keep the source instance running if the port binding on the target fails:
Instance error state when port binding fails:
Today, the port binding is triggered in the post_live_migration stage, after the migration has completed. If the port binding fails, the instance is then stuck in an error state.
The solution is to move the port binding to the pre_live_migration stage, where there is an opportunity to fail the port binding earlier. This would prevent the instance from being stuck in an errored state after the migration is complete, and instead fail the migration at the pre_live_migration stage.
The issue with moving the port binding to the pre_live_migration stage is that some drivers will shutdown the port binding on the source compute host, even though the instance is still active. To achieve a cleaner migration solution, Neutron needs to be modified to allow a port binding on both the source and target compute hosts in an active/inactive state.
#2 Handling the differences in port binding details between hosts:
Live migration between hosts running different l2 agents:
Another case to consider is the chance that the migration occurs between two hosts running different l2 agents. The requirement here is on Nova to update the instance definition before the migration is executed. In the case of libvirt, Nova would update the domain.xml with the target interface definition. For more information, refer to [3] and [4].
A special case to consider is where a migration occurs between agents with differing firewall drivers, i.e. from a host running the ovs hybrid-fw driver to a host running the new ovs conntrackd firewall driver.
Such a migration must only be allowed with source and target binding using the same VNIC type.
Live migration with MacVTap agent when different physnet mappings are used:
MacVTap today has restrictions with live migration in certain situations [1] and requires an update to the instance definition (libvirt domain.xml) before starting a migration.
Updating the instance definition occurs during the live_migration_operation stage, before the instance is running on the target compute host. Two port bindings, one active at the source and one inactive at the target host are required for this operation to succeed.
Proposed Change¶
For migration, Nova will require the port binding information from the target compute during the pre_live_migration stage. This can be achieved by allowing a compute port to be bound to the migration source and the migration target host.
This can be achieved by the following steps:
#1 Expand upon existing API entities under port, allowing CRUD bindings.
#2 Update ML2 to support the changes.
#3 Update the DB to support the changes.
Usage by Nova¶
Nova will utilize the expanded API, and the potential flow is as follows:
- pre_live_migration: Create the inactive binding for target host.
- live_migration_operation: Use information gathered from the inactive binding to modify the instance definition.
- live_migration_operation: Once the instance is active, set the inactive binding to active, and the previous active binding to inactive.
- post_live_migration: Remove the inactive binding on the source compute host.
If rollback is performed after the instance is active on target: From a Neutron standpoint, if the binding is active on the target host, Nova will need to set the source binding back to active:
PUT /v2.0/ports/{port_id}/bindings/{host_id}/activate
For more details on the Nova implementation, see the related Nova Blueprint [3] and its Spec [4]. Neutron will not dictate the implemented capabilities of Nova live migration and will support either path, to rollback or to not rollback.
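To make the sequence concrete, the proposed calls could be driven roughly as in the sketch below. This is illustrative only - the API described in this spec is a proposal, and the endpoint URL, port ID, host IDs, and token handling are placeholder assumptions:

import requests

NEUTRON = 'http://controller:9696'
HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}
port = 'PORT_UUID'

# pre_live_migration: create an (inactive) binding on the migration target
requests.post('%s/v2.0/ports/%s/bindings' % (NEUTRON, port),
              json={'binding': {'host_id': 'target-host_id'}},
              headers=HEADERS)

# once the instance is running on the target: activate the new binding,
# which deactivates the binding on the source
requests.put('%s/v2.0/ports/%s/bindings/target-host_id/activate' % (NEUTRON, port),
             headers=HEADERS)

# post_live_migration: remove the now inactive binding on the source
requests.delete('%s/v2.0/ports/%s/bindings/source-host_id' % (NEUTRON, port),
                headers=HEADERS)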
Binding API Extension for Ports¶
List Bindings¶
GET /v2.0/ports/{port_id}/bindings
{ "bindings": [ { "host_id": "source-host_id", "vif_type": "ovs", "vif_details": { "port_filter": true, "ovs_hybrid_plug": true }, "profile": {}, "vnic_type": "normal", "status": "active" }, { "host_id": "target-host_id", "vif_type": "bridge", "vif_details": { "port_filter": true, }, "profile": {}, "vnic_type": "normal", "status": "inactive" }, ] }
For more parameters, see Show Binding.
Important key features of list bindings:
- Compute bindings will currently be listed and a request for unsupported bindings will return ‘NotImplemented’ until the capability is introduced.
- All bindings will be listed and pagination will be used when many bindings are returned.
Show Binding¶
GET /v2.0/ports/{port_id}/bindings/{host_id}
{ "binding": { "host_id": "target-host_id", "vif_type": "target-vif-type", "vif_details": { "port_filter": true, }, "vnic_type": 'NORMAL', "profile": {}, "status": "active" } }
Important key features of show binding:
- DVR ports exposed in this resource will show the real vif_type of ‘distributed’ ports as they are stored in DistributedPortBindings, i.e. ovs.
Create Binding¶
POST /v2.0/ports/{port_id}/bindings
{ "binding": { "host_id": "target-host_id" } }
Response parameters
see List Bindings
{ "binding": { "host_id": "target-host_id", "vif_type": "ovs", "vif_details": { "port_filter": true, "ovs_hybrid_plug": true }, "vnic_type": 'NORMAL', "profile": {}, "status": "active" } }
If the binding fails, a new return code of 4xx or 5xx should be returned. This differs from today where a failed binding returns a 2xx response code and the vif_type is set to “binding_failed”.
Important key features of update/create binding:
- By default, the status will be active when creating a port binding. If a binding is created, doesn’t exist already, and an existing binding is already active, the binding will default to inactive, requiring the operator to activate the new binding.
- If a binding being added already exists, a 4xx will be returned.
- A compute port can only have one active binding at a time. This is not enforced by Neutron itself, but is a result of the operations surrounding PortBinding. This feature adds the capability of having multiple bindings, but still allows only one active binding per compute port.
- At this time, creation of a binding will be limited to compute ports.
- The existing API is not touched; it will continue to return host_id:{host_id} for the current active binding.
- Activating an inactive compute binding will deactivate the current active binding.
Update Binding¶
PUT /v2.0/ports/{port_id}/bindings/{host_id}
All create parameters are valid for update as well. See Create Binding.
{ "binding": { "vnic_type": 'NORMAL', "profile": {} } }
Response parameters
see Show Binding
{ "binding": { "host_id": "target-host_id", "vif_type": "ovs", "vif_details": { "port_filter": true, "ovs_hybrid_plug": true }, "vnic_type": 'NORMAL', "profile": {"foo":"bar"}, "status": "active" } }
On failed binding, a 4xx or 5xx return code should be returned.
Activating an Inactive Binding¶
PUT /v2.0/ports/{port_id}/bindings/{host_id}/activate
Response parameters
see Show Binding
{ "binding": { "host_id": "target-host_id", "vif_type": "ovs", "vif_details": { "port_filter": true, "ovs_hybrid_plug": true }, "vnic_type": 'NORMAL', "profile": {"foo":"bar"}, "status": "active" } }
Important key features of activate binding:
- Activating a compute binding that is inactive will deactivate the existing active binding, as a compute port can only have 1 binding active at a time.
- Operation will be limited to compute ports.
- Attempting to activate an existing active binding will return a 4xx.
- Returns 5xx if activating the binding fails.
Delete Binding¶
DELETE /v2.0/ports/{port_id}/bindings/{host_id}
This operation does not accept a request body and does not return a response body.
Important key features of delete binding:
- Active/Inactive bindings can be removed.
- Deleting an active compute port binding while an inactive binding exists does not activate the inactive binding. The operator will be required to explicitly activate it.
Overlap Between Existing vs New APIs¶
All the functionality of the existing API will be covered by the new API as well. This section describes the overlap.
Effects on Existing APIs¶
Slight adjustments to existing APIs:
- Show Port will still just show the binding as it does today. For compute ports, it will only show the active binding.
- Create Port will create an unbound binding as before, but with the status of active.
- Update Port with host_id will still re-trigger port binding for a host. The difference is that update_port() will only act on the active binding.
- On a failed port binding triggered via the existing port binding extension, vif_type is still set to “binding_failed” and an HTTP 2xx code is still returned.
Sub Resource Extension¶
Neutron ports will be extended with a sub resource bindings, having a member name of port to preserve portbindings and ports extensions. The new sub resource extension will be portbindings_extended.py and have a parent resource of ports.
The following methods will be added to the newly created service plugin bindings_plugin.py:
- get_port_bindings()
- get_port_binding()
- create_port_binding()
- update_port_binding()
- delete_port_binding()
- update_port_binding_activate()
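As a rough illustration only, the skeleton below shows the shape such a service plugin could take; the class name, base classes and argument names are assumptions, and the real implementation will follow Neutron's usual plugin conventions.

class PortBindingsPlugin(object):
    """Hypothetical sketch of bindings_plugin.py backing ports/{port_id}/bindings."""

    def get_port_bindings(self, context, port_id, filters=None):
        """Return all bindings (active and inactive) for a port."""

    def get_port_binding(self, context, host_id, port_id):
        """Return the binding of a port on a specific host."""

    def create_port_binding(self, context, port_id, binding):
        """Create a new (possibly inactive) binding of a port on a host."""

    def update_port_binding(self, context, host_id, port_id, binding):
        """Update vnic_type/profile on an existing binding."""

    def delete_port_binding(self, context, host_id, port_id):
        """Remove a binding, whether active or inactive."""

    def update_port_binding_activate(self, context, host_id, port_id):
        """Activate an inactive binding and deactivate the currently active one."""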
ML2 Changes¶
Existing methods create_port() and update_port() will need to be updated to act only on bindings whose status is active. In addition, a new status of inactive will be introduced to neutron-lib for use in PortBinding.
Status Usage¶
The status column in PortBinding will store an additional state, ‘inactive’, alongside the current states ‘active’ and ‘down’. Neutron-lib will only require the addition of PORT_STATUS_INACTIVE.
from neutron_lib import constants as const

const.PORT_STATUS_ACTIVE
const.PORT_STATUS_INACTIVE
Create/Update/Delete Port¶
New methods will be introduced in support of the new sub resource under ports, but the current create_port(), update_port() and delete_port() will be modified to only act on active bindings in the PortBinding table.
Today, create_port() adds an empty unbound binding in PortBinding and the following changes will be made in support of this spec:
- Create an unbound binding with status active in the PortBinding table.
In addition, update_port() will be adjusted for active status with the following changes:
- Update will only change binding information on the active binding in the PortBinding table.
Finally, delete_port() will be adjusted for active status with the following changes:
- Delete will only act on the active binding in the PortBinding table.
Data Model Changes¶
The PortBinding table will expand its primary key to include the host column, allowing selection of a binding based on port_id and host.
In addition, a status column will be introduced in the expansion, whose values will be active, down, and inactive.
Online upgrades (Blueprint [9]) require adding host to the primary keys and a new status field to the PortBinding OVO. The version of the object should be bumped if push-notifications (Blueprint [10]) merge first and the PortBinding object is present on the RPC wire. Defining a default value for the status field avoids the need for an online data migration.
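For illustration only, an expand migration for these changes might look roughly like the sketch below; the table name matches today's ML2 PortBinding model, but the default value and the exact primary-key handling are assumptions rather than part of this spec.

# Hypothetical alembic expand migration -- identifiers are placeholders.
from alembic import op
import sqlalchemy as sa

def upgrade():
    # New status column with a server default, so existing rows need no
    # online data migration.
    op.add_column(
        'ml2_port_bindings',
        sa.Column('status', sa.String(length=16),
                  nullable=False, server_default='ACTIVE'))
    # The primary key is additionally widened to (port_id, host) so that a
    # port may carry one binding per host.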
Changes to Mechanism Drivers¶
A new mechanism driver method is required to determine whether this new binding workflow is supported. This must be validated for all binding levels, and a fallback to the previous behaviour must be allowed. By default, the new method will report the workflow as unsupported.
Besides the in-tree mechanism drivers for L2 agents (OVS, Linux bridge, SR-IOV, macvtap), the following drivers need to be considered:
- l2pop (however there are ideas to eliminate l2pop)
- ironic
- third party mechanism drivers
Activate RPC Port Update/Delete¶
The existing port_update and port_delete RPC messages will be adjusted so that, when the agent retrieves device information with get_devices_details_list_and_failed_devices, it receives the binding information specific to that host regardless of binding state. This will allow the additional plumbing to occur as follows:
- Activate will result in a port_update, which will pass the relative binding information for the host and dictate the transition from inactive to active in the get_devices_details_list_and_failed_devices response. This will allow for a GARP to be sent out, updating the topology to a change in status.
- Activate will result in a port_delete rpc call to the source host, removing the source VIF. This will need to be accomplished due to Nova not being able to issue a delete port, as the port still exists on a different host. The binding will remain in an inactive state where binding information has been populated, but the port, from an agent perspective, will not exist. In addition, the transition from active to inactive will be indicated in the rpc call, influencing the update_device_list to not update the port state.
In the case where push-notifications are implemented for ports under Blueprint [10] the get_devices_details_list_and_failed_devices would not be adjusted for transition state. Instead, the binding transition state would be sent to the agent as part of the port object. The remaining actions are the same.
Other changes¶
- Neutron/OpenStack Python client support.
- Neutron-lib support for the new constant PORT_STATUS_INACTIVE; see Activating an Inactive Binding.
Command Line Client Impact¶
Support for port bindings will be needed in OSC. The following will be added:
$ openstack port binding list {ARGS} <port>
$ openstack port binding show {ARGS} <port> <host>
$ openstack port binding create {ARGS} <port>
$ openstack port binding update {ARGS} <port> <host>
$ openstack port binding delete {ARGS} <port> <host>
$ openstack port binding activate {ARGS} <port> <host>
Community Impact¶
Yes. This change has been discussed on the ML, in Neutron meetings (especially ML2), at mid-cycles, and at the design summit.
Alternatives¶
An alternative is to use the current resources under ports to facilitate this change to live migration. The problem is that the current dict structures would need to be expanded to accommodate the ‘bindings’ key. This may cause some confusion, as the user already receives ‘binding:profile’ and various other values.
Implementation¶
Assignee(s)¶
Primary assignee:
Other contributors:
Please add your name here and attend the ML2 Subteam Meeting if you’d like to contribute.
Testing¶
Tempest Tests¶
Addition of a scenario test, test_bindings.py, walking through creating a source migration instance, creating the inactive binding on a secondary host, creating a secondary target migration instance, activating the inactive binding, deactivating the source's previously active binding, and then validating that connectivity still works.
Functional Tests¶
Additional functional tests will be added to ml2/test_plugin.py to expand on the current port binding tests. This will accommodate for a status check in the case of adding an inactive binding.
Documentation Impact¶
Yes. | http://specs.openstack.org/openstack/neutron-specs/specs/backlog/ocata/portbinding_information_for_nova.html | CC-MAIN-2017-39 | en | refinedweb |
This is the 8th MVC (Model View Controller) tutorial, and in this article we
try to understand how we can validate data passed in MVC URLs.
MVC is all about actions which happen via URLs, and the data for those actions is also
provided by the URL. It would be great if we could validate the data which is passed
via these MVC URLs.
For instance, let's consider an MVC URL which displays customer details. If anyone wants to view
customer details for customer code 1001, he needs to pass that code as part of the URL.
The customer code is numeric in nature. In other words, any MVC URL
carrying a non-numeric customer code is invalid. The MVC framework
provides a validation mechanism by which we can check, on the URL itself, whether the
data is appropriate. In this article we will see how to validate data which is
entered on the MVC URL. So let's start step by step.
The first step is to create a simple customer class model which will be invoked by
the controller.
public class Customer
{
public int Id { set; get; }
public string CustomerCode { set; get; }
public double Amount { set; get; }
}
The next step is to create a simple controller class with a ‘DisplayCustomer’ function which displays the
customer using the ‘id’ value. This function takes the ‘id’ value and looks it up
in the customer collection. Below is the relevant code of the
function.
[HttpGet]
public ViewResult DisplayCustomer(int id)
{
Customer objCustomer = Customers[id];
return View("DisplayCustomer",objCustomer);
}
If you look at the ‘DisplayCustomer’ function, it takes an ‘id’ value which is
numeric. We would like to put a validation on this id field with the following
constraints:
• Id should always be numeric.
• It should be between 0 and 99.
We want the above validations to fire when the MVC URL is invoked with data.
The validation described in step 2 can be achieved by applying a regular
expression on the route map. If you go to the global.asax file and look at the MapRoute
function, one of the inputs to this function is the constraints object, as shown in the below
figure.
In case you are new to regular expressions, we would advise you to go through this
video on regular expressions.
So in order to accommodate the numeric validation, we need to specify the
regex constraint, i.e. ‘\d{1,2}’, in the ‘MapRoute’ function.
‘\d{1,2}’ in regex means that the input should be numeric and should be at most
1 or 2 digits long, i.e. between 0 and 99.
Once the constraint is specified in the ‘MapRoute’ function,
it’s time to test if these validations work.
So in the first test we have specified a valid value of 1, and we see that the controller is
hit and the data is displayed.
If you try to specify a value with more than two digits (or a non-numeric value), the route will
not match and you will get an error page instead. In MVC everything is an action, and those
actions invoke the views or pages. We cannot specify direct hyperlinks to pages,
as this would defeat the purpose of MVC. In other words, we need to specify
actions, and these actions will invoke the URLs.
In the next article we will look into how to define outbound URLs in MVC
views, which will help us navigate from one page to another.
| http://www.dotnetfunda.com/articles/show/1527/how-to-validate-data-provided-in-mvc-urls-mvc-tutorial-number-8 | CC-MAIN-2017-39 | en | refinedweb |
In analogy to filter effects that manipulate pixels in an image, PlotDevice provides the path mathematics to create filter effects that manipulate vector curves. Below are some demonstrations of the kinds of effects you can achieve.
A path filter that grows hair on each point in a contour. We measure the length of each contour and then calculate a number of points on it, based on the length (so longer segments will get more hairs and shorter segments will get less). On each point we draw some wiggly curves.
size(550, 300)
background("#3a3526")
font("georgia", "bold", 175)
path = textpath("hairs", 40, 200)

for contour in path.contours:
    prev = None
    n = contour.length
    for pt in contour.points(int(n)):
        nofill()
        stroke(1, 0.75)
        pen(random(0.25, 0.5))
        if prev != None:
            with bezier(prev.x, prev.y):
                curveto( pt.ctrl1.x - random(30), pt.ctrl1.y,
                         pt.ctrl2.x, pt.ctrl2.y + random(30),
                         pt.x, pt.y )
                curveto( pt.ctrl1.x + random(10), pt.ctrl1.y,
                         pt.ctrl2.x, pt.ctrl2.y - random(10),
                         pt.x + random(-20, 20), pt.y + random(-10, 10) )
        prev = pt
A path filter that weaves a web between the points in each contour. Depending on the length of a contour, a number of points are calculated along the contour. Then points are connected randomly with straight lines (as long as the distance between them is smaller than some given number, fontsize() / 5 in this case).
from math import sqrt

size(550, 300)
background("#3a3526")
font("helvetica", "bold", 125)
path = textpath("SPIDER", 20, 200)

m = 2.0
for contour in path.contours:
    n = contour.length + 50
    points = list(contour.points(n))
    for i in range(int(n)):
        pt1 = choice(points)
        d = float("inf")
        while d > fontsize()/5:
            pt2 = choice(points)
            d = sqrt((pt2.x-pt1.x)**2 + (pt2.y-pt1.y)**2)
        with nofill(), stroke("white", 0.9), pen(0.35):
            line( pt1.x + random(-m, m),
                  pt1.y + random(-m, m),
                  pt2.x + random(-m, m),
                  pt2.y + random(-m, m) )
A path filter that sketches a piece of text by drawing different layers of (gradually converging) paths on top of each other. For each contour a number of points are calculated. Two points are roughly connected by a straight line. The connection gets more accurate as more layers are drawn.
size(550, 300)
background("#3a3526")
font("georgia", "bold", 175)
path = textpath("draft", 40, 200)

m = 15
for i in range(m):
    m -= 1
    for contour in path.contours:
        prev = None
        n = contour.length
        for pt in contour.points(n/80*i):
            nofill()
            stroke(1, 0.75)
            pen(0.25)
            if prev != None:
                line( pt.x, pt.y,
                      prev.x + random(-m, 0),
                      prev.y + random(-m, 0) )
                line( pt.x + random(-m, 0),
                      pt.y + random(-m, 0),
                      prev.x, prev.y )
            prev = pt
A path filter that draws perpendicular spikes along each contour. A spike is calculated from the angle between two consecutive points. Subtracting 90° from this angle gives us the perpendicular angle jutting outwards from the curve. We then find the point halfway between the starting and ending point and push it upwards. If we connect a curve from the starting point to this point, and a curve from this point to the ending point, we get a spike connecting the two.
from math import degrees, atan2
from math import sqrt, pow
from math import radians, sin, cos

size(550, 300)
background("#3a3526")
font("helvetica", "bold", 125)
path = textpath("SPIKED", 40, 200)

m = 5    # spike length
c = 0.8  # spike curvature

# From the PlotDevice math tutorial:
def angle(x0, y0, x1, y1):
    return degrees( atan2(y1-y0, x1-x0) )

def distance(x0, y0, x1, y1):
    return sqrt(pow(x1-x0, 2) + pow(y1-y0, 2))

def coordinates(x0, y0, distance, angle):
    x1 = x0 + cos(radians(angle)) * distance
    y1 = y0 + sin(radians(angle)) * distance
    return x1, y1

# The "spike" function between two points.
def perpendicular_curve(pt0, pt1, curvature=0.8):
    d = distance(pt0.x, pt0.y, pt1.x, pt1.y)
    a = angle(pt0.x, pt0.y, pt1.x, pt1.y)
    mid = Point( pt0.x + (pt1.x-pt0.x) * 0.5,
                 pt0.y + (pt1.y-pt0.y) * 0.5 )
    dx, dy = coordinates(mid.x, mid.y, m, a-90)
    vx = pt0.x + (mid.x-pt0.x) * curvature
    vy = pt0.y + (mid.y-pt0.y) * curvature
    curveto(vx, vy, dx, dy, dx, dy)
    vx = pt1.x + (mid.x-pt1.x) * curvature
    vy = pt1.y + (mid.y-pt1.y) * curvature
    curveto(dx, dy, vx, vy, pt1.x, pt1.y)

for contour in path.contours:
    prev = None
    n = contour.length / 8
    for pt in contour.points(n):
        nofill()
        stroke(1)
        strokewidth(0.75)
        if not prev:
            beginpath(pt.x, pt.y)
        elif pt.cmd == MOVETO:
            moveto(pt.x, pt.y)
        else:
            perpendicular_curve(prev, pt, c)
        prev = pt
endpath()
A path filter that trashes the path by inserting random line segments between two points.
size(550, 300)
background("#3a3526")
font("georgia", "bold", 175)
path = textpath("trash", 40, 200)

def trash(path, pt0, pt1, m=0.2, n=20, d=3.0):
    # Add trash between two points.
    # m: controls how much of the path is trashed.
    # n: the number of lines to insert.
    # d: the maximum length of inserted lines.
    if random() < m:
        for i in range(random(n)):
            pt0.x += random(-d, d)
            pt0.y += random(-d, d)
            path.lineto(pt0.x, pt0.y)
    path.lineto(pt1.x, pt1.y)
    # Create a blot/speckle near the current point.
    # We have to add this to the path at the end.
    if random() < m*0.3:
        x = pt1.x - random(-d*4, d*4)
        y = pt1.y - random(-d*2, d*2)
        blot = Bezier()
        blot.moveto(x, y)
        for i in range(random(n)):
            x += random(-d, d)
            y += random(-d, d)
            blot.lineto(x, y)
        blot.closepath()
        return blot

p = Bezier()
extensions = []
for contour in path.contours:
    prev = None
    n = contour.length / 8
    for pt in contour.points(n):
        if not prev:
            p.moveto(pt.x, pt.y)
        elif pt.cmd == MOVETO:
            p.moveto(pt.x, pt.y)
        else:
            blot = trash(p, prev, pt)
            if blot:
                extensions.append(blot)
        prev = pt

for blot in extensions:
    p.extend(blot)

fill(1)
nostroke()
drawpath(p)
| https://plotdevice.io/tut/Path_Filters | CC-MAIN-2017-39 | en | refinedweb |
I needed an easy way to present a file selection dialog to the user with which they can select the Excel file.
My first choice was wx. But then I realised I needed something that's part of standard Python, and that landed me on this page: File Tkinter dialogs (Python recipe).
Here's the super convenient and simple way to get a minimal file selection GUI from standard Python modules.
import Tkinter, tkFileDialog

root = Tkinter.Tk()
root.withdraw()
filename = tkFileDialog.askopenfile(parent=root, mode='r',
                                    filetypes=[("Excel file", "*.xls")],
                                    title='Choose an excel file')
if filename != None:
    print "This excel file has been selected", filename
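As a side note (not part of the original recipe), if you are on Python 3 the module names are different (tkinter and tkinter.filedialog); a minimal equivalent using askopenfilename, which returns the selected path instead of an open file handle, would look something like this:

import tkinter as tk
from tkinter import filedialog

root = tk.Tk()
root.withdraw()  # hide the empty root window
filename = filedialog.askopenfilename(parent=root,
                                      title='Choose an excel file',
                                      filetypes=[("Excel file", "*.xls")])
if filename:
    print("This excel file has been selected", filename)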
Thanks internet!!
And here’s a good presentation on Tkinter titled “Tkinter Does Not Suck”. I agree…
Pingback: Matplotlib Embedded with Tkinter. | SukhbinderSingh.com | https://sukhbinder.wordpress.com/2014/05/07/simple-and-minimal-file-selection-gui-with-standard-python/ | CC-MAIN-2017-39 | en | refinedweb |
The LockWorkStation function submits a request to lock the workstation's display. Locking a workstation protects it from unauthorized use.
LockWorkStation
The following example locks the workstation using the LockWorkStation function. The function requires Windows 2000 Professional or later.
This example also illustrates run-time dynamic linking. If the DLL is not available, an application using load-time dynamic linking must simply terminate. The run-time dynamic linking example, however, can respond to the error. This is a good way to prevent forced termination. You can use this technique in applications which should run on all Windows platforms but with limited functionality depending on the currently running version of the system.
#include <windows.h>
#define IsWin2000Plus() ((DWORD)(LOBYTE(LOWORD(GetVersion()))) >= 5)
int (__stdcall * MyLockWorkStation)();
int __stdcall WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
LPSTR lpCmdLine, int nCmdShow)
{
HINSTANCE hinstLib;
if (IsWin2000Plus())
{
hinstLib = LoadLibrary("USER32.DLL");
if (hinstLib)
{
MyLockWorkStation = (int (__stdcall *)())
GetProcAddress(hinstLib, "LockWorkStation");
if (MyLockWorkStation != NULL)
(MyLockWorkStation) ();
}
FreeLibrary(hinstLib);
}
else
MessageBox(NULL, "This application requires"
" Windows 2000 Professional or higher!",
"Lock Workstation", MB_OK);
return 0;
}
This application has the same result as pressing Ctrl+Alt+Del and clicking Lock Workstation. | https://www.codeproject.com/Articles/10600/Lock-workstation?fid=187517&df=90&mpp=25&sort=Position&spc=Relaxed&tid=1255742 | CC-MAIN-2017-39 | en | refinedweb |
In today’s post, we are going to write a Twitter app that allows the user to type in a twitter username and display the tweets from the user. We will be using the new Twitter API v1.1 released late last year and use a couple of Telerik’s RadControls for Windows Phone 8. We will also make use of Joe Mayo’s excellent Linq2Twitter library to simplify things.
Let’s get started.
Launch Visual Studio 2012 and select, Visual C#-> Windows Phone -> RadControls for Windows Phone and give it a meaningful name. On the project configuration wizard screen, leave the default component selected as shown in Figure 1 and press next.
Figure 1: The First Screen to our RadControls for Windows Phone 8 App.
The second page in the project configuration wizard is shown in Figure 2.
Figure 2: The Second Screen to our RadControls for Windows Phone 8 App.
As you can tell from this screen, we have built into the template two of the most common pages in a professional Windows Phone app. The “About” and “How To” page. We have also added the ability to quickly add functionality such as error diagnostics, trial and rate reminders. The error diagnostics will trap any unhandled exceptions and the trial and rate reminders will prompt your users to either purchase the app or rate the app depending on if you desire this functionality or not. We are building a simple first app, so remove all the checkmarks except Diagnostics.
Our UI will consist of RadTextBox and RadDataBoundListBox. Both of these controls contain the needed functionality to get started quickly. A screenshot of the final app is shown in Figure 3.
Figure 3: Our Final Windows Phone 8 App.
Let’s take a look first at the completed XAML for our ContentPanel in MainPage.xaml:
<Grid x:
<Grid.RowDefinitions>
<RowDefinition Height="80*" />
<RowDefinition Height="533*" />
</Grid.RowDefinitions>
<telerikPrimitives:RadTextBox x:
<telerikPrimitives:RadTextBox.ActionButtonStyle>
<Style TargetType="telerikPrimitives:RadImageButton">
<Setter Property="ButtonShape" Value="Ellipse"/>
</Style>
</telerikPrimitives:RadTextBox.ActionButtonStyle>
</telerikPrimitives:RadTextBox>
<telerikPrimitives:RadDataBoundListBox Grid.
<telerikPrimitives:RadDataBoundListBox.ItemTemplate>
<DataTemplate>
<StackPanel Orientation="Horizontal" Height="110" Margin="10">
<Image Source="{Binding ImageSource}" Height="73" Width="73" VerticalAlignment="Top" Margin="10,10,8,10"/>
<TextBlock Text="{Binding Message}" Margin="10" TextWrapping="Wrap" FontSize="18" Width="320" />
</StackPanel>
</DataTemplate>
</telerikPrimitives:RadDataBoundListBox.ItemTemplate>
</telerikPrimitives:RadDataBoundListBox>
</Grid>
We are using RadTextBox to collect data from the end user. By using this control we can easily implement the following features without writing additional code:
RadDataBoundListBox will allow our users to have a powerful control that handles many items as well as Pull-To-Refresh functionality. This allows the end-user to request a data refresh by pulling the top edge of the scrollable content down and releasing it. Inside of the ItemTemplate, we are going to create a DataTemplate that contains an Image and a TextBlock. The Image will show the user’s twitter avatar and the TextBlock will contain the text of the tweet.
Now that we have defined our UI, let’s go ahead and pull in the Linq2Twitter library using NuGet. We can right-click on references and select, “Manage NuGet References” then type in linq2twitter and click install as shown in Figure 4.
Figure 4: Adding Linq2Twitter to our Windows Phone 8 App.
Once installed, we can check “References” and should see “LinqToTwitterWP”.
Twitter API 1.1 requires authentication on every API endpoint. That means that from now on you will need to create an app that contains your Consumer Key, Consumer Secret, Access Token and Access Token Secret. You can easily create an app by visiting. Once that is in place, you can get your keys by visiting your apps page.
Note that the developer will have to manually create their access tokens in the app’s settings page as shown in Figure 5.
Figure 5: The OAuth Settings Page.
Begin by creating a simple class called TwitterItem and adding the following two properties. Make sure you mark the class as public as shown below:
public class TwitterItem
{
public string ImageSource { get; set; }
public string Message { get; set; }
}
Switch over to our MainPage.xaml.cs and before our MainPage constructor, we will need to add in the following code:
SingleUserAuthorizer singleUserAuthorizer = new SingleUserAuthorizer()
{
Credentials = new SingleUserInMemoryCredentials()
{
ConsumerKey = "YOUR_CONSUMER_KEY",
ConsumerSecret = "YOUR_CONSUMER_SECRET",
TwitterAccessToken = "YOUR_ACCESS_TOKEN",
TwitterAccessTokenSecret = "YOUR_ACCESS_TOKEN_SECRET"
}
};
In this snippet, we are using Linq2Twitter to authenticate with Twitter about who we are and it will automatically determine what our permissions are. We can now drop in a method to Load Tweets once the user presses the search button (included in the RadTextBox) or refreshes the list with RadDataBoundListBox. The source code is listed below.
public void LoadTweets()
{
if (singleUserAuthorizer == null || !singleUserAuthorizer.IsAuthorized)
{
MessageBox.Show("Not Authorized!");
}
else
{
var twitterCtx = new TwitterContext(singleUserAuthorizer);
(from tweet in twitterCtx.Status
where tweet.Type == StatusType.User &&
tweet.ScreenName == txtUserName.Text
select tweet)
.MaterializedAsyncCallback(asyncResponse => Dispatcher.BeginInvoke(() =>
{
if (asyncResponse.Status != TwitterErrorStatus.Success)
{
MessageBox.Show("Error: " + asyncResponse.Exception.Message);
return;
}
lstTwitter.ItemsSource=
(from Status tweet in asyncResponse.State
select new TwitterItem
{
ImageSource=tweet.User.ProfileImageUrl,
Message=tweet.Text
})
.ToList();
}));
}
lstTwitter.StopPullToRefreshLoading(true);
}
private void txtUserName_ActionButtonTap(object sender, EventArgs e)
{
LoadTweets();
}
private void lstTwitter_RefreshRequested(object sender, EventArgs e)
{
LoadTweets();
}
We first check to see if we have authenticated properly and if we aren’t then inform the user. If we are authenticated then we are going to select the tweets with the username typed into our TextBox with an Async callback and add the results to our RadDataBoundListBox’s ItemSource property. Finally, we will turn off the Refresh loading animation from our RadDataBoundListBox.
Today, we saw just how quickly you could get up and running with a new Windows Phone 8 project by using a couple of our controls. Since we used our controls and a great open-source library, we were able to complete this project in just a couple of minutes. If you have any questions or comments, then please leave them below. You may also grab the source code for this project if you wish.
-Michael Crump (@mbcrump)
Special thanks to Lance McCarthy for his OAuth insight and Joe Mayo for reviewing the Linq2Twitter code. | http://www.telerik.com/blogs/create-a-twitter-app-using-radcontrols-for-windows-phone-8 | CC-MAIN-2015-32 | en | refinedweb |
Once you’ve created an index and added documents to it, you can search for those documents.
To get a whoosh.searching.Searcher object, call searcher() on your Index object:
searcher = myindex.searcher()
The Searcher object is the main high-level interface for reading the index. It has lots of useful methods for getting information about the index, such as lexicon(fieldname).
>>> list(searcher.lexicon("content"))
[u"document", u"index", u"whoosh"]
However, the most important method on the Searcher object is search(), which takes a whoosh.query.Query object and returns a Results object:
from whoosh.qparser import QueryParser

qp = QueryParser("content", schema=myindex.schema)
q = qp.parse(u"hello world")

with myindex.searcher() as s:
    results = s.search(q)
By default the results contains at most the first 10 matching documents. To get more results, use the limit keyword:
results = s.search(q, limit=20)
If you want all results, use limit=None. However, setting the limit whenever possible makes searches faster because Whoosh doesn’t need to examine and score every document.
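For example, reusing the searcher and the parsed query q from the snippet above:

with myindex.searcher() as s:
    # limit=None scores and collects every matching document
    all_results = s.search(q, limit=None)
    print(all_results.scored_length(), "documents scored")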
Since displaying a page of results at a time is a common pattern, the search_page method lets you conveniently retrieve only the results on a given page:
results = s.search_page(q, 1)
The default page length is 10 hits. You can use the pagelen keyword argument to set a different page length:
results = s.search_page(q, 5, pagelen=20)
The Results object acts like a list of the matched documents. You can use it to access the stored fields of each hit document, to display to the user.
>>> # Show the best hit's stored fields
>>> results[0]
{"title": u"Hello World in Python", "path": u"/a/b/c"}
>>> results[0:2]
[{"title": u"Hello World in Python", "path": u"/a/b/c"}, {"title": u"Foo", "path": u"/bar"}]
By default, Searcher.search(myquery) limits the number of hits to 20, So the number of scored hits in the Results object may be less than the number of matching documents in the index.
>>> # How many documents in the entire index would have matched?
>>> len(results)
27
>>> # How many scored and sorted documents in this Results object?
>>> # This will often be less than len() if the number of hits was limited
>>> # (the default).
>>> results.scored_length()
10
Calling len(Results) runs a fast (unscored) version of the query again to figure out the total number of matching documents. This is usually very fast but for large indexes it can cause a noticeable delay. If you want to avoid this delay on very large indexes, you can use the has_exact_length(), estimated_length(), and estimated_min_length() methods to estimate the number of matching documents without calling len():
found = results.scored_length()
if results.has_exact_length():
    print("Scored", found, "of exactly", len(results), "documents")
else:
    low = results.estimated_min_length()
    high = results.estimated_length()
    print("Scored", found, "of between", low, "and", high, "documents")
Normally the list of result documents is sorted by score. The whoosh.scoring module contains implementations of various scoring algorithms. The default is BM25F.
You can set the scoring object to use when you create the searcher using the weighting keyword argument:
from whoosh import scoring

with myindex.searcher(weighting=scoring.TF_IDF()) as s:
    ...
A weighting model is a WeightingModel subclass with a scorer() method that produces a “scorer” instance. This instance has a method that takes the current matcher and returns a floating point score.
See Sorting and faceting.
See How to create highlighted search result excerpts and Query expansion and Key word extraction for information on these topics.
You can use the filter keyword argument to search() to specify a set of documents to permit in the results. The argument can be a whoosh.query.Query object, a whoosh.searching.Results object, or a set-like object containing document numbers. The searcher caches filters, so if, for example, you use the same query filter with a searcher multiple times, the additional searches will be faster because the searcher will cache the results of running the filter query.
You can also specify a mask keyword argument to specify a set of documents that are not permitted in the results.
with myindex.searcher() as s:
    qp = qparser.QueryParser("content", myindex.schema)
    user_q = qp.parse(query_string)

    # Only show documents in the "rendering" chapter
    allow_q = query.Term("chapter", "rendering")
    # Don't show any documents where the "tag" field contains "todo"
    restrict_q = query.Term("tag", "todo")

    results = s.search(user_q, filter=allow_q, mask=restrict_q)
(If you specify both a filter and a mask, and a matching document appears in both, the mask “wins” and the document is not permitted.)
To find out how many results were filtered out of the results, use results.filtered_count (or resultspage.results.filtered_count):
with myindex.searcher() as s:
    qp = qparser.QueryParser("content", myindex.schema)
    user_q = qp.parse(query_string)

    # Filter documents older than 7 days
    old_q = query.DateRange("created", None, datetime.now() - timedelta(days=7))
    results = s.search(user_q, mask=old_q)

    print("Filtered out %d older documents" % results.filtered_count)
You can use the terms=True keyword argument to search() to have the search record which terms in the query matched which documents:
with myindex.searcher() as s:
    results = s.search(myquery, terms=True)
You can then get information about which terms matched from the whoosh.searching.Results and whoosh.searching.Hit objects:
# Was this results object created with terms=True?
if results.has_matched_terms():
    # What terms matched in the results?
    print(results.matched_terms())

    # What terms matched in each hit?
    for hit in results:
        print(hit.matched_terms())
Whoosh lets you eliminate all but the top N documents with the same facet key from the results. This can be useful in a few situations:
Whether a document should be collapsed is determined by the value of a “collapse facet”. If a document has an empty collapse key, it will never be collapsed, but otherwise only the top N documents with the same collapse key will appear in the results.
See Sorting and faceting for information on facets.
with myindex.searcher() as s:
    # Set the facet to collapse on and the maximum number of documents per
    # facet value (default is 1)
    results = s.collector(collapse="hostname", collapse_limit=3)

    # Dictionary mapping collapse keys to the number of documents that
    # were filtered out by collapsing on that key
    print(results.collapsed_counts)
Collapsing works with both scored and sorted results. You can use any of the facet types available in the whoosh.sorting module.
By default, Whoosh uses the results order (score or sort key) to determine the documents to collapse. For example, in scored results, the best scoring documents would be kept. You can optionally specify a collapse_order facet to control which documents to keep when collapsing.
For example, in a product search you could display results sorted by decreasing price, and eliminate all but the highest rated item of each product type:
from whoosh import sorting

with myindex.searcher() as s:
    price_facet = sorting.FieldFacet("price", reverse=True)
    type_facet = sorting.FieldFacet("type")
    rating_facet = sorting.FieldFacet("rating", reverse=True)

    results = s.collector(sortedby=price_facet,        # Sort by reverse price
                          collapse=type_facet,         # Collapse on product type
                          collapse_order=rating_facet  # Collapse to highest rated
                          )
The collapsing happens during the search, so it is usually more efficient than finding everything and post-processing the results. However, if the collapsing eliminates a large number of documents, collapsed search can take longer because the search has to consider more documents and remove many already-collected documents.
Since this collector must sometimes go back and remove already-collected documents, if you use it in combination with TermsCollector and/or FacetCollector, those collectors may contain information about documents that were filtered out of the final results by collapsing.
To limit the amount of time a search can take:
from whoosh.collectors import TimeLimitCollector, TimeLimit

with myindex.searcher() as s:
    # Get a collector object
    c = s.collector(limit=None, sortedby="title_exact")
    # Wrap it in a TimeLimitCollector and set the time limit to 10 seconds
    tlc = TimeLimitCollector(c, timelimit=10.0)

    # Try searching
    try:
        s.search_with_collector(myquery, tlc)
    except TimeLimit:
        print("Search took too long, aborting!")

    # You can still get partial results from the collector
    results = tlc.results()
The document() and documents() methods on the Searcher object let you retrieve the stored fields of documents matching terms you pass in keyword arguments.
This is especially useful for fields such as dates/times, identifiers, paths, and so on.
>>> list(searcher.documents(indexeddate=u"20051225"))
[{"title": u"Christmas presents"}, {"title": u"Turkey dinner report"}]
>>> print searcher.document(path=u"/a/b/c")
{"title": "Document C"}
These methods have some limitations:
It is sometimes useful to use the results of another query to influence the order of a whoosh.searching.Results object.
For example, you might have a “best bet” field. This field contains hand-picked keywords for documents. When the user searches for those keywords, you want those documents to be placed at the top of the results list. You could try to do this by boosting the “bestbet” field tremendously, but that can have unpredictable effects on scoring. It’s much easier to simply run the query twice and combine the results:
# Parse the user query
userquery = queryparser.parse(querystring)

# Get the terms searched for
termset = set()
userquery.existing_terms(termset)

# Formulate a "best bet" query for the terms the user
# searched for in the "content" field
bbq = Or([Term("bestbet", text) for fieldname, text in termset
          if fieldname == "content"])

# Find documents matching the searched for terms
results = s.search(bbq, limit=5)

# Find documents that match the original query
allresults = s.search(userquery, limit=10)

# Add the user query results on to the end of the "best bet"
# results. If documents appear in both result sets, push them
# to the top of the combined results.
results.upgrade_and_extend(allresults)
The Results object supports the following methods: | http://pythonhosted.org/Whoosh/searching.html | CC-MAIN-2015-32 | en | refinedweb |
pyramid_contextauth 0.7.1
Pyramid security extension to register multiple contexts based authentication policies.
A simple Pyramid extension to register context-based authentication policies. Introspectables for registered policies are added to the configuration and will appear in the debug toolbar with their associated contexts.
from pyramid.security import remember, forget
from pyramid.authentication import AuthTktAuthenticationPolicy


def includeme(config):
    config.include('pyramid_contextauth')
    config.register_authentication_policy(
        AuthTktAuthenticationPolicy('secret'),
        Context1,
    )
    config.register_authentication_policy(
        ContextAuthenticationPolicy(),
        (Context2, Context3),
    )


class Context1(object):
    pass


class Context2(object):
    pass


class Context3(object):
    pass


class ContextAuthenticationPolicy(object):

    def authenticated_userid(self, request):
        return self.unauthenticated_userid(request)

    def unauthenticated_userid(self, request):
        "A dummy example"
        return request.POST.get('userid')

    def effective_principals(self, request):
        if self.unauthenticated_userid(request):
            return ['User']
        return []

    def remember(self, request, principal, **kw):
        return remember(request, principal, **kw)

    def forget(self, request):
        return forget(request)
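To see where such a policy takes effect, here is a hypothetical sketch of a protected resource and view; the ACL, route name and view are invented for illustration and are not part of this package:

from pyramid.security import Allow, Authenticated
from pyramid.view import view_config


class PrivateContext(object):
    # Hypothetical ACL: only authenticated users get the 'view' permission.
    __acl__ = [(Allow, Authenticated, 'view')]

    def __init__(self, request):
        self.request = request


@view_config(route_name='private', context=PrivateContext,
             permission='view', renderer='json')
def private_view(context, request):
    # The authentication policy registered for this context type is the
    # one consulted when this view is matched.
    return {'userid': request.authenticated_userid}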
Changelog
0.7
- Policy checks on each resource lineage and get the first policy it gets.
- Add coverall in after_success of travis config.
0.6
- Removing decorator authentication_policy: extension should not instantiate authentication policy class internally.
0.5
- Registering same context to multiple policies raises a configuration error.
- Unregister old policy when overriding a context with another policy.
- Change register_authentication_policy and authentication_policy signatures.
0.4
- Add introspectables to config for registered authentication policies.
- Rename register_context to register_policy
0.3
- Break backward compatibility as ContextBasedAuthenticationPolicy.register_context now requires config instance as first argument.
- Add config.register_authentication_policy configuration directive which accepts a list of contexts.
- Use registry adpaters to register policies rather than a dict.
- Add a decorator authentication_policy to register policies when doing a config scan.
0.2.1
- Adjust requirements files and dependencies.
0.2
- Update dependencies by adding requirements files.
0.1.1
- Changed register_context interface which breaks compatibility with 0.0.3
0.0.3
- Commit configuration before returning from includeme.
0.0.2
- When not provided, authenticated_userid and effective_principals from super class CallbackAuthenticationPolicy are used.
0.0.1
- Initial version
- Downloads (All Versions):
- 29 downloads in the last day
- 228 downloads in the last week
- 810 downloads in the last month
- Author: Hadrien David
- License: BSD-derived ()
- Package Index Owner: hadrien
- DOAP record: pyramid_contextauth-0.7.1.xml | https://pypi.python.org/pypi/pyramid_contextauth/0.7.1 | CC-MAIN-2015-32 | en | refinedweb |