text
stringlengths 20
1.01M
| url
stringlengths 14
1.25k
| dump
stringlengths 9
15
⌀ | lang
stringclasses 4
values | source
stringclasses 4
values |
---|---|---|---|---|
HTTP API interface for üWave.
$ cnpm install u-wave-api-v1
REST API plugin for üWave, the collaborative listening platform.
Getting Started - API - Building - License
Note: üWave is still under development. Particularly the
u-wave-coreand
u-wave-api-v1modules will change a lot before the "official" 1.0.0 release. Make sure to always upgrade both of them at the same time.
npm install u-wave-api-v1
The module exports a middleware that can be used with express-style HTTP request handlers.
Creates a middleware for use with Express or another such library. The first parameter is a
u-wave-core instance. Available options are:
server- An HTTP server instance.
u-wave-api-v1uses WebSockets, and it needs an HTTP server to listen to for incoming WebSocket connections. An example for how to obtain this server from an Express app is shown below.
socketPort-.
import express from 'express'; import stubTransport from 'nodemailer-stub-transport'; import uwave from 'u-wave-core'; import createWebApi from 'u-wave-api-v1'; const app = express(); const server = app.listen(); const secret = fs.readFileSync('./secret.dat'); const uw = uwave({ secret: secret, }); const api = createWebApi(uw, { secret: secret, // Encryption secret server: server, // HTTP server recaptcha: { secret: 'AABBCC...' }, // Optional mailTransport: stubTransport(), // Optional onError: (req, error) => {}, // Optional }); app.use('/v1', api);
Returns a middleware that attaches the üWave core object and the üWave api-v1 object to the request. The
u-wave-core instance will be available as
req.uwave, and the
u-wave-api-v1 instance will be available as
req.uwaveApiV1. This is useful if you want to access these objects in custom routes, that are not in the
u-wave-api-v1 namespace. E.g.:
app.use('/v1', api); // A custom profile page. app.get('/profile/:user', api.attachUwaveToRequest(), (req, res) => { const uwave = req.uwave; uwave.getUser(req.params.user).then((user) => { res.send(`<h1>Profile of user ${user.username}!</h | https://developer.aliyun.com/mirror/npm/package/u-wave-api-v1 | CC-MAIN-2021-04 | en | refinedweb |
Tuesday, August 24, 2010
NCache 3.8 Service Pack 1 (SP1) contains important fixes and enhancements. The most important and demanded feature added in this release is the support of .Net frame work 4.0 The API is completely compatible with the 3.8 release version and applications can upgrade without re-building/re-compiling the application.
Following are some enhancements made in this release:
The code base of NCache cache server has been converted to .NET 4.0 and the NCache client is available in both .NET 2.0 and 4.0 versions.
There is an improvement in client cache management through NCache Manager where project files will contact to client nodes on refresh option and this has improved fast loading of NCache Manger project file.
We have resolved this issue by copying all the assemblies in the NCache bin/assembly folder and now Visual Studio does not have to locate the dependent assemblies in GAC. This has resolved the issue.
NCache samples are now builded with visual studio 2008.
VeriSign issue, default in service configuration file should be
generatePublisherEvidence. The enhancement is made.
ReadThru provider interface signature has been modified to support maximum features of NCache. There is new structure introduced under the namespace “Alachisoft.NCache.Runtime.Caching” called ProviderCacheItem which is similarly to the CacheItem. You can now easily specify expirations, tags, eviction hints, dependencies etc.
New interfaces
<p>public void LoadFromSource(string key, out ProviderCacheItem cacheItem)</p> <p>public Dictionary
LoadFromSource(string[] keys)</p>
Now, you can specify IsResyncExpiredItem property in Cache Loader so that the expired items can be reloaded automatically.
NCache is not supporting the latest version of NHibernate 2.1.2. We have also added region support in this release. NHibernate sample application is also modified with NHibernate regions support.
+1 (214) 764-6933 (US)
+44 20 7993 8327 (UK) | https://www.alachisoft.com/resources/release-notes/release-notes-ncache-3.8-sp1.html | CC-MAIN-2021-04 | en | refinedweb |
Visual Basic .NET
Welcome to VB.NET section of C# Corner. In this section, you will find various VB.NET related source code samples, articles, tutorials, and tips using C# language. Learn VB.NET programming with C# Corner.
Articles
(520)
Blogs
(30)
Resources
(1)
Videos
(0)
News
(1)
All
Articles
Blogs
News
Videos
RECENT POSTS
Drawing rubber-band lines and shapes in VB.NET
I would like to show how we can draw rubber-band lines and shapes in GDI+ with just a few lines of code.
nildo
Mar 06, 2019
Getting an External IP Address Locally using VB.Net
This short article shall address the easiest way possible to get your external IP address (and local/internal IP address) using VB.Net.
Scott Lysle
Mar 05, 2019
Export GridView Data to PDF Format in VB.NET
In this article we will know how to export gridview data to pdf format.
Satyapriya Nayak
Mar 05, 2019
Tic Tac Toe Game in VB.NET
TicTacToe is a demonstration of the AI game playing minimax algorithm. The game plays out every possible combination of moves from each position and consequently is unbeatable.
Paul Lockwood
Feb 26, 2019
Registration Form using Captcha Image Implementation in VB.NET
In this article we will create a registration page with captcha implemented to it.
Satyapriya Nayak
Feb 26, 2019
Search records using textbox in VB.NET
In this article we will search records from database using textbox and show respective data in the datagridview in three different ways.
Satyapriya Nayak
Feb 26, 2019
SmtpClient Class in ASP.NET using VB.NET
In this article we will discuss SmtpClient Class. SmtpClient Class allows the developer to send emails (electronic mails). SmtpClient Class is available in System.Net.Mail Namesapce.
Abhimanyu K Vatsa
Feb 07, 2019...
Scott Lysle
Jan 23, 2019
Integrating Java and .Net Framework
To evaluate the possibilities of Java and .NET framework convergence.The article begins by briefly probing what constitutes the Java platform and .NET framework.
Ashish Banerjee
Jan 21, 2019
Display XML File using XML Control in VB.NET
In this article we will know how to Display xml file using xml control.
Satyapriya Nayak
Dec 18, 2018
Display and Hiding SIP on Pocket PC in VB.NET
When you get your hands on a Pocket PC for the first time you have to wonder just how the heck do you enter information.
John O Donnell
Dec 18, 2018.
Dipal Choksi
Dec 04, 2018...
Scott Lysle
Dec 04, 2018
Embed Word in a Web Page with an Easy VB Custom Control in VB.NET
This article describes an approach to displaying word documents within a web page using a simple custom server control.
Scott Lysle
Nov 29, 2018
String Functions In VB
String functions are mainly used to manipulate the string in Visual basic.
Dhaval Patel
Oct 16, 2018
Using Lightbox in an ASP.NET Application in VB.NET
The article describes Lightbox as, ...a simple, unobtrusive script used to overlay images on the current page. It delivers a nice, professional looking method for displaying images as overlays thro...
Scott Lysle
Sep 05, 2018
Make DropDown CheckedListBox in VB.net
In this article, we will see how to make DropDown CheckedListBox in VB.net.
Hirendra Sisodiya
Aug 29, 2018
Create Login Application In Excel Macro Using Visual Basic
In this article you will learn how to create Login Application in Excel Macro using Visual Basic for Applications.
Karthikeyan K
Jul 25, 2017
CDO Object in Web Services using VB.NET
This article suggest how to resolve the issue Could not access CDO. Message while using SMTP to send email on web services.
Kumar Sethuraman
Jun 26, 2017
What is VB.NET Namespace
This article describe namespaces in VB.Net. Group of code having a specific name is a Namespace. In a Namespace the groups of components are somehow related to each other. Namespaces are similar in...
Sanjeev Kumar
Jun 26, 2017
MicroStation V8 VBA Programming
Getting Started with VBA Programming. The recent release of MicroStation V8 introduces another automation feature, Visual Basic for Applications (VBA). This new implementation of VBA provides rapid...
Daniel Bollavarapu
Jun 23, 2017 F...
Alin
Jun 20, 2017
MVC Version(s) With VS 2015
This article will explain how to add different versions of MVC in Visual Studio 2015.
Sumit Sharma
May 08, 2017
Read External Program Text Using VB.Net
This article shows how to create a simple program to read and edit external program(Another Application) text, without modifying the external program(Another Application or Another Program) source ...
Syed Shanu
Apr 26, 2017
How To Make Your Computer Speak To You While Opening It
In this blog, you will learn how to make your computer speak to you while opening it.
Victor Franklin
Dec 08, 2016
View More
No Records Available. | https://www.c-sharpcorner.com/technologies/visual-basic-dot-net | CC-MAIN-2021-04 | en | refinedweb |
C++ Program to Compute Maximum Sum of the Sub-Array
Hello Everyone!
In this tutorial, we will learn about the most efficient method to compute the maximum sum of the sub-arrays ,> #include <bits/stdc++.h> using namespace std; int maxSubArray(int m[], int n) { int sum = 0, s = 0, i; for (i = 0; i < n; i++) { s += m[i]; if (s < 0) s = 0; sum = max(s, sum); } //Sorting the array using the system defined sort() method sort(m, m + n); //if all the elements are negative then return the largest element of the array if (m[n - 1] < 0) sum = m[n - 1]; return sum; } int main() { cout << "\n\nWelcome to Studytonight :-)\n\n\n"; cout << " ===== Most Efficient Program to find the Maximum sum of Subarray ===== \n\n"; int i, n, sum = 0; int arr[] = {-2, 1, -3, 4, -1, 2, 1, -5, 4}; //number of elements in the array n = sizeof(arr) / sizeof(arr[0]); //Printing the elements of the array cout << "\n\n The " << n << " elements of the array are : "; for (i = 0; i < n; i++) { cout << arr[i] << " "; } //Calling a method to find the maximum sum of subarray sum = maxSubArray(arr, n); cout << "\n\n\nThe Maximum sum of the Subarrays of the given array is: " << sum; cout << "\n\n\n"; return 0; }
Output:
We hope that this post helped you develop a better understanding of the logic to compute the maximum sum of the sub-arrays of an array in C++. For any query, feel free to reach out to us via the comments section down below.
Keep Learning : ) | https://studytonight.com/cpp-programs/cpp-program-to-compute-maximum-sum-of-the-subarray | CC-MAIN-2021-04 | en | refinedweb |
Parent Directory
|
Revision Log
gather function fixed.
UPDATE THE COPYRIGHT DATES
Should fix #381 - Trilinos support for periodic boundaries. An assumption was made which doesn't hold when periodic boundaries are used. Removed that unnecessary assumption which fixes matrix size and indices. Obviously we are lacking tests for this so will open up a ticket.
type fixes in debug mode and savanna lib reference fix.
trying to support 2nd order elements in finley with trilinos. WIP
moved Distribution struct to escript and updated domains accordingly. We can now build the full escript suite without paso (-:
moved reference to JMPI down to Coupler.
type change in paso.
merging trilinos branch to trunk. We can now build with trilinos and use it instead of paso for single PDEs. There are some more things to be done...
Relicense all the things!
Fixing some namespaces and includes
Fixed a corner case in finley where some ranks have no reduced DOFs. This came to light when bumping the number of testing ranks to 20 on Savanna.
fixed another current exception hang.
Bye bye esysUtils. Also removed first.h as escript/DataTypes.h is now required everywhere and fulfills that role by including a boost python header first.
moved esys MPI to escript.
eliminated Esys_setError() et al from finley. We do have the issue of potentially throwing exceptions in some ranks and not others but I'd claim that was the case before in various places..
64-bit index fixes in finley.
index fixes and some code cleanup in finley. Introduced index_size attribute to finley dump (netCDF) files.
Work to get finley compiling with long indices.
Fixing institution name to comply with policy
Changes brought across from the debian preparation branch.
Fix for python compile warnings
Updating all the dates
release changes
fixing more cases of indexing empty vectors
fixing some indexing into empty vectors, related to #291
Merging ripley diagonal storage + CUDA support into trunk. Options file version has been incremented due to new options 'cuda' and 'nvccflags'.
compiler help
more on CAP.
This commit is brought to you by the number 4934 and the tool "meld". Merge of partially complete split world code from branch.
Coupler/Connector shared ptrs.
paso::SharedComponents now header-only and shared ptr managed.
paso::Distribution instances are now managed by a boost::shared_ptr, methods are all inline.
paso::Coupler and paso::Connector.
I changed some files. Updated copyright notices, added GeoComp.
Eliminated all const_cast<Data*> hacks in ripley and finley now that Data.getSampleDataRO returns a const pointer.
Steube!!!!!
Remove bool_t Part of random.
Hopefully addresses mantis721.
finley now uses Data objects directly instead of going through the C wrapper.
finley ElementFile is now a class....
Merging dudley and scons updates from branches
Don't panic. Updating copyright stamps
Updating copyright notices
getListOfTags method added to FunctionSpace class
*** empty log message ***
*** empty log message ***
*** empty log message ***
Initial revision
This form allows you to request diffs between any two revisions of this file. For each of the two "sides" of the diff, enter a numeric revision. | https://svn.geocomp.uq.edu.au/escript/trunk/finley/src/NodeFile.cpp?view=log&pathrev=6620 | CC-MAIN-2018-26 | en | refinedweb |
Pre First Article of Share PointFor past some days I have been writing and recording videos in design patterns, UML, FPA, Enterprise blocks and lot you can watch the videos at You can download my 400 .NET FAQ EBook from
When you want to create any site in SharePoint we need to prepare a site collection and define site inside the site collection. Ok, 'Look:-. We have people in our organizations.. People are assigned task.. To complete task we need to exchange data.. We also need to plan/monitor tasks..:-. They are pages which are shared across sites like 'settings.aspx' , which will helps us set generic properties across sites in a site collection.. The second important part is that we need to save application pages in 'C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\LAYOUTS' folder. If you browse to the folder you will find SharePoint's own application pages.
Ok, what we will do is that to build confidence let's make a simple page called as 'Simple 'simplepage.aspx' source code as shown below. We need to do the following:-. First refer the assembly using the 'Assembly directive.. Refer the masterpage files as 'Application.master'.. Import the sharepoint namespace. If we had used the behind code we would have imported this in the behind code itself.... Second we need to use the assembly directive to refer the behind code.
Once we have referred the Assembly and set the Page attributes. Its time to fill the content in the placeholders defined in the master page 'Application:-. All features needs to be copied in "C:\Program Files\Common Files\Microsoft Shared\Web server extensions\12\Template\FEATURES\" directory. Microsoft Share reads the features from this directory. If you open the directory you can find pre-installed features by Share Point as shown below.
...
To generate a new GUID click on Tools àCreate GUID and click 'New GUID'. Tools menu you will get from within the IDE. We have marked the GUID value which you need to copy and paste in the 'feature.xml 'file. 'SPFeatureReceiver' class and implement all the events.
As a sample in the 'FeatureActivated' event we have set the description and title of the website.
In 'FeatureDeactivating' we have reverted back the title and description. àand go to 'C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN' directory.
To install the feature run the below command using 'STSADM'. Please note that you need to specify the relative directory path and not the physical path of 'Feature.xml' file.
To ensure that SharePoint registers about this feature run IISRESET on the machine.Step 5:- Now click on the Site Action àSite Settings àSite Features and Activate the feature.
Now you can see your feature enabled in the site actions menu. If you click on the feature i.e 'Display.
If you want only administrators to view the features set RequireSiteAdministrator="True" as shown in the below 'ElementManifest.XML' file.
The source code has the following things:-. Simple SharePoint behind code.. Inline Share Point behind code. SharePoint feature code
Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend | http://www.dotnetspark.com/kb/246-sharepoint-quick-start-faq-part-2.aspx | CC-MAIN-2018-26 | en | refinedweb |
Results 1 to 1 of 1
- Join Date
- Sep 2012
- Location
- Nashville, TN
- 100
Reverse Proxy setup and configuration help
Running a Reverse Proxy in Apache
mod_proxy - Apache HTTP Server Version 2.4
What I am trying to do is use a reverse proxy to access my TFS server, Jira Server and Confluence server from a single namespace.
IE dev.example.com/ and dev.example.com/tfs will both go to my internal tfs server tfs.internal.local
dev.example.com/jira will go to
dev.example.com/confluence will go to
What So Far I have been trying to get jira working. What I have so far is the following from a virtualhost prespective. I get the page but when I look at the access log I see entries like the
[04/Sep/2014:14:24:42 -0500] "GET /s/d41d8cd98f00b204e9800998ecf8427e-T/en_US-5y2yjx/6329/4/1.2.11/_/download/batch/com.atlassian.labs.hipchat.hipchat-for-jira-plugin:resources/com.atlassian.labs.hipchat.hipchat-for-jira-plugin:resources.js HTTP/1.1" 404 480 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.78.2 (KHTML, like Gecko) Version/7.0.6 Safari/537.78.2"
Code:
<VirtualHost *:80> ServerName dev.example.com ServerAlias dev <Proxy *> Order deny,allow Allow from all </Proxy> ProxyRequests Off # Working Solution ProxyPass /jira <Location /jira> ProxyHTMLExtended On SetOutputFilter proxy-html ProxyPassReverse RequestHeader unset Accept-Encoding </Location> </VirtualHost>
How can I get that to work under the <Location /jira> section? | http://www.linuxforums.org/forum/servers/202725-reverse-proxy-setup-configuration-help.html | CC-MAIN-2018-26 | en | refinedweb |
After you've installed the leantouch, there's a lean> touch in gameobject.
2. An example of creating prefab by clicking on a screen.
2. 1 building a scene to join leantouch.
2. 2 build a square, drag to asset, turn to prefab.
2. 3 in leantouch ( gameobject ), build leanspawn ( component ) and drag the block into prefab.
2. 4 continue to establish the action of leanfingerdown, selecting objects for the clicked event ( leantouch, I. E. Leanspawn ), and selects leanspawn. Spawn in function.
A simple format for 3. C #.
Distance in 2. 5 leanspawn is the distance to the camera.
is done, and clicking on the screen will appear a box.
3. 1 increase click event.
3. 2 real-time access to finger information.
3. 3 Lean.Touch. leangesture can implement multiple operations, such as rotation.
3. 4 leaving UI no action.
3. 5 uses using lean, touch;.
A combination of 3. 6 provides multiple operatio &.
Leangesture. Getpinchscale ( leantouch. Getfingers ( true ) ).
4. 1 edit from project settings to script execution order.
4. 2 setting order.
5. 1 leancameramove.
Let the camera be dragged by finger.
5. 2 leancamerazoom.
So that the camera can be.
5. 3 leancameramovesmooth.
Allows the camera to be dragged by finger, and the effect is smoother.
5. 4 leancamerazoomsmooth.
The camera can be amplified by, and the effect is smoother.
5. 5 leanpitchyaw.
Let the camera go around and watch around space.
5. 5 leanpitchyawsmooth.
Makes the camera orbit around the space, and the effect is smoother.
6. 1 leanfingertap.
Tap to unde & tand as when requiredtapinterval is 2
When ignoreifovergui is set, it can be invalidated on the UI
6. 2 leanspawn.
Set a precast body and set the distance from the camera
6. 3 leandestroy.
Set the time to destroy
6. 4 leanfingerheld.
Long press
6. 5 leanfingerline.
Drag to create a line
6. 6 leanmultitap.
by connecting a script, passing count, highest,
6. 6 leanfingertrail.
Keep track of your fingers.
6. 7 leanopenurl.
Links can be implemented by button
6. 8 leanpressselect to set the object to be clicked and dragged.
After leanpressselect settings, you'll be saved in currentselectables by clicking multiple points
6. 9 leanselectablespriterenderercolor.
By adding a"leanselectablespriterenderercolor"script on a clicked object, you can click on the, and note that the clicked needs to join the circlecollider2d 3d collision component, and SpriteRenderer. And leanselectable is also required.
6. 10 LeanSelectable.
By adding an action to select, leanselectable can be used to set leandestroy. Destroynow to remove.
6. 11 LeanTranslate.
Adding leantranslate to the object, with the above components, can implement object drag, leantranslatesmooth can smooth points.
6. 12 implements 3d objects to be clicked and dragged.
In leanpressselect, selectusing set raycast,
A leanselectablerenderercolor can set the color change of a 3d object.
6. 13 LeanSelectableTranslateInertia3D.
A leanselectabletranslateinertia 3d can combine a rigid body to set a drag.
6. 14 LeanTapSelect.
Choose an object for an object, and the selected object is recorded in currentselectable, which can only be selected
6. 15 LeanRotate.
Rotate, set x, y, z axis corresponding rotation
6. 16 LeanScale.
Zoom in.
7. 1 leanswipedirection4.
Slide in four directions
7. 2 leanswipedirection8.
Slide in eight directions
7. 3 leanswiperigidbody2d/3d leanswiperigidbody2d/3dnorelease.
Rigid body sliding
8. 1 leantouchevents.
9. How to make an object can be pushed by tap.9. How to make an object can be pushed by tap.
using UnityEngine;using System.Collections.Generic; namespace Lean.Touch { //This script will hook into event LeanTouch event, and spam the console with the informationpublicclass LeanTouchEvents : MonoBehaviour { protectedvirtualvoidOnEnable() { //Hook into the events we need; } protectedvirtualvoidOnDisable() { //Unhook the events; } publicvoidOnFingerDown(LeanFinger finger) { Debug.Log("Finger" + finger.Index + " began touching the screen"); } publicvoidOnFingerSet(LeanFinger finger) { Debug.Log("Finger" + finger.Index + " is still touching the screen"); } publicvoidOnFingerUp(LeanFinger finger) { Debug.Log("Finger" + finger.Index + " finished touching the screen"); } publicvoidOnFingerTap(LeanFinger finger) { Debug.Log("Finger" + finger.Index + " tapped the screen"); } publicvoidOnFingerSwipe(LeanFinger finger) { Debug.Log("Finger" + finger.Index + " swiped the screen"); } publicvoidOnGesture(List<LeanFinger> fingers) { Debug.Log("Gesture with" + fingers.Count + " finger(s)"); Debug.Log(" pinch scale:" + LeanGesture.GetPinchScale(fingers)); Debug.Log(" twist degrees:" + LeanGesture.GetTwistDegrees(fingers)); Debug.Log(" twist radians:" + LeanGesture.GetTwistRadians(fingers)); Debug.Log(" screen delta:" + LeanGesture.GetScreenDelta(fingers)); } } }
9. 1 increased a leantouch.
9. 2 adds a leantapselect and selects it to be 2d 3d ( select using ).
9. 3 on object, join leanselectable and ensure that collider is in.
9. 4 can use some of the functions of leanselect, or you can add actions to the event.
Pull the object with leanselectable and select a handler to get it. | https://www.dowemo.com/article/46327/unity-leantouch-notes | CC-MAIN-2018-26 | en | refinedweb |
Often you want formatted text, but you're not interested in writing it to Stdout or Stderr.
The existing CSharp Formatter tutorial doesn't make it obvious that string formatting is available in any other context than the console. But it is!! Anywhere you use strings, you can also create formatted strings!!
The interesting code is in the tango.text.convert.Layout module, in a templated class called Layout(T). You can instantiate the template with char, dchar, or wchar types, and use the OpCall syntax to format your strings, like this:
import tango.text.convert.Layout;
// ...
auto charFormatter = new Layout!(char);
char[] formattedString = charFormatter("This is the thing: '{}'", thing);
auto wcharFormatter = new Layout!(wchar);
wchar[] formattedWString = wcharFormatter ("This is the thing: '{}'", thing);
auto dcharFormatter = new Layout!(dchar);
dchar[] formattedDString = dcharFormatter ("This is the thing: '{}'", thing);
You might consider the sprint() method also, which avoids heap activity. Also, if you're already using Stdout or Stderr, it can be simpler to apply the Layout instance exposed there instead:
char[256] out = void;
auto content = Stdout.layout.sprint (out, "This is the thing: '{}'", thing);
One might also utilize the locale enabled extensions in a similar manner, by using tango.text.locale.Locale instead of Layout. Locale is a derivative of Layout, so all formatting methods are common with Locale adding support for more sophisticated formatting options along with culture-specific currency, time and date consideration:
import tango.text.locale.Locale;
auto layout = new Locale (Culture.getCulture ("fr-FR"));
auto formattedString = layout ("This is the date: {}", DateTime.now);
It's also possible to make Stdout and Stderr locale-aware, by replacing the (shared) layout instance:
Stdout.layout = new Locale (Culture.getCulture ("fr-FR")); | http://www.dsource.org/projects/tango/wiki/TutCSharpFormatterComments | CC-MAIN-2018-26 | en | refinedweb |
Tango offers a built-in ability to get stacktraces and visualize them.
This is performed in the tango.core.stacktrace.* modules.
Support is still in evolution, in general you need to at least compile with debug info (-g) to have the function names. See the platform dependent hints for more info.
Stack traces can be useful to know where the exception originated and its context.
You can activate stack tracing simply with
import tango.core.stacktrace.TraceExceptions;
Then if you print the exception with something like this
e.writeOut((char[] s) { Stdout(s); });
Stdout.flush();
you will see the stacktrace.
To find the name of a function pointer you can use one of
char[] nameOfFunctionAt(void* addr, char[] buf);
char[] nameOfFunctionAt(void * addr);
for delegates you can extract the address with .funcptr
tango.core.stacktrace.Demangler has a demangler that can be used to demangle D names.
demangler.demangle(myMangledString);
one can even allocate new demangler with different verbosity levels, or modify the verbosity of the default demangler
You need to compile without optimizations and with -g to get the stack traces.
A dbginfo.dll is needed, but that should be part of windows
Stacktracing works also without -g, but the names are then missing and one should recover them later.
The symbol names are returned, but line numbers are missing.
Tracing might have problems with nested functions (that lead to nested stackframes that interrupt the stacktracing procedure).
Stacktracing works also without -g, but symbol names are not resolved.
The function names and addresses can be recovered with the addr2line utility.
At the moment there is no easy to use D program to parse a stacktrace, call addr2line and return demangled function names and file/number, a contribution would be appreciated.
The main functions used to get a stack trace are
/// builds a backtrace of addresses, the addresses are addresses of the *next* instruction,
/// *return* addresses, the most likely the calling instruction is the one before them
/// (stack top excluded)
extern(C) size_t rt_addrBacktrace(TraceContext* context, TraceContext *contextOut,
size_t*traceBuf,size_t bufLength,int *flags);
and
/// tries to sybolize a frame information, this should try to build the best
/// backtrace information, if possible finding the calling context, thus
/// if fInfo.exactAddress is false the address might be changed to the one preceding it
/// returns true if it managed to at least find the function name
extern(C) bool rt_symbolizeFrameInfo(ref Exception.FrameInfo fInfo,
TraceContext* context,char[]buf);
both functions can be changed with
extern(C) void rt_setAddrBacktraceFnc(AddrBacktraceFunc f);
and
extern(C) void rt_setSymbolizeFrameInfoFnc(SymbolizeFrameInfoFnc f);
Finally the printing format of a frame function can be changed setting Exception.FrameInfo.defaultFramePrintingFunction.
Improvements are welcome. | http://www.dsource.org/projects/tango/wiki/TutStackTrace | CC-MAIN-2018-26 | en | refinedweb |
Language in C Interview Questions and Answers
Ques 71. How to set the system date through a C program ?
Ans. We can set the system date using the setdate( ) function as shown in the following program. The function assigns the current time to a
structure date.
#include "stdio.h"
#include "dos.h"
main( )
{
struct date new_date ;
new_date.da_mon = 10 ;
new_date.da_day = 14 ;
new_date.da_year = 1993 ;
setdate ( &new_date ) ;
}
Is it helpful? Add Comment View Comments
Ques 72. How can I write a general-purpose swap without using templates?
Ans. Given below is the program which uses the stringizing preprocessor directive ## for building a general purpose swap macro which can swap two integers, two floats, two chars, etc.
#define swap( a, b, t ) ( g ## t = ( a ), ( a ) = ( b ), ( b ) = g ## t )
int gint;
char gchar;
float gfloat ;
main( )
{
int a = 10, b = 20 ;
char ch1 = 'a' , ch2 = 'b' ;
float f1 = 1.12, f2 = 3.14 ;
swap ( a, b, int ) ;
printf ( "\na = %d b = %d", a, b ) ;
swap ( ch1, ch2, char ) ;
printf ( "\nch1 = %c ch2 = %c", ch1, ch2 ) ;
swap ( f1, f2, float ) ;
printf ( "\nf1 = %4.2f f2 = %4.2f", f1, f2 ) ;
}
swap ( a, b, int ) would expand to,
( gint = ( a ), ( a ) = ( b ), ( b ) = gint )
Is it helpful? Add Comment View Comments
Ques 73. What is a heap ?
Ans. Heap is a chunk of memory. When in a program memory is allocated dynamically, the C run-time library gets the memory from a collection of unused memory called the heap. The heap resides in a program's data segment. Therefore, the amount of heap space available to the program is fixed, and can vary from one program to another.
Is it helpful? Add Comment View Comments
Ques 74. How to obtain a path of the given file?
Ans. The function searchpath( ) searches for the specified file in the subdirectories of the current path. Following program shows how to make use of the searchpath( ) function.
#include "dir.h"
void main ( int argc, char *argv[] )
{
char *path ;
if ( path = searchpath ( argv[ 1 ] ) )
printf ( "Pathname : %s\n", path ) ;
else
printf ( "File not found\n" ) ;
}
Is it helpful? Add Comment View Comments
Ques 75. Can we get the process identification number of the current program?
Ans. Yes! The macro getpid( ) gives us the process identification number of the program currently running. The process id. uniquely identifies a program. Under DOS, the getpid( ) returns the Program Segment Prefix as the process id. Following program illustrates the use of this macro.
#include <stdio.h>
#include <process.h>
void main( )
{
printf ( "The process identification number of this program is %X\n",
getpid( ) ) ;
}
Is it helpful? Add Comment View Comments? | http://www.withoutbook.com/Technology.php?tech=11&page=15&subject= | CC-MAIN-2020-16 | en | refinedweb |
Continuous collision detection is on for colliding with static mesh geometry.
Collisions will be detected for any static mesh geometry in the path of this Rigidbody, even if the collision occurs between two FixedUpdate steps. Static mesh geometry is any MeshCollider which does not have a Rigidbody attached. This also prevent Rigidbodies set to ContinuousDynamic mode from passing through this Rigidbody.
//This script allows you to switch collision detection mode at the press of the space key //Attach this script to a GameObject //Click the GameObject, go to its Inspector and click the Add Component Button. Then, go to Physics>Rigidbody.
using UnityEngine; using UnityEngine.UI;
public class Example : MonoBehaviour { Rigidbody m_Rigidbody;
void Start() { m_Rigidbody = GetComponent<Rigidbody>(); }
public void Update() { //Press the space key to switch the collision detection mode if (Input.GetKeyDown(KeyCode.Space)) SwitchCollisionDetectionMode(); }
//Switch between the different Collision Detection Modes void SwitchCollisionDetectionMode() { switch (m_Rigidbody.collisionDetectionMode) { //If the current mode is continuous, switch it to continuous dynamic mode case CollisionDetectionMode.Continuous: m_Rigidbody.collisionDetectionMode = CollisionDetectionMode.ContinuousDynamic; break; //If the current mode is continuous dynamic, switch it to discrete mode case CollisionDetectionMode.ContinuousDynamic: m_Rigidbody.collisionDetectionMode = CollisionDetectionMode.ContinuousSpeculative; break;
// If the curren mode is continuous speculative, switch it to discrete mode case CollisionDetectionMode.ContinuousSpeculative: m_Rigidbody.collisionDetectionMode = CollisionDetectionMode.Discrete; break;
//If the current mode is discrete, switch it to continuous mode case CollisionDetectionMode.Discrete: m_Rigidbody.collisionDetectionMode = CollisionDetectionMode.Continuous; break; } } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/ScriptReference/CollisionDetectionMode.Continuous.html | CC-MAIN-2020-16 | en | refinedweb |
In my previous posts, We learned with examples of various consumer interfaces as follows
Primitive Consumer functional interfaces are defined in java.util.function package.
It has an only single abstract method which takes object value and other numeric value and results in nothing.
This accepts two values as input like BiConsumer interface. All functional interfaces can be used as a variable assigned with lambda expression or method reference
accept(T t, integer value) takes two arguments
t is an object and value is an integer
ObjIntConsumer<T>java.util.function.ObjIntConsumer is a functional interface. It has a single abstract method
accept(T t, integer value) takes two arguments
t is an object and value is an integer
Method Reference exampleMethod reference is declared and called with double colon operator, This assigned with Consumer, calling accept() method with two arguments and returns nothing.
The output of the above-generated code isThe output of the above-generated code is
import java.util.function.ObjIntConsumer; public class MethodReferenceExample { static void append(String str, int value) { System.out.println(str+value); } public static void main(String[] args) { ObjIntConsumer<String> IntToDoubleFunction = MethodReferenceExample::append; IntToDoubleFunction.accept("Hello",4); } }
Hello4
Lambda Expression ExampleHere lambda expression declared by passing two parameters and assigned with ObjIntConsumer interface. Calling accepts with Object and int value and returns nothing.
Output isOutput is
ObjDoubleConsumer <String> objDoubleConsumer = ( v1 , v2 ) -> { System.out.println(v1 + " " +v2);}; objDoubleConsumer.accept("object",1d);
test 1
ObjLongConsumer<T>java.util.function.ObjLongConsumer is a functional interface. It has single abstract method - accept(Object,primitive value), Takes object and integer value as input and returns no result
The output of the above code execution isThe output of the above code execution is
ObjLongConsumer <String> objLongConsumer = ( v1 , v2 ) -> { System.out.println(v1 + " " +v2);}; objLongConsumer.accept("object",4l);
object 4
EmoticonEmoticon | https://www.cloudhadoop.com/2018/08/java8-numeric-object-consumer-examples.html | CC-MAIN-2020-16 | en | refinedweb |
Excel Library for C# and VB.NET applications
The fastest way to get started with the GemBox.Spreadsheet library is by exploring our collection of C# and VB.NET examples. These are live examples that show the supported features and APIs that can be used to achieve various Excel-related tasks with the GemBox.Spreadsheet component.
System Requirements
GemBox.Spreadsheet requires only .NET, it doesn't have any other dependency.
You can use it on:
- .NET Framework 3.5 - 4.8
- .NET Core 3.0
- Platforms that implement .NET Standard 2.0 or higher.
Hello World
The first step in using the GemBox.Spreadsheet library is to add a reference to GemBox.Spreadsheet.dll within your C# or VB.NET project. There are three ways to do that.
a) Add from NuGet.
You can add GemBox.Spreadsheet as a package using the following command from the NuGet Package Manager Console:
Install-Package GemBox.Spreadsheet
Or you can search and add GemBox.Spreadsheet from the NuGet Package Manager.
b) Add from Setup.
You can download the GemBox.Spreadsheet Setup from this page. After installing the setup, you can add a reference to GemBox.Spreadsheet.dll from the Global Assembly Cache (GAC).
c) Add from a DLL file.
You can download a GemBox.Spreadsheet.dll file from this page and add a reference by browsing to it.
The second step is to add a directive for the GemBox.Spreadsheet namespace.
For a C# project, use:
using GemBox.Spreadsheet;
For a VB.NET project, use:
Import GemBox.Spreadsheet
The third step is to set the license key to use GemBox.Spreadsheet in one of its working modes.
To use a Free mode in a C# project, use:
SpreadsheetInfo.SetLicense("FREE-LIMITED-KEY");
To use a Free mode in a VB.NET project, use:
SpreadsheetInfo.SetLicense("FREE-LIMITED-KEY")
You can read more about GemBox.Spreadsheet's working modes on the Evaluation and Licensing help page.
The last step is to write your application-specific Excel workbook code, like the following example code, which shows how to create a simple Excel workbook. It shows how to initialize the GemBox.Spreadsheet content model, populate some cells, and then save an
ExcelFile object to a file in the desired format.
using GemBox.Spreadsheet; class Program { static void Main() { // If using Professional version, put your serial key below. SpreadsheetInfo.SetLicense("FREE-LIMITED-KEY"); var workbook = new ExcelFile(); var worksheet = workbook.Worksheets.Add("Hello World"); worksheet.Cells[0, 0].Value = "English:"; worksheet.Cells[0, 1].Value = "Hello"; worksheet.Cells[1, 0].Value = "Russian:"; // Using UNICODE string. worksheet.Cells[1, 1].Value = new string(new char[] { '\u0417', '\u0434', '\u0440', '\u0430', '\u0432', '\u0441', '\u0442', '\u0432', '\u0443', '\u0439', '\u0442', '\u0435' }); worksheet.Cells[2, 0].Value = "Chinese:"; // Using UNICODE string. worksheet.Cells[2, 1].Value = new string(new char[] { '\u4f60', '\u597d' }); worksheet.Cells[4, 0].Value = "In order to see Russian and Chinese characters you need to have appropriate fonts on your PC."; worksheet.Cells.GetSubrangeAbsolute(4, 0, 4, 7).Merged = true; workbook.Save("Hello World.%OutputFileType%"); } }
Imports GemBox.Spreadsheet Module Program Sub Main() ' If using Professional version, put your serial key below. SpreadsheetInfo.SetLicense("FREE-LIMITED-KEY") Dim workbook As New ExcelFile() Dim worksheet = workbook.Worksheets.Add("Hello World") worksheet.Cells(0, 0).Value = "English:" worksheet.Cells(0, 1).Value = "Hello" worksheet.Cells(1, 0).Value = "Russian:" ' Using UNICODE string. worksheet.Cells(1, 1).Value = New String(New Char() {ChrW(&H417), ChrW(&H434), ChrW(&H440), ChrW(&H430), ChrW(&H432), ChrW(&H441), ChrW(&H442), ChrW(&H432), ChrW(&H443), ChrW(&H439), ChrW(&H442), ChrW(&H435)}) worksheet.Cells(2, 0).Value = "Chinese:" ' Using UNICODE string. worksheet.Cells(2, 1).Value = New String(New Char() {ChrW(&H4F60), ChrW(&H597D)}) worksheet.Cells(4, 0).Value = "In order to see Russian and Chinese characters you need to have appropriate fonts on your PC." worksheet.Cells.GetSubrangeAbsolute(4, 0, 4, 7).Merged = True workbook.Save("Hello World.%OutputFileType%") End Sub End Module | https://www.gemboxsoftware.com/spreadsheet/examples/c-sharp-vb-net-excel-library/601 | CC-MAIN-2020-16 | en | refinedweb |
Workflows¶
Description
Programming workflows in Plone.
Introduction¶
The DCWorkflow product manages the default Plone workflow system.
A workflow state is not directly stored on the object. Instead, a separate portal_workflow tool must be used to access a workflow state. Workflow look-ups involve an extra database fetch.
For more information, see
Creating workflows¶
The recommended method is to use portal_workflow in the Management Interface to construct the workflow through-the-web and then you can export it using GenericSetup’s portal_setup tool.
Include necessary parts from exported workflows.xml and workflows folder in your add-on product GenericSetup profile (add-on folder profiles/default).
Model the workflow online¶
Go to ‘http:yourhost.com:8080/yourPloneSiteName/portal_workflow/manage_main’, copy and paste ‘simple_publication_workflow’, to have a skeleton for start-off, rename ‘copy_of_simple_publication_workflow’ to ‘your_workflow’ or add a new workflow via the dropdwon-menu and have a tabula rasa.
Add and remove states and transitions, assign permissions etc.
Putting it in your product¶
Go to ‘http:yourhost.com:8080/yourPloneSiteName/portal_setup/manage_exportSteps’, check ‘Workflow Tool’ and hit ‘Export selected steps’, unzip the downloaded file and put the definition.xml-file in ‘your/product/profiles/default/workflows/your_workflow/’ (you’ll need to create the latter two directories).
Configure workflow via GenericSetup¶
Assign a workflow¶
In your/product/profiles/default/workflows.xml, insert:
<?xml version="1.0" ?> <object name="portal_workflow" meta_type="Plone Workflow Tool" purge="False"> <object name="your_workflow" meta_type="Workflow" /> </object>
Assigning a workflow globally as default¶
In your/product/profiles/default/workflows.xml, add:
<object name="portal_workflow"> (...) <bindings> <default> <bound-workflow </default> </bindings>
Binding a workflow to a content type¶
Example with GenericSetup workflows.xml
<?xml version="1.0"?> <object name="portal_workflow" meta_type="Plone Workflow Tool"> <bindings> <type type_id="Image"> <bound-workflow </type> </bindings> </object>
Disabling workflow for a content type¶
If a content type doesn’t have a workflow it uses its parent container security settings. By default, content types Image and File have no workflow.
Workflows can be disabled by leaving the workflow setting empty in portal_workflow in the Management Interface.
Example how to do it with GenericSetup workflows.xml
<?xml version="1.0"?> <object name="portal_workflow" meta_type="Plone Workflow Tool"> <property name="title">Contains workflow definitions for your portal</property> <bindings> <!-- Bind nothing for these content types --> <type type_id="Image"/> <type type_id="File"/> </bindings> </object>
Updating security settings after changing workflow¶
Through the web this would be done by going to the Management Interface > portal_workflow > update security settings
To update security settings programmatically use the method updateRoleMappings. The snippet below demonstrates this:
from Products.CMFCore.utils import getToolByName # Do this after installing all workflows wf_tool = getToolByName(self, 'portal_workflow') wf_tool.updateRoleMappings()
Programatically¶
Getting the current workflow state¶
Example:)
Filtering content item list by workflow state¶
Here is an example how to iterate through content item list and let through only content items having certain state.
Note
Usually you don’t want to do this, but use content aware folder listing method or portal_catalog query which does filtering by permission check.
Example:
portal_workflow = getToolByName(self.context, "portal_workflow") # Get list of all objects all_objects = [ obj for obj in self.all_content if ISubjectGroup.providedBy(obj) or IFeaturedCourses.providedBy(obj) == True ] # Filter objects by workflow state (by hand) for obj in all_objects: status = portal_workflow.getStatusOf("plone_workflow", obj) if status and status.get("review_state", None) == "published": yield obj
Changing workflow state¶
You cannot directly set the workflow to any state, but you must push it through legal state transitions.
Security warning: Workflows may have security assertations which are bypassed by admin user. Always test your workflow methods using a normal user.
Example how to publish content item
banner:
from Products.CMFCore.WorkflowCore import WorkflowException workflowTool = getToolByName(banner, "portal_workflow") try: workflowTool.doActionFor(banner, "publish") except WorkflowException: # a workflow exception is risen if the state transition is not available # (the sampleProperty content is in a workflow state which # does not have a "submit" transition) logger.info("Could not publish:" + str(banner.getId()) + " already published?") pass
Example how to submit to review:
from Products.CMFCore.WorkflowCore import WorkflowException portal.invokeFactory("SampleContent", id="sampleProperty") workflowTool = getToolByName(context, "portal_workflow") try: workflowTool.doActionFor(portal.sampleProperty, "submit") except WorkflowException: # a workflow exception is risen if the state transition is not available # (the sampleProperty content is in a workflow state which # does not have a "submit" transition) pass
Example how to cause specific transitions based on another event (e.g. a parent folder state change). This code must be part of your product’s trusted code not a workflow script because of the permission issues mentioned above. See also see Events
# Subscribe to the workflow transition completed action from five import grok from Products.DCWorkflow.interfaces import IAfterTransitionEvent from Products.CMFCore.interfaces import IFolderish @grok.subscribe(IFolderish, IAfterTransitionEvent) def make_decisions_visible(context,event): if (event.status['review_state'] != 'cycle_complete'): #nothing to do return children = context.getFolderContents() wftool = context.portal_workflow #loop through the children objects for obj in children: state = obj.review_state if (state=="alternate_invisible"): # below is workaround for using getFolderContents() which provides a # 'brain' rather than an python object. Inside if to avoid overhead # of getting object if do not need it. what = context[obj.id] wftool.doActionFor(what, 'to_alternate') elif (state=="denied_invisible"): what = context[obj.id] wftool.doActionFor(what, 'to_denied') elif (...
Gets the list of ids of all installed workflows¶
Useful to test if a particular workflow is installed:
# Get all site workflows ids = workflowTool.getWorkflowIds() self.assertIn('link_workflow', ids, "Had workflows " + str(ids))
Getting default workflow for a portal type¶
Get default workflow for the type:
chain = workflowTool.getChainForPortalType(ExpensiveLink.portal_type) self.assertEqual(chain, ('link_workflow',), "Had workflow chain" + str(chain))
Getting workflows for an object¶
How.assertEqual(len(chain), 1) # this must must be the workflow name self.assertEqual(chain[0], 'link_workflow', "Had workflow " + str(chain[0])) | https://docs.plone.org/develop/plone/content/workflow.html | CC-MAIN-2020-16 | en | refinedweb |
And yes, go ahead and bring it up on python-dev. Don't bother with c.l.py unless you are particularly masochistic. --Guido On Thu, Mar 4, 2010 at 7:09 PM, Brian Quinlan <brian at sweetapp.com> wrote: > Wow, timing is everything - I sent Guido an e-mail asking the same thing < > 30 seconds ago :-) > > Cheers, > Brian > > On Mar 5, 2010, at 2:08 PM, Jesse Noller wrote: > >> *mega snip* >> >> Jeffrey/Brian/all - Do you think we are ready to move this to the >> grist mill of python-dev? Or should we hold off until I get off my >> rump and do the concurrent.* namespace PEP? -- --Guido van Rossum (python.org/~guido) | http://mail.python.org/pipermail/stdlib-sig/2010-March/000942.html | CC-MAIN-2013-20 | en | refinedweb |
IISIntrinsicsAttribute Class
Assembly: System.EnterpriseServices (in system.enterpriseservices.dll)
The ASP intrinsic objects can be obtained within a COM+ object using the named properties available from the COM+ object context when the object was created from ASP. With ASP.NET, a new set of intrinsic objects is used, which are not accessed from a context but rather methods in a namespace, much like the ContextUtil class. The case of COM+ objects created from ASP.NET is viable; therefore the intrinsic objects available to COM objects from the COM+ object context might be required so that the COM+ object can interact with ASP.NET.
For more information about using attributes, see Extending Metadata Using Attributes.
System.Attribute
System.EnterpriseServices.IISIntrinsics. | http://msdn.microsoft.com/en-US/library/system.enterpriseservices.iisintrinsicsattribute(v=vs.80).aspx | CC-MAIN-2013-20 | en | refinedweb |
.scope.context; 17 18 import java.util.HashMap; 19 import java.util.Map; 20 import java.util.Stack; 21 import java.util.concurrent.atomic.AtomicInteger; 22 23 import org.springframework.batch.core.Step; 24 import org.springframework.batch.core.StepExecution; 25 26 /** 27 * Central convenience class for framework use in managing the step scope 28 * context. Generally only to be used by implementations of {@link Step}. N.B. 29 * it is the responsibility of every {@link Step} implementation to ensure that 30 * a {@link StepContext} is available on every thread that might be involved in 31 * a step execution, including worker threads from a pool. 32 * 33 * @author Dave Syer 34 * 35 */ 36 public class StepSynchronizationManager { 37 38 /* 39 * We have to deal with single and multi-threaded execution, with a single 40 * and with multiple step execution instances. That's 2x2 = 4 scenarios. 41 */ 42 43 /** 44 * Storage for the current step execution; has to be ThreadLocal because it 45 * is needed to locate a StepContext in components that are not part of a 46 * Step (like when re-hydrating a scoped proxy). Doesn't use 47 * InheritableThreadLocal because there are side effects if a step is trying 48 * to run multiple child steps (e.g. with partitioning). The Stack is used 49 * to cover the single threaded case, so that the API is the same as 50 * multi-threaded. 51 */ 52 private static final ThreadLocal<Stack<StepExecution>> executionHolder = new ThreadLocal<Stack<StepExecution>>(); 53 54 /** 55 * Reference counter for each step execution: how many threads are using the 56 * same one? 57 */ 58 private static final Map<StepExecution, AtomicInteger> counts = new HashMap<StepExecution, AtomicInteger>(); 59 60 /** 61 * Simple map from a running step execution to the associated context. 62 */ 63 private static final Map<StepExecution, StepContext> contexts = new HashMap<StepExecution, StepContext>(); 64 65 /** 66 * Getter for the current context if there is one, otherwise returns null. 67 * 68 * @return the current {@link StepContext} or null if there is none (if one 69 * has not been registered for this thread). 70 */ 71 public static StepContext getContext() { 72 if (getCurrent().isEmpty()) { 73 return null; 74 } 75 synchronized (contexts) { 76 return contexts.get(getCurrent().peek()); 77 } 78 } 79 80 /** 81 * Register a context with the current thread - always put a matching 82 * {@link #close()} call in a finally block to ensure that the correct 83 * context is available in the enclosing block. 84 * 85 * @param stepExecution the step context to register 86 * @return a new {@link StepContext} or the current one if it has the same 87 * {@link StepExecution} 88 */ 89 public static StepContext register(StepExecution stepExecution) { 90 if (stepExecution == null) { 91 return null; 92 } 93 getCurrent().push(stepExecution); 94 StepContext context; 95 synchronized (contexts) { 96 context = contexts.get(stepExecution); 97 if (context == null) { 98 context = new StepContext(stepExecution); 99 contexts.put(stepExecution, context); 100 } 101 } 102 increment(); 103 return context; 104 } 105 106 /** 107 * Method for de-registering the current context - should always and only be 108 * used by in conjunction with a matching {@link #register(StepExecution)} 109 * to ensure that {@link #getContext()} always returns the correct value. 110 * Does not call {@link StepContext#close()} - that is left up to the caller 111 * because he has a reference to the context (having registered it) and only 112 * he has knowledge of when the step actually ended. 113 */ 114 public static void close() { 115 StepContext oldSession = getContext(); 116 if (oldSession == null) { 117 return; 118 } 119 decrement(); 120 } 121 122 private static void decrement() { 123 StepExecution current = getCurrent().pop(); 124 if (current != null) { 125 int remaining = counts.get(current).decrementAndGet(); 126 if (remaining <= 0) { 127 synchronized (contexts) { 128 contexts.remove(current); 129 counts.remove(current); 130 } 131 } 132 } 133 } 134 135 private static void increment() { 136 StepExecution current = getCurrent().peek(); 137 if (current != null) { 138 AtomicInteger count; 139 synchronized (counts) { 140 count = counts.get(current); 141 if (count == null) { 142 count = new AtomicInteger(); 143 counts.put(current, count); 144 } 145 } 146 count.incrementAndGet(); 147 } 148 } 149 150 private static Stack<StepExecution> getCurrent() { 151 if (executionHolder.get() == null) { 152 executionHolder.set(new Stack<StepExecution>()); 153 } 154 return executionHolder.get(); 155 } 156 157 /** 158 * A convenient "deep" close operation. Call this instead of 159 * {@link #close()} if the step execution for the current context is ending. 160 * Delegates to {@link StepContext#close()} and then ensures that 161 * {@link #close()} is also called in a finally block. 162 */ 163 public static void release() { 164 StepContext context = getContext(); 165 try { 166 if (context != null) { 167 context.close(); 168 } 169 } 170 finally { 171 close(); 172 } 173 } 174 175 } | http://static.springsource.org/spring-batch/xref/org/springframework/batch/core/scope/context/StepSynchronizationManager.html | CC-MAIN-2013-20 | en | refinedweb |
Multi requirements we have. Well, all but one: rich:tree is single select only, not multi select. Or is it?
Some theory behind the rich:tree component
The RichFaces rich:tree component is a component that can display hierachical data in two ways. The first way is to display a org.richfaces.model.TreeNode with its children. The RichFaces API also provides a default implementation of the org.richfaces.model.TreeNode interface, which is the org.richfaces.model.TreeNodeImpl class.
The second way is to use a RichFaces rich:recursiveTreeNodesAdaptor to display a java.util.List or array of any kind of object, as long as it has some member that holds a java.util.List or array of child objects. Due to some heavy preprocessing of the data that is displayed in the tree, along with us feeling more comfortable with java.util.List we choose this approach in our project. In this article I’ll use the first approach to show that our solution also works in this case.
Building a simple rich:tree
To be able to display a rich:tree in a JSF page, you need few simple classes. The first class I used is called SelectionBean and it looks like this
import org.richfaces.event.NodeSelectedEvent; import org.richfaces.model.TreeNode; import org.richfaces.model.TreeNodeImpl; public class SelectionBean { private TreeNode rootNode = new TreeNodeImpl(); public SelectionBean() { TreeNodeImpl childNode = new TreeNodeImpl(); childNode.setData("childNode"); childNode.setParent(rootNode); rootNode.addChild("1", childNode); TreeNodeImpl childChildNode1 = new TreeNodeImpl(); childChildNode1.setData("childChildNode1"); childChildNode1.setParent(childNode); childNode.addChild("1.1", childChildNode1); TreeNodeImpl childChildNode2 = new TreeNodeImpl(); childChildNode2.setData("childChildNode2"); childChildNode2.setParent(childNode); childNode.addChild("1.2", childChildNode2); } public void processTreeNodeImplSelection(final NodeSelectedEvent event) { System.out.println("Node selected : " + event); } public TreeNode getRootNode() { return rootNode; } }
It creates a simple TreeNode hierarchy that can be displayed in a tree with the following Facelets JSF page:
<?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" xmlns: <body> <h:form <a4j:outputPanel <rich:panel <f:facetTree</f:facet> <rich:tree <rich:treeNode> <h:outputText </rich:treeNode> </rich:tree> </rich:panel> </a4j:outputPanel> </h:form> </body> </html>
The result looks like this:
Quite simple, like I said.
Multi select in the rich:tree
The SelectionBean class has been prepared to catch node selection events. If you modify the rich:tree line in the Facelets JSF page to read like this
<rich:tree
you should see selection events being registered in the log of your application server:
Node selected : org.richfaces.event.AjaxSelectedEvent[source=org.richfaces.component.html.HtmlTree@19eaa86]
Not very helpful information, but at least we know that the node selection events are registered. Now, if you look at the taglib doc for rich:tree you’ll notice that there is no way to configure the tree to accept multiple selections. So let’s modify the SelectionBean to keep track of node selections itself.
We’ll need some Set to hold the selected tree nodes in. We could use a List, but that will create doubles if we click a node more than one time. Remember, rich:tree has no way of knowing if a node recently was selected or not. So everytime we click a node that rich:tree thinks not to be selected, it raises a selection event again! We only want to know which nodes are clicked and keep track of that. we don’t want to know how many times a node is selected. So therefore a Set will do nicely.
There’s one more thing to a rich:tree. Its backing UIComponent is a org.richfaces.component.html.HtmlTree and in the TreeModel of that tree, each node is uniquely identified by a RowKey Object. So, I’ll use a
private Map<Object, TreeNode> selectedNodes = new HashMap<Object, TreeNode>();
Now, everytime a node is selected I’ll add its TreeNode to the Map under the RowKey key. Assuming we have a global Map member as defined above, the processTreeNodeImplSelection method can now be modified to
public void processNodeSelection(final NodeSelectedEvent event) { HtmlTree tree = (HtmlTree)event.getComponent(); Object rowKey = tree.getRowKey(); TreeNode selectedNode = tree.getModelTreeNode(rowKey); selectedNodes.put(rowKey, selectedNode); for (Object curRowKey : selectedNodes.keySet()) { System.out.println("Selected node : " + selectedNodes.get(curRowKey).getData()); } }
If you click the three nodes in the tree in some random order, you’ll get this output:
Selected node : childChildNode1 Selected node : childNode Selected node : childChildNode2
So, we are keeping track of all selected nodes!
Making the selected nodes visible in the tree
Now our bean knows that we have selected multiple nodes, but the tree still displays only one selected node at a time. Using e.g. FireBug it’s easy to determine the difference is CSS class between a selected node and a non-selected node. Using the default Blue skin for RichFaces, a non-selected node has style class
dr-tree-h-text rich-tree-node-text
while a selected node has style class
dr-tree-h-text rich-tree-node-text dr-tree-i-sel rich-tree-node-selected
The border around a selected node is there because of the dr-tree-i-sel style. We only need a way to make all selected nodes (that is, the ones that are stored in the Map in our bean) use that style. One way is to tell each TreeNode that is has been selected. But how can we do that? Well, for instance by introducing a class that holds both the text that will be displayed in the tree as well as a Boolean that holds the selection state of the node. Such a class could be like this
public class NodeData { private String nodeText; private Boolean selected = Boolean.FALSE; public NodeData(String nodeText) { this.nodeText = nodeText; } [getters and setters] }
With this class we need to make a few changes to our SelectionBean. First of all, when building the node hierarchy we need to use the NodeData class instead of a simple String. This means we’ll have to modify the constructor method so it looks like this
public SelectionBean() { TreeNodeImpl childNode = new TreeNodeImpl(); childNode.setData(new NodeData("childNode")); childNode.setParent(rootNode); rootNode.addChild("1", childNode); TreeNodeImpl childChildNode1 = new TreeNodeImpl(); childChildNode1.setData(new NodeData("childChildNode1")); childChildNode1.setParent(childNode); childNode.addChild("1.1", childChildNode1); TreeNodeImpl childChildNode2 = new TreeNodeImpl(); childChildNode2.setData(new NodeData("childChildNode2")); childChildNode2.setParent(childNode); childNode.addChild("1.2", childChildNode2); }
Next, the processNodeSelection method needs to tell a node that it is selected by setting the selected Boolean in NodeData to true. The method becomes
public void processNodeSelection(final NodeSelectedEvent event) { HtmlTree tree = (HtmlTree)event.getComponent(); Object rowKey = tree.getRowKey(); TreeNode selectedNode = tree.getModelTreeNode(rowKey); ((NodeData)selectedNode.getData()).setSelected(Boolean.TRUE); selectedNodes.put(rowKey, selectedNode); for (Object curRowKey : selectedNodes.keySet()) { System.out.println("Selected node : " + ((NodeData)selectedNodes.get(curRowKey).getData()).getNodeText()); } }
Finally, we need to modify our Facelets JSF page in two ways. The first one is to make sure the h:outputText element displays the nodeText of the NodeData. The second modification is to have the rich:treeNode set it’s nodeClass accordingly to the selected NodeData Boolean. The Facelets JSF page lines look like this
<rich:treeNode <h:outputText </rich:treeNode>
Now, if you reload the application in your browser, all of a sudden you can "select" multiple nodes in the tree.
Future enhancements
The above scenario isn’t ideal. First of all, now single selection of nodes doesn’t work anymore. To fix this, you may want to add a checkbox that toggles the selection state from single to multiple and back. Another issue is that accidentically selected nodes cannot be deselected anymore. The selection state checkbox may partially solve that, however. Once you select a node that you didn’t want to select, toggle the checkbox, select a single node, then toggle the checkbox again and start selecting multiple nodes once more. Another way would be to have another checkbox that allows you to deselect any selected node. Finally, users may want to hold a key, e.g. the CTRL key, and then start selecting multiple nodes. I haven’t got a clue how to do that, so if you know please drop me an email
Ideally the RichFaces rich:tree would have native multiple selection support. Perhaps this post will actually make that possible.
Related posts:
- Selecting a 'pruned tree' with selected nodes and all their ancestors – Hierarchical SQL Queries and Bottom-Up Trees
- Migrating the ADF 10g Hierarchical Table Report to JDeveloper & ADF Trinidad and onwards to 11g (RichFaces)
- Dropping trees
- Building ADF Faces Tree based on POJOs (without using the ADF Tree Data Binding)
- Creating Multi-Type Node Children and Child Node labels in ADF Faces Tree Component
This entry was posted by Wouter van Reeven on January 29, 2009 at 9:46 pm, and is filed under General. Follow any responses to this post through RSS 2.0.Both comments and pings are currently closed.
Didn't find any related posts :(
Hi Andriy,
Can you share link? It will be very helpful.
Thank You.
Hi Andriy, i’m really interested in your code.
it would be great if you post the full example!
Thank you.
Hi Andriy, i’m really interested in your code.
it would be great if you post the full example!
Thank you.
Hi!
I’ve implemented the same feature with rich:tree but in a completely different way.
To be able to select multiple I’ve used checkboxes inside each row to display the node’s “selected” property, and processed that data on submit.
If someone needs an example I can post a code here.
Â
Regards
Hi Wouter, as you’re interested in PrimeFaces these days, I’d like to point to PrimeFaces Tree which has built-in support to checkbox based selection, around 5 lines of code, you can implement this.
Demo:
Attach to privious one:
Property “UseJBossWebLoader” should be “true”
Dear cuixf and Krishna,
Change the following property in your jboss-service.xml file.
File:
jboss-4.2.2.GA\server\default\deploy\jboss-web.deployer\META-INF\jboss-service.xml
Should change to:
true
Regards
thanks for your article, sir
you said:”you should see selection events being registered in the log of your application server:”, how could i register the selection event in the example?
help me please thank u
I am getting the same InvocationTargetException, as mentioned by cuixf and other above.
Any solution to that?
Hello,
Thank you for the tutorial.
Unfortunetly i still couldn’t make the first simple tree on the tutorial.
I think i need you help.
First i want to tell you that i made a Seam Web Project with JBoss AS 4.2 Runtime.
When you made those simple tree, what kind of project did u have? A Dynamic Web Project or Seam Web Project?
And in my eclipse, there is no errror as i copied your jsf Page.
But on the browser i didn’t see anything. It’s only a white page.
Could you help me please?
Thank you
The WAR’s project is no problem,but my EAR’s project see InvocationTargetException warning.
ClassLoader ???
_______________________________________________
yukimi Says:
April 3rd, 2009 at 12:01 pm
Hi, thank you for this example. It’s great. However, I have problem with HtmlTree. Using your source code, whenever I click on the node, it shows
18:06:10,354 WARN [lifecycle] /admin/category.xhtml @28,79 nodeSelectListener=â€#{CategoryManager.processNodeSelection}â€: java.lang.reflect.InvocationTargetException
If I removed HtmlTree, I don’t see InvocationTargetException warning. What’s wrong?
Please help.
Thanks.
Hi, thank you for this example. It’s great. However, I have problem with HtmlTree. Using your source code, whenever I click on the node, it shows
18:06:10,354 WARN [lifecycle] /admin/category.xhtml @28,79 nodeSelectListener=”#{CategoryManager.processNodeSelection}”: java.lang.reflect.InvocationTargetException
If I removed HtmlTree, I don’t see InvocationTargetException warning. What’s wrong?
Please help.
Thanks.
hi ı have prolem with HtmlTree ı couldnt find org.richfaces.component.html.HtmlTree library
please help me how to parse this
thanks
Dear Sir,
Many thanks for this type of post. Its self-explanatory. Currently I am working on rich:tree with checkbox. If you have any idea or progress regarding checkbox tree, please share with us.
Thank you
Ismail
=====
Thanks for the post, it helped a lot. I still have a problem, though. I can select multiple nodes but I do not see them selected only after I refresh the page. Do you have the same problem with your implementation? If you know how to solve it, please respond. | http://technology.amis.nl/2009/01/29/multi-select-in-richfaces-trees/ | CC-MAIN-2013-20 | en | refinedweb |
Build an interactive program that reads in the information about the Inventory objects from the file Inventory.txt into an array of Inventory objects
Provide a Panel on the screen that has a TextField for the product code and a Button called “Lookupâ€. The user should be able to type in a Product code into the JTextField, and hit the button. Your program should then do a Binary Search of the array of Inventory objects to find the appropriate inventory object.
Build a second JPanel on the screen to show the results of the lookup. This panel should have:
A JTextField showing the Price of the Item
A JTextField showing the Quantity on Hand of the Item
A JTextField showing the Status of the lookup
If the item is found, the Price and Quantity on Hand should be displayed. If not found, the Price and Quantity on Hand should be blank, and the Status should say “Item Not Foundâ€
This program can be done with a simple grid layout on the screen, but a more elegant solution would be to have a JPanel on the top of the screen with a Label saying “Product Codeâ€, a JTextField for the user entry, and a JButton. Then, a JPanel with the GridLayout could be built for the bottom of the screen with three rows and Labels on each row such as “Priceâ€, “Qty on Handâ€, and “Statusâ€.
The JPanel class is not mentioned in the book, but provides a way to divide the screen up. Simply create a Panel with “JPanel p = new JPanel ( );†You can then add a Panel to the Frame with the “add†method, followed by the desired section of the screen: BorderLayout.NORTH for the top, or BorderLayout.CENTER for the rest of the screen. Items can then be added to the JPanel just like the JFrame, and they will go to the section of the JFrame given to the JPanel. You can also change the layout of the JPanel with the same setLayout( ) method as the JFrame, but you’re just changing one part of the screen.
//PanelSample.java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
public class PanelSample extends JFrame
{
public PanelSample()
{
setTitle("INVEVTORY REPORT JAVA 205");
setLayout(new BorderLayout());
setSize(400,150);
Container contentPane = getContentPane();
topPanel= new JPanel(new FlowLayout());
lookUp = new JButton("Look Up");
productCode = new JTextField("Enter product code here");
topPanel.add(new JLabel("Product Code:"));
topPanel.add(productCode);
topPanel.add(lookUp);
bottomPanel = new JPanel(new GridLayout(0,2));
price = new JTextField(" ");
quantityOnHand = new JTextField(" ");
status = new JTextField(" ");
bottomPanel.add(new JLabel("Price: " ));
bottomPanel.add(price);
bottomPanel.add(new JLabel("Quantity On Hand : "));
bottomPanel.add(quantityOnHand);
bottomPanel.add(new JLabel("Status: "));
bottomPanel.add(status);
contentPane.add(topPanel, BorderLayout.NORTH);
contentPane.add(bottomPanel, BorderLayout.SOUTH);
addWindowListener(new WindowAdapter(){
public void windowClosing(WindowEvent e)
{
System.exit(0);
} // windowClosing
}); // WindowListener
} // end of constructor
static JPanel topPanel;
static JPanel bottomPanel;
static JButton lookUp;
static JTextField productCode;
static JTextField price;
static JTextField quantityOnHand;
static JTextField status;
public static void main(String [] args)
{
JFrame frame = new PanelSample();
frame.show();
} // main
} // PanelSample
I am sorry those websites did not have the answer. | http://www.chegg.com/homework-help/questions-and-answers/build-an-interactive-program-that-reads-in-the-information-about-the-inventory-objects-fro-q3319640 | CC-MAIN-2013-20 | en | refinedweb |
------------------------------------------------------------------------- The Elder Scrolls III: Tribunal FAQ v0.98 Morrowind Expansion Pack for the PC Last Updated: February 4th, 2005 ------------------------------------------------------------------------- Newest versions of this guide can be found ONLY at: WRITTEN EXLUSIVELY FOR GAMEFAQS.COM Copyright 2003-2005 cnick. "We are as goddddds!" As of V0.98, I'm no longer taking any more e-mails for this FAQ. While it's mostly complete, I imagine there are unseen corrections to be made. However, any corrections that need to be made are immaterial by now, and I'm no longer interested in making them. Besides, how am I supposed to find the time to update this guide when I'm drooling over screenshots of ESIV: Oblivion? * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * t a b l e o f c o n t e n t s * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1. Introduction - author's note - updates 2. Assumptions 3. Getting to Mournhold - the dark brotherhood - what to do now? 4. Main Quests - destroying the goblin army - cleansing the shrine of the dead - barilzar's mazed band - attack on mournhold - uncover assassination plot - so, you want me to fight the deaf guy? - one, final quest - ending the end of times - f33r my 1337 skillz, mournhold! - salas valor's last stand - forging nerevar's blade - i will walk through the fire! - final dungeon, part i - final dungeon, part ii - the end of our journey 5. Tienius Delitian's Quests - a source of rumors - temple informant - disloyalty among the guards - conspiracy against the King - anonymous writer - meeting lady barenziah 6. Miscellaneous Quests - the black dart gang - blind date - the bouncer - champion of clutter - dirty business - the droth dagger - fargoth, part ii - he may be dumb, but at least he's not naked - hot pockets - how to get custom armor - mercenary for hire - the museum of artifacts - mournhold's battle bots - no beer makes golena go something, something - robe of the lich - ten-tongues' secret supplier - to be or not to be - the unfaithful husband - when psycho wizards attack 7. Fog Fix 8. Closing Find what you want faster: -------------------------- 1. Highlight the section name (e.g. "1. introduction", "- updates"). 2. Press Ctrl C 3. Press Ctrl F 4. Press Ctrl V, and hit return. ========================================================================= 1. Introduction ========================================================================= -- Author's Note ------------------------------------------------------- Welcome to my ESIII: Tribunal FAQ. Before I begin, I'd like to say a few words about the game in general. First, and foremost, I'm assuming that if you need this guide, you've played through main game (aka Morrowind). Although Tribunal does not require the player to meet any special requirements in Morrowond to reach Mournhold, you need to be familiar with the game's mechanics. I would never recommend anyone going through Tribunal without putting some hours into Morrowind; the creatures in Mournhold are tough. Ironically, you can accomplish most of Tribunal at an early level. I'm not saying a Level 1 Mage could work his way through the game, but you certainly do not have to be deep into Morrowind to play with Tribunal. However, the weaker the character, the more time you may need to get through all the quests. If you're tired with your current Morrowind character, go ahead and create a new one, and start off straight into Tribunal. The game is easy enough as it is, why not make it a bit more difficult. Plus, making new characters is fun ;) Morrowind and its expansions are hard games to write quests for, as they about as non-linear as games come. In reality, I can't really lead you along the entire game. This guide requires that you have the knowledge to know where people are, and where to look for items/people/etc. -- Updates ------------------------------------------------------------- Version 0.98 (February 4, 2005) - Various Corrections. Version 0.4 (June 15, 2004) - Added Dome of Serlyn fog fix. Version 0.3 (September 10, 2003) - Black Dart Gang quest finally added. - Updated Museum of Artifact list. - Finished Andoren quest (cheating husband). Version 0.2 (August 13, 2003) - Ashstorms puzzle done. - Strategy for Imperfect done. - Couple more rewards added in romance quest. - Updated Museum of Artifact list. Version 0.1 (August 6, 2003) - Initial version. Version 0.0 (July 14th, 2003) ========================================================================= 2. Assumptions ========================================================================= There are a couple of assumptions this FAQ makes. * Your level can vary all you want. In my humble opinion, you want to be somewhere around Level 25 when you approach the final quests. You can enter Mournhold at Level 1 if you want, but it's unlikely that you can get past the first main quest until you reach level 10 (depending on your character class). * Speaking of character class, you'll have the easiest time if you're combat-oriented. Duh. The game is unfortunately unbalanced in its fighting system, and you do indeed speak loudly if you hold a big stick. The closer you are to the magic classes, the more you're in trouble. Trouble as in, I have no idea how you could use a pure-magic class and go through Tribunal unless you're fairly high leveled. * You have a levitate spell (or many, many potions available). Although you can't levitate in Mournhold (or the last dungeon), the number of times I used levitate put me in shock. It's a great spell that helps you in combat, as well as finding nice treasure. I may have to go through Tribunal again to confirm this thought, but I'm damn near close to saying this is an absolute spell; without it, Tribunal may be near impossible to get through (especially if you aren't a warrior-type character). Simply use levitate, float up out of reach of your melee attackers, and fire magic or arrows until they're dead. Cheap? Yes. Effective? You betcha. * Mark and Recall are your friends. Use them always. Just like Morrowind, you'll be constantly returning back to your quest-giver. This speeds up mundane chores, such as walking. * The key to leveling is to gain three x5 multipliers per level. This means you can get your attributes near 100, and make you super powerful early on. To get a multiplier on an attribute during the level up process, you need to increase the attribute a certain amount of skill points. 1-4 skill points = x2 5-7 skill points = x3 8-9 skill points = x4 10 skill points = x5 So, you want to increase an attribute (Intelligence, Strength) 10 times through a number of different skills. For example, raising security 5 points, and enchant 5 points equal 10 total skill point increase for intelligence. At the next level, you'll have a x5 multiplier for Intelligence. Easy, right? The burden on doing this is that you only have 10 skill points to raise on major/minor skills before you gain a level. The key is to create a character that has doesn't have too many skills in a certain attribute as a major/minor skill. That way, you can raise a non-major/minor skill 10 times, get the x5 multiplier, but not gain a level. * Continuing with leveling up, check out Wes Ide's Skill Level-Up Tips at GameFAQs <>. It's obviously not for Tribunal, but the guide has some helpful tips to speed up the leveling process. ========================================================================= 3. Getting to Mournhold ========================================================================= Quests are listed by the order in which you are given. Since Morrowind basically has a infinite amount of different classes in the game, I won't be able to give exact strategies for your class. If you're having problems through Tribunal, I suggest going back to Morrowind and doing the Guild or House quests. Or for that matter, the main quest. ------------------------------------------------------------------------- - The Dark Brotherhood - ------------------------------------------------------------------------- -- Assassination ------------------------------------------------------- Before you can do the available quests in Tribunal, you need to first get to Mournhold. Once Tribunal is installed, load up whatever character you want to use, and start sleeping. It doesn't matter where, but I recommend some place outside, so you have room to move around. I did 12 hours each time I was getting a character to Mournhold, as an assasin randomly attacks while you are asleep. Eventually, you'll hear a loud noise, and an assassin will come out of nowhere to kill you. And since we're here, these assassins must really suck. They make loud "noises" when they're sneaking up on someone? If you're at a low-level, you may have some trouble with him. However, I finished him off with my level 1 Mage with no problem, so he isn't that tough. He was able to kill me two times previously, but that was because I was sleeping in an enclosed environment. That limits your ability to run away from him, and to get a good attack front. The morale of the story: stay in open areas in the wilderness, and you'll be okay. If you are using a character that has journeyed throughout Morrowind, this fight should be no contest. Be careful: if you do face off with an assassin inside a building. The people inside won't do anything, and you may hit them in the conflict. I've done this a couple of times. If you mess up (i.e. die), just keep sleeping until you face another one. In all, you can fight assassins 10 different times for a total of 14 adversaries (GameFAQs Messageboard), depending on your level. The paragraph below explains how to continue towards getting to Mournhold. Once you do this, you will stop fighting the assassins. If you want to fight them for their loot, then do not go to Ebonheart, do not talk to an Imperial Guard, and do not pass go. After you successfully destroy your opponent, head towards a town with a general guard (doesn't matter if guard is Imperial or not), and talk with him about your attackers. It appears the assassin was from a group called the Dark Brotherhood. He insists that you should probably find some help, as the Dark Brotherhood are the bad asses of the assassinations. The guard says that Apelles Matius has just arrived from the mainland in Ebonheart. Perhaps he can help us. -- Leaving Vvardenfell ------------------------------------------------- Once you've arrived in Ebonheart, go into the main fort area to some stairs leading to the bridge to the Grand Counsel Chamber. You'll find him as long as you don't enter inside any building. He's outside, roaming the grounds, as the Imperial Guards tell you. He's wearing adamantium armor, so he's easy to pick out. Talk with him about the Dark Brotherhood, and he says you should go to Mournhold, so you can find out why the 'hood is after you. There is a problem, however. Because of the blight, ships from Vvardenfell are turned around when they approach the capital city. Apelles says that a mage who also just recently arrived at Ebonheart can help you. From the stairs where Apelles is, go across the bridge into the Grand Counsel Chambers. Turn right, and talk with Asciene Rane, the mage in the red robe. Talk to her about leaving Vvardenfell, and she will automatically warp you to the Royal Palace in Mournhold. The Khajiit named Effe-Tei can teleport you back to Ebonheart, if you want (he should be right in front of you). Welcome to Mournhold. -- Laying the Smackdownth on the Dark Brotherhood ---------------------- Or something like that. And I'll come right out and say it: the Royal Palace Guards' armor fricking rock. Me wants. Once you are in Mournhold, immediately talk with a Royal Guard and ask him about the Dark Brotherhood. Depending on your race and personality stat, you may need to bribe the sucker so that he tells you about the Dark Brotherhood. I had to bribe him 100 Gold to raise his deposition over 50. That was enough for him to say that the Dark Brotherhood hang out at the Great Bazaar. That helps a lot, but now we are stuck in a large, unfamiliar city. There are numerous possible ways to reach the Great Bazaar from where you're standing (Reception Area of the Royal Palace). If this is your first time in Mournhold, it may be worth it to simply have a look around the whole town, to get used to where things are. Fortunately, while the city is large, you can't leave. It should only take 10 minutes to get a good feeling of where things are. Whether or not you decide to do this, we need to find the Dark Brotherhood, and talk some sense into these guys. Why? While the assassin group may seem like the major threat in the game, they aren't. Someone put the contract out to kill you, and the boss will likely know who did it. From the reception area where you teleported in, turn around and you'll exit into the Courtyard. Exit south towards the Plaza Brindisi Dorom (and for the rest of the guide, I'll simply refer this section of town as the Plaza), then head east towards the gate leading to the Great Bazaar. From the entrance, move further down the wall southeast, and enter the sewers below Mournhold. From your starting point, turn left and move along the straight-forward passage. You'll run past a fork-in-the-road, where one path leads into some water. A skeleton guards this; kill him off if you want to, but don't follow this path. Once you get towards the "sewer" section of the sewers, follow along, skipping any doors you run into. When you run into the door leading to the Manor District, you found the right place. Remember, if you used any doors, or entered in any large caverns, you went the wrong way. And oh yeah, welcome to the Dark Brotherhood's secret, evil lair. Caution: Multiple ass-kickings in their future. Since each character is different, it's hard to make any kind of strategy for this raid against their hideout. Luckily, you'll find out if your character is man enough to accomplish the next feat by the time you face the first assassin you meet. Lightning magic pwns, fire sucks, and I never had a chance to use frost magic. Each assasin has a magical poison shortsword, which does more damage than I'd like. But if you're going toe-to-toe with them, I assume you got some meat on you. The following next minutes involve you killing off 20+ assassins. When you reach a large cavern, with two ruined buildings, you've killed most of them. Remember to grab their gear; it's worth a lot of money. In the cavern, go up to the building on the floor above you (Moril Manor Northern Building), and clear out any scum. Skip opening the Iron Door (the door that doesn't have you load), as the head hauncho of the gang resides in there. Let's wait until he's all that's left. And note: the assassins outside respawn. Be careful when leaving this building, as you could get ambushed real fast by them. When you're at full health, and ready for a fight, open up the doors, and the Dark Brotherhood leader will come out (possibly with a rat companion), ready to take you out. He uses a bound longbow, which can be deadly to any magic-based character who never bothered to use armor. Lightning-based magic works, but he's resistant to most magic. Just pound on him 'till he goes. That always works ;) When he's dead, grab the contract off his body. Read it if you want; it's fairly easy to tell who put the contract on you (*cough*Helseth*cough*). Leave the sewers, and head back to the Royal Palace. ------------------------------------------------------------------------- - What to do now? - ------------------------------------------------------------------------- The contract you've managed to take from the Dark Brotherhood makes it quite obvious the reason for these attacks: the new King, Helseth. If you talk to people around the city, they'll suggest you see Tienius Delitian or Fedris Hler. I prefer to see Delitian first, and go through his quests. You get a great sword for a reward, so it's worth it to go through them (you don't have to, however). When you talk with Delitian, he casually passes your attempted assassination as a "mistake." It just seems the paranoid Helseth wants to take out any competition he may have. No hard feelings, right? Alas, there's nothing we can do to Helseth, at the moment. He takes the time to ask if you're willing to do some duties, so that you may earn the King's trust. After this, you're free to do whatever you want to do. Whether or not you decide to do Delitian's quests, talk with Fedris Hler to continue with the main story. Or you can do a couple of miscellaneous quests. They're all fairly short, and you get some decent items out of them. Check the respective section in this guide to learn more about these quests. From here on, I'll continue with the main quest. However, Use the search function described in the beginning of this document to find the sections dealing with Delitian's quests, or any miscellaneous quest. ========================================================================= 4. Main Quests ========================================================================= Once you're done removing the Dark Brotherhood threat, enter the Tribunal Temple in the northern section of Mournhold. Talk with Fedris Hler, the dark elf wearing a brown robe in the reception area. ------------------------------------------------------------------------- - Destroying the Goblin Army - ------------------------------------------------------------------------- Fedris Hler gives your first quest in the main part of Tribunal. Choose matters to discuss, and Hler goes into detail about King Helseth raises an army of goblins beneath Mournhold. This allows Helseth some protection against likely Temple attacks. Hler asks that you take out two goblin warchiefs, and two atmer trainers, who are helping to train the army. This is a long quest, and a good test on whether or not your character is capable of going through the rest of Tribunal. Stock up on lots of health and magicka potions, as you're going to need it. Enter the Residential Sewers via Godsreach (northwest corner), then go east, south, east, then south again to a ladder leading to the West Sewers. Once up there, go east through a very straight-forward path until you reach a door to the Battlefield (there is a second doorway that leads you to the Palace Sewers; skip that). Along the way, you'll fight what seems like a legion of goblin fighters. If you're having trouble now, I recommend heading back to the mainland and leveling up. It gets much harder. Once you're in the Battlefield, you'll reach a huge cavern. If you can open level 100 locks, use the door to the east (City Gate). If you can't, use a levitate spell/potion to reach the highest point on the northern wall. There is a door that leads to the Abandoned Passageway. Both ways will lead to the same area, but lets take the City Gate path. Go through the city gate, then unlock the level 100 door to reach the Residential Ruins. At the fork, the northern door leads to the Tears of Amun-Shae, where the two warchiefs are. Further down leads to Teran Hall, where the two atmer trainers are. Enter the Tears room, and take out the warchiefs in the caves to your west and east. They're both fairly close to the entrance, and you won't have to worry about other enemies dropping in. When the two are dead, you'll receive a journal entry update. Leave the Tears of Amun-Shae, back to the Residential Ruins. Follow the unknown path to Teran Hall. The enemies you killed here previously may have respawned, so make sure you're ready to fight when you first re-enter the ruins. Make your way through Teran Hall, and you'll come up to the two Atmer trainers (one decked out in full orc armor; both have ebony weapons!). Finish off the trainers, and grab their weapons (Ebony longsword and war axe; 25000 gold!) then return back to Hler. Use an Almsivi Intervention spell/scroll to return quickly. You may want to go back to do some quest work (Thrud miscellaneous quest, Delitian quest assassinations). Talk with Hler, and you will reward you with a rather large sum of gold (15000). ------------------------------------------------------------------------- - Cleansing the Shrine of the Dead - ------------------------------------------------------------------------- When you talk with Hler again for some more services for the Lady, he'll say you should tak with Gavas Drin, Archcanon of the Temple for a quest. From the reception area, enter the west door to his offices, then left at the fork, then left into the first door you run into. Talk with Gavas, and you'll order that you escort Urvel Dulni to the Shrine of the Dead, so that he can cleanse it of the undead building up there. Yes. Another escorting quest. The best course of action for this one, however, is to first clear out the path to the Shrine of the Dead, then return and bring Urvel with you. Why? The enemies to the Shrine are fairly easy, but the Profane Acolytes that guard the inner sections are tough mothers. It may not be the best idea to have Urvel wander around as you fight these suckers. Make sure you don't talk to him, if you plan on going in first. I think talking with him immediately activate him to follow you. Enter the Temple Sewers from the basement. To reach the basement, go to the Halls of Ministry via Drin's offices, then go down to the basement. Once in the sewers, go west, north, east, then north into a large open area. Remember this area, as I'll refer to it more often (as we pass through it a bunch). Head east from here, to Temple Sewers East. When you go through the door, you'll immediately be attacked by a Black Dart Gang member. He isn't too tough, but those darts can be deadly. And if you're at all interested in taking out these guys, refer to the miscellaneous quest that deals with them. Continue east until you reach the Temple Gardens. Here, you'll face a bunch of regular ol' skeletons. Again, nothing too hard, as long as you don't fool around with them. Continue moving along until you reach a fork. The door ahead of you takes you to the Temple Shrine. The path to your right takes you to Gedna Relvel's Tomb, which leads to the Robe of the Lich quest. I recommend entered the tomb, and move forward a bit until you receive a message about a smell. This triggers the quest. Refer to the Robe of the Lich quest in MISCELLANEOUS QUESTS. Inside the Shrine, you'll come up to your first Profane Acolyte. He'll immediately summon a bonewalker, then use his powerful magic against you. As with any summoner, just concentrate on the Acolyte. With him dead, open up the Old Metal Door, with four more Acolytes. Fortunately, you can face each one by themself. Simply throw a spell or an arrow/dart at one to get his attention, then go back into the first large area. Take him out, then get another one. Rinse and Repeat. When all four Acolytes are dead, return to the Temple to get Urvel Dulni. Return back to the Shrine of the Dead, and lead him to the altar where you killed off the four Acolytes. When he "cleanses" the area, return back to Drin to complete the quest. Your reward: Blessed Spear. When asked about more quests, Drin will say to talk to Hler. ------------------------------------------------------------------------- - Barilzar's Mazed Band - ------------------------------------------------------------------------- Talk with Fedris Hler about services to our Lady Almalexia. Apparently, our lady friend wants to get ahold of Barilzar's Mazed Band, although Hler (or for that matter, Drin) have no idea why she would want this. He mentions that to get the band, you'll have to go through the Abandoned Crypt, which was previously blocked off by rocks. It seems that Almalexia has removed the rocks, and given us entry. Excellent. Bid farewell to Hler, and return back to the Temple Sewers. At the large open-space area, take the western route towards the Abandoned Crypt. Despite it's spooky name, there isn't much to be scared of in the crypt. You'll face off with three Lichs (similiar to the Profane Acolytes of the last quest). Eventually, you'll run into Barilzar himself, who is a bit pissed off that you've disturbed whatever the hell he was doing by himself. Kill him as usual. Careful now; he does hold a large stick (aka Daedric fricking Claymore!). Once Barilzar is removed, take the band from his cold, dead body and return back to Hler. Short quest, ay? Hler says Drin wants to see you, so head over to his office and talk with him. He mentions that Almalexia wants to see you herself, so enter the High Chapel (enter from any area in the Temple), and speak with the Goddess. For your reward, she gives you Almalexia's Light. It restores 25 points to all your attributes. Not bad. ------------------------------------------------------------------------- - Attack on Mournhold - ------------------------------------------------------------------------- When you talk with either Almalexia, Drin, or Hler, each will give their own personalized response on how they have nothing for you to do. Yay. If there are any miscellaneous quests you like to do, go ahead and do that. I'm not sure what exactly activates the next "main" quest, but I believe you want to rest (or do something) for roughly 12 hours. At that point, anyone you talk to will say something about creatures coming from the statue in Plaza Brindisi Dorom. Since no one will talk to you (everyone says you should go to the Plaza, with no ability to talk about anything else). Head to the Plaza. When you arrive, you'll see a bunch of weird looking creatures coming from the statue in the center of the plaza. You can let the High Ordinators and Royal Guard kill off the creatures, or you can help them out. And no, none of the guards will die; once again, no cool looking Royal Guard armor for us ;). When all the creatures are dead, report back to either Tienius Delitian or Fedris Hler. It makes no difference (although if you talk to both, they'll let you know about it). Both give you the same quest: Go inside the Dwemer ruin through the opening at Memorial Statue, to Bamz-Amschend. Go inside, and make your way through a short cave, until you approach a large opening (Heartfire Hall). When you receive a journal entry update after seeing the new creatures and the Dwemer fighters' battle return back to Delitian (you can also go back to Hler, but talk to Delitian gains access to a couple more quests from the King himself). When you return to Delitian, he tells you that the King himself requests an audience with you (who happens to be right in front of the throne). ------------------------------------------------------------------------- - Uncover Assassination Plot - ------------------------------------------------------------------------- Ah, another assassination plot against the King. For those who already have gone through Tienius Delitian's quests, we've already stopped one of these. However, this one's a bit cooler in my opinion. Helseth reports to you that an orc in the Winged Guar has information about the newest plot. Using the secret term "uncle's farm" the orc should give you the latest information. Leave the King's presence, and head into the Winged Guar, then down into the basement where the bar is. Since there is only one orc inside, it's obvious that Bakh gor-Sham is the orc to talk to. You may need to bump your disposition up, as he won't reply to the code word if its less than 60. When you mention "uncle's farm," Bakh will tell you the plot is actually against the King's mother, Barenziah. Well... that's strange. Who could possibly be behind this. Return back to the King, and tell him of this news. He orders that you will be the one to stop the attack, but hiding in the antechamber outside Barenziah's chambers. Go to her room (take door to Imperial Cult Services behind throne, then move straight), and close the two doors (one to her chambers, one to the antechamber). Then, go behind the screen, right against where the screen and the wall meet. When you're there, rest for 12 hours. When you wake up, you will receive a journal entry update, saying you're in the right place. The assassins may or may not be attacking now. If they aren't, sleep some more. They will show up eventually. When the three Dark Brotherhood assassins come in, take'em out. We've faced them before; they're nothing special. Take their expensive equipment from their bodies, and return to the King. He will reward you with Helseth's Collar, and mentions that there may be more work you can do for him. ------------------------------------------------------------------------- - So, you want me to fight the deaf guy? - ------------------------------------------------------------------------- Talk with Helseth again. I don't know what his deal is with putting your life in absolute danger, but he wants you to fight his uber-badass bodyguard Karrod. Yay. Sleep 24 hours, and talk with the King again. Mention you're ready to fight, and the King and the rest of the Royal Guard will walk to the edges of the room, leaving Karrod and you. Eventually, Karrod will take out his weapon and attack you. There's not much I can say to help you out here, other than heal when you need to, and be patient with your attacks. There's no reason to get your butt kicked here. I managed to have my low level Mage run around the throne room for about five minutes, throwing spells that did maybe 1/20 of his life and finally take down Karrod. Despite his reputation, he doesn't hurt too bad. When you get Karrod's life down fast, the match will end, and the King will reward you with the Dagger of Symmachus (worth 10,000 gold at the museum). ------------------------------------------------------------------------- - One, Final Quest - ------------------------------------------------------------------------- After you've successfully beaten Karrod (hopefully), talk with the King one last time. His last quest requires you to continue through the game. He suspects that Almalexia had something to do with the attack on Mournhold, and he wants you to get "close" to her. Besides, the only person the King thinks who could've created the creatures is either Almalexia herself, or her old friend Sotha Sil. Almalexia is likely to be the only person with information on him, so he requests that you do work for her. And if you speak with the King before you finish the game's main quest, he comes most angry. Head to the Temple, and talk with Almalexia to continue the game. Remember to talk with Helseth once you've finished the main quest, as this is a REAL quest. ------------------------------------------------------------------------- - Ending the End of Times - ------------------------------------------------------------------------- First order of business from Almalexia is to discover what the End of Times cult is up to. Almalexia tells you that seven members of the cult have been found dead in their homes; poisoned. She wants to know what these people's beliefs are, so she suggests talking to a few people. Meralyn Othan on the western part of the Great Bazaar had a brother (Sevil) who was in the cult, and was one of the seven to be found dead. Almalexia believes she may know more about the cult. Secondly, we must seek out Eno Romari, the head of the cult itself. Let's first talk with Meralyn, who is in the Great Bazaar. Talk to her about the End of Times, and her brother, and she'll explain that the cult believes the Tribunal is losing its powers, and that members are killing themselves in some sort of cleansing. Dastardly. Now, let's move onto the Eno himself. We don't know where he is specially, so talk with people about him. But since this is a guide, I'll simply tell you where he is: right outside the Winged Guar, wearing a white robe. Very easy to spot. Talk with him about his beliefs, and the topics that are created from that. When you talk about the cleansing, that's the information you need for Almalexia (you should receive a journal update for two from this discussion). Return to Almalexia, and talk to her about the End of Times. She's pissed off that there are people in Mournhold who believe that she is losing her powers, but thanks you for your service. While you don't have to kill Eno, she does say she'll take "care" of him. Sweet. ------------------------------------------------------------------------- - F33r my 1337 skillz, Mournhold! - ------------------------------------------------------------------------- Still pissed off, Almalexia wants to revive the power of Karstangz-Beharn. This Dwemer machine in Bamz-Amschend is the power to affect the weather of Mournhold. Almalexia is not too happy, and she wants to create Ashstorms in Mournhold, and make her people pay for their ignorance... or something. When talking with her, make sure you ask about "Ashstorms in Mournhold!" When you bring this up, she gives you a Powered Dwemer Coherer. This is absolutely necessary to complete this quest. If you don't get this, you'll have to completely go through the ruins again, as the enemies will revive. And they're tough as rock. Enter Bamz-Amshcheud, and enter Heartfire Hall (where you saw the big battle earlier). There may be a couple guys still standing, so take care of them (ideally from up on top, so that you can pick each one off). From the bottom of Heartfire Hall, enter the Passage of Whispers. Enter the Hall of Wails if you want to get the items to complete the clutter quest in the MISCELLANEOUS QUESTS section. Since you're here, you mind as well do it, no? Go back to the Passage of Whisper, through a Heavy Dwemer Door down to Radac's Forge. Go through one of the doors that has a level 100 door. One of the chests should contain two Dwemer Satchel Packs. You absolutely need two of these, so make sure you get these. When you have two Satchel Patches, continue through Radac's Forge until you come to some collapsed rocks. Hit spacebar on the rocks when its icon comes up, and the game will ask if you want to use one of the Satchel Packs to blow open a hole. Say yes, and run your ass back. After the explosion, go back to the rocks, and crawl through them to the Passage of the Walker. The rest of the journey is fairly linear, as you go through King's Walk up to Skybreak Gallery. Once in Skybreak Gallery, take out the couple Dwemer enemies on the outside rim. Once they're done, head towards the center. On the north facing side of the center, press spacebar on the Dwemer Junction Box. Press yes, and you put the Powered Dwemer Coherer inside, powering up Karstangz-Beharn. Turn around, and look at the three levers. The left and center levers move the pictures, and the right lever stops the pictures. You want to stop the pictures so that you see one with the volcano, spewing ash all around. The left lever moves to the right, the center moves to the left. My best advice is to lower the right lever, and then move the left lever until you scroll through to the volcano. I think that works ;). Once you have the weather set on ashstorms, return to Almalexia. She's most pleased with your work, despite the fact that she's caused countless slowdowns on people's computers. Message to Bethesda: WE FRICKING HATE ASHSTORMS! ------------------------------------------------------------------------- - Salas Valor's Last Stand - ------------------------------------------------------------------------- I like this quest. Not only does Almalexia give us permission to kill one of her Hands (the elite of Ordinators), but we also get to keep all the cool looking armor they use. Back to the quest. It appears Salas Valor has gone nuts, and abandoned his post as a Hand of Almalexia. But instead of retiring quietly, he decided to bitch about his previous employer. Although hesitant to give you the order, Almalexia assures you that this is the only option available. Almalexia tells you to ask people where he is, but I'll save you the time and just say go to Godsreach. Look around the Winged Guar, and you should spot Valor fairly easy by his magically enchanted High Ordinator gear. Talk with him, and he'll immediately attack you. Valor is tough, but nothing entirely special. He does have a high reflect magic chance, so be careful you don't kill yourself if you use magic. His ebony longbade rocks, but its charges don't last too long. When he's dead, take his equipment, and return to Almalexia. She's disappointed that Valor had to be killed, but happy that you've done her dirty work. For your reward, Almalexia offers to do one of three things: 1. Skin like iron (CE, fortify light, medium, heavy armor +5) 2. Protection against paralysis (CE, 20% paralysis resistance) 3. Warm, reflected glory (CE, Fortify Health + 10) I suppose you could choose the fourth option, but you'd have to be nuts to choose that. ------------------------------------------------------------------------- - Forging Nerevar's Blade - ------------------------------------------------------------------------- It's about fricking time. Going through Morrowind, I was surprised you were never able to recreate Nerevar's sword. Heck, the main quest makes no mention of it, until now. Talk with Almalexia, and you'll begin a long conversation about Nerevar, and that she believes you are the Nerevarine (does this conversation change if you are already the Nerevarine?). She explains that after Nerevar allied with the Dwemer, the Dwarf-King Dumac forged two blades for Nerevar and Almalexia when they were married. Nerevar was given Trueflame, Almalexia, Hopesfire. Hopesfire remains with the goddess, but Trueflame was destroyed in the last battle at Red Mountain, where Nerevar was killed. Almalexia as one piece of the blade, and claims that the other two pieces are conveniently in Mournhold. It's time to find them, then a craftsman who can forge the blade. Practically any person outside the Temple will say go see Yogak gro-Gluk if you want Trueflame reforged. Well, that was fast. Now, onto the two pieces. I can send you on a wild goose-hunt, or I can simply tell you to talk with Karrod in the throne room in the palace. If you've fought him, his disposition should be high enough to where he will give you the second piece (Old Dwemer Weapon). I'm not exactly sure who points you to Karrod, but I think it may be Barenziah. She also mentions talking to Plitinius Mero. Do that. Mero says that you should try Torasa Arath at the Museum of Artifacts in Godsreach. Talk with her, and she'll say she has a Dwemer Battle Shield that may work, but she wants you to donate two items to the museum. Blasted. Unless you have two good items to go to waste, I suggest doing King Helseth's quests (if you haven't), or some of the miscellaneous quests (Golena Sadri, Robe of the Lich) to find two items to donate. When you've done that, take the three items to Craftmen's Hall. Talk with Yagak gro-Gluk, and have him forge the blade. Wait 48 hours, and talk with him. He will give you the blade, but say someone will have to enchant the sword to reflame it. And despite the fact that we gave way 20,000+ gold at the museum, the Dwemer piece we got from the museum appears to be worthless. Darn. Ask Yagak about reflaming, and he'll say try looking for the writings of Radac Stungnthumz, who lived in Bamz-Amschend. Great; I hate that place. ------------------------------------------------------------------------- - I will walk through the fire! - ------------------------------------------------------------------------- Enter Bamz-Amschend, and enter the passage of whispers, down to Radac's Forge. If you don't have another Dwemer Satchel Pack (you should have at least one more), you're going to have to search throughout the ruins to find one. The Hall of Winds (opposite of Passage of Whispers in Heartfire Hall) has one. In Radac's Forge, enter the first Heavy Dwemer Door you see, and search for Radac Stungnthumz's ghost. He moves around, so you may need to look around a bit. However, I've never had any problems finding him. When you talk to him, you'll discover that he will reflame the sword, but he needs some Pyroil Tar to do the job. While he used to have some, it's long gone. He knows that there are some found in the lower caves of Norenen-dur, in the citadel of Myn Dhrur. Good stuff. Leave Radac, and continue through to the Passage of the Walker (through the first collapsed rocks). When you reach another Heavy Dwemer Door, turn around, and you'll find another group of collapsed rocks. Use your second Dwemer Satchel Pack, and create an opening for you to go through. Alright, we're in Norenen-dur. Move forward a bit, then east towards a door that goes to the Teeth that Ghash. There two other doors here, but they lead to nothing (as far as I know). Continue moving along until you reach the Citadel of Myn Dhrur. Along this path (and in the Citadel itself), you'll fight the badasses of Morrowind. Storm Atronachs, Dremoras, Golden Saints, and Winged Twilights are your foes here, not to mention the Dremora lord, Khash-Ti Dhrur. When you kill Dhrur, search his body to get the Pyroil Tar. If you're the adventurous type, levitate up to the top of the Citadel, and enter Wailingdelve, which contains a lot of Daedric equipment along the waterfall. Return back to Radac whenever you're ready. Talk to Radac about "add fire" again, and he'll reflame the sword. Sweet. Return to Almalexia once you've done this. ------------------------------------------------------------------------- - Final Dungeon, Part I - ------------------------------------------------------------------------- After you're relit Trueflame, talk with Almalexia. She'll explain that Sotha Sil has gone nuts, and she wants you to take him out. At this point, you have two options. Teleport to the Clockwork City, and finish the main quest, or say you need to prepare, and do whatever you want. You need to be at an significantly high level to do this last mission. At the very least, keep an extra save at Mournhold, just in case. Teleport to the Clockwork City. The City is really one puzzle after another. Most doors (ones that don't lead to a new area) are opened by levers that require a high strength to push. Off hand, most are easy to do, with the exception of one. Enter the Outer Flooded Gates, by using the level on the right pillar (from where you first entered). Go through the door, and head east past the moving axes. This is pretty easy to do. A nice trick I used was hold down tab, then turn the camera 90 degrees. This allowed me to get real close to the axes, and see exactly when they would come down to kill me. Hit another level, then go south, east, then north into a larger space, then east through a door, to the Inner Flooded Halls. From here, you'll face the norm; a bunch of fabricants. They're tough, but not too hard as long as they don't bunch up on you. From another large room, go east to a set of three different axe traps. Again, use the same strategy as above, and pass through all six axes. It may be a good idea to save now, so you don't have to go through it again, though. Use the door to the Hall of Delirium, so south at the fork, then east, then south again to the Central Gearworks (bypassing the spikes on the ground). And if you don't have any way to fortify your acrobatics or speed, I recommend picking up the speed elixers that the fabricants drop. In the Central Gearworks, go up the spiral ladder, then into the Hall of Theuda (passing the fire; it seems like you can't pass it, but you can), then to the Hall of Kasia. Take out the three fabricants, and slowly walk up the ramp. See what you need to do? Wait the spinning spike to pass you, then run like heck to the door on the second floor. If you can fortify acrobatics, you can jump straight to the door. If you can't, use those speed elixers (at least 15+), then wait for the spike to pass you, then race to the door to the Dome of Serlyn. ------------------------------------------------------------------------- - Final Dungeon, Part II - ------------------------------------------------------------------------- You may be having "fog" problems in the Dome of Serlyn. Refer to section 7 (Fog Fix) of this document for help. Inside, you'll find two levers to the right of a machine. Keep the left one red, and wait until you hear a bunch of sound from the machine (it's creating a fabricant). Keep pressing the right level, until it turns green. Immediately, run into the machine, pass the enemy, and through the door to the Dome of Udok. In Udok, press the lever to your right to create a bridge. The lever requires a huge amount of strength (100+). It's unlikely you have 100 Strength unless you're at an extremely high level. So what are you supposed to do? The Hulking Fabricants drop strength elixers; fortify stength +10 for 10 seconds. Pick up a couple of these, use them, then press the level. Moving on, the rest of Clockwork City is fairly linear, through the Hall of Mileitho, then to the Dome of the Imperfect. I ran away from most enemies ^_^, tired as hell from the boring fabricants. However, once you arrive in the Done of the Imperfect, you're greeted by something most unexpected: a fricking robot! I have no idea how to kill this guy effectively. I fortifed by acrobatics, then jumped on top to the non-moving Imperfect, and shot fireball after fireball. The Imperfect couldn't touch me, but it took fricking forever to do this. But unless you're an absolute wuss (like me), this isn't an effective way of killing him off. This is likely because you aren't using magic-based attacks, nor do you have what seems like an infinite amount of magicka (I can assure you, I used up dozens of potions to continue my onslaught). I'll assume you're at a high level, and holding a weapon comparable to the badassness of Trueflame. High speed is absolutely required, so that you can dodge the Imperfects stomp/magic attacks. Boots of Blinding Speed are great at this, but if you have a decently high athletics (and you aren't carrying much), then that's enough to dodge his attacks. Furthermore, you'll need to beef up your strength. How much? *cough*A couple hundred*cough*. The higher the better; I've managed to kill the Imperfect in one hit, with a strength value at somewhere near 3000. The lesson in all of this? Fortify strength pwns us all. If don't have access to these kinds of potions, then I suggest the cowardly way of killing this guy that I mentioned first. Once he's dead, enter the Dome of Sotha Sil. ------------------------------------------------------------------------- - The End of our Journey - ------------------------------------------------------------------------- Go up to the lifeless Sotha Sil, and you'll receive a journal entry update. Turn around, and attempt to leave. Almalexia will show up, using Barilzar's Mazed Band. Dastardly. Almalexia uses her time to give you a lecture about many things. Such as: Sending you to Sotha Sil to die, so that she could use you as a martyr for her new world order, outside of the present Tribunal ways. She confesses that she killed Sotha Sil herself, and used his waning powers on the attack of Mournhold. After much talk, Almalexia pulls out Hopesfire, and attacks. The toughest amount in Tribunal, Almalexia relies mostly on Hopesfire. She did use a couple of magic attacks at the beginning of the battle, so you may want a decent magic resistance (especially considering the shock damage from Hopesfire). Other than her damaging weapon, Almalexia just has a lot of life. Pound on the bitch 'till she dies really isn't a simplified version of the fight. Heal when needed, and keep pounding away. I found that I took more damage off her at the end of the fight, so don't fret if it seems like you're doing no damage against her. She'll go down eventually. When she dies, take Hopesfire and Barilzar's Mazed Band from her body, Equip the band on you, and you'll have an option to teleport to Vivec, Mournhold, or Sotha Sil. Teleport anywhere, and you'll appear in the High Chapel of Mournhold, no matter what. Exit the Temple to the Courtyard, and you'll be greeted by Azura. Considering this is the last part of the game, I'll say nothing about what Azura says. Weeeee! We're done. If you did King Helseth's quests, return to him for your reward for keeping an "eye" on Almalexia: full set of Royal Guard armor. Sweet. Let's hope for expansion #3 soon guys. ========================================================================= 5. Tienius Delitian's Quests ========================================================================= After you have the contract from the Dark Brotherhood, talk with Tienius Delitian about it (considering the contract itself says that the King sent the attacks on you). Instead of saying he's sorry, he gives you some quests. The reward: one sweet sword. ------------------------------------------------------------------------- - A Source of Rumors - ------------------------------------------------------------------------- Delitian's first quest is simple: Find the source of rumors about King Llethan's death. We don't need to find out who's doing it. Rather, you only need to find this propaganda. There is a number of ways we can do this, but I'll go into detail what I found to be the easiest, and fastest. While you don't have to do this (because I'm telling you), approach a Royal Guard and raise his disposition up to above 60. Ask him about the King's death, and he'll mention you should try Llethan Manor in Godsreach. It's located in the northwest corner of the area, near the sewers entrance. Once inside the Manor, make your way to Ravani Llethan and talk to her the King's death. She will bring up the Common Tongue newsletter, and how Helseth has been rumored to have killed hundreds of people in the West with poison. Why wouldn't he try it on the King? Sounds like we found our source. Leave the manor, then enter the Winged Guar bar southeast of your present location. Go upstairs, then use the door to get on the balcony outside. Look around, and you should find a copy of the Common Tongue newsletter, along with its accusations of the King. This is the evidence we need; return back to Delitian. ------------------------------------------------------------------------- - Temple Informant - ------------------------------------------------------------------------- King Helseth wants to know whether or not the Tribunal Temple will support him when he takes over as King of Morrowind. Usually, the puppet King has no real poweror ambititons, and the Temple supports the King freely, such as Llethan. Helseth, however, has some real ambition, and he's afraid he will gain the Temple as an enemy. He wants to know before they strike. Delitian wants you to find a member of the Temple that is discontent with the way things are going inside the organization. Exit the Palace, and enter the Temple on the north side of Mournhold. Once in the reception area, take note to the brown robe wearing Dark Elf: Fedris Hler. He's the quest-giver for the main quest. You may or may not have already done quests for him, but if you haven't, this is the guy. Enter the infirmary and walk into the first door you run into. Talk with Galsa Andrano. She's not the only person discontented with the temple, but we only need to find one person. Feel free to talk with others inside the temple; the process is all the same, however. Do whatever you want. Talk with Galsa. You may need to bump her disposition up quite a few points, so that she spills the beans that she's unhappy with the current situation inside the temple. When you have the choice, choose to listen sympathetically, then ask if you can talk with her again. Delitian made sure that you do this, as she may change her feelings about becoming an informant if you are just using her. Talk with her again about being discontent, then ask her about the Temple's opinion of Helseth. She says its clear that if the Temple has its way, Helseth will not become King. With that information, go back to the Palace and talk with Delitian again. ------------------------------------------------------------------------- - Disloyalty among the Guards - ------------------------------------------------------------------------- As usual, there seems to be some disloyalty amongst the guards (repeat of the Imperial Legion quest in Gnisis?). Delitian orders you to find whoever is behind the disloyalty. Great. If you haven't noticed by now, there are tons of Royal guards in Mournhold. Fortunately, one of the suckers is standing right next to you. Talk with Ivulen Irano about your Hlaalu connections. He will blurt out that you should talk with Aleri Aren... twice. Ah ha! That must mean she may be involved as well. Exit out the Throne Room, then continue through the reception area, through the legion depot, into the Guard's quarters. Stay on the same floor while doing this. Inside the quarters, find Aleri Aren (she moves around a lot; check downstairs if you can't find her near the beds), and talk to her about your Hlaalu connections. She'll be quick to point out that she has no idea what Ivulen is talking about, and that she'll have to talk to him about this misunderstanding. Return back to Delitian, and give him this news. We still need solid evidence to point out these guards. It looks like Aleri may be a tough one to break. However, Ivulen looks like a pushover. Go back to the Guard's Quarters, and look under his bed near this chest. Pick up the handwritten notes, and you'll get a Journal Entry update. It's a bit hard to do this without the guards seeing you. Just be patient, and they'll eventually leave the area. Also note: A Common Tongue Newsletter under his pillow. While you don't need this as evidence (judging by the lack of journal update notice), it makes it clear that Ivulen at least one member behind this. Once again, go back to Delitian, and give him the handwritten notes. This is the evidence you need, and he's pleased as usual. Instead of taking out the guards, he merely suggests that he will transfer their patrol times. Interesting, no? ------------------------------------------------------------------------- - Conspiracy against the King - ------------------------------------------------------------------------- There seems to be some discontent with Helseth becoming King (especially considering the mysterious circumstances of Llethan's death). Delitian orders you to find out if there are people behind a conspiracy, and to find out who they are. He suggests checking out Llethan's Manor, as they would likely be connected to any conspiracy. Once inside the Manor, go into the room with Revani Llethan. Search for her desk, and "steal" the handwritten letter. This is the evidence Delitian needs, so return back to the Royal Palace to continue on with the quest. Delitian requests that you kill off the members of the conspiracy brought to light in the letter: Forven Berano, Hloggar the Bloody, and Bedal Alen. He writes up official writs for their assassination, so we're not stuck with over 3000 Gold in bounties. Talk about each person, and Tienius will offer his idea where each person is. Berano usually hangs around the temple, as he is a member of the Temple. Hloggar likely doesn't pay for housing, so it is likely that he lives in the sewers. Unfortunately, Delitian is less specific with Alen, who could be anywhere in the city. Let's start with the easiest kill: Forven Berano. Go to the Temple Courtyard, then move north behind the temple itself. He's standing right there, in a wide-open area for you to take him out. If you have trouble with him, I recommend spending some time to level back in the mainland. The next two assassinations are a bit harder. Note: Berano holds a really nice Ebony Shortsword. It's a great short blade, and its also worth tons of gold. Secondly, let's get Alen, as we are still in the city. If you talk to people about him, they will mention that he is somewhat of a workworm. It may be a good idea to check out the bookstore in the Great Bazaar. Go inside, then upstairs. Fortunately, Alen won't attack you until you do. So take this time to find a good place of attack; make sure you don't get stuck in this small room, in case you have some trouble. With Alen dead, we only got one guy left, Hloggar. Head to Godsreach, then down to the Residential Sewers in the northwest corner. Once down in the sewers, head east, south, then a little bit more east, then south to a ladder to the West sewers. If you had talked to people about him, they would've mentioned the West sewers, and even give directions on how to get there. Fortunately, you have me here, so we don't need to worry about that. Also note: You may see a gang of dark elves holding Dilborn, a naked Breton, hostage. Check the miscellaneous quest about Thrud to discover how to complete this quest. Anyway, back to Hloggar. Once in the West sewers, head west and follow the cave through the waterfall. When you approach the Nord, he will immediately charge at you with his large axe. If you need space, I recommend moving back to at least the waterfall area, and try a levitate spell to get away from him. Still, if you made it this far, Hloggar should be no problem. Once he's dead, return back to Delitian for your reward: 3000 Gold. ------------------------------------------------------------------------- - Anonymous Writer - ------------------------------------------------------------------------- When you return back to Delitian for your reward for assassinating the conspirators, he mentions that its time to find the mysterious author of the Common Tongue. To find the author, we'll need to do the usual: talking with the townspeople. Use your usual informants (the ones that already have their dispositions raised), unless you have an amazingly high personality. The only information you can get out of people is that the author is probably someone who likes to write (a bookseller) or someone with a shady reputation (pawnbroker). To save you time, just head to the pawnbroker, and talk with the Khajiit. He has heard rumors that a man named Trels Varis is the person behind the newsletter, and that you can find him in the Craftmen's Hall. When you enter the hall, talk with the people inside. When asked about Varis, everyone is quick to point out that he isn't here. Go back to the entrance, and look for a 70-locked door and open it. Go down the trap door, and you'll enter a secret office, with Varis inside. Talk with him about stopping his "lies" about King Helseth, and you're given some more options. Use the third one (donate 3000 Gold to orphange). This persuades Varis to stop printing poison rumors about Helseth. Mission accomplished. Return back to Delitian, and get your reward: 3000 Gold, plus the King's Oath Blade! For your last order, he requests that you talk with Lady Barenziah (Helseth's mother). ------------------------------------------------------------------------- - Meeting Lady Barenziah - ------------------------------------------------------------------------- Despite the implications that Barenziah has some quests for you, there isn't any. In fact, you get no journal entry update when talking to her. So why talk to her? If you've already spoken with her (doing the main quest, or you simply stumbled upon her room), then forget about talking to her. The main point of talking with her is that you know she exists, and that you should talk to her later in the game (hint, hint). To find her, go behind the throne and use the door to Imperial Cult Services. Move straight towards another door, then through some regular doors until you reach her in her bedroom. When you speak with her, she advises that you speak to two people. Fedris Hler is the quest-giver that continues the main-quest. The other person, Plitinius Mero really adds nothing to game, other than some backstory about Barenziah. Apparently, he wrote a biography on Lady Barenziah, and it was immediately banned by its outrageous claims (that are true, however). I'm not sure on the location of the book in the game (even if it exists), but its an interesting read from what I've been told. When you're done talking, its time to move along with the game's quests. If you've already done the main quest, there are plenty of miscellaneous quests to keep you in Mournhold for a bit longer. ========================================================================= 6. Miscellaneous Quests ========================================================================= All of this miscellaneous quests can be completed anytime you feel like it. They're great distractions to the main quest, in case you get tired of it. Or, at the very least, they're something else to keep you in Mournhold for a few hours more. Quests are in alphabetical order. Use the search function at the beginning of this document to quickly find what you want. ------------------------------------------------------------------------- - The Black Dark Gang - ------------------------------------------------------------------------- This quest is triggered when you first talk with Marisa Adus when you first approach the Manor District to defeat the local Dark Brotherhood base (in the Bazaar Sewers). She'll mention that the Black Dart Gang murdered her husband, and that his ghost will remain in this world until the Gang was been wiped out. Ah... a good old massacre quest. I've missed these. Go into the Temple sewers, and meet with Variner's ghost. He'll tell you that the gang lives in Temple Sewers West. From where the ghost is, go west, until you reach the door to their hideout. The ghost speaks of a way to drown the gang, by using a torch that floods the hiedout. While I personally prefer the good ol' hack-n'-slash way of killing, you can try this out as well. The torch is at the entrance of the hideout, quite a ways away from the actual gang itself. When they're dead, talk with Adus once again; she'll reward you with Variner's ring. ------------------------------------------------------------------------- - Blind Date - ------------------------------------------------------------------------- You've probably already discovered now that there are many lonely men in Mournhold, all looking for that special "someone." Just their luck: Marena Gilnith is up for grabs. This quest is an interesting one, as depending on who you hook Marena up determines your prize for the quest. Below, I'll give your rewards for each man you hook Marena up with. Obviously, you're going to have to choose which man offers the best reward. It's fairly common knowledge that the Bipolar Blade is the best item to get. To hook Marena up with a man, it's a fairly easy process. Talk to her first, then whoever you want to hook her up with. Then back to Marena, then the man, and so on. After they decide to meet, wait 3 or 4 days, and talk with the man to get your reward (assuming everything went okay). Here are the names of each person, his location, and what he will reward you if you hook him up with Marena. Goval Ralen (in front of Temple) - Ralen Family Belt. The belt is worthless, fortifying personality and speechcraft by 5. Yippee! Trader in Bazaar - BiPolar Blade. Fons Baren - A player. Date ends in diaster. ------------------------------------------------------------------------- - The Bouncer - ------------------------------------------------------------------------- Here's a short, simply quest you can do in the Winged Guar in Godsreach. Talk with Hession, who can be found either in the foyer, or upstairs. She's the owner of the bar, and has a request for you; the regular bouncer hasn't showed up today, so she asks if you can take his place for a few hours. Say you will, and you've officially become the bouncer of the Winged Guar. Hession suggests talking to everyone in the bar, but I'll speed things up and tell you who exactly is causing trouble. Find Denegor, the wood elf that is usually hanging around Hession herself. When you talk to him, it's obvious that the guy is wasted. To continue with the quest, choose the option that has you kicking Denegor out of the bar. He gets upset, and begins attacking you. Hession wants no one dead, so pull out your fists, and pound on him until you knock him out. When he's KOed, talk with Hession about any trouble, and she'll reward you with 1000 gold. ------------------------------------------------------------------------- - Champion of Clutter - ------------------------------------------------------------------------- Detritus Caria can be found upstairs in the Craftsmen's Hall. When you talk with him, Caria will ask for some clutter to complete his great collection. At first, he asks for a bolt of Imperial rat hair fabric, and a brushed silver pitcher. For the sake of simplicity, I'll ditch the descriptions he give, and just tell you where to get both items. For the fabric, enter the clothier shop in the Great Bazaar. Behind Belwen, there is a large shelf, with two kinds of fabric under it. Steal the burgundy one. Unfortunately, you can't buy the fabric, so you do actually have to steal it. My sneak was rather low, so telekinesis is a great spell to use. If you do get caught, either reload and try again, or drop all your stolen equipment, and talk with a guard. The second item, the brushed silver pitcher, can be purchased at the pawnshop under the name: Silverware Pitcher. Return to Caria and give him the two items. In return, you'll receive 300 gold. Talk with him about clutter again, and he'll ask for three more items: a normal, Redware pot, a full set of Imperial Silverware, and a yellow metallic plate. For the redware pot, check the trader in the Great Bazaar under the same name. A full set of Imperial Silverware can be found in Gavis Velas' Manor in small, locked chests upstairs (also can be found in Andoren's Manor as well, but a bit harder to steal). The yellow metallic plate requires you to go back to the main island. Check traders in Vivec or Balmora to find one. (Balmora Counsel Club has two, but it's hard to steal them). Return to Caria and give him the three items. Your reward is 500 gold. Talk with him more about his collection. The final mission in this quest are all found in Bamz-Amschund. The best way to do it is to wait until you actually have to go down there, and pick up the items during the main quest. The items: 2 kinds of goblets, a pitcher, large bowl, and a tankard (mug). These items are found all over the place, but the Hall of Wails inside the Passage of Whispers has all the items. I think there is one other place in the ruins that have all five items, but I don't believe it's in one specific place (with a name). Remember, there are two goblets. One that weighs four lbs., and another one that weighs three lbs. Lastly, the large bowl is only refered to as "bowl" in the game. Fortunately, there is only one Dwemer bowl in Bamz-Amschund. Return to Caria, and give him the whole Dwemer set. In return for this last quest, he offers 2000 gold. Not bad. ------------------------------------------------------------------------- - Dirty Business - ------------------------------------------------------------------------- Inside the Vacant Manor in Godsreach, you'll find three Dark Elves. Only Dovor Oren will talk to you about business, only when you bump his disposition above 60. Once you have that, he will talk about corruption with the wealthy, nobles, yada, yada, yada. This begins a set of four different assassination quests. Only problem is that you're doing it for a bunch of low-lives. 1. Oren will ask that you "assist" him with his cause. He wants you to kill a man named Soscean, and take his sword and cuirass from his body. Apparenty Soscean is a brutal noble with no regard for anyone but himself. At least, according to Oren. Soscean is next to the bar in the Winged Guar (that guy that has no time for you). 2. Return to Oren, and give him the sword and cuirass. He will give you 4000 gold in return (value of sword + cuirass = lots!). Ask him about assisting him in his cause again, and he'll ask that you talk with Felvan Ienith. Instead of wanting to kill an evil rich person, Ienith simply wants the items Elanande has (robe and axe). Elanande can be found right around Llethan Manor in Godsreach. Kill her, then return back to Ienith to give him his precious items. Ienith will give you a 1000 gold reward. 3. Talk with Oren again, and he'll tell you to ask Olvyne Dobar wants. Notice this time, she simply wants the items Bels Vivenim has (spear and helmet). Head out to the Temple Courtyard, and send Bels six feet under. Return back to Dobar, give the items, and receive a 1000 gold reward. 4. Talk with Oren again, and he has one last quest. Take out Suldreni Salandas, and take her mace and amulet. Apparently, Salandas is one of those "evil" rich people. She can be found on the western edge of the Great Bazaar. Kill her, and return to Oren. Give him the items, and receive another 4000 gold reward. And that, ends the quests. If you haven't figured it out, these guys are using you. The 10,000 gold reward you get from them is a miniscule to the amount the eight pieces of equipment you gave them. So what to do? If you aren't the cold-blooded type, don't bother with this quest to begin with. If you are, do Oren's quests. Take the 10,000 gold, then kill the outlaws and take back the armor. In my opinion, the armor is worth a lot, but its worthless to use. None of them standout as unique, with weird magic effects on the armor. I think there's a way to turn one of Oren's drones against him, but I haven't found out how to do that. ------------------------------------------------------------------------- - The Droth Dagger - ------------------------------------------------------------------------- In Godsreach, enter Geon Auline's Manor, and talk to him about his collection of daggers. To complete his collection, he wants to get the Droth Dagger, which was owned by the Thendas Manor. Fortunately, the widow, Arnsa Thendas has money problems, and is selling away some items. Exit out of the Manor, and enter Thendas Manor (just north of you). Talk with Arnsa, and ask her about the dagger. She will say that it's probably in one of the chests. More specifically, it's in the chest with an 80-lock. I had a tough time opening up the chest without her seeing me, so you may have to kill her. Fortunately, there's plenty of money inside the manor to compensate for your bounty (make sure you pay it off before stealing anything!). When you get the dagger, return back to Auline Manor, and talk with Geon. He will reward you with 800 Gold. ------------------------------------------------------------------------- - Fargoth, Part II - ------------------------------------------------------------------------- Well, he's not quite Fargoth, but the wood elf Gaenor in the Temple Courtyard is about as annoying as they come. Talk to Gaenor, and he'll ask for money. Continue to say no, and he'll eventually become mad enough to not talk with you anymore. At this point, there's nothing else to do... for now. Later, when walking in the Temple Courtyard, Gaenor should come to attack you, with full ebony body armor, and a luck around 700. He's one tough opponent; maybe the hardest in the game, as you can't go straight in, guns blazing. Here are a couple of strategies to take into consideration when you want to fight him. 1. Use armor disintegration! Enchanting an item with it is best, as you can fully concentrate on raising the magnitude up (forget about using it at an distance; simply walk up, cast as many times as you can, then back away). 2. Don't use magicially enchanted weapons/spells. Gaenor has 100% reflection. In this case, it's likely you'll kill yourself before he even gets to you. 3. Drain luck. Try to do it as many times as possible. If you can get his luck low enough, he will become a much easier opponent. There's a couple other things you could try, but I'll leave that to you to figure out. Summoning creatures is a great way to distract him, long enough for you to get a couple hits in. ------------------------------------------------------------------------- - He may be dumb, but at least he isn't naked! - ------------------------------------------------------------------------ In Godsreach, you'll find a Nord, Thrud, who wants to find his missing teacher/friend Dilborn. Say he'll help the Nord out, and he'll follow now. Head towards the northwest corner of Godsreach, and enter the Residential sewers. From here, you've may have already met Dilborn, as one of the first quests from Fedris Hler has you pass him. If you haven't done that quest yet, it may be a good time to do it now. Kill two birds with one stone, right? Alright, back to the quest. With Thrud without, head into the Residential Sewers via Godsreach. Immediately, go east, south, then east until you reach a fallen gate with two doorways (missing the doors). Inside, you will see three dark elves and a naked breton. Talk Drathas Norus, and pay off Dilborn's 3000 Gold random. I'm aware of a way to save Dilborn without paying the gold, but I haven't found it yet (and out-right fighting doesn't seem to work). Once you've saved Dilborn, talk with Thrud. He rewards you with a 250 Gold book called Trap, that raises your sneak by 1. Yay. ------------------------------------------------------------------------- - Hot Pockets - ------------------------------------------------------------------------- Find this disgruntled Wood Elf just outside the Winged Guar in Godsreach. He's pissed off at a Holmar, a drunk Nord who kicked him out of the bar. Now, High-Pockets wants some revenge. Says he'll help him out, then enter the bar. Holmar should be either right in front of you, or upstairs. He's easy to find, as he is the only Nord inside. Talk with him, and you will have three options. Telling him to leave and sober up gets you nowhere. He just ignores you. The second option forces a fight between you and him (with whimpy High-Pockets sort of helping out). When he's dead, the Wood Elf will reward you with a crappy Ring of Icegrip, and 250 gold. If you choose the third option, you lose 60 gold, but get the same reward as if you killed him. He's tough, so you'll want to make sure you're at a decent level if you want to choose to kill him. Fortunately, you still get most of the reward if you choose NOT to kill him. ------------------------------------------------------------------------- - How to get Custom Armor - ------------------------------------------------------------------------- In Craftmen's Hall, you'll meet a man named Ibhori Fautus, a great adventurer. Also has a huge ego, and deserves to die NOW. Unfortunately, this quest doesn't deal with that. Rather, wait later into the game (I talked with him when I first arrived in Tribunal, then came back to find the author of the Common Tongue), and Bols Indalen will be all pissed off that Ibhori has deserted his apprenticeship. He now wants to find another one. Go inside the Winged Guar, then downstairs towards the bar. Look for a redguard named Therdon, and you'll automatically mention the available apprenticeship for Bols Indalen as he tells you his failure in the pillow industry. He'll immediately say he'll head to Craftmen's Hall. Return to Bols, and he'll now do custom armor for you, depending on if you have enough raw glass, raw ebony, or adamantium ore on you. Locations of those, are in bunches of places ^_^ Off hand, there isn't much raw glass or ebony in Mournhold. ------------------------------------------------------------------------- - Mercenary for Hire - ------------------------------------------------------------------------- A cool, overlooked feature in Tribunal is the fact that you can hire a mercenary to follow you through Mournhold (sorry guys, no traveling to Vvardenfell). He's quite customizable; give him armor/weapons that fit under his specializations, and he's nearly as strong as you. The mercenary, Calvus Horatius, can be found in the Palace Courtyard. He's a normal bulky warrior. Give him a long blade and some heavy/medium armor, and he's your tank. 250 gold keeps him for 30 days, at which point I assume the player has a choice to continuing his services. Be careful when taking items from him. If you take away items up to a point, he'll be disgruntled that you're ripping him off. ------------------------------------------------------------------------- - The Museum of Artifacts - ------------------------------------------------------------------------- I never had the opportunity to really go through this part of Tribunal. Fortunately, for the success of this guide, it doesn't have much to do with Tribunal to begin with. Almost all the items that the Museum takes is from the main game. So if you're interested in filling up the museum, take a look at the complete FAQs on Morrowind. And if you're interested to using the information I did gather, go ahead and send me an e-mail. ------------------------------------------------------------------------- - Mournhold's Battle Bots - ------------------------------------------------------------------------- In Godsreach, enter Ignatius Flaccus' House (near the Andoren and Auline Manors). Go downstairs, and talk with Flaccus. He says he runs a Robot Arena in his house, but his robots have all fallen apart. His enchanting skills are unparalleled, he claims, but when something gets beaten down hard enough, replacement parts are needed. He wants 10 pieces of Dwemer scrap metal. He secretly wants three Rusty Dwemer Cogs. And how do I know this? Because after you return with the 10 pieces of metal, Flaccus has a second quest to get the three cogs. So to save time, lets pick all this stuff in one go. Exit out of the house, and enter Bamz-Amschend. If you don't know where this place is, go a bit further into the main quest, as its not available when you first arrive in Mournhold. In Heartfire Hall, go through the eastern door to the Hall of Winds. In this large area, go through the four smaller rooms. Adding the fact that the enemies inside drop scrap metal, and you should have the 10 pieces of scrap metal easily in this rooms. There are four cogs inside, so you will have no problem filling at quota. If you don't get 10 scrap metals, go further into the ruins until you do. Return to Flaccus, and give him the scrap metal, then the cogs back to him. Leave the house, and rest for 3-5 (?) days, then return back to him. He's done with this repairs, and asks if you want to wager on a battle. Go for it; I bet 1000 gold that the Steam Centurion would beat the Centurion Sphere, and I won. Go me. When the (long) fight is done, leave the house and head towards the Craftmen's Hall / Winged Guar area. Talk with Venasa Sarano, and the Robot Arena fan said when she tried to visit Flaccus, the door was lock, there was no answer from him, and she heard strange noises inside. That can't be good news. Return back to the house, and open the level-40 lock. Inside, go down to the basement, and kill the Dwemer robots. When they're all dead, talk to Flaccus, who explained that the robots went hay-wire (or maybe they just wanted freedom from this cruel, cruel world?) and hands you 1000 gold in reward for saving his life. ------------------------------------------------------------------------- - No beer makes Golena go something, something - ------------------------------------------------------------------------- Enter Sadri Manor in Godsreach (it's connected to Thendas Manor), and talk with the people inside, including Golena. From what they've said, it appears Golena has gone nuts, accusing people of trying to steal "it." Talk with Aluan Llarys, who tells you that Golena had a normal conversation with Elbert Nermare not too long ago (normal meaning not babbling about me stealing "it"). Go to Craftmen's Hall, and talk with Elbert the enchanter on the second floor. He tells you Golena wanted him to see these strange devices of Dwemer origin. He warns that when he approached the devices, he felt closer to death. Yikes. Return to Sadri Manor. Talk with Aluan, who is now outside the manor. He begs that you help; when he left the Manor to take a breather, someone lock the door. Then, someone screamed. Unlock the level 90 door, and go downstairs to where Golena was. Ah ha! A dead Ordinator. I recommend picking up the complete armor set, even if you have no plans of using it. It's worth a lot of money! When you're ready to continue (considering how heavy the equipment is, you may require return trips to drop the armor somewhere. Also, don't continue until you at least drop the armor on the ground. When you return, the Ordinator will be gone for good!) go down the trap door to the Residential Sewers, then into the Forgotten Sewers. Run forward to the torch in front of you (past the crates in the center) and using the crank to flood the room. Swim down to wear the crates were, and use another trap door (the crates were previously blocking access to this). In the next room, swim to another crank in the southwest corner to unflood the room (if you have a water breathing spell, forget about this). Once again, use another trap door to go further into the Forgotten Sewers (on the northern section of the room). You're now on the last stretch of the quest. SAVE now. Notice the strange devices on the ground? Don't touch them, or you'll automatically die. When you run past them, they shoot out damaging magic attacks. So what can you do? If you're sneak is high enough, you can sneak past the 10 or so traps. Or you can simply run past them, hoping the magic misses you. There's not much you can do other than those two options. Hopefully you can get past it. When you reach the fork, take the left passage first to reach a couple of chests with some loot. Return to the fork, and take the other passage, which leads you right to Golena, in full Glass Armor. Well, holy crap. Golena is tough. Not only does she have a Museum artifact in the Mace of Slurring, but she also has a deadric longbow with arrows that paralyze you. Yikes. This is arguably the toughest fight so far, so may the force be with you... or something like that. I ended up cheating to kill her, by falling down in the pit in the center of the room, then use levitate, and shoot magic at her from below. Cheap yes, but efficient nonetheless. When you kill Golena, take the mace, bow, and her full set of glass armor (outside of the ugly looking helmet). Return to Aluan Llarys to update him on the situation. He's disappointed that Golena had to be killed, but understands that you had no choice but to kill her. If you check Golena's body, you can get a infinite supply of 50 of her arrows. Simply take what she has on her body, then recheck. This remains in the game as long as you don't get rid of her body. A little cheap, but these are one of the best arrows in the game. The mace is okay, but you can get 10,000 gold for selling it to the museum. ------------------------------------------------------------------------- - Robe of the Lich - ------------------------------------------------------------------------- The Robe of the Lich is a strange piece of clothing. I've yet to create a character strong enough to use it, much less make it useful to wear to begin with. However, you can sell the robe for 11,000 gold at the Museum in Mournhold. Not bad. It's a long process to get the robe, so I recommend getting Mark and Recall (if you don't have it), and use it a lot. You'll be constantly returning to the Temple, and it saves a lot of time over running back. The best time to start this quest is during the cleansing of the Shrine of the Dead quest (second quest in the MAIN QUEST section). Why? The Temple Shrine (where the quest ends) happens to be right next to Gedna Relvel's Tomb, the activator for the Robe of the Lich quest. Considering how long it takes to get down to the Temple Shrine area, I suggest starting that quest before you do this, unless you like to walk a bunch. When you enter the Tomb, walk forward a bit, and you'll get a message saying you smell something strange (outside of normal sewer smells). This is your signal to return back to the Temple Courtyard. Once there, talk with Mehra Helas about latest rumors. She'll explain that there are rumors about a new disease spreading around by rats. She suggests that you talk with Nerile Andaren and offer your help to stop the disease from spreading. Enter the Temple, and make your way to the Hall of Ministry, and talk with Nerile. Do her a favor, and she'll hand you a Cure Common Disease potion to give to Geon Auline. Use your Mark spell, and give the potion to Auline in his Manor in Godsreach. Use you're Recall spell, and talk with Nerile again. When you reach the Halls of Ministry again, seven infected rats appear; kill them fast (be careful not to hit NPCs), and talk with Nerile again. She assures you that this disease isn't a problem, but there is another person who appears to be affected, and needs a Cure Common Disease potion. Exit out of the Temple into the Courtyard, and talk with Athelyn Malas, who should be right in front of the entrance. Give him the potion, then return to Nerile again. After doing some boring pre-K quest work, Nerole finally gives you the goods. The disease appears to be the Crimson Plague, a previously thought-to-be extinct disease. Apparently, it's back. Enter the basement, and approach the trap door to the Temple Sewers. Before you can do this, you'll run into a knocked-out Ordinator, and Shunari Eye-Fly standing above him. Talk with Shunari, and you'll find out she's infected by the Crimson Plague. In return for curing her, she'll give you some information about the disease. Lastly, Shunari tells you to find her in the Temple Gardens to cure her. Return to Nerile (again), and mention "help Shunari" to her. She knows that Shunari will not drink a potion (paranoid, apparently), so she gives you a Chridilte's Panacea scroll. Enter the Temple sewers, and return to the Temple Gardens. Shunari will be standing right in front of the Temple Shrine. Cure her, and she'll tell you about how you released the disease when you entered Gedna Relvel's Tomb. She explains that she saw the Lich, and in one moment, disappear entirely. She believes there is a secret passage with loads of treasure. Enter the Tomb, and move through it until you reach the most bottom point of the dungeon (right before the wooden ladder). Staring at the ladder, turn left 90 degrees, and look at the wall. She the hidden door? There are two types of rock separating by a very distinct line. Look slightly below this, and you should find a smaller rock; step over it, and the door will open, revealing three Skeleton Champions. The rest the secret passage will have a couple of skeleton foes for you to race. Eventually, you'll face the Lich himself. He's strong with the force, so either dodge his magics, or face the consequences. His magic defense is pretty high, so it may be best to use brutal force against him. When he dies, take the Robe of the Lich off his body. Return to Shunari, and talk with her to receive one final journal entry update (just to "finish" up the quest). Return back to Nerile as well, and she'll give you the Grace of Almsivi ability. This restores 30-80 points in health, 60-100 points of fatigue, and cures common diseases. You can use this once a day. ------------------------------------------------------------------------- - Ten-Tongues' Secret Supplier - ------------------------------------------------------------------------- When you have a really high disposition with Ten-Tongues, pawnbroker, he will bring up special offers: good scrolls for dirt cheap. When you ask about the special offers, he admits to dealing in shady business with another Khajitt named Ahnia, found beneath the Great Bazaar. You need to make a decision at this moment. If you choose to look for Ahnia, Ten-Tongues loses his connection with her, and you lose out on his special offers. However, you also gain an awesome relationship with the enchanter in Craftmen's Hall. Choose what you want; if you want a 100 disposition with Elbert Nermare (enchanter), then continue. If you like to keep the special offers, stop what you're doing now. Enter the sewers in the southeast corner of the Great Bazaar. Once inside, go west up the hill, and talk with Ahnia. When you mention Ten-Tongues, she gets pissed off that he taddled about his shady business. She attacks you immediately. Kill her off, and take the nice Glass Dagger from the body. Return back to Ten-Tongues, who is a little displeased that you killed his dealer. Mention Ahnia to him, and he'll give her stolen book (Private Notes - DO NOT READ), and wants nothing to do with you. Read the book, and you'll uncover that Elbert Nermare owns the book (by the fact his signature is on it). Go to the second floor of Craftmen's Hall, and talk with Elbert. Give him the book, and it raises your disposition with him to 100. He also says he will give you a small discount when buying from him. ------------------------------------------------------------------------- - To Be or Not To Be - ------------------------------------------------------------------------- In the Great Bazaar, you'll notice a large portion of the area goes to a stage for plays. Talk with Meryn Othralas behind the stage. He mentions that you look like Tarvus, and that you could replace him as as the lead in the play. Say yes, and he will hand you the script for the play. You can read it if you want, but I'll only list the lines you need to know, as Clavides. 1. "Good Evening. Is your master at home?" 2. "Possibly. Would you mind if I came in?" 3. "No thank you. What's your name?" 4. "Anara, when did your master leave Scath Anud?" 5. "Yes, there is. Do you know an Ashlander by the name of Sul-Kharita?" 6. "Then you aren't likely to know. He's dead. HE was found a few hours ago dying of frostbite in the ashlands. He was hysterical, nearly incomprehenisible, but among his last words were 'castle' and 'Xyr.'" *NOTE* This is shortened significantly as an option. 7. "That is your master's library? Would you mind if I looked in" 8. "As I hear, are all Telvanni." After you say that last line, an assassin from the crowd runs after you. After you dispatch him, take his daedric wakizashi, and talk with Meryn again. He explains that Tarvus had a little run-in with a Tevanni's daughter, and he hired an assassin to kill Tarvus. Tarvus usually moves from place to place, so he's hard to find at home. But he can always be found on stage, so Meryn decided to let someone else take Tarvus' place. Depending on how well you did your lines (minus 200 gold per bad line), he will give you 2000 gold and an okay amulet of Verbosity. ------------------------------------------------------------------------- - The Unfaithful Husband - ------------------------------------------------------------------------- Head towards Andoren Manor in Godsreach, and talk with Deldrise Andoren. She's a bit pissed off that you entered her house without knocking, but she decides she should ask your help in finding her husband, Taren. Say okay, and head off towards the Winged Guar, Taren's stop to get wasted. When you approach the Winged Guar, you'll receive a message and journal update. Look a bit ahead, and you'll see a dark elf sneaking away, heading towards the southeast. Follow him (not too close), and you'll see him and another woman meet near Craftmen's Hall. When you see Taren meet with the woman, you should receive a journal entry update. I had some problems getting it to pop up, but appears others have not. My best advice is to make sure you follow Taren's movements, despite the fact that he's slower then hell. After his "meeting," you have a couple of choices on how to finish off this quest. 1. After receiving the journal entry update, return back to Deldrise. She's disappointed that you didn't do anything, but rewards you 500 gold for telling her anyway. 2. Talk with Taren and the woman. Taren will ask that you bugger off, but the woman realizes that Deldrise sent you to follow Taren, and attacks you. Kill her, and Taren will begin to attack you. Don't kill him (i.e. knock him out, or run away), or you'll ruin the quest (Deldrise is furious when she finds out Taren is dead, and refuses to talk with you). Talk with Deldrise, and she's overjoyed that the woman is dead. She hands you a 1000 gold reward. 3. If you kill both Taren and the woman, Deldrise will refuse to talk with you. ------------------------------------------------------------------------- - When Psycho Wizards Attack - ------------------------------------------------------------------------- Talk with Drathas Reyas, and he'll tell you about a dangerous wizard that teleports out of nowhere, and attacks people. Interesting. Walk away, and you may be interuptted by a teleporting wizard Oris Velos. He'll pop out of nowhere, saying how great he is, then want to 'display' his power by killing you. He's darn easy to kill (even you say something about it). Grab the Worn Key from his body, which can be used to open up Gavis Velas' Manor in Godsreach. (Drathas Reyas in Great Bazaar tells that Oris has a relative). When you enter Velas' Manor, and mention his brother's death, Gavis will demand a duel. It seems that Gavis is a good man, but he has to revenge his brother, no matter what. Unfortunately, Gavis is a tough fight. Immediately, he'll summon many creatures, including a Golden Saint. Still, he's not the hardest thing evar, right? ========================================================================= 7. Fog Fix ========================================================================= Apparently, certain ATi Radeon graphic cards cause a fog in the Dome of Serlyn. To fix it, it requires the use of the TES:CS editor. If you haven't touched it before, fear not! It's far simplier then it looks, and just a few instructions will help you on your way through the game. To begin with, open the editor. You don't need the construction set disc. Select data files on the top left corner of the program menu, then double-click Morrowind.esm and Tribunal.esm. Select okay when they're both checked. Depending on your computer, it may take awhile for the CS to load up the .esms. When its done, select "world" on the top menu, then scroll down to Interior Cell. This loads up a new window; basically it gives lighting effects to internal cells in the game. Scroll for Sotha Sil, Dome of Seryln (you just start with some mine), and change the Fog from 1.00 to 0.00 (it might be the other way around, but I don't have access to the editor anymore). ========================================================================= 8. Closing ========================================================================= Credits, Or this guide would not be possible without the help of... CJAYC <> - For creating the coolest site on the net and hosting this FAQ. Official Tribunal Forum at morrowind.com - For fix on fog in Clockwork City, among other things. Andrew Rockefeller - For strategy to kill Imperfect, rewards for romance quest, how to kill Black Dart Gang. Superjmike - For numerous locations of Artifacts in Vvardenfell. Chris Coderre - For Andoren quest solutions. Eltharion5 - For Black Dart quest reward, and for mentioning not all the items in the rare item book in the Museum of Artifacts are accepted. Sources None This GUIDE is (c) 2005. The Elder Scrolls III: Tribunal FAQ - (c) 2003-2005 cnick -End of FAQ- | http://www.gamefaqs.com/pc/914491-the-elder-scrolls-iii-tribunal/faqs/24851 | CC-MAIN-2013-20 | en | refinedweb |
you can download the sample here
This post assume that you familiar with MEF technologies, you can fined more resources here and here.
Background
I’m currently working at Sela Group on project that demonstrate the different capabilities appended to Windows 7. This project architecture involve plug-ins (using MEF architecture) and we needed way to instantiate only those plug-ins that support the runtime operating system.
MEF Metadata
MEF support lazy instantiation and metadata which can be inspect before the instantiation.
Plug-ins decoration
MEF support decoration by using the Metadata attribute, also you can use that attribute out of the box, we decide to have strongly metadata technique.
In order to achieve our goal we follow the next steps:
1) First we have to define how our metadata contract:
public interface ISupportedOS {
OSSupported OSSupported { get; }
}
2) Then we define our metadata attribute:
[MetadataAttribute ]
[AttributeUsage (AttributeTargets.Class)]
public class ExportWithMetaAttribute
:ExportAttribute,ISupportedOS{
public ExportWithMetaAttribute(Type exportType,
OSSupported eSupportedOS )
:base(exportType){
this.OSSupported = eSupportedOS;
}
public OSSupported OSSupported { get; set; }
}
3) Now we have to decorate our Plug-in:
[ExportWithMetaAttribute (typeof (ICommand), OSSupported.Windows7)]
[PartCreationPolicy (CreationPolicy.Shared)]
public class Win7Command: ICommand {
Consuming the plug-ins
Getting the metadata is easy, all we have to do is to add our metadata contract (interface) after the imported type.
Property looks like:
[ImportMany]
public Lazy<ICommand,ISupportedOS> Commands{get;set;}
GetExports looks like:
var commands = container.GetExports<ICommand, ISupportedOS> ();
The following code snippet show the final step which is to decide whether or not the plug-is compatible with out OS:
foreach (var command in commands) {
if (command.Metadata.OSSupported.IsSupported())
command.Value.Execute ();
}
Summary
MEF lazy instantiation and metadata capabilities give us great control on our plug-ins composition.
You can download the sample code and explore the concept.
You can download the code here
Wouldn’t it be nice to extend WCF just by inheriting the relevant interface (IServiceBehavior, IDispatchMessageInspector, ext…) and having it hooked without wasting your time on finding the right way of hooking it?
Well this post is all about this topic.
How does it work?
I wrote 2 helper project MEFContracts and WcfPrimitivePlugins which responsible for that magic, and you will also need reference to MEF.
For any of your WCF services or client proxy, you have to add reference to the MEFContracts project and ComponentModel project (which is the MEF CTP 6 implementation), then place the WcfPrimitivePlugins.dll under relative folder called “Plug-Ins” (you can change this place by adding MEF catalog using ).
Modify your client proxy to inherit from MefWcfProxy<TChannel> instead of ClientBase<TChannel>.
On the service side you have 2 option:
1) using MefHost for hosting your service.
2) decorate your service with [MefBehavior] attribute
Writing your WCF extensibility
Writing WCF extensibility is made simple.
Just create class library application and implement the WCF extensibility interface
optional: add reference to MEFContracts for having advance hooking for timeout, Throttling, and more
Running the sample
The solution can be compile with different compiler switches for demonstrating different scenarios:
So choose the compiler switch (recommended: delete the bin folder), compile the solution and run (WcfClientConsole and WcfHostConsole projects are configure the start with F5)
Writing WCF extension should be focus on extension and the hooking should be transparent.
I just found out about the partial method existence.
You can think about partial method as virtual method for partial scope.
In case you having your own code generator like Templates, VS designers or Custom tools,
i assume that you familiar with the following scenario:
You code generating partial class and you want to give class consumer the ability to intercept (hook) the class initialization.
Traditionally we may think about one of the followings:
This is the place that partial method help you, and it look like this:
Your generated code:
public partial class MyClass {
public MyClass () {
OnInit (); // will compile to IL only if implement
}
partial void OnInit (); // will compile to IL only if implement
}
The consumer code:
partial class MyClass {
partial void OnInit () {
// consumer logic goes here
}
}
The MSDN definition and limitation: or attributes are allowed. Partial methods are implicitly private.
New version of BTested is available on Codeplex (compatible with MEF CTP 6)
Enjoy :-) | http://blogs.microsoft.co.il/blogs/bnaya/archive/2009/08.aspx | CC-MAIN-2013-20 | en | refinedweb |
servlets
as abstract)
please give the answere
The servlet programmers typically... programmers would be forced to implement methods that they don't need. Therefore, HttpServlet provides a default implementation for all those methods that does nothing
Java Programmers aren't Born,java newsletter,java,tutorial
Java Programmers aren't Born
2004-11-30 The Java Specialists' Newsletter [Issue 100] -
Java Programmers aren't Born
Author:
Dr. Heinz M. Kabutz..., and where do all the ideas come from?. This
webpage is my personal opinion, so
For C++ programmers
Java NotesFor C++ programmers
Java inherited many features from C..." with the file extension ".jar".
A package is a way to group classes together.... The .class files represent Java programs in a machine independent
way (called
servlets
a standard menu in all pages. This reduces the amount of duplication of content
the servlets
how do we define an application level scope for servlet how do we define an application level scope for servlet
Application scope uses a single namespace, which means all your pages should be careful
are not supported by all web servers. So before using SSI read the web server
Servlets Books
servlets give all the benefits of CGI scripting languages without the overhead...;
Head First Servlets and JSP
This book will get you way up...
Servlets That's all you hear-well, in this book, at any rate. I hope this book
Free Programmers Magazine
Free Programmers Magazine
..., only for Java Programmers.
Issue12 in Details
Web services...;
Issue8 in Details
Sun Microsystems has concurred all suggestions complying
The Advantages of Servlets
Advantages of Java Servlets
... that the servlets are written in java and
follow well known standardized APIs so.... So servlets are write once, run
anywhere (WORA) program.
Powerful
We can do
ALL command - SQL
ALL Command in Java & SQL Nee all commands in Java. Dear Manoj,
I didn't get u
what do u mean by all command.
could u please... possible options include:
-g Generate all J2EE Architecture allows the programmers to divide their work into two major categories Business Logic Presentation Logic
servlets - Java Beginners
servlets I am doing small project in java on servlets. I want to generate reports on webpage ,how is it possible and what is the difference between dynamic pages & reports ? tell me very urgent pls,pls
What is Java Servlets?
and
javax.servlet.http. Servlets provides a way of creating the sophisticated server
side...
What is Java Servlets?
Servlets are server side components that provide a
powerful mechanism
Servlets - Java Beginners
for more information,
Thanks
Amardeep
servlets deploying - Java Beginners
servlets deploying how to deploy the servlets using tomcat?can you...);
}
}
------------------------------------------------------- This is servlets....
Thanks
Amardeep
Java Training and Tutorials, Core Java Training
Java
tutorials for new java programmers.
Java is a powerful object...
structure similar to the syntax of C++ so it would be easy for C++
programmers... and deletion of memory automatically, it
helps to make bug-free code in
java servlets - Java Beginners
Threads,Servlets - Java Beginners
servlets - Java Beginners
hi all - Java Beginners
Software Consulting Revolutionizing The Way You Do Business
Software Consulting – Revolutionizing The Way You Do Business
Running... to do these updates yourself. Hiring a consultant is the best way to go.
Let... should avoid is the “Jack of all trades”, since they will its... it helps you how to execute servlet programs.
u simply provide path
javascript introduction for programmers
javascript introduction for programmers A brief Introduction of JavaScript(web scripting language) for Java Programmers
Getting Columns Names using Servlets
Getting Columns Names using Servlets
... are the
programmers so why we need to worry about the database. We want to do... helps us to write on the browser. To get a column names from the database
servlets hi
i want to pass the attributes from one servlet to another servlet..
using requestdispatcher...
wat is the way to do this..
actually i read some values into one page.. in this value is primary key
Finding all palindrome prime numbers - Java Beginners
Finding all palindrome prime numbers How do i write a program to Find all palindrome prime numbers between two integers supplied as input (start and end points are excluded
What is jQuery?
helps the programmers to keep
code simple and concise. The jQuery library... and can be used with JSP, Servlets, ASP, PHP, CGI and almost all the web... library for the JavaScript programmers, which simplifies the development of web 2
Online Java Training for Beginners
Online Java Training for Beginners
The online java training for beginners teaches the students that what Java
programming is all about and what are the uses... understands and realizes the need to develop a course that helps
people
Servlets
servlets
Ask Questions?
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://www.roseindia.net/tutorialhelp/comment/92325 | CC-MAIN-2013-20 | en | refinedweb |
" Understanding the Linux Kernel ." An easy description of the Linux architecture and a true handbook for every writer of kernel exploits. Available at the following address:. sourceforge .net
" Common Security Exploit and Vulnerability Matrix v2.0 ." An excellent table listing all recently- detected holes and vulnerabilities. I recommend that hackers print it to use as a poster. Available at .
Cyber Security Bulletins . Security bulletins with brief descriptions of all recently detected vulnerabilities. Available at .
iSEC Security Research . A site of an efficient hacking group that has detected lots of interesting vulnerabilities. Visit the following address: .
Tiger Team. Another hacker site containing lots of interesting materials: .
Download CD Content
True information wars are only beginning. Hackers work underground and brush up their skills. The number of security holes grows explosively; operating systems and server components of applications are patched nearly every day, rapidly becoming larger and more sophisticated. According to the outdated rules of the computing underground, viruses must be developed in Assembly language or even in machine code. Traditionalists simply do not respect those who try to use C, to speak nothing about Delphi. It is much better, they believe, for these hackers not to write viruses, at least at the professional level.
The efficiency of contemporary compilers has reached such a level that by the quality of the code generation they are quickly approaching Assembly language. If the hacker kills the start-up code, then compact, efficient, illustrative , and easily debugged code will be obtained. Hackers characterized as progressionists try to use high-level programming languages whenever and wherever possible, and they resort to Assembly language only when necessary.
Among all components of a worm, only shellcode must be written in Assembly language. The worm body and the payload can be excellently implemented in good old C. Yes, this approach violates 50-year-old traditions of virus writing. Blindly following traditions is not a creative approach! The world is ever-changing, and progressively thinking hackers change with it. Once upon a time, Assembly language (and, before Assembly, machine codes) was an inevitable necessity. Nowadays, both Assembly and machine code are a kind of a magical rite, which isolates all amateurs from the development of "right" viruses.
By the way, standard Assembly translators (such as TASM and MASM) also are not suitable for development of the worm's head. They are much closer to the high-level languages than to the assembler. The unneeded initiative and intellectual behavior of the translator do harm when developing shellcode. First, the hacker cannot see the results of translation of specific Assembly mnemonic. Thus, to find out if zeros are present, it is necessary to consult the manual on the machine commands from Intel or AMD or to carry out the full translation cycle every time. Second, legal Assembly tools do not allow a hacker to carry out a direct far call; consequently, the hacker is forced to specify it using the db directive. Third, control over the dump is principally unsupported and shellcode encryption must be carried out using third-party utilities. Therefore, for developing the worm's head, hackers frequently use HEX editors with a built-in encryptor, such as HIEW or QVIEW. In this case, the machine code of each entered assembly instruction is generated immediately, "on the fly," and, if the translation result is not satisfactory, the hacker can immediately try several other variants. On the other hand, such an approach is characterized by several serious drawbacks.
To begin with, it is necessary to mention that machine code entered using HEX editor practically cannot be edited. Missing a single machine command might cost the hacker an entire day of wasted time and effort. This is because to insert the missing command into the middle of the shellcode the hacker must shift all other instructions and recompute their offsets again. To tell the truth, it is possible to proceed as follows : Insert the jmp instruction pointing to the end of the shellcode into the position of the missing machine command; move the contents overwritten by the jmp command to the end of the shellcode, where the jmp instruction pointed; add the required number of machine commands; and then use another jmp to return control to the previous position. However, such an approach is error-prone . Its application area is more than limited, because only few processor architectures support a forward jmp that doesn't contain parasitic zeros in its body.
Furthermore, HIEW, like most HEX editors, doesn't allow comments, which complicates and slows the programming process. If meaningful symbolic names are missing, the hacker will have to memorize and recall what has recently placed into, say, the [ebp-69] memory cell and whether [ebp-68] was meant instead of [ebp-69] . One misprint would be enough to make the hacker spend the entire day determining why the shellcode became unusable.
Therefore, experienced hackers prefer to proceed as follows: They enter small fragments of the shellcode in HIEW and then immediately move them into TASM or MASM, using the db directive when necessary. Note that in this case the hacker would have use of this instruction excessively, because most Assembly tricks can be implemented only this way.
A typical Assembly template of the shellcode is shown in Listing 15.1.
Listing 15.1: A typical Assembly template for creating shellcode
.386 .model flat .code start: JMP short begin get_eip: POP ESI ;... ; Shellcode here ; ... begin: CALL get_eip end start
To compile and link the code presented in Listing 15.1, use the following commands:
Compiling: ml.exe /c file name .asm
Linking: link.exe /VXD file name. obj
Translation of the shellcode is carried out in a standard way, and, in relation to MASM the command line might appear as shown: ml.exe /c file name .asm. The situation with linking is much more complicated. Standard linkers, such as Microsoft Linker ( ml ) bluntly refuse to translate the shellcode into a binary file. In the best case, such linkers would create a standard Portable Executable (PE) file, from which the hacker will have to manually cut the shellcode. The use of the /VXD command-line option of the linker considerably simplifies this task, because in this case the linker will cease to complain about the missing start-up code and will never attempt to insert it into the target file on its own. Furthermore, the task of cutting the shellcode from the resulting VXD file will also become considerably simpler, than doing this with a PE file. By default, the shellcode in VXD files is located starting from the 1000h address and continues practically until the end of the file. Note that 1 or 2 trailing bytes of the tail might be present there because of alignment considerations. However, they do not present any serious obstacle .
Having accomplished the linking, the hacker must encrypt the resulting binary file (provided that the shellcode contains an encryptor). Most frequently, hackers use HIEW for this purpose. Some individuals prefer to use an external encryptor, which usually can be created within 15 minutes, for example, as follows: fopen/fread/for(a = FROM_CRYPT; a < TO_CRYPT; a += sizeof(key)) buf[a] ^= key;/fwrite . Despite all advantages provided by HIEW, it is not free from drawbacks. The main disadvantage of its built-in encryptor is that it is impossible to fully automate shellcode translation. Thus, when it is necessary to frequently recompile the shellcode, the hacker will have to carry out a great deal of manual operations. Nevertheless, there still are lots of hackers who prefer to fuss with HIEW instead of programming an external encryptor that would automate the dull everyday hacking activities.
Finally, the prepared shellcode must be implanted into the main body of the worm, which usually represents a program written in C. The simplest, but not the best, approach consists of linking the shellcode as a usual OBJ file. As was already mentioned, this approach is not free from problems. First, to determine the length of the shellcode, the hacker will need two public labels ” one at the start of the shellcode and one at its end. The difference between their offsets will produce the required value. However, there is another, considerably more serious problem ” encrypting an OBJ file automatically. In contrast to the "pure" binary file, here it is impossible to rely on the fixed offsets. On the contrary, it is necessary to analyze auxiliary structures and the header. This won't make hackers happy. Finally, because of their nontext nature, OBJ files considerably complicate publishing and distribution of the source code of the worm. Therefore (or perhaps simply out of tradition), the shellcode is most frequently inserted into the program through string array because the C programming language supports the possibility of entering any HEX characters (except for zero, which serves as the string terminator).
This might be implemented, for example, as shown in Listing 15.2. It is not necessary to enter HEX codes manually. It is much easier to write a simple converter to automate this task.
Listing 15.2: An example illustrating insertion of the shellcode into the C program
unsigned char x86_fbsd_read[] = "\x31\xc0\x6a\x00\x54\x50\x50\xbO\x03\xcd\x80\x83\xc4" "\xOc\xff\xff\xe4";
Now it is time to describe the problem of taming the compiler and optimizing programs. How is it possible to instruct the compiler not to insert start-up code and RTL code? This can be achieved easily ” it is enough not to declare the main function and enforce the linker to use a new entry point by using the /ENTRY command-line option.
Consider the examples presented in Listings 15.3 and 15.4.
Listing 15.3: Classical variant compiled in a normal way
#include <windows.h> main() { MessageBox(0, "Sailor", "Hello", 0); }
Listing 15.4: An optimized variant of the program shown in Listing 15.3
#include <windows.h> my_main() { MessageBox(0, "Sailor", "Hello", 0); }
The program presented in Listing 15.3 is the classical example. Being compiled with default settings ( cl.exe /Ox file name .c ), it will produce an executable file, taking 25 KB. Well, this is not bad? However, do not rush to premature conclusions. Consider an optimized version of the same program, shown in Listing 15.4.
This optimized version must be built as follows:
Compiling: cl.exe /c /Ox file .c
Linking: link.exe /ALIGN:32 /DRIVER /ENTRY:my_main /SUBSYSTEM:console file .obj USER32.lib
Thus, by slightly changing the name of the main program function and choosing optimal translation keys, it is possible to reduce the size of the executable file to 864 bytes. At the same time, the main part of the file will be taken by the PE header, import table, and interstices left for alignment. This means that when dealing with a fully functional, real-world application, this difference in size will become even more noticeable. However, even in this example the executable file was compressed more than 30 times ” without any Assembly tricks.
Exclusion of RTL leads to the impossibility of using the entire input/output subsystem, which means that it will be impossible to use most functions from the stdio library. Thus, the shellcode will be limited to API functions only. | http://flylib.com/books/en/1.444.1.76/1/ | CC-MAIN-2013-20 | en | refinedweb |
Handling Namespaces in XQuery
This topic provides samples for handling namespaces in queries.
A. Declaring a namespace
The following query retrieves the manufacturing steps of a specific product model.
This is the partial result:
Note that the namespace keyword is used to define a new namespace prefix, "AWMI:". This prefix then must be used in the query for all elements that fall within the scope of that namespace.
B. Declaring a default namespace
In the previous query, a new namespace prefix was defined. That prefix then had to be used in the query to select the intended XML structures. Alternatively, you can declare a namespace as the default namespace, as shown in the following modified query:
This is the result
Note in this example that the namespace defined,
"", is made to override the default, or empty, namespace. Because of this, you no longer have a namespace prefix in the path expression that is used to query. You also no longer have a namespace prefix in the element names that appear in the results. Additionally, the default namespace is applied to all elements, but not to their attributes.
C. Using namespaces in XML construction
When you define new namespaces, they are brought into scope not only for the query, but for the construction. For example, in constructing XML, you can define a new namespace by using the "
declare namespace ..." declaration and then use that namespace with any elements and attributes that you construct to appear within the query results.
SELECT CatalogDescription.query(' declare default element namespace ""; declare namespace myNS="uri:SomeNamespace"; <myNS:Result> { /ProductDescription/Summary } </myNS:Result> ') as Result FROM Production.ProductModel where ProductModelID=19
This is the result:
<myNS:Result xmlns: <Summary xmlns=""> <p1:p xmlns: Our top-of-the-line competition mountain bike. Performance-enhancing options include the innovative HL Frame, super-smooth front suspension, and traction for all terrain.</p1:p> </Summary> </myNS:Result>
Alternatively, you can also define the namespace explicitly at each point where it is used as part of the XML construction, as shown in the following query:
SELECT CatalogDescription.query(' declare default element namespace ""; <myNS:Result xmlns: { /ProductDescription/Summary } </myNS:Result> ') as Result FROM Production.ProductModel where ProductModelID=19
D. Construction using default namespaces
You can also define a default namespace for use in constructed XML. For example, the following query shows how you can specify a default namespace, "uri:SomeNamespace"\, to use as the default for the locally named elements that are constructed, such as the
<Result> element.
SELECT CatalogDescription.query(' declare namespace PD=""; declare default element namespace "uri:SomeNamespace"; <Result> { /PD:ProductDescription/PD:Summary } </Result> ') as Result FROM Production.ProductModel where ProductModelID=19
This is the result:
<Result xmlns="uri:SomeNamespace"> <PD:Summary xmlns: <p1:p xmlns: Our top-of-the-line competition mountain bike. Performance- enhancing options include the innovative HL Frame, super-smooth front suspension, and traction for all terrain.</p1:p> </PD:Summary> </Result>
Note that by overriding the default element namespace or empty namespace, all the locally named elements in the constructed XML are subsequently bound to the overriding default namespace. Therefore, if you require flexibility in constructing XML to take advantage of the empty namespace, do not override the default element namespace.
ConceptsAdding Namespaces Using WITH XMLNAMESPACES
xml Data Type | http://msdn.microsoft.com/en-us/library/ms187013(SQL.90).aspx | CC-MAIN-2013-20 | en | refinedweb |
getlogin, setlogin - get/set login name
Standard C Library (libc, -lc)
#include <unistd.h>
char *
getlogin(void);
int
setlogin(const char *name);
The getlogin() routine returns the login name of the user associated with
the current session, as previously set by setlogin(). The name is normally).
If a call to getlogin() succeeds, it returns a pointer to a null-terminated.
setsid(2)
The getlogin() function conforms to ISO/IEC 9945-1:1990 (``POSIX.1'').
The getlogin() function first appeared in 4.4BSD. controlling
terminal. In earlier versions of the system, the value returned
by getlogin() could not be trusted without checking the user ID.
Portable programs should probably still make this check.
BSD June 9, 1993 BSD | http://nixdoc.net/man-pages/NetBSD/man2/getlogin.2.html | CC-MAIN-2013-20 | en | refinedweb |
7: Goin API with Facebook Developer Tools 167 in Java
Creation PDF417 in Java 7: Goin API with Facebook Developer Tools 167
7: Goin API with Facebook Developer Tools 167
PDF-417 2d Barcode Maker In Java
Using Barcode creator for Java Control to generate, create PDF-417 2d barcode image in Java applications.
Working with the API Test Console168 FBML Test Console 170 Feed Preview Console173 Debugging FBJS with Firebug176
Bar Code Printer In Java
Using Barcode maker for Java Control to generate, create bar code image in Java applications.
Part III: Developing Facebook Applications 177
Recognizing Bar Code In Java
Using Barcode scanner for Java Control to read, scan read, scan image in Java applications.
8: Developing Facebook Canvas Pages 179
PDF 417 Creation In Visual C#.NET
Using Barcode encoder for .NET framework Control to generate, create PDF417 image in .NET framework applications.
To FBML or iframe That Is the Question 179 Adding a Navigation Header Using FBML 181 Adding an fb:dashboard element 183 Adding a tab set with fb:tabs and fb:tab-item185 Adding a header with fb:header 188 Creating an Editor Form Page190
PDF-417 2d Barcode Creator In Visual Studio .NET
Using Barcode drawer for .NET Control to generate, create PDF-417 2d barcode image in VS .NET applications.
9: Creating Content for Profile Pages 195
Generate PDF417 In VB.NET
Using Barcode encoder for .NET Control to generate, create PDF417 image in .NET framework applications.
Discovering Profile Boxes and Action Links 195 Profile box 196 Profile action links197 Configuring the Default Profile Settings 198
GS1 - 12 Printer In Java
Using Barcode drawer for Java Control to generate, create UPC-A image in Java applications.
Building Facebook Applications For Dummies
Making DataMatrix In Java
Using Barcode maker for Java Control to generate, create Data Matrix 2d barcode image in Java applications.
Pushing Profile Content with profilesetFBML 199 Working with Content in the Profile Box201 Adding Action Links to a User Profile212
Barcode Creator In Java
Using Barcode maker for Java Control to generate, create barcode image in Java applications.
10: Seamless Styles: Styling Your Facebook Application 215
EAN13 Printer In Java
Using Barcode generation for Java Control to generate, create European Article Number 13 image in Java applications.
Adding Styles to Your FBML 215 Using inline styles 216 Using embedded styles216 Including external style sheets218 Specifying Wide and Narrow Styles for Profile Boxes 218 Using fb:ref to Load CSS in a Profile Box219 Going Native: Emulating Facebook Styles220 Setting the basic formatting styles221 Emulating the Facebook dashboard 223 Creating your own navigation tabs 226 Creating a subtitle region 228 Emulating Facebook buttons 229 Creating two-column lists232
Code 128 Code Set A Generator In Java
Using Barcode maker for Java Control to generate, create Code 128C image in Java applications.
11: Hear Ye, Hear Ye: Communicating with the News Feed and Notifications 235
USPS PLANET Barcode Creator In Java
Using Barcode creator for Java Control to generate, create Planet image in Java applications.
Publishing a News Feed Story to Current Users 236 Publishing Actions to a User s Mini-Feed and Friends News Feed 238 Rolling Up Your Sleeves: Publishing Templatized Actions 240 Exploring the template parameters 241 Working with tokens 242 Exploring the fbRecipe template 244 Registering your story template245 Sending Notifications247
Encoding EAN128 In .NET
Using Barcode creator for .NET framework Control to generate, create EAN / UCC - 14 image in .NET framework applications.
12: Tying It All Together: Speed Dial Application 249
Printing Code 39 Extended In C#
Using Barcode drawer for .NET framework Control to generate, create Code 3/9 image in VS .NET applications.
Coming Up with a Basic Vision249 Setting Up Speed Dial in Facebook 250 Creating the Speed Dial Database 253 Structuring the PHP Source Code 254 Setting Up the Canvas Page 254 Connecting to Facebook255 Building the Canvas Page256 Constructing the page header 257 Adding a friend 258 Getting a list of dial friends 259 Previewing the Speed Dial 260 Resetting the Speed Dial262 Processing user actions262
Scan Data Matrix 2d Barcode In .NET Framework
Using Barcode decoder for VS .NET Control to read, scan read, scan image in Visual Studio .NET applications.
Table of Contents
Code 128C Decoder In Visual Studio .NET
Using Barcode scanner for Visual Studio .NET Control to read, scan read, scan image in VS .NET applications.
Assembling the canvas page UI 263 Styling the UI 266 Adding a random quote display 268 Adding a page footer270 Setting the Profile Box Content 271 Sending Notifications and Publishing a News Feed Story 275 Adding an Invitation Page 278 Prepping the About Page 279 Exploring the Full Source Files 280
Generate Barcode In Visual Studio .NET
Using Barcode creator for ASP.NET Control to generate, create barcode image in ASP.NET applications.
xiii
Bar Code Generation In Visual Studio .NET
Using Barcode creation for ASP.NET Control to generate, create bar code image in ASP.NET applications.
Part IV: The Part of Tens 299
Printing Code 39 Extended In Visual Basic .NET
Using Barcode generation for Visual Studio .NET Control to generate, create Code39 image in .NET framework applications.
13: Ten Strategies to Exploit the Power of the Facebook Platform 301
Print Barcode In VS .NET
Using Barcode generation for Visual Studio .NET Control to generate, create barcode image in VS .NET applications.
Optimizing Your Facebook App301 Going Mobile with Your Facebook App 302 Working with Attachments 303 Keeping Track of the Session Key306 Making Canvas Pages Accessible to Non-Facebook Users 307 Handling Unique Browser Needs 308 Integrating with Google Analytics309 Handling Redirects310 Working with Cookies 310 Integrating with Marketplace310
14: Ten Killer Facebook Applications to Explore 313
Local Picks 314 iLike314 Attack!315 iRead 316 Quizzes 317 Where I ve Been317 Flixster 318 Top Friends 318 Introplay s Workout Olympiad and Runlicious 319 Appsaholic 319
15: Smashing Successes: Ten Tips for Making Your Application Popular 321
Avoid Social App Faux Pas 321 Think Social, Act Social 322 Brand Your App Effectively322 Communicate Wisely with Your Users 323
Building Facebook Applications For Dummies
Engage Potential Users with Your About Page 323 Man Your Discussion Board323 Pay Attention to User Reviews and Feedback324 Promote Your App on Facebook 324 React Quickly to Platform Changes and Enhancements324 React Quickly to User Growth 325
Index327
Introduction
f you have spent much time developing Web apps over the past couple of years, you ve probably heard the term social network so many times that you hear it ringing in your ears while you sleep (Talk about nightmares) Until Facebook released its platform, one could understand the nightmares, because social networking seemed far more important to teenage girls on MySpace than to serious Web developers However, when the Facebook Platform was announced by Facebook, social networking suddenly became a buzzword worth dreaming about for the Web development community A whole new breed of Web application was born a social network enabled application If you are interested in developing a Web application that taps into the social networking heart of Facebook, you ve found the right book
More pdf417 on java
About This Book in Java Printing PDF 417
Figure 1-3: My Flickr application displayed as a wide profile box in Java Create PDF417
See 4 for full details on working with FBML in Java Print PDF-417 2d barcode
Part I: Getting Friendly with the Facebook Platform in Java Generator PDF 417
Method in Java Encoder PDF-417 2d barcode
Part II: Poking the API in Java Printer PDF417
3: Working with the Facebook API in Java Create PDF-417 2d barcode
Child Elements in Java Generation PDF-417 2d barcode
Page navigation in Java Painting PDF 417
See 10 for more details on working with the profile box in Java Paint PDF-417 2d barcode
Comparing FQL and API access in Java Printer PDF417
Part II: Poking the API in Java Maker PDF417
When it s embedded in an FBML page, Facebook transforms it to in Java Making PDF 417
Figure 6-4: FBJS Animation enables you to move objects onscreen in Java Encode PDF-417 2d barcode
Part II: Poking the API in Java Printer PDF 417
Figure 8-2: Local Picks app does not emulate the Facebook UI in Java Creation PDF-417 2d barcode
Discovering Profile Boxes and Action Links in Java Creation PDF417
9: Creating Content for Profile Pages in Java Generator PDF-417 2d barcode
Part III: Developing Facebook Applications in Java Encoder PDF 417
10: Seamless Styles: Styling Your Facebook Application in Java Generation PDF417
Articles you may be interested
Par t I: Home Economics in VS .NET Draw QR
Part II: Planning Your Trip to Germany in .NET Printer QR Code JIS X 0510
Adding a Custom Link Bar in VS .NET Generate UPC-A Supplement 2
5: Controlling Program Flow with Decision-Making Statements in Java Draw USS Code 39
Par t II: Financing 101 in Visual Studio .NET Encoding QR-Code
import import import import in Java Make USS Code 39
Book I Green Living in Visual Studio .NET Printer QR Code
When fair market value isn t fair need-based pricing in .NET Drawer QR Code
Counting and searching a queue in Java Generator QR Code 2d barcode
5: Detecting Your Users Browser Environments in Java Drawer QR Code 2d barcode
Germany For Dummies, 4th Edition in .NET framework Generation QR
Grand H tel des Balcons in VS .NET Creation GTIN - 12
Preparing to Publish Your Files in Java Generation Quick Response Code
12: Stalking the Wild Dollar: Internet Commerce in Java Encoder PDF 417
5: Achieving Precision with Google Operators in Visual Studio .NET Encoder Quick Response Code
Condos, villas, houses, and apartments in .NET framework Generation UPC-A Supplement 5
Wittumspalais in .NET framework Print QR Code ISO/IEC18004
7: Getting Help from the Google Directory 119 in Visual Studio .NET Generation QR Code
The Raid on Dieppe, 19 August 1942 in .NET framework Creator Denso QR Bar Code
The Battle of Crecy, 26 August 1346 in .NET framework Encoding QR-Code | http://www.businessrefinery.com/b/45/2/ | CC-MAIN-2013-20 | en | refinedweb |
21 July 2008 09:37 [Source: ICIS news]
SHANGHAI (ICIS news)--China’s polyvinyl chloride (PVC) producers have reduced operating rates on tight supply of feedstock due to transportation restrictions on chemicals put in place before the Beijing Olympics, producers said on Monday. ?xml:namespace>
Xuzhou Tiancheng Chemical in Jiangsu province, for instance, had reduced its PVC operating capacity to 30% due to the tight supply of feedstock calcium carbide, said a company source.
“We don’t have offers because we have no inventory here in our company,” the source added.
Another PVC producer, Huarong in ?xml:namespace>
Hengyang Kingboard Chemical of
Transport restrictions on chemicals are being put in place between July and September in
To discuss issues relating to the chemicals industry visit. | http://www.icis.com/Articles/2008/07/21/9141610/china-pvc-producers-reduce-capacity-on-curbs.html | CC-MAIN-2013-20 | en | refinedweb |
The default control of access to the Oracle Database semantic data store is at the model level: the owner of a model may grant select, delete, and insert privileges on the model to other users by granting appropriate privileges on the view named RDFM_<model_name>. However, for applications with stringent security requirements, you can enforce a fine-grained access control mechanism by using either the Virtual Private Database feature or the Oracle Label Security option of Oracle Database:
Virtual Private Database (VPD) for RDF data allows security administrators to define policies that conditionally restrict a user's access to triples that involve instances of a specific RDF class or property. Using VPD, the data stored in the RDF models is classified using its metadata and each user query is rewritten to include context-dependent data access constraints that enforce access restrictions.
For information about using VPD, see Oracle Database Security Guide. For information about support for VPD with semantic data, see Section 5.1.
Oracle Label Security (OLS) for RDF data allows sensitivity labels to be associated with individual triples stored in an RDF model. For each query, access to specific triples is granted by comparing their labels with the user's session labels. Furthermore, a minimum sensitivity label for all triple describing a specific resource or all triples defined with a specific predicate can be enforced by assigning a sensitivity label directly to the resource or the predicate, respectively.
For information about using OLS, see Oracle Label Security Administrator's Guide. For information about support for OLS with semantic data, see Section 5.2
Some factors to consider in choosing whether use VPD or OLS with RDF data include the following:
OLS, when enabled for RDF data, is enforced at the network level, while VPD can be enforced for individual RDF models.
You cannot use both VPD and OLS for RDF data at the same time in an Oracle instance.
The application programming interface (API) for implementing VPD or OLS with semantic data is provided in the SEM_RDFSA PL/SQL package. Chapter 12 provides reference information about the programs in the SEM_RDFSA package.
VPD and OLS support for RDF data is included in the semantic technologies support for Release 11.2. (For information about enabling, downgrading, or removing semantic technologies support, see Appendix A.)
This chapter contains the following major sections:
Section 5.1, "Virtual Private Database (VPD) for RDF Data"
Section 5.2, "Oracle Label Security (OLS) for RDF Data"
The Virtual Private Database (VPD) feature is a row-level security mechanism that restricts access to specific rows in a relational table or view using a security policy and an application context. The security policy includes a policy function that dynamically generates predicates that are enforced for each row returned for the user query. The security predicates returned by the policy function associated with a table are typically expressed using the columns in the table and are thus dependent on the table metadata. Effectively, the security predicates ensure that the rows returned for a user query satisfy additional conditions that are applied on the contents of the row.
When the relational data is mapped to RDF, the data stored in a specific relational table represent triples describing instances of a specific RDF class. In this representation, the columns in the relational table map to RDF properties that are used to describe a resource. This mapping may be further extended to the application of VPD policies.
A VPD policy applied to RDF data restricts users' access to instances of a specific RDF class or property by applying predicates, in the form of graph patterns and filter conditions, on the instance data. For example, a VPD policy may be defined to restrict access to instances of a
Contract RDF class only to the users belonging to a specific department. Furthermore, access to the
hasContractValue property for a resource identified as instance of the
Contract RDF class may be restricted only to the manager of the contract. VPD support for RDF data allows security conditions or data access constraints to be associated with RDF classes and properties, so that access to corresponding instance data is restricted.
A data access constraint associated with an RDF class or property specifies a graph query pattern that must be enforced for all corresponding data instances that are returned as the query result. For example, a SPARQL query pattern to find the due dates for all instances of a
Contract class,
{?contract :hasDueDate ?due}, may activate a data access constraint that ensures that the information returned pertains to contracts belonging to a specific department. This is achieved by logically rewriting the user's graph query pattern to include additional graph patterns, as shown in the following example:
{ ?contract :hasDueDate ?due . ?contract :drivenBy dept:Dept1 }
Furthermore, the values bound into the rewritten graph query pattern may make use of session context to enforce dynamic access restrictions. In the following example, the
sys_context function in the object position of the triple pattern binds the appropriate department value based on the session context:
{ ?contract :hasDueDate ?due . ?contract :drivenBy "sys_context('sa$appctx','user_dept'}"^^orardf:instruction }
In a relational data model, the metadata, in the form of table definition, always exists along with the data (the rows stored in the table); thus, the VPD policies defined using the metadata are well formed and the security conditions are generated using a procedural logic in the policy function.
However, the RDF data model allows data with no accompanying metadata, and therefore the class information for instance data may not always be available for a given RDF graph. For example, in an RDF graph a resource known to be a contract might not accompany a triple that asserts that the resource is an instance of
Contract class. Usually such triples can be inferred using available domain and range specifications for the properties describing the resource.
Similarly, a VPD policy relies on the properties' domain and range specifications for deriving the class information for the instance data and for enforcing appropriate data access constraints. However, to avoid runtime dependencies on the user data, a VPD policy maintains the minimal metadata required to derive the class information in its dictionary, separate from the asserted and inferred triples. This also ensures that the metadata maintained by a VPD policy is complete even when some necessary information is missing from the asserted triples and that a VPD policy, with its data access constraints and the metadata, is self-contained and portable with no external dependencies.
A VPD policy with specific data access constraints and RDF metadata specifications can be used to enforce access restrictions for the data stored in an RDF model. Each SPARQL query issued on the model is analyzed to deduce the class information for the resources accessed in the query, and appropriate data access constraints are applied. To facilitate the compile-time analysis and derivation of class information for instance data, a graph query pattern with an unbound predicate is restricted when a VPD policy is in effect. For example, a graph pattern of the following form, anywhere in a SPARQL query pattern, raises an exception when any underlying model has a VPD policy:
{ <contract:projectHLS> ?pred ?obj }
VPD policies are only enforced for SEM_MATCH queries expressed in SPARQL syntax. All other forms of data access (such as classic syntax for graph pattern or direct query on the model view) are not permitted.
A VPD policy for RDF data is a named dictionary entity that can be used to enforce access restrictions for the data stored in one or more RDF models. A VPD policy defined for RDF data has unique characteristics, and it cannot be reused to enforce security policies for relational data. An RDF-VPD policy defined in the database includes the following:
The RDF Schema statements or metadata necessary for deriving class information for the data referenced in a SPARQL user query
The data access constraints that enforce access restrictions for the instance data
Application context that allows conditional inclusions of groups of data access constraints based on the runtime environment
An RDF-VPD policy is defined, owned, and maintained by a user with a security administrator role in an organization. This user must have at least EXECUTE privileges on the SYS.DBMS_RLS package. The owner of an RDF-VPD policy can maintain the metadata associated with the policy, define new data access constraints, and apply the policy to one or more RDF models.
A SPARQL query issued on an RDF model with a VPD policy is analyzed, and zero or more data access constraints defined in the policy are enforced such that the data instances that are returned as the query result also satisfy these constraints. The exact data access constraints enforced for a user query vary, based on the resources referenced in the query and the application context. For example, a policy that restricts a manager's access to the
hasContractValue property may be relaxed for a user with the Vice President role.
Based on the role of the user, as captured in the application context, specific constraints to be applied are determined at runtime. To facilitate this dynamic inclusion of subsets of constraints defined in a VPD policy, the data access constraints are arranged into named groups that can be activated and deactivated based on the application context. During query analysis, only the constraints defined in the active groups are considered for enforcement.
The constraint groups within a VPD policy are managed using an application context and its package implementation. Each VPD policy can specify the namespace for a context created with the CREATE CONTEXT command. Each attribute associated with the context is treated as the name of a constraint group that can be activated by initializing its value to 1. For example, setting the value for
MANAGER attribute of the context associated with a VPD policy to 1 will activate the constraints associated with
MANAGER group for the user session. The logic that initializes specific constraint groups based on the user context is typically embedded in the package associated with the context type. The following example shows an excerpt from a sample implementation for one such package:
CREATE CONTEXT contracts_constr_ctx using sec_admin.contracts_ctx_pack; begin -- create the VPD policy with a context -- sem_rdfsa.create_vpd_policy(policy_name => 'CONTRACTS_POLICY', policy_context => 'contracts_constr_ctx'); end; / create or replace package sec_admin.contracts_ctx_pack as procedure init_constr_groups; end; / create or replace package body sec_admin.contracts_ctx_pack as procedure init_contr_groups is hrdata EmpRole%rowtype; begin -- specific users with FULL access to the data associated with -- the policy -- if (sys_context('userenv', 'session_user') = 'RDF_ADMIN') then dbms_session.set_context('contracts_constr_ctx', sem_rdfsa.VPD_FULL_ACCESS, 1); return; end if; SELECT * into hrdata FROM EmpRole WHERE guid = sys_context('userenv','session_user'); if (hrdata.emprole = 'VP') then -- if the user logged in has VP role, activate the constraint -- group named VP and keep all other groups inactive. dbms_session.set_context('contracts_constr_ctx','VP', '1'); elsif (hrdata.emprole = 'MANAGER') then dbms_session.set_context('contracts_constr_ctx', 'MANAGER', '1'); elsif ... ... else raise_application_error(-20010, 'unknown user role'); end if; exception when others then -- enforce constraints for all groups -- dbms_session.clear_all_context('contracts_constr_ctx'); end init_contr_groups; end; /
By default, when a namespace is not associated with an RDF-VPD policy or when a specific constraint group is not activated in a session, all the constraints defined in the policy are active and they are enforced for each user query. However, when a specific constraint group is activated by setting the corresponding namespace-attribute value to 1, only the constraints belonging to the group and any other constraints that are not associated with any group are enforced. For a given session, one or more constraint groups may be activated, in which case all the applicable constraints are enforced conjunctively.
At the time of creation, the data access constraints defined in a RDF-VPD policy may specify the name of a constraint group (explained in Section 5.1.3, "Data Access Constraints"). Within a database session, appropriate groups of constraints are activated based on the session context set by the context package. For all subsequent SPARQL queries in the database session, the constraints belonging to the active groups are consulted for enforcing appropriate security policies.
Maintenance operations on an RDF model with a VPD policy require unconditional access to data in the model. These operations include creation of an entailment using at least one VPD protected model, and load or data manipulation operations. You can grant unconditional access to the data stored in an RDF model by initializing a reserved attribute for the namespace associated with the VPD policy. The reserved attribute is defined by the package constant
sem_rdfsa.VPD_FULL_ACCESS, and the context package implementation shown in the preceding example grants FULL access to the RDF_ADMIN user.
DML operations on the application table are not validated for VPD constraint violations, so only a user with FULL access to the corresponding model can add or modify existing triples.
You can use the SEM_MATCH operator to query an RDF model with a VPD policy in a standard SQL query, and to perform a multi-model query on a combination of VPD-enabled models and models with no VPD policy. However, when more than one model in a multi-model query is VPD-enabled, they must all be associated with the same VPD policy. A VPD policy associated with an RDF model is automatically extended to any data inferred from the model. When multiple RDF models are specified during inference, all VPD-enabled models within the set should use the same VPD policy.
The types of RDF metadata used to enforce VPD policies include the following:
Domain and range information for the properties used in the graph
Subclass relationships in the graph
Subproperty relationships in the graph
Equivalent properties in the graph
The RDF metadata associated with a VPD policy is specified as one or more RDF Schema statements using one of the following property URIs:
For example, the following RDF Schema statement associated with
contracts_policy asserts that the domain of the
hasContractValue property is a
Contract class. Note that range specification for the predicates can be skipped if they are not relevant or if they are of literal type
begin sem_rdfsa.maint_vpd_metadata( policy_name => 'contracts_policy', t_subject => '<>', t_predicate => '<>', t_object => '<>'); end; /
An RDF-VPD policy maintains its metadata separate from the asserted and inferred triples. You can derive this metadata programmatically from the RDF models and the corresponding entailments. For example, if the domain and range information for the properties and subclass and subproperty relationships are already established in the asserted or inferred triples, you can use a SQL query on the underlying model views to populate the metadata for an RDF-VPD policy.
The domain and range information for the properties aid the query analysis in determining the RDF class type for the terms and unbound variables referenced in the query. This information is further used to enforce appropriate data access constraints on the data accessed by the query. The metadata relating to the subclass property is used to ensure that a data access constraint defined for a specific class in a class hierarchy is automatically enforced for all its subclasses. Similarly, the subproperty specification in a VPD policy is used to enforce any constraints associated with a property to all its subproperties.
The RDF Schema statements associated with a VPD policy are not used to infer additional statements, and the security administrator should ensure that the metadata captured in a VPD policy is complete by cross checking it with inferred data. For example, a subproperty schema statement does not automatically infer the domain and range information for the property based on the domain and range specified for the super-property.
Certain owl and rdfs properties in the asserted triples, when left unchecked, may be used to infer data that may be used to circumvent the VPD policies. For example, when the new property is defined as a super-property of a property that has a specific data access constraint, the inferred data may duplicate all instances of the subproperty using the super-property. Unless the VPD policy explicitly defines access constraints for the super-property, the inferred data may be used to circumvent the access restrictions.
The ability to infer new data is only granted to users with FULL access, and such users should ensure that the metadata associated with the VPD policy is complete in light of newly inferred data. Specifically, the metadata associated with the VPD policy should be maintained if some new
rdfs:subClassOf,
rdfs:superClassOf,
rdfs:subPropertyOf,
rdfs:superPropertyOf, or
owl:equivalentProperty assertions are generated during inference. Also, any new properties introduced by the rulebases used for inference may need domain and range specifications, as well as data access constraints, if they are associated with some sensitive information.
In a VPD policy, a property can be declared to be equivalent to another property so that the domain and range information, as well as any constraints defined for the original property, are automatically duplicated for the equivalent property. However, within a VPD policy, additional metadata or data access constraints cannot be directly assigned to the property declared to be an equivalent of another property.
The data access constraints associated with a VPD policy fall into two general categories, based on the types of access restrictions that they enforce:
Those that restrict access to instances of specific RDF classes
Those that restrict to assertions using specific RDF properties
The access restrictions are enforced conditionally, based on the application context and the characteristics of the resources being accessed in a SPARQL query. Data access constraints restrict access to instances of an RDF class or property using some properties associated with the resource. For example, access to a resource that is a member of the
Contract class may be restricted only to the users who work on the contract, identified using the
hasMember property associated with the resource. Similarly, access to the
hasContractValue property for a resource may be restricted to a user identified as the manager of the contract using
hasManager property associated with the same resource.
Each data access constraint is expressed using two graph patterns identified as a match pattern and an apply pattern. The match pattern of a constraint determines the type of access restriction it enforces and binds one or more variables to the corresponding data instances accessed in the user query. For example, the following match pattern is defined for instances of the
Contract class, and it binds a variable to all such instances accessed through a SPARQL query:
{ ?contract rdf:type <> }
Similarly, a match pattern for a constraint involving an RDF property matches the instances of the property accessed in a SPARQL query, and binds two variables to the resources in the subject and object position of such instances. For example, the match pattern for a constraint on the
hasContractValue property is defined as follows:
{ ?contract <> ?cvalue }
The apply pattern of a data access constraint defines additional graph patterns to be applied on the resources that match the match pattern before they can be used to construct the query results. One or more variables defined in the match pattern of a data access constraint are used in the corresponding apply pattern to enforce the access restrictions on the identified resources. For example, the following match pattern and apply pattern combination ensures that the
hasContractValue of a contract can be accessed only if
Andy is the manager of the contract being accessed.:
Match: { ?contract pred:hasContractValue ?cvalue } Apply: { ?contract pred:hasManager emp:Andy }
A data access constraint with its match and apply patterns expressed in SPARQL syntax can be added to a VPD policy to enforce access restrictions on the data stored in RDF models that are associated with the VPD policy. The following example, which adds a constraint to the VPD policy, assumes that the VPD policy is defined with appropriate namespace map for the
pred and
emp namespace prefixes. (To associate a namespace map with a VPD policy, use the SEM_RDFSA.CREATE_VPD_POLICY procedure.)
begin sem_rdfsa.add_vpd_constraint( policy_name => 'contracts_policy', constr_name => 'andy_constraint_1', match_pattern => '{?contract pred:hasContractValue ?cvalue }', apply_pattern => '{?contract pred:hasManager emp:Andy }', constr_group => 'andy'); end; /
The ability to arrange data access constraints into groups could ensure that the previous constraint is applied only for the sessions associated with
Andy. However, to avoid proliferation of structurally similar constraints for each user, you can define a common constraint that uses the application context in the object position of the apply graph patterns, as shown in the following example:
begin sem_rdfsa.add_vpd_constraint( policy_name => 'contracts_policy', constr_name => 'manager_constraint_1', match_pattern => '{?contract pred:hasContractValue ?cvalue }', apply_pattern => '{?contract pred:hasManager "sys_context(''sa$appctx'',''app_user_uri''}"^^orardf:instruction }', constr_group => 'manager'); end; /
In the preceding example. the data access constraint, defined within the
manager constraint group, can be activated for all sessions involving users with a manager role. In this case, the secure application context can be programmed to initialize the attribute
app_user_uri of the
sa$appctx namespace with the URI for the user logged in. For example, when user
Andy logs into the application, the
app_user_uri attribute can be initialized to <>, in which case the constraint will ensure that user
Andy can view the value for a contract only if user
Andy manages the contract. Generally, the
sys_context function can be used in the object position of any graph pattern to allow dynamic URIs or literal values to be bound at the time of query execution. Note that if the context is not initialized properly, the preceding constraint will fail for all data instances and effectively restrict the user from accessing any data.
A SPARQL query issued on an RDF model with a VPD policy is analyzed using the match patterns of all the active data access constraints that are defined in the policy. In the next example, the SPARQL query refers to the
hasContractValue property, thereby enforcing the constraint if the group is active. Logically, the enforcement of a constraint is equivalent to rewriting the original SPARQL graph pattern to include the apply patterns for all the relevant constraints, using appropriate variables and terms from the user query. With the previous access restriction on the
hasContractValue property, the following SPARQL graph pattern passed to a SEM_MATCH operator is logically rewritten as shown in the following example:
Query: { ?contr pred:drivenBy ?dept . ?contr pred:hasContractValue ?val } Rewritten query: { ?contr pred:drivenBy ?dept . ?contr pred:hasContractValue ?val . ?contr pred:hasManager "sys_context('sa$appctx','app_user_uri'}"^^orardf:instruction }
When the match pattern of a data access constraint on an RDF property matches the pattern being accessed in a user query, the equivalent VPD-enforced query appends the corresponding apply patterns to the SPARQL query using the variables and terms appearing in the matched pattern. When a SPARQL query has nested graph patterns, the data access constraints are applied to appropriate basic query graph pattern block. In the following example, the
hasContractValue property is referenced in the
OPTIONAL graph pattern, and therefore the corresponding apply pattern is enforced just for this block of the graph pattern.
Query: { ?contr pred:drivenBy ?dept . OPTIONAL { ?contr pred:hasContractValue ?val } } Rewritten query: { ?contr pred:drivenBy ?dept . OPTIONAL { ?contr pred:hasContractValue ?val . ?contr pred:hasManager "sys_context('sa$appctx','app_user_uri'}"^^orardf:instruction }
The apply pattern for a data access constraint can be any valid basic graph pattern with multiple triple patterns and a FILTER clause. For example, the access constraint on the
hasContractValue property for a user with
VP role may stipulate that the user can access the property only if he or she is the Vice President of the department driving the contract. The match and apply patterns for such constraint can be defined as follows:
Match: { ?contract pred:hasContractValue ?cvalue } Apply: { ?contract pred:drivenBy ?dept . ?dept pred:hasVP "sys_context('sa$appctx','app_user_uri'}"^^orardf:instruction }
A match pattern defined for a data access constraint associated with an RDF class identifies all variables and terms that are known to be instances of the class. The RDF metadata defined in the VPD policy is used to determine the type for each variable and the term in a SPARQL query, and the appropriate access constraints are applied on these variables and terms. For example, the following VPD constraint ensures that a resource that is a member of the
Contract class can only be accessed by a user who has a
hasMember relationship with the resource:
Match: { ?contract rdf:type <> } Apply: { ?contract pred:hasMember "sys_context('sa$appctx','app_user_uri'}"^^orardf:instruction }
The class information for a variable or term appearing in a SPARQL query is derived using the domain and range information for the properties appearing in the query. In the SPARQL query in the next example, if the VPD policy has an RDF Schema statement that asserts that the domain of the
drivenBy property is the
Contract class, the variable
?contr is known to hold instances of the
Contract class. Therefore, with the previously defined access restriction for the
Contract class, the user query is rewritten to include an appropriate apply pattern, as shown in the following example:
Query: { ?contr pred:drivenBy ?dept . ?contr pred:hasDueDate ?due } Rewritten query: { ?contr pred:drivenBy ?dept . ?contr pred:hasDueDate ?due . ?contr pred:hasMember "sys_context('sa$appctx','app_user_uri'}"^^orardf:instruction }
When a basic graph pattern in a SPARQL query matches multiple data access constraints, the corresponding apply patterns are combined to form a conjunctive graph pattern, which is subsequently enforced for the specific graph pattern by logically rewriting the SPARQL query. While considering the data access constraints to be enforced for a given SPARQL query, the class and property hierarchy associated with the VPD policy is consulted to automatically enforce all applicable constraints.
A variable or term identified as an instance of a specific RDF class enforces constraints associated with the class and all its superclasses.
A constraint associated with a property is enforced when the user query references the property or any property defined as its subproperty or an equivalent property.
You can use the
sys_context function in a data access constraint to enforce context-dependent access restrictions with structurally similar graph patterns. You can dynamically activate and deactivate constraint groups, based on the application context, to enforce alternate access restrictions using structurally different graph patterns.
The MDSYS.RDFVPD_POLICIES view contains information about all VPD policies defined in the schema or the policies to which the user has FULL access. If the same policy is associated with multiple models, this view has one entry for each such association. This view exists only after the semantic network and a VPD policy have been created.
The MDSYS.RDFVPD_POLICIES view contains the columns shown in Table 5-1.
The MDSYS.RDFVPD_MODELS view contains information about RDF models and their associated VPD policies. This view exists only after the semantic network and a VPD policy have been created.
The MDSYS.RDFVPD_MODELS view contains the columns shown in Table 5-2.
The MDSYS.RDFVPD_POLICY_CONSTRAINTS view contains information about the constraints defined in the VPD policy that are accessible to the current user. This view exists only after the semantic network and a VPD policy have been created.
The MDSYS.RDFVPD_POLICY_CONSTRAINTS view contains the columns shown in Table 5-3.
The MDSYS.RDFVPD_PREDICATE_MDATA view contains information about the predicate metadata associated with a VPD policy. This view exists only after the semantic network and a VPD policy have been created.
The MDSYS.RDFVPD_PREDICATE_MDATA view contains the columns shown in Table 5-4.
The MDSYS.RDFVPD_RESOURCE_REL view contains information about the subclass, subproperty, and equivalence property relationships that are defined between resources in a VPD policy. This view exists only after the semantic network and a VPD policy have been created.
The MDSYS.RDFVPD_RESOURCE_REL view contains the columns shown in Table 5-5.
Oracle Label Security (OLS) enables you to assign one or more security labels that define a security level for table rows. Conceptually, a table in a relational data model can be mapped to an equivalent RDF graph. Specifically, a row in a relational table can be mapped to a set of triples, each asserting some facts about a specific Subject. In this scenario, the subject represents the primary key for the row and each non-key column-value combination from the row is mapped to a predicate-object value combination for the corresponding triples.
A row in a relational data model is identified by its key, and OLS, as a row-level access control mechanism, effectively restricts access to the values associated with the key. With this conceptual mapping between relational and RDF data models, restricting access to a row in a relational table is equivalent to restricting access to a subgraph involving a specific subject. In a model that supports sensitivity labels for each triple, this is enforced by applying the same label to all the triples involving the given subject. However, you can also achieve greater flexibility by allowing the individual triples to have different labels, while maintaining a minimum bound for all the labels.
OLS support for RDF data employs a multilevel approach in which sensitivity labels associated with the triple components (subject, predicate, object) collectively form a minimum bound for the sensitivity label for the triple. With this approach, a data sensitivity label associated with an RDF resource (used as subject, predicate, or object) restricts unauthorized users from accessing any triples involving the resource and from creating new triples with the resource. For example,
projectHLS as a subject may have a minimum sensitivity label, which ensures that all triples describing this subject have a sensitivity label that at least covers the label for
projectHLS. Additionally,
hasContractValue as a predicate may have a higher sensitivity label; and when this predicate is used with
projectHLS to form a triple, that triple minimally has a label that covers both the subject and the predicate labels, as in the following example:
Triple 1: <> :ownedBy <> Triple 2: <> :hasContractValue "100000"^^xsd:integer
Sensitivity labels are associated with the RDF resources (URIs) based on the position in which they appear in a triple. For example, the same RDF resource may appear in different positions (subject, predicate, or object) in different triples. Three unique labels can be assigned to each resource, so that the appropriate label is used to determine the label for a triple based on the position of the resource in the triple. You can choose the specific resource positions to be secured in a database instance when you apply an OLS policy to the RDF repository. You can secure subjects, objects, predicates, or any combination, as explained in separate sections to follow. The following example applies an OLS policy named
defense to the RDF repository and allows sensitivity labels to be associated with RDF subjects and predicates.
begin sem_rdfsa.apply_ols_policy( policy_name => 'defense', rdfsa_options => sem_rdfsa.SECURE_SUBJECT+ sem_rdfsa.SECURE_PREDICATE); end; /
The same RDF resource can appear in both the subject and object positions (and sometime even as the predicate), and such a resource can have distinct sensitivity labels based on its position. A triple using the resource at a specific position should have a label that covers the label corresponding to the resource's position. In such cases, the triple can be asserted or accessed only by the users with labels that cover the triple and the resource labels.
In a specific RDF repository, security based on data classification techniques can be turned on for subjects, predicates, objects, or a combination of these. This ensures that all the triples added to the repository automatically conform to the label relationships described above.
An RDF resource (typically a URI) appears in the subject position of a triple when an assertion is made about the resource. In this case, a sensitivity label associated with the resource has following characteristics:
The label represents the minimum sensitivity label for any triple using the resource as a subject. In other words, the sensitivity label for the triple should dominate or cover the label for the subject.
The label for a newly added triple is initialized to the user initial row label or is generated using the label function, if one is specified. Such operations are successful only if the triple's label dominates the label associated with the triple's subject.
Only a user with an access label that dominates the subject's label and the triple's label can read the triple.
By default, the sensitivity label for a subject is derived from the user environment when an RDF resource is used in the subject position of a triple for the first time. The default sensitivity label in this case is set to the user's initial row label (the default that is assigned to all rows inserted by the user).
However, you can categorize an RDF resource as a subject and assign a sensitivity label to it even before it is used in a triple. The following example assigns a sensitivity label named
SECRET:HLS:US to the
projectHLS resource, thereby restricting the users who are able to define new triples about this resource and who are able to access existing triples with this resource as the subject:
begin sem_rdfsa.set_resource_label( model_name => 'contracts', resource_uri => '<>', label_string => 'SECRET:HLS:US', resource_pos => 'S'); end;
An RDF predicate defines the relationship between a subject and an object. You can use sensitivity labels associated with RDF predicates to restrict access to specific types of relationships with all subjects.
RDF predicates are analogous to columns in a relational table, and the ability to restrict access to specific predicates is equivalent to securing relational data at the column level. As in the case of securing the subject, the predicate's sensitivity label creates a minimum bound for any triples using this predicate. It is also the minimum authorization that a user must posses to define a triple with the predicate or to access a triple with the predicate.
The following example assigns the label
HSECRET:FIN (in this scenario, a label that is Highly Secret and that also belongs to the Finance department) to triples with the
hasContractValue predicate, to ensure that only a user with such clearance can define the triple or access it:
begin sem_rdfsa.set_predicate_label( model_name => 'contracts', predicate => '<>', label_string => 'HSECRET:FIN'); end; /
You can secure predicates in combination with subjects. In such cases, the triples using a subject and a predicate are ensured to have a sensitivity label that at least covers the labels for both the subject and the predicate. Extending the preceding example, if
projectHLS as a subject is secured with label
SECRET:HLS:US and if
hasContractValue as a predicate is secured with label
HSECRET:FIN:, a triple assigning a monetary value for
projectHLS should at least have a label
HSECRET:HLS,FIN:US. Effectively, a user's label must dominate this triple's label to be able to define or access the triple.
An RDF triple can have an URI or a literal in its object position. The URI in object position of a triple represents some resource. You can secure a resource in the object position by associating a sensitivity label to it, to restrict the ability to use the resource as an object in triples.
Typically, a resource (URI or non-literal) appearing in the object position of a triple may itself be described using additional RDF statements. Effectively, an RDF resource in the object position could appear in the subject position in some other triples. When the RDF resources are secured at the object position without explicit sensitivity labels, the label associated with the same resource in the subject position is used as the default label for the object.
RDF data model allows for specification of declarative rules, enabling it to infer the presence of RDF statements that are not explicitly added to the repository. The following shows some simple declarative rules associated with the logic that projects can be owned by departments and departments have Vice Presidents, and in such cases the project leader is by default the Vice President of the department that owns the project.
RuleID -> projectLedBy Antecedent Expression -> (?proj :ownedBy ?dept) (?dept :hasVP ?person) Consequent Expression -> (?proj :isLedBy ?person)
An RDF rule uses some explicitly asserted triples as well as previously inferred triples as antecedents, and infers one or more consequent triples. Traditionally, the inference process is executed as an offline operation to pregenerate all the inferred triples and to make them available for subsequent query operations.
When the underlying RDF graph is secured using OLS, any additional data inferred from the graph should also be secured to avoid exposing the data to unauthorized users. Additionally, the inference process should run with higher privileges, specifically with full access to data, in order to ensure completeness.
OLS support for RDF data offers techniques to generate sensitivity labels for inferred triples based on labels associated with one or more RDF artifacts. It provides label generation techniques that you can invoke at the time of inference. Additionally, it provides an extensibility framework, which allows an extensible implementation to receive a set of possible labels for a specific triple and determine the most appropriate sensitivity label for the triple based on some application-specific logic. The techniques that you can use for generating the labels for inferred triples include the following (each technique, except for Use Antecedent Labels, is associated with a SEM_RDFSA package constant):
Use Rule Label (
SEM_RDFSA.LABELGEN_RULE): An inferred triple is directly generated by a specific rule, and it may be indirectly dependent on other rules through its antecedents. Each rule may have a sensitivity label, which is used as the sensitivity label for all the triples directly inferred by the rule.
Use Subject Label (
SEM_RDFSA.LABELGEN_SUBJECT): Derives the label for the inferred triple by considering any sensitivity labels associated with the subject in the new triple. Each inferred triple has a subject, which could in turn be a subject, predicate, or object in any of the triple's antecedents. When such RDF resources are secured, the subject in the newly inferred triple may have one or more labels associated with it. With the Use Subject Label technique, the label for the inferred triple is set to the unique label associated with the RDF resource. When more than one label exists for the resource, you can implement the extensible logic to determine the most relevant label for the new triple.
Use Predicate Label (
SEM_RDFSA.LABELGEN_PREDICATE): Derives the label for the inferred triple by considering any sensitivity labels associated with the predicate in the new triple. Each inferred triple has a predicate, which could in turn be a subject, predicate, or object in any of the triple's antecedents. When such RDF resources are secured, the predicate in the newly inferred triple may have one or more labels associated with it. With the Use Predicate Label technique, the label for the inferred triple is set to the unique label associated with the RDF resource. When more than one label exists for the resource, you can implement the extensible logic to determine the most relevant label for the new triple.
Use Object Label (
SEM_RDFSA.LABELGEN_OBJECT): Derives the label for the inferred triple by considering any sensitivity labels associated with the object in the new triple. Each inferred triple has an object, which could in turn be a subject, predicate, or object in any of the triple's antecedents. When such RDF resources are secured, the object in the newly inferred triple may have one or more labels associated with it. With the Use Object Label technique, the label for the inferred triple is set to the unique label associated with the RDF resource. When more than one label exists for the resource, you can implement the extensible logic to determine the most relevant label for the new triple.
Use Dominating Label (
SEM_RDFSA.LABELGEN_DOMINATING): Each inferred triple minimally has four direct components: subject, predicate, object, and the rule that produced the triple. With the Use Dominating Label technique, at the time of inference the label generator computes the most dominating of the sensitivity labels associated with each of the component and assigns it as the sensitivity label for the inferred triple. Exception labels are assigned when a clear dominating relationship cannot be established between various labels.
Use Antecedent Labels: In addition to the four direct components for each inferred triple (subject, predicate, object, and the rule that produced the triple), a triple may have one or more antecedent triples, which are instrumental in deducing the new triple. With the Use Antecedent Labels technique, the labels for all the antecedent triples are considered, and conflict resolution criteria are implemented to determine the most appropriate label for the new triple. Since an inferred triple may be dependent on other inferred triples, a strict order is followed while generating the labels for all the inferred triples.
The Use Antecedent Labels technique requires that you use a custom label generator. For information about creating and using a custom label generator, see Section 5.2.5.
The following example creates an entailment (rules index) for the contracts data using a specific rule base. This operation can only be performed by a user with FULL access privilege with the OLS policy applied to the RDF repository. In this case, the labels generated for the inferred triples are based on the labels associated with their predicates, as indicated by the use of the
SEM_RDFSA.LABELGEN_PREDICATE package constant in the
label_gen parameter.
begin sem_rdfsa.create_entailment( index_name_in => 'contracts_inf', models_in => SDO_RDF_Models('contracts'), rulebases_in => SDO_RDF_Rulebases('contracts_rb'), options => 'USER_RULES=T', label_gen => sem_rdfsa.LABELGEN_PREDICATE); end;
When the predefined or extensible label generation implementation cannot compute a unique label to be applied to an inferred triple, an exception label is set for the triple. Such triples are not accessible by any user other than the user with full access to RDF data (also the user initiating the inference process). The triples with exception labels are clearly marked, so that a privileged user can access them and apply meaningful labels manually. After the sensitivity labels are applied to inferred triples, only users with compatible labels can access these triples. The following example updates the sensitivity label for triples for which an exception label was set:
update mdsys.rdfi_contracts_inf set ctxt1 = char_to_label('defense', 'SECRET:HLS:US') where ctxt1 = -1;
Inferred triples accessed through generated labels might not be same as conceptual triples inferred directly from the user accessible triples and rules. The labels generated using system-defined or custom implementations cannot be guaranteed to be precise. See the information about Fine-Grained Access Control (VPD and OLS) Considerations in the Usage Notes for the SEM_APIS.CREATE_ENTAILMENT procedure in Chapter 9 for details.
The MDSYS.RDFSA_LABELGEN type is used to apply appropriate label generator logic at the time of index creation; however, you can also extend this type to implement a custom label generator and generate labels based on application logic. The label generator is specified using the
label_gen parameter with the SEM_APIS.CREATE_ENTAILMENT procedure. To use a system-defined label generator, specify a SEM_RDFSA package constant, as explained in Section 5.2.4; to use a custom label generator, you must implement a custom label generator type and specify an instance of that type instead of a SEM_RDFSA package constant.
To create a custom label generator type, you must have the UNDER privilege on the RDFSA_LABELGEN type. In addition, to create an index for RDF data , you must should have the EXECUTE privilege on this type. The following example grants these privileges to a user named RDF_ADMIN:
GRANT under, execute ON mdsys.rdfsa_labelgen TO rdf_admin;
The custom label generator type must implement a constructor, which should set the dependent resources and specify the getNumericLabel method to return the label computed from the information passed in, as shown in the following example:
CREATE OR REPLACE TYPE CustomSPORALabel UNDER mdsys.rdfsa_labelgen ( constructor function CustomSPORALabel return self as result, overriding member function getNumericLabel ( subject rdfsa_resource, predicate rdfsa_resource, object rdfsa_resource, rule rdfsa_resource, anteced rdfsa_resource) return number);
The label generator constructor uses a set of constants defined in the SEM_RDFSA package to indicate the list of resources on which the label generator relies. The dependent resources are identified as an inferred triple's subject, its predicate, its object, the rule that produced the triple, and its antecedents. A custom label generator can rely on any subset of these resources for generating the labels, and you can specify this in its constructor by using the constants defined in SEM_RDFSA package : USE_SUBJECT_LABEL, USE_PREDICATE_LABEL, USE_OBJECT_LABEL, USE_RULE_LABEL, USE_ANTCED_LABEL. The following example creates the type body and specifies the constructor:
Example 5-1 creates the type body, specifying the constructor function and the getNumericLabel member function. (Application-specific logic is not included in this example.)
Example 5-1 Creating a Custom Label Generator Type
CREATE OR REPLACE TYPE BODY CustomSPORALabel AS constructor function CustomSPORALabel return self as result as begin self.setDepResources(sem_rdfsa.USE_SUBJECT_LABEL+ sem_rdfsa.USE_PREDICATE_LABEL+ sem_rdfsa.USE_OBJECT_LABEL+ sem_rdfsa.USE_RULE_LABEL+ sem_rdfsa.USE_ANTECED_LABELS); return; end CustomSPORALabel; overriding member function getNumericLabel ( subject rdfsa_resource, predicate rdfsa_resource, object rdfsa_resource, rule rdfsa_resource, anteced rdfsa_resource) return number as labellst mdsys.int_array := mdsys.int_array(); begin -- Find dominating label of S P O R A – –- Application specific logic for computing the triple label – -- Copy over all labels to labellst -- for li in 1 .. subject.getLabelCount() loop labellst.extend; labellst(labellst.COUNT) = subject.getLabel(li); end loop; --- Copy over other labels as well --- --- Find a dominating of all the labels. Generates –1 if no --- dominating label within the set return self.findDominatingOf(labellst); end getNumericLabel; end CustomSPORALabel; /
In Example 5-1, the sample label generator implementation uses all the resources contributing to the inferred triple for generating a sensitivity label for the triple. Thus, the constructor uses the
setDepResources method defined in the superclass to set all its dependent components. The list of dependent resources set with this step determines the exact list of values passed to the label generating routine.
The
getNumericLabel method is the label generation routine that has one argument for each resource that an inferred triple may depend on. Some arguments may be null values if the corresponding dependent resource is not set in the constructor implementation.
The label generator implementation can make use of a general-purpose static routine defined in the RDFSA_LABELGEN type to find a domination label for a given set of labels. A set of labels is passed in an instance of MDSYS.INT_ARRAY type, and the method finds a dominating label among them. If no such label exists, an exception label –1 is returned.
After you have implemented the custom label generator type, you can use the custom label generator for inferred data by assigning an instance of this type to the
label_gen parameter in the SEM_APIS.CREATE_ENTAILMENT procedure, as shown in the following example:
begin sem_apis.create_entailment( index_name_in => 'contracts_rdfsinf', models_in => SDO_RDF_Models('contracts'), rulebases_in => SDO_RDF_Rulebases('RDFS'), options => '', label_gen => CustomSPORALabel()); end; /
The MDSYS.RDFOLS_SECURE_RESOURCE view contains information about resources secured with Oracle Label Security (OLS) policies and the sensitivity labels associated with these resources.
Select privileges on this view can be granted to appropriate users. To view the resources associated with a specific model, you must also have select privileges on the model (or the corresponding RDFM_model-name view).
The MDSYS.RDFOLS_SECURE_RESOURCE view contains the columns shown in Table 5-6. | http://docs.oracle.com/cd/E18283_01/appdev.112/e11828/fine_grained_acc.htm | CC-MAIN-2013-20 | en | refinedweb |
WSCInstallNameSpace function
The WSCInstallNameSpace.
Return value
If no error occurs, the WSCInstallNameSpace function returns NO_ERROR (zero). Otherwise, it returns SOCKET_ERROR if the function fails, and you must retrieve the appropriate error code using the WSAGetLastError function.
Remarks
The namespace–configuration functions do not affect applications that are already running. Newly installed namespace providers will not be visible to applications nor will the changes in a namespace provider's activation state. Applications launched after the call to WSCInstallNameSpace will see the changes.
The WSCInstallNameSpace function can only be called by a user logged on as a member of the Administrators group. If WSCInstallNameSpace
- WSCDeinstallProvider
- WSCEnumProtocols
- WSCInstallNameSpace32
- WSCInstallNameSpaceEx
- WSCUnInstallNameSpace
Send comments about this topic to Microsoft
Build date: 11/29/2012 | http://msdn.microsoft.com/en-us/library/windows/desktop/ms742247(v=vs.85).aspx | CC-MAIN-2013-20 | en | refinedweb |
Horizon Investment Analyst 6.18
Sponsored Links
Horizon Investment Analyst 6.18 Ranking & Summary
RankingClick at the star to rank
Ranking Level
User Review: 0 (0 times)
File size: 3.25 MB
Platform: 95/98/ME/NT4/2000/XP
License: shareware
Price: $15 to buy
Downloads: 2704
Date added: 2002-07-17
Publisher:
Horizon Investment Analyst 6.18 description
Horizon Investment Analyst 6.18 -, CDs, stocks, bonds, etc. Print Preview, comprehensive Printer/page & Grid setup options, Security price and history downloader, and many other new enhancements.
Horizon Investment Analyst 6.18 Screenshot
Horizon Investment Analyst 6.18 Keywords
Horizon Investment Analyst Horizon Investment Analyst 6.18 Win9x Investment Portfolio Manager Investment Analyst
Bookmark Horizon Investment Analyst 6.18
Horizon Investment Analyst 6.18 Copyright
WareSeeker.com do not provide cracks, serial numbers etc for Horizon Investment Analyst 6.18. Any sharing links from rapidshare.com, yousendit.com or megaupload.com are also prohibited.
Featured Software
Want to place your software product here?
Please contact us for consideration.
Contact WareSeeker.com
Version History
Related Software
Investment Portfolio Manager (MS-DOS) Free Download
Easily analyze the performance of any investment to maximize returns. Free Download
Offshore Investment Fund - Download this small free application. Free Download
Art Investment Calculator - Investment Hunters Free Download
Tax Lien Certificate Calculator - Investment Hunters Free Download
This shareware is an easy tool that calculates the return on investment (ROI), also called the internal rate of return (IRR), for any scenario. It is essential for business and finance. Free Download
Drilling Rig Investment Calculator - Investment Hunters Free Download
Oil & Gas Investment Calculator Free Download
Latest Software
Popular Software
Favourite Software | http://wareseeker.com/Business-Finance/horizon-investment-analyst-6.18.zip/426316 | CC-MAIN-2013-20 | en | refinedweb |
When the following program is compiled:
the following errors are being shown: cannot find symbol variable panel 1 line 24 and 25the following errors are being shown: cannot find symbol variable panel 1 line 24 and 25Code :
import javax.swing.*; import java.awt.*; import java.awt.event.*; public class del1 extends JFrame{ public del1() { JPanel panel1=new JPanel(); JButton button=new JButton("HI"); button.addActionListener(new ButtonHandler()); panel1.add(button); } private class ButtonHandler implements ActionListener { public void actionPerformed(ActionEvent event) { panel1.remove(0); // line24 panel1.add(new JButton("BYE")); // line 25 } }
Can't the variables of the main class be accessed from the inner class?
And when I change the inner class into an anonymous class, the error message displayed during compilation is: declare panel1 as final
When I do it and run the program ( by creating its object ) nothing is being displayed | http://www.javaprogrammingforums.com/%20awt-java-swing/14626-jbutton-printingthethread.html | CC-MAIN-2013-20 | en | refinedweb |
The first to thing to have: a model that contains a
has_many relation with another model.
class Project < ApplicationRecord has_many :todos end class Todo < ApplicationRecord belongs_to :project end
In
ProjectsController:
class ProjectsController < ApplicationController def new @project = Project.new end end
In a nested form, you can create child objects with a parent object at the same time.
<%= nested_form_for @project do |f| %> <%= f.label :name %> <%= f.text_field :name %> <% # Now comes the part for `Todo` object %> <%= f.fields_for :todo do |todo_field| %> <%= todo_field.label :name %> <%= todo_field.text_field :name %> <% end %> <% end %>
As we initialized
@project with
Project.new to have something for creating a new
Project object, same way for creating a
Todo object, we have to have something like this, and there are multiple ways to do so:
In
Projectscontroller, in
new method, you can write:
@todo = @project.todos.build or
@todo = @project.todos.new to instantiate a new
Todo object.
You can also do this in view:
<%= f.fields_for :todos, @project.todos.build %>
For strong params, you can include them in the following way:
def project_params params.require(:project).permit(:name, todo_attributes: [:name]) end
Since, the
Todo objects will be created through the creation of a
Project object, so you have to specify this thing in
Project model by adding the following line:
accepts_nested_attributes_for :todos | https://riptutorial.com/ruby-on-rails/example/26359/nested-form-in-ruby-on-rails | CC-MAIN-2019-39 | en | refinedweb |
Hi, I'm creating a game with Unity and I had a problem I can't solve.
I have to organize all my gameobject in the scene in a multidimensional array GameObject[,].
I just want to set up a public variable of one of my scripts with all the data, that are basically links to the gameobject in the scene.
I can already do that while in the game, but what I really want to do, is to "bake" the data in the editor, so the game don't have to do it at the start. So, the question is : How can i set up a public GameObject[,] in the editor? Assuming I use a script like this:
foreach (Transform childTransform in parentTransform) {
publicGameObject[x, y] = childTransform.gameObject
}
(Fake script wrote in 3 seconds) What do I have to put instead of : publicGameObject[x, y]
Answer by Bunny83
·
Apr 19, 2017 at 09:56 PM
The question is a bit confusing. If you're asking about making Unity serialize a multidimensional array so it's saved and show up in the inspector the answer is: you can't. Unity can't serialize multidimensional or jagged arrays. There are workarounds though by using an intermediate class:
[System.Serializable]
public class GameObjectArray
{
public GameObject[] inner;
}
public class SomeComponent : MonoBehaviour
{
public GameObjectArray[] outer;
}
This construct can be serialized by Unity. It's like a jagged array but it uses a slightly different way of accessing the elements.
GameObject go = outer[x].inner[y];
If you're not asking how to make that array to show up in the inspector to edit it inside the editor during edit-time, you should be more clear with your description. Your pseudo-code doesn't seem to make much's the C# equivalent of @script ExecuteInEditMode() ?
2
Answers
How to identify an asset?
2
Answers
OnSceneGUI called without any object selected
4
Answers
C# Editor script in Unity
1
Answer
Is it possible to ensure that certain game objects don't get saved in the scene or otherwise hook into the default save scene to run custom editor code?
2
Answers | https://answers.unity.com/questions/1341978/public-multidimensional-array-of-gameobjects.html | CC-MAIN-2019-39 | en | refinedweb |
Use custom activities in an Azure Data Factory pipeline, or to transform/process data in a way that isn't supported by Data Factory, you can create a Custom activity with your own data movement or transformation logic and use the activity in a pipeline. The custom activity runs your customized code logic on an Azure Batch pool of virtual machines..
See following articles if you are new to Azure Batch service:
- Azure Batch basics for an overview of the Azure Batch service.
- New-AzBatchAccount cmdlet to create an Azure Batch account (or) Azure portal to create the Azure Batch account using Azure portal. See Using PowerShell to manage Azure Batch Account article for detailed instructions on using the cmdlet.
- New-AzBatchPool cmdlet to create an Azure Batch pool.
Azure Batch linked service
The following JSON defines a sample Azure Batch linked service. For details, see Compute environments supported by Azure Data Factory
{ "name": "AzureBatchLinkedService", "properties": { "type": "AzureBatch", "typeProperties": { "accountName": "batchaccount", "accessKey": { "type": "SecureString", "value": "access key" }, "batchUri": "", "poolName": "poolname", "linkedServiceName": { "referenceName": "StorageLinkedService", "type": "LinkedServiceReference" } } } }
To learn more about Azure Batch linked service, see Compute linked services article.
Custom activity
The following JSON snippet defines a pipeline with a simple Custom Activity. The activity definition has a reference to the Azure Batch linked service.
{ "name": "MyCustomActivityPipeline", "properties": { "description": "Custom activity sample", "activities": [{ "type": "Custom", "name": "MyCustomActivity", "linkedServiceName": { "referenceName": "AzureBatchLinkedService", "type": "LinkedServiceReference" }, "typeProperties": { "command": "helloworld.exe", "folderPath": "customactv2/helloworld", "resourceLinkedService": { "referenceName": "StorageLinkedService", "type": "LinkedServiceReference" } } }] } }
In this sample, the helloworld.exe is a custom application stored in the customactv2/helloworld folder of the Azure Storage account used in the resourceLinkedService. The Custom activity submits this custom application to be executed on Azure Batch. You can replace the command to any preferred application that can be executed on the target Operation System of the Azure Batch Pool nodes.
The following table describes names and descriptions of properties that are specific to this activity.
* The properties
resourceLinkedService and
folderPath must either both be specified or both be omitted.
Note. You can find an example here that references AKV enabled linked service, retrieves the credentials from Key Vault, and then accesses the storage in the code.
Custom activity permissions
The custom activity sets the Azure Batch auto-user account to Non-admin access with task scope (the default auto-user specification). You can't change the permission level of the auto-user account. For more info, see Run tasks under user accounts in Batch | Auto-user accounts.
Executing commands
You can directly execute a command using Custom Activity. The following example runs the "echo hello world" command on the target Azure Batch Pool nodes and prints the output to stdout.
{ "name": "MyCustomActivity", "properties": { "description": "Custom activity sample", "activities": [{ "type": "Custom", "name": "MyCustomActivity", "linkedServiceName": { "referenceName": "AzureBatchLinkedService", "type": "LinkedServiceReference" }, "typeProperties": { "command": "cmd /c echo hello world" } }] } }
Passing objects and properties
This sample shows how you can use the referenceObjects and extendedProperties to pass Data Factory objects and user-defined properties to your custom application.
{ "name": "MyCustomActivityPipeline", "properties": { "description": "Custom activity sample", "activities": [{ "type": "Custom", "name": "MyCustomActivity", "linkedServiceName": { "referenceName": "AzureBatchLinkedService", "type": "LinkedServiceReference" }, "typeProperties": { "command": "SampleApp.exe", "folderPath": "customactv2/SampleApp", "resourceLinkedService": { "referenceName": "StorageLinkedService", "type": "LinkedServiceReference" }, "referenceObjects": { "linkedServices": [{ "referenceName": "AzureBatchLinkedService", "type": "LinkedServiceReference" }] }, "extendedProperties": { "connectionString": { "type": "SecureString", "value": "aSampleSecureString" }, "PropertyBagPropertyName1": "PropertyBagValue1", "propertyBagPropertyName2": "PropertyBagValue2", "dateTime1": "2015-04-12T12:13:14Z" } } }] } }
When the activity is executed, referenceObjects and extendedProperties are stored in following files that are deployed to the same execution folder of the SampleApp.exe:
activity.json
Stores extendedProperties and properties of the custom activity.
linkedServices.json
Stores an array of Linked Services defined in the referenceObjects property.
datasets.json
Stores an array of Datasets defined in the referenceObjects property.
Following sample code demonstrate how the SampleApp.exe can access the required information from JSON files:
using Newtonsoft.Json; using System; using System.IO; namespace SampleApp { class Program { static void Main(string[] args) { //From Extend Properties dynamic activity = JsonConvert.DeserializeObject(File.ReadAllText("activity.json")); Console.WriteLine(activity.typeProperties.extendedProperties.connectionString.value); // From LinkedServices dynamic linkedServices = JsonConvert.DeserializeObject(File.ReadAllText("linkedServices.json")); Console.WriteLine(linkedServices[0].properties.typeProperties.accountName); } } }
Retrieve execution outputs
You can start a pipeline run using the following PowerShell command:
$runId = Invoke-AzDataFactoryV2Pipeline -DataFactoryName $dataFactoryName -ResourceGroupName $resourceGroupName -PipelineName $pipelineName
When the pipeline is running, you can check the execution output using the following commands:
while ($True) { $result = Get-AzDataFactoryV2ActivityRun -DataFactoryName $dataFactoryName -ResourceGroupName $resourceGroupName -PipelineRunId $runId -RunStartedAfter (Get-Date).AddMinutes(-30) -RunStartedBefore (Get-Date).AddMinutes(30) if(!$result) { Write-Host "Waiting for pipeline to start..." -foregroundcolor "Yellow" } elseif (($result | Where-Object { $_.Status -eq "InProgress" } | Measure-Object).count -ne 0) { Write-Host "Pipeline run status: In Progress" -foregroundcolor "Yellow" } else { Write-Host "Pipeline '"$pipelineName"' run finished. Result:" -foregroundcolor "Yellow" $result break } ($result | Format-List | Out-String) Start-Sleep -Seconds 15 } Write-Host "Activity `Output` section:" -foregroundcolor "Yellow" $result.Output -join "`r`n" Write-Host "Activity `Error` section:" -foregroundcolor "Yellow" $result.Error -join "`r`n"
The stdout and stderr of your custom application are saved to the adfjobs container in the Azure Storage Linked Service you defined when creating Azure Batch Linked Service with a GUID of the task. You can get the detailed path from Activity Run output as shown in the following snippet:
Pipeline ' MyCustomActivity' run finished. Result: ResourceGroupName : resourcegroupname DataFactoryName : datafactoryname ActivityName : MyCustomActivity PipelineRunId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx PipelineName : MyCustomActivity Input : {command} Output : {exitcode, outputs, effectiveIntegrationRuntime} LinkedServiceName : ActivityRunStart : 10/5/2017 3:33:06 PM ActivityRunEnd : 10/5/2017 3:33:28 PM DurationInMs : 21203 Status : Succeeded Error : {errorCode, message, failureType, target} Activity Output section: "exitcode": 0 "outputs": [ "https://<container>.blob.core.windows.net/adfjobs/<GUID>/output/stdout.txt", "https://<container>.blob.core.windows.net/adfjobs/<GUID>/output/stderr.txt" ] "effectiveIntegrationRuntime": "DefaultIntegrationRuntime (East US)" Activity Error section: "errorCode": "" "message": "" "failureType": "" "target": "MyCustomActivity"
If you would like to consume the content of stdout.txt in downstream activities, you can get the path to the stdout.txt file in expression "@activity('MyCustomActivity').output.outputs[0]".
Important
- The activity.json, linkedServices.json, and datasets.json are stored in the runtime folder of the Batch task. For this example, the activity.json, linkedServices.json, and datasets.json are stored in
"\<GUID>/runtime/"path. If needed, you need to clean them up separately.
- For Linked Services that use the Self-Hosted Integration Runtime, the sensitive information like keys or passwords are encrypted by the Self-Hosted Integration Runtime to ensure credential stays in customer defined private network environment. Some sensitive fields could be missing when referenced by your custom application code in this way. Use SecureString in extendedProperties instead of using Linked Service reference if needed.
Pass outputs to another activity
You can send custom values from your code in a Custom Activity back to Azure Data Factory. You can do so by writing them into
outputs.json from your application. Data Factory copies the content of
outputs.json and appends it into the Activity Output as the value of the
customOutput property. (The size limit is 2MB.) If you want to consume the content of
outputs.json in downstream activities, you can get the value by using the expression
@activity('<MyCustomActivity>').output.customOutput.
Retrieve SecureString outputs
Sensitive property values designated as type SecureString, as shown in some of the examples in this article, are masked out in the Monitoring tab in the Data Factory user interface. In actual pipeline execution, however, a SecureString property is serialized as JSON within the
activity.json file as plain text. For example:
"extendedProperties": { "connectionString": { "type": "SecureString", "value": "aSampleSecureString" } }
This serialization is not truly secure, and is not intended to be secure. The intent is to hint to Data Factory to mask the value in the Monitoring tab.
To access properties of type SecureString from a custom activity, read the
activity.json file, which is placed in the same folder as your .EXE, deserialize the JSON, and then access the JSON property (extendedProperties => [propertyName] => value).
Compare v2 Custom Activity and version 1 (Custom) DotNet Activity
In Azure Data Factory version 1, you implement a (Custom) DotNet Activity by creating a .NET Class Library project with a class that implements the
Execute method of the
IDotNetActivity interface. The Linked Services, Datasets, and Extended Properties in the JSON payload of a (Custom) DotNet Activity are passed to the execution method as strongly-typed objects. For details about the version 1 behavior, see (Custom) DotNet in version 1. Because of this implementation, your version 1 DotNet Activity code has to target .NET Framework 4.5.2. The version 1 DotNet Activity also has to be executed on Windows-based Azure Batch Pool nodes.
In the Azure Data Factory V2 Custom Activity, you are not required to implement a .NET interface. You can now directly run commands, scripts, and your own custom code, compiled as an executable. To configure this implementation, you specify the
Command property together with the
folderPath property. The Custom Activity uploads the executable and its dependencies to
folderpath and executes the command for you.
The Linked Services, Datasets (defined in referenceObjects), and Extended Properties defined in the JSON payload of a Data Factory v2 Custom Activity can be accessed by your executable as JSON files. You can access the required properties using a JSON serializer as shown in the preceding SampleApp.exe code sample.
With the changes introduced in the Data Factory V2 Custom Activity, you can write your custom code logic in your preferred language and execute it on Windows and Linux Operation Systems supported by Azure Batch.
The following table describes the differences between the Data Factory V2 Custom Activity and the Data Factory version 1 (Custom) DotNet Activity:
If you have existing .NET code written for a version 1 (Custom) DotNet Activity, you need to modify your code for it to work with the current version of the Custom Activity. Update your code by following these high-level guidelines:
- Change the project from a .NET Class Library to a Console App.
- Start your application with the
Mainmethod. The
Executemethod of the
IDotNetActivityinterface is no longer required.
- Read and parse the Linked Services, Datasets and Activity with a JSON serializer, and not as strongly-typed objects. Pass the values of required properties to your main custom code logic. Refer to the preceding SampleApp.exe code as an example.
- The Logger object is no longer supported. Output from your executable can be printed to the console and is saved to stdout.txt.
- The Microsoft.Azure.Management.DataFactories NuGet package is no longer required.
- Compile your code, upload the executable and its dependencies to Azure Storage, and define the path in the
folderPathproperty.
For a complete sample of how the end-to-end DLL and pipeline sample described in the Data Factory version 1 article Use custom activities in an Azure Data Factory pipeline can be rewritten as a Data Factory Custom Activity, see Data Factory Custom Activity sample..
Next steps
See the following articles that explain how to transform data in other ways:
Feedback | https://docs.microsoft.com/en-us/azure/data-factory/transform-data-using-dotnet-custom-activity | CC-MAIN-2019-39 | en | refinedweb |
These are chat archives for devslopes/swiftios9
We've Officially Changed Chatrooms. Join here:
How can i have a private var in my super class and set it in my subclass when both classes are not defined in the same file? (Now i get the following error: use of unresolved identifier '_hp')
I think i'm running into the access control level 1 (private entities can only be accessed from within the source file where they are defined.)
ps. i'm currently building the OOP exercise app :smile: :+1:
class Character { private var _hp = 100 } class Solider: Character { func attemptAttack(attackPower: Int) { self._hp -= attackPower //This line gives the error when placing these classes in seperate files } }
Swift provides three different access levels for entities within your code. These access levels are relative to the source file in which an entity is defined, and also relative to the module that source file belongs to..
private var _hp: Int var hp: Int { return _hp }
lecture 47 (the OOP exercise)
I normally use the accessor as i learned that in this course but found it strange that you can't access private var from a subclass in essence it is the same class.
import Foundation class Character { private var _health: Int = 100 private var _strength: Int = 10 var strength: Int { get { return _strength } set { _strength = strength } } var health: Int { get { return _health } } var isAlive: Bool { get { if health <= 0 { return false } else { return true } } } init(startHealth: Int, startStrength: Int) { self._health = startHealth self._strength = startStrength } func attemptAttack(attackPower: Int) -> Bool { self._health -= attackPower return true } }
import Foundation class Enemy: Character { var loot: [String] { return ["Rusty Dager", "Lint"] } var type: String{ return "Grunt" } func dropLoot() -> String? { if !isAlive { let rand = Int(arc4random_uniform(UInt32(loot.count))) return loot[rand] } else{ return nil } } }
@Wrenbjor i have the same setup but wanted to have a special case for my enemy which gives an extra HP when the attackPower is lower than it's IMMUNE_MAX
Offcourse i can add a function which does that in the superclass but this undermines (my) idea of polymorphism/inheritance | https://gitter.im/devslopes/swiftios9/archives/2015/11/03?at=5638ccc264376ec44425e13b | CC-MAIN-2019-39 | en | refinedweb |
Understanding type-checking in JavaScript
A very important aspect of every programming language is its type system and data types. For a strictly typed programming language like Java, variables are defined to be of a particular type, constraining the variable to only contain values of that type.
JavaScript, however, is a dynamically typed language, although some extensions exist that support strict typing, such as TypeScript.
With JavaScript, it is possible to have a variable that started off as containing a
string, and much later in its lifecycle, has become a reference to an
object. There are even times when the JavaScript engine implicitly coerces the type of a value during script execution. Type-checking is very critical to writing predictable JavaScript programs.
JavaScript has a pretty basic
typeofoperator for the purpose of type-checking.
However, you will notice that using this operator could be misleading, as we will discuss in this article.
JavaScript data types
Before looking at type-checking with
typeof, it is important to have a glance at the JavaScript data types. Although this article does not go into details about the JavaScript data types, you can glean a thing or two as you progress.
Prior to ES6, JavaScript had 6 data types. In the ES6 specification, the
Symbol type was added. Here is a list of all the types:
- String
Number
Boolean — (the values
trueand
false)
null — (the value
null)
undefined — (the value
undefined)
Symbol
Object
The first six data types are referred to as primitive types. Every other data type besides these first six is an object and may be referred to as a reference type. An object type is simply a collection of properties in the form of name and value pairs.
Notice from the list that
null and
undefined are primitive JavaScript data types, each being a data type containing just one value.
You may begin to wonder — what about arrays, functions, regular expressions, etc? They are all special kinds of objects.
- An
arrayis a special kind of object that is an ordered collection of numbered values with special syntax and characteristics that makes working with it different from with regular objects.
- A
functionis a special kind of object that has an executable script block associated with it. The script block is executed by invoking the function. It also has a special syntax and characteristics that makes it different from other regular objects.
JavaScript has several object class constructors for creating other kinds of objects such as:
Date— for creating date objects
RegExp— for creating regular expressions
Error— for creating JavaScript errors
Type-checking using typeof
Syntax
The
typeof operator in JavaScript is a unary operator(takes only one operand) that evaluates to a string indicating the type of its operand. Just like other unary operators, it is placed before its operand separated by a space:
typeof 53; // "number"
However, there is an alternative syntax that allows you to use
typeof like a function invocation by wrapping its operand in parentheses. This is very useful for type-checking the value returned from JavaScript expressions:
typeof(typeof 53); // "string"
Error safety
Prior to ES6, the
typeof operator always returns a string irrespective of the operand it is used on.
For undeclared identifiers,
typeofwill return
“undefined”instead of throwing a
ReferenceError.
console.log(undeclaredVariable === undefined); // ReferenceError console.log(typeof undeclaredVariable === 'undefined'); // true
However, in ES6, block-scoped variables declared using the
let or
const keywords will still throw a
ReferenceError if they are used with the
typeof operator before they are initialized. This is because:
Block scoped variables remain in the temporal dead zone until they are initialized:
// Before block-scoped identifier: typeof => ReferenceError console.log(typeof tdzVariable === 'undefined'); // ReferenceError const tdzVariable = 'I am initialized.';
Type-checks
The following code snippet shows type-checks for common values using the
typeof operator:
console.log(typeof ""); // "string" console.log(typeof "hello"); // "string" console.log(typeof String("hello")); // "string" console.log(typeof new String("hello")); // "object" console.log(typeof 0); // "number" console.log(typeof -0); // "number" console.log(typeof 0xff); // "number" console.log(typeof -3.142); // "number" console.log(typeof Infinity); // "number" console.log(typeof -Infinity); // "number" console.log(typeof NaN); // "number" console.log(typeof Number(53)); // "number" console.log(typeof new Number(53)); // "object" console.log(typeof true); // "boolean" console.log(typeof false); // "boolean" console.log(typeof new Boolean(true)); // "object" console.log(typeof undefined); // "undefined" console.log(typeof null); // "object" console.log(typeof Symbol()); // "symbol" console.log(typeof []); // "object" console.log(typeof Array(5)); // "object" console.log(typeof function() {}); // "function" console.log(typeof new Function); // "function" console.log(typeof new Date); // "object" console.log(typeof /^(.+)$/); // "object" console.log(typeof new RegExp("^(.+)$")); // "object" console.log(typeof {}); // "object" console.log(typeof new Object); // "object"
Notice that all object type constructor functions, when instantiated with the
new keyword will always have a type of
“object”. The only exception to this is the
Function constructor.
Here is a simple summary of the type-check results:
Better type-checking
The type-check results from the previous section indicate that some values will require additional checks to further distinguish them. For example:
null and
[] will both be of
“object” type when type-check is done using the
typeof operator.
The additional checks on the value can be done by leveraging on some other characteristics:
- – using the
instanceofoperator
- – checking the
constructorproperty of the object
- – checking the object class using the
toString()method of the object
Checking for null
Using the
typeof operator to check for a
“null” value does no good, as you have already seen. The best way to check for a
“null” value is to do a strict equality comparison of the value against the
null keyword as shown in the following code snippet.
function isNull(value) { return value === null; }
The use of the strict equality operator(
===) is very important here. The following code snippet illustrates this importance using the
undefined value:
console.log(undefined == null); // true console.log(undefined === null); // false
Checking for NaN
NaN is a special value received when arithmetic operations result in values that are undefined cannot be represented. For example:
(0 / 0) => NaN. Also, when an attempt is made to convert a non-numeric value that has no primitive number representation to a number,
NaN is the result.
Any arithmetic operation involving
NaNwill always evaluate to
NaN.
If you really want to use a value for any form of arithmetic operation then you want to be sure that the value is not
NaN.
Using the
typeof operator to check for
NaN value returns
“number”. To check for
NaN value, you can use the global
isNaN() function, or preferably the
Number.isNaN() function added in ES6:
console.log(isNaN(NaN)); // true console.log(isNaN(null)); // false console.log(isNaN(undefined)); // true console.log(isNaN(Infinity)); // false console.log(Number.isNaN(NaN)); // true console.log(Number.isNaN(null)); // false console.log(Number.isNaN(undefined)); // false console.log(Number.isNaN(Infinity)); // false
The
NaNvalue has a very special characteristic. It is the only JavaScript value that is never equal to any other value by comparison, including itself:
var x = NaN; console.log(x == NaN); // false console.log(x === NaN); // false
You can check for
NaN as follows:
function isNan(value) { return value !== value; }
The above function is very similar to the implementation of
Number.isNaN() added in ES6 and hence can be used as a polyfill for non-ES6 environments as follows:
Number.isNaN = Number.isNaN || (function(value) { return value !== value; })
Finally, you can leverage on the
Object.is() function added in ES6 to test if a value is
NaN. The
Object.is() function checks if two values are the same value:
function isNan(value) { return Object.is(value, Number.NaN); }
Checking for arrays
Using
typeof to check for an array will return
“object”. There are several ways to better check for an array as shown in this code snippet:
// METHOD 1: constructor property // Not reliable function isArray(value) { return typeof value == 'object' && value.constructor === Array; } // METHOD 2: instanceof // Not reliable since an object's prototype can be changed // Unexpected results within frames function isArray(value) { return value instanceof Array; } // METHOD 3: Object.prototype.toString() // Better option and very similar to ES6 Array.isArray() function isArray(value) { return Object.prototype.toString.call(value) === '[object Array]'; } // METHOD 4: ES6 Array.isArray() function isArray(value) { return Array.isArray(value); }
Generic type-checking
As seen with arrays, the
Object.prototype.toString() method can be very useful for checking the object type of any JavaScript value. When it is invoked on a value using
call() or
apply(), it returns the object type in the format:
[object Type], where
Type is the object type.
Consider the following code snippet:
function type(value) { var regex = /^[object (S+?)]$/; var matches = Object.prototype.toString.call(value).match(regex) || []; return (matches[1] || 'undefined').toLowerCase(); }
The following code snippet shows results of type-checking using the just created
type() function:
console.log(type('')); // "string" console.log(type('hello')); // "string" console.log(type(String('hello'))); // "string" console.log(type(new String('hello'))); // "string" console.log(type(0)); // "number" console.log(type(-0)); // "number" console.log(type(0xff)); // "number" console.log(type(-3.142)); // "number" console.log(type(Infinity)); // "number" console.log(type(-Infinity)); // "number" console.log(type(NaN)); // "number" console.log(type(Number(53))); // "number" console.log(type(new Number(53))); // "number" console.log(type(true)); // "boolean" console.log(type(false)); // "boolean" console.log(type(new Boolean(true))); // "boolean" console.log(type(undefined)); // "undefined" console.log(type(null)); // "null" console.log(type(Symbol())); // "symbol" console.log(type(Symbol.species)); // "symbol" console.log(type([])); // "array" console.log(type(Array(5))); // "array" console.log((function() { return type(arguments) })()); // "arguments" console.log(type(function() {})); // "function" console.log(type(new Function)); // "function" console.log(type(class {})); // "function" console.log(type({})); // "object" console.log(type(new Object)); // "object" console.log(type(/^(.+)$/)); // "regexp" console.log(type(new RegExp("^(.+)$"))); // "regexp" console.log(type(new Date)); // "date" console.log(type(new Set)); // "set" console.log(type(new Map)); // "map" console.log(type(new WeakSet)); // "weakset" console.log(type(new WeakMap)); // "weakmap"
Bonus fact: everything is not an object
It is very possible that at one point or the other, you may have come across this statement:
“Everything in JavaScript is an object.” — (False)
This could be very misleading and as a matter of fact, it is not true. Everything in JavaScript is not an object. Primitives are not objects.
You may begin to wonder — why then can we do the following kinds of operations on primitives if they are not objects?
(“Hello World!”).length— getting
lengthproperty of the string
(“Another String”)[8]— getting the character of the string at index
8
(53.12345).toFixed(2)— calling
Number.prototype.toFixed()method on the number
The reason why we can achieve these with primitives is because the JavaScript engine implicitly creates a corresponding wrapper object for the primitive and invokes the method or accesses the property on it.
When the value has been returned, the wrapper object is discarded and removed from memory. For the operations listed earlier, the JavaScript engine implicitly does the following:
// wrapper object: new String("Hello World!") (new String("Hello World!")).toLowerCase(); // wrapper object: new String("Another String") (new String("Another String"))[8]; // wrapper object: new Number(53.12345) (new Number(53.12345)).toFixed(2);
Conclusion
In this article, you have been taken through a pinch of the JavaScript type system and its data types, and how type-checking can be performed using the
typeof operator.
You also saw how misleading type-checking can be, using the
typeof operator. And finally, you saw several ways of implementing predictable type-checking for some data types.
If you are interested in getting some additional information about the JavaScript
typeof operator, you can refer to this article.. | http://blog.logrocket.com/javascript-typeof-2511d53a1a62/ | CC-MAIN-2019-39 | en | refinedweb |
following code reads a CSV file line by line, performs user agent matching and saves result in another CSV file. Essentially two new comma-separated values are added to each line that contain detection results. In this case we are interested in the following properties: IsMobile and HardwareVendor. For my test I've used the millions.csv file that can be downloaded from our website.
package processcsv;
import fiftyone.mobile.detection.Match;
import fiftyone.mobile.detection.Provider;
import fiftyone.mobile.detection.factories.MemoryFactory;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;
public class ProcessCSV {
private static final String FOD_FILE_PATH = "path_to_dat_file";
private static final String CSV_FILE_PATH = "path_to_original_csv_file";
private static final String CSV_FILE_RESULT = "path_to_destination_csv_file";
/**
* @param args the command line arguments
*/
public static void main(String[] args) {
Provider p;
FileWriter writer;
BufferedReader br;
String line;
Match m;
try {
//MemoryFactory is faster than StreamFactory but requires more memory
p = new Provider(MemoryFactory.create(FOD_FILE_PATH));
br = new BufferedReader(new FileReader(CSV_FILE_PATH));
writer = new FileWriter(CSV_FILE_RESULT);
while ((line = br.readLine()) != null) {
m = p.match(line);
writer.append(line);
writer.append(',');
writer.append(m.getValues("IsMobile").toString());
writer.append(',');
writer.append(m.getValues("HardwareVendor").toString());
writer.append('\n');
}
} catch (IOException ex) {
Logger.getLogger(ProcessCSV.class.getName()).log(Level.SEVERE, null, ex);
} finally {
//Destroy objects.
}
}
}
Originally the CSV file contained records of the following format:
Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)
Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36"
"Mozilla/5.0 (iPad; CPU OS 6_1_3 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10B329 Safari/8536.25"
After executing this program:
Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0),False,Unknown
Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko,False,Unknown
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36",False,Unknown
"Mozilla/5.0 (iPad; CPU OS 6_1_3 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10B329 Safari/8536.25",True,Apple | https://51degrees.com/Developers/Documentation/Previous/Java/offline-application | CC-MAIN-2019-39 | en | refinedweb |
> From: Joachim Schmitz [mailto:address@hidden > Sent: Tuesday, September 04, 2012 1:49 PM > To: 'Junio C Hamano' > Cc: 'address@hidden'; 'Erik Faye-Lund' > Subject: RE: [PATCH v2] Support non-WIN32 system lacking poll() while keeping > the WIN32 part intact > > > From: Junio C Hamano [mailto:address@hidden > > Sent: Friday, August 24, 2012 9:47 PM > > To: Joachim Schmitz > > Cc: address@hidden; 'Erik Faye-Lund' > > Subject: Re: [PATCH v2] Support non-WIN32 system lacking poll() while > > keeping the WIN32 part intact > > > > "Joachim Schmitz" <address@hidden> writes: > > > > > Different, but related question: would poll.[ch] be allowed to #include > > > "git-compat-util.h"? > > > > Seeing other existing generic wrappers directly under compat/, > > e.g. fopen.c, mkdtemp.c, doing so, I would say why not. > > > > Windows folks (I see Erik is already CC'ed, which is good ;-), > > please work with Joachim to make sure such a move won't break your > > builds. I believe that it should just be the matter of updating a > > couple of paths in the top-level Makefile. > > Haven't heard anything from the Windows folks yet. > > I'd prefer to move compat/win32/poll.[ch] into compat/poll. > Then adjust a few paths in Makefile and that would be the 1st patch > > A 2nd patch would be my already proposed ones that make this usable for > others (me in this case ;-)), namely wrapping 2 #inludes. > > diff --git a/compat/poll/poll.c b/compat/poll/poll.c > index 403eaa7..49541f1 100644 > --- a/compat/poll/poll.c > +++ b/compat/poll/poll.c > @@ -24,7 +24,9 @@ > # pragma GCC diagnostic ignored "-Wtype-limits" > #endif > > -#include <malloc.h> > +#if defined(WIN32) > +# include <malloc.h> > +#endif > > #include <sys/types.h> > > @@ -48,7 +50,9 @@ > #else > # include <sys/time.h> > # include <sys/socket.h> > -# include <sys/select.h> > +# ifndef NO_SYS_SELECT_H > +# include <sys/select.h> > +# endif > # include <unistd.h> > #endif > > -- > 1.7.12 However: this poll implementation, while compiling OK, doesn't work properly. Because it uses recv(...,MSG_PEEK), it works on sockets only (returns ENOTSOCK on anything else), while the real poll() works on all kind if file descriptors, at least that is my understanding. Here on HP NonStop, when being connected via an non-interactive SSH, we get a set of pipes (stdin, stdout, stderr) instead of a socket to talk to, so the poll() just hangs/loops. As git's implementation is based on ('stolen' from?) gnulib's and still pretty similar, CC to the gnulib list and Paolo Any idea how this could get solved? I.e. how to implement a poll() that works on non-sockets too? There is some code that pertains to a seemingly similar problem in Mac OS X, but my problem is not identical, as that fix doesn't help. Bye, Jojo | https://lists.gnu.org/archive/html/bug-gnulib/2012-09/msg00032.html | CC-MAIN-2019-39 | en | refinedweb |
Why do i get truncated array inside my C function?
In C:
#include <Python.h>
#include <arrayobject.h>
PyObject *edge(PyObject *self, PyObject *args) {
int *ptr;
unsigned char *charPtr;
PyArrayObject *arr;
PyObject *back;
int ctr = 0;
int size = 500 * 500;
if (!PyArg_ParseTuple(args, "O", &arr))
return NULL;
charPtr = (char*)arr->data;
printf("\n strlen of charPtr is ---> %d \n", strlen(arr->data)); // --->> 25313
printf("\n strlen of charPtr is ---> %d \n", strlen(charPtr)); //--->> also 25313
back = Py_BuildValue("s", "Nice");
return back;
}
import ImageProc
import cv2
import numpy
import matplotlib.pyplot as plt
img = cv2.imread("C:/Users/srlatch/Documents/Visual Studio 2015/Projects/PythonImage/andrew.jpg", cv2.IMREAD_GRAYSCALE)
np = cv2.resize(img, (500,500))
for i in np:
for k in i:
count += 1
print("Size before passing to edge " + str(count) ) // --->> 250000
result = ImageProc.edge(np)
cv2.imshow("image", np)
cv2.waitKey()
strlen counts as far as the first 0 in your data (it's designed for null terminated text strings). Also, if the first 0 it encounters is after your data has finished then it'll return a number that's too big meaning you might try to write to data you don't own.
To work out the size of the pyarrayobject you need to use
arr->nd to work out the number of dimensions then
arr->dimensions (an array) to work out how big each dimension is. You should also use
arr->descr to work out what data type your array is rather than just testing it as char. | https://codedump.io/share/xSZsRIKWlKz3/1/why-my-pyarrayobject-data-truncated | CC-MAIN-2017-34 | en | refinedweb |
String.Substring Method (Int32, Int32).
You call the Substring(Int32, Int32) method to extract a substring from a string that begins at a specified character position and ends before the end of the string. The starting character position is a zero-based; in other words, the first character in the string is at index 0, not index 1. To extract a substring that begins at a specified character position and continues to the end of the string, call the Substring(Int32) method.
The length parameter represents the total number of characters to extract from the current string instance. This includes the starting character found at index startIndex. In other words, the Substring method attempts to extract characters from index startIndex to index startIndex + length - 1.
To extract a substring that begins with a particular character or character sequence, call a method such as IndexOf or LastIndexOf to get the value of startIndex.
If the substring extends from startIndex to a specified character sequence, you can call a method such as IndexOf or LastIndexOf to get the index of the ending character or character sequence. You can then convert that value to an index position in the string as follows:
If you've searched for a single character that is to mark the end of the substring, the length parameter equals endIndex - startIndex + 1, where endIndex is the return value of the IndexOf or IndexOf method. The following example extracts a continuous block of "b" characters from a string.
using System; public class Example { public static void Main() { String s = "aaaaabbbcccccccdd"; Char charRange = 'b'; int startIndex = s.IndexOf(charRange); int endIndex = s.LastIndexOf(charRange); int length = endIndex - startIndex + 1; Console.WriteLine("{0}.Substring({1}, {2}) = {3}", s, startIndex, length, s.Substring(startIndex, length)); } } // The example displays the following output: // aaaaabbbcccccccdd.Substring(5, 3) = bbb. The following example extracts a block of text that contains an XML <definition> element.
using System; public class Example { public static void Main() { String s = "<term>extant<definition>still in existence</definition></term>"; String searchString = "<definition>"; int startIndex = s.IndexOf(searchString); searchString = "</" + searchString.Substring(1); int endIndex = s.IndexOf(searchString); String substring = s.Substring(startIndex, endIndex + searchString.Length - startIndex); Console.WriteLine("Original string: {0}", s); Console.WriteLine("Substring; {0}", substring); } } // The example displays the following output: // Original string: <term>extant<definition>still in existence</definition></term> // Substring; <definition>still in existence</definition>
If the character or character sequence is not included in the end of the substring, the length parameter equals endIndex - startIndex, where endIndex is the return value of the IndexOf or IndexOf method.
If startIndex is equal to zero and equals the length of the current string, the method returns the original string unchanged.
The following example illustrates a simple call to the Substring(Int32, Int32) method that extracts two characters from a string starting at the sixth character position (that is, at index five).
The following example uses the Substring(Int32, Int32) method in the following three cases to isolate substrings within a string. In two cases the substrings are used in comparisons, and in the third case an exception is thrown because invalid parameters are specified.
It extracts the single character and the third position in the string (at index 2) and compares it with a "c". This comparison returns true.
It extracts zero characters starting at the fourth position in the string (at index 3) and passes it to the IsNullOrEmpty method. This returns true because the call to the Substring method returns String.Empty.
It attempts to extract one character starting at the fourth position in the string. Because there is no character at that position, the method call throws an ArgumentOutOfRangeException exception.
using System; public class Sample { public static void Main() {); } } } // The example displays the following output: // True // True // Index and length must refer to a location within the string. // Parameter name: length
using System; public class Example { public static void Main() { String[] pairs = { "Color1=red", "Color2=green", "Color3=blue", "Title=Code Repository" }; foreach (var pair in pairs) { int position = pair.IndexOf("="); if (position < 0) continue; Console.WriteLine("Key: {0}, Value: '{1}'", pair.Substring(0, position), pair.Substring(position + 1)); } } } //. | https://msdn.microsoft.com/EN-US/library/aka44szs | CC-MAIN-2017-34 | en | refinedweb |
axiomatic has featured prominently in many of my recent blog posts, but what is it?
As one might guess, in simplest terms,
axiomatic is a tool for manipulating Axiom databases. Going into a little more detail,
axiomatic is a command line tool which gathers
axiom.iaxiom.IAxiomaticCommand plugins using the Twisted plugin system and presents them as subcommands to the user, while providing the implementations of these subcommands with access to objects that pretty much any Axiom manipulation code is going to want (currently, that amounts to an opened Store instance). Twisted's option parser is used as the basic unit of functionality here.
I've already talked about several
IAxiomaticCommand implementations: web, mantissa, and start. The first two of these add new Items to an Axiom database or change the values associated with existing Items. The last "starts" an Axiom database (more on what that means in a future post!).
So what you really want to know is how do I write an axiomatic plugin? Let me tell you, it could scarcely be easier:
from zope.interface import classProvides
from twisted import plugin
from twisted.python import usage
from axiom import iaxiom
from axiom.scripts import axiomatic
class PrintAllItems(usage.Options, axiomatic.AxiomaticSubCommandMixin):
classProvides(
# This one tells the plugin system this object is a plugin
plugin.IPlugin,
# This one tells axiom it is an axiomatic plugin
iaxiom.IAxiomaticCommand)
# This is how it will be invoked on the command line
name = "print-some-items"
# This will show up next to the name in --help output
description = "Display an arbitrary number of Items"
optParameters = [
# We'll take the number of Items to display as a command
# line parameter. This is "--num-items x" or "-n x" or
# any of the other standard spellings. The default is 5.
('num-items', 'n', '5', 'The number of Items to display'),
]
def postOptions(self):
s = self.parent.getStore()
count = int(self.decodeCommandLine(self['num-items']))
for i in xrange(count):
try:
print s.getItemByID(i)
except KeyError:
# The Item didn't exist
pass
(Okay, seriously - I know this is a bit long: we'll be working on lifting a lot of the boilerplate out to simplify things; expect about half of the above code to become redundant soon). I just drop this into a file in axiom/plugins/ (there are other places I could put it, if I wanted; read the Twisted plugin documentation to learn more) and I've got a new subcommand:
exarkun@boson:~$ axiomatic --help
Usage: axiomatic [options]
Options:
-d, --dbdir= Path containing axiom database to configure/create
--version
--help Display this help and exit.
Commands:
mail Accept SMTP connections
userbase Users. Yay.
mantissa Blank Mantissa service
web-application Web interface for normal user
web Web. Yay.
web-admin Administrative controls for the web
radical omfg play the game now
click-chronicle-site Chronicler of clicking
encore Install BookEncore site store requirements.
sip-proxy SIP proxy and registrar
vendor Interface for purchasing new services.
vendor-site Required site-store installation gunk
print-some-items Display an arbitrary number of Items
start Launch the given Axiomatic database
Neat, huh?*
*
In case you're dying to see the output of this new command:
exarkun@boson:~/Scratch/Run/demo$ axiomatic -d my.axiom/ print-some-items
<Scheduler>
<axiom.item._PowerupConnector object at 0xb6fc770c>
<axiom.item._PowerupConnector object at 0xb6fc770c>
<axiom.userbase.LoginSystem object at 0xb6f42104>
exarkun@boson:~/Scratch/Run/demo$
Why would one make an axiomatic plugin? | http://as.ynchrono.us/2005/11/adding-axiomatic-plugins_15.html | CC-MAIN-2017-34 | en | refinedweb |
Simple Clojure Protocols Tutorial
Protocols OverviewProtocols Overview
- Provide a high-performance, dynamic polymorphism construct as an alternative to interfaces
- Specification only, no implementation
- Protocols are a mechanism for polymorphism.
- Protocols can be useful for defining an external boundary, such as an interface to a service. In this case, it's useful to have polymorphism so we can substitute different services, or use mock services for testing.
- A protocol is a set of methods. The protocol has a name and an optional documentation string. Each method has a name, one or more argument vectors, and an optional documentation string. That's it! There are no implementations, no actual code.
- An important difference between protocols and interfaces: protocols have no inheritance. You cannot create “subprotocols” like Java's subinterfaces.
- A datatype is not required to provide implementations for every method of its protocols or interfaces. Methods lacking an implementation will throw an AbstractMethodError when
called on instances of that data type.
Extending Protocols to Already Existing TypesExtending Protocols to Already Existing Types
- To create a new protocol that operates on an existing datatype and f.ex when you cannot modify the source code
of the defrecord. You can still extend the protocol to support that datatype, using the extend function:
(extend DatatypeName SomeProtocol {:method-one (fn [x y] ...) :method-two existing-function} AnotherProtocol {...})
Extend takes a datatype name followed by any number of protocol/method map pairs. A method
map is an ordinary map from method names, given as keywords, to their implementations.
- Use extend-type when you want to implement several protocols for the same datatype; use extend-protocol when you want to implement the same protocol for several datatypes.
How protocols solve the expression problemHow protocols solve the expression problem
Expression problemExpression problem
The basic problem of extensibility: our programs manipulate data types using operations.
As our programs evolve, we need to extend them with new data types and new operations. Particularly, we want to be able to add new operations that work with the existing data types and new data types that work with the existing operations. Furthermore, we want this to be a true extension (i.e. we don't want to modify the existing program), we want to respect the existing abstractions, and we want our extensions to be in separate modules and namespaces. We also want the extensions to be separately compiled, deployed, and type checked.
Multimethods already help solve the expression problem; the main thing Protocols offer over Multimethods is Grouping: you can group multiple functions together and say "these 3 functions together form Protocol Foo". You cannot do that with Multimethods — they always stand on their own.
Why protocols when we have multimethods?
Any platform that you would like Clojure to run on (JVM, CLI, ECMAScript, Objective-C) has specialized high-performance support for dispatching solely on the type of the first argument. Clojure Multimethods OTOH dispatch on arbitrary properties of all arguments.
So, Protocols restrict you to dispatch only on the first argument and only on its type (or as a special case on nil).
OO Style with Protocols Often Hides Obvious & Simple ThingsOO Style with Protocols Often Hides Obvious & Simple Things
(defprotocol Saving (save [this] "saves to mongodb") (collection-name [this] "must return a string representing the associated MongoDB collection")) ;Default implementation (extend-type Object Saving ; the `save` method is common for all, so it is actually implemened here (save [this] (mc/insert (collection-name [this]) this)) ; this method is custom to every other type (collection-name [this] "no_collection")) ;Particular implementations (defrecord User [login password] Saving (collection-name [this] "users")) (defrecord NewsItem [text date] Saving (collection-name [this] "news_items"))
Instead of this, where save fn won't work on User and NewsItem, use:
Make the save function a normal function:
(defn save [obj] (mc/insert (collection-name obj) obj)) The protocol should only have collection-name (defprotocol Saving (collection-name [this] "must return a string representing the associated MongoDB collection"))
Each object that wants to be "saved" can implement this protocol. | https://www.codementor.io/mounacheikhna/simple-clojure-protocols-tutorial-ss8pywzkd | CC-MAIN-2017-34 | en | refinedweb |
On Jun 4, 2012, at 3:45 PM, Greg Von Kuster wrote: Hello Shantanu,
Advertising
Sorry for the delay on this - I've been backed up since the weekend when we were trading emails. The way Galaxy works with eggs is fairly complex with regard to handling conflicts, which is undoubtedly what is happening in your environment since you have the mercurial package installed for your Python 2.6. Your environment probably results in a conflict that is not properly handled by the version_conflict() method in ~/lib/galaxy/eggs/__init__.py. On Jun 4, 2012, at 4:04 PM, Shantanu Pavgi wrote: Just want to update on the list about this error. I had followed up with Greg on this issue off-list as I didn't want to share all the hg output here on the list. The galaxy had downloaded Mercurial egg however it's directory wasn't defined in the PYTHONPATH environment variable. However, it was included in the sys.path which I verified by printing it in the tool migration application. While debugging this issue, I downloaded Mercurial (egg) externally using easy_install tool and set up PYTHONPATH to point to this external Mercurial egg. After this the tool migration script worked fine. So should this galaxy-mercurial egg directory and other galaxy egg directories be included in the PYTHONPATH environment variable? For Galaxy, you really don't have to set PYTHONPATH at all, so if you are not using it for other Python related stuff on your Galaxy server, try unsetting it, and Galaxy's mercurial egg will probably be found. If you need PYTHONPATH set, take a look at the version_conflict() method in ~/lib/galaxy/eggs/__init__.py and see if you can figure out what is not being handled in your environment. Also, I noticed that tool migration application failed again when I had default sqlite database URL mentioned in the universe_wsgi.ini file as it is (commented out). It works fine if the default database URL is uncommented. I'll take a look at this. Thanks for reporting it. {{{ $ sh ./scripts/migrate_tools/0002_tools.sh Traceback (most recent call last): File "./scripts/migrate_tools/migrate_tools.py", line 21, in <module> app = MigrateToolsApplication( sys.argv[ 1 ] ) File "/home/shantanu/tmp/galaxy-dist/lib/galaxy/tool_shed/migrate/common.py", line 81, in __init__ object_store=self.object_store ) File "/home/shantanu/tmp/galaxy-dist/lib/galaxy/model/mapping.py", line 1836, in init load_egg_for_url( url ) File "/home/shantanu/tmp/galaxy-dist/lib/galaxy/model/mapping.py", line 1816, in load_egg_for_url dialect = guess_dialect_for_url( url ) File "/home/shantanu/tmp/galaxy-dist/lib/galaxy/model/mapping.py", line 1812, in guess_dialect_for_url return (url.split(':', 1))[0] AttributeError: 'bool' object has no attribute 'split' }}} -- Thanks, Shantanu Greg, Thanks for the reply. I don't have any rush on this issue but just wanted to update on the list before I forget about it. I was using PYTHONPATH only to get an external tool dependency (MACS<>) in the galaxy's environment. As you mentioned, this may have caused some issues while locating galaxy-Mercurial egg, which got resolved after I externally installed Mercurial and also got it in PYTHONPATH. I tried unsetting PYTHONPATH completely which got external - MACS and mercurial - installations out of the galaxy environment, however, it didn't resolve the tool migration error. I will dig into this issue later in the week. Right now it's not a blocking issue as I have migrated EMBOSS tool using external-Mercurial. Thanks for pointing out the code details. -- Shantanu
___________________________________________________________ Please keep all replies on the list by using "reply all" in your mail client. To manage your subscriptions to this and other Galaxy lists, please use the interface at: | https://www.mail-archive.com/[email protected]/msg05772.html | CC-MAIN-2017-34 | en | refinedweb |
Run linters against staged git files and don't let 💩 slip into your code base!.
If you've written one, please submit a PR with the link to it!
npm install --save-dev lint-staged husky
.eslintrc,
.stylelintrc, etc.
package.jsonlike this:
Now change a few files,
git add some of them to your commit and try to
git commit them.
See examples and configuration below.
I recommend using husky to manage git hooks but you can use any other tool.
NOTE:
If you're using commitizen and having following npm-script
{ commit: git-cz },
precommithook will run twice before commitizen cli and after the commit. This buggy behaviour is introduced by husky.
To mitigate this rename your
commitnpm script to something non git hook namespace like, for example
{ cz: git-cz }
Starting with v3.1 you can now use different ways of configuring it:
lint-stagedobject in your
package.json
.lintstagedrcfile in JSON or YML format
lint-staged.config.jsfile in JS format
See cosmiconfig for more details on what formats are supported.
Lint-staged supports simple and advanced config formats.
Should be an object where each value is a command to run and its key is a glob pattern to use for this command. This package uses minimatch for glob patterns.
package.jsonexample:
.lintstagedrcexample
This config will execute
npm run my-task with the list of currently staged files passed as arguments.
So, considering you did
git add file1.ext file2.ext, lint-staged will run the following command:
npm run my-task -- file1.ext file2.ext
To set options and keep lint-staged extensible, advanced format can be used. This should hold linters object in
linters property.
linters—
Object— keys (
String) are glob patterns, values (
Array<String> | String) are commands to execute.
gitDir— Sets the relative path to the
.gitroot. Useful when your
package.jsonis located in a subdirectory. See working from a subdirectory
concurrent— true — runs linters for each glob pattern simultaneously. If you don’t want this, you can set
concurrent: false
chunkSize— Max allowed chunk size based on number of files for glob pattern. This is important on windows based systems to avoid command length limitations. See #147
subTaskConcurrency—
2— Controls concurrency for processing chunks generated for each linter.
verbose— false — runs lint-staged in verbose mode. When
trueit will use.
globOptions—
{ matchBase: true, dot: true }— minimatch options to customize how glob patterns match files.
It is possible to run linters for certain paths only by using minimatch patterns. The paths used for filtering via minimatch are relative to the directory that contains the
.git directory. The paths passed to the linters are absolute to avoid confusion in case they're executed with a different working directory, as would be the case when using the
gitDir option.
// .js files anywhere in the project"*.js": "eslint"// .js files anywhere in the project"**/*.js": "eslint"// .js file in the src directory"src/*.js": "eslint"// .js file anywhere within and below the src directory"src/**/*.js": "eslint"
Supported are both local npm scripts (
npm run-script), or any executables installed locally or globally via
npm as well as any executable from your $PATH.
Using globally installed scripts is discouraged, since lint-staged may not work for someone who doesn’t have it installed.
lint-staged is using npm-which to locate locally installed scripts, so you don't need to add
{ "eslint": "eslint" } to the
scripts section of your
package.json. So in your
.lintstagedrc you can write:.
Tools like ESLint/TSLint or stylefmt can reformat your code according to an appropriate config by running
eslint --fix/
tslint --fix. After the code is reformatted, we want it to be added to the same commit. This can be done using following config:
Starting from v3.1, lint-staged will stash you remaining changes (not added to the index) and restore them from stash afterwards. This allows you to create partial commits with hunks using This is still not resolved
git add --patch.
If your
package.json is located in a subdirectory of the git root directory, you can use
gitDir relative path to point there in order to make lint-staged work.
All examples assuming you’ve already set up lint-staged and husky in the
package.json.
Note we don’t pass a path as an argument for the runners. This is important since lint-staged will do this for you. Please don’t reuse your tasks with paths from package.json.
*.jsand
*.jsxrunning as a pre-commit hook
--fixand add to commit
This will run
eslint --fix and automatically add changes to the commit. Please note, that it doesn’t work well with committing hunks (
git add -p).
prettierfor javascript + flow or typescript
stylefmtand add to commit | https://www.npmjs.com/package/lint-staged | CC-MAIN-2017-34 | en | refinedweb |
Alexa.RangeController Interface
The
Alexa.RangeController interface enables your skill to model components of an endpoint that are represented by numbers within a minimum and maximum range. You can include multiple instances of a
RangeController on a single endpoint, as long as they have unique values in the
instance and
friendlyNames fields.
The
RangeController interface is highly configurable and enables you to model many different kinds of settings for many different kinds of devices. Use one of the following more specific interfaces if it's appropriate for your device:
- Alexa.BrightnessController
- Alexa.EqualizerController
- Alexa.PowerLevelController
- Alexa.Speaker
- Alexa.StepSpeaker
For the list of locales that are supported for the
RangeController interface, see List of Capability Interfaces and Supported Locales.
- Utterances
- Discovery response
- Directives
- Properties and events
Utterances
When you use the
Alexa.RangeController interface, the voice interaction model is already built for you. The following examples show some customer utterances:
Alexa, set the bedroom fan speed to 7.
Alexa, set the fan speed on the bedroom fan to maximum.
Alexa, turn up the bedroom fan speed.
Alexa, decrease the fan speed on the bedroom fan by 3.
After the customer says one of these utterances, Alexa sends a corresponding directive to your skill.
Discovery response
All smart home skills must send a discovery response (Discover.Response) that describes the devices associated with the customer's account.
Example discovery response
The following example shows a discovery response for a smart home skill that implements the
RangeController interface.
{ "event": { "header": { "messageId": "0a29824b-9299-4d55-b0c3-1d96ecfae81e", "name": "Discover.Response", "namespace": "Alexa.Discovery", "payloadVersion": "3" }, "payload": { "endpoints": [ { "endpointId": "TowerFan-001", "description": "Device description for the customer", "displayCategories": [ "OTHER" ], "friendlyName": "Basement Fan", "manufacturerName": "Example Manufacturer", "cookie": {}, "capabilities": [ { "type": "AlexaInterface", "interface": "Alexa.RangeController", "version": "3", "instance": "TowerFan.Speed", "capabilityResources": { "friendlyNames": [ { "@type": "asset", "value": { "assetId": "Alexa.Setting.FanSpeed" } } ] }, "properties": { "supported": [ { "name": "rangeValue" } ], "proactivelyReported": true, "retrievable": true }, "configuration": { "supportedRange": { "minimumValue": 1, "maximumValue": 10, "precision": 1 }, "presets": [ { "rangeValue": 10, "presetResources": { "friendlyNames": [ { "@type": "asset", "value": { "assetId": "Alexa.Value.Maximum" } }, { "@type": "asset", "value": { "assetId": "Alexa.Value.High" } }, { "@type": "text", "value": { "text": "Highest", "locale": "en-US" } } ] } } ] } }, { "type": "AlexaInterface", "interface": "Alexa", "version": "3" } ] } ] } } }
Discovery response fields
The following table describes the fields that are specific to the discovery response for devices that implement the
RangeController interface.
PresetResources and capabilityResources objects
For more information on
PresetResources and
capabilityResources objects, see Resources and Assets.
Directives
The
RangeController interface supports directives to set and adjust the value within the range, and to query the current value. The following example is a query utterance:
User: Alexa, what is the bedroom fan speed?
SetRangeValue
The following are example utterances that result in a
SetRangeValue directive.
User: Alexa, set the bedroom fan speed to 7.
User: Alexa, set the fan speed on the bedroom fan to maximum.
The following example shows a
SetRangeValue directive.
{ "directive": { "header": { "namespace": "Alexa.RangeController", "instance": "TowerFan.Speed", "name": "SetValue": 7 } } }
Payload fields
The following table describes the fields in the payload of a
SetRangeValue directive.
AdjustRangeValue
The following are example utterances that result in an
AdjustRangeValue directive.
User: Alexa, turn up the bedroom fan speed.
User: Alexa, decrease the fan speed on the bedroom fan by 3.
The following example shows an
AdjustRangeValue directive.
{ "directive": { "header": { "namespace": "Alexa.RangeController", "instance": "TowerFan.Speed", "name": "AdjustValueDelta": -3, "rangeValueDeltaDefault": false } } }
Payload fields
The following table describes the fields in the payload of an
AdjustRangeValue directive.
Properties and events
When the state of a range value changes, send a state report with a
rangeValue property. You must also include the
instance to specify which
RangeController of the endpoint you are reporting state for.
Supported Values for unitOfMeasure
The following are supported values for
unitOfMeasure:
- Alexa.Unit.Weight.Pounds
- Alexa.Unit.Weight.Ounces
- Alexa.Unit.Mass.Kilograms
- Alexa.Unit.Mass.Grams
- Alexa.Unit.Percent
- Alexa.Unit.Volume.Gallons
- Alexa.Unit.Volume.Pints
- Alexa.Unit.Volume.Quarts
- Alexa.Unit.Volume.Liters
- Alexa.Unit.Volume.CubicMeters
- Alexa.Unit.Volume.CubicFeet
- Alexa.Unit.Distance.Yards
- Alexa.Unit.Distance.Inches
- Alexa.Unit.Distance.Meters
- Alexa.Unit.Distance.Feet
- Alexa.Unit.Distance.Miles
- Alexa.Unit.Distance.Kilometers
- Alexa.Unit.Angle.Degrees
- Alexa.Unit.Angle.Radians
- Alexa.Unit.Temperature.Degrees
- Alexa.Unit.Temperature.Celsius
- Alexa.Unit.Temperature.Fahrenheit
- Alexa.Unit.Temperature.Kelvin
ErrorResponse
You should respond with an error when you cannot complete the customer request for some reason. See Alexa.ErrorResponse for more information. | https://developer.amazon.com/docs/device-apis/alexa-rangecontroller.html | CC-MAIN-2019-35 | en | refinedweb |
1. Create a Coffee class to represent a single hot beverage. Every Coffee object
contains the following instance fields:
a. A protected double variable named basePrice. This variable holds the cost
of the beverage without accounting for any special options (cream, sugar, etc.).
b. A protected ArrayList variable named options. This variable should only store
CoffeeOption objects that have been added to the given beverage.
c. A protected String variable named size. This variable represents the size of
the beverage.
d. A protected boolean variable named isDecaf. This variable will be true if the
beverage is decaffeinated, and false otherwise.
e. A public constructor that takes two arguments: a String followed by a
boolean value. The constructor should create a new, empty ArrayList and
assign it to options. The constructor should set the value of size to the String
argument. The constructor should set the value of basePrice depending on the
value of the String argument (“small” = 1.50, “medium” = 2.00, “large” = 2.50,
“extra large” = 3.00). You may assume that the String argument will always be
one of these four choices, with that spelling and capitalization. Finally, the
constructor should set the value of isDecaf to that of its boolean parameter.
f. A public method named addOption(). This method takes a CoffeeOption
as its argument, and does not return anything. This method adds its argument to
the end of the options ArrayList.
g. A public method named price(). This method returns a double value, and
does not take any arguments. The price() method returns the sum of
basePrice and the prices of all of the elements in options. Note that decaffeinated and regular
beverages are the same price.
h. A public toString() method. This method returns a String, but does not
take any arguments. Your toString() method should return a String that
contains a description of the Coffee object, in the following format:
Coffee base-price
Total: $total-price
should be either “Regular” or “”Decaf”
For example, a small decaffeinated coffee with no added options would produce
the following output:
small Decaf Coffee 1.50
Total: $1.50
A medium regular coffee with one cream and one sugar would produce the
following output:
medium Regular Coffee 2.00
add cream 0.10
add sugar 0.05
Total: $2.15
(do not worry about formatting the price to exactly two decimal places; 1.5 and
1.499999999 are equally acceptable substitutes for 1.50)
Hint: use the toString() method(s) that you developed for Homework 3.
i. You may add any additional instance variables or methods to this class that you
wish.
2. Create a class named Order. This class maintains a list of Coffee objects, and
contains the following fields:
a. A private ArrayList named items that holds Coffee objects.
b. A private int named orderNumber.
c. A public constructor that assigns a new, empty ArrayList to items and
assigns a random integer value (between 1 and 2000) to orderNumber (see the
end of this document for information about Java's Random class).
d. A public method named add() that takes a Coffee object as its argument and
does not return any value. This method adds its argument to the end of the items
ArrayList.
e. A public method named getNumber() that returns the order number. This
method does not take any arguments.
f. A public method named getTotal(). This method returns a double value
and does not take any arguments. This method returns the total price of the items
in the current order, including 8.625% sales tax for Suffolk County.
g. A public toString() method that returns a String and does not take any
arguments. This method should return a String that lists the order number, the
current number of Coffee objects in items, and the total price for the order.
For example, toString() might return a String like the following:
Order #212 3 item(s) $8.25
h. A public method named receipt(). This method does not take any
arguments. It returns a String containing a neatly-formatted order receipt that
includes the following information:
i. The order number, with an appropriate label
ii. The total number of items in the order, with an appropriate label. Note that this
value only includes whole beverages; do not include coffee options as
separate items!
iii.A detailed list of the items in the order (HINT: use Coffee's toString()
method)
iv.The subtotal for the order, with an appropriate label
v. The tax amount, with an appropriate label
vi.The total price of the order, with an appropriate label.
i. You may add any additional instance variables or methods to this class that you
wish.
3. Create a subclass of Coffee named IcedCoffee. An IcedCoffee object is
available in the same sizes as a regular Coffee object, except it costs $0.50 more
for the equivalent size (for example, a large IcedCoffee has a base price of $3.00
instead of $2.50). Be sure to provide the following functionality for your IcedCoffee
class:
a. A public constructor that, like the Coffee constructor, takes a size (a String)
and a regular/decaf value (a boolean) as its arguments.
b. An overridden version of toString() that replaces the word “Coffee” with “Iced
Coffee”. Otherwise, toString() provides exactly the same output as its
superclass version.
4. Using the sample driver , create a menu-based program
that allows the user to perform the following actions:
a. Create a new Order. If there is an order currently in progress, it is discarded/
replaced without warning.
b. Add a new Coffee object to the current order. This option should let the user do
the following:
i. Specify a size for the beverage
ii. Specify whether the beverage is regular or decaffeinated
iii.Add coffee options (cream, sugar, and flavor shots) to the beverage until the
user indicates that he/she is done. (HINT: use a while or do-while loop)
This option is only available if there is a current order.
c. Add a new IcedCoffee object to the order. This option should provide the same
functionality as the preceding option. (HINT: instead of writing the same code
twice, can you avoid duplication by using a method or if statement(s)?)
d. Display the contents of the current order. This option is only available if there is an
order in progress.
e. Place the current order. Selecting this option causes the program to print out the
receipt for the current order, and then delete it (by setting any references to this
Order to null). This option is only available if there is an order in progress.
f. Cancel the current order. Selecting this option causes any references to the
current Order to be set to null. This option is only available if there is an order
in progress.
g. Quit the program.
Each time the menu is displayed, if there is an order currently in progress, a short
summary of the order should be displayed as well (use Order's toString()
method for this).
Here's the sample driver:
import java.util.*; public class ProjectDriver { private static Order myOrder; private static Scanner sc; private static double totalCharge; public static void main (String [] args) { myOrder = null; totalCharge = 0.0; sc = new Scanner(System.in); int userChoice = -1; do { userChoice = displayMenu(); handle(userChoice); } while (userChoice != 0); } private static int displayMenu () { System.out.println("\n\n"); System.out.println("Main Menu\n\n"); System.out.println("1. New order"); System.out.println("2. Add Coffee"); System.out.println("3. Add Add Iced Coffee"); System.out.println("4. Print the current order"); System.out.println("5. Clear the current order"); System.out.println(); System.out.println("0. Exit"); System.out.println(); System.out.print("Please select an option: "); int result = sc.nextInt(); sc.nextLine(); // consume extraneous newline character System.out.println(); // Add an extra line for formatting return result; } private static void handle (int choice) { switch (choice) { case 1: newOrder(); break; case 2: addCoffee(); break; case 3: addIcedCoffee(); break; case 4: printOrder(); break; case 5: resetOrder(); break; } } private static newOrder() { myOrder = new Order(); totalCharge = 0; } private static void addCoffee() { System.out.print("Enter size: "); String size = sc.nextLine(); System.out.print("Decaf? "); String d = sc.nextLine(); Coffee c; if (d.equals("yes")) { c = new Coffee(size, true); } else { c = new Coffee(size, false); } // Add submenu for coffee options int choice = -1; while (choice > 0) { System.out.println("Options\n"); System.out.println("1. Add sugar"); ... if (choice == 1) { c.add(new Sugar()); totalCharge += 0.05; } } myOrder.add(c); } private static void addSugar () { myOrder.add(new Sugar()); totalCharge += 0.05; } private static void addCream () { myOrder.add(new Cream()); totalCharge += 0.10; } private static void addFlavoring () { myOrder.add(new Flavoring()); totalCharge += 0.25; } private static void printOrder () { if (myOrder == null) { System.out.println("NO current order!"); } else { System.out.println(myOrder.receipt()); System.out.println(); } System.out.println(); System.out.println("Total charge so far: " + totalCharge); } private static void resetOrder () { myOrder = new Order(); // Get rid of the current contents totalCharge = 0.0; } }
---------------------------------------------------------------------------
Other classes:
Flavoring.java
import java.util.*; import java.io.*; public class Flavoring extends CoffeeOption { private List<String> flavors = new ArrayList<String>(); private String selectedFlavor; private void loadFlavors() { Scanner sc; try { sc = new Scanner(new File("flavors.txt")); } catch (Exception e) { System.err.println(e); return; } while (sc.hasNextLine()) { flavors.add(sc.nextLine()); } } public Flavoring() { description = "flavor shot"; cost = 0.25; loadFlavors(); Scanner sc = new Scanner(System.in); // Print the menu of flavors System.out.println("Please select a flavor:"); for (int i=0;i<flavors.size();i++) { System.out.println((i+1)+") "+flavors.get(i)); } System.out.print("Your choice? "); selectedFlavor = flavors.get(sc.nextInt()-1); System.out.println(); } public String toString() { return super.toString()+"\n"+selectedFlavor; } }
-------------------------------------------------
Cream.java
public class Cream extends CoffeeOption { public Cream() { description="cream"; cost = 0.10; } }
--------------------------------------------------
Sugar.java
public class Sugar extends CoffeeOption { public Sugar() { description = "sugar"; cost = 0.05; } }
-------------------------------------------------
CoffeeOption.java
public abstract class CoffeeOption { protected double cost; protected String description; public double price() { return cost; } public String toString() { return "add "+description+" "+cost; } | https://www.daniweb.com/programming/software-development/threads/364931/java-homework-help | CC-MAIN-2019-35 | en | refinedweb |
It is used to provide the support to End Users. There are various types of Supports provided by the partners to the customers. Partner (Partner) is a company which provides support services. Ex: IBM, HP, HCL, TCS, WIPRO, SATYAM 1 2 3 4 L1 Support L2 Support L3 Support L 4Support Tier-1 Tier-2 Tier-3 Tier-4 Level-1 Level-2 Level-3 Level-4 Support-1 Support-2 Support-3 Support-4
1. Level-1 Support: It is a front end support or end user support to Reset the Passwords lock and unlock users. GUI related issues. (Installation, patches, upgrade) performance check list activities etc 2. Note: Very less privileges are assigned to the users there will be no input on the system by this users there will be no input on the system by this users i.e. they cant Delete, Drop, Change objects in the system. It is also referred as monitoring Job. 3. Level-2 Support: The reports from the Level-1 consultants along with recommendations are evaluated and ensure that they are resolved. Level-2 consultant handle assignments of roles, Background Jobs rescheduling, data transfers, notes etc.. 4. The issues which could not be resolved can be escalated to Level-3 5. Level-3 Support: Recommend the parameters for Tuning, Support Package application, Scheduling down times, working with SAP, database activities etc. 6. Level-4 Support (Onsite):The consultant works with data center and knowledge of O/S, R/3, DB etc is required for this. Responsible to Start Up the BOX. Responsible for Backup, Restore, Recovery, DR, Standby, Clustering, Mirroring, H/W migration,. UPS aircom etc. Based on the nature of the company the following activities are also segregated. 7. Security 8. Performance
1
9. Data Base 10. 11. 12. O/S Transports Normal BASIS Support
Roles and Responsibilities: 13. 14. 15. 16. 17. Define the Checklist Working on Tickets/Requests/Cases Configuring Work Processes Configuring Buffers Resolving Runtime Issues
18. Working with Dialog Process, Update Process, background Process, Message, Spool and Enqueue Processes 19. 20. 21. 22. 23. 24. 25. 26. 27. 28. 29. Define Background Jobs and Monitoring Monitoring Critical Update Define the Printers Working with SAP Archiving Creating RFC Destinations Define various methods for data transfer Monitor EDI and IDOC Release change request and import target systems Applying patches notes etc Configuring CCMS and monitoring Configuring logon load balancing
2
Configuring operation modes Monitoring the log files Defining and scheduling standard background jobs Monitoring backup regularly Monitoring DATABASE Standard Jobs
OFFSHOR E
RIGHT SHORE
CUSTOME R
GE, COKE, HP
HP: . . 17 Employees
Connectivity from OFFSHORE to CUSTOMER/CLIENT: 35. Leased Line: If the trust is established between the supporting partner and the customer dedicated leased line can be used. (Trust means the data security of the customer). Each leased line is backed up by dialup or another alternative leased line. 36. VPN: (Virtual Private Network). It is a service which establishes a tunnel between the customer and supporting partner. CISCO VPN is widely used. We need to key in USER-ID and Password and Access Key to logon into the Private Network. 37. Remote Access Cards (RAC) are used to generate Random Number (Remote Authentication Key) i.e. generated access key. 38. User Request Mechanism: User request management tools are utilized for this purpose. 39. 40. 41. 42.
USER Request REMEDY
CLARIFY HP Overview Seibel Ticketing Tool SAP CRM SYNERGY Phone CUSTOMID EXCEL Help Desk (Phone, FAX, Email)
Status
New Assigned Work In progress Temp Fix Pending Closed/Completed/ Finished Severity of the Problem:
Description
Raised Problem When the Consultant Accept the Ticket Processing Temporary Fixed, Job Aborted, Run Temporary For Approval Problem is Fixed
S.No
1 2 3 4 5
Severity
Low/Normal Medium High Very High/ Critical Diasster
Response Time
24 Hrs 12 Hrs 8 Hrs 2 Hrs 15 Min
Closure Time
72 Hrs 48Hrs 24 Hrs 4 Hrs 15 MIN
High: Printing, Update, Deactivation, Transport Errors etc Very High: Printers Down, Instance Down, Services are partially inturepted Disaster: Server Down
rdisp/wp_no_vb= Defining the number of update W.Ps rdisp/wp_no_vb2= Defining the number of update W.Ps (secondary) rdisp/wp_no_enq=1 Defining the number of enqueue W.Ps rdisp/wp_no_spo= Defining the number of spool W.Ps rdisp/max_wprun_time= 600-1800 sec this defines the maximum time that a (Dialog)W.P can run . DIAG Protocol Roll In Roll Out
USER
Central Instance
Dispatche QUEU r E W0,W1 Wn U SCREEN C ABAP SQL R/3 Buffers DBCL DATA BASE
43. 44.
Handles the user Request interactively At least Two dialog processes are required
45. Each Dialog W>P requires around 75MB-150 MB of memory on an average (Memory varies between the systems)
7
46. Every Dialog W>P an handle 5 to 10 users 9Depending upon the type of users (Lo, Medium, High)) 47. Each Dialog W.P will be timed out for every 600 eseconds or the time specified by parameter rdisp/max_wprun_time
Note: The parameter rdisp/max_wprun_time is instance specific and will take default value of 600 if not changed. Each request will be handled for 600 seconds. Otherwise the request will be timed out.
Each Dialog W.Pis not restricted to a user session Each process can serve multiple users Each process can serve only a part of a Transaction
Dialog Steps:
It is a part of a transaction. It can be also called as a sub transaction.
51. 52.
Go to SM50 to display the list of processes configured on that instance. It is used to display the following: 1. No: Serial Number of Work Processes (W0, W1, Wn-1) 2. Type: It shows the type of work process. It can be any one of DVEBMGS 3. Process ID: (PID) It represents a process Id at O/S level. This is used to identify the critical process running at O/S level and to take a decision whether to continue or Kill the W.P. 4. Status: It shows the status of the W.P a. Running: The W.P is executing the user task until it complete the task or timed out. It written in the status of running
b. Waiting: It is waiting for the user request they are free t handle the task assigned by the dispatcher c. Holding: The process is on hold. (It is also running) and waiting for the communication from an RFC system. d. Terminated or Stopped: The process is terminated due to an error, time out after 600 seconds, explicitly killed. e. Ended: The W.P is ended i.e. it could not be started and it cant handle any user request 5. Reason: The reason for the status. For running and hold press F1 to display the various reasons
53.
Sleep Mode: Its waiting for the resources on the target system
54. Private Mode: It is dedicated to a user. Time out will not work for this process 55. Enqueue: Communicating with enqueue process.
6. START: It ensures the W.P to start during W.P termination or Restart. It can be change from menu process restart after error Yes or No If it is set to No it cant be started to handle the Queue. But its useful to debug, why the process has been terminated or stopped. 7. Err( Error): Indicates the number of times the process is terminated 8. Semaphore: It indicates the number of the semaphore which blocked the W.P i.e. each W.P needs to work at O/S level and gets blocked for the various resources. There are 55 semaphores which are displayed by pressing F1 9. CPU: Click on CPU to display the time utilized by the W.P while accessing CPU. It is also reffered as CPU time. 10. Time: The time spent by the dialog W.P to execute the current dialog step of a transaction. If it goes beyond 600 seconds it will be terminated automatically. 11. Report: The report which is executed by the process 12. Client: The client NO through which user logged in 13. User: Name of the user who is executing the process
14. Action: Specify the action performed by the W.P. Ex: Logical Read, Sequential Read, Physical read, Roll In, Roll Out etc
Monitoring:
1. SM50: It is used to monitor the status of various W.Ps on an instance. 2. SM66: It is used to display the W.P of all instances. It is used to monitor the time consuming W.Ps with respect to users, report, reason for the long running and the type of action on the database along with the time consuming for that dialog step. 3. As part of the checklist you need to identify the total number of work processes with various statistics and mark them with red which are consuming lot of time. The reason for this is also very important. (Sleep Mode, PROID, CPIC.) to identify the expensive W.P. 4. If all the W.Ps are in the status of running we can assume that the system is over loaded due to lack of memory or the users are overloading the system i.e. more than the expected. (The system designed for 200 users but it is being utilized by 300 users) 5. We can also kill the expensive W.P from SM50 (Inform the user before killing the W.P). Select the user W.P Go to Menu Process Cancel with core or Without Core WITH CORE: Means it will generate a trace file at O/S level. WITHOUT CORE: Means no trace file generated. Double click on the W.P to check the details of the W.P SM50 is also used to change the layout. Go to Menu Settings Layout We can customize the layout as per our requirement. Note: Sometimes it may be recommended to end the user session instead of killing user W.P. ./kill -9 (In UNIX)
Dpmon:
It is used to monitor the status of W.P at O/S level. If the system is congested and user can not log on to GUI then use dpmon at o/s level. It displays the list similar to SM50. We can select the process which is time consuming and use the option kill with core or without core. If we cant kill identify the Project ID and kill at O/S level 10
Note: Dialog Process is used to handle the user request to schedule the job in the background to update the database to print the requests and to get the logs before updating a record. Disadvantages: Dialog process cant be used to run the long running, time consuming expensive programs or Reports.
11
Background Process:
56. The time consuming, expensive, long running programs can be scheduled in the background to run during Off Peak hours without user intervention. 57. Background jobs running in Non Interactive mode and doesnt require any user inputs 58. The max work process runtime is not applicable to background W.P i.e. they can run for any number of hours. 59. Background process doesnt handle part of the transaction but in time the complete transaction is handled by them. 60. 61. Background jobs are created by using Dialog W.Ps Background jobs are defined in the transaction SM36.
62. Go to SM36 Specify Job Name Specify the Job Class Specify the job triggering mechanism (Immediate/Date & Time/Event/After Job/ At Operation Modeetc) Save the job definition
Job Class:
Class-A: It is used to define high priority jobs. We need a dedicated process of Type A which is defined in operation mode. Class A job will be executed by only Class A work processes. Dont schedule more Class-A jobs unless it has a dedicated work process at that point of time. Class-B: It is used to handle medium priority jobs i.e. system defined jobs like SAP standard housekeeping jobs which runs periodically at regular intervals. Class-C: It is the default class for all the Jobs. It is used to schedule low priority jobs.
4. Active: The job is currently running 5. Completed: the job is completed successfully 6. Canceled: The job is canceled i.e. an error occurred
Background job Mechanism: When the dialog user defines the run a job in the background it is entered into the tables TBTCT and TBTCS Background processing time scheduled in table ( TBTCS) Compare value for batch job selection TBTCT Background Job Scheduler: SAPMSSY2 is the ABAP program that runs every rdisp/btctime It runs for every 60 seconds or the time specified by the parameter rdisp/btctime= It checks for the jobs in the ready state and brings them into background job queue. It runs in the dialog process. 1. User logon to the system to schedule a report or program in the background mode 2. It is stored in the tables TBTCT, TBTCS 3. A background job scheduler runs for every 60 seconds in the dialog mode to pick the background jobs 4. If the job is picked and ready to execute the status is set to ready 6. When the background process is assigned status is ACTIVE 7. Canceled when the job is not complete. Background Job Steps: Background Job can be defined by using an ABAP program, External Program and External Commands 1. ABAP Program: It is a standard program or custom defined program which will be executed using Variant Variant: It is a program selection criterion to provide the inputs during run time or execution of the program 13
Ex: Delete the background jobs for every two days 9The jobs which are terminated or completed successfully) Delete the old log files for every 3 days or delete the log files which are older than 2 days.
Note: Variants are stored in the table TVARV (Table of variable in selection criteria)
2. External Program: It is used to define the program to trigger on the host wwith the specified parameter Ex: R3trans, SAPstart, SAPEVT,
This type of job step allows you to run programs outside the SAP System. External programs are unrestricted, directly entered commands reserved for system administrators.
3. External Command: These are the commands which are not specific to O/S Ex: brbackup, brrestore, brconnect etc These commands are defined I transaction SM69 and executed through SM49.
This type of job step allows you to run programs outside the SAP System. External commands are predefined, authorization-protected commands for end users. The type of external command and external program is unrestricted, meaning that you can use either compiled programs or scripts. Such programs can be run on any computer that can be reached from the SAP System. Parameter passing to non-SAP programs is completely unrestricted except by the predefinition mechanism for external commands. Output of non-SAP programs, particularly error messages, is included in the job's log file. Specifications required for an external command or program is: o o External command + Type of operating system + (Parameters) + Target host system External program + Parameters + Target host system
14
External commands
External commands are predefined commands for end users. They are operating-system independent and are protected by authorizations, so that normal end users can schedule only those commands that the system administrator permits them to. With an external command, an ordinary end userany user without background processing administrator authorizationmay run a host system command or program that has been predefined.
15
External programs give an administratora.
16
6. RSM13003: To delete old update requests Select all the required jobs to schedule or click on Default Scheduling to schedule as per SAP norms.
17
a. The database space is not enough and results in errors (ora- 1653, 1654, 1631, 1632, 255, 272 .) b. Update process deactivated 4. ABAP dumps due to programmatically errors (S-Note, Support Packages, patches, Kernel upgrade) 5. Third party tools 6. SAP background mechanism is not so intelligent to work based on multiple conditions a. Maestro Toll b. Tidal Tool c. SAP Job Scheduler d. Other IBM products (Tivoli) Note: In order to work with above tools customer provides adequate training to the consultants SM62: It is used to display SAP events which will be triggered in the background by using SAPEVT SM64: To trigger the event in the background Reorganizing Background Jobs: Schedule report RSBTCDEL to delete the old background jobs based on outdated variants Background Job scheduling will be done in the following Way:
18
19
Authorizations for accessing background processing jobs can be set up for two types of users: administrators and end users.
A users jobs are defined and run in the users current logon client, regardless of whether the users background processing authorizations are set for user or for administrator.
Setting Up Authorizations
Administrator authorization setup requires the following authorization objects:
Authorization Object S_BTCH_ADM (Batch Processing: Batch Administrator) Allows all of the activities listed above except for maintaining external command definitions. No default profile with ONLY this authorization is currently shipped with SAP, but the standard SAP_ALL profile contains this authorization. S_RZL_ADM (CC Control Center: System Administration) Allows an administrator to maintain external command definitions and to trigger commands from the external command function (Transactions SM49 and SM69). S_BTCH_JOB (Batch Processing: Operations on Batch Jobs) Allows an administrator to view job-generated spool requests. To protect sensitive data, this authorization is not included in the standard administrator authorization. S_DEVELOP (ABAP Workbench) Allows an administrator to capture and debug background jobs by providing access to ABAP debugging tools
Value Y
01
LIST
User authorization setup beyond job scheduling and status checking requires the following authorization objects:
Authorization Object S_BTCH_JOB (Batch Processing: Operations on Batch Jobs) Allows all of the activities listed above except for maintaining external command definitions. No default profile with ONLY this authorization is currently shipped with SAP, but the standard SAP_ALL profile contains this authorization.
Value(s) DELE (delete other users jobs) LIST (display spool requests) PLAN (copy other users jobs) PROT (display anyones job log) SHOW (display job details) RELE (release other users jobs to start; a users own jobs are automatically released when scheduled.) permissible users
S_BTCH_NAM (Batch Processing: Batch User) Allows a user to specify other users for runtime authorization for a job. S_LOG_COM (Authorization to Execute Logical Operating System Commands) Allows a user to run external commands.
20
S_ADMI_FCD (System Authorizations) For special functions, such as debugging active jobs. For complete information, see authorization object documentation from Transaction SU21.
21
SAP Transaction:
A transaction is defined as a sequence of dynpros (sap term for screens) having input and output fields and corresponding processing logic behind them to perform a particular task. Every transaction has a 4 or more character code assigned to it. To invoke the transaction the user needs to enter this transaction code in the command window. This takes the control to the first screen of the transaction.
It consists of multiple transactions which are handled by different dialog W.P. Each transaction is a logical unit of work (L.U.W) in the database. Each L.U.W is a transaction which can be committed or rolled back
DB
Committed
22
The bundling technique for database changes within an SAP LUW ensures that you can still reverse them. It also means that you can distribute a transaction across more than one work process, and even across more than one R/3 System.
23
Update Mechanism:
63. Whenever a user wants to update or create a transaction logs into the system using dialog process 64. 65. User logs into the Database/ R/3 system using SAPGUI User request is received by dispatcher and keep in the queue
66. Whenever a free work process is available dispatcher assign it to the work process 67. Work process rolls the user context to task handler
68. Dialog user initiates a update transaction like sales order, purchase order, invoice, billing. (Let us say modify/ change) 69. If the request is related to the update it communicates with Enqueue process to issue lock to update the records. Time= 1 millisecond. If the request is from a Dialog instance, dialog work process communicates with message server in the central instance and message server to communicate with enqueue process to issue the lock. This entire process should be completed within 100 milliseconds 70. As a transaction consist of multiple dialog steps they are updated into temporary tables called as VB* tables. These tables are updated by the dialog work process 71. Update process reads the temporary tables to update the database based on Transaction Id.
Dialog process updates each dialog step task in temporary tables. These tables are called as VB* tables. These are VB* tables and the tables contains: 1. VBHDR: Update header information is stored in this 2. VBMOD: Update module information 3. VBDATA: Data tables of update process
24
4. VBERR: Errors occurred during the update process 5. VBLOG: Update log files 10. Upon the successful update (Temp Table) a transaction Id is generated 11. Transaction Id is generated form NRIV table (Number range Intervals)_ 12. Update process gets initiated, reads the temporary tables and updates the database synchronously based on Transaction Id. Types of Updates: 1. Local Update 2. Synchronous Update 3. Asynchronous Update 1. Local Update: Dialog process updates the database directly (System tables, direct update tables, users etc) 2. Synchronous Update: Update process reads the temporary tables and updates the database synchronously 3. Asynchronous Update: The process of updating temporary tables by a dialog process is referred as asynchronous update Types of Update Processes: 1. V1 Update 2. V2 update 3. V3 updae V1 Update: It handles the update with high priority V2 update: It handles the update with low priority V3 update: Reserved by SAP Define V1, V2 updates properly to ensure that update mechanism works properly. Note: There should be at least one V1 update to handle the updates There should be at least one V1 update process defined for every 5 Dialog processes. 25
If V2 is not defined then V1 handles the V2 request The V1 and V2 mechanism is defined in the update programs defined by SAP. SAP never recommends custom updates on the standard tables SE12: Display the database tables Note: The update process inherit the locks from dialog process Update Monitoring: Updates are monitored in Transaction SM13 Go to SM13 to display the records based on client, user, from date and To date and status. The records are displayed with the following status: Init: Update is getting initited to update the database Run: Update running Auto: If the update is cancelled due to any reasons it will be set to automatic update once the problem is solved Err: The update process is thrown in to error Update Errors: 1. There is no space on the database (errors are prompted with message Err1653, 1654, 1631, 1632, 255, 272 and so on) 2. there is a problem in the program which can be fixed by applying note or support package 3. Number range problem: Cannot insert duplicate records. During the above problems we can set the system to deactivate the complete update mechanism to keep the system consistent 4. After resolving the above issues we need to manually activate the update mechanism to update the records. The records with errors state will turn into Auto Status 5. There are some records where the error message says that error (Couldnt repeat the update) that means these records cant be updated again 6. If there is no such error we can select the record and repeat the update 7. Do not delete the update records. 26
Use the following transaction to get the granular formation about the update failure SM21, SM37, ST22, SM13, SM50, SM66 SM14: It is used to deactivate and activate the update manually and we can fix problems manually Update Parameters: 1) rdisp/vb_stop_active: Set to 0 so thatupdate can be deactivated. If the value is set to be 1 update cant be deactivated 2) rdisp/vbdelete: This parameter is used to delete the old update requests based on number of days which are older than 50 days it will delete the default 50 days 3) rdisp/vbmail: It is used to send an email if update throws an error which can be viewed in SBWP (SAP busiess work place) based on yuour user. Will be set to either 0 or 1 4) rdisp/vbname: Name of the server where updatesa are processes 5) rdisp/vbreorg: used to delete the incomplete update requests. 1- Delete, 0- No We can also schedule a background job RSM13002. But it will delete the update request which are in completed. Alternatively use rdisp/vbreorg set to 1 so that it will be deleted after restarting. 6) rdisp/vb_delete_after_execution: Its used to delete the delete update requests soon after the execution of the update. Set it o 1 to delete the record or 2 to the record will not be deleted. It is set to 1 the background job RSM13002 is not required, if not schedule periodically daily during the off peak hours. Update Advantages: 1. Database consistency 2. User is not waiting for the status of update in database 3. User updates i.e. dialog updates temp tables asynchronously 4. Update process reads the data from temp tables and update the database synchronously.
27
Number Assignment: During the implementation number range intervals are defined in the table INRIV. The numbers are buffered and assigned to the transactions when they are committed. The numbers are buffered and assigned to the transactions when they are committed. The update process updates the database with same transaction number. That is the reason we need to monitor updates continuously. Update Problems: 1. Less number of work processes configured 2. The number queue increases and more updates are init state Resolution: Try to find out the status of other back ground job which are updating the data base. The update is consuming more time to update the database, the update queue increases. If it is a generic problem try to resolve it. If it is a regular problem consider increasing update process based on the availability of resources 3. Check if the update mechanism is deactivated (SM14). Go to SM14 check the status of update mechanism if it is deactivated check the system log (SM21) Note: Update can be deactivated and activated manually in SM14 4. Programmatical Errors: There is a programatical for which the update is thrown into error state. Refer the problem to development team, it is a customizing program. If the problem popped up after applying support package and patches refer to sap notes for a consult or else write to SAP. 5. Table Space Overflow: When the table gets overflowed we could not update the database. Increase the table space and rerun the update. Note: Update work with enqueue process to obtain and inherit the locks.
28
Enqueue Process: It is used to communicate to obtain a lock while updating a record. It is completely different with database locks. Database locks are only at DB level, where as enqueue locks hold the transaction in large.
DI
MESSAGE SERVER
Enqueue Mechanism: 1. User requests for an update. Dialog process communicates with wnqueue to hold lock on that record. (If the request is to the central instance). 2. If the request is coming from dialog instance dialog process communicates with message server on the central instance and the message server communicates with wenqueue to get the lock and issue to dialog process. 3. The enqueue locks are issued from the shared memory of the central instance which are displayed in transaction SM12 4. The use update the record in temp tables and locks will be inherited to the update process till the final update into permanent tables in the database. 5. The enqueue time will be 1 millisecond to 5 milliseconds on central instance, where as it is 100 milliseconds for the requests that are coming from dialog instance.
29
6. There will be only one enqueue process in most of the environments. It is also possible to configure more than one enqueue process but ensure that all the processes shares the same lock table. 7. Enqueue locks are monitored in transaction SM12. 8. Enqueue displayed based on table name, client and user name. It displays lock arguments, time and the table. 9. No lock should be older than 24 hrs. If long pending locks are displayed we nedd to evaluate clearly. Enqueue Problems:
72.
73. Enqueue lock table resides on shared memory of the central instance 74. The lock table size is configured by parameter enqueue/table_size= 4-100 MB. By default it is 4 MB in size which can be increased up to 100 MB 75. When the lock table overflow the error message recorded in SM21, ST22 76. If the update is processing the records and releasing the records in time. If not the lock table will be filled and you can not issue any locks. We can use the parameters to increase the enqueue size. 77. Enqueue time increases (4 or 100 Milliseconds) i.e. enqueue could not process the locks with in time or in any massive update system, the enqueue process alone cannot serve the reuquests in this scenario. We can increase enqueue work process by using process 78. Rdisp/wp_no_enq= 0 to 100. Increase the enqueue process on the same server where the earlier enqueue processes are defined 79. Dead Locks: If the object required by one user is locked by another user and simultaneously the object required by the other user is locked by this user then there is a dead lock. But mixture of programs and SAP programs makes a way to a deadlock. In a dead lock situation either one of the user has t log off.
Releasing Locks: Rlease the locks only with the permission of the user. Te permission should be in black and white (Email or signed document). 30
80.
81. Get the detail of the user from SM12 and communicate with the user to release the lock 82. If the user is not in the office communicate over mobile (Verbal) and send a mail (As per the conversation in releasing the lock.). 83. Send the mail cc to project manager, team leader etc..
84. Get the approval to log off the user session in SM04 and release the lock.
Note: Enqueue and Update work together.
31
Spool Management:
Its only the process which is used to output the documents to the printers, fax. Spool Process work Flow: Dialog process or background process creates a spool request i.e. to print the documents. Ex: Dialog process use to print an individual pay slip, sales order, purchase order invoice etc Background processes use to run the pay roll to generate pay slips for all the employees. To print delivery orders in batch (bunch) invoices etc When the print order is specified by the user or background work process the spool request is stored in database or at O/S level in the global directory. The storage location is specified by the parameter rspo/store_location. This parameter has two values Global-G and database- DB These spool requests are also referred as TEMSE. (Temporary sequential objects). These are stored in the location rspo/store_location parameter. TEMSE is nothing but spool requests. Spool process reads Temse and generates output requests. G- Means it is stored in global directory \usr\sap\SID\sys\global DB: Means it is stored in Database tables TST01 and TST03 TST01: It stores the objects and details of the spool requests such as name of the Author, Number of copies, name of the printer etc.. TST03: It contains spool data to be printed
DIALOG
Spool Request
32
G (\usr\sap\sid\sys\global) TEMSE Parameter G: The Temse resides at O/S level. It will be faster to access than database (DB). When the spool size is small (around 300 MB) and If the spool size increases it will be difficult to locate them to print . O/S memory is used. Dedicated space needs to be taken to store the backup of Temse. Parameter DB: Time consuming to wrote into DB. But with the help of indexes it can be printed out fast. No special care for backup is required because its backed up along with normal database. O/S memory is not required. SA38: RSOSR002 is the report used for deleting the old requests. Advantages of Temse:
85. 86.
The out of the spool request can be viewed before it is pronted. Spool process reads Temse and generates out put requests.
87. Output requests are depends upon the access method. Access Method specifies the type of access to the printer. 88. There are various access methods
Access Methods:
89. Local Access Method: The spool process and the host spool (Printer Spool) reside in the same system. Access method type L is used for unix operating systems. C is used for windows which makes a direct call to the host spooler. 90. Note: The printer can be remote or local
91. Remote Access Method: The spool process host spooler resies on two different machines. Access method U based on unix barkle protocol. User for unix os access method. S sup protocol used for windows. 92. Front End printer: Access Method F. The printers ae connected to end user desktop do not configure too many front end printers because the resources will be blocked by the user.
33
93. 94.
USER F Printer Sap R/3
Its also not used for scheduling background jobs because the interactive inputs.
Defining a Printer: Go to transaction SPAD (Spool Administration) Click O/P device Click on change Click on create Specify the output device name. Output device name should be meaningful to identify the location and type of printer. Specify the short name Define model, location, and message.
95. Device Type: Specify the type of the output device. Name of manufacturer most of the device types are available in the SAP system. But new printers which are released after the release of SAP component we need to get the new device types.
Go to SPAD Utilities for device types Import.
96. Spool Servers: The server with at least one spool process is called spool server. Spool server can be logical or real server. Spool servers are created in SPAD 97. Go to SPAD Click on Spool Server Click on create Specify the spool server name Specify server class specify device class (Standard printer) Authorization group 98. Note: Specify the group so that only the group assigned can access the printer
It is defined for fail over and load balancing between printers. USER
LGS1
(Logical Server)
Real Spool Server Click on the Access Method: Host spool access method C for Windows NT L for UNIX C and L are local access methods U and S for remote access method U for UNIX S for Windows NT F for front end printing
Note: Dont configure too many front end printers. If configured spool congestion occurs. To avoid spool congestion configure the parameter rdisp/wp_no_spo_fro_max=2 This parameter allows using the work process for the front end printing, let us say if we have 10 processes only 2 can be used for front end. Specify the name of the printer Destination Host: Name of the host where the printers are configured. Check Box: Dont query host spooler for output status. Each work process goes to the printer and gets the status of the printing. If this box is not checked the spool process are busy getting the status. Output Attributes: 35
To print the cover page Author Name Number of copies Name of the printer
103. Process request sequentially. Used to print the documents s sequentially. If numbering is important select this option. If not deselect the check box
Note: If the device types are not available select SWIN a default device type which will run a suplpd (Line print daemon). Suplpd is a protocol to print by default, if the device types are not available.
Spool Monitoring: Go to SP01 Specify the User Name, Date and Time to display the list of spool requests. The following statistics of the spool requests are displayed. 104. -: The request has not been sent to the host system or the output request doesnt exist. 105. +: Spool request generated stored in Temse
106. Waiting: Spool request is waiting to be processed by a spool work process. 107. In Process: The spool process is generating the output request based on spool request. 108. Printing: Printer is printing the request. The status will be displayed approximately 1 min and status sets to either Complete or Error. 109. Completed: The job might be completed but assume that handling over the print job to the printer is completed. The printer might not be printed. 110. Problem: The output request printed but contains mirror errors such as page format, character set etc. 111. Error: The request has not printed.
36
Handling Spool Requests: Go to SP01 and select the spool requests which are thrown into errors. However we may need to act based on user requests such as the status shown completed but the documents could not be completed. Waiting Status: These requests are waiting in the waiting status for more than one hour Conclusion: Spool process is busy in handling the spool requests or the spool work process is not sufficient to handle the request. User complaints that the spool request could not be generated: 112. The printer is not available or locked in the system.
113. By default SAP allows 32,000 requests to be stored in Temse we can increase that up to 99,000. We cannot increase beyond 99,000 because the maximum spool number is 99,000. Rspo/spool_id/max_number=32,000 to 99,000 Schedule the following background jobs in SA38: RSPO0041 (or) RSPO1041. This above reports deletes the old spool requests based on status. Schedule the reports RSPO0043 or RSPO1043 to check the consistency of the spool periodically. Spool Problems: 114. 115. 116. 117. The printer is not available (Printer power off, No network etc) Paper out. No paper, Paper Jam Cartridge or toner out Printer problem
Go to SP02. This will provide to all the users to display their own spool requests. If the spool request is thrown in to error to a particular printer then select the printer and print select the printer with change parameter
37
SPIC: Spool installation check. This is used to check the spool device and pending requests along with consolidated problems and warnings. SP12: Temse administration. It is used to check the memory allocation objects and perform the Temse consistency check
Reque st Dialog Dialog
LGS
Local Access
Remote Access
Front End
L/C
U/S
38
Data Transfer:
Interface
BW SAP SYSTEM
SMS
The data has to be transferred in to SAP system in the following situations. 118. During implementation to transfer the data from legacy systems to SAP system. To test with live data. 119. 120. Migration of legacy system DB to R/3 system Data transfer during parallel run
Parallel Run: It is an activity where both SAP and Non-SAP systems run parallel. The data entered in NON SAP system will be transferred to SAP system periodically during Off Peak hours without any user intervention. Periodic data transfer from suppliers vendors etc. In order to communicate with vendors back end systems we need to define RFC connections.
39
Remote Function Call: There are various types of RFCs which are used to transfer the data. 121. 122. 123. 124. Asynchronous RFC (ARFC) Synchronous RFC (SRFC) Transactional RFC ( TRFC) Queued RFC (QRFC)
ARFC: Does not check for the acknowledgment from the target system. These are not reliable because there is no confirmation from the target system. SRFC: It communicates synchronously with target system and ensures that data is transferred. During the data transfer the process may go into sleep mode or CPIC mode. CPIC Mode: Common programming interface communication. It is an SAP proprietary protocol to communicate between systems. TRFC: It is similar to asynchronous but a transaction Id is created and is monitored in transaction SM58. A background job RSARFCSE is scheduled for every 60 seconds to check for the transaction Ids in the transaction SM58 QRFC: It is similar to TRFC and it ensures that the transactions are processed in the same sequence they entered in to the queue. Defining a RFC connection: 125. Go to SM59 Click on create Define the RFC destination name. The name should be able distinguish between the connectivity. 126. Define the connection Type: 3 for connection to R/3 system 2 for connection to R/2 system 127. 128. Give description Go to Technical Settings
40
Specify the system number Specify the system number Specify the Gateway options Gateway host Gate way sapgw<instance number> Click on logon details Save Click on test connection.
If the specified user is a dialog you can click on remote login to check the connectivity. ALE: Application Linking and Enabling It is used to communicate between two loosely coupled systems. Use transaction SALE to define systems.
Logical System: Logical systems are used to identify the client uniquely in the landscape. As client is identified by a number we need to assign logical system between <SID>CLNT000 Ex: DEVCLNT000, QASCLNT000, PRDCLNT000 BAPI: Business Application Programming Interface It is used to communicate between sending system and receiving system based on interface and method. Select the Model View Select the Send Client Select the Receiver/Server OBJ Name Method C / Interface Simulator
41
Add message type and specify the message type. Note: Most of the data transfer methods are defined by functional consultants during implementation. Ex: Central user administration uses ALE mechanism to transfer the data between clients EDI: Electronic Data Interchange It is used to exchange the data between SAP and NON SAP systems. 137. 138. SAP system needs to understand NON SAP systems. Non SAP systems needs to understand SAP systems language
In order to understand the language between systems IDOCS are implemented. IDOC: Idoc is an intermediate document which is in the understandable format by both the systems. Ex: Customer is having VB_SQL_Server based system. Where as SAP is based on ABAP language Customer sends purchase order through VB system, which is converted into IDOC and sends to SAP system. SAP system sends invoice in the native format which is converted to IDOC and sends to customer VB system. Note: This mechanism is defined by functional and technical consultants. BASIS consultants monitor the flow of the IDOC. IDOCs are monitored through IDOC and WE05 The sending system documents are out bound documents in the sending system. The receiving system documents are inbound documents from the sending system. Go to WE05 select the date and time to display the IDOCs with various statuses 00-49: Out Bound 50 and above: Inbound
42
Some of the states is 51 document not posted. 64- IDOC ready to transferred to application . QRFC Monitor: GO to SMQR is used to monitor the QRFCs. SMQ1: Is used for out bound queues SMQ2: Is used for inbound queues SM58: Is used for monitor the transactional RFCs based on transaction Ids Data Transfer Methods: 139. LSMW: Legacy system migration work bench: It is used to transfer the data from NON SAP system to SAP R/3 systems. This is used during implementation and mostly one time activity. 140. Process: Identify the data which needs to be transferred from the legacy systems. Pause truncate and PUDD the data if required i.e. mapping between the source data and receiver data. 141. Ex: Char (50) is source but an receiver char (40) so we may need to truncate the source by 10 characters. If receivers is char(60) se ma need to padd the data. 142. 143. Session Method: This also programmed by developer but this can be used for periodic data transfer. This familiarize with error message handling 144. 145. Errors during Data Transfer: Source /target system not available
146. Document problems (Document is not readable, Junk character, Permissions, document not found, document is too old) 147. RFC erros lik t ID. RFC: Error like USERID, Password, USER ID not active 148. WE05 IDoc expenses.
43
Instance:
Instance provides a set of services, work processes, Buffer-Areas to an application instance is controlled by various parameters i.e. start up parameters, instance parameters and default parameters. These parameters describe the characteristics of an instance. Instance mainly depends on memory resources based on available memory. We need to configure the parameters. These parameters reside in usr\sap\sys\profile directory. 149. Default Profile: It provides default parameters for all the instances in the R/3 system. Some of the parameters that can be configured globally are as follows: 150. login/system_client=999 this determines the default client for all the users 151. zcsa/system_language= To specify the language during log on
152. login/* : All the login parameters to control password, user id etc. This can be modified based on requirement by administrators. But it requires a restart of the instance to take effect. Its naming convention is default.pfl. 153. Start Up Profile: It is used to start the instance. It is recommended not to change any parameters in this profile. Your instance may not start if any changes are made to this profile. Changes are allowed only when there is a change in Host names or SID. Its naming convention will be as follows: 154. 155. Start_DEVBMGS<nr>_Hostname (Central Instance) Start_D<nr>_Host name (Dialog instance)
156. Sapcpe.exe : It is used to communicate with database when a dialog instance is installed. It is also initiated when the central instance and database instance installed separately. 157. 158.
159.
160. IT consists of the instance specific parameters like work processes, buffers. Go to the table TPFYPRODTY to display the instance specific properties grouped by dispatcher, ABAP etc. Some of the parameters are as follows: 161. 162. 163. 164. 165. 166. 167. rdisp/wp_no_dia rdisp/wp_no_btc rdisp/wp_no_vb1 rdisp/wp_no_vb2 rdisp/wp_no_spo rdisp/wp_no_enq rdisp/max_wprun_time=600-1800
168. All ST02 transaction like abap/buffersize. If the field dynamic is set to X the parameter can be changed or in RZ11. If there is an option change value we change that parameters dynamically without restarting server.
Profile management:
Go to RZ10 for static profiles and RZ11 for dynamic profiles: 169. 170. Go to RZ10 to import profile from O/S level to database. So that parameters are maintained at Database level and consistency between the required and threshold value is checked. 171. Ex: Work process should not be configured more than 100 where as this is allowed at O/S level but Database level it gives warnings. 172. Table TPFRT is used to store the parameter values along with versions. 173. Administrative Data: Which will gives you the path of each profile. Do not change this until there is change in path of the profiles
45
174. BASIS Maintenance: This is used by technical team where maintenance is performed without knowing the parameter names. You can toggle between the values by increasing and decreasing the values. It is used to maintain work process buffers, memory management. 175. Extended Maintenance: It is used to change the parameters based on parameter names. It is used by experts and ensures that necessary care is taken while modifying parameters. Note that your instance may not start due to change parameters (Wrong parameter). 176. Go to RZ11 and get the documentation of the parameter before you make any changes to the parameters 177. Go to RZ10 select the profile to be maintained. Let us say instance profile and select Radio Button for extended maintenance. Click on create parameters. Specify the parameter and its value click on copy. Go back and save and activate profile. It will request you to restart the server for the parameter to get effected. 178. The existing profile in SYS/Profile will be renamed to .bak and profile is copied from database to O/S level. Restart the server.
46
Operation Modes:
Operation modes are used to define system optimally by utilizing the resources during the peak hours and off peak hours. Operation modes toggles between the day mode and Night Mode (Peak and Off peak) by utilizing the work process optimally. Defining the operation mode is done in 2 levels:
179. Defining the Operation Mode: Go to RZ04 Define operation Mode SAP System name, Operation mode Click on Instance/ Operation Mode 180. Assigning the Time interval: Go to SM63 Select the interval and assign the operation modes
Purpose of Operation Modes: Operation modes are used to utilize the dialog process during day time and background process during the off peak hours i.e. we may not require dialog during off peak hours. We may require more BTC during off peak. We can dynamically switch between the processes without restarting the server. When opmode switch occurs it is resulted in SM21. Dont configure too many modes. Note: If a background job is running, during operation mode switch it is allowed to continue t run after completing the job the operation mode switch occurs for this background work process. Select * from SAPDEV table name where paraname like rdisp%
47
181. 182.
L D.I D.I
183.
G 1
End User
C.I
DB
L G 2 Message Server
D.I
D.I
Logon Groups: These are used to define load balancing mechanism between the instances. These are used for logon load balancing\fail over between the instances. These are used for optimal utilization of buffers. Defining logon Groups: Go to SMLG Define log on groups and assign instance Logon groups are defined based on geographical locations based on application models. Message server keeps the list of all logon instances and displays the favorite computer server by calculating answer time and think time. Mechanism:
48
184.
185. System communicated with message server based on sapmsg.ini and communicate with 3600+ instance number through entry in etc\services i.e. you need to maintain all the user desktops with the above two entries ( sapmsg.ini and etc\services) 186. Message server keeps the info of favorite server and route the request to that instances Advantages: 187. 188. 189. Load balancing Fail over Effective utilization of buffers and system resources
190. Performance improvement similarly logon server groups are used for RFC communication RFC Server Groups: These are used by RFC users to identify the least loaded server and route the request. Go to RZ12 Define the server group and assign the instance. We can specify various conditions i.e. number of logons, maximum number of work processes, maximum wait time etc. These are mainly used for background job processing. These are used for optimally utilization of resources so that background processes are utilized effectively
49
System Monitoring
It is used to monitor system health on a periodic schedule to avoid the last minute surprises and accidental growth or utilization of resources abnormally. SM51: To identify the types of process configured and the status of the instances. As per our checklist we need to count the servers and ensure that all the active servers are running Click on the release notes to identify the R/3 Kernel, DB Kernel, O/S kernel and support package information Select the systems and use option Goto to display various properties of the instance RZ03: Ensure that this transaction is locked in the production system. It is used to stop the instances to toggle between the operation modes. Note: Do not stop the instances using RZ03 SM21: System logs It is used to display the logs based on instance. Go to SM21 display the logs based on Date, Time, User and Transaction Code. The messages are displayed with various colors. SM21 displays error message, warnings and messages 1. Max time out reached 2. Oracle errors (ORA 1631, 1632, 1653, 1654, 255, 273, 1555) 3. Update deactivation 4. Work start up and shut down 5. Operation Mode switch 6. User distribution 7. O/S errors 8. ABAP Dumps
50
9. Background job errors 10. Number range intervals Mostly it records all the important activities. We need to look in to the errors that are displayed in RED color and light Red color Analysis: Click on the error message. Get the error message. Identify the uses and check with the user. If any abnormalities are found get the error message and check out in market place. ST22: ABAP Dumps SAP system is built on ABAP language. So it executes based on ABAP programs. If any one of the program could not be executed it will thrown into errors and recorded in SM22 and SM21. The programs can be thrown in to dump under the following circumstances 1. Time out Error: Schedule in the background or fine tune the program by restarting selection criteria 2. Data Base Errors: ORA 1631, 1632, 1653, 1654, 255, 272, 1555 3. PXA Errors/ Buffers: Space is not enough to store the content 4. Memory Errors: When memory to execute the program is not sufficient it will be thrown into error 5. Program Bugs: Which can be fixed by applying notes and support packages 6. Kernel Mismatch: Upgrade Kernel 7. Upgrade errors and background job errors: SQL_array_cannot_insert_duplicaterecords 8. Too many conditions and indefinite loops throws the custom program in to dump
Go to ST22 Double click on the dump and read the dump thoroughly and understand the problem. Thoroughly understand stand the problem. Go to how to correct error and try to resolve get the error message and resolve by searching in the market place. EWZ5: To lock and unlock the users 51
SM04: To display the list of users logged on to the instance. RSUSR006 is the report we can find all users. ST11: Developer trace of work directory SSAA: Transaction help you to know the user has navigated to the transaction or not. It own all the reports (Monthly, Weekly, Daily) List of transactions to be monitored:
SJAD: Statistics collection for all systems. Report used to generate STAD is: RSSTAT26
52
Front End Time (or) GUI Time: The time taken by the user request to reach the dispatcher is refereed as GUI Time or Front End time.
Normally this time should not take more than 200 m/s. If it goes beyond 200 m/sec it is considered as expensive. However it is not going to be the part of Response Time. If GUI time increases consider the following: 1. Problem with network (Check the connectivity between GUI to Server) 2. Check whether it is a generic & Common problem (Remote Desktop) 3. GUI problems (We may need to upgrade or apply patch) 4. Logon through VPN or Firewall and proxy and filters may also be a problem 5. Using Dial Up connectivity
Wait Time: The amount of time the users request waits in the dispatcher queue. Usually it will be 50 m/sec or 10% of response time. If Wait Time is expensive consider the following:
1. Work process congestion 2. Work processes are not sufficiently configured at the rate of ratio 1:5 (5 user, 1 work process) 3. Work process configuration is fine but the processes are held up with expensive user requests 4. The work process might gone into private mode, sleep mode, RFC, CPIC modes Solution: Identify the expensive process and logoff the user session based on
approval. We can also consider increasing of work process or deploying the additional instance based on the load. Alternatively configure logon load balancing. Roll-In-Time: The time taken by the work process to roll the user information into task handler Roll Out Time: The time taken by the work process to roll out the user information into roll area. 53
54
Gateway Process:
It is used to provide gateway to the instance i.e. incoming and outgoing connections are performed through gateway. There will be only one gateway process for each instance. Gateway is monitored in SMGW. The RFC connections, The ICM connections are displayed in SMGW The maximum gateway connections that are allowed through gateway is 100. It is configured by process. rdisp/max_gateway=100
Message Server:
There will be only one message server in R/3 system (Irrespective of instances). Message server is monitored through SMMS. It is used to handle all the dispatcher and first process to be started. In the R/3 system the instance on which it is installed is called central instance. It is used to balance the load when groups are configured. When log on load balancing is configured we need to maintain the entries sapmsg.ini, etc/service and define the logon groups in the GUI entry for each user. Alternatively we can copy relative saplogon.ini to the end user desktop.
55
SAP Archiving: It is the process of moving the old data (which cannot be updated any more but required for data analysis and for statutory auditing requirements). The data cane be moved into global directory. If the archiving is not performed time to time the following issues are cropped up in the data center. 1. Response Time should shoots high 2. Database size grows and hardware must not sufficient in terms of memory, CPU and storage 3. The existing tapes becomes un utilized when the size groups beyond the size of the type 4. Admin costs will be high so it is recommended to archive the data from time to time
Database reorganizations should follow the SAP Data Archiving. There are third party tools available for performing archiving.
1. Click on New entries 2. Specify logical path and name 3. Click on assignment of physical paths 4. Define syntax group (Windows NT)
56
5. Click on logical file name to move the data 6. Specify the logical file 7. Specify name 8. Specify the physical file 9. Specify date format 10. 11. 12. Specify the application area Specify logical path Click on SAVE
1. Go to SARA (SAP Archiving) Select the object name Click on pre processing to define the variant to schedule archiving in the background by start date and spool parameters specify the start date and spool parameters. 2. Click on Write: To write the data click on delete specify date and parameters Execute 3. Click on delete: Select Archive selection and delete complete archive. GO to again delete and select. 4. Click on Read: we can read document 5. Storage System: We can store files.
Go to SAR1: which is archiving information system to check the status. Go to SF01: File transaction is used for cross client where as SF01 is used for client specific files.
57
CCMS Monitoring
CCMS: Computer center monitoring system Go to RZ20 Extras menu Activate maintenance function It is used to raise the alerts based on the threshold values which are defined in the system.
RZ20: Display monitor sets Monitors Monitoring trace elements properties methods Variants Monitor Set: It consists of all the monitoring activities Monitor: Specific to a certain function Monitor Tree Elements: Elements to be monitored Method: This is specific to a process or activity
Property: Monitoring category variant is value Go to RZ20 Extras Activate maintenance function It is used to raise the alerts based on the threshold values which are defined in the system Add define your own monitor se from the existing templates:
1. Create: Define elements 2. Copy: include only the monitoring required elements
CCMS displays the elements in color:
1. RED: Problem 2. YELLOW: Warning 3. GREEN: ok 4. WHITE: Information not obtained/ not collected
MTE: Monitoring tree element
58
59
20 KB Front en time or Dispatche r GUI TIME TSKTT Queue)/Wait Time50 m/sec U EN D US ER C SCREEN ABAP SQL R3 Buffers DBSC CI/DI RFC DB QUEUE (Dispatcher
Processing Time
1. Front End time or GUI time 2. Wait Time 3. Roll In time 4. Roll out time 5. Processing time 6. CPU time 7. Load and generate time 8. Enqueue time or lock time 9. RFC+ CPIC time 10.
60
Database Time
11. Time
Dialog Response
Generally each time should not exceed more than 50 m/sec. If it exceeds consider the following:
1. Check the user context and reduce the duplicate authorizations. 2. Advice the user to use the reports by search criteria (Specify user name, date, time, status etc) Processing Time: The amount of time taken by the work process to process the
user request. (ABAP interpretation, screen interpretation, SQL interpretation) and reinterpretation, processing time should not more than 2*CPU time PT < 2* CPU time While processing user request CPU resources are utilized expensive programs, expensive SQL statements, expensive screens are responsible. For expensive/ high processing time
CPU Time: The amount of time consumed by work process in utilizing CPU resources while processing request. SCREEN ABAP SQL0
1 2 3
5 6 1+2+3= CPU Time 4+5+6= Wait Time Note: AS CPU time is included in processing time so its not calculated as part of the response time. CPU time = 40% of (Response time-Wait time)
61
The time taken by the work process to load and generate the screens and programs is referred as lad and generation time. Generally it should not be more than 200 m/sec. If it exceeds its not utilizing buffers properly increase the size of the buffers.
If Enqueue time increases: 1. The lock table may be overflow 2. Dead Lock 3. The Enqueue congestin which can be avoided by increasing work process
RFC + CPIC Time: The time required to communicate with external system or
calling programs using RFC or CPIC is referred as CPIC time. There is no threshold value but ensure that it should not be a bottleneck o the response time. Ensure that resources are available on the target system. If required configure RFC server groups.
DATABASE Time: The time required to process the user request in the database
is referred a Database time. Generally it should not be more than 40% of (response time- Wait time) Ex: DB Time= 40% (1000-100) 40*900/100= 360 m/sec If it exceeds 360 then consider the following:
2. DB buffers space is not sufficient 3. Expensive SQL statements 4. Database statistics not up to date Dialog Response Time: The sum of all the above time except (GUI Time + CPU
Time). Generally it should not be more than 1000 m/sec but on an average it should be between 600 m/sec to 1200 m/sec.
63
64
R/3 Buffers
Buffering: The frequently accessed content and rarely changed content is stored
as buffer in the application server which is also referred as R/3 buffers.
R/3 Buffers: These are stored in the instance and cannot be shared between the Instances. These buffers are different from database buffers. There are various types of buffers. Ex: Program Buffers, table Buffers, Calendar, CUA, Screens etc. Buffers are stored in the shared memory of the instance. Buffer Mechanism: User logs into the system to access certain data. The request is processed and goes to database to fetch the content. If the content is eligible for buffering it is stored in the instance. The content should be rolled out into user context before the response is sent to the user. As the user context is small in size the context I not stored in user context. But in terms store in R/3 buffers and the pointer to R/3 buffers are stored in user buffers. Note: User context cannot be shared between users but R/3 buffers are shared between users. Buffer Monitoring: Buffers are monitored in ST02. Buffers are organized in terms of directories and the space in the memory. ST02: Display the following information 1. Name of the Buffer context 2. Buffer hit ration: The ration should be always greater than 94% 3. SWAPS
We need to look for swaps. Swaps occurs when the allowed space is completely used or of all the directories used or both utilized. Basically the ABAP buffer size will be 150 MB by default. We can increase up to 600 MB (Up to 4 times) based on the available memory. If swaps occur frequently consider increasing either space or directories. The Reasons for Swapping: 65
1. Frequent transaction of objects 2. The new modules are implemented 3. Buffer memory is not sufficient 4. Number of directories not sufficient 5. Frequent changes to the buffer data
Note: In each company the swaps occurs frequently but look into the number of swaps. Based on the size of the database we can allow 5000 to 25000 swaps. They are not effecting the performance of the system i.e. response time. Double click on the context which has more swaps. Click on parameters to identify the parameter name and value and to change in RZ10 before changing any parameter read type. Complete documentation in RZ11. Miss configuration or improper configuration may not start the SAP engine. TABLE Buffering: Apart from the repository objects SAP also buffers the table content based on the table data (content). There are 4 types o buffering:
1. No Buffering: The table which is large frequently updated rarely accessed is set to no buffering. 2. Full Buffering: (100 % Buffering) The table which is small, frequently accessed and rarely changed is eligible for full buffering 3. Single Record Buffering: The table which is relatively large but frequently accessed is buffered using primary key 4. Generic Key Buffering: The buffering is based on group of keys
Note: For most of the tables SAP define the buffering settings, which can be modified in SE13. BY default SAP provides the following options for buffering tables.
1. Buffering Allowed: This tale can be buffered 2. Buffering Allowed but Switched off: This is used for development and quality systems 3. Buffering Not allowed: Buffering not allowed on this table.
66
Exercise: List out at least 5 tables in each of 7 cases. SE13 and SE14
1. When more than one instance is configured we need to synchronize the data between buffers of the instances. If not we will get the old snapshot. 2. When data is fetched by one instance it will keep a log in the table DDLOG 3. Even the content is accessed from buffers it will check the dialog always If there is a difference in time stamp it will fetch the data from database. Mean while we can synchronize the data between instances using the following two parameters: rdisp/buffertime= 60 sec: This parameter will refresh buffers every 60 seconds rdisp/buffermode= send on execute/ send off execute. Due to performance reasons use send off, If only one instance is configured.
67
68
Memory
Its a temporary work area to perform calculations, Reads the data from the disk. No operations are allowed on the hard disk without memory. 1. Physical Memory: The amount of memory that is installed on the system is called as physical memory. 2. Virtual Memory: As the physical memory is not enough we need to assign space on the disk which is referred as page file. This memory is referred as virtual Memory.
SAP Memory Management: SAP recommends using Zero memory management so that memory automatically managed. Memory Assignment:
When a user is assigned with work process the work process requires memory to roll the user information. In order to roll the user information from ROLL AREA (U.C) into Task Handler work process requires memory. Each work process assigned with a memory called Roll Memory which is defined by parameter ztta/roll_area= 2 MB. By default it is 2 MB, and this is the maximum memory a user can use. But initially when the user request we will assign around 1 MB that is specified by parameter ztta/roll_fiorst= 1 MB
Local Memory
Ztta/roll_first=1 MB
Once the initial memory is utilized i.e. ztta/roll_first we will assign memory ztta/roll_extension=512 KB to 2 MB Based on memlimits of OS. Memlimits run command If the specified value is used completely then the remaining part of the roll memory is used i.e. (ztta/roll_area) (ztta/roll_first) If this value is also not enough it uses private memory i.e. the work process goes on to private mode. The value ranges from 80 MB to 2 GB. This value should be lower than abap/heap_area_total (Memory for Dialog and Non Dialog WP) If the work process exceeds the limit specified by parameter abap/heaplimit the work process that can go into private mode as minimal as possible by using parameter rdisp/wppriv_max_no=1 0r 2 If the work process goes into private mode the parameter rdisp/max_wprun_time will not be effective i.e. the program cannot be timed out. If too many programs or W.P goes into provate mode the WP congestion occurs (Hour Glass) and no user can login to system. Use dpmon to kill the expensive work process based on the approval. dpmon pf= E:\usr\sap\<SID>\sys\profile\ Go to ST02 to monitor the extended memory and Heap Memory. If the usage of Heap Memory increases the bottle necks on the system raises gradually. Go to ST06 to display the amount of physical memory. It is used to display the number of CPUs by using count. It displays CPU utilization for the last 15 minutes. The CPU idle time should always be greater than 30%. If it falls below 30% CPU bottle neck occurs.
70
Reconcile ST03 and ST06 and identify the expensive ABAP program and recommend to fine time it. However we can identify the top 10 CPU users, using Detail Analysis menu Top CPU users. ST06: It is used for displaying CPU idle time number of CPUs, CPU utilization, Physical memory available and utilized and available memory, swap memory and used. It is also used to start and stop SAPOSCOL service. Click on detailed analysis menu to display the TOP CPU users compare the data based on memory CPU. Click on LAN check by ping to check the number of presentation servers, Application Servers and Database Server. You can ping to the servers or to a specified IP address. ST07: It gives the complete picture of the instances users work process and the load on the applications. It is used to say weather system is optimally configured or not. It is a measuring device to configure load balancing based on usage of application components. It also gives the details of Response Time (Which Instance). It also displays the amount of buffers configured on each instance along with the buffered content. ST11: It is used to display the developer traces of work directory \usr\sap\<SID>\sys\work ST05 and ST01: It is used to trace the following 1. SQL trace 2. Enqueue Trace 3. RFC Trace 4. Buffer Trace
1. SQL Trace: When a user complains with show response times while accessing a report or when the DB time is more contributed in response time i.e. more than 40% of the response time we need to run
71
the SQL Trace. Select SQL trace and activate the trace. Check that trace with filter and specify the selection criteria. We can also enter SQL statements and explain the statement about the cost and estimated rows. 2. Enqueue Trace: When the enqueue time goes behind the threshold value i.e. more thatn one m/s in Central Instnace and 100 m/s for dialog instance then Select Enqueue time and activate the trace and display the trace similarly. When RFC+CPIC time increases we need to switch on RFC trace When buffer swaps occurs and increases gradually in ST02 we may need to trace using buffer trace. ST01: It provides kernel and authorization trace in addition to ST05 traces.. In order to check the missing authorization that could not be traces in SU53 user authorization check trace in ST01 Kernel functions also i.e. kernel executable. When they are calling certain functions we can also trace their activities
This Page is intentionally Left Blank
72
Manufacturing/ Production (Oracle Apps) Memory (VB and SQL) Machinery (Java and Oracle)
SASTRY GROUP of Industries
Material (Java & Oracle) HR (People Soft) Marketing (Java & Oracle) Management (Java & Oracle) Customer Service (Sieble) Sales (Wings)
Disadvantages of the above Scenario: 1. Monthly report from various systems 2. Data is not Centralized 3. Different Applications 4. Different Databases 5. Too little integration and correlation 6. Administration costs shoots high in maintaining various H/W and S/W resources, Backup Data centers 7. Some of the softwares are out dated then there is no support from the vendor
Proposal to Identify the S/W vendor:
73
1. The companies unable to identify the requirements to implement and replace the existing S/W by a single solution 2. Company appoints External Auditors to identify the requirements from the key business process owners. Ex: KPMG, Accenture, Baring Point, PWC (Price Waterhouse Coopers) 3. Based on the auditors document the S/W vendors submit the feasibility of the requirements. 4. External Auditors identify the right vendor with the help of customers. At this point customer decides the S/W vendor based on the advantages
Let us say customer decided to implement SAP ERP solution because it provides functionality of more than 30 modules along with extension to various Add Ons. it has a good track record of 46,100 customer providing 24*7 support and continuous improvements by realizing patches, upgrades etc. It has a compatible GUI both web based and GUI based and its portable across various OSs and DBs. The major advantage of SAP is the automatic integration of Data between modules. Steps:
1. Now customer has to identify the implementation partner to implement SAP 2. External auditors defines SOW (Scope of Work) to implement SAP SOW (Scope of work): It defines the scope of work at macro level and includes the following: 1. It includes the request for Proposal/ Quotation (RFP/RFQ) 2. Modules to be implemented (Ex: SD, HR, MM,FI,CO) in the first phase other modules in the second phase 3. O/S and DB 4. Number of Users 5. High level customizing details based on each module 6. Assumptions etc..
74
Based on the above document we can submit RFI (Request for Information) to get additional details or clarity on the document. As a BASIS consultant we need to submit the following documents:
1. System landscape strategy 2. Client Strategy 3. Transport Strategy 4. Approval Strategy 5. Backup, Restore and Recovery strategy 6. G0-Live strategy 7. Post implementation strategy 8. Apart from the above the following documents are also included in the proposal: a. Company details along with Organizational Structure and Financial Stability b. Services offered by the company c. Planned man hours based on modules at the rate of 168 to 176 hours per month or weekly 40 hours d. A detailed project plan along with implementation methodology e. List of assumptions f. Risks involved in implementation Finalizing S/W vendor by external auditors after considering SOW, RFP, RFQ Based on various credentials the customer identifies the implementation partner and releases purchase order. The objective of this whole process is to get qualified implementation.
75
BASIC PREREQUISITES
1) Installation of IDES system and allow all the functional consultants/ Developers to work on the system. 2) Solution Manger Project manager and BASIS consultant responsibilities at the time of implementation: 1. Visit the site of communicate with the data center team 2. Get the details of current infrastructure to plan the H/W resources 3. Perform H/W sizing based on users. to determine the CPU, Memory and storage required for the system in the landscape (Development, Quality, Production) 4. Include solution manager system in the sizing along with the SAND BOX system. (Training or Testing or Standalone) Note: IDES comes along with Demo Data Company is setup with all options for Demo Data in IDES. Production: It can not contain any data. We need to setup everything in production
76
Implementation Methodology
Implementation partner uses the traditional process ASAP methodology to implement SAP. ASAP Methodology: (Accelerated SAP). It is a methodology provided by SAP to implement SAP with a predefined series or sequence of steps i.e. what goes first and nest. It consists of 5 steps. 1. Preparation Phase 2. Business Blue Print phase 3. Realization/ Configuration Phase 4. Final Preparation/ Pre Go-Live 5. Go-Live and Support
1) Project Preparation: In this we plan our project and lay the foundation for successful implementation. At this stage that we make the strategic decisions crucial to our project. a) Define project goals and objectives b) Clarify the scope of project implementation c) Define project schedule, budget plan and implementation sequences d) Establish project organization and relevant committees and active resources 2) Business Blue Print: In this phase we create a blue print using the question and answer database (Q & A DB), which documents for enterprise requirements and establishes how our business processes and organizational structure are to be represented in the SAP system. We also define the original project goals and objectives and revise the overall project schedule in this phase. 3) Realization: In this phase we configure the requirements contained in the Business Blue Print. Baseline configuration
77
(Major Scope) is followed by final configuration (remaining Scope) which can consist of up to four cycles. Other key focal areas of this phase are conducting integration tests and drawing up end user documentation. 4) Final Preparation: In this phase we complete our preparations including testing end user training, system management and cut over activities. We also need to resolve all open issues in this phase. At this stage we need to ensure that all the prerequisites for our system to go live have been fulfilled. 5) Go Live and Support: In this phase we move from a pre production environment to the live system . The most important elements include setting up production support, monitoring system transactions and optimizing overall system performance.
78
SOLUTION MANAGER
Uses of Solution manager: 1. Maintain the project and its status 2. Documentation of the entire project 3. Generating license keys and upgrade keys 4. Provide the road map for the implementation 5. Configuring satellite systems 6. Early watch alert configuration 7. System monitoring configuration 8. Solution Monitoring 9. Service Desk: To provide customer service 10. Change management 11. It is used to configure, maintenance optimizer to download patches from SAP. Interview Questions: 1. Explain the pre implementation steps 2. Explain the process of Identifying implementation partner at least 20 partners 3. Define SAP implementation methodology in detail 4. Advantages of Solution Manger Roadmap: It is accesses by using transaction rmmain. It provides the complete implementation methodology along with sequence of steps. As a BASIS consultant the first task is to define the hardware infrastructure required for the project such as desktops for the consultants, network band width, software required, remote connectivity (VPN), pc anywhere, internet
79
connection, e-mail services. Apart from the above the major task is to plan the hardware to implement SAP. Project Charter: Consist of a team i.e. responsible for implementation of SAP project. Customer Project manager: He is absolute owner of the project and responsible for implementing the project from customer end. He needs to track the project status and update the management from time to time. Based on the status of the project he will release the funds to implementing partner. Implementation Project manager: He is used to support the scope of the project and manages all the resources that are required to implement SAP. Business Process Owners: These are involved in owning the respective divisions and responsible for critical decisions in the business. Each division has one owner who owns the responsibility for the business.
80
4. Click on the link Quicksizing. It will prompt you to key in your customer number. It will be a 6 digit number. 5. Provide the customer number; specify the name of the project and click on create. 6. The project is created and we need to key in the following information:
1) Customer Information: Name of the Customer, Customer Contact Person, Customer Contact information like Telephone, Fax, email etc This information is required so that Hardware vendors can communicate with customers to submit Quotation for Hardware. 2) Working Time: Normal business time, from what time to what time. The peak hours. Specify the business peak hours Average working days in the year (210- 250 days) or 365 days. 3) Unplanned Downtime: Should be 0% 4) What O/S is along with versions, what DB is required along with versions, what type of Backup is required (Offline, online, partial, incremental). Specify high availability options like mirroring, RAID, Clustering, stand by, disaster recovery, servers are required 5) Network band width in LAN, WAN etc 6) Users: Define the number of users based on modules and specify the (Low, Medium, High) users. Low Users: These are the management who uses the system occasionally i.e. they will try to input 0-480 dialog steps per week for 40 hours Dialog Step: It is an input provided by user along with an enter (Key stroke) or a mouse click.
81
0-480 dialog steps/ 40 hours= 12 hrs (Dialog steps per day) For 1 hr it will be 60/15=5 min (dialog steps per 5 min) = 300 sec I.e. low users will input his data for every 300 seconds. Medium User: They will generate 480-4800 dialog steps per week or per 40 hrs each dialog step is initialized for every 120 sec High User: They will generate 4800-14400 dialog steps per week or 40 hrs. Each dialog step is initialized for every 10 seconds 7. Save the project with all the inputs and calculate the result. The result is displayed as follows. Note: Sizing is also referred as T-Shirt size. CPU Size: It can be small, large, medium SAPS: Sap applications Ex: CPU Size S No of SAPs 5000 Memory in MB 7869 Disk Category Disk Size in MB XL, S,M,L 586, 889 MB
Note: We need to include the size of legacy system database. SAP Service Market Place: It is an official website of SAP. The URL is. It is referred as SAP market place. Official website of SAP targeting various groups of users like Customers, Employees, Partners etc.. It provides various advantages to the customers some of them are
82
1. SAP Notes: SAP provides a rich set of knowledge base to resolve runtime issues information etc.. Ex: Installation problems, Problems during patch application, resolutions for standard errors. It is accessed by using 2. Downloads: We can download software based on customer license My SAPERP 2005 Netweaver 2004 Solution Manager It is accessed by using
3. Hardware Sizing: It is accessed by using
(Quick Sizing Tool). It is used to perform the hardware sizing i.e. required to implement SAP or upgrade SAP solutions. 4. Create Message: For run time issues Help and Support. SAP provides this options to create service message to resolve the runtime problems Earlier it is straight to create a message. But in the current version of market place it will not allow to create a message directly. Instead it provides a search criteria to search the notes and if the error is not resolved then it will allow to create the message. Go to notes/help and support Report a product error customer message 5. Creating License Keys: SAP provides an option to generate license keys for developers, objects, migration keys. License Key: It is required for all the instances that are running on production landscape. Object Key: By default all the SAP objects are locked. In order to modify the SAP standard object we need to obtain key for developers and the object.
83
Click on SCCR key (SAP Software Change registration). Each developer has to be registered in the website. There is a separate license fee for developers. SAP designed 15 lakh programs or objects on SAP system. Migration Keys: Whenever there is change in O/S or DB of SAP system we required migration keys. 6. Data Administration: It consists of two options System Data User Data System Data is used to maintain the information of all the systems of the customer landscape User data is used to create new market place users and assign passwords.
7. Inst Guides: It is used to download the installation guides and upgrade
guides.
8. Downloading Support Package and Patches: Go to
9. Quick Links: Quick Links are used to identify the options of market place. It is also used to provide the URLs 10. Road Maps: (RMMAIN) It provides series of steps to implement SAP.
84
Solution Manager
1. Generation License keys/ Upgrade Keys Transaction: solution-Manager or DSWP 2. For Land Scape Creation: SMSY: To generate license key ot configuration of satelite system 3. Monitoring Alerts 4. Service Desk Configuration 5. Maintenance Optimizer 6. Change management License Key/ Upgrade Key: It is mandatory to have license key/ Upgrade
Key to continue with the installation. With out this key no installation based on netweaver (ECC 6.0, EP-7, BI-7, XI-7, CRM 5.0, SRM 5.0) can be continued.
This key is checked during central instance installation. Go to SMSY Go to Systwem landscape Click on other objects Specify the System ID or SID Click on generate key installation or upgrade key Specify system ID, System Number, Message server (Hostname of systme where SAP is installed) Click on Generatge Key. Activity-2: Configuration of Satelite System: Go to SMSY to create satelite system. Satelite systems are SAP systems which can be monitored through sllution manager. Monitoring of SAP system configuration- landscape Go to SMSY where we can create satelite systems manually or using wizard (graphical tool) Start . Go to SMSY System landscape Landscape Components Server Database System System components Right click on system and click on create new system with assistant Specify system short description SAP product Product version Installation number Click on continue Instance 85
Number, Host Name Generate RFC connectivity System Landscape-RFC connectivity(required for each system to be connected) Start select use scenarios Customizing distribution Change request management SAP solmon RFC connection with logon screens Transfer RFC connection outgoing RFC connections Now specify Option, user Id, Pwd Incoming RFC connection Additional RFC connection data RFC connection attributes L:oad balancing Server Group Routing Info Assign RFC connections for system monitoring complete
Activity-3: Assigning logical component to system in the landscape. Go to SMSY Go to system groups and logical component Select the system <SID> and assign the system role to the logical component. (If this is not performed the addressing will be difficult) Activity-4:
Creating solutions Go to solution Manger or DSWP to creqte solutions Click on New Create new solution Povide Soluiton Name, Customer Number, Original Language Continue to creatge the solution select the solution (BASIS Group) This solution is used for solutin manitoring system monitoring, service desk, change management Delivery of SAP services continuous improvement Maintain solution landscape and include the logical component defined in SMSY sustem group Go to solution settings to setup Earlywatch Alerts (EWA) and CCMS monitoring of EWA.
Activity-5:
Creating Project: Go to Solar_project_Admin to create a project SAP solmon: Project Administration click on create Provide project Name, Type of Implementation, Landscape: BASIS Group Provide the below details general data scope(Roadmap ASAP ERP) Project team member system landscape milestone OUs Project standards Save the project The above the task done by project manager in preparation phase.
86
SOLAR01: It is used to create various configuration scenarios that needs to be configured in the project. This is also referred as business blue print. This is used to select the scenarios from the various modules, consultants can upload the docs. Solar1 Business Blue Print change for project Business Blue Print structure BASIS Project OUs, MASTER Data, Business Scenarions (Update documents). SOLAR02: It is used to realise the scenarios that are created in solar1. It is used to configure the scenarios by navigatin to satelite system. SOLAR2: Configuration change for project based Analysing the SIZING Results: The sizing output will provide the memory and storage required directly in megabytes (MBs) considering the growth in transactions values(Business) No.of users and enhancement in modules.We may need to add 30% to 50% to the output results. Apart from the above we also consider the following: 1. Operating System 2. Database 3. Interface to provide connectivity to ohther systems) 4. Printers (Check printing, Barcode to lable printing) Note: We may need to provide the sizing table with various options Note: The sizing will be initially for development system. We need to plan the hardware for production 3 months before going live (To save the maintenenace cost, increase the warranty, Reduce the cost) Note: Always check fo the enhancements and feasibility of the Hardware. Ex: The system should support more memory (Different slots or with multiple slots and HDD) CPU requirement:
87
SAP does not give you the CPU output directly because the CPU varies based on different manufacturers. Ex: 102 GHz, 3.0 GHz, Dual Core, Quad Core SAPS: SAP provides CPU requirement in SAPS (SAP Application Bench Mark for performance standards). SAP defined benchmarks based on sales and distribution module i.e. for every 2000 sales documents 100 SAPs are required. Depending upon the vendor of CPU it can generate 800 to 2000 SAPs. Let us say output requires around 5000 SAPs then Hardware vendor may recommend 4 CPUs. Ordering the HARDWARE: Once the sizing results are finalized we can call for the quotations from the H/W vendors. Customer Manger and Implementation manager work in this task. Solution Manager can be installed on a 50 GB desktop with 1 GB RAM on 32Bit machine. Depending on the usage of Solution Manger we may need to move to 64-Bit and enhance the RAM. Ordering the SOFTWARE: 1. O/S Software: License and support Note: Most of the cases O/S is provided by H/W vendors 2. DATABASE: Software mostly is provided by SAP 3. SAP Software: We need to buy the SAP S/W through channel partners. There are various types of SAP S/W. Based on customer requirements SAP has its own release for different components. SAP Release Strategy: 1. SAP R/2 2.0 on two tier architecture 2. SAPR/3 3.0 ..3.1g, 3.1 (i) (Released in 1997-1998)
Its strategy 5-1-2 First 5 years of release SAP charges 17% of total software cost.
88
From 6th year 17+2%=19% of total software cost From 7th and 8th year 17+4%=21% of total software cost. 1997-1998 up to 3.1 i 1998+5=2003 Normal Maintenance 2003+1= 2004 Extended Maintenance 2004+2 2006 Extended Maintenance
2000 4.6c DEC 2006 Maintenance of mainstream end DEC 2007 Additional Maintenance Dec 2008 Additional Maintenance 2003 4.7 EE (Enterprise Edition) 2009 Maintenance of Main Stream Solution Manger: 3.0 2005 April 3.2 2006-2007 4.0 2008 2005 ECC 5.0 (SR1, SR2) 2007 ECC 6.0 (SR1, SR2) 2003 4.7 EE (4.70, 4.71 Extension Set-1, 4.72 Extension Set-2) SAP license is based on Number of Users. Each user cost is different from country to country, state to state. It is estimated that each user cost varies from 40,000 to 1,00,000 depending upon the number of users.
89
Note: After the H/W and S/W order it will take 1 Month to 45 Days to release the Order. This time where we need to prepare various Documents to Install, Configure, moving the known Problems of installation, Product Errors, Strategies for systems, System Landscape, Clients, Transports, User Management, Authorizations, Back Up and Restore and Day to Day activities. Installation Pre Requisites: Download the installation guide from service market place for specific O/S, DB and Sap R/3 Version. Ex: HP UNIX WINDOWS LINUX ORACLE SQL,ORACLE,DB2 SQL,ORACLE,DB2 4.7 EE1, 4.7 EE2, ECC 5.0, ECC 6.0 4.7 EE1, 4.7 EE2, ECC 5.0, ECC 6.0 4.7 EE1, 4.7 EE2, ECC 5.0, ECC 6.0
Read the documentation and document the following: 1. Required Software to0 install the SAP system
2. Download the required software from SAP market place 3. If you have the software in DVDs copy the software into TEMP directory 4. Check the label.asc in DVD to identify the version and product 5. Find the required Virtual Memory and assign the memory to the system. Virtual Memory: It is the sum of Physical Memory+ The space on the Disk. It is also referred as paging memory or memory on the disk. Note: It is not possible to install the RAM which is required by the system.
90
On 32-Bit machines we can define up to 4 GB. On 64-Bit machine the minimum SWAP memory will be 20 GB. As a rule Virtual Memory is calculated as 3 times the size of physical memory + 1 GB. Identify the proper JAVA version and download it from. It should be JRE (Java Runtime Environment) 1.4.12_2 Install the JRE and set the path as defined in the document. Environment Variables: This provides the run time environment of the S/W i.e. installed on the system. WINDOWS: On windows we need to set JAVA_HOME and Java Bin Path in local variables and Global Variables. Local Variables: These are specific to the logon user. Global Variables: These are applicable to all the users
IF the O/S is not WINDOWS then based on the O/S we may need to set profile parameters in .bash profile or .profile. We can also use command setenv and setpath. Define the Hostname as per the document and company standards. Ex: WillERPDEV/WillBIWDEV The HOST NAME should not be more than 13 characters. Get the Static IP address from the system administrator which should start from either 172 or 10, where as 192 is used for Testing and 127 is used for loop back etc as per the INA (International Network) standards. Define the System ID (SID). SID is a three character alphanumeric string and the first characters must be a character and the remaining two can be either a number or a character. Ex: R60, DEV, RS1 There should not be any special characters in the system identifier.
91
There should not be any duplicate SID in the landscape The SID name should be meaningful to identify the system in the landscape based on the PRODUCT, LOCATION, Roll in the landscape. Note: Dont use reserved key words like SAP, BIW, SCM,SRM< ERP, All etc Provide the entry in host file i.e. etc\hosts which is located in windows\system32\drivers\etc\hosts Download the known problems related to installation of this particular component. Identify the OS version and patches to install. Identify the DB and patches to install. R/3 4.6 C 4.6 BASIS (BASIS system)
4.7
SR1
SR2
Installation Types:
Installation of SAP up to version 4.6 C is performed by R3 setup. R3 Setup: It is not dependent on any other component once the inputs are keyed we cannot change the inputs and restart from the scratch. It is not an interactive tool. This tool is used to install BASIS components only i.e. 4.6 C and below. SAPINST is used from version 4.7 Enterprise Edition. But the components need to be installed separately i.e. Central instance, Database instance, Dialog instance.
92
SAPINST Requires JRE because the installation executables are programmed using JAVA (SAPINST.CMD). Different passwords for different users are created during installations. There is no provision to change keyed Inputs. This is valid only for WEBAS 620 (4.7 EE, 4.7 EE SR1, 4.7 EE SR2). It is also not interactive. SAPINST.EXE (./sapinst): From 640 version onwards JAVA dependent, color full option to change the inputs at the end single password with minimal inputs. SAP COMPONENT 4.6 C 4.7 5.0 ECC 6.0 SR1 ECC 6.0 SR2 SOLMOM 3.2 SOLMOM 4.0 XI 3.0 XI 7.0 BIW 3.5 BI 7.0 EP 6.0 EP 7.0 BASIS 4.6 C /4.6 D 620 640 700 700 640 700 640 700 640 700 640 700
Installation: Central Instance Installation: It provides a typical installation with minimal inputs where Central Instance and database instance are installed together. INSTANCE: It is an application server which has its own resources or shared resources (Memory, CPU, and HDD)
93
Distributed Installation: In this installation the Central Instance and Database Instance are installed separately High Availability: It is used to install clustered systems on Node-A and Node-B. Installation Inputs: (Central instance) Select the SAPINST for the specified OS i.e. navigate to the folder i.e. related to your OS Select the component that is going to be installed Ex: SAP ERP 2004, SAP ERP 2005, SRM, CRM\ Select the Database Ex: DB2, SAPDB, MS-SQL Server, Oracle Select the type of installation (Central Instance, Distributed, High Availability) Specify wither typical or custom installation Note: While installing database use SAP customized Batch Files or Script files defined by SAP. Dont use the native setups of database. The advantage of using SAP script is to set the environment variables that are required for SAP. For Oracle use SAPServer.cmd and for Dialog Instance use SAPClient.cmd. For SQL Server use script SQL4SAP.vbs On UNIX environment the Data Base has to be installed during Database Instance Installation or after Central Instance installation. Database is not required for windows operating system. Select either UNICODE or NON UNICODE UNICODE: It supports around 90.000 characters to support all most all available languages in the world. UNICODE consumes 40% more resources than NON UNICODE Select the Central Instance and specify SID
94
Specify the Instance Number Instance Number: It is a two digit number that varies from 00 to 99 but only 00 to 97 are used, 98 and 99 are used/ reserved for routing purpose Specify the Database ID and Database Host Specify the Amount of RAM i.e. to be used during installation. It used 60% of memory Specify the type of Installation either local or in the domain or in the domain of the user that user has necessary privileges to create users, services, groups and assign to groups Specify the TRANSHOST directory i.e. to install the executable path Specify the Database Home i.e. the oracle executable path Specify the schema Id (database Schema). In earlier versions of DB we need to install DB for each SAP component. The confusion arises while handling multiple homes from ORACLE 9i on wards. (MS SQL 2000) are supporting MCOD i.e. Multiple components on single Database with different SIDs which means that the schema Id should be different for each component. Specify the password for SAP system Administrator (USER ID) i.e. SAPADM (SRMADM, DEVADM). He is the owner for entire R/3 system. Specify the password for SAPServiceAdministrator. The User Id is SAPService<SID> Ex: SAPServiceSRM Specify the path of KERNEl DVD Specify the ports 1. Message Server Port: 3600+ <Instance Number> 2. GateWay port: 3300+ <Instance Number> 3. Dispatcher Port: 3200+<Instance Number>
95
4. Dispatcher Security Port: 4700+<Instance Number> 5. GateWay Security Port: 4800+<Instance Number> Start The installation
DB Instance Installation:
Once CI is installed select Database Instance to install Select DB instance Specify the SID Select standard installation. Other Options like System Copy is used to setup the system from the existing system Decide on MCOD. Select the option Install SAO system in the DB (first Time) choose the second option if the DB exists Specify the Instnace Number Specify the Memory (RAM). During database installation it take 40% Specify the TRANSHOST directory Specify the Passwords for SIDADM and SAPServiceSID Specify the location for DB server directories Specify the location for Redo log files Specify the path of Kernel directory and Export Directories Specify the path for Data File Note: DB Server directory include the following 1. SAPBACKUP 2. SAP ARCH 3. ORAARCH 4. SAPREORG 5. SAPCHECK
96
6. SAPTRACE Data Directories Includes the following: 1. SAPDATA1 2. SAPDATA2 3. SAPDATA3 4. SAPDATA4 5. SAPDATA5 6. SAPDATA6 In ECC 6.0 the installation inputs are reduced. 1. Use Installation Master (IM) and click on SAPINST.EXE 2. Select the usage type (ABAP, JAVA, BI, PI, MI, EP etc..) ABAP is mandatory to select 3. Specify SID. Specify installation location (Drive) / oracle 4. Specify the password i.e. master password for all the users that are going to be created during installation 5. Specify the DB instance location 6. Specify Kernel path and database export 7. Specify path to install the data files 8. It displays the lost of inputs to review 9. Review the inputs if required and continue to start the installation Installing the Dialog Instance: Dialog instance is required to handle the additional load on the C.I 1. Click on additional life cycle task. Select Application Server and select the dialog instance 2. Specify the SID and Instance Number 3. Specify the path of profile directory
97
4. Specify the password 5. Specify the location of Kernel directory and continue to install dialog instance Installation of SAP GUI: SAP recommends the higher version of GUI to be installed to connect to SAP SYSTEM Current version of GUI is 7.10 (700, 640, 620, 4.6 D, 4.6 C etc) Pre Implementation: 1. Feasibility Report 2. RFQ 3. RFI 4. RFP 5. Installation IDES system 6. Solution Manger 7. ASAP Methodology
Installation Logs:
Central Instance: Control.xml: It consists of installation steps that are executed one by one. Keydb.xml: It will give you where the installation steps need to be start (Continue with old installation) Check for environment variables Check for privileges of the user who is going to install the SAP S/W. Contol.xml gets the details from DVD or DUMP and write the installation steps in keydb.xml. When the installation is restarted it
98
reads from keydb.xml to continue the installation where it is aborted. Set Environment for users Create users (<SID>ADM, SAPService<SID>) Create groups (SAP_localAdmin, SAP_<SID>_Admin, SAP_<SID>_GlobalAdmin, SAP_<SID>_localAdmin) Assign groups to users <SID>ADM: It is the owner of R/3 system (Administrator) SAPService<SID>: It is the owner to run the services when the system is started and ensure that password never expires (Back Ground service user) Groups: SAP_LocalAdmin: To administer the system locally and to own the usr/sap directory SAP_<SID>_Global Admin & SAP_<SID>_LocalAdmin: These groups are created to provide access to instance specific and they are used when multiple <SID> are in one system. Creating usr directory where the SAP profile parameters are installed SAPinst creates shared mounts SAPmnt and SAPloc to access by other systems in the landscape SAPinst extracts executables in to RUN directory. But from ECC 6.0 exe/nuc/NTI386 or exe/uc/NTI386
Create services and run the services SAPoscol and SAP<SID>_<INR> SAPoscol: It is an operating system collector SAP<SID>_<INR> It is an instance specific service to run the SAP. It is based on SAPstartsrv.exe
99
Database Instance: It checks for the users and passwords of <SID>ADM and SAPService<SID> users Extracts database dependent executable into RUN directory Creates the database Creates the table spaces Loads the data i.e. import data from Export DVDs How to load the DATA: As we cant load the data sequentially (It is time consuming) we will load the data based on table type. SAP Defined: It provides command files that are split based on the version. These command files control the loads. These are nothing but (2 running, 3 Completed, 9 waiting and Failed 0) Each command file is related to a task file i.e. .tsk file. Each task file gets the size from .tpl file and gets the structure from, .str file and table of contents from TOC file. Process of Loading: Each task file consists of the Tables, Index View, Pkey etc to be created. T-table, D-data index, p-primarykey. Each task file when it is going to run state it is copied to a .tsk.bak file. The load happens from .bak file. When the table is created or loaded an entry is made into .tsk file with status (OK). If it could not be loaded then the status will be (err) and we can see if it is failed in SAPint GUI (Failed) Post Installation Activities: Check the consistency of the installation by using transaction SICK. It checks the consistency between O/S and its patches, DB and its patches and R/3 and its patches (Kernel Path). Based on the errors we may need to update OS, DB patches and R/3 patches Lock all the known users and passwords in all the clients
100
For Sap* login/no_automaticuser_sapstar=1 Create two super users. One should be sealed and kept by the project manger and other one is used by BASIS consultant Initialize Change and Transport system or Correction Transport System (CTS). For this Go to SE06 9System Engineering) and select the standard installation and perform Post Installation Actions. It will prompt to configure TMS. It will reset Transport management System System settings as per role of the system in the landscape we need to set whether it is MODIFIABLE or NOT MODIFIABLE. Go to SE03 and click on system change option. MODIFIABLE: The objects are allowed to modify in the system. Using this option we can modify objects and we can restrict some objects. This is set for Development System only. Occasionally we may set this option for Quality System, but never in the production system. Modifiable means the system is modified in terms of Repository Objects 9SAP standard objects), tables, programs, transactions etc.. NOT MODIFIABLE: This option is set to production and none of the objects are modified. If you change accidentally in the production system it is tracked in the auditing and you will be questions for the reason we may need to pay damages if it is set modifiable without approve from authorized customers. These are called as System Change Options (Modifiable & Non Modifiable) Configuring TMS in Client 000: Transport Strategy: We have to define this strategy to ensure that the objects are developed in one system and moved on to other system for quality testing and finally to production. Configure TMS: The transport strategy has to be configured in 000 client with user i.e. copied from DDIC. Ensure that two Background work processes are defined in the system.
101
Go to STMS A pop up window is displayed to configure Transport Domain Controlled (TDC) Domain Controller: Commands the other systems in the land scape. TDC: It is a domain controller to control all the systems in the landscape. It is used to manage all the transport parameters in the landscape. In most of the environments there will be only one Domain Controller. POP UP Window: It provides two options one is to configure Transport Domain and other is to configure Member systems. While configuring transport domain provide the name of the transport domain i.e. DOMAIN_SID and description save the transport domain. Include system in Transport Domain: Select the option next to SAVE icon provide HOSTNAME and SYSTEM number to include the system in the landscape Backup Domain Controller: IN case of TDC failure BDC will take over. STMS Overview Select System Select the system that needs to be configured as BDC Go to Communication TAB Specify the BDC system and click on save. Include Systems in the Domain: When you include systems in the diamond a request is sent to Domain Controller. To include in the domain logon to Domain Controller Go to Overview Systems Select the system and click on Approve. RFC destinations are created with a Communication User TMSADM. Create Virtual System: As the other systems of the landscape are not included we are creating Virtual Systems. Virtual Systems are required to address the objects that are created in development system. If the virtual systems are not defined we need to address manually which is a time consuming process.
102
STMS Overview systems SAP systems create virtual system. Defining the Landscape: Go to STMS Overview Transport Routes Go to Configuration in Change Mode Select standard configuration Save. Landscape Strategy: The arrangement of systems in an order to display the flow of objects in the landscape. System: The system is a physical entity where certain activities are carried in the landscape Development System: It is referred as DEV but one can use their own naming convention. It is used to develop the objects in the landscape. It is only the system where development activities are carried out. The system settings are set to Modified. Quality System: It is used to test the objects for load and stress that are developed in development system. The system settings are set to "Not Modifiable. If the object fails to test it needs to be modified again in the development system is tested again in the development system. Dont move the objects to production system until they are successfully tested. Production System: The objects which are approved in the testing system will be moved to the production system. System settings are set to Not Modifiable. If any object is found fault it has to route from Development System again. Apart from the above we can have the following systems in the landscape 1. SAND BOX 2. PRE PRODUCTION 3. TRAINING 4. PAY ROLL.. Types of Landscapes: 1. Single System Landscape 2. Two System Landscape
103
3. Three System landscape 4. Multi System Landscape 5. Other Systems in the landscape SYSTEM: It is a physical entity which is installed with the following components The /usr directory The usr directory consists of: 1. TRANS directory 2. Executables Directory exe 3. Profile Directory 4. The system is represented by an Instance. i.e. the Directory DVEBMGS <Instance_Number>.
/usr Directory (OS Level) Screens, Programs, Reports, Functional Modules, Menus, Transactions (DATABASE LEVEL) BASIS LAYER
Repository: Collection of Data (Programs, Transactions, Screens, Menus, Functional Modules) Repository Objects (A-X): The objects with name preceded by any letter of this range are called as standard objects and repository objects and are developed by SAP. All the Repository objects are stored in the table TADIR. SAP recommends not to modify any of the Repository objects. SAP in turn recommends to develop your own objects in customer name space Y-Z alternatively with in defined our own objects using Company Name.
104
Y-Z: Developer can create their own objects with the name starting with Y-Z and this will not disturb the standard or repository objects.
Cross Client Customizing: Customizing: It is the process of keying entries in to the table without changing the basic structure of the system. Ex: measurements, Country, Currency, Time settings etc Support Backup
Template
Backup Client
Default
Template
DD02L: It is a table which contains all SAP tables. SM01: List of transactions SM02: System Message Functional Module: It is the logic behind the screen. System: It consists of repository Objects, Cross Client Objects, and Client Specific Objects. 9User Master Data, Application Data, Customizing Data).
Two System Landscape: It is least recommended landscape by SAP where Development and Quality activities are performed in one system and production activities are on the other system. Disadvantages: The repository and cross client objects still share by development and quality system. So there will be inconsistency and the objects are used either by developers and quality people.
DEV 001
QAS 000
EWA 066
EWA 066
Three System Landscape: This is most optimized landscape recommended by SAP where Development, Quality and Production activities are performed in three different systems. Z<SID>
DEV 001 000 EWA 066 QAS EWA 066 000 001 PRD EWA 066 000 001
107
Delivery
DEV
QAS
PRD
OFF SHORE
INDIA
DEV
QAS
DEV
QAS
PRD
US
DEV
QAS
PRD
EUROPE In a multi system landscape the development (object) is performed at off sore (Common Development Environment) and the objects are transported
108
onto different landscapes. Geographically the objects are customized locally according to local customize settings (Measurements, Time Tax etc) Other Systems in Landscape: Apart from the three system landscape SAP also allows to include the following systems in the landscape. 1. Testing 2. Sand Box 3. Training 4. Pre Production System 5 Pay Roll or Migration system
Note: SAP allows 8 systems in the landscape for a single installation number.
Applying License: Each system needs to be applied license so that runtime issues are resolved by SAP. Go to Transaction slicense The license key depends upon Hardware key o f the machine and installation number. In earlier versions where this T-Code is not available use command suplicnese-get to get the hardware key. Note: Hardware varies depending upon the O/S. Ex: 32-Bit, 6t4-Bit O/S and SAP component Go to market place with authorization to generate license keys and click on the tab license keys Click on the option request license key select the installation number provide the following details: 1. System ID 2. Host Name 3. INR (Instance Number) 4. OS 5. H/W key
109
6. Data Base Now SAVE the information to generate the license key. Normally license generation will take around 1 hour. The license key can be downloaded from the same screen. Alternatively SAP also sends over the Email. The license key is a number of 24 digits but now SAP is sending the license by encrypting in the text format. License can be installed in Two Types: Click on Install New License Specify the Instance Number System Number and Key Click on New License and specify the path of the text file.
400 200
500
When the system is installed it can be accessed with 000, 001 and 066 clients. Client: A client is an individual business entity or a company having its own User Master data, Application Data and Customizing Data. Client is represented by a Field MANDT in the database. The tables with fields MANDT are called as Client specific tables.
110
Client field MNADT is a data separator i.e. the users needs to specify the client number to logon to the client specific data. 000 Client: It is a default client or template client which is provided by SAP. No changes are allowed in this client. As this client is SAP client it is commonly updated by applying Support Packages, Patches, Add-ons and languages etc.. It is used to setup Transport Management System (TMS) and run Standard Jobs. No customizing data, Application Data and User Master Data (Except for the Super Users) is allowed. 001 Client: It is a backup client for 000 client as per initial plans of SAP. But 000 is continuously updated where as 001 is not. So the purpose is deviated in the current versions like SOLMON, NETWEAVER systems. 001 is considered as production client. 066 Client: Early watch client. It is used by SAP to logon remotely and generate Early Watch Alerts. As per SLA (Service Level Agreement) SAP sends two early watch reports per annum.
Client Creation: As the standard clients provided by SAP are not used for production use we need to create our own clients and define client roles Clients are displayed, created and modified in the transaction SCC4. Go to SCC4 Click on New Entries Specify the Client Number Add description Save. Client Number: it is a3 digit ID which varies from 000 to 999 which means all together we can create 1000 client s in SAP. Client is represented by Field MANDT in the database. Note: The clients are available in the table T000. This is the only Cross Client Table with the MANDT field. Click on New Entries and specify the name of the clients, specify the client numbers specify the name of the city. Logical System:
111
This is used to differentiate between clients in the landscape. Logical systems are created in transaction SALE. SALE: System application linking and Enabling. SALE is used to define the logical systems that are used for communicating or transferring data. Systems are using naming convention. Ex: DEVCLNT555, QASCLNT555, PRDCLNT555 Note: Do not try to change the logical system name. Once it is assigned to a client. In order to change logical system i.e. assigned to a client use transaction BDLS. This BDLS transaction is generally used after performing a SYSTEM COPY from production system.
500
500
PRDCLNT000 to QASCLNT000 (We need to change) Go to SAVE Go to BASIC settings Select logical system Define logical system Click on New Entry Specify logical system name (QASCLNT000) Click on SAVE
112
It will prompt you to create a change request Specify the logical system name Select the logical system name Select the currency Specify the client Role Client Role: It specifies the role of the client in the landscape. There are various client roles defining the implementation. Development System: In the system landscape development system the first client i.e. going to be created as Master Client, Parent Client, Golden Client. It is represented by CUST i.e. customizing role. This is only the client in the landscape where changes are made. The changes in the landscape will be carry forwarded to other clients in the landscape. No changes are allowed in other clients. Changes and Transport for Client Specific Objects : 1. Changes without automatic recording 2. Automatic Recording of Changes 3. No changes allowed 4. Changes without Automatic Recording, No transports allowed Cross Client Object Changes: 1. Changes to repository and cross client customizing allowed. 2. No changes to cross client customizing objects 3. NO changes to repository objects 4. No changes to repository and cross client customizing Protection Client Copier Tool and Comparison Tool: Protection Level 0: No restriction Protection Level 1: NO over writing Protection Level 2: No over writing and No external availability CATT and eCATT restrictions: eCATT and CATT not allowed
113
eCATT and CATT allowed eCATT and CATT allowed only for trusted RFC eCATT allowed but FUN/ABAP and CATT not allowed eCATT allowed but FUN/ABAP and CATT only for trusted RFC. Restrictions: Locked due to client copy Protection against SAP upgrade Cross Client Customizing: Changes to Repository and Cross Client customizing allowed: In this particular client the changes made to cross client and repository objects is allowed Changes only to repository Objects: Repository objects only can be modified. DEV: Client CUST TEST SAND QAS: Client CUST TEST SAND C.S NO NO NO CCC & REP NO NO NO Protection 2 2 2 C.S ARC CWR CWRT CCC & REP YES NO NO Protection 0 1 1
Protection 2
CS Client Specific CCC Cross Client Customizing REP Repository Objects Protection Protection Level Protection Level: Level-0: No restriction i.e. no restriction. This client is allowed for over writing and available for client copy Level-1: No overwriting i.e. the client is not allowed to be over written by other client Level-2: This option is provided to not to allow client copy and client comparison Note: Based on requirements we may need to change this protection level and cross client object changes even in the production system. Note: Dont try to change these options of preproduction and production without approval. Changes to these tables are logged with Data and Time along with User Name. Ensure that necessary approvals are obtained from the customer to change the client settings. eCATT and CATT not allowed: It is used to upload the data into the system. We need to specify whether they are allowed or not. Depending upon the requirement we may allow or disallow all the clients. But in production client it should be set to not allowed.
Test Client: It is used to test the scenarios that are configures in CUST client. Changes are moved from CUST to TEST using transaction SCC1. SAND CLIENT: It is used to configure the scenarios based on customer requirement. It will serve as a play ground environment. For the consultants no changes are carry forwarded.
115
QTST Client: This client issued for integration testing and consolidation testing. Changes that re made in CUST client are transported to QTST client. (Tools like mercury, ecatt are used to test the integration between modules along with stress and load testing). TRNG Client: It is used to training end users before they work on the production system. MIGE and Prep (pay roll): These are optional clients that are created based on requirement. Migration client is used to migrate the data from legacy system. Preproduction client is used to check the behavior of changes before they work. Payroll Client: As we cant run the pay roll on the production system (Test Payroll n-number of times) payroll client is created and it is run at various frequencies. Prod Client: It is used by the end user to perform the business. It is more critical client and no other system in the landscape is allowed to carry the business.
116
Pre Requisites of Client Copy: 1. Switch off active log files. 2. There should be enough space in the database. For this execute RSSPACECHECK and RS1TABLESIZE to identify the space and memory requirement. 3. Choose 000 client for initial client copies. However business clients are also allowed for client copy based on protection level. 4. To perform client copy always login into Target Client and copy the data from Source Client. 5. Client copy consumes time, so schedule in the background 6. Choose Profile to specify the data to be copied from source client (Application Data, User master Data, Customizing) Ex: SAP_ALL, SAP_USER, SAP_CUST, SAP_APPL 7. Define RFC destination between two systems to perform Remote Client Copy 8. Check the space in TRANS directory to perform client export. 9. Perform a TEST RUN before actual client copy 10. Client copy logs are displayed in Transaction SM37/ SCC3. 11. It is not recommended to login to source client and modify the objects during the client copy. 12. During client copy the number of tables does not increase. Only the entries in the table increases under the entry MANDT. 13. Remote client copy has to be performed between same systems (Same Version, Same Patch Levels) not possible for different R/3 system. If DB tables having field MANDT then these are specific to client. If there is no MANDT field in the tables these are shared tables for all clients. Desc SAPDEV.usr02 (Describe DB tables) Cross Client is shared by all users.
117
Select MANDT , Count (*) from SAPDEV.usr02 group by mandt; Select count (*) form SAPDEV.usr02; Select count(*) from sapdev.tbdls; Select count (*) from sapdev.tadir; Select BNAME from SAPDEV.usro2 where MANDT=800; Select * from SAPDEV.usr02 where MANDT=789; Select MANDT form DEV.T000; Local Client Copy: It is performed between two clients with in the same system. 1. Create a client (SCC4 for create client) and logon to the target client 2. Go to Transaction SCCL 3. Select Source Client 4. Select Test Run to check the source 5. Select Profile 6. Select start immediately and schedule in background 7. Go to SCC3 to check the logs (To monitor the client copy process) Remote Client Copy: It is performed between two clients with in two different systems in the landscape 1. Create a client 2. Go to SCC9 of the target system 3. Specify the profile 4. Specify the RFC destination 5. Start in the background mode. However we can start immediately Client Export and Import:
118
It is used to perform the client copy between the systems which are not in the landscape It is performed in 3 steps. 1. Client Export: Use Transaction SCC8 to export the client to O/S level. This process generates transport requests in the \usr\sap\trans directory. 2. Copy the files to target system and import them using FTP and transaction STMS 3. Client Import: Go to Transaction SCC7 and perform client import Based on client export profile KX, KO, KT files are created in \usr\sap\trans\cofiles and RX, RO, RT files are created in \usr\sap\trans\data directory. KO, KX,KT are called as cofiles or control files Ro, RX, RT are called as Data Files Copy the files using FTP\normal copy\DVD into target system \usr\sap\trans\cofiles and \usr\sap\trans\data directories. Go to STMS. Add the request and Import Go to SCC7 to perform post client Import activities Standard Clients
DEV PRD
QAS
Migration/ Legacy
119
QTST
TRNG
PREPROD
Client Deletion: SCC4: Delete the entry from table T000. SCC5: Permanent Deletion To lock and unlock a client: Go to SE37 Reports: The following reports can be used to lock and unlick the clients. Sccr_lock_client- lock client Sccr_unlock_client- unlock client Note: Profile SAP_CUST is used initially to setup Golden Client but later this golden client can be used as a source client to copy to another client with other peofiles. Client Deletion: Clients are deleted using SCC5. But they cannot leave any space in the Database i.e. by deleting a client we cannot get any free space in the database. We need to reorganize the database. Locking and Unlocking Clients: Use these two reports to lock and unlock the clients. SCCR_LOCK_CLIENT and SCCR_UNLOCK_CLIENT (or) we can manually use tp to lock clients. Use transaction SE37 for lock and unlock clients. Client Comparison: While applying client specific patches i.e. CIN(Country India Version) use transaction SCMP to compare between two clients and to adjust the changes between two clients. Setting up Library: I order to get the screen context help we need to install the library and setup in SR13. Install the library from library DVD. In the shared folder Sap help.
120
Go to SR13 to set up the library There are four types of libraries available 1. HTML Helo 2. Dynamic Help 3. Plain HTML Help 4. HTML HTTP Help. F1-Field Help F4-Field Possible Value Help HTML Help: It occupies lesser space when compared to other library formats. It is viewed using micro soft help viewer. It is in the compressed format. Plain HTML: It is installed on the file serve and help is displayed in the HTML format. HTML HTTP: It is installed on the web server and requires web browser to execute. Click on New Entries and provide the Following Details: 1. Name of the O/S 2. Variant 3. Training 4. Path of the library up to help data drive directory 5. Specify the language 6. Select the default check box 7. Save the entries
Scheduling House Keeping Background Job: Go to SM36 click on Housekeeping jobs and schedule with default variants
121
Importing Profiles: RZ10 It is used to set the parameters for Work processes, Memory, Buffers etc Go to RZ10Utilities Select Import Profiles of Active Server DB13: Schedule an offline Backup so that we can test the Backup mechanism Scheduling Database House Keeping Jobs: Select check and verify database optimizer statistics, Adopt extents etc Go to DB13 Select the Date It displays all the above tasks. Select each of them and schedule with different times. Operation Modes: These are used to adjust the Work Process between Dialog and Background work processes. Use transaction RZ04 and SM63 to configure Operation Modes. Note: Ensure that the systems configured to solution Manger to configure the business scenarios as per the project plan. SMSY, Solar1, solar2, Solar_project_admin Applying Support Packages: Go to Transaction SPAM Creating Users: Create users and assign SAP_ALL, SAP_NEW (Take out the critical Transactions like SU01, RZ10, RZ03, PFCG etc)
122
DEVELOPMENT
Customizing: This is a process of modifying the system according to the requirements of the customer. This is performed in the transaction SPRO. (SAP project customizing). This is the process of keying the entries into the table. Customizing consists of the Following: 1. Crating Company 2. Crating Country 3. Crate Currencies, Measurements, Time Settings etc For example below are some of the customizations that we will do indifferent modules: FI: We create Vendors, Customers, Cost Centers, Account Receivables, Payables, banks, Tax, Asset management, Description etc SALES: Sales Organizations, Sales Divisions, Sales Areas, Distribution Channels, Billing, Shipping etc MM: Purchasing, organizations, Inventory, Stores, Plants, Storage locations, Ware Houses, Goods, Invoice etc Exits or User Exits or Customer Enhancements: It provides additional functionality to the SAP system. Exits can be searched. Transaction SMOD (Search Modifications) and CMOD (Create Modifications) are used to create the modifications. There are various types of exits: 1. Field Exit 2. Screen Exit 3. Menu Exit 4. Function Module Exit The exits are identified by Functional consultants and logic is programmed by developers.
123
Support Packages, patches and Notes: Support Packages are provided by SAP to provide the bug fixes, functional enhancements and resolve run time issues.
Development:
If the customer requirements are not satisfied then we may need to develop the Programs, Reports, Transactions, Menus, Screens, Scripts and Functional Modules. Programs and Reports: All the customer objects has to be created in customer name space. /Company name/ (or) Y-Z in order to create programs the developer needs to register in the market place (developer License are charged separately). Create user in SU01 and register this user in the market place. Get developers and key in the registration key before developing or modifying first program. It will not prompt for the key later programs. Programs are created, modified and displayed in Transaction SE38. Transaction: It provides an easy way to navigate to the program. Go to Transaction SE93 and create a transaction using the above created programs. Provide Program Name and Screen Number to define a Transaction in the customer name space. MENU: Menus are defined in Transaction SE41. SCREEN: Screens are designed in the transaction SE51. Scripts or Forms: These are available in SE71 and can be modified by copying in to customer name space. Ex: PO script, Invoice, Delivery Order, Quotation, Sales etc Function Modules: These are used in the programs for modularization i.e. frequently and logis is turned in to functional Modules. Domain: It is the least granular field in database which has its types and property like Data Type and Size. It is created in transaction SE11.
124
It is used to keep the fields uniquely in the database i.e. all the location follow the same format. It is a field in the database like Location, name, Currency etc It specifies type and Length. Data Element: It is also defined in SE11 but it is specific to a field pointing to a domain. It is nothing but domain with a meaningful name. Ex: District- Location, City- Location, Country-Location, State-Location TABLE: It consists of ROWS and COLUMNS. Columns are Data Elements and ROWS consists of Data. Screen & Menus: SE51, SE64 Transaction: SE93 Program: SE38 Data Element: SE11 Domain: SE11 Table: SE11 Changing the Standard Objects: SAP mostly never recommends modifying the standard objects in the name space A-X. However based on the requirements customer may change the objects by obtaining Access Key from SAP. These Keys are also referred as SCCR key. 9SAP software change registration key) When you modify a standard program in SE38 it will prompt for Developer Key and Access Key. Get the program Id, Object Id and Object Names. Get the above details to generate an access key from the market place Generate the key only based on the approval. As we are changing the standard programs the changes will be lost after an upgrade.
125
Customizing Requests or Modified Data can be copied from one client to another client, where as workbench requests.
400 000
300
066
001
400 000
300
066
001
400 000
300
066
001
DEV
QAS
Workbench Request
CUSTOMIZING
Create super user. Dont use DDIC to perform any of the following activities 1. Check SCC4 settings client and check whether it is set to ARC (IF it is ARC we can perform client specific customizing and record the changes) 2. Go to SPRO. Click on SAP REFERENCE Img (Implementation) Click on Enterprise Structure Click on Definition Click on Company- Click on New Entry Create a Company Copying one client to another client is a BASIS consultant task. When a client specific entries are Created/ Modified/ Deleted it prompts to assign the changes to a change request of type customizing When a cross client or Repository entries are Created/ Modified/ Deleted it prompts to assign the change to a work bench request of type workbench. Go to SE01 to Create/ Replace change Request. Change Request consists of changes that are recorded during customizing Change request are created by Project Leaders and assigned to Team Members in the form of Task Each change request may consists of one or more tasks Once the task is assigned to a user only that user can work on that task
127
Change Requests are in the form of <SID>K900000 and start with 1 and subsequently it Will change.
Right Click on change request and click on Add User (Super User, Admin etc..) User has to release o Admin has to release the task Change Request can only be released when all the subsequent tasks are released Please perform client coy using SAP_CUST profile before you start customizing. 36687663221261278694 for IDES system SSCR key
128
CHANGE MANAGEMENT
129
SAP BASIS Administration Tutorial prepared by Shastry. This will help in understanding and answering different questions and asked in different interviews. During the requirements analysis phase the BASIS consultants along with team members (Functional Consultants, Project mangers) visits the customer and gathers the requirements and analyzes the requirements to define the scope of thee work. Scope of Work: (SOW) It is the estimated work which is documented during the requirements analysis phase. It defines the following:
What type of O/S What type of database Reusing the existing infrastructure (Ex: Data Center servers) Analyze the servers and submit the feasibility report (This consists of server details like CPU, RAM, Storage, and Warranty) What kind of backup customer is looking for?
Types of backups are: 1. Online backup 2. Offline backup 3. Incremental backup 4. Partial backup 1. Online backup: The system continuously work and wont be shut down. Users cannot get any interruption while performing backup
What are the production hours & define peak times and off peak hours If it is 7*24*365 then what kind of High Availability customer is looking for?
High Availability: it is defined as the availability of servers in the data center to continue the user accesses without any business interruption. There are 2 types of Availabilities: 130
1) System Availability: The system is available and the server is available but users may not connect to the system or experience very low performance. 2) Business Availability: While planning high availability we need to ensure that business operations continue without any interruption.
2. Offline Backup: uses offline backup to store files so that they are unchanged and easily referenced.
131
Advantages: 1. Backing up is the fastest 2. The storage space requirements are the lowest Disadvantages: 1. Restore is the slowest
4. Partial backup:.
We need to define the amount of time the business can sustain without the system. This is called as percentage f high availability. High availability options:
If the customer is asking to implement more & more users: The no of systems to be implemented (R/3 production, test, developing, demo, training, development, sand box, training, pay roll, migration, pre production) The no of interfaces modules to be implemented
The scope of work document will be sent as a draft to the customer manager and project manager. The document will be finalized and signed out by the customer and we can call it as statement of work.
132
Implementation Methodology: Applications are built is SAP but not customized. In SDLC application are not built. Most of the application are built is SAP. SAP implementation is carried out by using ASAP methodology. ASAP: Accelerated SAP. Using this implementation steps are defined in any organization. Currently Solution Manager is used for defining these steps. ASAP consists of the following steps:
1. Preparation 2. Business Blue Print 3. Realization 4. Pre Go-Live 5. Go-Live and Support
Preparation: once the project is awarded the following preparatory steps are performed. Install solution manager to record all the activities of implementation. Note: Prior to this H/W sizing has to be performed to define the H/W required. Hardware Sizing: it is an exercise which will be carried out at customers place to identify the required infrastructure. This will be defined in the SOW. In order to perform H/W sizing SAP recommends to use Quick Sizer Tool. It is a proprietary tool of SAP. The tool is located at
Thumb Rule for SAP Installation: 4.6 40GB/256MB/512MB 4.7EE 80GB/512MB ECC5.0 120GB/1GB 133
ECC6.0 200GB/2GB Legacy systems: These are nothing but the existing system which are to be replaced with new configured system to introduce SAP. In order to perform H/W sizing we need to use the sizing which is available on SAP market place SMP: SAP market place. The Quick Sizer is a tool that is developed by SAP and its H/W partners to help customers get an idea about the sizing . It is free of cost In order to log on to the market place we require USER Id and for performing the H/W sizing we need customer number. Customer Number: It is provided by SAP after sizing the software purchase agreement. SUSER ID: It is a 10 digit number with S provided by SAP to log n to market place. This user id a super administrator and have all the privileges on the market place. We need a SAP user Id called as SUSER ID which start with S followed by 10 digit number Ex: S0001234567
134
The Customer No field value will be automatically shown. The input values required for the H/W sizing are:
Specify O/S,DB, Mirroring, RAID High availability required (Clustering/standby server) Online, offline, incremental backup and partial backup What amount of legacy data Working hours & specify off peak and park hours Unplanned down time
There are 3 types of sizing for SAP.
1. User based Sizing: 191. This sizing depends upon the no of users, type of users and modules used.
3 Types of users:
1. Normal User: The user who communicates with the system and the requests will be in between (0 to 480 dialog steps/40 hours)2 dialog steps/ Min 2. Power user: These are the one who will create dialog steps in between 480- 4800. For every 30 Sec one dialog step 3. Transactional users: These will create around 4800 to 14400 dialog steps per week. These users will create more load on the system. By activating the transactions at least once per every 10 seconds. These users are also called as concurrent users.
Dialog Step: user request goes to server and comes back to the user. Average response time of a dialog step is 800 m sec to 1200 m sec i.e. 1.2 seconds Now specify the Type of users, No of users and Module users.
Output of H/W sizing: The output f the hardware sizing is the requirements in terms of SAPS. H/W Sizing: Disk category Disk Size S,M,L,XL,XXL 900 SAPs CPU Category S,ML,XL,XXl 3200 Memory in MB 198000 X50
SAPS: SAPS stands for SAP Application Performance Standard While defining SAPS, SAP has considered a most populated module SD as a benchmark. Fir every 2000 sales orders which are created in an hour generated 100 SAPS. SAPS: SAP applications benchmark standard SAPS is a unit which will be used to calculate the amount of CPU resources required. The H/W vendor also provides details of SAPS generated by various CPUs. Ex: IBM P-series. It will generate 1000 SAPS per CPU In the same level I-Series generates around 900 SAPS. It always depends upon the speed of the CPU. Depending upon the no of SAPS we need to recommend the CPU. Ex: If the SAPS required are 3600 we may need to suggest going for multiple CPU. Whereas for 1000 to 2000 steps we can recommend DUAL CPUs. Note: H/W sizing will not give you the exact figure required. It is recommended that 40 to 60 % of the sizing needs to be added to the sizing results. 9 dialog steps are required for one sale. For 2000 sales 100SAPS are considered. 2000 X 9 (for 100 SAPS per hour)= 18000 dialog steps/hour Sizing report can be changed from time to time by modifying the details in the quick sizer tool. Note: The sizing which is planned should cater at least for minimum 3 years. Ensure that H/W is capable of handling the enhancement.
137
Communication with SAP: Customer communication with SAP to purchase SAP software depends on No Of Users. Currently for 5 users it is charging 14 lakhs. On top of it every year we need to pay 17% as service fees. Once the agreement is signed SAP shifts software to the customer sight. The following products are available in SAP:
1. MYSAP Business Suit 2. SAP NETWEAVER 3. SAP BUSINESS SPECIFIC (SAP B1) 4. SAP ALL IN ONE 1. MYSAP BUSINESS SUITE: 192. It consists of the following modules.
S.C.M: Supply Chain Management C.R.M: Customer relationship Management S.R.M: Supplier Relationship Management P.L.M: Product lifecycle Management mySAP ERP: myERP Financials, muSAP HR, R/3 Enterprise
2. SAP NETWEAVER: SAP NETWEAVER: It provides all the new dimensional components like EP (Enterprise Portal) , XI (Exchange Infrastructure), MDM (master Data Management), PI (Process Integration)
193. 3. SAPB1:
It is a product which is specific to particular business Ex: Textile, Fabrication, Banking
1. Ensure that H/W is procured according to the sizing requirements 2. Check and verify the S/W from the market place which is shifted to you. 3. Check the DVDs are existing, check if they are readable 4. Download the installation guides from the marketplace link instguides 5. Check for known-problems for installing SAP components (Known problems while installing Solution Managerr3.2) 6. Install Java runtime environment because they are designed using JAVA 7. Set the environment variables JAVA_HOME and path. 8. Set the virtual memory, currently we can set 3X RAM size from Windows 2003 onwards 9. Communicate with network team and obtain static IP address 10. 11. Specify the host name and IP address in etc/hosts Install the O/S with relevant patches
139
12. 13.
Install the RDBMS S/W and patches Set the file sharing to the maximum of NIC
14. Dump the S/W into the server, ensure that the directories should not have spaces and special characters 15. The system name should not be more than 13 characters.
Note: From ECC 5.0 onwards the Installation Key is necessary. This is can be generated in Solution Manager. While doing the installation of R/3 at step 3 or 4 it will ask for installation key.
BW
XI
CR M
Solution Manager
R/3
XI/EP
AP
Solution manager requires 28 GB of space for installation. Solution Manager is used to configure satellite systems R/3, BW, CRM, XI,EP,APO and other WEBAS ABAP and WEBAS JAVA components. Satellite Systems: All the systems which are configured as satellite systems can be monitored through solution manager. Creation of Satellite Systems: Go to transaction SMSY. It is a transaction in solution manager. Create R/3 system, database host, System landscape, and generate RFC destinations with satellite systems. RFC: Remote function call. Generation of Installation KEY or Upgrade Key: Go to SMSY menu -> Go to other object and specify the HOST NAME, SYSTEM ID, INSTANCE NUMBER > click on Generate Installation/Up gradation Key 140
Transaction: It is a shortest way to navigate to the programs in SAP system. There is a command window which will be used to execute the transactions. Transactions are easier to use and cut down the dialog step activity. SAP Based Menu: Easy access menu. It is a standard menu, which is used to navigate to the programs with n no of dialog steps. We can create our own transactions. Transactions are created in T-Code SE93. While defining transactions SAP has followed certain conventions. Transactions that start with SM are used for System monitoring / Sap monitoring Transactions that start with ST are used for system traces Transactions that start with SE are used for sap engineering Transactions that start with SU are used for user administration Transactions that start with VA for sales Transactions that start with ME for material management Transactions that start with AL for alerts. Project Management: Go to T-Code SOLAR_PROJECT_ADMIN It is a transaction where project is created. It is used to define the following:
1. Name of the project 2. Person responsible for the project 3. Name of the consultants 4. Project status 5. Modules to be implemented 6. Mile stone 194. be used or templates can be avoided. 195. 2. Business Blue Print Phase: It is configured in solaro1. It is used to configure business scenarios which need to be implemented in the
141
project. Go to SOLAR01 select business scenarios -> go to structure -> Go to Configure elements -> press F4 196. When we press F4 it will display the list of the components. Select the component for configuration. 3. Realization (or) Configuration Phase: This is where exact configuration of the system is performed. Go to SOLAR02. Select the business Scenarios which need to be configured. Click on the transactions so that it will be routed to the respective transaction in the transaction system.
Note: To define and Configure Satellite system logical system component should exist Documentation such as screen shots and notes can be uploaded. In order to implement the above steps or phase we can use the road map which is predefined in solution manger. Go to Road Map (RM Main)-> Select the Project and choose the road map which is predefined in solution manger -> Go to Road Map and you can down load the road map. We also have the flexibility to define the consultants for each phase.
4. Pre Go-Live (or) Final preparation Phase: During this phase all the configured components are related and moved on to other systems in the landscape. Some of the key activities are 1. Data Migration from legacy system 2. Quality check for all the programs 3. Pre pay roll run 4. End user training 5. Pre Go-Live check by SAP
5. Go Live and Support: The system goes live on the cutoff date, end user start working, run time problems are identified, patch up work will continue, support the project till SLA ends SLA: Service Level Agreement.
142
1. Key Generation 2. Centralization 3. Alerts 4. Service Desk 5. Support package Updates 6. Solution Monitoring 7. Documentation 8. Project Management 9. Reports
Inputs required during Installation : Make sure that Pre requisites are met. (H/W,O/S, Virtual Memory, Environment variables, me JDK,RDBMS) Make sure the availability of Installation master DVD, KERNEL,Export DVD and Instguide. In earlier versions up to 4.6c R/3 setup is used for installation. But from 4.7 onwards SAPINST is used for R/3 installation up to WEBAS version 640 SAPinst tool has independent DVDs for R/3, BIW, CRM, APO. But from 640 onwards master DVD will be used to install ERP, CRM, SRM and other NETWEAVER products. R/3 setup Not GUI Kernel- R/3 Executable Export DVD (If it is component Specific then it is separate for every module) JAVA SAPinst Tool: It is used for installing R/3 components based on ABAP and JAVA. SAPinst tool has its own version. If the tool dosent support the installation we may need to upgrade SAPinst to higher versions. Download the current version of SAPinst executable from the market place The DVDs used for installation are 143
1. Installation maser DVD 2. Kernel [Release Specific] 6.40 or 7.00 (8.00 will come soon) 3. EXPORT DVDs. It consists of component specific data such as tables (Everything is stored in the database) 4. JAVA DVDs which are required for providing graphic solutions (IGSInternet Graphic Service) 5. GUI installation DVD: It is used to install SAP GUI front ends for all the users. 6. RDBMS software (Oracle, MAXDB, Informix, DB2, SQL Server) 7. Language inputs (or) language DVDs. By default German & English are installed. Additional languages are installed using this DVDs. 8. Support Package Collection: It is used to apply the support packages and patches to R/3. 9. Add on components: additional components related to XI, BI and other Netweaver components or other industry specific solutions. 1) Go to Installation Master DVD 2) Select the O/S specific directory 3) SAPinst.exe 4) We need to set the runtime environment JAVA_HOME. 5) SAPinst listener on port 21212. We need to ensure that the port is not blocked. 6) Select the component to be installed say ECC 5.0 7) You need to select the database 8) Select the Unicode or Non-Unicode
Single Code: The database has its code to support the language. Ex: If we have English, German the default code page is 1100. This code page supports only few languages. If we want to access the system with other language which are not supported by current code page we need to install that code page. MDMP: Multiple display multiple processing (Out dated) 144
It is used to support in installing to a language. MDMP has a disadvantage because e have to handle different code pages when upgrading the system. UNICODE: The database reserves 2 bytes to cater to almost all the languages in the world. It supports all the data code pages and no special attention is required during up gradation. Note: We can install non Unicode system but it can be upgraded to Unicode by using SAP export and import tools. Note: Unicode system cannot be converted into non Unicode system.
9) Select Central Instance: Click in next 10) Specify the< SID>: SID is used to identify the system. It should be of 3 characters and can also be alphanumeric. It should start with a character only followed by alphanumeric values.
Note: Do not choose SID from the reserved words SAP, ALL, BIW, ERP. We can use like PR1,P01, D01, D40, P20.. Try to define the naming convention for all the systems in the landscape. Instance Number: This is the port number which will be used to reserve the ports or the instance services and processes. It should be in between 00 t 97.
11) Specify the host name. The name of the system where the installation is performed. This should not be more than 13 characters. Ensure that host name and IP address are entered in etc\host.
MCOD: Multiple components on one database From Oracle 9i onwards multiple components like ERP, BIW, CRM, and SRM can be installed on one database. The databases are differentiated by SCHEMA IDs (Ex: SAP<SID>SAP01). SCHEMA OWNER: It is the owner of database represented by SAP<SID>. Instance number should be unique in the system. Hostname: Name of the software where installation s performed. Specify the host id of database. It will be same as the central instance host.
12) Specify the host ID of database. It will be same as the central instance host.
145
Central System Installation: If the central instance and the database instance installed on the same physical machine then it is treated as central system installation. Distributed Installation: Database instance and central instance are hosted on two different machines. In case of a distributed installation specify the name of the database host where database is installed.
Central System
HOSTNAME DB INSTAB CE
Physical Machine Instance: An instance provides executable services. Ex: Database Instance, Central Instance, Dialog Instance Instance has its own instance number it requires own memory configuration. Multiple instances can be installed on a single system.
13) Specify the amount of memory reserved for the instance for installation. By default 60% of the physical memory. 14) Specify the type of installation
Local Installation: It is performed locally using the local user rights. Unless and until it is recommended dont go for any installation except local installation. Ex: For installing DUET local installation is not recommended. Duet Enterprise is a business tool that blends SAP and SharePoint data to increase staff and department productivity, according to the pair. Microsoft SharePoint, also known as Microsoft SharePoint Products and Technologies, is a collection of products and software elements that includes, among a growing selection of components, web browser based collaboration functions, process management modules, search modules and a document146. Installation Number: If the installation is carried out in a domain ensure that all privileges are held by the user to install the components. Most of the domains have restrictions for passwords, user creations password expiry. Note: It is recommended to install locally and attach it too domain later. Domain Server: All the systems in landscape are configured with domain server. It is a centralized server. It can give IP address of DHCP. Specify the kernel directory and transport directory. Specify the database instance parameters, database id, dbhost, db home (Home Directory of Oracle, db SCHEMAs owner of the database, db character set. Installation log is located in the C:\programfiles\sapinst_instdir\SOLMON32\WEBAS
15) Specify the passwords, SAP system administratin and SAPserviceadministrator SID<ADM>.
SAP system administrator has the privileges to start and stop the system. He is the administrator for R/3 system. SAPserviceadministrator is used to run the services required for the instance. These user ids are instance specific.
16) 17)
Specify the path of the kernel DVD Specify the port numbers
Message Server Port: 3600 + Instance Number Dispatcher port: 1300 + Instance number Gateway Port: 3300 + Instance Number Dispatcher Security Port: 4700 + Instance Number Gateway security port: 4800 + instance number These port numbers are entered into etc\services.
18)
147
1) Specify the SID instance number and host name 2) Select whether it is a standard installation or system copy using migration/Backup/standard export & import. 3) Specify whether we need to install SAP system in the db (or) if the db exists add another SAP system. 4) Specify the SID and host name 5) Specify the amount of memory required by default it is 40% of physical memory. 6) Select local installation 7) Specify db kernel file location. It will be the central instance kernel file location. Because kernel for both R/3 and DB should put together on one location 8) Specify the server directories and redo log files, archive log files location. 9) Specify thee kernel dump directory location 10) sapservice<SID>. Specify thee passwords for sysadm and
11) Specify the location of mirror log files A,B and specify thee location of Origlog A,B and data files (SAP DATA1, SAP DATA2SAP DATAn) 12) Specify the load strategy (load strategy should be loaded by the data file). Specify the DB code page. By default it is 1100. No of parallel jobs can be increased based on the availability of memory by default it is 3 13) manager 14) P01). 15) Specify passwords for system by default it is Specify the password for SCHEMA user (SAP Specify the password for DBA user
148
F1 functional key gives help for all options. F4functional key displays list of components Solution Manager is used to create new solutions. DSWP is also used to configure solution landscape. Roles and Responsibilities:
Generating installation Keys. Monitoring the configured solution through DSWP/Solution Manager. Assisted project manager to create project. Select he business scenarios. Upload the project specific documentation created, satellite system and read the data from them. Create the server, database, systems and logical components. Download the roadmap. Perform H/W sizing. Analyze the sizing results with customer/ H/W vendor. Process of CI installation. Process of DB installation. Install and configure Solution manager. Communicate with the customer to prepare the SOW. Defined feasibility report of the existing landscape. Training to the end users on solution manager. Knowledge on ASAP methodology.
149
Response
Client software is required for client system. Servers are heavily loaded Response time shoots up Resources exhaust hour glass occurs
Client
server
Database
Client software is out from client Interpretation is done at middle level Buffering issued frequently and consistently Accessing is easier.
150
Applicatio n Server
Client
Server
Applicatio n server
In a typical client server architecture client request and server responds. The contents of t client are GUI which is programmed using traditional language like C, C++ etc In order to communicate with DB, database client software should be installed in each client. As the clients are directly communicating with DB the DB is heavily loaded. There is no common area where frequently accessed content can be stored. The response times and the database server resources exhaust and users encounter hour glass situation. In view of the above the concept of application server came into picture. The application server handles the load of the client by taking client software from the clients. The interpretation will be done at application server. Frequently accessed content are stored in application server which reduces load on the database. There by increases the performance of the client request. Application Server has the following:
1) Its own memory 2) Buffer Areas 3) Application server specific work processes 4) Database client software.
Application server is also called as an Instance. 151
1. SAP GUI for windows 2. SAP GUI for Java 3. SAP GUI for HTML
SAP GUI has the following features: Graphical user interface Change the color, font and font colors of GUI Support of all languages One GUI for all the applications (R/3, CRM, BIW, APO..) F1 and F4 help (F1- Field Help, F4- List of possible values) Customized favorites according to user requirements Frequently keyed IPs can be stored as parameters Download compatible higher version can work with older versions With the help of message server it defines the least loaded server and routes the request to that particular server.
152
Use SAP GUI DVD to install SAP GUI. Use SAPinst to install SAP GUI. Installation on few clients can be performed personally, but the number of users increases we need to automate the tasks.
1. 1 to 10 users in personal (remote desktop (or)net meeting) 2. More than 100 users use login script so that GUI can be installed or updated during the user logon to the system. 3. Install the software on a file server along with installation screen so that user can perform installation on his own screen so that user can perform installation on his own 4. Use 3rd party tools like SMS or Software deployment tools 5. Use SAP GUI installation server to install, update user GUI.
153
The above screen shot represents the SAP GUI after installation and configuration. GUI Operation: When user clicks on anyone of the options in SAP GUI logon screen one of the following files will be evaluated.
154
2. Buffer Area: The frequently accessed contents from the database need to be stored in the temporary work area.
This is refereed as R/3 buffer. Various Buffers are:
1. Calendar buffer 2. Table buffer 3. Program & Definition Buffer 4. DD: Data Dictionary 3. Memory: In order to process the user request memory is allocated for work process. 4. Interpreter: It interprets user requests by splitting into ABAP codes, Screens, SQL statements
Dialog Step: User logs onto the system by keying USERID and password. diag is the protocol which connects to the application servers. DIAG: Dynamic Information Action gateway. TCP/IP is the basic thing. Dispatcher receives the user request and checks for free work process. If free work processes are not available it will be queued in the dispatcher WAIT QUEUE. Dispatcher Wait Queue: When free work processes are available user requests are served by dispatcher on FIFO Work processes are W0, W1, W2.Wn. 155
Work processes handle the task of user while handling the task. It sets the memory allocated and completion of the task. It contains the following: ABAP interpreter: It interprets the ABAP code which is containing in the task handler. Screen Interpreter: It interprets the screen contained in the task Task: user request SQL Interpreter: It interprets the SQL statements of the task. The work process reaches database but it could not perform any task from the database, because it it specific to application server. The work process hands over the task to the DB SHADOW PROCESS. SHADOW PROCESS processes the request and hands over the response to work process. WORK PROCESSES analyzes the response which is in the native format and screens are designed to respond to the user. Before sending the response to the user the work process rolls out the user related information into user context. USER CONTEXT: It is a temporary memory or temporary work area where user related buffer area stored. When users login user context is created and it is cleared when the user logs off. It consists of logon attributes parameters, the earlier access contents. ROLL OUT: The process of rolling out of the user related information into USER CONTEXT. ROLL IN: The process of rolling the user context information into Work processes buffers. First Dialog Steps: The average dialog response time should be between 800-1200 milli seconds. Second Dialog Step: Create/Modify purchase order Dispatcher assigns the free work processes to the work process. Work process rolls in thee user context information and checks whether the user is authorized to create the Purchase order. IF authorized the process continues else response back to the user saying that user doesnt have authorization. 156
1. UR 2. DIAG 3. Dispatcher 4. Queue 5. WP allocation 6. T.H 7. Interpretation 8. Reaches DB 9. Shadows process 10. 11. 12. 13. 14. Response to WP Format the response Roll out User context Response to user
Database Layer: It is used to store the data of the customer. Database has its own memory, process (Work Process), buffer areaetc. R/3 work process hands over the request to the shadow process. The SQL statements are interpreted in the R/3 application servers and converted to native SQL statements. All the ABAP programs consists of open SQL statements. When the request sends to the DB they are interpreted into native SQL statements (T-SQL, PL SQL) T- SQl : Transactional SQL PL-SQL: Programming Language SQL T-SQL and PL/SQL common languages for database-independent applications Shadow Process: When the user request is added over to the shadow process shadow process try to find out the request in the user buffer. If it is not available then it will search in the database. 157
Application Layer: Application layer consists of various instances with various processes. Instance provides the following services
1. It has its own memory, own buffer, own work process. One or more instances can be installed on an application server. Application server is a physical device. It consists of CPU, memory storage. 2. Instance also can be referred as Application Server 3. A group of instances can also referred as application server group of instances can be installed on an application server each instance can be configured up to 89 work processes and each work process requires around 75 MB to 150 MB 4. Each work process need to serve 5 to 10 users 5. Application servers provide the following work processes:
D: dialog WP V: Update WP E: Enqueue WP B: Background WP S: Spool WP M: Message server service G: Gateway Service
158
159
The above 2 screen shots are typical windows MMC. (This sacreen shot is taken from Netweaver 2004s flavor of SAP) You can find 3 dialog, 2 update, 1 spool, 2 Background, 1 Enqueue, 1 update, 1 Messaage server and 1 gateway service for this installation. The number of dialog processes and the number of background processes can be increased after the installation is completed by using the transaction code RZ10 (Profile Maintenance). Dialog Work process: It is the only process where users communicate indirectly. There should be at least 2 dialog work processes for each instance. Dialog process is used to create, update requests, transactions, print requests and background tasks. Update Process: It updates the records to the database. There should be at least one update work process for each R/3 system.
160
Transaction: A transaction consists of one or more dialog steps which will be committed together or rollback. It is also referred as logical unit of work. SAP Transaction: SAP transaction is a bundle of one or more transactions which can be executed together or rolled back. Dialog process updates in temporary tables, update process reads the temporary tables and updates the database synchronously. The dialog WP time is restricted to 600 seconds. These all dialog WP need to complete their task within the maximum runtime. If the task could not be completed within the specified time timeout error occurs and program will be terminated. ENQUEUE Process: Enqueue process is used to provide tasks to the records which are going to be updated. Technically enqueue process is used to lock and unlock records which are being updated. There will be only one enqueue process in any R/3 system and mostly it will reside on the central instance, where message server is installed. Note: More than one enqueue process can be configured based on the frequency of updates. Background process: The tasks which consumes more time are scheduled in the Background mode. The jobs which are long running, expensive, time consuming and non interactive will be scheduled to run in the off peak hours using background process. Expensive means the ABAP code is expensive or the DB records have more records. This should be at least one process for each instance. Message Server: It is used to manage all the instances. That is it controls all the dispatcher when logon load balance is configured. It also helps to procure logs from enqueue process if the requests are coming from dialog instance. There will be only one message server in an R/3 system. Message server always resides on central instance. The instance where message server resides is called as central instance. Enqueue process & message server resides on the same instance and it is not advised to host them in two different instances as it will increase the enqueue time. Gateway Process: it is used to communicate between the instances and systems. Thee will be only one gateway process for each nstance. Spool Process: it is used to output the documents to printers, fax, emailetc. Spool process or print request are created by dialog process and background process. The print requests are stored in the temporary area which are read by spool process to format and print the request. We can configure one spool process for each instance. 161
Dialog Instance: we can configure as many work processes as possible but it limits to 89 per instance. If the load on the instance increases in terms of users we consider displaying additional dialog instances. Central Instance: It is a dialog instance where message server and enqueue process resides. Database Instance: it is an instance where database is hosted. It resides on the database layer. File structure of Application Server: There is a shared mount (or) shared directory which will be created during installation. \\usr\sap \\usr\sap\trans \\usr\sap\<SID>\sys\exe\run Note: the sum of non dialog process should not exceed the sum of dialog process in the dialog instance. Central System: This is the installation where the central instance and the database instance are installed together. Application Layer
A1 A2 A3 A4
Central System Installation Check: Start MMC Check the status of R/3 system. The status is Green- Installation is proper The status is Yellow Network interface problem 162
On WINDOWS NT use MMC to start and stop the system. Start up procedure for SAP WINDOWS: MMC sapstart.l og
Alert_Dev.or g
DEV
Read
dev_ms
DB(If not
started)
dev_disp
dev_rd dev_icm
dev_w0 wn
163
During startup of the operating system Windows NT, the NT Service Control Manager starts all the services in the service list that are configured for automatic startup. The information relevant to these services is stored in the registry and is read by the Service Control Manager during startup. Several services of type SAP_ (the SAP service) and Oracle Service,but only one SAPOsCOL service, can be run on one computer. The SAP service, SAPOsCOL, and OracleService should be configured for automatic startup. To start the Oracle database and the R/3 System, the administrator performs the following steps: Log on to the operating system Windows NT as user adm. To start the R/3 System, open the Microsoft Management Console (MMC) using the SAP R/3 Systems Snap-in. Right-click on the system icon and select Start. The sapstartsrv.exe executable sends a message using a named pipe to the SAP Service, SAP_. The SAP service starts the database by executing an NT script that calls the Oracle Server Manager. The Oracle Server Manager executes an SQL script that starts the database if it is currently not running. Once the database is up and running, the SAP service starts the Message Server (msg_server.exe) and the Central Instance dispatcher (disp+work.exe). The R/3 System has been started successfully when the icon for the central instance changes color to green. The colors displayed in the MMC have the following meanings: red - the process terminated abnormally; yellow - the process is being started; green - the R/3 System has been successfully started; gray - the process is not running, status unknown. You can also start the R/3 System with the NT scheduler called at. For this kind of start, SAP provides the executables startsap and stopsap which are executed 164
locally. Use - startsap name= nr= SAPDIAHOST= to start an R/3 instance and - stopsap name= nr= SAPDIAHOST= to stop an R/3 instance (the executables sapstart.exe, sapsrvkill.exe and sapntwaitforhalt.exe must be in the same directory)
To provide a stable startup procedure, a parameter read sequence (also known as the parameter replace sequence) is defined during startup as follows: R/3 processes read the appropriate parameters from the R/3 kernel, from the NT system environment variables, and from the NT Registry environment variables. The default profile \\\sapmnt\\SYS\profile\default.pfl is read. Profile values already defined in the R/3 kernel are replaced with the values in the default profile. The instance profile \\\sapmnt\\SYS\profile\__ is read. Profile values already defined in the default profile or in the R/3 kernel are replaced with the values defined in the instance profile. This procedure ensures that system parameter values reflect not only the instance profile but also the values in the default profile and the R/3 kernel. The SAP service reads only the start profile and the default profile. The R/3 kernel (disp+work.exe) reads only the default profile and the instance profile. If you change the default profile, you must restart the SAP service (including the R/3 instance). If you only change the instance profile, you only need to restart R/3 using the MMC.
165
R/3 work directories contain trace files and error files for messages relating to the startup of work processes. Each R/3 instance has a separate work directory containing information that may not be found in the R/3 System log. The work directory files are initialized in chronological order. During startup, the SAP service executable SAPSTARTSRV.EXE writes: Database logs to the file STDERR1 Message server logs to the file STDERR2 Dispatcher logs to the file STDERR3 To define the level of information written to the developer trace files, set the profile parameter rdisp/TRACE in the instance profile. The possible values for this parameter are: 0: Write only errors (no traces) 1: Write error messages and warnings (default) 2: Write error messages and a short trace 3: Write error messages and the complete trace
166
alert_dev.l og Read
dev_disp
read
dev_rd dev_icm
start
167
In UNIX operating system the command startsap command id used for starting the SAP instances. This command will in turn call the SAPOSCOL (If not started). This will in turn calls the startdb script which will start the database. Now the sapstart command is executed which reads the startup profile. Then the message server will be started. Dispatcher (disp+work.exe) will be started which reads the default profile and the Instance profile. Then the dispatcher will start the GATEWAY, ICM, and the Work Processes. There will be different trace files that will be generated while starting the sap system. Alert_dev.log: generated when starting the Database. Startdb.log: Log file for the database start up. Startsap_ZDVEBMGS00.log: log file for the sap start. Dev_ms: Message server trace file Dev_disp: Dispatcher trace file Dev_rd: Gateway process trace file Dev_icm: ICM trace file Dev_w0----Wn: Trace files for the work processes.
In UNIX the commands are used for the starting and stopping of SAP: To start the DB server: $ startsap db To start the sap instance: $ startsap r3 To start all at a time: $ start sap all
The following commands are used in Stopping SAP: To stop sap instance: $ stopsap r3 168
To stop the database: $ stopsap db (The script stopdb is executed when this command is executed) To stop all with a single command $ Stopsap db
In Windows or UNIX or Linux or any other operating system before stopping the SAP check the status for List of user logged in the system. (SM04, AL08) List all active processes: SM50, SM66 Send a message through: SM02 or any other third party tool for the execution. Brief of SAP startup procedure: On windows NT use MMC to start and Stop the system On UNIX use scripts STARTSAP and STOPSAP to start and stop the system. Sequence of START UP: First the database is started. CI is started Dispatcher will be started Gateway will be started ICM will be started While stopping the system: Ensure that Application servers are stopped CI is stopped DB server is stopped
169
Users created during the installation: <SID>adm to admin the R/3 system SAPservice<SID> is the user to run SAP services. These are the two operating system users during operating system installation. DB users are: When the ORACLE system is installed SYS & SYSTEM are installed On a MS-SQL system and USERID is SA SA-System administrator. DB users created during DB instance installation:
SAP<SID>-SCHEMA owner of the database OPS$<hostname> SIDADM ORASID is the DBA on UNIX platform.
OPS$ Mechanism: This is the mechanism which allows the operating system users to connect to database without prompting password Services: These are various services which are created during installation.
1. SAPOSCOL: It is the O/S collector which will collect the H/W information from the system before starting the instance. If the required resources are not found this service will not be started. Execute SAPOSCOL at command level and clear catche. There will be only one SAPOSCOL service in a machine (SERVER) 2. SAP<SID>_<Instance Number>: It is the service which is required to start the R/3 instance. If it is not started R/3 will not be started. There will be one service for each instance. 3. ORACLE TNS LISTNER: This service is required to start DB. There will be one listener for each DB 4. ORACLE SERVICE SID: This is used to communicate between the DB instance and other instance. All the above services are displayed in
170
services options else go to My Computer-> Right Click-> Manage -> Computer Management -> Services and Applications.
197. On UNIX use the command ps-ef to display the services. The service user SAP<SID> runs the services. 198.
Directory Structure: The only directory installed during installation is \usr\sap. This directory is a shared directory by default, because it needs to communicate with other systems in the landscape. Communication means to share the folders across the systems. This is for Windows NT. On UNIX SAPinst is created as a soft link pointing to the above directories
171
1. \\usr\sap\<SID>\sys\exe\run 199. NUC\i386(or) iA64\ In future the kernel directory is \exe\UC (or)
This is the directory where kernel executables are located. Some of the kernel executables are Startsap Stopsap SAPOSCOL MSG_server.exe Dispatcher+Work process (disp+work) \\usr\sap\<SID>\DVEBMGS<instance number> It is the instance directory to display the type of instance installed. For a dialog instance it will be DO7 like that. Under this which consists fo logs and traces of the instance. \\usr\sap\trans: This is a directory where transport related information lies. \\usr\sap\<SID>\sys\profile: This consist of the properties of instances Startup Mechanism: User clicks on strartsap -> saposcol which is already running provides the information of the system. Instance service sapM<sid>_<instance number> is started and this service look into start up profile. Start up profile is used for starting the sap system. This resides in the directory \\usr\sap\<SID>\sys\profiles Start_DVEBMGS<Instance no>_<hostname>.pfl Ex: Start_DVEBMGS00_indiainternat.pfl It consists of the following service statements
172
msgserver.exe
1. <SID>_D01_<hostname>.pfl Dialog instance 2. DEV-DVEBMGS00_indiainternat.pfl Central instance 3. Work process configuration like Types of WP, No of WP 4. Memory configuration (Role Memory, External Memory) 5. External Areas
Note: The above profiles are maintained in transaction code RZ10, RZ11 The profiles which are changed using RZ10 requires the instance restart. The profiles which are changed using RZ11 (Dynamic Change to parameters) dont require any instance restart. 173
RZ11 is also used for displaying documentation for profile parameters. Dont change start up profile unless there are changes in the directory structure. Note: Default profile parameters are overwritten by instance profile parameters. System shutdown procedure: Reasons for shutdown are as follows
Change parameters Up gradation by the support package Offline backup Migration of H/W or H/W interpretor General (UPS, Air Conditioning, Power) Data Center (Exchange, Domain,)and dependent systems
Process of shutdown: Define and schedule down time. Ex: Sunday 1st Jan 2010 6:00 Am to 2nd Jan 2010 5:00 PM Effective Users: All the R/3 users domain users BIW users.. Purpose of Downtime: _________________________________________________________________ KPR: Key person responsible, Role, contact number Inform all the users well in advance through notification, email, or through TCODE SM02. Problems during SAP startup: Scenario: We have scheduled downtime for our R/# system to update the O/S patches and DB patches. If the system could not be started after applying patches the system has no history of startup problems.
1. Check whether the server is pinging to the right host. Use ipconfig to check the IP address and ping to that server
174
2. Check SAP<SID>_00 started (Central Instance started or not) 3. Check the DB is stated or not. 200. 1.If the DB is not started check alert<SID>.log in saptrace\background. It gives all the details fo database directory startup details. 201. LSNRCTL status 2. If it is not showing any results check
202. 3. Check the startdb.log in the work directory \\usr\sap\<SID>\DVEBMGS00\work 4. Check all the services 5. Check message server log in work directory 6. Check for the dispatcher log 7. Check for the dispatcher log 8. Check for the ports which are blocked or running. Use command NETSTAT-P 9. Check the environmental variables missing. In that case we need to configure them manually
Environmental Variables: These provide runtime environment for the users, as we cannot restrict the installation directories to be specific to one drive. So we define home directories like HOME_JAVA, ORACE_HOME and define them in the environmental variables pointing to the directory which they are installed. Path: Class path is used to define the location of executables to execute from the command window, irrespective of drive and directory. Ex: cmd, calc There are 2 types of environmental variables:
1. User variables which are specific only to the users and it will not affect the system variables 2. It is specific to the system and use globally by all the users on WINDOWS NT. Go to MY Computer Right Click Properties Advanced Environmental Variables
175
In UNIX: $ SET JAVA_HOME $ export JAVA_HOME $ set display, $ set Oracle_Home SET command sets the environmental variables only for that user services.
10. Changes in the Parameters: We can change WP configuration, memory, buffers without any additional resources. 11. Changes in the passwords of service users
12. In compatibility of kernel executables with existing O/S patches and DB patches 13. Incompatibility of kernel executables with existing O/S kernel and DB kernel version
176
When the O/S and DB are patched they look for current versions of R/3 executables. If there is a mismatch with the versions R/3 will not start. In this case we need to perform kernel upgrade. Kernel Upgrade: It is a process of replacing the current run directory i.e. R/3 executables are replaced with the new versions of R/3 executables. Reason for up gradation: While applying the support packages and patches for R/3 system and applying the patches to O/S and DB sap recommends to upade the kernel to resolve the runtime problems either in the future or during the post installation. Process of Kernel Up gradation: Check the kernel version by using disp+work. Patches: it is the missing functionalities or bugs that arise while running the system. Up gradation: Group of patches. Command disp+work Go to market place Go to my application components. Select Kernel for 32 bit/un (or) nuv/download DB independent [sapexe-----.sar] Kernel for 32 b it/ uc (or) nuc/ db dependant [sapexe_db----.sar] Select the newer version than older version Uncar the files using the command sapcar-xvf Trace the uncar foles into new run directory (exe\run) Go to exe new run copy all the uncar files to new run stop the system and services copy existing run directory to old run directory copy executables from new run to run directory [replace]start the services and the system Note: instead of downloading and upgrading entire kernel we can also upgrade each lernel executables independently. But we need to check the dependencies. If we could not find the problem increase the trace level. Use this parameter to trace the start u activities into work directory. The following files are trace files which provide more granular information.
1. Stderror 0
177
2. Stderror 1 3. Stderror 2
These provide DB server logs message server logs, dispatcher logs. Check DEV_DISP, DEV_MS,DEV_W0DEV_Wn for all the processes. [dev: developer trace] Check event viewer application log and system log. Check syslog in MMC RCA: Route cause analysis
1. MMC sys log 2. Work directory dev_ms 203. 204. 205. 206. 207. 208. 209. Dev_disp Dev_W0.Wn Startsap.log startdb Stderror0 Stderror1 Stderror2
3. Event viewer application system logs 4. Environment variables 5. Network connectivity 6. Check services 7. Check DB logs 8. Check parameter change of DB of R/3 9. Check ports 10. Check password changes
1. Find the patch level: to find this go to left top corner Click on sap logon select the file version click in options to activate the GUI trace level 2. Message server time out 3. .ini files are missing: take backup of the ini files and store on a file server and restore them 4. GUI could not logon: check the other desktops to check the login problem. Check the network connectivity on desktop. Check the client number which is accessing. Ask the user to send the status bar. Check whether the user exists, check caps lock, check password locks. 5. User Complains of incorrect format: check which format he is looking for and change accordingly 6. Language option: when language other than English is installed them language characters and new screens are not displayed properly. 7. Check whether the GUI is compatible else upgrade GUI (ex: 4.6c, 4.7d, 6.20, 6.40, 7.10) 8. Functionality missing in the GUI: (Junk characters like -, ?, !, #.....) if the probles is related to single user uninstall the GUI on the problematic machine and reinstall GUI. If the problem is generated to all check the relevant patch and upgrade all the GUI. Else write to SAP so that a correction note will be released. 9. Check authorizations 10. parameters Use parameters missing: recreate the
Note: Screen Painter is used to design GUI screens. This is done by ABAP team. SUMMARY: So far we have seen the following topics for installing SAP system:
1. SICK (SAP installation consistency check) 2. SE06 (Performing the post installation activities) 3. SE03 (System change option) 4. Define the landscape 5. STMS (SAP transport management system 6. RZ10 (Importing the profiles the DB) 7. Client Creation and copies 8. Help installation (SM13) 9. License installation (SLICENSE) 10. Support packages and upgrades
The above activities are major tasks and require more analysis and pre planning before the system (SERVER) starts working for the designated work (Development, Testing, Production).
1. SICK(SAP installation Consistency Check). Check the consistency of the system by executing transaction SICK and seed for any errors. This cab done by logging in as SAP* with 000 client and pass word 06071992 (Default). This password can be changed during the installation. 2. SE06: It is used for performing the post installaion activities. SE06 has 2 options:
180
1. Perform installtion activities (Post installtion activities) 2. DB configuration ( this is used to set the change the trapsport sustem or correctin transport sustem)
SE06 enables the CTS (change transport suystem or correction transport)
3. SE03: click on the system change option in SE06. This option is used for setting the system change option i.e either to modifiable or notmodifiable. If the system is set to modifiable then the software component s can be modifiable. This option is se set only to Development and Sand Box systems. 210. If the system is set to not modifiable all the software component are not allowed to change. This options is only set for Quality and Production systems 4. Configuring the Transport management system: 211. 212. SAP system landscape:
213. It defines the flow of objects between the systems. Inorder to achieve the high xonsistency and stability SAP recimmends t use more than one system in the landscape. 214. 215. Single System landscape: In this only one system will be there for the entire landscape which is used for all the purposes(Development, Qualtity and Production) 216. 217. 218. 219. 220. 221.
Quality Production Single System Landscape
181
222. 223. 224. 225. Disadvantages: The objects which are used by developers will not be allowed to eited by the tester and production users. The system is not consistent either for development quality pr production. No production for single system. 226. systems. 227. 228. 229. Two System landscape: it is the least recommended landscape by SAP where developers and quality activities are performed in one box and production activities are carried out in another box. Production server is consistent but inconsistency between quality and development. 230. 231. 232. 233. 234. 235.
Two System landscape Development
Ex: It is used to setup demo, desk and training It is a resource minimized setup.
Productio n
Developm ent
Quality 182
Production
This is the optimized landscape recommened by SAP where development, quality and production actions are carried out in individual systems (or) boxes. Development System: This system is used to develop the objects and customizing activities are carried out. This system is usedd by functional consuoltants, ABAP development team and BASIS team. No end user and production user can logon to the system. Quality system: The objects whch are modified (or) customized in DEV system will be tested in this system. This system is utilized by the qulaity team, training team and BASIS consultants. Production System: The objects which are development modified customized in development boxx (or) system are transported to quality and qullity approved objects are moved to production system. The system is only allowed for end user. The restricted actions can be given to ABAP and functional tools. Each user created in the system are accountable for license. Note: For each system we can configre upto 8 systems in the landscape. 8 different systems in a typical landscape are:
1. Pre production 2. Pay roll 3. Migration 4. Sand box 5. Development 6. Quality 7. Production 8. Training
183
Transport domain controller: As part of post installation activity (TDC) transport domain controller needs to be configured on a high availability system. As this is only one system I the landscape we will configure development system as TDC. TDC manages all the systems in the landscape. Configuring TDC:
1. Logon to 000 client 2. Goto transaction STMS (SAP transport management system) 3. If STMS is not configured within a domain then a pop up box will be displayed to configure the domain. If the STMS is already configured POP UP BOX will be displayed to create another domain (or) oncluding the domain. 4. Specif the name of the domain 5. Specify the description and save 6. Domian controller is created
Defining landscape: As we dont have quality and production system in the landscape we need to define them as virtual systems. Defining the virtual system:The systems which are going to be deployed in future can be configured as virtual systems. Virtual systes name should be exactly the same as of the real time systems. Including the systems in the domain:
1. Delete the virtual system 2. Logon to the real system which is replacing real virtual sstems. (Logon as client 000) 3. Go to transaction STMS POP UP BOX will be displayed. Select include in the domain specify domain name in the domain sytem 4. A request for inclusion is sent to TDC 5. Logon to the TDC goto STMS selecty systems there will be a system waiting for approval 6. Select the system and approve. Ti si included in the domain
184
Note: A communication user TMSADM will be created and RFC destinaltion between the TDC and member system is established. Domain.cfg is updated Domain.cfg is stored in \ \usr\sap\trans\bin It consists of domain settings lie details of TDC and members. It is updated whenever there is achange in the landscape. Transport Group: The group of systems which share the same transort directory are said to be in one group. Transport Layer: It is a path defined to transport the objects. SAP transport layer is by default . The name of it is trapsport layer. Transport Routes: Transport routes define the flow of objects between systems. There are two types of routes. They are
1. Consolidation Route: The route between development and quality systems is called as consolidation system 2. Delivery Routes: It is the route between quality and production systems
Development system is also called as Integration sytem Quality system is also called as Consolidation system Production system is also called as Delivery system Defining the Transport Groups: Log in to TDC in client 000 Go to STMS Select transport groups We have the option of defining the routes in 2 ways.
5. RZ10:Import profiles of all active servers onto the database. Goto RZ10 select the profile import in to DB
185
6. Install License: In order to install the license we need to obtain license key from SAP market place. 7. Inputs required to get the license key are 1. Customer Number 2. Installatio Number 3. Host Name 4. SID 5. Instnace number 6. H/W Key
In order to get the H/W key goto transaction SLICENSE (or) at command level saplicense get Get the key from market place and install using slicense transaction. Initial license is valid for 30 days and when license is installed the expiry date DD:MM:YYYY
8. Install Library: Goto SR13 specify the tyoe of HELP (HTML help file, HTML, HTTP< Dynamic Help). The library can be installed on a file server or on a web server. Go to SR13 define language define variant (IWB.Help, Documentation)
Repository Objects: The objects which are shifted by SAP are called as Repository Objects. These are also called as SAP standard objects.
186
Client 001: It is a backup of 000 client. But it is not updated continuously. There is a variation between 000 and 0001 i.e. 0001 is not updated continuously. No 001 for 7.0 version. Client 066: It is an early watch alert client used by SAP to generate early watch alert report to the customer. In client 066 the report serve as recommendation to fine tune the system interms of expensive reports, transactions, SQL statements and users. Need for client copy: In order to adapt SAP systme to the requirements of the customer we need to perform customizing. But customizing is not performed in 000 client as it is a template client and client 001 is a back up client. In order to perform customizing we need to define our own client. When the client is defined it doesnt consists any data, so we need to copy data from the existing clients. Client 000 is eligible for client copy because it is continuously updated. Client dependant data/ client soecific data: The data which is visible only in that client is called as client specific data. Ex: User master Data, Application data, Customizing Data
1. The Purchase Orders whicha re created in a client are not visible in another client. 2. The users which are created in one client cannot login to another client. 3. The application like invoices, delivery notesa are client specific i.e. if you perform customizing in one client it is not visiblw in another client.
Note: The customizing which is performed in one clietn wont effect the other clietns. Client independent data/ cros client data: The data which is visible across all the cliet is called as cross client data (or) client independent data. Ex: Calanders, Measurements, Time Zones, Timings, Currency. If we perform customizing to the above it will be effective in all the clients. CCC: (Cross Client Customizing): The above one is known as cross client customizing. Repository Data:All the SAP standard objects are referred as repository objects (reports, Functional Modules, Programs, Transactions) etc
187
The changes t repository data will effect the entire R/3 system. Some times it may mall function and the system will be crashed. Because of applying chages, patches etc. Note: While changing the repository objects follow SAP recommendations and most of Repository Objects are locked for editing. If there is a need to modify the repository objects we need to obtain key from SAP. Customizing: It is the process of keying entries to the templates. Ex: Company name and address entries, sales organizations, employee datga applications such as material master data, vendor master data, customer master data. Customizing is performed in SPRO. User Exits/ Customer Enhancements: These provide an additional functionality to the existing structure (SAP objects). There are variou types of exits.
Forms are created in SE71 Functional Modules in SE37 Repository objects like classes packages are created in SE80 These activities are purely done by the ABAP team. Changing the SAP Standard objects: In order to modify SAP standard objects we need to obtain object access keys from market place. These are also referred as SSCR. SSCR: SAP software change registration In order to modify the repository objects we need to get the following
1. Client number (Variable in between 000 to 999. Bu it should be used from the SAP reserver numbers. It should be unique in the system. 2. Specify the client number 3. Describe the client 4. Currency of the client 5. Location of the client 6. Roll of the client (Demo, Customizing, Testing, Production) 7. Specify client specific customizing is allowed or not 8. Specify cross client customizing and repository chanes are allowed or not 9. Protection level 0, 1, 2 10. ecatt is allowed or not.
189
By default from 000 999 total 1000 clients can be created. Logical System: In order to distinguish between various clients of different systems logical systems are defined and assigned to clients. Defining Logical System: Go to SALE Tcode (SALE: SAP Application linking and Enabling) Click on sending & Receiving system for further information click on logical system click on define logical system logical system click on define the logical system click on new entry. Specify the name of the logical system. The naming conventions for logical system are <SID>CLNT<CLINTNO> Ex: DEVCLNT200, QASCLNT200, PRDCLNT200 (Upper Case letters) Client Role: SAP defines client role to be specify the functionality of the client. 1. SAND BOX: It is a play ground where functional consultants will customize the requirements of the customers. It is represented as SAND. The changes which are performed in the system are not carry forwarded. This client is allowed for only client specific customizing. 2. Customizing Client: It is represented by CUST. It is also called as Master Client, Golden Client. This is the only client where client carry forwarded. This is the only client where changes are initiated and carried forward. This is the only client where client specific cross client and repository objects are modified. Note: Other than this client no clients are allowed to modify objects. 3. Testing Client: It is represented by TEST. This client is used for testing the customization which is performed in cust client. Transactions SCC1 is used to copy the change request from CUST client. Note: SCC1 is used to copy transport requests between the clients within the system. This client is used to test the modules which are customized and if the consultants approves it will be released. 4. Quality Testing Client: It is represented by QTST. This client is used to test integration between modules cross client object test and repository object test. It is also assured that all the objects are tested for quality stress etc Testing tools are deployed to test the objects in this client. Note: Each of the objects need an approval to move into production.
190
5. Training Client: It is represented by TRNG. It is used to train the end users of the company Note: Changes to ATST and TRNG are made by using transport request 6. Production Client: It is represented by PRD. This is the only client where the company data is populated by end users and production operations are carried out. Most critical client and to be secured in the landscape. Most critical client sensitive info (financial, pay roll, client) data migration.pre production client can be created additionally based on customer requirements. Change options available for clients (Client Specific Settings): Recording: It means saving the changes to a change request. 1. Automatic Recording of Changes: The changes which are performed in this client are automatically recorded to a change request. 2. No Changes allowed: changes are not allowed in this client. 3. Change without Automatic Recording: Changes will not be recorded to change request. 4. Changes without automatic recording, no transport allowed: Changes without automatic recording to change request and changes cannot be transported. Cross-Client Object changes: Changes to repository and cross client customizing allowed: The changes are allowed in CUAT client (or) master client (or) Golden Client. Protection Level: 0: No Restrictions. Client is allowed for client copy and client comparison is allowed. Client can be overwritten. Protection Level 1: No overwriting by client copy Protection Level 2: It is neither allowed for a client copy not for comparison between two clients. eCATT & CATT allowed: Restriction: The client should be always protected against upgrade unless we perform an upgrade Note: When the client is created an entry in table T000 is created without any data. We can logon to the client using USERID: SAP* and Password: pass in that particular client for client copy. Client Copy: There are 3 types of client copies: 191
1. Local Client Copy 2. Remote Client Copy 3. Client import and Export Pre Requisites of Client Copy: 1. Users should be working in the source client, reserve at least 2 background processes for client copy. Dialog process can also be used. 2. Logical system name should be defined and assigned to a client 3. There should be enough space in the DB (Table Space and enough disk space) 4. RFC connection should be defined between two clients to perform Remote Client Copy 5. Enough space should be there in Trans Directory to perform client transport 6. The sized of the client can be determined by using report RSSPACECHECK 7. Select the profile determine the type of the data to be copied from source client to target client. 8. Ensure that source and target clients are of in same versions in terms of O/S, DB, R/3. Note: BD48: Changing the logical system name Note: Size of the table can be determined using the reports RSTABLESIZE and RSSPACECHECK. These reports can be executed in SA38 T-Code Local Client Copy: Go to SCCL (SCCL is the tcode used for local client copy) Before client copy we have to create the client. This can be done using the table SCC4. Go to SCC4 from which we can create an entry in the table T000 (List if clients present in the system). Make entry for the client number, description, logical system name, application server and click on save. This will create the client entry i.e. client without any data in it. Now Logon to the Target Client go to SCCL select the profile Profile: It defined the type of data to be copied from the source client. We should use SAP defined profiles that starts with SAP. (Ex: SAP_APPL) 192
Note: repository objects will not be copied during the client copy (Local or remote) Select the source client from which the copy has to be performed. By selecting the Test Run we can perform a resource check. Simulation reads the extra data andif there are any problems with DB like space, and then simulation terminated the copy. Go to SCC3 for detailed log of the client copy.
Remote Client Copy: It is performed in transaction scc9. Select the profilego to profile select the source destination Repository objects can be copied Client specific data is copied RFC connections are defined in SM59. specify the system name Specify the host name Schedule it in the Background mode (or) dialog mode Perform a test run for simulation Then select RFC RCC (RFC) is between 2 systems LCC is between 2 clients Client Export and Import: When there is a necessity to copy a client from one landscape to another landscape client then we will use the export and import of the client. Client Transport is performed in 2 steps: 1. Client Export 2. Client Import
193
It is performed in SCC8. Client wont be copied into DB in turn it will be copied into transport directory in terms of Control Files and Data Files. Select the profile to define the type of data to be copied or exported. Specify the name of the target system and run it in the background mode. Note: Ensure that transport directory have enough space to host the data files and command files. Copy of the co files and data files to the target system and execute the following commands to import the data into the client or we can use STMS to import the client data. Copy the command files and data files into the target system. tpaddtobuffer <tr> <SID> <CLNT> pf=\usr\sap\trans\bin\tp_domain_<SID>.pfl tpimport<TR><SID><CLNT> pf=\usr\sap\trans\bin\tp_domain_<SID>.pfl tpactive buffer: It is used to add the contents to the target system in order to import any change request. It should be added to the buffer of the target system. During the normal transportation the object follow the transport routes and automatically added to the buffer of the target system. It is also one of the reasons for creating virtual system. Note: In 2 system landscape only consolidated route is there no delivery route. Go to STMSGo to Import select the system go to extras add transport request to import pool select the transport request click on semi loaded truck to import the request Post Processing of client transport: Go to Scc7 select the transport request select the profile name specify the export system <SID> schedule as a background process.
194
While performing post installation activities the clients are created and populated with 000 template. Create users and assign them complete authorization to configure the system according to the requirements document. Customizing: It is a process of adapting the system according to the requirements without changing SAP standards. It is a process by keying entries into the templates or tables. Ex: Setting of sales organizations, company code, sales areas, plant storage locations, cost centers, profit centers etc. Whenever a change is initiated in the client which is set to automatic recording of changes a change request will be generated or changes will be added to the existing change request.
CUST
PRD
Note: Client copy always rewrites the target system. There is no method for client merging. If there are 1000 users in client 300 and there are 500 users in client 200. If the users in client 200 are to be copied to client 300 then all the 1000 users n client 300 are overwritten by client 200 users. So now there will be only 500 users in client 300. Authentication USER and PASSWORD TIN: Tax Identification Number Authorization What needs to be accessed and what not to be accessed. (Permissions to users)
Change Request: Whenever a change is initiated in the client which is set to automatic recording and change a change request will be assigned to the changes (or) a change request will be prompted to the changes.
195
No Changes Allowed Production client Automatic Recording of Changes Customizing client Changes without recording SAND, TEST Changes without recording TEST, QTST, TRNG No Transports are allowed Change request s created by project manager, project leader. Each change request created consists of one or more tasks which are in turn assigned to developers, functional consultants. Change requests are created in SE01. Change request can be copied from one client to another client using SCC1. Change Request Mechanism: Change request is created by project leader. Project leader defines tasks under a change request and assign it to developers (or) functional consultants Change Request naming convention will be like: <SID>K900001 <SID>KA00001 Tasks follow the same naming convention. All the tasks are located by developers (or) consultants I.e. the objects which are being assigned activity is completed the developers/consultants will release the task. If the task is released the objects can be used by other people (or) other team members. Change request numbers are stored in table E070 Tasks follow the same naming convention. All the tasks are located by developers (or) consultants i.e. the objects which are being created / modified are locked to that user. Once the assigned activity is completed, the developers / consultants will release the task. If the task is released the objects can be used by other people (or) other team members. Change request numbers are stored in table E070. If one of the tasks is locked y the user and left then change the owner of the tasks based on the approval (or) instruction. Release the task as administrator. 196
A change request has many tasks and each task is assigned to one developer and the developer locks the table. Tasks are released by developers/ consultants in SE01. Once all the tasks are released the change request can be released by the project leader. Types of Change Requests: 1. Customizing Change Requests: This change request is to save the changes related to client specific settings i.e. changes are related to a particular client. These changes needs to be imported subsequently into all the clients in the landscape. Ex: User master data, Sales Area, Company Code, Sales organization, Application Data and Customizing Data. 2. Development: When the customizing does not fulfill the user requirements we need to perform development. Development involves creation of tables, files, reports, programs etc. Field: It is nothing but a column in the table. In SAP table fields are created as domains and data elements. Domain: It defines the technical characteristics of a field or as 1. Type of field (whether it is num, int , char..) 2. The length of the field. Domain helps us to keep the relative fields consistent throughout the system. Data Element: Data element is defined in the table pointing to a domain in precise meaningful name to the domain. Table: Table consists of columns and rows. Columns contain the name of the data elements and rows contains the data. There are 3 types of tables. 1. Transparent Tables 2. Pool table 3. Cluster Table Transparent Table: These represent 1:1 [ABAP dictionary to oracle] to DB. If there is one table in DDIC then there will be one table in DB. 197
Pool Table: A group of related tables pooled into a larger table. Cluster Table: Group of tables clustered together. Tables, Data Elements, and domains are create in transaction SE12. SE System Engineering. They should be created in the namespace of Y and Z only. Screens: Screens are created in transaction SE51 MENU: Menus are defined in SE41 Programs: Programs to read the contents of the screen and consider the activities of the menu. SE38 is the T-Code used to create, modify display the programs. In order to define the programs to take the runtime values and calculate the values functional modules are defined. Functional modules provide reusability of functions. These are defined in SE37. In order to execute the programs use transaction SA38. As it is very hard to remember the programs names, the programs are assigned to transactions. Transactions are created in SE93. Note: Transactions are stored in table TSTC. Cross Client Customizing: The customizing related to the entire system such as currency settings, time zones, calendars etc. These are specific to the entire system. All the above changes need to be imported only once on the target system irrespective of number of clients. Work bench Change Request: The changes related to work bench [development, cross client customizing] are saved to change request of type work bench. Transport of copies: In order to transport a table definition or table content, transport of copies is used. Ex: When we want to retain client settings such as user master data during a client refresh. User master record is saved as transport of copies and imported after a client refresh. Identify the table to be included as transport of copies save it, release the change request. It is exported to TRANS directories. It can be imported into the client again to get the table entries back. Relocation of objects: The objects can be moved from one system to another system with development class (or) without development class. 198
Change Request Mechanism: The objects cant be copied from one system to another system using traditional means because it is difficult to identify what objects are changed and where they are located. That is the reason whenever there is any change it is recorded to change request. Change request consists of the change made to the object. Change Request Release: When the tasks are released in as change request the change request is eligible for release. The change request can be released to another change request. When a change request is released the changes from the objects are bundled into data files. When a change request is released transport protocol tp records the change requests and copy the relevant changes into a data file and move this file into transport directory. The data files also have control files and these files are copied to co files. Data Files: It consists of the changes to the objects. It starts with naming convention as follows. <SID>R900009 Ex: DEVR00009 Command Files: Co files It stores the commands to be created while importing the change request. Its naming conventions is derived from change request no DEVK900009. There will be exactly one command file for one data file. SAP Names: When the change request is released the name of the developers and the change request number is populated into this directory. Log: It will store the logs of export and import. Buffer Directory: When a change request is released based on the routes definition it will be added to the import buffers of the target system. If the routes are not defined we need to explicitly add change requests to the buffer using command tpaddtobuffer. That is the reason we will configure the landscape. Development Class: It is a group of objects such as reports, function modules, programs, transactions etc. Development classes are defined in transaction SE80. While defining a development class we need to assign it to a transport layer. Development class is required to develop the objects. While developing objects such as reports programs, functional modules etc we need to assign them to a development class. Development class should start with either Y or Z. The 199 R denoted data file.
TASKS
objects which are assigned to $tmp class are saved as local objects and cannot be transported. That is the reason we need to define a valid development class and assign them while developing objects.
Transport Transport Either Consolidation (or) development (or) QAS (or) PRD
The change requests which are released are stored in trans directory in the form of data files and co files. These are available for import into systems in the landscape based on buffer entries. If there are no buffer entries we need to add manually using command tpaddtobuffer. Change request once released are called as transport request. Transport requests are imported using STMS. Importing a transport Request: Go to STMS Select import Queue of the selected system select the transport request request can be imported as a single, mass or a group of change requests. Change requests can be triggered immediately (or) at a later time. Disabling the folly loaded truck: Go to STMS click on the systems select the system go to transport tool change insert row No_Transport_ALL save it. SE38, SE41, SE71 Programs, Functional Module, Reports, Transactions SE80 Developm ent Class Transport Layer Data Files Co Files Transport Route Buffer SAP Names 1 Development C.C.C WB Request E070 200 EPS BIN TMP LOGS
Released TASKS 2 C.B.Customizing 3 Transport of Copies Select the object Table Structure Table Content Manual tp: tp can be called from command line by using executable tp.IN order to execute tp path to kernel directory has t be set in the environment variables. Tp version is displayed using command tp. Type tp help to find out the various options of tp C:\> tp help tpaddtobuffer will add a request to the buffer tpcleanbuffer cleans the buffer tpdelete from buffer deletefrom buffer C:\>tpaddtobuffer tp return code: 0,4,8 o Transport request import is successful 4 Transport request is successful with warning 8 and above Errors Import Errors: 1. Tp error 212 could not connect to DB The objects which are imported into target system could not over write because the ojects are locked. SAV Releas CUST Request DEV/ Functional Consultants
201
Imports cannot be reverted back. If we want to revert back develop another change request release it and import it into the system. 2. STMS is not configuredTp version is outdated. [Upgrade tp version] 3. In order to check tp connectivity execute command R3trans-d which generates trans.log in the current directory. Trans.log gives codes and the error description 4. RDD*JOBS could not be executed due to lack of background resources. Check background jobs in SM37
tp import mechanism: When a transport request is imported tp is initiated and reads the contents from trans directory and connects to the database tp calls R3trans [R3trans is an executable which performs the transport] to execute the import task, to connect to DB tp documents all the import steps in tables TRBAT and TRJOB tp also triggers RDD* Jobs.
RDD * Jobs
RDD* Jobs reads all the steps from tables and executes them. MTP (Move to Protection): 202
In order to move the transport request t production system we need to schedule down time. The changes which are approved in the quality system are available to move into production. In order to avoid inconsistency between objects these are moved during off peak hours. In major enterprise where there are huge transports, transport strategy is defined like transports will be moved on every Sunday, first week and the last week of the month. During MTP we need to define the escalation table. 1. Valid backup is required to restore incase of problems during transport. 2. Identify the people (Consultants, developer ..) 3. Key persons responsible and assign tasks. 4. Lock all the users except the ID to be involved in transportation 5. Perform transportation of objects 6. Release key business users to test the functionality. If the functionality works fine, release other users else revert back with alternatives.
CUST 400
TRNG
C.C.C Repository
QAS: In QAS objects are imported tested for quality integration testing. Imports are also performed subsequently in training client and other optional clients. Testing tools are deployed to test the load on the objects. Simulation of users to check the performance of the objects is performed. If the objects are approved in terms of 203
quality [integration, stress and performance] the approving officers will document a list of transport request along with scheduled date of release and time, into the production system. Training for the users is common in the training client. Next move to production (MTP) is carried out. Note: No transports will be moved without approval. Preliminary Imports: These are emergency transport request which are moved independently without following MTP. Preliminary imports will remain in the queue even after the transport. This transport is again transported during MTP. Approval is required to perform a preliminary import. Current Settings (customizing): Even though system is set to not modifiable and client customizing doesnt allow any changes. In a production system we can change some of the entries like tax, currency information transactions. These changes are carried out on the production system by authorized consultants without any change request. Support Packages and Patches: During the implementation (or) post production support whenever there is a gap in functionality (or) functional consultants request for enhanced functionality support packages are applied. There are various types of support packages and each support package has its own released life cycle. Support packages should be applied in the sequence. Pre Requisites to apply support packages: a. Always apply the support packages in client 000 using user like DDIC. b. Apply the support packages in the sequence of landscape [First in DEV, next in QAS and finally in PRD] c. Apply the support packages in the following order SAP_BASIS, SAP_ABAP, SAP_APPL, SAP_HR d. Always apply the support packages in the sequence such as 1,2 3, 4. Ex: SAPKB64001, SAPKB64002 Download the support package queue i.e. a group of non conflict packages can be applied together. Ex: 1 & 2 can go together SAPKB64001 SAPKB64002 SAPKB64003 SAPKB64004 SAPKB64005
204
Note: Known problems related to support packages file can be downloaded from the service market place before applying the changes to the system. e. Download the recent notes with search criteria known problems related to support packages of version XXX f. SPAM and SAINT versions should be updated. SPAM and SAINT have their own versions. SAINT SAP add on installation tool. SPAM Support package manager. These tools are used to apply support packages and add on installables. g. There should be enough space in the TRANS/EPS directory. EPS Electronic parcel service. EPS hosts all the support packages h. There should not be any aborted packages. i. j. Developers and functional consultants should be around to handle the change with support packages. SPDD and SPAU are the places where data dictionary (DDIC) and repository chages are made.
k. There should be enough space in the DB l. If the support packages are less than 10 MB apply through presentation server and if it is greater than 10 MB apply through application server.
m. There should be at least two background processes to apply the support packages. n. Ensure that STMS is configured o. Users should be locked and this should be performed in off peak hours p. Ensure that valid backup is available Upgrading the SPAM/SAINT version: 1. GO to SPAM support package manager 620/ 0022 2. Go to SAINT SAP add on installation tool 6.20/0022 3. 6.20 is the ABAP version 0022 is the SPAM & SAINT 4. Go to SAP market place 5. Go to support packages & patches Select SPAM & SAINT versions 205
6. Download the SPAM latest version and SAINT latest version into your desktop. 7. Go to Support Packages Load package from front end specify the path of the download file .car and .sar (Sap Archive) 8. Select the downloaded file. It will load into the system. Note: The package is only loaded but not applied. 9. Now go to support package select import SPAM objects 10.SPAM and SAINT can be applied in arbitrary order. Applying Support Packages: Logon to client 000 using user like DDIC. Download the support packages from market place. The available options are: SAP_BASIS, SAP_APPL, SAP_HR, SAP_ ABAP.. If the support package is less than 10 MB size apply through front end. Load package through front end. Note: If the network connectivity is slow (or) if the support package is more than 10 MB of size then always use application server. Load the support packages. It reduces the time from presentation server to application server. How to apply from application server: Download the .car/.sar files from the market place into trans directory. Trans\eps\in. Use command sapcar xvf SAPKB62026.car (or) sapcar xvf SAPKB62026.sar Note: Earlier the command was only sapcar The file will be uncompressed into the directory \usr\sap\trans\eps\in The files that we will get are .ATT and .PAT ATT: Attribute file PAT: Patch file 206
Select the new support package and display. New support packages which are loaded are displayed. Select the support packages to import. In order o select the support packaged define the queue. For defining the queue click on display queue and click on define Select the support component for which the support packages have to be applied. Queue is displayed select a single packages or a group of packages based on the composite note recommendation. Go to support package and import. Click on import queue to import the support packages. Note: The support packages once applied cannot be reverted back. Before applying the support packages check all the requirements are fulfilled. Problems occurred during applying the support packages: 1. Conflict patches are grouped together 2. There are aborted packages 3. SPAM and SAINT are not updated 4. eps\in directory has not enough space. 5. DB table space out of space (or) table space over flow with errors ORA_1653, ORA_1654 6. DB max extends reached with error ORA_1631 and ORA_1`632 7. There are no enough background processes available. 8. Extended memory is not enough to import the support package 9. tp and R3trans are out dated (We come to know about this when the screen cannot move) 10.Ensure that outdated are not locked by the users 11.Data dictionary activation errors 12.Conflict with the add-ons 13.tp could not connect to the database (Execute R3trans d)
207
Note: Tables used are PAT01, PAT02. Do not delete entries from these tables unless there is a SAP recommendation. 14.Check the status of the support packages imports in SPAM. Click on import logs. Logs can also be displayed in trans\log 15.Support packages can be re imported from the point where it terminates. SPDD Phase: It is used to update the DDIC elements While applying the support packages it will prompt you to run SPDD. It is decided by functional consultants to keep the existing changes to adapt the new changes which come with new support packages. Do not run SPDD without the concern of development team. SPAU Phase: It is similar to SPDD, but it is related to repository objects. CRI: Conflicts resolution transports. These are released by SAP to resolve the conflicts between SAP support packages and add-ons. The following are the phases that include the support package implementation.
The Support Package Manager informs you of the status of the phase currently being executed in the status bar. If you want to know which phases are executed for which scenario (test or standard scenario), run the program RSSPAM10. The following list provides an overview of all the modules and phases and lists them in the order in which they are executed by the Support Package Manager:
208
209.
Applying the Plug-In: Fabrication Pharmacy Textiles Standard Modules SD,MM, PP Additional PI_BASIS /willsys /VIRSA /SATYAM (Third party tools) SAP shifts (or) SAP comes with standard functionality such as MM, SD etc.. In order to provide additional modules such as banking insurance business content (BIW), textile, mining, pharmacy etc. In Order to apply these plug-ins (Solution tools plug in). It is used to provide solution based on CCMS or EWA (early watch alerts).
CCMS: Computing central monitoring system. 1. Download the add-ons from the market place. 2. Download the documentation related to add on 3. Download the installation guide and important notes 4. Document all the steps 210
5. Ensure that all the pre requisites are met. Ex: BIW_BASIS should be of level5 SAP_BASIS should be 16 SAP_ABAP should be 14 STPI and PI_BASIS should be 2003 and 2005 (STPI-Solution tool plug in) 6. Read the note thoroughly and get the password, key word, key from the note to install the add on 7. Pre requisites and problems are same as support packages and patches 8. Download the content onto transport directory 9. Uncar the content using SAPCAR-xvf *****.car 10.Go to SAINT lad the package. Package is displayed on the screen. 11.Select the package (add on package)to install. 12.Click on continue 13.A pop up box is displayed to go through the note and key in the password to continue the add on installation. 14.Continue the add on installation 15.There are chances for conflicts between the existing support packages then it will ask you to configure CRT. 16.Respective CRT need to be downloaded and need to be applied by using tp. (Transport, background jobs, Security) Note: SAP NOTES provides valuable information for resolving eh run time problems. SAP maintain abundant knowledge base of notes. Notes are related to customer problems and solution, break fixes to the standard functionality and minor enhancements. SAP Notes are of 3 types: 1. Informative Note 2. Corrective Note (Manual) 3. Corrective Note (Automatic- using transaction SNOTE) Informative Note: This note provides information to the customers as follows. 211
1. Production information 2. Composite note for support packages 3. Notes for add-ons 4. Query related solution Corrective Note (Manual): SAP recommends changes to be performed on the system during the following scenarios 1. Changes to table entries such as tags, vat etc 2. Changes to wage types 3. Changes related to tables 4. Recommend to resolve the issues (Table space overflow, Max extents list, Archive struck, Work process congestion) Corrective Note (Automatic): (Patches) Up to version 4.6c all the changes to SAP system are performed manually. From 4.6c onwards SNOTE is used to apply the changes automatically without any developers intervention and SSCR key. Most of the changes are related to Sap standard objects. Applying Note: 1. Go to market place () 2. Download the correction in the note onto your desktop 3. Go to SNOTE. Upload the note. (When it is uploaded the status will be new) 4. Select the note to implement. Now the status is in processing 5. When it is implemented status is implemented 6. Some of the notes cant be applied and some of the notes status will be obsolete(Outdated). Note: Notes consists of corrective coed, which will append to the standard programs containing the necessary code, where as support packages overwrite the code. Group of notes and patches are called as support packages. Note: Notes once applied can be reverted back. Notes can be downloaded directly connecting to the market place.
212
Go to SAP market place. Until SEP 2005 there was not that much significance for market place. Before SEP 2005 we use OSS1 TCODE for downloads. OSS1 is a transaction which is used to perform the following until SEP 2005: 1. Creating customizing 2. Searching the notes 3. Download the notes 4. Download the license keys From SEP 2005 onwards SAP decommissioned using the TCODE OSS1 and recommended to use SAP MARKET PLACE. SAP Market place is built on EP (Enterprise Portal) which is used to connect between employees, customers, partners and users. It requires SUSER ID to login. It consists of 10 digits preceded by S and 2will be like S000123456. Market Place provides the following Services: 1. Notes: This can be searched based on error description. Only SNOTES steps are documented. Notes are always named with numbers. 2. Customer Messages: The runtime problems which could not be resolved can be escalated to SAP Click on customer message.
213
Select the system for which help message needs to be created. Enter the error message. SAP advices to search and look for the resolution among the display notes. Enter the error message. SAP advices to search and look for the resolution among the display notes. If there is no resolution continue customer message. Upload the relevant screen shot Click on save and send it to SAP Note: We can define the priority of the issue such as normal, medium, high, very high. High and Very High are considered first as the production system is in danger which causes business loss. Software Download (SWDC): 214
It is used to download support package, installation DVDs add-ons, plug-ins, kernel executables, 3rd party tools, DB patches etc
KEYS: There are 4 types of keys which can be generated from the market place. 1. SAP LICENSE 2. DEVELOPER 3. SSCR 4. MIGRATION SAP LICENSE Key: In order to apply license to the SAP system we need to generate license key from market place. Before generating a license key the system needs to be registered with SAP. [The following information needs to be given for registering with SAP: Customer Number, SID, Installation Number, Instance Number, Host name, Hardware Key] 215
The system which needs license needs to be registered. System data has various information such as IP address, system name and router port. Select connection and open it to SAP for specific number of days for remote connections.
216
217
SSCR Key:
218
Migration Key: Whenever we are moving from one operating system to another operating system (or) from one DB to another DB we need the migration key.
219
Client Comparison: When you install add on packages like country India version(CIV) the changes effect only in that client through which its applied. CIN is an add-on which is used to bring the changes related to taxes, wages etc. It was earlier version. Now it is coming along with standard SAP software. Functional consultants identify the changes related to these functionalities and compare the entries in respective transactions and merge them. Select the entries and merge system prompt to record the changes to a change request. SAP Router: SAP router is a software program which runs on the application server. It runs as a service and maintains access control list in a table SAPROUTETAB. It is a text file which maintains the IP address, permit, deny and port numbers(Not a DB table). Permit P 192.168.01.1 3600 Deny D 192.168.01.3 3600
Configuring SAP Router: GO to and download SAP Router executables into c:\> (drive)and saprouter (folder). SAP ROUTER is an executable present in the run directory and ensures that the version of the SAP router is always updated. Download the files into SAP router directory send the details to SAP to establish the remote connection. encrypt.pse are given to the customer. In that a key is available. Run the encryption and key in the SAP key and get the key to sap. Once the connection is opened we can TELNET to SAP server. In order to start the SAAP the command is Start: saprouter R Stop: saprouter S Create the SAP router, start up as a service using NTSCMGR In UNIX: $PRONJOB, ALL_ACCESS control list. In order to open the connection SAPROUTER must be running.
220
The screen shot is an initial screen page for SAP ROUTER of the market place.
221
PRE GO LIVE: Before the system goes live SAP logs into customer system and evaluate customizing and development. Pre Requisites: 1. The system needs to be registered in the market place 2. SAPROUTER is defined and connectivity is established 3. Customer has to send the necessary information to SAP (Name of the company, Name of the persons, Email Id and Phone Number) 4. No of modules configured 5. O/S, DB, R/3 system type There are 3 types of Go LIVE check conducted by SAP 1. GA- Go live analysis 2. GV- Go Live Verification 3. Go- Go live optimization We need to inform SAP preferably 2 months or at least 1 month before Go-LIVE to conduct the above checks. Go Live Analysis: This is the first check conducted by SAP to determine the following: 1. Critical Transactions 2. Expensive Programs 3. CPU utilization 4. Memory Consumption 5. Back Up Configuration 6. Scheduling standard jobs 7. Critical interfaces 8. Utilization of buffers 9. Hard Ware configuration 10.Dialog response time 222
SAP studies all the above and recommends the measures for tuning such as 1. Relocation file structures 2. Configure memory parameters and buffer parameters 3. Work process configuration 4. To create indexes like primary indexes and secondary indexes 5. Run the check optimizer statistic schedule it in the standard background jobs (House Keeping Jobs). 6. Defining the backup 7. Tuning the expensive programs and SQL statements 8. Moving the long running programs into background mode. Note: Large installations should inform 2 months before Go Live (or) at least 1 month. Small installations should inform 1 month or at least 15 days before the Go-Live. SAP recommends the changes to be performed within 15 days. Go Live Verification: This session has to be scheduled after the Go Live analysis session. This will be conducted by SAP technical consultants to verify the changes that are recommended in GLA system. If the changes are related to Hardware it will highlight during this phase. Note: SAP verifies the parameters and recommends further changes on the system configuration on the day off session. Go Live Optimization: This session will be conducted after Go Live (After 1 month of Go live). This is used to analyze the load on the system. (How many users logged on, Critical transactions and critical programs) and recommend further changes to optimize system performance. Parallel Run: Data Migration: Most of the Customers define a migration client to perform data transport from legacy systems. The data is migrated and tested for its quality before moving to production. The data migration, parsing truncating is performed in the migration client. 223
End User Training: Before Go-Live the production users are trained on all the processes such as creating P.Os, Invoices, Billing, Pay Roll Run and training client of QA system. Note: As the users are not habituated to SAP systems, we will uses to work on both the systems (Legacy and R/3 systems). As a company policy it may be allowed to use only fewer modules on the production system and remaining on legacy systems. In the above scenario parallel run of both SAP system and legacy systems are advised. During parallel run legacy system and R/3 system run together, the data from legacy system is transported periodically, hourly, and daily to synchronize the data between two systems. JAVA MM PP ORACLE HR FI SAP CO
VB SD
Go Live 1 st Apr09
1st May09SD,MM
Daily/Hourly Transfer 1st June 09 PP,HR Modules are run in parallel in both systems
Now the other modules (SD, MM, PP, HR) are also cancelled in the legacy
224
Systems. Now the modules run only in SAP system. 1st Jan 2010 All Modules are running only on SAP system from here onwards
USERS and SECURITY of SAP System Name Employe e Code Departm ent Address Employe e Salary Appraisa l Bonus
Name
Empco de
Departm ent
Addres s
Emp Salary
Apprai sal
Bonus
Employee can see his own details. But he cannot change any other entry. He can see details of other employees. Director can change create (or) display all the entries in the table. He can see/ change details of all the employees. Project Manger cant create the first four entries (Name, Empcode, Department, and Address). He can only see them. He can create/change the other 3 entries.
225
Authentication: It is the means of providing access to the system i.e. User ID and Password. Authorization: It is the means of providing access to system functions to the authenticated users. In order to provide various accesses to different people in the organization we need to define Roll Matrix (or) Authorization Matrix. Roll Matrix (or) Authorization Matrix: It is a table which defines the critical transactions, access levels and the rolls. Authorization Field: The data element (or) a field in the table which needs to be protected is called as authorization field. Authorization fields are created in transaction SU20. Ex: Employee Salary, Bonus, Discount, Sales Order Amount, Purchase Order Amount etc. Activities: It is the action that can be performed on an Authorization Field. Activities are as follows: Create, Display Modify/Change
Delete Print Reverse etc Activities which are available in the system are shown in the table TACT. By default there are aroune 170 activities defined in this table. Note: We can create our own activities in table TACT. Authorization Objects: A group of not more than 10 related authorization fields is called as authorization object. Authorization Fields and Activities: The list os possible activities for each authorization objet is defined in the table TACT2
226
Authorization: Authorization field and its activities are referred as authorizations. It is also referred as field and its value. Authorization Profile: A group of not more than 150 authorizations is called as authorization profile. Note: Up to SAP 4.6 B version profiles are created manually, but from 4.6C onwards profiles are generated while creating roles. Composite Profiles: Group of profiles is called as composite profiles.
Role: A role is a combination of profiles, transactions, reports, menus, personalizations over assignment. Roles are defined in transaction PFCG. Composite Role: Group of one or more roles for administrative ease is called as a composite role. It doesnt provide any additional functionality. Derived Role: The roles derived from a parent role, but differ by organizational levels. These restrict the functionality based on organizational levels. Parent Role: It is a single or generic role which is having authorizations to inherit to child roles/derived roles. Defining a Role: Go to PFCG Specify the name of the role Create, change (or) copy role Authorization field and Authorization objects are cross client objects. Description of the role Go to Transaction to include standard transactions Go to menu to include reports, URLs, create folders etc create click on change, authorization data specify organizational levels such as company code, sales orders, organization sales area, distribution channel Authorization objects, fields, authorizations are displayed 227
Include authorizations objects manually Red means organization levels are missing Yellow means field value activities are missing. Green means totally organized.
Click on the traffic light. Then you will get the complete/full authorization. (Pop up box will be displayed). Provide field values, activities according to SOD matrix. (SOD Segregation of Duties) Save and generate profile Save the role and assign it to users. Perform user comparison, so that role will be effective in the user master records. The role wont be effective until we run user comparison Note: In earlier versions the user has to log off and log in to get the effectiveness of the new role assigned to him. But in the current versions after user comparison the role is effective automatically. Note: Performing user comparison during role creation consumes more time. So perform the user comparison in the background mode during off peak hours. PFUD: This is used to perform user comparison in the background mode (or) schedule a report PFCG_TIME_DEPENDENCY to perform user comparison in the background mode. Initializing Profiles: (Filling the Customer Data) When a client is created we cannot perform role creation without filling the customer tables. The tables used are USOBX and USOBT. Go to SU25 click on initially fill customer table. The relation between the transaction and the authorization object s can be displayed in SU24. SU24 is used to define the relation between transaction and authorization objects.
228
Authorization check indicators such as unchecked, not maintained, check check and maintain. When a transaction is initiated it will check in the assigned program Authorization objects are created in SU21 Development class> packages SM30 table maintenance if the table maintenance is allowed we can create entries manually. Activity Table TACTZ
Transactio n Role
Profile/Composite Profile
Example: Purchase order Activities are: Create, Modify, Delete, Reverse, Approve, Release (These activities are listed in table TACT) Roles are added in table AGR* When the user is logged into the system request goes to the DB and gets the necessary authorizations from the user master record (USR02) and keeps a copy on the user context. This user context is displayed in transactions SU56. When a user executes a transaction it points to the program. Program in turn points to authorization objects inside it. Authorizations are programs busing statements authorization_check_ followed by the authorization object, field value and activity. These are all checked against user buffer (SU56). If they are available user is allowed to perform transactions else user is not authorized to run the transaction.
229
Missing Authorization: Very frequently help desk receives calls related to missing authorization when a user could not access a program over a transaction. It is referred by the user as Missing Authorization. Missing Authorizations are caused by the following: 1. User is not assigned with authorization 2. User is assigned, but user comparison is not performed 3. User to roll assignment is expired 4. User buffers are old (or) over flown Analyzing Authorizations: Transaction SU53 is used for analyzing missing authorizations of the respective user. Check the missing objects in the user master record. If the role Is already assigned perform user comparison. If the role assignment is expired then send a mail to the business process owner. User raises a ticket of missing authorization ask the user to execute the transaction again and send the screen shot of SU53 screen. Note: In every organization SU53 TCODE is assigned to the every user. If the role is assigned recently and user comparison is not performed then perform user comparison. Note: Use SUIM (SAP User information management. This information transaction gives you all the reports related to SAP security. Some of the reports are user to roles, roles to transactions, user to profiles. Standard password reports, user change documents (last logon date, last password change, list of users locked). Basically SUIM uses (or) calls RSUSR* SA38 is executed. In SE38 all the engineering can be done Go to SE38Key in RSUSR* Press F4 to display the list of possible reports Scenario: Authorization is not assigned to the user User complains of missing authorization
230
Plant In charge
Write to the business process owner to approve the missing authorizations assignments. Wait till get the approval. Note: Always assign only requested authorization to the users. Do not assign excess authorizations. Authorization Objects, Authorization Fields and Field values cannot be assigned directly to a user. Generated profiles can be assigned indirectly by assigning roles. All of the above can be combined in the role and assign it to the user. Role Mitigation: It is performed by using third party tools like VIRSA, BIZRights Role Mitigation is used to mitigate the following 1. What worse will happen by assigning the existing role to the user? After mitigation it will giva report of excessive authorization. Send the list to the Business Process Owner and get approval. Roles consists of many authorizations. Ex: Purchasing Officer Role Create Material, Create P.O etc.. are the authorizations of the role. Now if a user wants authorizations to create a material and if we assign the whole role i.e. purchasing officer to that user, then he will get excess authorizations. 2. Identify the least effected role by mitigating fewer times and assign that object to that role (Save the role, generate the profile, assign it to user, perform user comparison). If both of them are not allowed then write to the manager approval to create a role.
231
Create Role, assign the authorization object, save and generate profile, assign it to the user and perform user comparison. Barbanes Oxley Procedure: SOX 404 act. It is implemented in most of the public limited companies in the U.S to safeguard the applications. SAP uses the above tools (VIRSA, BIZRights) to mitigate when modifying a role (or) assigning a role to the user. VIRSA has become internal part of SAP and comes as an add-on. Transaction SU99 is used to specify which of the transactions should not be combined. These 3rd party tools are also used for the following : 1. Auditing Purpose: Who has done? What he has done? When e has done? Example: Consider P.O Who has released P.O, Who has created P.O, When is it created, All the details are given in auditing. 2. Mitigating between the Roles and Users 3. Identifying (or) tracing out incoming and outgoing documents Ex: We can get across control to that P.O creations, TIN, bank account number, address, telephone number, fax number, match with P.O receivers. 4. User Management: User Creation: Users are created in transaction SU01. SU01 is used to create, modify, lock, unlock, copy and delete the users. SU01 is used to create single user. SU10 is used to create mass users with some privileges
2. System User 3. Communication User 4. Service User 5. Reference User Dialog User: Is the on e where interactive logon is possible. These are calculated for license. System User: Is used to communicate within the system in the background mode. No interaction logon is possible. Communication User: It is a non dialog user where interactive login is not possible and used to communicate between 2 systems using RFC connection. Service User: This is anonymous user, which is used by group f people where dialog logon is possible. No password, USER ID Ex: Top view the company targets Reference User: It is used to provide additional rights to interact users. Ex: Users connecting to the system over the browser will have limited authorizations (EP). Once authenticated the rights will be assigned to the reference user. SOD: Segregation of Duties It is an authorization matrix (or) role matrix. Segregation of duties is performed while implementation (or) duing support, business process owners define the key roles in the company along with the activities which are performed in the system. It is a matrix of Roles & Transactions. It is used to define the transactions in the company.
The following table is an example of the role matrix or authorization matrix. 233
Name of the Role Purchase Officer SALES OFFICER SALES MANAGER AREA MANAGER MANAGER DIRECTOR
MM01
MM02
MM03
ME21
ME22
ME23
Questions: 1. What are the standard roles available in the system. Name some of them along with description (at least 10) 2. Name some of the profiles, composite profiles, authorization objects, authorization fields, composite roles, derived roles. 3. List the users with critical transactions 4. Display all the available users in the system User Creation: Users are created in transaction SU01 SU01 has the following tabs: Address Tab: It provides the details of the users like first name, last name (It is mandatory to provide email and telephone number) Logon Data Tab: It specifies the validity period, specify the type of the user
234
User Group: These are defined in SUGR. These are used to group the users based on department or division or roles for easy maintenance. By default super group is available in the system. User Group for authentication Check: If we specify a group the user is allowed to be administered by the user of this group only. Super user group admin is allowed to administer the user belonging to all the groups. Specify the password. Default Tab: Specify the start menu Specify the logon Language Specify the printer settings like O/P device, O/P immediately or release after O/P. Time Zone: Decimal notation and date format Parameters: These are frequently used fields which will be popular during the run time of the user Define the run time of the user. Define a parameter Go to the field Press F1 go to technical properties Find the parameter ID Go to SU01 Click on parameters tab specify the parameter and parameter value Roles Tab: These are used to assign authorization to the users. We can delegate additional rights by using reference users. Profiles Tab: These are going to be combined in the roles. There is a specific profile which can be used like SAP_ALL, SAP_NEW needs to be documented. Use only either roles or profiles. Groups Tab: These are created in SUGR used to group the users for administrative purpose. Presentation: It is used to restrict the user access such as time in and time out of the current day. Sales of the day and sales of the week, pay slip of the month.
235
Personalization Tab:?)
EWZ5: To lock and unlock the users 1. Principle of Dual Control: It is controlled by two administrators 1. User Administration 2. Profile and Authorization Administration When there are more systems in the landscape (or) various components SAP (BW,CRM,SRM) are implemented in such a case SAP recommends to use control administration CUA User Admin Profile& Authorization Admin Create users perform User Comparison Assign Roles to User and
2. Principle of Triplet control: The user and authorization administration is segregated into 3 roles 1. User Admin 2. Role/ Profile Admin 3. Authorization Admin This is controlled by 3 admins Role Admin Create Role Display Roles Change Roles User Admin 236 Profile Admin Authorization Admin
1. All the clients are defined with logical system 2. Define RFC connections with logical system names in SM59 3. Go to SALE define the logical system name and assign them to clients
Defining CUA: (Central User Administration) Go to SCUA of the master client or parent client from which we want to monitor users centrally. Specify the name of the CUA, SAVE and include all the logical systems. Save and distribute configuration of all the clients. Identify CUA configuration Systems tab is added in the master client. Systems tab is included in the roles tab User creation is disabled in all the child systems Users can be created only in the master client and maintained in the child client Users can be maintained either globally or logically which will be defined in SCUM While assigning roles click on read text from child client Note: RSDELCUA is used to delete the CUA. (Execute this report in SE38) CUA mechanism: CUA uses ALE mechanism to transfer the data between different clients. Transaction RFC is used to transfer the data between the systems. Go to SUIM (To give global or local users to be monitored) SCUM: If you are using Central User Administration, you can use the distribution parameters in transaction SCUM to determine where individual parts of a user master record are maintained.
In the child system with automatic redistribution to the central system and the other CUA child systems SCUM The system displays the User Distribution Field Selection screen, with tab pages of the fields whose distribution parameters you can set. To display additional fields, choose page down. You can select the following options on the tab pages: Global You can only maintain the data in the central system. The data is then automatically distributed to the child systems. These fields do not accept input in the child systems, but can only be displayed. All other fields that are not set to global accept input both in the central and in the child systems and are differentiated only by a different distribution after you have saved. You maintain a default value in the central system that is automatically distributed to the child systems when a user is created. After the distribution, the data is only maintained locally, and is not distributed again, if you change it in the central or child system. You can maintain data both centrally and locally. After every local change to the data, the change is redistributed to the central system and distributed from there to the other child systems. You can only maintain the data in the child system. Changes are not distributed to other systems. You can maintain data both centrally and locally. However, only changes made in the central system are distributed to other systems, local changes in the child systems are not distributed.
Proposal
RetVal
SAP R/3 Security Tables: SAP R/3 Security Tables are Tables in SAP R/3 that have relations or direct impact to Logical Access Control, Program Changes Control and Operational Control. Today, the convergence of the Internet within distributed ERP systems is ever-increasing the demands on data and business process security almost exponentially.
238 references USR02 USR04 UST04 USR10 UST10C USR11 USR12 USR13 USR40 USGRP USGRPT USH02 USR01 Logon data User master authorization (one row per user) User profiles (multiple rows per user) Authorisation profiles (i.e. &_SAP_ALL) Composit profiles (i.e. profile has sub profile) Text for authorisation profiles Authorisation values Short text for authorisation Tabl for illegal passwords User groups Text table for USGRP Change history for logon data User Master (runtime data)
USER_ADD Address Data for users R AGR_1016 Name of the activity group profile AGR_1016 Name of the activity group profile B AGR_1250 Authorization data for the activity group AGR_1251 Authorization data for the activity group AGR_1252 Organizational elements for
239
authorizations AGR_AGRS Roles in Composite Roles AGR_DEFIN Role definition E AGR_HIER2 Menu structure information - Customer vers Assignment of Menu Nodes to Role
AGR_HIERT Role menu texts AGR_OBJ AGR_PROF Profile name for role AGR_TCDT Assignment of roles to Tcodes XT AGR_TEXT File Structure for Hierarchical Menu S Cus AGR_TIME Time Stamp for Role: Including profile AGR_USER Assignment of roles to users S USOBT USOBT_C USOBX Relation transaction to authorization object (SAP) Relation Transaction to Auth. Object (Customer) Check table for table USOBT
USOBXFLA Temporary table for storing USOBX/T* GS chang USOBX_C Check Table for Table USOBT_C
Types of Security are NW, DB, O/S, R/3 In R/3 application and data to be protected (Invoices, Billing..) In DB OPS$ USER ID , SYS, SYSTEM, SCHEMA ID should be protected In O/S File structure , sapservice<SID>, <SID>ADM should be rotected In N/W firewall should be provided
240
2. Q_TCODE (Q represents Quality) 3. P_TCODE (P represents HR) 4. S_USER_GRP, S_USER_AUTH, S_USER_AGR, S_USER_TCD, S_USER_USR, S_USER_PRO These are user management authorizations for dual control and triplet control 5. S_TABU-DISP : This is an authorization object which provides access through authorization fields, authorization group 6. S_TABU_CLIENT: Cross client object 7. S_PROGRAM, S_DEVELOP, S_SPO_DEV are related to development The following T-Codes are security related activities: SU01: SU10: SUGR: PFCG: EWZ5: SCUM: SCUL: SUPC: SU03: SU24: SU25: SU99: SM18: SM19: SM20: SM58: SM59: 241:. CPIC: Common programming interface communication : CPIC (Common Programming Interface Communications) is the interface deployed by the ABAP language for program-to-program communication. CPIC was defined and developed by IBM as a standardized communication interface and was later modified and enhanced by the X/Open organization. The CPIC communication interface is useful when setting up communications and data conversion and exchange between programs. Since CPIC is based on a common interface, an additional advantage is the portability of the programs across different hardware platforms. SAP divides the possibilities and the scope of the CPIC interface into two function groups: the CPIC starter set and the advanced function calls. This division is simply meant to guide the user and not to restrict the available functions. For instance, the CPIC starter set would just be used for the basic and minimum set of functions shared by two partner programs, such as
242
establishing the connection and exchanging data. The advance calls cover more communication functionality, such as converting data, checking the communication, and applying security functions. For more information on these CPIC function groups, refer to the SAP documentation BC SAP Communication: CPI-C Programmers Guide. CPIC communication is always performed using the internal SAP gateway which takes care of converting the CPIC calls to external communication protocols such as TCP/IP
243
digits/letters/special characters in the password login/disable_password_logon and login/password_logon_usergroup Controls the deactivation of password-based logon login/disable_cpic -Refuses incoming connections of type, CPIC rdisp/gui_auto_logout - Defines the time for automatic SAPGUI logout login/no_automatic_user_sapstar Controls the SAP* user
Starting with installations of SAP Web Application Server release 6.10 and higher, the passwords of SAP* and DDIC are selected during the installation process. Use the User Information System or report RSUSR003 to monitor the passwords of all predefined users. If possible, make use of the profile parameter, login/no_automatic_user_sapstar. If you create a new client the default password for SAP* is pass. If you delete SAP* userid, logon is possible with SAP* /pass. The DDIC user maintains the ABAP dictionary and software logistics. The system automatically creates a user master record for user SAP* and DDIC in client 000 when the SAP System is installed. This is the only user who can log on to the SAP System during a release upgrade. Do not delete or lock user DDIC because it is required for certain installation and set-up tasks. User DDIC needs extensive authorization. As a result, the profile SAP_ALL is allocated to it. The users, SAP* and DDIC, should be assigned to user group SUPER to prevent unauthorized users from changing or deleting their user master record. 244
Default clients in an SAP System: Client 000 is used for customizing default settings. SAP imports the customized settings into this client in future SAP System releases during the upgrade process or even with support packages. Client 000 should not be used to customize data input or development. Client 066 is used by the SAP EarlyWatch service and should not be used or deleted by the customers.
245
USH02: This table is known as the change documents table and is updated every time a user is locked, un-locked, its password reset etc When these events take place a partial copy of user master information including BCODE, PASSCODE and PWDSALTEDHASH fields is inserted as a new record in this table. USRPWDHISTORY: In release previous to .00 password history is set to 5 and the last five password hashes are stored in the USR02 tables (Fields OCOD1-OCOD5). From that release onwards, the password, history size is no longer fixed, being configurable through the profile parameter login/password_history_size. Therefore, each time a users password is change a new rec0ord is inserted in this table containing the old hash value. The USR02 table can be retrieved from transaction SE16. The tables USR02, USH02 and USRPWDHISTORY should be protected against direct access through table maintenance tools (TCODES SE16, SE17, SE11). This can be enforced through the proper use of the authorization object S_TABU_DIS which restricts access to critical table authorization groups. Use different passwords for critical users like Sap*, DDIC, Administrator. Disable downwards compatibility (If possible). If there is no need to connect with older systems or special scenarios (e.g. CUA) the login/password_downwards_compatibility parameter should be set to 0 avoiding the generation of weak hashes. If legacy components needs to to connect with a new SAP system then a possible solution needs to be configured having 8-characters long upper case passwords, including special characters. This would keep the hashing procedures strong, while providing a decent level of security for legacy compatibility. Implement a strong password policy. Ex: login/min_password_lng > 8 and login/min_password_lowercase > 0 to demand the use of passwords longer than 8 characters and the use of at least one lower case character.
Frequently Asked Questions on Authorization Role & Profile What is the difference between role and a profile?
246 activity groups in SAP consisting of transactions, reports and web addresses. What is the different between single role & composite role? A role is a container that collects the transaction and generates the associated profile. A composite role is a container which can collect several different roles What profile versions? Profile versions are nothing but when you modify a profile parameter?
247
su53 is the best transaction with which we can find the missing authorizations and we can insert those missing authorization through pfcg. Table of authorization field settings. Table with deleted users Someone has deleted users in our system, and I am eager to find out who. Is there a table where this is logged? Debug or use RSUSR100 to find the info.-modifiable but still allows the nonconfig delivery class tables (or those configured to be changeable) to be modified. number.
248.
249
Dialog Work Process in Multiplexing: SAP transaction consists of multiple transactions which are handled by individual dialog work processes. Dialog work process handles request from various users (It is not restricted to the user) User transactions are handled by various dialog processes (restricting to a process). This mechanism is called as multiplexing. This process of handling various dialog steps of different users transactions without restricting to the users is called work process multiplexing. Dispatcher
User Communica tion W0,W1-------Wn-1 U C Task Handler R/3 Buffers DB Queue
The number of dialog work processes can be configured by using the parameter rdisp\wp_no_dia The maximum run time of a dialog work process is restricted to 600 sec, which can be increased dynamically in RZ11 by using the parameter rdisp\max_wprun_time Configuring this parameter does not require R/3 restart This parameter can be increased and decreased dynamically in the following scenarios: 1. Weekly progress report (or) sales report 2. Month end reports (or) any critical transactions which consumes more time. Dialog work process can be monitored in SM50/ SM60/ SM66
250
We can also monitor dialog work process using command prompt using the command dpmon Ps ef|grep de * Disp+work Update Process: This process is used to update the database. SAP transactions consist of multiple transactions which are handled by different dialog work processes. Each process cant update the DB, because they are part of the transaction. That is the reason they will update temporary tables, so that rollback is possible. If the entire transaction is committed then the update is written to the database. Users request using dialog process to commit a part of a transaction. If it is written to the Db, it cannot be rolled back. So it is updated in the temporary tables. Temporary Tables: These are the tables which are used by the dialog process to update the data temporarily. Update process reads the committed data from the tables and updates the tables in DB. The temporary tables are VBHDR, VBMOD, VBDATA, VBERR VBHDR: It stores the header information VBMOD: Modules involved in the update (Scripts) VBDATA: Data to be updated VBERROR: Any error information Once all the steps (dialog steps) update temporary tables a COMMIT occurs. Once the transaction is committed update gets initiated. Update process reads the temporary tables and update the DB. Before updating temporary tables dialog process gets a logical lock from enqueue process, if it is on CI (Central Instance). If it is on DI (Dialog Instance) it communicates with message server. Message server in turn communicates with enqueue process to get the lock, so that other users will not modify during transaction.
251
While updating records from the temporary tables to the DB update inherits the locks Types of Update Work Processes: There are two types of update work processes V1 and V2 Update Process (v1): V1 handles the most critical updates. (User Transactions) Update Process (v2): V2 handles non critical statistical information Update work processes are configured using parameters rdisp/wp_no_vb1 rdisp/wp_no_vb2 Each instance/ SAP system requires at least one V1 process. If v2 updates are not defined they will be handled by v1 update process. Update process Monitoring: Updates are monitored in transaction SM13 Note: When a dialog process commits into temporary tables a transaction number (P.O number, Invoice) will be generated. The number is generated from Number range buffer. Number range buffers are defined in transaction SMRO in the table NRIV. NRIV: Number range interval Monitoring: Go to SM13 select terminated records double click (or) select all to find out the list of updates which are to be updated, which are terminated etc The status of the updates will be followed. The status may be: Init: Update is initialized and waiting for the updation work process. (SAP recommends at least one update work process for 5 dialog work processes) Auto: If the update mechanism is deactivated the work processes will be terminate the update process, upon activation of the update. The update status will go into AUTO. That is no initialization is required Run: Update is running. Update is updating into the DB
252
Error: Update is terminated Reasons for Update Problems: There is a problem in the program (Very rare) so that we apply support packages, patches and notes Table space overflow (Resize the data file or add data file) Locks are not available Work process congestion (Updates work processes are not available) Update deactivated for each of the problems with update the entire update mechanism will be deactivated Go through SM21 logs and ST22 ABAP dumps thoroughly and fix the issue. The update can be activated in SM14. Note: IF the update could not be initialized automatically, select the record and run the update (if the update is terminated). Types of Update Mechanism: 1. Local 2. Asynchronous 3. Synchronous Local Updates: These are used to update the DB directly Ex: Update is programmed locally and updates DB directly without any intermediate tables temporarily. These are used for small transactions which dont have multiple tables in the DB. It is not monitored and displayed in SM13. Asynchronous Updates: This mechanism is used by dialog work processes. They will update the temporary tables and gets the transaction ID from number range buffers. Dialog work processes never waits for the response from the temporary tables. Ex: User creates a P.O but could not see the P.O in the DB due to deactivation of Update Mechanism. No acknowledgment. Synchronous Mechanism: This mechanism is used by update process. Update process reads the temporary tables and updates DB tables synchronously by obtaining transaction consistency.
253
Update reorganization: rdisp\vbdelete isused to delete the incomplete update records. It should be set to 1. RSM130002: It is used to delete the update requests based on number of days. rdisp\vbmail: This is used to trigger the mails to users in case of update errors and update deactivation rdisp\vb_stop_active=1: This parameter is used to deactivate the update process in case of programmatic or DB related errors. Enqueue Mechanism:Enqueue is used to the locks to an SAP transaction. This is used to logically lock the tables and the arguments which are involved in the transactions. Enqueue process is only confined to application server level. These are not equal to DB locks (Tables updation locks) but similar to DB locks. ENqueue_Table_Size Enqueue table is stored in main memory of the instance where enqueue is located. Dead Lock: In order to update user requirements enqueue lock to be placed on the table and on its arguments. Let us say if the user has to lock the two arguments. But one of the locks is locked earlier by another user. This user is waiting for lock on the argument which is locked by the current user. This situation is called as Dead Lock. Apart from the above user call regularly raising a message saying that transactions are locked and could not be updated. Note: In order to resolve the above issue release the locks in SM12. Process of Releasing Lock: 1. Users complain of updating request. As part of system help we need to monitor the locks which are older than 24 hours 2. Identify the user who locked the argument and check whether user is still logged onto the system. Note: Do not ever try to delete the locks without user approval. SM04- Users who logged on to the system. Case-1: If the user is not logged on we can release the lock
254
Case-2: If the user is logged in get the telephone number and E-mail id from SU01 and send a mail to the user about the lock release. Call the user and explain the significance of lock release. Probably we can also involve the consultant who is waiting for lock. Based on verbal approval send a mail to the user saying that As discussed today morning 5 A.M about release of locks we are going ahead to unlock the transaction Thanks for approval Finally release the lock in SM12. Release the one which is approved. Background Processing: The programs, reports, transactions which consumes more than the time specified is rdisp\max_wprun_time=600. After 600 seconds the transactions will get terminated and logged into ABAP dump with an error Time out occur or a program running more than the specified time. SAP recommends to run this sort of long running programs and time consuming programs or expensive programs in the background mode using Batch Process (background Work Process). Background work processes are used to schedule the reports, programs, transactions to run in the background mode during off peak hours without any user intervention. This facility is used by most of the companies for the following purposes. 1. Data transfer from legacy systems 2. On line sales orders, purchase orders to move into SAP 3. Running the weekend reports, monthly reports, pay roll runs Companies like H.P uses for Order Processing. Note: Companies uses 3rd party tools like Maestro, Tidal to trigger the background processes. Triggering background Jobs: This can be triggered based on the background jobs 1. Time-Controlled: Ex: Every day 10 pm Every hour start at 11:55
255
Every Month Run immediately 2. Event Controlled: These are used to trigger based on events on success of dependant programs Users can trigger some of the events using standard ABAP programs or customizing using functional modules BP_RAISE_EVENT (In earlier versions) or use the external programs and commands using command sapxpg and sapevt. Defining a Background Job: In SM36 background jobs are defined SM36 Specify the name of the job Description of the Job Specify the type or class Spool recipient Different types of classes are Class A: With high priority (background W.P of Type A should be reserved in RZ04) Class B: Medium Priority Class C: Low priority Spool Recipient: Printer, E-mails, fax, intranet, internet [pdf] specify the name of the program along with the variant. Variant: It is a predefined value which will be populated during runtime. These are created in transaction SA38. Variants are stored in tables TVARV Variant s are used to avoid user interactions We also specify external command or program to run in the target system. Specify the time and save. Monitoring Background Jobs: Background jobs are monitored in SM37
256
Background jobs have the following status: Scheduled: When it is defined status is scheduled Released: When we specify the time to run the status is released Ready: Waiting for the background work process Active: Actively running the job or program Cancelled: Cancelled Completed: Job is successfully completed. User defines dialog process a program/report/transaction to run in the background mode Dialog process updates a table TBTCS and TBTST (scheduled) Scheduler sap message server system (SM61) runs in the dialog mode for every 60 seconds (default) which is specified by parameter rdisp\btc_time=60
Dialog Instance CI
User Communit y
DB
rdisp\btc_time=60 Scheduler reads from the table and keeps the job in queue based on their time of priority Background job gets free work process and runs in the background mode active The job is completed the status is finished (or) completed else it is canclled
Text File Legacy System Purchase Order Excel File Text and Excel File 257 DB
Based on the time IDOC Background Job Errors: 1. File not found 2. File found but could not be opened 3. File permission problems 4. Background work processes are not available 5. Target syste is not available 6. User ID or password changed (or) password expired 7. Users are locked in the target system 8. Objects are locked (SM12) 9. Table space overflow 10. 11. Connectivity with RFC system fails Authorization to the target system fails (Role is expired)
Standard Background Jobs: In order to organize the R/3 DB we will schedule the following as housekeeping jobs.
Periodic Jobs Required for Housekeeping: Job Name SAP_REORG_JOBS Deletes old background jobs. SAP_REORG_SPOOL Deletes old spool requests. SAP_REORG_BATCHINPUT Deletes old batch input sessions. ABAP Program Required Variant RSBTCDEL You must create a variant. You must create a variant. You must create a How Often? Daily
RSPO0041
Daily
RSBDCREO
Daily
258
This job may not run at the same time as normal batch input activity. Schedule this job for periods during which no batch input sessions are run. SAP_REORG_ABAPDUMPS RSSNAPDL Deletes old dumps produced by ABAP abnormal terminations. Alternative: To keep from needing to schedule this job, run the ABAP report RSNAPJOB from the ABAP editor instead. This schedules RSSNAPDL as follows: Job name: RSSNAPDL Variant name: DEFAULT (you must create this variant) Start time: 0100 AM Repeat interval: Daily SAP_REORG_JOBSTATISTIC RSBPSTDE Deletes job statistics for jobs not run since the specified date (statistics no longer needed since job was a one-time occurrence or is no longer run) SAP_REORG_UPDATERECORDS Deletes old completed update records (automatic delete deactivated); deletes incomplete update records (automatic delete deactivated) Run this job ONLY if: You have deactivated the default automatic deletion of update records once they have been processed. This function is controlled by the system profile parameter rdisp/vb_delete_after_execution You have deactivated the default automatic deletion of incomplete update records (records that are partially created when an update header is created and saved but the generating transaction then ends abnormally). This function is controlled by system profile parameter rdisp/vbreorg You have deactivated processing of V2 update components after the processing of the associated V1 updates. This function is controlled by system profile parameter rdisp/vb_v2_start. RSBPCOLL RSCOLL00 RSM13002
variant.
Daily
Monthly
None.
Daily
SAP_COLLECTOR_FOR_JOBSTATISTIC Generates runtime statistics for background jobs SAP_COLLECTOR_FOR_PERFMONITOR Collects system performance statistics This job was previously called COLLECTOR_FOR_PERFORMANCE_MONITOR. When scheduling this job, be sure to use the new name. RSCOLL00 schedules all reports that need to run for the performance monitor using table TCOLL to determine what to run. See the CCMS Guide for more information on
None. None.
Daily Hourly
259
setting up RSCOLL00.
Note: Apart from the above we will run the statistics to update tracing transactions (ST03, ST04) External Commands: These commands are used to trigger the jobs in the system. Using commands SAPEVT and SAPXPG (External Programs) Commands are defined in transaction SM49. It can be executed using SM59. Some of the commands are 1. Startsap 2. Stopsap 3. BRBACKUP 4. BRARCHIVE 5. Tp 6. R3trans 7. BRRESTORE Background processes can be configured by parameters rdisp\wp_no_btc Spool Re organization: The tables TST01, TST03 can held upto a limited number of entries 32,000. This can be increased up to 99,000. If this number exceeds the spool mechanism does not work i.e. why spool tables re organized using the standars background jobs. Schedule background jobs RSP0041 to delete this spool requests which are older than 14 days by default Gateway Process: It is used to monitor the gateway connections make to instance (SMGW). Gateway process is used to allow the incoming connections to an instance there will be only one gateway for each instance. From version 6.40 onwards gateway instance can be installed separately on a Standalone instance. SMGW is used o monitor the connections which are coming to RFC, CPIC.
260
Ex: A service in JAVA system is set to run using a program ID. IN order to check whether the connection is made to R/3 we need to check the program ID in SMGW. Gateway process listens on 3300 (Instance number 00) Spool Process: Spool process is used to print the documents to a printer or a copy as an Email, fax etc.. Spool process is used to generate output request as a device specific request. Spool Process Mechanism: 1. User requests through dialog process to print a document 2. User requests through dialog process and schedules a background job to print documents massively 3. The spool requests are generated by the dialog process (or) background process and updated in the TEMSE. TEMSE: Temporary sequential database These are the spool requests which are generated by dialog (or) background stored in the location defined by the parameter as follows: rdisp\store_location=G (or) db G: Global Directory (O/S level) DB: Database DB means it is stored in the table TST01 and TST03 TST01: It consists of the TEMSE details (Spool request, Name of the author, No of copies, Name of the device) TST03: It consists of the original data to be printed.
Advantages of Global Directory: 1. It is stored at O/S level 2. Easy to access by the O/S spooler Disadvantages of Global Directory:
261
1. For fewer records access time is good, but if the record size grows it takes longer time to fetch the record from the global directory 2. O/S is not part of the regular back up (printer). If there is any damage to O/S global directory will be lost all our spool request will be lost Advantages of Storing at DB: 1. Spool requests are backed up along with the DB backup 2. DB features redundancy, consistency, indexing Disadvantages of Storing at DB: These are stored in DB. It consumes more time than files stored in O/S level
File or Database Storage for TemSe Objects? Type of Storage File system
Disadvantages TemSe data must be backed up and restored using operating system tools, separately from the database. In the event of problems, it can be hard to restore consistency between the data held in files and the TemSes object management in the database.
Database
Definition of Printers: Printers are defined by transaction SPAD SPAD stands for spool administration SPAD is used to define the following: 1. O/P devices 2. Spool Devices (Logical and Real)
262
3. Access Method (Local and Remote) Spool Servers: Go to SPAD Specify the name of the server Specify the server class Server class specifies that instance is designated to handle mass printing, production printing, desktop printing, test printing etc Specify whether the server is real (or) logical server Check on if the server is real (or) logical server Check on if the server is allowed for load balancing Logical spool server is mapped to real spool server Real Spool Server: This is an instance whether at least one spool work process is configured Logical Spool Server: This server doesnt exist but logically defined for load balancing and handle fail over. Each logical spool server should be mapped to at least one real spool server
U C
TA R/3
LP3 Dialog
TA R/3
R/3
263
Access Method: Access method specifies the type of printing mechanism (Local or Remote Front End)
D.B
User Reques t
Global
TEMSE
TST01/TST03
UR
TEMSE
Spool Request
Output Request
O/S Spooler
O/S Printer
Local
Print Server Print Server: This is a server in the network where the printers are configured Local Access Method: If the output request generated by the spool process is formatted according to the output device and hands over the request to O/S
264
spoolers. The commands Lp or LPR at O/S will analyze the O/P request and print it local. Protocols L & C will be used to print the document Remote Access Method: If the O/S spooler and the O/P request resides from different hosts, then remote access method is used. Protocol U is used for UNX based interface Protocol S will be used for WINDOWS based interface S is proprietary protocol for SAP Defining O/P devices: Go to SPAD Click on O/P devices Select the device type Authorization Group: It is used to provide additional authorizations to the objects so that the objects are highly protected Ex: S_TABU_DIS, S_TABU_CLI should be in authorization role of the user to provide additional authorizations to the table and client administration Specify the Access Method: Front End Printing (P): The printer is connected to the user desktop to print sensitive document Disadvantages: Spool process is dedicated to the user till the job is completed Print congestion Occur: IF more number of front end printers are configured try to restrict the no of front end using the parameter rdisp\wp_no_spo_max We cannot schedule background printing because the system needs to login Spool Monitoring: Spool processes are monitored in SP01 or can be read as O/S level using command LP status
265
Go to SP01 Specify * in create Specify the current date Specify the O/P device name It will display the list of spool request The following are the status of the spool request 1. (Minus): No O/P request 2. + (plus): spool request is being generated 3. Waiting: Spool request is not yet processed 4. Inprocess: Spool request is formatted by the spool process generating O/P request 5. Printing: The host is printing 6. Complete: the task is completed. The task with status complete may not be completed because the process waits for the 7. Problem: It monitors the problem related to character formats and margin settings and page settings 8. Error: it is an error status. The printer has been not generated Problems in the Spool Request: 1. Printer pages are not available 2. Print cartridge problem 3. Device problem 4. Incorrect page settings 5. Incorrect character formats 6. SAP scripts, small forms alignment is issued 7. Problem in the report
266
8. While configuring barcode printers ensure that right drives are installed Configuring of SCOT: (SAP Connector) It is used t define E-mail server, Fax Server, Internet, SMTP (Simple mail transfer protocol) Configuring SMTPClick on internetClick on createSpecify the name of the nodeDescribe the nodeSelect the RFC nodeContinue define the RFC destinations If we select fax, pager, printers (OMS output management system) we need to install respective drivers SAP scripts (or) Smart Forms: These are pre defined print formats provided by SAP. We are customizing according to our requirements ABAP list: There are non interactive reports Define the name of the mail server host on which mail server resides Monitoring SCOT: As part of regular monitoring the no of requests which have been sent out from SAP system. The various statuses of the requests are 1. Waiting: The request is sent to outbound an is in waiting for processing 2. In transit: the requests are processed 3. Completed: The requests are completed 4. Error: The requests are not sent SOST: SOST is used to display and manage all messages sent using
SAPconnect.
Depending on the selection criteria chosen, the program displays send requests that are sent or have already been sent using SAPconnect. A variety of selection and display options are available to you. For more detailed information about using the send requests overview, see the program documentation in transaction SOST. To do this, call transaction SOST and choose the info button or choose Help Application Help.
267
Go to SOST Select the status waiting, errors, sent and execute to display the list of transmission requests we can trace error transmission requests upon resolving the error, the transaction request can be resent, spool process is configured using parameter rdisp\wp_no_spo more than one depending upon resources
If a user is only allowed to select send requests of certain users or groups, you can use transaction SOSG for this. This transaction is the same as transaction SOST, however it also performs additional authorization checks. To be able to use this transaction, a user must not have ADMINISTRATOR authorization in authorization object S_OC_ROLE. Authorization to select users or groups in transaction SOSG is controlled through authorization object S_OC_SOSG. For more information, see the documentation for this authorization object. In transaction SOSG, using input help for the Sender field displays only those users or groups for which the current user has display authorization.
268
Printer Types: Local Printer The printer is directly attached to the host system. Shared Printer The printer resource is shared to other systems. Network Printer The printer is configured in a network (i.e. to a LAN switch) where end users within the network can use the printer resource.
SAP Printing Techniques: Local Printing: The spool server (which contains spool work process) and the host spool system will reside in the same server. Host Spool System: The area which contains the spool data Remote Printing: The spool server & host spool system are on different servers. Front-end Printing: The host spool is in the front-end system itself, the spool data will be stored in the spool directory of the front-end system. For front-end printing users must indicate SWIN device name as output devices, indicating the access method F and the __default as host printer name.
Configuring the printer type in the operating system: Click on Start Menu settingsPrinters and Faxes. 1. Click Add Printer. 2. Click Next & specify the printer type (Local/Network) and all related parameters. Configuring the SAP printing type: SPAD
269
Create output device Specify the name of device according to naming convention of the device Specify the device type Specify the location & describe the printer Choose the access method [local, remote, Front-end] Specify the sequential printing, if we require a sequence in printing. When it is remote printing specify the name of remote host where the printer is hosted. Save and activate The IP address of the printer and the short name described while configuring the printer in SAP must be included in the hosts file. SAP supports only limited number of printers. LP01 is the default printer.
Access Methods: Access method specifies the communication path between SAP spool system and the host spool system i.e., how the SAP spool transfers the data to be printed to the host spool system.
List of Access methods: SAP Printing Type Local Remote Front-end Access Method(Unix) L U Access Method(Windows) C U,S F
Spool Architecture:
270
Spool Work Process converts the spool request into device specific output stream and sends it to the host spool or SAPLPD. SAP profile parameter that controls the no. of spool work processes per instance is rdisp/wp_no_spo. Having several spool work processes per instance avoids communication problems between spool work process and the printing devices which implements spool load balancing by using server groups called as dynamic spool assignment. Before 4.0 release, the spool server assignment & spool server work process were static that is only one spool work process was allowed.
Spool Request It is for the print job or output job, made up of spool request record (administrative information to manage the print jobs) that is it contains the reference to the spool data, output device and the printing format. SP01 We can see the particular spool request or output request. SP02 shows list of all spool requests.
271
Output Requests are the components of the spool request which actually formats the output data and sends it to the host spool system to be printed. You can submit multiple output requests for a single spool request. SP01 select the spool request and click on output request button ( output request for the spool request. ). Shows the list of
SAP spool system handles spool request and the output request, manages the output device type, device drivers, device formats & the character sets. It converts all types of output data into the required output format.
TemSe DB: Temporarily Sequential Object Database which stores the spool request data, the background processing job logs and other texts that are temporary. Contains two options: 1. TemSe Administration(SP12) 2. TemSe Contents(SP11 or SP12 Go to menu TemSe contents) TemSe DB contains lot of job logs and print data files it is convenient to schedule the report RSPO0041/RSPO1041 (house keeping jobs) periodically as a background job for removing and reorganizing the log files.
TemSe has two main storage options: Rspo/store_location=DB stores spool data in the TemSe database. G stores spool request data in the /usr/sap/<SID>/SYS/global
To check the inconsistency of the TemSe DB SP12 TemSe Data Storage Menu consistency check (checks TST01 and TST03 tables). Or using the report RSPO1043.
272
Spool printing process: 1. End user requested for printing 2. Dialog work process creates the spool request and stores in the TSP01 table and assigns to the spool work process. 3. Spool work process gets the spool data and stores in the TemSe. The actual data is stored in the TST03 table and the header information is in the TST01 table. These two tables are updated by the dialog work process. 4. Spool work process converts the spool request into device specific output stream. Output request data is stored in the table TSP02. 5. SAP spool system is a uniform interface which sends the output request to the OS spool system or to the SAPLPD program, which communicates between SAP spool system and the host spool system. 6. OS spool system then sends to the printer directly or to the printing sever where this server sends to the printer (local/network/shared). Statuses of a spool output request: + Waiting : Not yet sent to the host system : Spool requested being generated : Output request is not yet processed.
Processing: Request being formatted. Printing : Request being printed by host spooler.
Complete : Request was printed successfully, or transferred to the host spooler. Problem : The request was printed despite a minor problem, but the output probably contain errors. Error : The spool request could not be printed.
Administration Tasks: 1. Checking and monitoring the spool system, both at SAP and at OS level 2. Deleting old spool requests or scheduling the background job which automatically deletes them (RSPO0041/RSPO1041). 3. Defining new printers, device types and other device elements. 273
4. Fine-tuning 5. Trouble shooting Trouble Shooting: 1. Check and monitor the spool work process (SM50), message server working properly and the OS spool. 2. Find which printer is causing the problems (SP01 System services Output Controller). 3. Check network connections for remote printers. 4. If the print job has been printed out but contains unreadable characters, check device type, access method. 5. When nothing is output at the printer and the output controller is in wait status, check the developer trace and the system log and look for time-out messages. 6. If the job has status complete or problem and nothing is output at the physical printer it might be a wrong output device, problem in host spooler, the physical printer or the SAPLPD program 7. If printing is very slow it might be lost indexes in the spool tables, too many spool table entries, slow WAN connections or incorrectly defined access methods.
Summary
TemSe 274
Access Methods C L U S F
SAPLPD, SWIN
275
System Monitoring
System health checks are used to achieve the high availability by forecasting problems and resolving them within the time. The following activities are carried out as part of system monitoring.
D1
D2 CI + DB1
D3 D4 D5 D6
Go to SM51 to display the list of active servers. This is to ensure that all the dialog instances are up and running. SM51 has the following status: Active and Passive It is also used to identify describe the type of instance use go to go into the respective transaction of the instance. There is no need to log in to the path instance. There is no need to log on to each instance. Double click on the instance it will navigate to SM50 of the respective transaction. Click on release notes to identify kernel version path number etc. SM50: It is used to display the work process over view of an instance. It displays the following 276
1. Number of work processes 2. Type of work processes 3. Process ID of the work processes 4. Status of the work processes 5. Every work processes has the following status Waiting, Running, Holding, Terminated The reasons for the status: The process is debug mode RFC, Sleep, Priv, none mode Sleep Mode: If the target system is not available the work processes waits until the target system is available till that time it will be in the sleep mode. Private Mode: Each work processes required for certain amount of memory for execute user transactions. Work process consumes the memory from roll area initially, if it is not enough it consumes memory from the extended memory. If this is also not enough it will go into HEAP memory. If all these are not available then it goes into private mode. If the work processes goes into private mode it will be released only when the entire memory is exhausted or transaction is completed. The parameter rdisp\max_wprun_time has no effect on the work processes in private mode. The work process which can go into private mode can be restricted by parameter rdisp\wppriv_max_no If mote number of work processes go into private mode work process congestion occurs, users experience hour glass mode. Go to dpmon in command prompt Go to the option menu Display the statistics Identify the process which is logged in earlier. Inform the project manager about the termination of work processes. Upon approval terminate the work process. Restart mode Yes or No: Error: How many times it went to restart Semaphore: it is a block in O/S level which the work process is waiting 277
CPU: The amount of time work processes spend utilizing C.P.U resources Time: Total time of the report user is executing Client: The name of the client user logged in User: The name of the user logged in Action: The type of action to fetch the records
Insert sequentially read, physically read, generate, loaded on which table the work process accesses. Double click on t he process to find out what statement it is running SM04: It is used to display the list of active users of an instance Select the user go to menu to display the terminal and amount of memory utilized by the users such as roll memory, page memory and private memory Al08: It displays active users of all the instances SM66: it is used to display process of all the instances Select the work process double click on it it displays amount of external memory and heap memory utilized. SM21: It is used to display the logs belonging to an instance. It is used to route cause the problem. It displays the following permissions 1. System start up and shutdown 2. Operation mode switch 3. ABAP dumps 4. Private mode ST22: ABAP dumps SAP is programmed in ABAP language. If ABAP program could not be executed it will be thrown into DUMP. ST22 is used to identify the dumps based on date, time, user, client. Most of the ABAP programs errors are related to the following: 1. DB errors 278
2. Table space over flow 3. Max extends reached 4. Archive strucks 5. DB Memory is not sufficient 6. SNAP Shot too old 7. R/3 Error 8. Programmatical Errors 9. Indefinite loop in the program 10.The ABAP program is expensive which consumes more time and Time Out error occurs 11.When the memory is not sufficient it goes onto dump 12.GUI compatibility
279
Data Transfer: Data Migration: During the Implementation the data from legacy systems has to migrate into R/3 systems to continue the business transactions. In order to transfer the data, data migration client is created in Quality System. Perform a migration. Performing the Migration: 1. Identify the source data which needs to be transferred (Customer data, material master data, suppliers data, vendor master information, employee master, some amount of transactional data) Master Data: It is basic information which is used to carry out transactions in system. Ex: while creating sales order the customer name is required along with items and descriptions and quantity Data migration can be performed later also. 2. Data needs to be parsed (or) truncated and mapped to the target system fields 3. Convert them into intermediate format (Text file or Excel File) 4. Schedule the data transfer
Data Migration can be performed using the following: 1. LSMW 2. Direct Input 3. BDC 4. Batch Input 1. LSMW: It stands for legacy system migration work bench. It is ued to transfer the legacy system data into R/3 system. It is a transaction 2. Direct input method: These are standard reports which are used to transfer the data into R/3 system as and when we required. 3. Batch Input Session: It is a part of LSMW which can be executed as and when they are required. A session can be created to upload the data using standard transactions. This standard transaction is recorded with inputs and it is executed in the background mode. It is used for transactional data. 280
4. BDC: Batch Data Communication: It is used to define the customer transfer programs based on the requirements. It also used the mechanism similar to Batch input session, but it is customized using the standard function modules. Workstation upload and Work station download There are 2 types of methods: 1. Call Transaction: It is used for 1 time data transfer 2. Session Method: It is used for periodic data transfer ALE: Application Linking and Enabling It is used to transfer the data between two loosely coupled systems i.e. from SAP to SAP systems. In order to work with ALE we need to define the following: 1. Source System (Logical system name) 2. Target system (Logical system name) 3. Define the RFC connections between the two systems (SM51) 4. Identify the interface BAPI BAPI: Business Application Programming Interface BAPI: These are standard interfaces defined in the system to transfer the data from one system to another 5. Go to BD54 or SALE to define sending system, receiving system and define the distribution model by selecting the respective BAPIs. EDI: Electronic Data Interface It is used to transfer the data between SAP and Non SAP systems. IDOC: It is an intermediate document which is understandable by both SAP and NON SAP systems. IDOCs are monitored in transaction IDOC and WE05 RFC Communication: (Remote Function Call) It is used to define the remote connectivity to the target system. RFCs are used widely in the following scenarios: 1. Remote client copy 2. CUA admin 3. Data Migration
281
4. Transport 5. Satellite systems in solution Manger 6. Configuring email, fax, pager, SMS and external systems 7. Remote monitoring of instances using alert monitor 8. Connectivity between various components of SAP (BW, CRM, SRM,APO) Defining RFC connections: RFC connections are defined in transaction SM59. Go to SM59 Specify the name of the connection specify the description specify the RFC destination (mostly logical system name) specify the type of connection specify the description go to technical settings specify the target host (name of the host or IP address) specify the instance number gateway host and services can be specified optionally click on logon security specify whether it is a clustered system or not specify the authorization and logon details, logon language, USER ID and password go to special options to trace the RFC 9its not required) save the connections click on the test connections (It displays the connectivity and connection time) click on remote logon to check whether the system can login remotely. If the logon details specified in the RFC connections are correct it will connect the remote system and opens the logon screen. Note: Do not give dialog user while defining RFC connections always use either system or communication user Go to menu test to check the authorization of the user. RFCs are of 4 types: 1. ARFC Asynchronous RFC 2. SRFC Synchronous RFC 3. TRFC Transactional RFC 4. QRFC Queud RFC Go to SA38 and use report RSRFCTRC to trace the activities of the RFC connections. 1. Synchronous RFC: If a request is sent from a sending system the target system is available then the data transfer is performed synchronously. If the
282
target system is not available the work process gets in to sleep mode and waits till the target server is available. 2. Asynchronous RFC: If a request is sent to the target system if the target system is available then it hands over the request to the target system. The request will be executed on the target system. If the target system is not available transaction will be cancelled 3. Transactional RFC: It is similar to asynchronous but a transaction ID is generated for each of the user request, unlike ARFC it will not leave the communication with the target system if the system is not available. A background program RSARFCSE is scheduled in the background for every 60 seconds. This program identifies all the transaction IDs which are not sent to target systems and tries to update for every 60 seconds . TRFCs are monitored in SM58 4. Queud RFC: It ensures that the requests are updated in the target system by following a defined sequence. It is an extension of TRFC. QRFCs are monitored in SMQ1, SMQ2, SMQR
Logon Load Balancing: Instance: It provides executable services and has its own memory, buffers, own processes. When more than one instance is configured the following issues may raise: 1. Users logon to instance of their choice 2. Users logon to different instances, buffers are not effectively utilized. In order to address the above logon load balancing is defined. Definitions of logon load balancing: In order to define logon load balancing we need to identify which the components are widely used in the system. Go to ST07 to identify the mostly used components Identify the components and the no of users using the components Go to SMLG define the group and assign instance to it specify the response time go to SAP GUI click on logo groups specify SID specify the message server host and click on the groups lost of logon groups are displayed select
283
the logon group save specify the message server entry in etc\services file create an entry in sapmsg.ini files. SMLG provides the computation of message server. Message server always keeps the name of the favorite server so that the request will be routed to that instance.
Advantages of Logon Load Balancing: Buffers are effectively utilized Optional utilization of work process of an instance Logon load balancing Message server identifies the least loaded server in the logon load balancing and directs the connection to that particular server
RFC Server Groups: These are similar to logon groups but used for RFC communications. Go to RZ12 define server group assign the instance The server groups are used while defining RFC connections when a request is sent to the target system which will be sent to the server group. Server group is mapped with all the available instances. This is used by the RFC request to identify the least loaded available server.
284
Operation Modes: These are used to define switching of W.P during the specified intervals. Ex: During off peak hours there is no need of more dialog work processes which can be converted into background processes. Prerequisites: 1. Profiles are imported 2. Define the instances
Defining operation mode: Go to RZ10 import profiles of all the active instance go to RZ04 define the instance along with profiles (check location) go to RZ04 define operation mode (Day, Night, Peak, Off Peak) Assign the instance and configure the no of W.P for each mode save the operation modesdefine the time schedules of the operation mode in SM63. We can define intervals up to 15 minutes. Default it is 30 minutes. Select the time and assign the operation mode. Note: During the operation mode switch all the active work processes work till they complete the job. As part of regular monitoring we need to check whether operation modes are switched or not. Go to SM37 and click on expensive jobs. If it is business criteria to change the operation modes get the approval from business process owner and manually switch the operation mode. Operation modes are switched manually using RZ03 Maintaining Profiles: By default when the system is installed minimum parameters run the instance are configured. But in order to handle the load configure more memory, configure more processes, increase the size of the buffers, security settings, logon client logon language etc. We need to modify profiles from time to time based on requirements. Profiles: The following are the 3 profiles which are installed by default: The directory that holds the profiles are \usr\sap\<SID>\SYS\profile 285
Startup Profile: This will not be modified unless there is a change in the instance and instance file system. Default Profile: This is modified to set the default value for the users like logon language etc. The parameters which set are applicable to all the instances. This parameters are over written by the instance profile. Instance Profile: These are instance specific parameters like W.P configuration memory configuration
Profile Maintenance: These are maintained in RZ10 transaction code. These profiles needs to be imported into DB after the installation. This will be done in the following way: RZ10 profile import form active server Defining the monitor set: Go to RZ20 Go to Extras Click on active maintenance function (on) Create your monitor set copy from the standard template and adapt the SAP standard values save the monitor set In order to modify the properties, variants threshold values go to RZ21
Dialog Instance 3 Dialog Instance 1 B/W RFC CCMS Agents SAP CCMS Dialog Instance 2 SR RFC JAVA WEBAS Config Alerts CI R/3 RFC CRM RFC RFC RFC
286
Archiving
It is a process of moving the old data (based on the age of the data) to a data warehouse solution, to a tape, external device, HSM machines (Hierarchical Storage Machines). Reasons for Archive: 1. Aged Data 2. Response Time increase (R/3 and DB) 3. Admin cost (Maintenance, Tapes) 4. Based on the country specific requirements, the data in the system should be available for a specific period of time for auditing purpose. The old data cannot be deleted because it will be used to correlate the data, analyze the data. It finally helps to forecast the expected business, man power, material, machines, money and management. In INDIA the companies are following Bassel-2 for auditing. Identifying the necessity of Archiving: 1. The database grows abnormally 2. The database fetches are expensive 3. Using indexes is also not suitable to get the optional response 4. The DB requires reorganization The process of Archiving: 1. Identify the archiving objects 2. Go to DB15 specify the archiving object kike financial documents, material numbers etc. Ex: MM_MATNR, FI_DOCUMNT 3. Functional consultants identify the objects based on the modules and inform us to archive 4. Find the size of the table (RSTABLESIZE) 5. Define the archiving path
287
6. Go to file (FILE_Tcode) 7. Define the logical path Assign physical path (Global Directory)\usr\sap\SID\sys\GLOBAL (Cross Client) 8. For client specific use T-Code SM01 9. Go to SARA identify the object select for archive 10.Define the variant (Name of the variant, time) 11.Status options are Archive, Save, Delete, Write 12.Data reorganization needs to be followed after archiving The third party tools like IXOS, DWB2000, BIW are used for the archiving purpose. As part of regular monitoring we need to monitor the archiving background jobs in SM37
288
GUI (or) Front End time: It is the time taken by the user request to reach the dispatcher. GUI time is not part of response time. GUI time should generally be around 200 m/second Check the network connectivity to application server Wait Time: The time taken by the user request waiting for Work Process it should be around 500 m/sec i.e. 10% of dialog response time. Reasons is no free work process is available W.Ps are running expensive programs transactions, reports The time taken for roll-in of the user context information by the W.P. The roll in time should be 500 M/sec It is assumed as used context information is expensive. Reduce the no of authorizations parameters and ask the used to restrict the queries. Response Time: The time taken for processing a user request and returning the data to the presentation server. Response time will be continued from the movement when a presentation server request is reached to the dispatcher. Note: GUI time and CPU time are not part of the response time. Process Time: It is the amount of time required to process the user request by interpreting (ABAP, Screens, SQL Statements). If the processing time is more we need to analyze ABAP statements, expensive screens etc Processing time in not measured directly. It should not be more than twice the amount of CPU time Process Time 2* CPU Time CPU Time: It is the amount of time taken by the work process utilizing CPU resources while processing the user request. Generally CPU time should not be greater 40% of (Response Time Wait Time) 289
If CPU time is expensive then ABAP programs are expensive. Tune ABAP programs. Run time analysis of the program The CPU time is expensive because of the following reasons: 1. Expensive programs (Inefficient Coding) 2. Internal tables could not handle the data files 3. Indefinite loops in the program Perform the run time analysis of the ABAP programs. It is done by the developers. Load & Generation time (LG Time): It is the amoun ot time taken to load and generate the programs. Normally it should be 100 or 200 nano seconds. If LG Time is more we can consider the following 1. Buffer areas are configured with small size (Increase the buffer size) 2. Buffers are swapped RFC+CPIC time: CPIC: Common programming interface for communication. It is the amount of the time taken by the work process to communicate with external systems like BW, CRM, SRM etc. Generally the time is not defined because it varies from system to system Enqueue Time: The amount of time taken by the dialog process to communicate with enqueue process to obtain the lock while updating records. Normally it should be 1 m/second in the central instance. It should not be more than 100 m/sec if the request is coming out from dialog instance. "enqu_ tablesize is the parameter which is used to define the size of the enqueue table. Database Time: It is the amount of time taken by the database to process user request. It should not be more than 40% of (Response Time Wait Time). If it is expensive we can conclude the following: 1. Expensive Report: With expensive SQL statements 290
2. Database Statistics are outdated: Run the statistics using DB13 3. DB buffers are not sufficient: Increase the buffer size (ST04) or using he parameter db_block_buffers in initsid.ora file at the O/S level. 4. Missing Indexes: Indexes are missed out which can be traced out in DB02 (Recreate indexes, Create appropriate indexes). One primary index and upto 5 secondary indexes 5. Exclusive Lock waits on the DB: Go to DB01 and find out exclusive lock waits Response Time: It is the sum of all the above mentioned times i.e. Wait Time+ Roll Time+ Roll Out Time + Processing Time + Load and generation Time + Enqueue Time+ (RFC +CPIC ) Time+ DB time
291
These need to be explicitly switched based on the requirement. Switch off the trace as soon as the tracing is finished else it will populate the log files. ST02: Memory configuration Monitor:
ST07: Application Monitor It is sued to find the following: 1. No of application servers 2. No of W.P in the entire R/3 system 3. Total no of users created in the system 4. The components and the load on the sytem
292
DATABASE
Normalization: It is the process of splitting of larger tables in to smaller table by enacting primary keys, foreign keys and secondary keys relationship. Various DB Vendors are: 1. Oracle 2. SQL Server-Microsoft 3. DB2 IBM 4. MAX DB- SAP 5. Informix- IBM SAP gives an interface to work on above mentioned systems. SAP never recommends working on the DB directly KDB (database), a database server created by Kx Systems for real-time business applications KDB, a Kernel debugger, a type of tool used for debugging the kernel for Linux and Unix
A kernel debugger is a debugger present in some kernels to ease debugging and kernel development by the kernel developers.
The Windows NT family of operating systems contain a kernel debugger. BeOS contain a kernel debugger. DragonFly BSD employs a built-in kernel debugger.
No kernel debugger was included in the mainline Linux kernel tree prior to version 2.6.26-rc1 because Linus Torvalds didn't want a kernel debugger in the kernel KGDB and KDB are two kernel debuggers for the Linux kernel. KGDB requires an additional machine for debugging, whereas KDB allows the kernel to be debugged on the same machine while the kernel is running. The debugger included in Linux is KGDB.
293
IBM- AIX, HP-AUX are KDB Windows 30-40% KDB Linux- No KDB When the ORACLE DB is installed it is installed by default with the following Users: 1. SYSTEM (user)/MANAGER (Password) 2. SYS (user)/ Change_on_install (password) When R/3 system is installed the following users will be created in the DB. 1. SAP R/3 up to 4.6c 2. SAP<SID> from 4.7 onwards (SAPSID is the schema owner) 3. SAPSR3 DB in new dimensional products like XI, EP. SAPR3, SAP<SID> and SAPR3DB are the schema users of the DB Apart from the above we have 1. Ops$<host name>\ <SID>ADM 2. Ops$<host name>\SAPSERVICE<SID> Query: A normal sql query looks in the following way Selest * from DBA_Users;
Roles: Roles provides authorizations to the users 1. SYSDBA: It provides authorizations for the following activities: system startup, system shutdown, create DB, Modify DB, Create users, roles, tables etc SYSDBA has the entire authorizations from the DB 2. SYSOPER: He is an operator to start the DB and stop the DB Back up, Restore except creating DB
If a user connects as SYSDBA, he becomes SYS, if he connects as SYSOPER, he becomes PUBLIC.
294
Who am I? SYS@ora10> show user Same thing, but connecting as sysdba: Connect sys/my_secret_password as sysdba Who am I? SYS@ora10> show user USER is SYS Now, I, being Rene, want to connect as sysdba and then as sysoper. In order to do so, I need to have the sysdba and sysoper privileges: Grant sysdba, sysoper to rene; Now, I can connect as sysdba: Connect rene/another_secret_password as SYSDBA Who am I? SYS@ora10> show user Altough I have connected myself as Rene, giving my (not sys') password, I am sys: USER is SYS As sysoper: Connect rene/ another_secret_password as sysoper Who am I? SYS@ora10> show user USER is PUBLIC
SYSDBA and SYSOPER are administrative privileges required to perform high-level administrative operations such as creating, starting up, shutting down, backing up, or recovering the database. enables an administrator who is granted one of these privileges to connect to
295
the database instance to start the database. You can also think of the SYSDBA and SYSOPER privileges as types of connections that enable you to perform certain database operations for which privileges cannot be granted in any other way. For example, if you have the SYSDBA privilege, then you can connect to the database using AS SYSDBA. The SYS user is automatically granted the SYSDBA privilege upon installation. When you log in as user SYS, you must connect to the database as SYSDBA. Connecting as a SYSDBA user invokes the SYSDBA privilege. Oracle Enterprise Manager Database Control does not permit you to log in as user SYS without connecting as SYSDBA. When you connect with SYSDBA or SYSOPER privileges, you connect with a default schema, not with the schema that is generally associated with your user name. For SYSDBA this schema is SYS; for SYSOPER the schema is PUBLIC.
Two very special and important system privileges are SYSDBA and SYSOPER. Following table describes the power lies beneath these two system privileges.
SYSDBA Create new database Startup, Shut down database Alter database with the OPEN, MOUNT, BACKUP, CHANGE CHARACTER SET, ARCHIVELOG and RECOVER options Create SP file SYSOPER Startup, Shut down database Alter database with the OPEN, MOUNT, BACKUP, ARCHIVELOG and RECOVER options Create SP file Example:
296
GRANT hr_emps In this example we have first created the role hr_emps and next we assign certain privileges to it and finally we have assigned it to all the employees of hr (Human Resources department). After a month, new business rule comes up which says that all the hr employees should only be having SELECT privilege on emp_test table. To implement this rule all you have to do is to revoke the privilege from the role e.g.
Database Startup: Database is started and stopped by using the following commands 1. Startdb 2. Stopdb It can be started as a part of SAP Stopdb will stop the database. When the database is started it looks for port 1521. This port is configured for listeners. More DBs ,ore ports. Listner.ora: This file is used to configure the database and the port number. This can be configured by using the SQL net configuration. It reads from file TNSnames.ora to resolve the database. Listner Status: Use command lsnrctl to check the listener status. 1. Go to home directory by logging as <SID>adm. Use command lsnrctl listener control n the command wondow <listener>prompt appears. Type status Listener> status If it is not started give <listener> start listener.log is located in the home directory. 297
This gives the information about listener port blocked and why it is not starting. 2. SQLNET_ORA: It consists of the server details. It is required on dialog instance. Environmental Variables: When the oracle database is started it looks for ORACLE_HOME ORACLE_HOME is the environmental variable which we need to set up as user type in the central instance and all the dialog instances. Path: Path needs to be set up for ORACLE_HOME/bin ORACLE Start Up Process: Oracle is started up in the following modes/phases 1. No mount phase: During this phase initialization files are evaluated. SGA is created. The database is not appended and it is not operational. It is used to build the control files and restore the control files. DB is not mounted. 2. Mount: The DB is mounted. SGA is created but the data files are not opened. It is used to restore the data files. DB is operational only for few users like SYSDBA 3. OPEN MODE: The DB is open. The users can logon to the system. SHUT DOWN the DB: The database can be shut down by using the following. 1. Shutdown normal: When this command is issued no users will be allowed to connect to the DB. Existing users will continue their work. If these users logged off the system will shut down normally. 2. Shutdown immediately: No new users are allowed. Existing transaction will be completed and wither a commit or rollback takes place. In shut down normal the DB will be consistant while restart the DB 3. Shut down abort: The system will shut down without any notification to the users. All the open transactions could not be completed or rolled back. Therefore the system inconsistent. A small process SMON will perform instant recovery during the restart. 4. Shutdown transactionally: Open transactions are allowed to complete and no new transactions are allowed. The DB consistent after restarting DB. OPS$ mechanism: Oracle parallel server 298
It is a mechanism which allows the O/S users to connect to the DB without any password provided that the following parameters are set in init<sid>.ora 1. remote_os_authen= true 2. os_authent_prefix= OPS$ init<sid>.ora: It is a file that contains the DB startup parameters control file location log and SGA [SSA, log, DB_buffers] Modifying this profile requires instance restart. Init<SID>.sap:
The initialization profile init<DBSID>.sap contains parameters that influence how the SAP tools perform various functions. It is usually stored in directory <ORACLE_HOME>/dbs (UNIX) or <ORACLE_HOME>\database (Windows).
This specifies the things like back up type, back up device tape size, tape copy command and mostly related to BRTOOLS for backup and recovery. To configure the SAP tools BRBACKUP, BRARCHIVE, BRRESTORE, and BRCONNECT, you must use the initialization profile init<DBSID>.sap. You can edit the file with a text editor. If you do not make any changes, the SAP tools use the default values for the parameters. Before you use one of the SAP tools, find out exactly which parameters you have to configure. Pay particular attention to parameters without default values and parameters that have device-specific information or require special platformspecific commands. Init<SID>.dba: This is used to look up when users withy DBA privileges is logged on to utilize 1. brtools 2. SAPDBA Modifications to this file are performed by the system, whenever the privileges are modified. Both the Oracle and SAP parameter files are usually found in $ORACLE_HOME/dbs. Work Process Connectivity Mechanism: 1. User starts an instance 2. OPS$ mechanism is used to connect to the DB, without any password (SAPSERVICE<SID>)
299
3. Service user connects to the DB without password and gets the password for Schema Owner which is residing in the table SAPUSER 4. Sapservice<SID> is the owner of the table sapuser. SAPUSER table consists of only one entry schema user ID and password. In sapuser table password is stored in the encrypted format 5. OPS$ user gets the password from the SAP table and disconnects from the DB 6. Work process reconnects with SCHEMA ID and password. The work processes is connected to the DB. Note: To check the connectivity between work processes and DB use the command R3trans-d. Trans.log is created current directory. Go to AL11 go to Work directory then check dev_w0, dev_w1 7. SAP schema ID password is stored in two tables SAPUSER and dba_users 8. Changing the password at DB level will not work for SAP<SID> because it is stored in two tables. Use SAPDBA tool or brconnect to change the password of SCHEMA ID (AL11, ST11) User requests a report from the database. R/3 work process cannot work on the database. Instead it will assign the task to shadow process of DB. Shadow process checks the following: 1. If the statement was executed earlier 2. If the table is existing or not 3. The fields are existing or not 4. It will check the DB and defines the shared pool size, nearest plan to SQL execution to fetch the data. 5. Based on the execution plan shadow process requests the content form the DB 6. The content is fetched into DB buffers from storage and responded back to the R/3 buffers. 7. If the same request comes again it will fetch from the DB buffers. Update Request: 1. When a user requests to update a transaction work process hands over the request to shadow process
300
2. Shadow process checks or table definitions and goes to the DB to update the DB buffers and DB records 3. Shadow process fetches the content to be updated onto DB buffers 4. While getting the content it will try to get the lock so that the records will not be modified by others 5. In order to roll back during instance crash, the content is written to undo segments 6. The contents needs to be modified are copied into log buffers 7. User shadow process updated the records in log buffers. Once it is committed it will move to DB buffers and the content is marked as Dirty Buffers. 8. In order to save the transaction it will be updated to log files log writer process (LGWR). 9. After check point interval DB writer reads the contents from th DB buffers and updated the DB and sends a request to process monitor (PMON) to release the locks and undo segments. Data will be released after undo_retention_time (parameter in init<SID>.ora) 10.If the instance is crashed abnormally system monitor (SMON) will read the content from the log files (DISK) and updates the DB. This is called as roll forward mechanism. 11.If the content is not updated in the log files (transaction not committed) PMON performs roll back mechanism and releases the locks.
SAP Directories: SAP Trace: It is used to write the traces by the system alert<SID).log is the file which is populated continuously logged or documenting important activities like log files which changes in the parameters etc SAP Check: This will be populated when the following commands are issued 1. Sapdba-check 2. Brconnect fcheck It logs the following information: 1. File name s and file paths
301
2. Critical files and their size 3. Table space information SAPARCH: It is used to write the log files generated by BRARCHIVE process ORAARCH: It is used to store the archive files written by ARCH process. The archive logs are coming from origlogA and origlogB to ORAARCH. It should be large enough to avoid archive struck problem SAP BACKUP: It is used by system whileperforming reorgqanization. It should be as big as the size of the exported content. ( Tables ,DB, Table Spaces) Origlog A & B: These are the log directories which are populated by log writer from log buffer. This files are read by t perform role forward mechanism by SMON in case of an instance crash Mirror Log A and Mirror Log B: These are the mirror copies of the above log files SAP data1, data2.datan These are the physical storage locations of the DB. Data files are located in the directories. Control Files: These files are required to restore the DB. These fies are written and updated by the system continuously. Information like database changes made to data file, storage locations, log sequence number, system change number. losing the control files will not allow to restore the system file. So it is recommended to store the control files in six different physical locations To find the six physical location go to init<SID>.ora and add the control file location. DB buffers are used to store the content accessed by the server process (Shadow process) or background server process. DB buffers are configured by parameters db_block _buffers. 302
DB buffers help to reduce the fetches from the database. Log Buffers: It is used to log the changes made to the database content. It only documents the log block which needs to be modified along with the content. Log buffers help to write the changes using fast track mechanism and transactions commit will be configured only after writing to online redo log files .( Original logs) Log Writer: (LGWR) It is the process used to write from log buffers to Online redo log files (origlog files). Log worter triggers under the following circumstances 1. Whenever 1/ of the log buffer is full Note: Log buffer size is determined by log_bufer (20 MB) 2. Whenever the changes reaches !MB of size 3. Whenever commit occurs 4. Before the DB writer writes the DB from DB buffers 5. Log buffers are cclic buffers 6. Log buffers are sequentially written to online redo log files Database Writer (DBW): It writes the contents (dirty buffers) from DB buffers to DB. It is triggered during the following circumstances. 1. Whenever check point occurs Note: check point interval is defined in init<SID>.ora 2. Process scans for space I the DB buffers. If it could not find them it triggers to write the dirty buffers to database 3. Dirty buffers in the DB buffers reach threshold values. It is depending on init<SID>.ora. it happens to every 30 seconds Log Switch: There are two online log files (origlog A and origlog B). These log files are continuously updatd by log writer process from log buffers. If orig logA is full it will be closed for writing and logB will be opened for writing. This mechanism is called log switch. During the log switch log file number is incremented (LSM- Log sequence number). 303
ARCH process: The process of writing online redo log files into ORAARCH directory is called as ARCH process. This is done by ARCH process. In order to have this mechanism to work we need to set paramenter ARCHIVE_log_mode=yes The database administration will set this parameter to YES in all the productive systems. This mode can be set to NO or FALSE in quality and development systems. ARCHIVE_log_mode can be set up in SAPDBA or BRTOOLS. SMON: In case of instance crash it is used to restore the DB from log files(role forward mechanism) and rolls back is committed.
PMON: Process Monitor It is used to rollback the user transactions and the locks held by the users. Data1.datan files: ORACLE database is stored in the hard disk in terms of blocks. Each block can consists of either 4KB or 8KB. SAP uses 8KB. The size of the block is determined by the parameter db_blocksize=8192 bytes (8 KB). Extents: The continuous blocks are allotted in terms of extents. Extends a group of blocks. Extends are assigned in terms of init extents, minimum extents, nest extents and max extents. The extents allocation is stored in tables based on their category of tables TGORA (Tables)& IGORA (Indexes) Segments: Group of tables or indexes (It can be a single table or index or a group of tables) Table Space: It is logically defined to store the tables and indexes. Each table space is pointed to N number of physical data files. But data files can be pointed to only one table space. 304
When table space is created using the statement create tale space we need to specify the size of the data file. If the table space is full the data files are full and the data cannot be inserted or updated. Note: If the table space is full we get an error ora_1653 and or_1654 table space overflow. In this situation we need to alter the table space by using commands SQL> alter tablespace add datafile; SDL>alter tablespace resize datafile; Note: SAP never recommends to use the above commands in turn it recommends to use SAPDBA/BRTOOLS where knowledge on SQL commands is not required. Extents Allocation: For each of the tables extents are allocated based on their category referring to tables TGORA & IGORA. When a table is defined it is assigned wit initial extents. When the assigned extents are reached it will throw an error. max extends reached with an error ora1631 (Related to the tables) & ora 1632 (Related to Indexes). Use SAPDBA/BRTOOLS to extend the extents. Note: All the above errors will be resulted in ABAP dump and logged in SM21 and ST22. Note: For all NETWEAVER 2004 components maintenance optimizer is necessarily installed Table Spaces: There are 2 types of table spaces 1. DMTS: Dictionary managed table space (Up to version 9) 2. LMTS: Locally managed table spaces DMTS: The space cannot be utilized because it cannot be reorganized. But in LMTS unused space can be automatically reorganized. There is no need to adopt next extents (SAPdba_next). In 4.6c we have around 27 table spaces which start by default with PSAP and ends with either data or index LDP Ex: PSAPRABD, PSAPTABI From 4.7 EE we have six table spaces 305
1. PSAPUNDO 2. PSAPTEMP 3. SYSTEM 4. PSAP<SID> 5. PSAP<SID><Rel No> 6. PSAPUSR PSAPUNDO TABLESPACE: It is used to store the before image of a transaction. It should be large enough to hold a complete transaction. PSAPUNDO replaces PSAPROLE and it holds the data until it is full or until the value mentioned in undo_retention_time. PSAPTEMP TABLESPACE: It is used to sort the data. It should be large enough to sort the data. BACK UP: In order to restore the database in case of failures (Media Disasters) media failure means logical errors, disk crash, power crash etc.. In order to restore the DB we require a valid back up. The areas which need to be backed up are 1. USR is required in R/3. 2. Data Files 3. Log Files 4. Control Files Rel No: Release Number
306
1. Offline Back Up: The backup will be taken off line, users are disconnected and the R/3 system is shut down. The backup is consistent to restore the database. Note: We do not take regular off line backup (Take during off peak hours). Mostly the companies which are not supporting on line back up. Off line backup can be performed using SAPDBA tool or O/S backup, BRTOOLS, Native database backup and 3rd party tools like VERITAS, BRIGHTSTORE, ARCSERVE, IBMTEVOLI> BRBACKUP can also e scheduled through command window or using crips. Brbackup mt offline, brbackup m dev verification. (mt, m are scripts). 2. Online Backup: In an environment where database shut down is not supported we perform online back up. Online backup is by default is not consistent, but consistent along with the log files. This is also called as consistent online backup. 3. Partial Backup: If the database sizes are very huge we can perform backup individually based on tables, table spaces etc Partial backup is used in this particular case. If we want backup of recent data then also we can do the same thing by using the partial backup. 4. Incremental Backup: IF the DB size is terabytes taking the entire DB backup either offline or online will consume more time. We will take full backup either weekly or monthly (called as Zero Backup) and subsequently everyday perform incremental backup that is only changes made after the zero backup. Backup Life Cycle: General back up life cycle is 30 days. But SAP recommends 7*4=28 backup. Weekly Backup: w1, w2, w3, w4, w5 Monthly Backup: M1, M2, M3, M4, M5 Tapes Management: 1. Ensure that we have enough tapes in the data center 2. Check the consistency of the tapes at least once in the backup life cycle.
307
3. Allow the tapes to fill only up to 90% 4. Check the tape compression ratio at least once in the backup life cycle. 5. Keep at least 30% tapes in reserve 6. Monitor the DB growth from time to time and plan the capacity of tapes. Optimizer: It defines the execution plan to fetch the data from the DB. There are two optimizers: 1. Cost Based Optimizer 2. Role Based Optimizer SAP uses cost based optimizer which is set by parameter cost based optimizer to true. It is set in init<SID>.ora. SAPDBA & BRTOOLS/ BR GUI: These are character based interface tools where users can work on DB without being a DBA. SAPDBA is applicable up to WEBAS versions 6.20. Functions of SAPDBA: 1. Start and Stop the instance 2. Instance specific information 3. Table space administration 4. Reorganization 5. Users ad Security 6. Backup Restore & Recovery 7. Change the archive log mode 8. Switching logs Regularly We Do (In DB concern): Table space administration:
308
When you click on SAPDBA or execute from command prompt a black window is displayed. Before displaying it checks init<SID>.dba whether the user has necessary privileges to run SAPDBA tool. If the user doesnt have privileges it will not be displayed. Privileges to the user can be provided by using the following command: Ora_dba_usr.sql in the kernel directory. Select the option c for going to table space administration. It displays all the table spaces along with their allocated space and percentage of space utilized. When the errors ORA1651 & ORA16534 occurs we do this based on the companies threshold values. If its exceeds use DB13 to give SAPDBA check Now select the critical table space and add data file to check the disk space before adding the data files. You are supporting versions earlier than 4.7EE you need to decide the drive and directory where data files belonging data/indexes are added. The above processes is documented and ensure that we read documents completely before performing the above exercise. Table space overflows can be handles by recycling the data files. Select the option D reorganization select the table pace and data file and resize the data User & Security: It is recommended to use SAPDBA tool to change the password of schema owner. Select option M Users & Security and navigate to change the password of schema owner. Note: Do not try to change the password of schema owner using command alter user id identified by password Because the password is stored I sapuser and dba_user BRTOOLS: Backup restore tools: Brtools/brgui(JAVA based GUI) Brtools provide various functionalities similar to sapdba but with extensive privileges like create table space, drop tablespace. Some of the BRTOOLS are 309
1. BRCONNECT 2. BRBACKUP 3. BRRESTORE 4. BRRECOVER 5. BRARCHIVE BRBACKUP: It is used to schedule whole offline backup, online backup etc We can initialize brbackup from the command line or using DB13. Go to DB13 (DBA calendar) select the data right click openselect the option to backup the database Note: Database backup is performed by using brbackup. We can also schedule at command line using SM69 & SM49 or a batch script can be used to schedule the back up. BRARCHIVE: It is used to backup archive log files from ORAARCH directory. BRARCHIVE uses to tape or disk, BRARCHIVE uses various technologies to safeguard the archive log files. We can see ARCHIVE stuck in SM21 & ST22. 1. CDS: (Copy Delete Save) BRARCHIVE process copies to the tapes twice and deletes the log files from the ORAARCH directory. If the BRARCHIVE is not When the archive struck occurs archive process cannot archive on line redo log files. Log switch fails and log buffers also gets filled because log writer could not write to redo log files then the user encounters hour glass problem. Resolving the Archive Struck: Create a dummy file of at least 500 MB in the ORAARCH directory (UNIX).For windows copy dummy file of 500 MB into ORAARCH directory. Whenever the problem occurs delete the temp dummy files, so that users can start working they will stop screening. Schedule BRARCHIVE to copy into ORAARCH in to tape or disk. Note: Do not delete or move the ORAARCH contents manually to any other physical location.
310
Note: Backup type (off line, on line) back up type2, type 2 copy command. Tape size are configured in init<SID>.sap BRRESTORE: BRRESTORE is used to restore the BACKUP from the database created by BRBACKUP. The tapes should be tagged and released. BRRECOVERY: It is used to recover the database from a Point of Time. Example: I am applying support packages and the system struck/corrupted at 3 P.M on 01/02/2010 Daily: I have a valid back up of last night which was taken at 3:30 PM Hourly: I do have a BRARCHIVE back up which was taken at 2:30 PM 1. BRRESTORE: Getting the system up 3:30 PM 2. BRRECOVERY: restore the system up to 8:30 PM and recover the system until time 2:30 PM (or) specify log file (1:30) to restore from log files. 3. BRCONNECT options: The following are the options available with the BRCONNECT tool.
1. BRCONNECT f next 2. BRCONNECT -f check 3. BRCONNECT f state 4. BRCONNECT f clear old 1. It is used to adapt next extents a SAPDBA 6.20 version next Also close the same function schedule this activity using DB13 at least once in a day. 2. BRCONNECT f check: It is scheduled based on the requirement but at least once in a day. This job populates the directory SAP check with all the data files along with this space utilization. 3. BRCONNECT f status: It is used to perform to two fold jobs. In SAPDBA we used to use House Keeping Jobs
311
i. ii.
This job identifies the tables whose statistics are outdated and in the next stop it will update all the tables. DBSTATIC: The table which is used to populate the tables whose statastics are outdated. It is scheduled at least in a weal using .dbb, .bat, .chs, .ksh.
4. BRCONNECT f clearold: It is used to clear thr old log files Note: BRCONNECT is also used to change the password of SCHEMA
SAP Reorganization: The data is populated into the DB. As the data grows the query times also degrade. Moreover there are some extents (Which is unused). There will be around 20 to 30% of space which is unused on database to effectively use the unused space. Perform reorganization from time to time. Reorganization will be performed by Export and Import of the DB. Go o SAPDBA Select the option D reorganization select export to export the database to SAP reorg directory Note: SAP reorg directory should have enough space to hold the data. Reorganization can be performed on larger table space and tables. During reorganization we need to schedule down time. SAP standard transactions for DB: DB01: It is used to find out logs. As the regular monitoring activity we need to monitor DB locks. DB02: As the regular monitoring activity we need to display the following. It is used to display the following 1. Statistics 2. Display the current sizes of the table spaces along with used space. 3. It also displays the free space statistics like (Used, Unused space..) 312
4. It will display space critical objects 5. Handling missing indexes Go to DB02 find the missing indexes select the missing indexes click on creating DB(or) go to SE14 DB12: It is used to find the backup logs. It displays over view of the back up and backup os successful or not. It also displays archive directory space. DB15: It is used to select the archiving objects of SAP. DB17: The check conditions can be customized. The same can be done using BRCONNECT f check DB20: It is used to edit the statistics. The statistics will be updated only for transparent table. DB24: Operations log Monitor Whenever the check performed it will log in DB24 Type of O/S: AIX, AS/400, HP-AU (11.11), OS/390-Mainframes, Solaris Communication: Special connection between CI, DI using optical fibers PAM Product Availability Matrix PA-USER-Personal Admin-(Pay check, leave) Pd-USER- personal Development (related to transport) It is the program (portal) which is used to send the change request into multiple systems and multiple clients. ETC is used to point to target groups rather than systems. Buffer Management: Buffers are stored in terms of directories in the memory. When the system is started the buffer areas are allocated based on the profile (Instance Profile). Buffers are built up in respective areas when the users access the content from the DB. Buffers are monitored in ST02. Buffers are managed in to space and the directories and space. If any one of them is full swaps occurs.
313
The performance of the system could not be gauzed initially because the buffers are not built. System performance will be increased if sufficient buffers are maintained. Do not measure the performance soon after starting the system. Wait for at least 20 to 25000 transactions take place. Buffers Hit Ratio: It should be always greater than 94%. That is out of hundred requests 94 are fetched from buffers. SWAPS: Whenever the objects are changed or buffers space is fully utilized or if directories are not enough swaps occurs. Swaps dont mean a performance issue but try to avoid more no of swaps by configuring enough space and directories. Most of the companies allow 5 to 10,000 swaps normally. Reasons for SWAPS: 1. Frequent transactions of objects will cause swaps 2. Space is full 3. 3. The directories are full 4. When the program changes Table Buffering: By default SAP defines buffering for the pre defined tables. Some of the tables are not allowed for buffering Some of the tables are allowed for buffering but buffering is switched off. For some of the tables buffering is allowed. Go to SE14 Check buffering is allowed or not The tables which are frequently accessed but rarely changed are allowed for buffering. Ex: TACT, T000, calendar tables Go to technical properties of the tables no buffering (For tables which are frequently changed and rarely accessed) Full buffering: The tables which are frequently used but rarely changed Single Record Buffering: 314
The buffering is based on primary key Generic record buffering: It is based on group of fields. Note: Do not change buffering status unless recommended by SAP notes. The elements eligible for buffering are programs, menus, screens, common user attributes, calendars, measurements tables, time settings. ST02 will get the above information. Memory Management: While installing SAP system virtual memory needs to be configured based on available memory. Physical Memory: It is the amount of memory installed on the system (RAM). Phy_mem_size is the parameter that shows the physical memory size. Virtual Memory: It is the sum of the physical memory and space allocated on the disc Extended Memory: It is a part of virtual memory which is utilized by R/3 instance. Roll Memory: The amount of memory required by the work process to roll in and roll out the user context. Heap Memory: It is a private memory part of virtual memory. Local Memory: The memory assigned to work process to handle the user request. It is defined by parameter ztta\rool_area Parameter to configure heap memory is abap\heap_limit Memory allocation to Work Processes: When the user request is assigned to W.P it will utilize an amount of memory which is specified in the parameter zttz\roll_first=20 KB (It is a part of roll area). If the memory is not sufficient it will use external memory up to an extent which is defined in ztta\roll_extension= If this memory is also not enough it will use the remaining part of the roll memory. If this os also not sufficient it will use heap memory (Private memory). If the W.P reaches heap memory it is restricted and time out paremeter will not be applicable to it.
315
The work process will be released only after completions of the task or heap limit is exhausted. Specified by parameter abap\heap_limit, abap\heap_area_dia, abap\heap_area_nondia If more no of W.P goes into private mode W.P congestion occurs. In order to resolve this use dpmon to identify the expensive process and then kill the process. ST02: It s used to find the amount of roll memory extended memory, roll memory, heap memory Always monitor ST02 for heap memory utilization.
EM
Heap Memory
RFI (Request for Information) can be prepared by 3 rd party agencies like KPMG/IBM Customer 1. RFI 2. Chases S/W 3. Who needs to Implement RFP: Request for Proposal 4. Customer Will send RFI 5. Customer Choose partner based on (Time material, Fixed bid) Satyam, IBM, L&T, Accenture, Wipro RFI (O/S, DB, Components, Windows, IBM) Partner/SAP/Oracle Response
316
317
NETWEAVER
SAPGUI for WINDOWS SAP GUI for JAVA 4.6 D HTTP 8080 4.7 EE SR1 ITS
HTTP 8080
Up to version 4.6 D R/3 ITS was used to provide web functions to the end users. It consists of two components. Application gate (Agate) and Web gate (Wgate) User Request Flow: 1. User requests using the internet explorer 2. The request is received by web gate which is on the web server and redirect the request to application gate way. 3. Agate in turn sends the request to application server which will be handled by DIAG protocol 4. Server sends the response to the Agate in the DIAG form 5. Agate interprets the request into .CSS (cascading style sheet) and hands over the response to user Note: Wgate must be installed on Web Server Agate and Wgate can be installed together on single host. It is called as singe host installation Agate and Wgate can be installed on to different hosts which is also referred as Dual Host Installation. Monitoring is performed through ITS administration instance It can only process ABAP requests
318
Note: In order to overcome the above and to provide internet functionality to SA R/3 system ICM was introduced from SAP R/3 version 4.7 Up to version 4.6D it is referred as SAP application Server From 4.7 onwards it is referred as SAP Web Application Server ICM become an internal part of SAP and it is monitored using transactions SMICM. While installing SAP WEB AS J2EE engine is required But as planned it could not provide the internet functionality up to version 6.30 (ITS is required) From 6.40 onwards integrated ITS has become an internal part of R/3. Where R/3 systems can be accessed without ITS. It is used to handle Dynpro based applications The SAP WEB AS version is used to provide Web functionality and formed as a BASIS platform for Sap NETWEAVER SAP WEB AS 6.40 supports both ABAP stack and JAVA Stack. JAVA Stack: ESS: Employee Self Services MSS: Manager Self Services EP XI BI BD APO SRM CRM ABAP Stack BIW R/3
JAVA Stack
In order to provide user friendly environment SAP has adapted JAVA as the internet language.
319
Open DB Support
O/S Independent
Application Platform
SAP R/3
Mobile Interface
Paging
SAP PS
Oracle
P I P A
Process Integration with the help of Brokers & Adapters Information Integration People Integration Application Integration X .NET A FRAME P WORK
A P
Netweaver provides technical infrastructure where applications can be hosted on any platform or any database to integrate the processes across the system to fetch information and display it to the users. How to work with ITS (Internet Transaction Server): Go to SICF (Internet communication frame work) Go to BC (Business Component) Go to GUI Go to ITS Sap WEB GUI Right Click Web Server URL: http\<hostname>:port number\sap\bc\ITS\sap\webgui\sap clients 320
Parameters: 1. ITSP/enable=1 (RZ10) 2. Ms/http_port=8080 (or)8081 Version: 4.7 EE ext set 1.00 (For 6.30) a. EE ext set 2.00
JAVA: 1. Unicode language 2. Platform independent (OS/DB) 3. OOPS (object oriented programming system): Encapsulation, Inheritance, Polymarphism 4. Integrations 5. Multi lingual 6. Robustness, secure network 7. User Friendly GUI
1. JVM: Java Virtual Machine: It is platform dependent and it is used to execute the platform independent byte code deployed in JAVA When java is programmed it ends with .java extension. This program will be compiled to a .class extension. The compiled program is platform independent. In order to interpret the compiled program we require JVM (Java Virtual Machine). 2. JRE: Java Run Time Environment: It provides an environment to run the applications designed in JAVA. It is required to install all the NETWEAVER components. 3. J2SDK: Java tools for Standard development Kit: It provides a platform to develop and deploy JAVA applications 321
Note: Java is used to develop and deploy internet based applications while connecting to SAP and Non SAP systems. Installation Options: WEBAS JAVA can be installed as a standalone engine JAVA and ABAP can be installed together JAVA can be added on to the existing ABAP engine WEB Dispatcher ICM JAVA Dispatcher Work Processes MSG ENQ DB ICO ABAP Dispatcher Work Processes MSG ENQ DB
WEB Dispatcher: 1. User request over the internet explorer 2. The request is handled by Web Dispatcher Web dispatcher is a piece of software which can be installed on any machine which is connected to internet Web dispatcher provides security 3. Web dispatcher communicates with message server to identify least loaded server and direct the request to respective ICM 4. ICM receives the user request and segregates whether it need to be send to ABAP dispatchr or JAVA dispatcher 5. ICM is instance specific 322
6. The request is received by JAVA dispatcher 7. JAVA dispatcher assigns the request to server processes. Server process has threads to handle the user requests 8. This is the only process where user request will be connected to complete transactions 9. Session information will be stored as cookie and passed on to the thread 10.The cookie information will be deleted when the user session is terminated 11.While updating a record it talks to enqueue process and obtains the locks Start Up and Shut Down of WEB AS JAVA: 1. It can be started on its own when it is installed as a standalone engine Commands: Startsap on non windows environment will be used and on windows MMC will be used to start and stop the system 2. When it is plugged with ABAP engine ABAP start up frame work initiates JAVA startup provided that start\J2EE=1 (Located in ABAP engine) 3. It can also be started from transaction SMICM Note: Java is always is add-on to ABAP but ABAP cant be add-on for JAVA engine JAVA startup framework: 1. When JAVA instance is started it looks in to profiles (STARTUP and Default) and start the DB. 2. Jcontrol process is also started and connects to the DB to get the instance properties, updated to O/S leve; 3. Jcontrol: Boot strap process will read the O/S profiles and starts all the services. JControl also starts Jlaunch process and assigns memory to them 4. Jlaunch: Jlaunch start up process is initiated b JCOntrol process. This process ends itself when Jcontol is stopped Note: Startup logs can be found in work directory. Startup Problems: 1. Could not find the JVM. We need to set the path (Enviromental variable)
323
2. Memory is not enough t start the instance (Increase the memory based on the availability) 3. DB is not started (Check alert,SID>.log, lsntctl, oracle services) 4. JRE is not compatible ( 1.4.9 to 1.4.12)
JAVA Admin Tools: File Structure: SCS-SAP central Services (Msg, Enqueue) \usr\sap\sid\ Instance directory is located in \usr\sap\SID\JC01 Go to work directory dev_Server, dev_disp, dev_Jcontrol, sapstart.log, dev_bootstrap Cluster: It is the combination of dispatcher and server. It is nothing but instance and always reports to the instance Individual server log and dispatcher log can be found as default.trc Go to J2EE click on admin J2E admin directory: it is used to provide admin tools where services are configured Visual Admin Tool: (VAT) Click on go.bat in admin directory to go to visual administrator tool. Config tool directory: This is used to configure the initial settings of JVM> Restart is required. Add server process,memory settings of JVM. Restart is required. Add Server process, Memory Settings and LDAP settings. LDAP (Light weight Directory access Protocol). TELNET Tool: It is used to communicate with JAVA system remotely. JCMON: It is a CLI to monitor the local and cluster configuration. Ports 1: By default message server listens on 3900 for ABAP 3600. Ports 2: Visual Administrator listens on p4 (Port 4). That is the port number ends with 4 324
JAVA instance port: By default 50,000 is used as port to access JAVA instance over the internet provided instance number is 00. The formula to calculate the port number is: 50,000 + (100* instance number) + default number. TELNET listens on port number 8. JVM listens on port number 18. If the port number is 99 calculate the instance number: 59900, 59904, 59908, 59918 Post installation Activities of JAVA instance: 1. Check the installation log to ensure that the installation is performed successfully 2. Install SAP License. Go to Visual Admin Tool It will display the H/W key get the H/W key go to market place and get the license key license will be in the form of text file go the conflicttool.bat go to file add server process select the dispatcher increase the heap size go to umel.data to configure directory server so that domain users can connect to the system without any password select the server process and go to services to configure the services we can increase http cache size in http servers t o 3000-4000 When working on config tool JAVA Web AS is not required of there is any cahgne mode to config tool it will defect the DB. User admin is used to configure run time parameters most of the parameters are effective immediately Ex: RFC destination, Creating Users TELNET Tool: It is used when we could not logon to JAVA instance Applying the patches: On WEB AS JAVA and components installed on it the components which need to be patched are 1. JRE 2. JAVA Engine 3. 3. Composite Application Framework 4. Visual Composer
325
Patching Tools: Patches can be applied by using the following tools: 1. SAPINST: It is used to apply patches to J2EE engine 2. SDM: Software deployment Manager: it is used to apply patches to all the components SDM s located in the usr\sap\SID\JC01\instance number\SDM\remote.bat (GUI) In order to initiate SDM we require user ID, Password and port number. USER ID is SDM, password is SDM by default, but will be changed during the installation. Port: 50,000+ 100*instance number+ 18 (Default) Only one user can be connected to SDM. While applying patches to SDM, we need to specify the path either DB file system or J2EE SDM can also be initialized t apply the patches either online of offline dsployment JSPM: It is located in usr\sap\sid\instance\J2E\JSPAM\go.bat Click on go.bat to go to JSPM interface. Patches will be copied to the directory Trans\eps\in File types that can be applied as support Packages: The following extensions will be used to apply support packages and patches. 1. .sda: Software development Archive 2. .war : Web Archive 3. .rar: Runtime Archive 4. .ear: Enterprise Archive 5. .jar: Java Archive 6. .sca: Software component Archive Note: We can revert back the packages. We cannot revert back the SPAM. 326
Go to config Toll directory and click on configtool.bat Logon will be performed without any password. Note: If SDM is blocked system hangs or while initializing SDM it says that user 8is already connected to SDM. Hoe to Resolve This: GO to MMC ( IN UNIX use $ps ef|grep SDm to find the process ID) find the process id and stop it and retart it again go to J2EE process list SDM Right Click Restart J2EE engine should be patched before all other patches. GO to COnfig tool click on file go to parameters the configured parameters can be exported go to cluster ID Selece server Select C-ID Security port UME service Go to LDAP Change the configuration data files ADS/write, Specify the server name, Server Port, User Id, Password Elect the instance Select the server process Configure the required services for the following: 1. CAF: Composite application framework 2. User Management services 3. LDAP services 4. Security Services Define http\cache Note: Refer the note thoroughly before making any changes the config tool, ensure that config tool parameter are backed up. Memory Allocation: SAP memory for JAVA engine is allocated as follows: 1. Heap Memory: The total amount of memory which can be utilized to processing 2. Young Generation Memory: Whenever an object is accessed it will be cached into young generation memory Y.G.M is defined by the parameters: 327
-XX: Newsize=171M XX: Max New Size= 171M 3. Permanent Generation Memory: XX : Perm Size= 256M XX: max perm size=256M The objects which are reusable like JAVA Classes, packages, methods will be moved into permanent memory so that requests can access them directly. 4. Old generation or tenure generation or garbage generation: The data which is not eligible to store or cache will be moved to this area. Garbage is cleared from time to time. This memory is defined by using the formula: Heap memory-(sum of young & permanent generation) Heap Memory is defined by parameter XMS1024M Global Dispatcher Configuration: If more than one JAVA instance is configured the parameters and their values can be populated globally using this option. Global Server Configuration: It is used to set the values for the server process. Note: Global Parameters are overwritten by local parameters Working with visual admin: Initializing visual Admin: \usr\sap\sid\EP7\JC02\J2EE\ADMIN Click on the go.bat. It requires password to login. Visual Admin can be used to configure remote systems. Visual Admin is used t configure the following services: 1. Install the license using license adapter 2. Configuring RFC connections using JCORFC Processs 3. Security provider service to configure the user data store to create users and assign roles
328
Log Configuration Service: it is used to configure the logs which determines with what granularity the logs can be filled only errors, only warnings, only information. Log for. Services: This is used to configure the log files, no of active users who is used
Deploy Service: It is used to deploy the programs Monitor Service: It is used to monitor the performance of the system, such as application monitor, SQL Trace, Single activity trace. JARM: Java Application Response Management: It collects the data for performance.
329
330 | https://www.scribd.com/document/157960315/Sap-Basis-Extract-of-Certification | CC-MAIN-2019-35 | en | refinedweb |
Learn how to build .NET Core IoT apps for Raspberry Pi Linux, and connect to Azure IoT Hub
Dave Glover
Updated on
・9 min read
Source Code
The source and the samples for this walk-through can be found here.
Introduction
The .NET Core IoT Library connects your applications to hardware. In this walk-through you will learn how to:
- folder.
Why .NET Core
It used by millions of developers, it is mature, fast, supports multiple programming languages (C#, F#, and VB.NET), runs on multiple platforms (Linux, macOS, and Windows), and is supported across multiple processor architectures. It is used to build device, cloud, and IoT applications.
.NET Core is an open-source, general-purpose development platform maintained by Microsoft and the .NET community on GitHub.
The .NET Core IoT Libraries Open Source Project
The Microsoft .NET Core team along with the developer community are building support for IoT scenarios. The .NET Core IoT Library is supported on Linux, and Windows IoT Core, across ARM and Intel processor architectures. See the .NET Core IoT Library Roadmap for more information.
System.Device.Gpio
The System.Device.Gpio package supports general-purpose I/O (GPIO) pins, PWM, I2C, SPI and related interfaces for interacting with low-level hardware pins to control hardware sensors, displays and input devices on single-board-computers; Raspberry Pi, BeagleBoard, HummingBoard, ODROID, and other single-board-computers that are supported by Linux and Windows 10 IoT Core.
Iot.Device.Bindings
The .NET Core IoT Repository contains IoT.Device.Bindings, a growing set of community-maintained device bindings for IoT components that you can use with your .NET Core applications. If you can't find what you need then porting your own C/C++ driver libraries to .NET Core and C# is pretty straight forward too.
The drivers in the repository include sample code along with wiring diagrams. For example the BMx280 - Digital Pressure Sensors BMP280/BME280.
Software Set Up for Linux, macOS, and Windows 10 Desktops
You can create .NET Core IoT projects on Linux, macOS and Windows desktops. You need to install the following software.
Additional Windows 10 Software Requirements
- Windows Subsystem for Linux (WSL). I suggest you install the Ubuntu 18.04 distribution.
- PuTTY SSH and telnet client
Setting up your Raspberry Pi
.Net Core requires an AMR32v7 processor and above, so anything Raspberry Pi 2 or better and you are good to go. Note, Raspberry Pi Zero is an ARM32v6 processor, and not supported.
If you've not set up a Raspberry Pi before then this is a great guide. "HEADLESS RASPBERRY PI 3 B+ SSH WIFI SETUP (MAC + WINDOWS)". The Instructions outlined for macOS will work on Linux.
This walk-through assumes the default Raspberry Pi network name, 'raspberrypi.local', and the default password, 'raspberry'.
Configure Connection to your Raspberry Pi
The following creates a new SSH key, copies the public key to the Raspberry Pi, and then installs the Visual Studio Debugger on the Raspberry Pi. Take the default options.
From Linux and macOS
Open a new Terminal, and copy and paste the following command.
ssh-keygen -t rsa && ssh-copy-id [email protected] && \ ssh [email protected] "curl -sSL | bash /dev/stdin -r linux-arm -v latest -l ~/vsdbg"
From Windows 10
Press the Windows Key
, type 'cmd', then press the Enter key to open the Windows command prompt. Then copy and paste the following commands.
This is required because WSL 1 does not resolve .local addresses. This has been fixed in WSL 2.
ping raspberrypi.local
Replace xxx.xxx.xxx.xxx with the IP Address of the Raspberry Pi and then copy and paste the following command into the Windows Command prompt.
bash -c "ssh-keygen -t rsa && ssh-copy-id [email protected]" && ^ plink -ssh -pw raspberry [email protected] "curl -sSL | bash /dev/stdin -r linux-arm -v latest -l ~/vsdbg"
Creating your first .NET Core IoT project
Open a command prompt or terminal window, and paste in the following command(s). It will create the project directory, create the .NET Core Console app, add the Iot.Device.Bindings package, and then launch Visual Studio Code.
mkdir dotnet.core.iot.csharp && cd dotnet.core.iot.csharp dotnet new console --langVersion=latest && dotnet add package Iot.Device.Bindings --version 0.1.0-prerelease* code .
- Add the Visual Studio Code Build and Debug assets
- Replace the code in program.cs file with the following code. This code will read the Raspberry Pi CPU Temperature and display it in the system console window.
using System; using Iot.Device.CpuTemperature; using System.Threading; namespace dotnet.core.iot { class Program { static CpuTemperature temperature = new CpuTemperature(); static void Main(string[] args) { while (true) { if (temperature.IsAvailable) { Console.WriteLine($"The CPU temperature is {temperature.Temperature.Celsius}"); } Thread.Sleep(2000); // sleep for 2000 milliseconds, 2 seconds } } } }
Your Visual Studio Code program.cs file should look like the following screenshot.
Deploying the project to your Raspberry Pi
To deploy a project to your Raspberry Pi you need to configure Visual Studio Code to compile for linux-arm, how to copy the compiled code to the Raspberry Pi, and finally how to attach the debugger.
For this walk-through, we are going to use rsync to copy program files to the Raspberry Pi. Rsync is a very efficient file transfer protocol, comes standard with Linux, macOS, and Windows with the Windows Subsystem for Linux (WSL) installed.
Updating the Visual Studio Code Build Files
We need to update the launch.json and tasks.json files with the following code.
This walk-through assumes the default Raspberry Pi network name, 'raspberrypi.local', and the default password, 'raspberry'.
launch.json
The launch.json file calls a RaspberryPublish prelaunch task which builds and copies the program to the Raspberry Pi, it then starts the program on the Raspberry Pi and attaches the debugger.
{ "version": "0.2.0", "configurations": [ { "name": "Raspberry Pi Publish, Launch, and Attach Debugger", "type": "coreclr", "request": "launch", "preLaunchTask": "RaspberryPublish", "program": "~/${workspaceFolderBasename}/${workspaceFolderBasename}", "cwd": "~/${workspaceFolderBasename}", "stopAtEntry": false, "console": "internalConsole", "args": [ "" ], "pipeTransport": { "pipeCwd": "${workspaceRoot}", "pipeProgram": "/usr/bin/ssh", "pipeArgs": [ "[email protected]" ], "debuggerPath": "~/vsdbg/vsdbg" }, "windows": { "pipeTransport": { "pipeCwd": "${workspaceRoot}", "pipeProgram": "plink", "pipeArgs": [ "-ssh", "-pw", "raspberry", "[email protected]" ], "debuggerPath": "~/vsdbg/vsdbg" } } } ] }
tasks.json
The tasks.json file defines how to compile the project for linux-arm and how to copy the program to the Raspberry Pi with rsync. On Windows, you must explicitly specify the IP Address of the Raspberry Pi as rsync is called via Bash and the Windows Subsystem for Linux does not resolve .local DNS names.
{ "version": "2.0.0", "tasks": [ { "label": "RaspberryPublish", "command": "sh", "type": "shell", "problemMatcher": "$msCompile", "args": [ "-c", "\"dotnet publish -r linux-arm -o bin/linux-arm/publish", "${workspaceFolder}/${workspaceFolderBasename}.csproj\"", ";", "sh", "-c", "\"rsync -rvuz ${workspaceFolder}/bin/linux-arm/publish/ [email protected]:~/${workspaceFolderBasename}\"" ], "windows": { "command": "cmd", "args": [ "/c", "\"dotnet publish -r linux-arm -o bin\\linux-arm\\publish", "${workspaceFolder}\\${workspaceFolderBasename}.csproj\"", "&&", "bash", "-c", "\"rsync -rvuz $(wslpath '${workspaceFolder}')/bin/linux-arm/publish/ [email protected]:~/${workspaceFolderBasename}\"" ] } } ] }
Build, Deploy and Debug your .NET Core IoT App
Review this Visual Studio Debugger Guide if you've not used the debugger before.
Set a breakpoint in your code, for example at the 15, and from Visual Studio Code click the Debug icon on the Activity bar, ensure "Publish, Launch and Attach Debugger" is selected in the dropdown, and click the green run icon.
Your code will build, it will be copied to your Raspberry Pi and the debugger will be attached and you can now start stepping through your code.
Connect your Raspberry Pi to Azure IoT Hub
Follow the "Create an Azure IoT Hub (Free)" tutorial until the "Send simulated telemetry" section. You will need to the connection string of the device you created.
Add the Package references for Azure IoT Hub and JSON.NET. This can either be done by executing the 'dotnet add package' command, or by updating the references directly in the .csproj file.
Open the dotnet.core.iot.csharp.csproj file and update the section as follows.
<ItemGroup> <PackageReference Include="Iot.Device.Bindings" Version="0.1.0-prerelease*" /> <PackageReference Include="Microsoft.Azure.Devices.Client" Version="1.*" /> <PackageReference Include="Newtonsoft.Json" Version="12.0.1" /> </ItemGroup>
- Replace the code in program.cs file with the following code and add your device connection string.
This code will read the Raspberry Pi CPU Temperature, display it, then send the telemetry to Azure IoT Hub.
using System; using Iot.Device.CpuTemperature; using Newtonsoft.Json; using Microsoft.Azure.Devices.Client; using System.Text; using System.Threading; using System.Threading.Tasks; namespace dotnet.core.iot { class Program { const string DeviceConnectionString = "<Your Azure IoT Hub Connection String>"; // Replace with the device id you used when you created the device in Azure IoT Hub const string DeviceId = "<Your Device Id>"; static DeviceClient _deviceClient = DeviceClient.CreateFromConnectionString(DeviceConnectionString, TransportType.Mqtt); static CpuTemperature _temperature = new CpuTemperature(); static int _msgId = 0; const double TemperatureThreshold = 42.0; static async Task Main(string[] args) { while (true) { if (_temperature.IsAvailable) { Console.WriteLine($"The CPU temperature is {Math.Round(_temperature.Temperature.Celsius, 2)}"); await SendMsgIotHub(_temperature.Temperature.Celsius); } Thread.Sleep(2000); // sleep for 2000 milliseconds } } private static async Task SendMsgIotHub(double temperature) { var telemetry = new Telemetry() { Temperature = Math.Round(temperature, 2), MessageId = _msgId++ }; string json = JsonConvert.SerializeObject(telemetry); Console.WriteLine($"Sending {json}"); Message eventMessage = new Message(Encoding.UTF8.GetBytes(json)); eventMessage.Properties.Add("temperatureAlert", (temperature > TemperatureThreshold) ? "true" : "false"); await _deviceClient.SendEventAsync(eventMessage).ConfigureAwait(false); } class Telemetry { [JsonPropertyAttribute (PropertyName="temperature")] public double Temperature { get; set; } = 0; [JsonPropertyAttribute (PropertyName="messageId")] public int MessageId { get; set; } = 0; [JsonPropertyAttribute (PropertyName="deviceId")] public string DeviceId {get; set;} = Program.DeviceId; } } }
Redeploy the App to the Raspberry Pi
Press F5 to run the current 'Publish, Launch, and Attach Debugger' build task.
Monitor the Azure IoT Hub Telemetry
Install the Visual Studio IoT Hub Toolkit.
Review the Visual Studio IoT Hub Toolkit] Wiki for information on using the IoT Hub Toolkit Visual Studio Extension.
References
Remote Debugging On Linux Arm
Azure IoT libraries for .NET
GraphQL performance issues & an easy solution
One of the biggest GraphQL flaws is missing of some basic implementations know from REST which are crucial for application performance.
I followed this guide and its working perfect, but now i'm stuck at making my app a service on the pi.
Adding this as a service is quite simple:
create a appname.service in /etc/systemd/system/
with the following contents:
[Unit]
Description=AppName
[Service]
ExecStart=/bin/dotnet /home/pi/dotnet.core.iot.csharp/dotnet.core.iot.csharp.dll
WorkingDirectory=/home/pi/dotnet.core.iot.csharp
but i can't find the path to dotnet.
Any idea?
Thanks in advance,
Eric
Are you just trying to run the program on startup or do you need to run as a service for dinner reason? The app is standalone, you've not installed the standalone dotnet framework. The easiest way to autostart an app is to add to /etc/rc.local with an & after the command to allow app to run in the background. | https://dev.to/azure/net-core-iot-raspberry-pi-linux-and-azure-iot-hub-learn-how-to-build-deploy-and-debug-d1f | CC-MAIN-2019-35 | en | refinedweb |
The
@now/bash Builder takes an entrypoint of a bash function, imports its dependencies, and bundles them into a Lambda.
A simple "hello world" example:
handler() { echo "Hello, from Bash!" }
When to Use It
This Builder is the recommended way to build lambdas from shell functions.
How to Use It
This example will detail creating an
uppercase endpoint which will be accessed as
my-deployment.url/api/uppercase. This endpoint will convert the provided querystring to uppercase using only Bash functions and standard Unix CLI tools.
Start by creating the project structure:
Inside the
my-bash-project > api > uppercase directory, create an
index.sh file with the following contents:
import "[email protected]" import "[email protected]" handler() { local path local query path="$(jq -r '.path' < "$REQUEST")" querystring "$path" | querystring_unescape | string_upper }
A shell function that takes querystrings and prints them as uppercase.
The final step is to define a build that will take this entrypoint (
index.sh), build it, and turn it into a lambda using a
now.json configuration in the root directory (
my-bash-project):
{ "version": 2, "builds": [{ "src": "api/**/index.sh", "use": "@now/bash" }] }
A
now.json file using a build which takes a shell file and uses the
@now/bash Builder to output a lambda.
The resulting deployment can be seen here:
Furthermore, the source code of the deployment can be checked by appending
/_src e.g..
Note, however, that it will return an empty response without a querystring.
By passing in a querystring, the lambda will return the uppercased version. For example:
Technical Details
Entrypoint
The entrypoint of this Builder is a shell file that defines a
handler() function.
The
handler() function is invoked for every HTTP request that the Lambda receives.
Build Logic
If your Lambda requires additional resources to be added into the final bundle,
an optional
build() function may be defined.
Any files added to the current working directory at build-time will be included in the output Lambda.
build() { date > build-time.txt } handler() { echo "Build time: $(cat build-time.txt)" echo "Current time: $(date)" }
Demo:
Response Headers
The default
Content-Type is
text/plain; charset=utf8 but you can change it by setting a response header.
handler() { http_response_header "Content-Type" "text/html; charset=utf8" echo "<h1>Current time</h1><p>$(date)</p>" }
Demo:
JSON Response
It is common for serverless functions to communicate via JSON so you can use the
http_response_json function to set the content type to
application/json; charset=utf8.
handler() { http_response_json echo "{ "title": "Current time", "body": "$(date)" }" }
Demo:
Status Code
The default status code is
200 but you can change it with the
http_response_code method.
handler() { http_response_code "500" echo "Internal Server Error" }
Demo:
Redirect
You can use the
http_response_redirect function to set the location and status code. The default status code is
302 temporary redirect but you could use a permanent redirect by setting the second argument to
301.
handler() { http_response_redirect "" "301" echo "Redirecting..." }
Demo:
Importing Dependencies
Bash, by itself, is not very useful for writing Lambda handler logic because it
does not have a standard library. For this reason,
import
is installed and configured by default, which allows your script to easily include
additional functionality and helper logic.
For example, the
querystring import may be
used to parse input parameters from the request URL:
import "[email protected]" handler() { local path local query path="$(jq -r '.path' < "$REQUEST")" query="$(querystring "$path")" echo "Querystring is: $query" }
Demo:
Bash Version
With the
@now/bash Builder, the handler script is executed using GNU Bash 4.
handler() { bash --version }
Demo:
Maximum Lambda Bundle Size
To help keep cold boot times minimal, the default maximum output bundle size for a Bash lambda function is
10mb.
This limit is extendable up to
50mb.
maxLambdaSizeconfiguration:
{ "builds": [ { "src": "*.sh", "use": "@now/bash", "config": { "maxLambdaSize": "20mb" } } ] } | https://docs-560461g10.zeit.sh/docs/v2/deployments/official-builders/bash-now-bash/ | CC-MAIN-2019-35 | en | refinedweb |
Vuejs Dialog Plugin: A promise based alert plugin
Vuejs Dialog Plugin
The Vue.js Dialog Plugin offers easy implementation of alerts, prompt and confirm dialogs, along with the option to be used as a directive.
- Usage as a method
- Usage as a directive
- Different confirmation types
Variations:
- Alert Dialog - one button
- Html Dialog - style/format content
- Basic confirm - close instantly
- Reversed Dialog - switch buttons
- Fade Dialog - Animation
- Bounce Dialog - Animation
Example
To start working with the Dialog Plugin use the following command to install it.
$ yarn add vuejs-dialog
The following example makes use of both the directive & method way.
Import in your project
import VuejsDialog from "vuejs-dialog" // Tell Vue to install the plugin. Vue.use(VuejsDialog)
dialog () { this.$dialog.confirm('Are you sure you want to continue?') .then(function () { console.log('Clicked on proceed') }) .catch(function () { console.log('Clicked on cancel') }) },
Usage:
<button @Normal dialog</button> <!-- Usage as a directive (new) --> <button v-As a directive</button>
If you don't pass a message, the global/default message would be used instead.
More options are available here, if you would like to do more with your dialogs, such as a loading Dialog - useful with ajax or a reversed Dialog.
If you would like to explore more about Vuejs Dialog Plugin, head to the project's repository on GitHub, where you will also find the source code. | https://vuejsfeed.com/blog/vuejs-dialog-plugin-a-promise-based-alert-plugin | CC-MAIN-2019-35 | en | refinedweb |
Introduction
A recurring issue when using the automatic master-detail synchronization feature in ADF Business Components is the moment the detail view object is queried. If you have a "deep" or "wide" hierarchy of view object instances in your application module, then the performance cost of those detail queries might add up significantly. This article describes a technique to prevent premature execution of detail queries.
Main Article
In ADF 10, by default all details where queried immediately together with the master during the ADF-JSF prepareModel phase. Steve Muench has provided sample 74. Automatically Disabling Eager Coordination of Details Not Visible on Current Page on his undocumented samples page to delay detail queries when that data is only needed on subsequent pages. This samples works fine in 10.1.3 but is not easy to implement, and the implementation is page specific, not generic.
Fortunately, in ADF 11 the default query behavior has been changed. For a start, the initial query of a view object is triggered during JSF render response phase while traversing the UI tree. This ensures that only queries that provide data that are actually used on the current page are fired. When a master view object is queried the first time during this JSF render response phase, none of its detail view objects are queried, unless this data is also needed on the same page. This default behavior is already a big improvement over ADF 10, as queries are fired "on-demand" when constructing the page. However, the caviat is that once a detail view object has been queried once, for example because the user navigated from the master page to a detail page that displays this detail information, then on subsequent queries of the master view object, or changes in the current row of the master, the detail view object is queried immediately.
Lets clarify the impact using an example: we have one master view object instance with 5 sibling detail view object instances, all having their own JSF page to display the data. The end user enters the master page, and the master VO is queried, marking the first row as the current row. Then he visits the 5 detail pages for this first master row, causing 5 detail queries, one at the time for each detail page he visits. Now, when the end user returns to the master page and clicks on the second row to make it current, 5 detail queries will fire immediately. If the end user plans to visit the 5 detail pages for this second master row then that's OK, the queries have to be fired anyway. However, when he will not be visiting the detail pages for this master row, 5 queries have been executed in vain.
From the above description you will understand that ADF BC keeps track of the detail VO's (detail row sets to be precise) that have been queried before, so it knows whether detail view objects should be queried immediately as a result of a row currency change in the master. If we somehow can "undo" the housekeeping of these detail row sets, we will constantly keep the nice on-demand querying of detail view objects that we get for free for the initial queries. Well, it turns out to be quite simple to implement this "undo" by adding a custom RowSetListener to the master view object. Here is the RowSetListener class:
package oracle.adf.model.adfbc.fwk; import oracle.jbo.DeleteEvent; import oracle.jbo.InsertEvent; import oracle.jbo.NavigationEvent; import oracle.jbo.RangeRefreshEvent; import oracle.jbo.RowSet; import oracle.jbo.RowSetListener; import oracle.jbo.ScrollEvent; import oracle.jbo.UpdateEvent; import oracle.jbo.ViewObject; public class ClearDetailRowSetsListener implements RowSetListener { /** * This method fires when the current row changes. * We clean up all detail row sets to avoid querying of these detail row sets * when navigating to another master row, or requerying the master */ public void navigated(NavigationEvent event) { ViewObject vo = (ViewObject) event.getSource(); RowSet[] rowsets = vo.getDetailRowSets(); if (rowsets != null) { for (int i = 0; i < rowsets.length; i++) { rowsets[i].getViewObject().clearCache(); rowsets[i].getViewObject().resetExecuted(); rowsets[i].closeRowSet(); } } } public void rangeRefreshed(RangeRefreshEvent event) { } public void rangeScrolled(ScrollEvent event) { } public void rowInserted(InsertEvent event) { } public void rowDeleted(DeleteEvent event) { } public void rowUpdated(UpdateEvent event) { } }
And here is the view object create method that registers the listener:
protected void create() { super.create(); addListener(new ClearDetailRowSetsListener()); }
That's all. You can add this code to your base class view object, and you will get lazy on-demand querying everywhere in your application. | http://www.ateam-oracle.com/lazy-on-demand-querying-of-detail-view-objects | CC-MAIN-2019-35 | en | refinedweb |
Many applications require access to the file system to create, modify or delete files and folders. But how do you make sure that such application behaves correctly? You do it with tests of course but there is a catch: In general it is not a good idea to have tests that are performing Input/Output operations like accessing files and databases. When you need to test I/O operations mock objects are your friend. And before I go into more details let me point out some of the benefits of mocking.
Let’s start with a very simple class called FileManager which can perform a single operation called Rename. I want to make sure that Rename will correctly rename a specified file. For the sake of simplicity Rename will always rename the specified file to “ranamed.file”. Here is the code of FileManager:
public class FileManager
{
private IFileSystem fileSystem;
public FileManager(IFileSystem fileSystem)
{
if (fileSystem == null)
throw new ArgumentNullException("fileSystem");
this.fileSystem = fileSystem;
}
public void Rename(string filePath)
if (this.FileDoesNotExist(filePath))
throw new FileNotFoundException("File was not found", filePath);
var renamedFilePath = "renamed.file";
if (this.FileExists(renamedFilePath))
throw new IOException("New filename already exists: " + renamedFilePath);
this.fileSystem.MoveFile(filePath, renamedFilePath);
private bool FileDoesNotExist(string filePath)
return !this.FileExists(filePath);
private bool FileExists(string filePath)
return this.fileSystem.FileExists(filePath);
}
By taking a quick look at Rename you will notice that it has the following behavior which has to be tested:
So how do we go about testing this behavior? If you take a look at the constructor of FileManager you will notice that it accepts a single argument of type IFileSystem. This interface defines all operations that can be perform on the file system. All I/O operations that FileManager performs are executed through this interface.
public interface IFileSystem
bool FileExists(string path);
void MoveFile(string filePath, string newFilePath);
As we will see, IFileSystem allows me to create a fake implementation of the file system functionality (mock) which will be the cornerstone of my I/O tests.
I will be using MSTest to drive my tests but you can use a testing framework of your choice. Although you are free to create mocks by hand you will be better off using a mocking framework. I have chosen the new, but yet powerful, kid on the block - JustMock. Armed with JustMock you will be able to easily mock interfaces and classes. In this particular case I will use JustMock to mock the IFileSystem interface.
Let’s write some tests.
I use the Initialize method to create a mock implementation of IFileSystem and use it to initialize the FileManager that will be tested.
[TestClass]
public class RenameTests
private FileManager fileManager;
[TestInitialize]
public void Initialize()
this.fileSystem = Mock.Create<IFileSystem>();
this.fileManager = new FileManager(this.fileSystem);
// tests ...
Assert that exception is thrown when non-existent file is passed to Rename
[TestMethod]
[ExpectedException(typeof(FileNotFoundException))]
public void Rename_Throws_WhenTheSpecifiedFileDoesNotExist()
Mock.Arrange(() => this.fileSystem.FileExists("file.to.rename")).Returns(false);
this.fileManager.Rename("file.to.rename");
The magic here happens on the first line of the test where we call Mock.Arrange. This call makes sure that when FileExists is called with “file.to.rename” argument it will return false. After that we simply call Rename and if FileNotFoundException is raised than we have a correct behavior.
Assert that exception is thrown when new filename already exists
[ExpectedException(typeof(IOException))]
public void Rename_Throws_WhenTheTheNewFileNameExists()
Mock.Arrange(() => this.fileSystem.FileExists("file.to.rename")).Returns(true);
Mock.Arrange(() => this.fileSystem.FileExists("renamed.file")).Returns(true);
As you can see this test is very similar to the first one. The difference is that we first have to ensure that FileExists(“file.to.rename”) returns true – this makes sure that the fist condition of the Rename method is met (input file exists). Since we know that Rename will always rename the input file to “renamed.file” we instruct the file system mock to return true for FileExists(“renamed.file”). This way we simulate the case where the file we want to rename exists but the new filename already exists on disk. The last step is to actually call Rename.
Assert that file is actually renamed
public void Rename_RenamesTheInputFile_WhenInputExistsAndNewFileNameDoesNotExist()
Mock.Arrange(() => this.fileSystem.FileExists("renamed.file")).Returns(false);
Mock.Assert(() => this.fileSystem.MoveFile("file.to.rename", "renamed.file"), Occurs.Once());
Finally we have to make sure that file is actually renamed when no exception are encountered.
This test is a very good demonstration of the Arrange Act Assert pattern of writing tests. First we setup the preconditions of the test – in this case we ensure that FileExists(“file.to.rename”) returns true and FileExists(“renamed.file”) returns false. After that we call the method that we are testing. Finally we assert that our expectations are correct. In this particular case we assert that Rename calls MoveFile where the first argument is the old filename (“file.to.rename”) and the second one is the new filename (“renamed.file”).
What do you think about those tests?
I believe that they are easy to read and modify. Moreover they are all short which is always a good property of a test.
There is one more thing. How do we use FileRenamer in a real application? Well, we first create an implementation of IFileSystem that will actually invoke the file system.
public sealed class DefaultFileSystem : IFileSystem
public bool FileExists(string path)
return File.Exists(path);
public void MoveFile(string filePath, string newFilePath)
File.Move(filePath, newFilePath);
And finally we create a FileManager with DefaultFileSystem.
class Program
static void Main(string[] args)
FileManager manager = new FileManager(new DefaultFileSystem());
manager.Rename("file.to.rename");
You can download the source code from here (VS2010 & VS2008)
Cheers. | http://www.telerik.com/blogs/mocking-the-file-system-to-improve-testability-with-justmock | CC-MAIN-2017-26 | en | refinedweb |
memcached_delete man page
memcached_delete — libmemcached Documentation
Synopsis
#include <libmemcached/memcached.h>
memcached_return_t memcached_delete(memcached_st *ptr, const char *key, size_t key_length, time_t expiration)
memcached_return_t memcached_delete_by_key(memcached_st *ptr, const char *group_key, size_t group_key_length, const char *key, size_t key_length, time_t expiration)
Compile and link with -lmemcached
Description for expiration in the 1.4 version.
Return:
Author
Brian Aker, <[email protected]>
See Also
memcached(1) libmemcached(3) memcached_strerror(3)
Author
Brian Aker
2011-2013, Brian Aker DataDifferential,
Referenced By
libmemcached(3). | https://www.mankier.com/3/memcached_delete | CC-MAIN-2017-26 | en | refinedweb |
2016-11-22
Text Mining in Python through the HTRC Feature Reader
Summary:.
Today, you’ll learn:
- How to work with notebooks, an interactive environment for data science in Python;
- Methods to read and visualize text data for millions of books with the HTRC Feature Reader; and
- Data malleability, the skills to select, slice, and summarize extracted features data using the flexible “DataFrame” structure.
Background
The HathiTrust Research Center (HTRC) is the research arm of the HathiTrust, tasked with supporting research usage of the works held by the HathiTrust. Particularly, this support involves mediating large-scale access to materials in a non-consumptive manner, which aims to allow research over a work without enabling that work to be traditionally enjoyed or read by a human reader. Huge digital collections can be of public benefit by allowing scholars to discover insights about history and culture, and the non-consumptive model allows for these uses to be sought within the restrictions of intellectual property law.
As part of its mission, the HTRC has released the Extracted Features (EF) dataset containing features derived for every page of 13.6 million ‘volumes’ (a generalized term referring to the different types of materials in the HathiTrust collection, of which books are the most prevalent type).
What is a feature? A feature is a quantifiable marker of something measurable, a datum. A computer cannot understand the meaning of a sentence implicitly, but it can understand the counts of various words and word forms, or the presence or absence of stylistic markers, from which it can be trained to better understand text. Many text features are non-consumptive in that they don’t retain enough information to reconstruct the book text.
Not all features are useful, and not all algorithms use the same features. With the HTRC EF Dataset, we have tried to include the most generally useful features, as well as adapt to scholarly needs. We include per-page information such as counts of words tagged by part of speech (e.g. how many times does the word
jaguar appear as a lowercase noun on this page), line and sentence counts, and counts of characters at the leftmost and rightmost sides of a page. No positional information is provided, so the data would not specify if ‘brown’ is followed by ‘dog’, though the information is shared for every single page, so you can at least infer how often ‘brown’ and ‘dog’ occurred in the same general vicinity within a text.
Freely accessible and preprocessed, the Extracted Features dataset offers a great entry point to programmatic text analysis and text mining. To further simplify beginner usage, the HTRC has released the HTRC Feature Reader. The HTRC Feature Reader scaffolds use of the dataset with the Python programming language.
This tutorial teaches the fundamentals of using the Extracted Features dataset with the HTRC Feature Reader. The HTRC Feature Reader is designed to make use of data structures from the most popular scientific tools in Python, so the skills taught here will apply to other settings of data analysis. In this way, the Extracted Features dataset is a particularly good use case for learning more general text analysis skills. We will look at data structures for holding text, patterns for querying and filtering that information, and ways to summarize, group, and visualize the data.
Possibilities
Though it is relatively new, the Extracted Features dataset is already seeing use by scholars, as seen on a page collected by the HTRC.
Underwood leveraged the features for identifying genres, such as fiction, poetry, and drama (2014). Associated with this work, he has released a dataset of 178k books classified by genre alongside genre-specific word counts (Underwood 2015).
The Underwood subset of the Extracted Features dataset was used by Forster (2015) to observing gender in literature, illustrating the decline of woman authors through the 19th century.
The Extracted Features dataset also underlies higher-level analytic tools. Mimno processed word co-occurrence tables per year, allowing others to view how correlations between topics change over time (2014). The HT Bookworm project has developed an API and visualization tools to support exploration of trends within the HathiTrust collection across various classes, genres, and languages. Finally, we have developed an approach to within-book topic modelling which functions as a mnemonic accompaniment to a previously-read book (Organisciak 2014).
Suggested Prior Skills
This lesson provides a gentle but technical introduction to text analysis in Python with the HTRC Feature Reader. Most of the code is provided, but is most useful if you are comfortable tinkering with it and seeing how outputs change when you do.
We recommend a baseline knowledge of Python conventions, which can be learned with Turkel and Crymble’s series of Python lessons on Programming Historian.
The skills taught here are focused on flexibly accessing and working with already-computed text features. For a better understanding of the process of deriving word features, Programming Historian provides a lesson on Counting Frequencies, by Turkel and Crymble.
A more detailed look at text analysis with Python is provided in the Art of Literary Text Analysis (Sinclair). The Art of Literary Text Analysis (ALTA) provides a deeper introduction to foundation Python skills, as well as introduces further text analytics concepts to accompany the skills we cover in this lesson. This includes lessons on extracting features (tokenization, collocations), and visualizing trends.
Download the Lesson Files
To follow along, download lesson_files.zip and unzip it to any directory you choose.
The lesson files include a sample of files from the HTRC Extracted Features dataset. After you learn to use the feature data in this lesson, you may want to work with the entirety of the dataset. The details on how to do this are described in Appendix: rsync.
Installation
For this lesson, you need to install the HTRC Feature Reader library for Python alongside the data science libraries that it depends on.
For ease, this lesson will focus on installing Python through a scientific distribution called Anaconda. Anaconda is an easy-to-install Python distribution that already includes most of the dependencies for the HTRC Feature Reader.
To install Anaconda, download the installer for your system from the Anaconda download page and follow their instructions for installation of either the Windows 64-bit Graphical Installer or the Mac OS X 64-bit Graphical Installer. You can choose either version of Python for this lesson. If you have followed earlier lessons on Python at the Programming Historian, you are using Python 2, but the HTRC Feature Reader also supports Python 3.
Conda Install
Installing the HTRC Feature Reader
The HTRC Feature Reader can be installed by command line. First open a terminal application:
- Windows: Open ‘Command Prompt’ from the Start Menu and type:
activate.
- Mac OS/Linux: Open ‘Terminal’ from Applications and type
source activate.
If Anaconda was properly installed, you should see something similar to this:
Activating the default Anaconda environment.
Now, you need to type one command:
conda install -c htrc htrc-feature-reader
This command installs the HTRC Feature Reader and its necessary dependencies. We specify
-c htrc so the installation command knows to find the library from the
htrc organization.
That’s it! At this point you have everything necessary to start reading HTRC Feature Reader files.
psst, advanced users: You can install the HTRC Feature Reader without Anaconda with
pip install htrc-feature-reader, though for this lesson you’ll need to install two additional libraries
pip install matplotlib jupyter. Also, note that not all manual installations are alike because of hard-to-configure system optimizations: this is why we recommend Anaconda. If you think your code is going slow, you should check that Numpy has access to BLAS and LAPACK libraries and install Pandas recommended packages. The rest is up to you, advanced user!
Start a Notebook
Using Python the traditional way – writing a script to a file and running it – can become clunky for text analysis, where the ability to look at and interact with data is invaluable. This lesson uses an alternative approach: Jupyter notebooks.
Jupyter gives you an interactive version of Python (called IPython) that you can access in a “notebook” format in your web browser. This format has many benefits. The interactivity means that you don’t need to re-run an entire script each time: you can run or re-run blocks of code as you go along, without losing your enviroment (i.e. the variables and code that are already loaded). The notebook format makes it easier to examine bits of information as you go along, and allows for text blocks to intersperse a narrative.
Jupyter was installed alongside Anaconda in the previous section, so it should be available to load now.
From the Start Menu (Windows) or Applications directory (Mac OS), open “Jupyter notebook”. This will start Jupyter on your computer and open a browser window. Keep the console window in the background, the browser is where the magic happens.
Opening Jupyter on Windows
If your web browser does not open automatically, Jupyter can be accessed by going to the address “localhost:8888” - or a different port number, which is noted in the console (“The Jupyter Notebook is running at…”):
A freshly started Jupyter notebook instance.
Jupyter is now showing a directory structure from your home folder. Navigate to the lesson folder where you unzipped lesson_files.zip.
In the lesson folder, open
Start Here.pynb: your first notebook!
Hello world in a notebook
Here there are instructions for editing a cell of text or code, and running it. Try editing and running a cell, and notice that it only affects itself. Here are a few tips for using the notebook as the lesson continues:
- New cells are created with the Plus button in the toolbar. When not editing, this can be done by pressing ‘b’ on your keyboard.
- New cells are “code” cells by default, but can be changed to “Markdown” (a type of text input) in a dropdown menu on the toolbar. In edit mode, you can paste in code from this lesson or type it yourself.
- Switching a cell to edit mode is done by pressing Enter.
- Running a cell is done by clicking Play in the toolbar, or with
Ctrl+Enter(
Ctrl+Returnon Mac OS). To run a cell and immediately move forward, use
Shift+Enterinstead.
An example of a full-fledged notebook is included with the lesson files in
example/Lesson Draft.ipynb.
In this notebook, it’s time to give the HTRC Feature Reader a try. When it is time to try some code, start a new cell with Plus, and run the code with Play. Before continuing, click on the title to change it to something more descriptive than “Start Here”.
Reading your First Volume
The HTRC Feature Reader library has three main objects: FeatureReader, Volume, and Page.
The FeatureReader object is the interface for loading the dataset files and making sense of them. The files are originally formatted in a notation called JSON (which Programming Historian discusses here) and compressed, which FeatureReader makes sense of and returns as Volume objects. A Volume is a representation of a single book or other work. This is where you access features about a work. Many features for a volume are collected from individual pages; to access Page information, you can use the Page object.
Let’s load two volumes to understand how the FeatureReader works. Create a cell in the already-open Jupyter notebook and run the following code. This should give you the input shown below.
from htrc_features import FeatureReader import os paths = [os.path.join('data', 'sample-file1.json.bz2'), os.path.join('data', 'sample-file2.json.bz2')] fr = FeatureReader(paths) for vol in fr.volumes(): print(vol.title)
June / by Edith Barnard Delano ; with illustrations. You never know your luck; being the story of a matrimonial deserter, by Gilbert Parker ... illustrated by W.L. Jacobs.
Here, the FeatureReader is imported and initialized with file paths pointing to two Extracted Features files. The files are in a directory called ‘data’. Different systems do file paths differently: Windows uses back slashes (‘data\…’) while Linux and Mac OS use forward slashes (‘data/…’).
os.path.join is used to make sure that the file path is correctly structured, a convention to ensure that code works on these different platforms.
With
fr = FeatureReader(paths), the FeatureReader is initialized, meaning it is ready to use. An initialized FeatureReader is holding references to the file paths that we gave it, and will load them into Volume objects when asked.
Consider the last bit of code:
for vol in fr.volumes(): print(vol.title)
This code asks for volumes in a way that can be iterated through. The
for loop is saying to
fr.volumes(), “give me every single volume that you have, one by one.” Each time the
for loop gets a volume, it starts calling it
vol, runs what is inside the loop on it, then asks for the next one. In this case, we just told it to print the title of the volume.
You may recognize
for loops from past experience iterating through what is known as a
list in Python. However, it is important to note that
fr.volumes() is not a list. If you try to access it directly, it won’t print all the volumes; rather, it identifies itself as something known as a generator:
Identifying a generator
What is a generator, and why do we iterate over it?
Generators are the key to working with lots of data. They allow you to iterate over a set of items that don’t exist yet, preparing them only when it is their turn to be acted upon.
Remember that there are 13.6 million volumes in the Extracted Features dataset. When coding at that scale, you need to be be mindful of two rules:
- Don’t hold everything in memory: you can’t. Use it, reduce it, and move on.
- Don’t devote cycles to processing something before you need it.
A generator simplifies such on-demand, short term usage. Think of it like a pizza shop making pizzas when a customer orders, versus one that prepares them beforehand. The traditional approach to iterating through data is akin to making all the pizzas for the day before opening. Doing so would make the buying process quicker, but also adds a huge upfront time cost, needs larger ovens, and necessitates the space to hold all the pizzas at once. An alternate approach is to make pizzas on-demand when customers buy them, allowing the pizza place to work with smaller capacities and without having pizzas laying around the shop. This is the type of approach that a generator allows.
Volumes need to be prepared before you do anything with them, being read, decompressed and parsed. This ‘initialization’ of a volume is done when you ask for the volume, not when you create the FeatureReader. In the above code, after you run
fr = FeatureReader(paths), there are are still no
Volume objects held behind the scenes: just the references to the file locations. The files are only read when their time comes in the loop on the generator
fr.volumes(). Note that because of this one-by-one reading, the items of a generator cannot be accessed out of order (e.g. you cannot ask for the third item of
fr.volumes() without going through the first two first).
What’s in a Volume?
Let’s take a closer look at what features are accessible for a Volume object. For clarity, we’ll grab the first Volume to focus on, which can conveniently be accessed with the
first() method. Any code you write can easily be run later with a
for vol in fr.volumes() loop.
Again here, start a new code cell in the same notebook that you had open before and run the following code. The FeatureReader does not need to be loaded again: it is still initialized and accessible as
fr from earlier.
# Reading a single volume vol = fr.first() vol
<htrc_features.feature_reader.Volume at 0x1cf355a60f0>
While the majority of the HTRC Extracted Features dataset is features, quantitative abstractions of a book’s written content, there is also a small amount of metadata included for each volume. We already saw
Volume.title accessed earlier. Other metadata attributes include:
Volume.id: A unique identifier for the volume in the HathiTrust and the HathiTrust Research Center.
Volume.year: The publishing date of the volume.
Volume.language: The classified language of the volume.
Volume.oclc: The OCLC control number(s).
The volume id can be used to pull more information from other sources. The scanned copy of the book can be found from the HathiTrust Digital Library, when available, by accessing{VOLUME ID}. In the feature reader, this url is retrieved by calling
vol.handle_url:
print(vol.handle_url)
Digital copy of sample book
Hopefully by now you are growing more comfortable with the process of running code in a Jupyter notebook, starting a cell, writing code, and running the cell. A valuable property of this type of interactive coding is that there is room for error. An error doesn’t cause the whole program to crash, requiring you to rerun everything from the start. Instead, just fix the code in your cell and try again.
In Jupyter, pressing the ‘TAB’ key will guess at what you want to type next. Typing
vo then TAB will fill in
vol, typing
Fea then TAB will fill in
FeatureReader.
Auto-completion with the tab key also provides more information about what you can get from an object. Try typing
vol. (with the period) in a new cell, then press TAB. Jupyter shows everything that you can access for that Volume.
Tab Autocomplete in Jupyter
The Extracted Features dataset does not hold all the metadata that the HathiTrust has for the book. More in-depth metadata like genre and subject class needs to be grabbed from other sources, such as the HathiTrust Bibliographic API. The URL to access this information can be retrieved with
vol.ht_bib_url.
An additional data source for metadata is the HTRC Solr Proxy, which allows searches for many books at a time, but only for Public Domain books.
vol.metadata can ask this source for metadata straight from your code. Remember that pinging HTRC adds overhead, so an efficient large-scale algorithm should avoid
vol.metadata.
Our First Feature Access: Visualizing Words Per Page
It’s time to access the first features of
vol: a table of total words for every single page. These can be accessed by calling
vol.tokens_per_page(). Try the following code.
If you are using a Jupyter notebook, returning this table at the end of a cell formats it nicely in the browser. Below, you’ll see us append
.head()to the
tokenstable, which allows us to look at just the top few rows: the ‘head’ of the data.
tokens = vol.tokens_per_page() # Show just the first few rows, so we can look at what it looks like tokens.head()
No print! We didn’t call ‘print()’ to make Jupyter show the table. Instead, it automatically guessed that you want to display the information from the last code line of the cell.
This is a straightforward table of information, similar to what you would see in Excel or Google Spreadsheets. Listed in the table are page numbers and the count of words on each page. With only two dimensions, it is trivial to plot the number of words per page. The table structure holding the data has a
plot method for data graphics. Without extra arguments,
tokens.plot() will assume that you want a line chart with the page on the x-axis and word count on the y-axis.
%matplotlib inline tokens.plot()
Output graph.
%matplotlib inlinetells Jupyter to show the plotted image directly in the notebook web page. It only needs to be called once, and isn’t needed if you’re not using notebooks.
On some systems, this may take some time the first time. It is clear that pages at the start of a book have fewer words per page, after which the count is fairly steady except for occasional valleys.
You may have some guesses for what these patterns mean. A look at the scans confirms that the large valleys are often illustration pages or blank pages, small valleys are chapter headings, and the upward pattern at the start is from front matter.
Not all books will have the same patterns so we can’t just codify these correlations for millions of books. However, looking at this plot makes clear an inportant assumption in text and data mining: that there are patterns underlying even the basic statistics derived from a text. The trick is to identify the consistent and interesting patterns and teach them to a computer.
Understanding DataFrames
Wait… how did we get here so quickly!? We went from a volume to a data visualization in two lines of code. The magic is in the data structure used to hold our table of data: a DataFrame.
A DataFrame is a type of object provided by the data analysis library, Pandas. Pandas is very common for data analysis, allowing conveniences in Python that are found in statistical languages like R or Matlab.
In the first line,
vol.tokens_per_page() returns a DataFrame, something that can be confirmed if you ask Python about its type with
type(tokens). This means that after setting
tokens, we’re no longer working with HTRC-specific code, just book data held in a common and very robust table-like construct from Pandas.
tokens.head() used a DataFrame method to look at the first few rows of the dataset, and
tokens.plot() uses a method from Pandas to visualize data.
Many of the methods in the HTRC Feature Reader return DataFrames. The aim is to fit into the workflow of an experienced user, rather than requiring them to learn proprietary new formats. For new Python data mining users, learning to use the HTRC Feature Reader means learning many data mining skills that will translate to other uses.
The information contained in
vol.tokens_per_page() is minimal, a sum of all words in the body of each page.
The Extracted Features dataset also provides token counts with much more granularity: for every part of speech (e.g. noun, verb) of every occurring capitalization of every word of every section (i.e. header, footer, body) of every page of the volume.
tokens_per_page() only kept the “for every page” grouping;
vol.tokenlist() can be called to return section-, part-of-speech-, and word-specific details:
tl = vol.tokenlist() # Let's look at some words deeper into the book: # from 1000th to 1100th row, skipping by 15 [1000:1100:15] tl[1000:1100:15]
As before, the data is returned as a Pandas DataFrame. This time, there is much more information. Consider a single row:
Single row of tokenlist.
The columns in bold are an index. Unlike the typical one-dimensional index seen before, here there are four dimensions to the index: page, section, token, and pos. This row says that for the 24th page, in the body section (i.e. ignoring any words in the header or footer), the word ‘years’ occurs 1 time as an plural noun. The part-of-speech tag for a plural noun,
NNS, follows the Penn Treebank definition.
The “words” on the first page seems to be OCR errors for the cover of the book. The HTRC Feature Reader refers to “pages” as the scanned image of the volume, not the actual number printed on the page. This is why “page 1” for this example is the cover.
Tokenlists can be retrieved with arguments that combine information by certain dimensions, such as
case,
pos, or
page. For example,
case=False specified that “Jaguar” and “jaguar” should be counted together. You may also notice that, by default, only ‘body’ is returned, a default that can be overridden.
Look at the following list of commands: can you guess what the output will look like? Try for yourself and observe how the output changes.
vol.tokenlist(case=False)
vol.tokenlist(pos=False)
vol.tokenlist(pages=False, case=False, pos=False)
vol.tokenlist(section='header')
vol.tokenlist(section='group')
Details for these arguments are available in the code documentation for the Feature Reader.
Jupyter provides another convenience here. Documentation can be accessed within the notebook by adding a ‘?’ to the start of a piece of code. Try it with
?vol.tokenlist, or with other objects or variables.
Working with DataFrames
The Pandas DataFrame type returned by the HTRC Feature Reader is very malleable. To work with the tokenlist that you retrieved earlier, three skills are particularily valuable:
- Selecting subsets by a condition
- Slicing by named row index
- Grouping and aggregating
Selecting Subsets of a DataFrame by a Condition
Consider this example: I only want to look at tokens that occur more than a hundred times in the book.
Remembering that the table-like output from the HTRC Feature Reader is a Pandas DataFrame, the way to pursue this goal is to learn to filter and subset DataFrames. Knowing how to do so is important for working with just the data that you need.
To subset individual rows of a DataFrame, you can provide a series of True/False values to the DataFrame, formatted in square brackets. When True, the DataFrame returns that row; when False, the row is excluded from what is returned.
To see this in context, first load a basic tokenlist without parts-of-speech or individual pages:
tl_simple = vol.tokenlist(pos=False, pages=False) # .sample(5) returns five random words from the full result tl_simple.sample(5)
To select just the relevant tokens, we need to look at each row and evaluate whether it matches the criteria that “this token has a count greater than 100”. Let’s try to convert that requirement to code.
“This token has a count” means that we are concerned specifically with the ‘count’ column, which can be singled out from the
tl table with
tl['count']. “greater than 100” is formalized as
> 100. Putting it together, try the following and see what you get:
tl_simple['count'] > 100
It is a DataFrame of True/False values. Each value indicates whether the ‘count’ column in the corresponding row matches the criteria or not. We haven’t selected a subset yet, we simply asked a question and were told for each row when the question was true or false.
You may wonder why section and token are still seen, even though ‘count’ was selected. These are part of the DataFrame index, so they’re considered part of the information about that row rather than data in the row. You can convert the index to data columns with
reset_index(). In this lesson we will keep the index intact, though there are advanced cases where there are benefits to resetting it.
Armed with the True/False values of whether each token’s ‘count’ value is or isn’t greater than 100, we can give those values to
tl_simple in square brackets.
matches = tl_simple['count'] > 100 tl_simple[matches].sample(5)
You can move the comparison straight into the square brackets, the more conventional equivalent of the above:
tl_simple[tl_simple['count'] > 100].sample(5)
As might be expected, many of the tokens that occur very often are common words like “she” and “and”, as well as various punctuation.
Multiple conditions can be chained with
& (and) or
| (or), using regular brackets so that Python knows the order of operations. For example, words with a count greater than 150 and a count less than 200 are selected in this way:
tl_simple[(tl_simple['count'] > 150) & (tl_simple['count'] < 200)]
Slicing DataFrames
Above, subsets of the DataFrame were selected based on a matching criteria for columns. It is also possible to select a DataFrame subset by specifying the values of its index, a process called slicing. For example, you can ask, “give me all the verbs for pages 9-12”.
In the DataFrame returned by
vol.tokenlist(), page, section, token, and POS were part of the index (try the command
tl.index.names to confirm). One can think of an index as the margin content of an Excel spreadsheet: the letters along the top and numbers along the left side are the indices. A cell can be referred to as A1, A2, B1… In Pandas, however, you can name these, so instead of A, B, C, or 1,2,3, columns and rows can be referred to by more descriptive names. You can also have multiple levels, so you’re not bound by the two-dimensions of a table format. With a multi-indexed DataFrame, you can ask for
Page=24,section=Body, ....
One can think of an index as the margin notations in Excel (i.e. 1,2,3… and A,B,C,..), except it can be named and can have multiple levels.
Slicing a DataFrame against a labelled index is done using
DataFrame.loc[]. Try the following examples and see what is returned:
- Select information from page 17:
tl.loc[(17),]
- Select ‘body’ section of page 17:
tl.loc[(17, 'body'),]
- Select counts of the word ‘Anne’ in the ‘body’ section of page 17:
tl.loc[(17, 'body', 'Anne'),]
The levels of the index are specified in order, so in this case the first value refers to ‘page’, then ‘section’, and so on. To skip specifying anything for an index level – that is, to select everything for that level –
slice(None) can be used as a placeholder:
- Select counts of the word ‘Anne’ for all pages and all page sections
tl.loc[(slice(None), slice(None), "Anne"),]
Finally, it is possible to select multiple labels for a level of the index, with a list of labels (i.e.
['label1', 'label2']) or a sequence covering everything from one value to another (i.e.
slice(start, end)):
- Select pages 37, 38, and 52
tl.loc[([37, 38, 52]),]
- Select all pages from 37 to 40
tl.loc[(slice(37, 40)),]
- Select counts for ‘Anne’ or ‘Hilary’ from all pages
tl.loc[(slice(None), slice(None), ["Anne", "Hilary"]),]
The reason for the comma in
tl.loc[(...),]is because columns can be selected in the same way after the comma. Pandas DataFrames can have a multiple-level index for columns, but the HTRC Feature Reader does not use this.
Knowing how to slice, let’s try to find the word “CHAPTER” in this book, and compare where that shows up to the token-per-page pattern previously plotted.
The token list we previously set to
tl only included body text; to include headers and footers in a search for
CHAPTER we’ll grab a new tokenlist with
section='all' specified.
tl_all = vol.tokenlist(section='all') chapter_pages = tl_all.loc[(slice(None), slice(None), "CHAPTER"),] chapter_pages
Earlier, token counts were visualized using
tokens.plot(), a built-in function of DataFrames that uses the Matplotlib visualization library.
We can add to the earlier visualization by using Matplotlib directly. Try the following code in a new cell, which goes through every page number in the earlier search for ‘CHAPTER’ and adds a red vertical line at the place in the chart with
matplotlib.pyplot.axvline():
# Get just the page numbers from the search for "CHAPTER" page_numbers = chapter_pages.index.get_level_values('page') # Visualize the tokens-per-page from before tokens.plot() # Add vertical lines for pages with "CHAPTER" import matplotlib.pyplot as plt for page_number in page_numbers: plt.axvline(x=page_number, color='red')
Output graph.
Advanced: Though slicing with
locis more common when working with the index, it is possible to create a True/False list from an index to select rows as we did earlier. Here’s an advanced example that grabs the ‘token’ part of the index and, using the
isalpha()string method that Pandas provides, filters to fully alphabetical words.
token_idx = tl.index.get_level_values("token") tl[token_idx.str.isalpha()]
Readers familiar with regular expressions (see Understanding Regular Expressions by Doug Knox) can adapt this example for even more robust selection using the
contains()string method.
Sorting DataFrames
A DataFrame can be sorted with
DataFrame.sort_values(), specifying the column to sort by as the first argument. By default, sorting is done in ascending order:
tl_simple.sort_values('count').head()
Descending order is possible with the argument
ascending=False, which puts the most common tokens at the top. For example:
tl_simple.sort_values('count', ascending=False).head()
The most common tokens are ‘the’ and ‘and’, alongside punctuation.
Exercise: Try to retrieve the five most-common tokens used as a noun (‘NNP’) or a plural noun (‘NNS’) in the book. You will have to get a new tokenlist, without pages but with parts-of-speech, then slice by the criteria, sort, and output the first five rows. (Solution)
Grouping DataFrames
Up to this point, the token count DataFrames have been subsetted, but not modified from the way they were returned by the HTRC Feature Reader. There are many cases where one may want to perform aggregation or transformation based on subsets of data. To do this, Pandas supports the ‘split-apply-combine’ pattern (Wickham 2011).
Split-apply-combine refers to the process of dividing a dataset into groups (split), performing some activity for each of those groups (apply), and joining the new groups back together into a single DataFrame (combine).
Graph demonstrating Split-Apply-Combine.
Example of Split-Apply-Combine, averaging movie grosses by director.
Split-apply-combine processes are supported on DataFrames with
groupby(), which tells Pandas to split by some criteria. From there, it is possible to apply some change to each group individually, after which Pandas combines the affected groups into a single DataFrame again.
Try the following, can you tell what happens?
tl.groupby(level=["pos"]).sum()
The output is a count of how often each part-of-speech tag (“pos”) occurs in the entire book.
- Split with
groupby(): We took the token count dataframe that is set to
tland grouped by the part-of-speech (
pos) level of the index. This means that rather than thinking in terms of rows, Pandas is now thinking of the
tlDataFrame as a series of smaller groups, the groups selected by a common value for part of speech. So, all the personal pronouns (“PRP”) are in one group, and all the adverbs (“RB”) are in another, and so on.
- Apply with
sum(): These groups were sent to an apply function,
sum(). Sum is an aggregation function, so it sums all the information in the ‘count’ column for each group. For example, all the rows of data in the adverb group are summed up into a single count of all adverbs.
- Combine: The combine step is implicit: the DataFrame knows from the
groupbypattern to take everything that the apply function gives back (in the case of ‘sum’, just one row for every group) and stick it together.
sum() is one of many convenient functions built-in to Pandas. Other useful functions are
mean(),
count(),
max(). It is also possible to send your groups to any function that you write with
apply().
groupby can be used on data columns or an index. To run against an index, use
level=[index_level_name]as above. To group against columns, use
by=[column_name].
Below are some examples of grouping token counts.
- Find most common tokens in the entire volume (sorting by most to least occurrences)
tl.groupby(level="token").sum().sort_values("count", ascending=False)
- Count how many pages each token/pos combination occurs on
tl.groupby(level=["token", "pos"]).count()
Remember from earlier that certain information can be called by sending arguments to
vol.tokenlist(), so you don’t always have to do the grouping yourself.
With
sum, the data is being reduced: only one row is left for each group. It is also possible to ‘transform’ a group, where the same number of rows are returned. This is useful if processing is necessary based on the group statistics, such as percentages. Here is an advanced example of transformation, a TF*IDF function. TF*IDF weighs a token’s value to a document based on how common it is. In this case, it highlights words that are notable for a page but not the entire book.
from numpy import log def tfidf(x): return x * log(1+vol.page_count / x.count()) # Will take a few seconds to run, depending on your system idf_scores = tl.groupby(level=["token"]).transform(tfidf) idf_scores[1000:1100:30]
Compare the parts of the function given to
transform() with the equation:
N is the total number of pages. Document frequency, , is ‘how many pages (docs) does the word occur on?’ That is the
x.count(). Can you modify the above to use corpus frequency, which is ‘how many times does the word occur overall in the corpus (i.e. across all pages)?’ You’d want to add everything up.
More Features in the HTRC Extracted Features Dataset
So far we have mainly used token-counting features, accessed through
Volume.tokenlist(). The HTRC Extracted Features Dataset provides more features at the volume level. Here are other features that are available to Volume objects. Try them on
vol and see what the output is:
vol.line_counts(): How many vertically spaced lines of text, a measure related to the phyical format of the page.
vol.sentence_counts(): How many sentences of text: a measure related to the content on a page.
vol.empty_line_counts(): How many larger vertical spaces are there on the page between lines of text? In many cases, this can be used as a proxy for paragraph count. This is based on what software was used to OCR so there are inconsistencies: not all scans in the HathiTrust are OCR’d identically.
vol.begin_line_chars(),
vol.end_line_chars(): The count of different characters along the left-most and right-most sides of a page. This can tell you about what kind of page it is: for example, a table of contents might have a lot of numbers or roman numerals at the end of each line
Earlier, we saw that the number of words on a page gave some indication of whether it was a page of the story or a different kind of page (chapter, front matter, etc). We can see that line count is another contextual ‘hint’:
line_counts = vol.line_counts() plt.plot(line_counts)
Output graph.
The majority of pages have 20-25 lines, confirmable with a histogram:
plt.hist(line_counts). This is likely what a full page of text looks like in this book. A scholar trying to focus on patterns only in the text and comfortable missing a few short pages might choose to filter to just these pages.
Page-Level Features
If you open the raw dataset file for a HTRC EF volume on your computer, you may notice that features are provided for each page. While this lesson has focused on volumes, most of the features that we have seen can be accessed for a single page; e.g.
Page.tokenlist() instead of
Volume.tokenlist(). The methods to access the features are named the same, with the exception that
line_count,
empty_line_count, and
sentence_count are not pluralized.
Like iterating over
FeatureReader.volumes() to get Volume objects, it is possible to iterate across pages with
Volume.pages().
Next Steps
Now that you know the basics of the HTRC Feature Reader, you can learn more about the Extracted Features dataset. The Feature Reader home page contains a lesson similar to this one but for more advanced users (that’s you now!), and the code documentation gives exact information about what types of information can be called.
Underwood (2015) has released genre classifications of public-domain texts in the HTRC EF Dataset, comprised of fiction, poetry, and drama. Though many historians will be interested in other corners of the dataset, fiction is a good place to tinker with text mining ideas because of its expressiveness and relative format consistency.
Finally, the repository for the HTRC Feature Reader has advanced tutorial notebooks showing how to use the library further. One such tutorial shows how to derive ‘plot arcs’ for a text, a process popularized by Jockers (2015).
Plot Arc Example.
References
Boris Capitanu, Ted Underwood, Peter Organisciak, Timothy Cole, Maria Janina Sarol, J. Stephen Downie (2016). The HathiTrust Research Center Extracted Feature Dataset (1.0) [Dataset]. HathiTrust Research Center,.
Chris Forster. “A Walk Through the Metadata: Gender in the HathiTrust Dataset.” Blog..
Matthew L. Jockers (Feb 2015). “Revealing Sentiment and Plot Arcs with the Syuzhet Package”. Matthew L. Jockers. Blog..
Peter Organisciak, Loretta Auvil, J. Stephen Downie (2015). “Remembering books: A within-book topic mapping technique.” Digital Humanities 2015. Sydney, Australia.
Stéfan Sinclair & Geoffrey Rockwell (2016). “The Art of Literary Text Analysis.” Github.com. Commit b04bc18..
William J. Turkel and Adam Crymble (2012). “Counting Word Frequencies with Python”. The Programming Historian..
Ted Underwood (2014): Understanding Genre in a Collection of a Million Volumes, Interim Report. figshare..
Ted Underwood, Boris Capitanu, Peter Organisciak, Sayan Bhattacharyya, Loretta Auvil, Colleen Fallaw, J. Stephen Downie (2015). “Word Frequencies in English-Language Literature, 1700-1922” (0.2) [Dataset]. HathiTrust Research Center..
Hadley Wickham (2011). “The split-apply-combine strategy for data analysis”. Journal of Statistical Software, 40(1), 1-29.
Appendix: Downloading custom files via rsync
The full HTRC Extracted Features dataset is accessible using rsync, a Unix command line program for syncing files. It is already preinstalled on Linux or Mac OS. Windows users need to use rsync by downloading a program such as Cygwin, which provides a Unix-like command line environment in Windows.
To download all 4 TB comprising the EF dataset, you can use this command (be aware the full transfer will take a very long time):
rsync -rv data.analytics.hathitrust.org::features/ .
This command recurses (the
-r flag) through all the folders on the HTRC server, and syncs all the files to a location on your system; in this case the
. at the end means “the current folder”. The
-v flag means
--verbose, which tells rsync to show you more information.
It is possible to sync individual files by specifying a full file path. Files are organized in a PairTree structure, meaning that you can find an exact dataset file from a volume’s HathiTrust id. The HTRC Feature Reader has a tools and instructions for getting the path for a volume. A list of all file paths is available:
rsync -azv data.analytics.hathitrust.org::features/listing/htrc-ef-all-files.txt .
Finally, it is possible to download many files from a list. To try, we’ve put together lists for public-domain fiction, drama, and poetry (Underwood 2014). For example:
rsync -azv --files-from=fiction_paths.txt data.analytics.hathitrust.org::features/ .
Suggested Citation
Peter Organisciak and Boris Capitanu , "Text Mining in Python through the HTRC Feature Reader," Programming Historian, (2016-11-22), | http://programminghistorian.org/lessons/text-mining-with-extracted-features | CC-MAIN-2017-26 | en | refinedweb |
This is an automated email from the git hooks/post-receive script. It was generated because a ref change was pushed to the repository containing the project "GNU Automake".;a=commitdiff;h=16d8cb026a41d9344b1d0cf291b663fd48fa0f45 The branch, micro has been updated via 16d8cb026a41d9344b1d0cf291b663fd48fa0f45 (commit) via e5eb95ce956adc428b65414ebf28bb5b96d74b9f (commit) via 52e6404590f0a8824cf5f9522a2dc3151c2af9f3 (commit) via 073b1fe85068620eb6c06432b1be13c40394a177 (commit) via 9b156829b0ffac5e657b801b1f852608cfe8fc97 (commit) via 608d1a7908893b2896f5efd2a4ed22d7901262ed (commit) from 8a310a5fa5a908cf8771e44431e5743fb0e8b026 (commit) Those revisions listed above that are new to this repository have not appeared on any other notification email; so we list those revisions in full, below. - Log ----------------------------------------------------------------- commit 16d8cb026a41d9344b1d0cf291b663fd48fa0f45 Author: Stefano Lattarini <address@hidden> Date: Sat Nov 2 02:33:33 2013 +0000 cosmetics: fix typo in a user-facing message in tests * t/lex-header.sh: A "skip" message in this test, precisely. Signed-off-by: Stefano Lattarini <address@hidden> commit e5eb95ce956adc428b65414ebf28bb5b96d74b9f Merge: 8a310a5 9b15682 52e6404 Author: Stefano Lattarini <address@hidden> Date: Fri Nov 1 22:41:23 2013 +0000 Merge branches 'fix-pr14991' and 'fix-pr14891' into micro * fix-pr14991: distcheck: don't allow overriding of --prefix and --srcdir by the user tests: expose bug#14991 (relates to 'distcheck') * fix-pr14891: automake: account for perl hash order randomization tests: avoid use of intervals to capitalize letters commit 52e6404590f0a8824cf5f9522a2dc3151c2af9f3 Author: Stefano Lattarini <address@hidden> Date: Sun Jul 21 17:58:05 2013 +0100 automake: account for perl hash order randomization Try to explicitly order the keys of some perl hashes when looping on them to do sanity/correctness checks and possibly display warning messages; this should ensure a more reproducible output. Not really a big deal, but I prefer to keep the order of such output reproducible if possible. Issue revealed by spurious testsuite failures with perl 5.18, as reported in automake bug#14891. See also: <> <> * lib/Automake/Variable.pm (variables): Explicitly order the values of the returned Automake::Variable instances. (variables_dump): Simplify, using the knowledge that 'variables()' now sorts its output. * t/preproc-errmsg.sh: Adjust. Signed-off-by: Stefano Lattarini <address@hidden> commit 073b1fe85068620eb6c06432b1be13c40394a177 Author: Stefano Lattarini <address@hidden> Date: Sun Jul 21 17:15:38 2013 +0100 tests: avoid use of intervals to capitalize letters It was causing spurious failures with with Solaris 8 'tr'. See automake bug#14891. * t/test-extensions.sh: Adjust. Signed-off-by: Stefano Lattarini <address@hidden> commit 9b156829b0ffac5e657b801b1f852608cfe8fc97 Author: Stefano Lattarini <address@hidden> Date: Wed Oct 30 21:41:39 2013 +0000 distcheck: don't allow overriding of --prefix and --srcdir by the user Not through AM_DISTCHECK_FLAGS, nor through DISTCHECK_FLAGS. Apparently, some packages got in the habit of relaying all the options passed to the original ./configure invocation through to the configure invocations in "make distcheck". This was causing problems, because it also passed through the original --srcdir and --prefix options. Fixes: expose bug#14991 (relates to 'distcheck') * lib/am/distdir.am (distcheck): Pass the hard-coded --srcdir and --prefix options *after* both the developer-defined options in $(AM_DISTCHECK_FLAGS) and the user-defined options in $(DISTCHECK_FLAGS). * t/list-of-tests.mk (XFAIL_TESTS): Remove the now-passing test 'distcheck-no-destdist-or-srcdir-override.sh'. * doc/automake.texi (Checking the Distribution): Update. * NEWS: Likewise. Signed-off-by: Stefano Lattarini <address@hidden> commit 608d1a7908893b2896f5efd2a4ed22d7901262ed Author: Stefano Lattarini <address@hidden> Date: Wed Oct 30 21:02:14 2013 +0000 tests: expose bug#14991 (relates to 'distcheck') * t/distcheck-no-prefix-or-srcdir-override.sh: New, expose the bug. * t/list-of-tests.mk (handwritten_TESTS, XFAIL_TESTS): Add it. Signed-off-by: Stefano Lattarini <address@hidden> ----------------------------------------------------------------------- Summary of changes: NEWS | 7 +++ doc/automake.texi | 17 +++++-- lib/Automake/Variable.pm | 14 +++--- lib/am/distdir.am | 11 +++-- t/distcheck-no-prefix-or-srcdir-override.sh | 60 +++++++++++++++++++++++++++ t/lex-header.sh | 2 +- t/list-of-tests.mk | 1 + t/preproc-errmsg.sh | 4 +- t/test-extensions.sh | 2 +- 9 files changed, 98 insertions(+), 20 deletions(-) create mode 100644 t/distcheck-no-prefix-or-srcdir-override.sh diff --git a/NEWS b/NEWS index aaec7c0..614eba6 100644 --- a/NEWS +++ b/NEWS @@ -108,12 +108,19 @@. diff --git a/doc/automake.texi b/doc/automake.texi index 62728d4..cd33ad7 100644 --- a/doc/automake.texi +++ b/doc/automake.texi @@ -8556,11 +8556,18 @@ to supply additional flags to @command{configure}, define them in the @file{Makefile.am}. The user can still extend or override the flags provided there by defining the @code{DISTCHECK_CONFIGURE_FLAGS} variable, on the command line when invoking @command{make}. - -Still, developers are encouraged to strive to make their code buildable -without requiring any special configure option; thus, in general, you -shouldn't define @code{AM_DISTCHECK_CONFIGURE_FLAGS}. However, there -might be few scenarios in which the use of this variable is justified. address@hidden See automake bug#14991 for more details about how the following holds. +It's worth nothing that @command{make distcheck} needs complete control +over the @command{configure} options @option{--srcdir} and address@hidden, so those options cannot be overridden by address@hidden nor by address@hidden + +Also note that developers are encouraged to strive to make their code +buildable without requiring any special configure option; thus, in +general, you shouldn't define @code{AM_DISTCHECK_CONFIGURE_FLAGS}. +However, there might be few scenarios in which the use of this variable +is justified. GNU @command{m4} offers an example. GNU @command{m4} configures by default with its experimental and seldom used "changeword" feature disabled; so in its case it is useful to have @command{make distcheck} diff --git a/lib/Automake/Variable.pm b/lib/Automake/Variable.pm index f1559f5..4751563 100644 --- a/lib/Automake/Variable.pm +++ b/lib/Automake/Variable.pm @@ -317,21 +317,21 @@ use vars '%_variable_dict', '%_primary_dict'; sub variables (;$) { my ($suffix) = @_; + my @vars = (); if ($suffix) { if (exists $_primary_dict{$suffix}) { - return values %{$_primary_dict{$suffix}}; - } - else - { - return (); + @vars = values %{$_primary_dict{$suffix}}; } } else { - return values %_variable_dict; + @vars = values %_variable_dict; } + # The behaviour of the 'sort' built-in is undefined in scalar + # context, hence we need an ad-hoc handling for such context. + return wantarray ? sort { $a->name cmp $b->name } @vars : scalar @vars; } =item C<Automake::Variable::reset> @@ -1080,7 +1080,7 @@ For debugging. sub variables_dump () { my $text = "all variables:\n{\n"; - foreach my $var (sort { $a->name cmp $b->name } variables) + foreach my $var (variables()) { $text .= $var->dump; } diff --git a/lib/am/distdir.am b/lib/am/distdir.am index f354987..a8ad63c 100644 --- a/lib/am/distdir.am +++ b/lib/am/distdir.am @@ -452,13 +452,16 @@ distcheck: dist ## so be sure to 'cd' back to the original directory after this. && am__cwd=`pwd` \ && $(am__cd) $(distdir)/_build \ - && ../configure --srcdir=.. --prefix="$$dc_install_base" \ + && ../configure \ ?GETTEXT? --with-included-gettext \ -## Additional flags for configure. Keep this last in the configure -## invocation so the developer and user can override previous options, -## and let the user's flags take precedence over the developer's ones. +## Additional flags for configure. $(AM_DISTCHECK_CONFIGURE_FLAGS) \ $(DISTCHECK_CONFIGURE_FLAGS) \ +## At the moment, the code doesn't actually support changes in these --srcdir +## and --prefix values, so don't allow them to be overridden by the user or +## the developer. That used to be allowed, and caused issues in practice +## (in corner-case usages); see automake bug#14991. + --srcdir=.. --prefix="$$dc_install_base" \ && $(MAKE) $(AM_MAKEFLAGS) \ && $(MAKE) $(AM_MAKEFLAGS) dvi \ && $(MAKE) $(AM_MAKEFLAGS) check \ diff --git a/t/distcheck-no-prefix-or-srcdir-override.sh b/t/distcheck-no-prefix-or-srcdir-override.sh new file mode 100644 index 0000000..9b9a56f --- /dev/null +++ b/t/distcheck-no-prefix-or-srcdir-override.sh @@ -0,0 +1,60 @@ +#! /bin/sh +# Copyright (C) 2013 "make distcheck" overrides any --srcdir or --prefix flag +# (mistakenly) defined in $(AM_DISTCHECK_CONFIGURE_FLAGS) or +# $(DISTCHECK_CONFIGURE_FLAGS). See automake bug#14991. + +. test-init.sh + +echo AC_OUTPUT >> configure.ac + +orig_cwd=$(pwd); export orig_cwd + +cat > Makefile.am << 'END' +# configure should choke on non-absolute prefix or non-existent +# srcdir. We'll sanity-check that later. +AM_DISTCHECK_CONFIGURE_FLAGS = --srcdir am-src --prefix am-pfx +END + +# Same comments as above applies. +DISTCHECK_CONFIGURE_FLAGS='--srcdir user-src --prefix user-pfx' +export DISTCHECK_CONFIGURE_FLAGS + +$ACLOCAL +$AUTOMAKE +$AUTOCONF + +# Sanity check: configure should choke on non-absolute prefix +# or non-existent srcdir. +./configure --prefix foobar 2>stderr && { cat stderr >&2; exit 99; } +cat stderr >&2 +grep "expected an absolute directory name for --prefix" stderr || exit 99 +./configure --srcdir foobar 2>stderr && { cat stderr >&2; exit 99; } +cat stderr >&2 +grep "cannot find sources.* in foobar" stderr || exit 99 + +./configure +run_make -E -O distcheck +test ! -s stderr +# Sanity check: the flags have been actually seen. +$PERL -e 'undef $/; $_ = <>; s/ \\\n/ /g; print;' <stdout >t +grep '/configure .* --srcdir am-src' t || exit 99 +grep '/configure .* --prefix am-pfx' t || exit 99 +grep '/configure .* --srcdir user-src' t || exit 99 +grep '/configure .* --prefix user-pfx' t || exit 99 + +: diff --git a/t/lex-header.sh b/t/lex-header.sh index 0789af4..1ba81dd 100644 --- a/t/lex-header.sh +++ b/t/lex-header.sh @@ -24,7 +24,7 @@ required='cc flex' # older flex versions don't support is (see automake bug#11524 and # bug#12836). Skip this test if such an old flex version is detected. $LEX --help | grep '.*--header-file' \ - || skip_ "flex doesn't support the --header-file' option" + || skip_ "flex doesn't support the '--header-file' option" cat >> configure.ac << 'END' AC_PROG_CC diff --git a/t/list-of-tests.mk b/t/list-of-tests.mk index 9069b08..75f303a 100644 --- a/t/list-of-tests.mk +++ b/t/list-of-tests.mk @@ -422,6 +422,7 @@ t/distcheck-hook2.sh \ t/distcheck-writable-srcdir.sh \ t/distcheck-missing-m4.sh \ t/distcheck-outdated-m4.sh \ +t/distcheck-no-prefix-or-srcdir-override.sh \ t/distcheck-override-infodir.sh \ t/distcheck-pr9579.sh \ t/distcheck-pr10470.sh \ diff --git a/t/preproc-errmsg.sh b/t/preproc-errmsg.sh index 704562d..87bcf81 100644 --- a/t/preproc-errmsg.sh +++ b/t/preproc-errmsg.sh @@ -58,11 +58,11 @@ Makefile.am:2: 'sub/local.mk' included from here sub/local.mk:3: 'sub-two.a' is not a standard library name sub/local.mk:3: did you mean 'libsub-two.a'? Makefile.am:2: 'sub/local.mk' included from here -Makefile.am:1: variable 'x1_SOURCES' is defined but no program or -Makefile.am:1: library has 'x1' as canonical name (possible typo) sub/local.mk:4: variable 'sub_x2_SOURCES' is defined but no program or sub/local.mk:4: library has 'sub_x2' as canonical name (possible typo) Makefile.am:2: 'sub/local.mk' included from here +Makefile.am:1: variable 'x1_SOURCES' is defined but no program or +Makefile.am:1: library has 'x1' as canonical name (possible typo) END # We need to break these substitutions into multiple sed invocations diff --git a/t/test-extensions.sh b/t/test-extensions.sh index 0700991..ca7c5ec 100644 --- a/t/test-extensions.sh +++ b/t/test-extensions.sh @@ -39,7 +39,7 @@ $AUTOMAKE -a grep -i 'log' Makefile.in # For debugging. for lc in $valid_extensions; do - uc=$(echo $lc | tr '[a-z]' '[A-Z]') + uc=$(echo $lc | tr abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ) $FGREP "\$(${uc}_LOG_COMPILER)" Makefile.in grep "^${uc}_LOG_COMPILE =" Makefile.in grep "^\.${lc}\.log:" Makefile.in hooks/post-receive -- GNU Automake | https://lists.gnu.org/archive/html/automake-commit/2013-11/msg00000.html | CC-MAIN-2022-27 | en | refinedweb |
PptxGenJSPptxGenJS
Create JavaScript PowerPoint PresentationsCreate JavaScript PowerPoint Presentations
Table of ContentsTable of Contents
- Table of Contents
- Introduction
- Features
- Live Demos
- Installation
- Documentation
- Library Ports
- Issues / Suggestions
- Need Help?
- Contributors
- Sponsor Us
- License
IntroductionIntroduction
This library creates Open Office XML (OOXML) Presentations which are compatible with Microsoft PowerPoint, Apple Keynote, and other applications.
FeaturesFeatures 75 slides of features)
Export Your Way
- Exports files direct to client browsers with proper MIME-type
- Other export formats available: base64, blob, stream, etc.
- Presentation compression options and more
HTML to PowerPoint
- Includes powerful HTML-to-PowerPoint feature to transform HTML tables into presentations with a single line of code
Live DemosLive Demos
Visit the demos page to create a simple presentation to see how easy it is to use pptxgenjs, or check out the complete demo which showcases every available feature.
InstallationInstallation
CDNCDN
Bundle: Modern Browsers and IE11
<script src=""></script>
Min files: Modern Browsers
<script src=""></script> <script src=""></script>
DownloadDownload
Bundle: Modern Browsers
- Use the bundle for IE11 support
<script src="PptxGenJS/dist/pptxgen.bundle.js"></script>
Min files: Modern Browsers
<script src="PptxGenJS/libs/jszip.min.js"></script> <script src="PptxGenJS/dist/pptxgen.min.js"></script>
NpmNpm
npm install pptxgenjs --save
YarnYarn
yarn add pptxgenjs
Additional BuildsAdditional Builds
- CommonJS:
dist/pptxgen.cjs.js
- ES Module:
dist/pptxgen.es.js
DocumentationDocumentation
Quick Start GuideQuick Start Guide
PptxGenJS PowerPoint presentations are created via JavaScript by following 4 basic steps:
Angular/React, ES6, TypeScriptAngular/React, ES6, TypeScript
import pptxgen from "pptxgenjs"; // 1. Create a new Presentation let pres = new pptxgen(); //();
Script/Web BrowserScript/Web Browser
// 1. Create a new Presentation let pres = new PptxGenJS(); //();
That's really all there is to it!
Library APILibrary API
Full documentation and code examples are available
- Creating a Presentation
- Presentation Options
- Adding a Slide
- Slide Options
- Saving a Presentation
- Master Slides
- Adding Charts
- Adding Images
- Adding Media
- Adding Shapes
- Adding Tables
- Adding Text
- Speaker Notes
- Using Scheme Colors
- Integration with Other Libraries
HTML-to-PowerPoint FeatureHTML-to-PowerPoint Feature
Easily convert HTML tables to PowerPoint presentations in a single call.
let pptx = new PptxGenJS(); pptx.tableToSlides("tableElementId"); pptx.writeFile({ fileName: "html2pptx-demo.pptx" });
Learn more:
Library PortsLibrary Ports
React: react-pptx - thanks to Joonas!
Issues / SuggestionsIssues / Suggestions
Please file issues or suggestions on the issues page on github, or even better, submit a pull request. Feedback is always welcome!
When reporting issues, please include a code snippet or a link demonstrating the problem. Here is a small jsFiddle that is already configured and uses the latest PptxGenJS code.
Need Help?Need Help?
Sometimes implementing a new library can be a difficult task and the slightest mistake will keep something from working. We've all been there!
If you are having issues getting a presentation to generate, check out the code in the
demos directory. There
are demos for both client browsers, node and react that contain working examples of every available library feature.
- Use a pre-configured jsFiddle to test with: PptxGenJS Fiddle
- View questions tagged
PptxGenJSon StackOverflow. If you can't find your question, ask it yourself - be sure to tag it
PptxGenJS.
ContributorsContributors
Thank you to everyone for the issues, contributions and suggestions!
Special Thanks:
- Dzmitry Dulko - Getting the project published on NPM
- Michal Kacerovský - New Master Slide Layouts and Chart expertise
- Connor Bowman - Adding Placeholders
- Reima Frgos - Multiple chart and general functionality patches
- Matt King - Chart expertise
- Mike Wilcox - Chart expertise
- Joonas - React port
PowerPoint shape definitions and some XML code via Officegen Project
Sponsor UsSponsor Us
If you find this library useful, please consider sponsoring us through a donation
LicenseLicense | https://www.npmjs.com/package/pptxgenjs | CC-MAIN-2022-27 | en | refinedweb |
Edge computing describes the movement of computation away from cloud data centers so that it can be closer to instruments, sensors and actuators where it will be run on “small” embedded computers or nearby “micro-datacenters”. The primary reason to do this is to avoid the network latency in cases where responding to a local event is time critical. This is clearly the case for robots such as autonomous vehicles, but it is also true of controlling many scientific or industrial apparatuses. In other cases, privacy concerns can prohibit sending the data over an external network.
We have now entered the age where advances in machine learning has made it possible to infer much more knowledge from a collection of the sensors than was possible a decade ago. The question we address here is how much deep computational analysis can be moved to the edge and how much of it must remain in the cloud where greater computational resources are available.
The cloud has been where the tech companies have stored and analyze data. These same tech companies, in partnership with the academic research community, have used that data to drive a revolution in machine learning. The result has been amazing advances in natural language translation, voice recognition, image analysis and smart digital assistants like Seri, Cortona and Alexa. Our phones and smart speakers like Amazon Echo operate in close connection with the cloud. This is clearly the case when the user’s query requires a back-end search engine or database, but it is also true of the speech understanding task. In the case of Amazon’s Echo, the keyword “Alexa” starts a recording and the recorded message is sent to the Amazon cloud for speech recognition and semantic analysis. Google cloud, AWS, Azure, Alibaba, Tencent, Baidu and other public clouds all have on-line machine learning services that can be accessed via APIs from client devises.
While the cloud business is growing and maturing at an increasingly rapid rate, edge computing has emerged as a very hot topic. There now are two annual research conferences on the subject: the IEEE Service Society International conference on Edge computing and the ACM IEEE Symposium on Edge computing. Mahadev Satyanarayanan from CMU, in a keynote at the 2017 ACM IEEE Symposium and in the article “The Emergence of Edge Computing” IEEE Computer, Vol. 50, No. 1, January 2017, argues very strongly in favor of a concept called a cloudlet which is a server system very near or collocated with edge devices under its control. He observes that applications like augmented reality require real-time data analysis and feedback to be usable. For example, the Microsoft Hololens mixed reality system integrates a powerful 32bit Intel processor with a special graphics and sensor processor. Charlie Catlett and Peter Beckman from Argonne National Lab have created a very powerful Edge computing platform called Waggle (as part of the Array of Things project) that consists of a custom system management board for keep-alive services and a powerful ODROID multicore processor and a package of instruments that measure Carbon Monoxide, Hydrogen Sulphide, Nitrogen Dioxide, Ozone, Sulfur Dioxide, Air Particles, Physical Shock/Vibration, Magnetic Field, Infrared Light, Ultraviolet Intensity, RMS Sound Level and a video camera. For privacy reasons the Waggle vision processing must be done completely on the device so that no personal identifying information goes over the network.
Real time computer vision tasks are among the AI challenges that are frequently needed at the edge. The specific tasks range in complexity from simple object tracking to face and object recognition. In addition to Hololens and Waggle there are several other small platforms designed to support computer vision at the edge. As shown in Figure 1, these include the humble RaspberryPi with camera, the Google vision kit and the AWS DeepLens.
Figure 1. From the left is a RaspberryPi with an attached camera, ANL Waggle array, the Google AIY vision kit and the AWS DeepLens.
The Pi system is, by far, the least capable with a quad core ARMv7 processor and 1 GB memory. The Google vision kit has a Raspberry Pi Zero W (single core ARMv7 with 512MB memory) but the real power lies in the Google VisionBonnet which uses a version of the Movidius Myriad 2 vision processing chip which has 12 vector processing units and a dual core risc cpu. The VisionBonnet runs TensorFlow from a collection of pretrained models. DeepLens has a 4 megapixel camera, 8 GB memory, 16 GB storage and an intel Atom process and Gen9 graphics engine which supports model built with Amazon SageMaker that is pre-configured to run TensorFlow and Apache MXNet.
As we stated above many applications that run on the edge many must rely on the cloud if only for storing data to be analyzed off-line. Others, such as many of our phone apps and smart speakers, use the cloud for backend computation and search. It may be helpful to think of the computational capability of edge devices and the cloud as a single continuum of computational space and an application as an entity that has components distributed over both ends. In fact, depending upon the circumstances parts of the computation may migrate from the cloud to the device or back to optimize performance. As illustrated in Figure 2, AWS Greengrass accomplishes some of this by allow you to move Lambda “serverless” functions from the cloud to the device to form a network of long running functions that can interact with instruments and securely invoke AWS services.
Figure 2. AWS Greengrass allows us to push lambda functions from the cloud to the device and for these functions to communicate seamlessly with the cloud and other functions in other devices. (Figure from )
The Google vision kit is not available yet and DeepLens will ship later in the spring and we will review them when they arrive. Here we will focus on a few simple experiments with the Raspberry Pi and return to these other devices in a later post.
Deep Learning Models and the Raspberry PI 3.
In a previous post we looked at several computer vision tasks that used the Pi in collaboration with cloud services. These included simple object tracking and doing optical character recognition and search for information about book covers seen in an image. In the following paragraphs we will focus on the more complex task of recognizing objects in images and we will try to understand the limitations and advantages of using the cloud as the backend computational resource.
As a benchmark for our experiments we use the Apache MXNet deep learning kit with a model based on the resnet 152-layer neural network that was trained on a collection of over a 10 million images and over 11 thousand labels. We have packaged this MXNet with this model into a Docker container dbgannon/mxnet which we have used for these experiments. (the details of the python code in the container are in the appendix to this blog.
Note: If you want to run this container and if you have dockerand Jupyter installed you can easily test the model with pictures of your own. Just download the jupyter notebook send-to-mxnet-container.ipynb and follow the instructions there.
How fast can we do the image analysis (in image frames per second)?
Running the full resnet-152 model on an installed version of MXNet on more capable machines (Mac mini and the AWS Deeplearning AMI c5.4xlarge, no GPU) yields an average performance of about 0.7 frame/sec. Doing the same experiment on the same machines, but using the docker container and a local version of the Jupyter notebook driver we see the performance degrade a bit to an average of about 0.69 frame/sec (on a benchmark set of images we described in the next paragraph). With a GPU one should be able to go about 10 times faster.
For the timing tests we used a set of 20 images from the internet that we grabbed and reduced so they average about 25KB in size. These are stored in the Edge device. Loading one of these images takes about the same amount of time as grabbing a frame from the camera and reducing it to the same size. Two of images from the benchmark set and the analysis output is shown in figure 3 below.
Figure 3. Two of the sample images together with the output analysis and call time.
How can we go faster on the Pi 3? We are also able to install MXNet on the Pi 3, but it is a non-trivial task as you must build it from the source. Deployment details are here, however, the resnet 152 model is too large for the 1MB memory of the Pi 3, so we need to find another approach.
The obvious answer is to use a much smaller model such as the Inception 21 layer network which has a model database of only 23MB (vs 310MB for resnet 152), but it has only 1000 classes vs the 11,000 of the full rennet 152. We installed Tensorflow on the Pi3. (there are excellent examples of using it for image analysis and recognition provided by Matthew Rubashkin of Silicon Valley Data Science.) We ran the Tensorflow Inception_2015_12_05 which fit in memory on the Pi. Unfortunately, it was only able to reach 0.48 frames per second on the same image set described above.
To solve the, we need to go to the cloud. In a manner similar to the Greengrass model, we will have the Pi3 sample the camera and downsize the image and send it to the cloud for execution. To test it we ran the MXNet container on a VM in AWS and pointed the Pi camera at various scenes. The results are shown in Figure 4.
Figure 4. The result for the toy dinosaur result is as it is logged into the AWS DynamoDB. With the bottom two images show only the description string.
The output of the model gives us likelihood of various labels. In a rather simple minded effort to be more conversational we translate the likelihood results as follows. If a label X is more than 75% likely the container returns a value of “This certainly looks like a X”. If the likely hood is less than less than 35% it returns “I think this is an X, but I am not sure” (the code is below). We look at the top 5 likely labels and they are listed in order.
The Pi device pushes jpeg images to AWS S3 as a blob. It then pushes the metadata about the image (a blob name and time stamp) to the AWS Simple Queue Service. We modified the MXNet container to wait for something to land in the queue. When this happens, it takes the image meta data and pulls the image from S3 and does the analysis and finally stores the result in an AWS DynamoDB table.
However we can only go as fast as we can push the images and metadata to the cloud from the Pi device. With repeated tries we can achieve 6 frames/sec. To speed up the analysis to match this input stream we spun up a set of analyzers using the AWS Elastic Container Service (ECS). The final configuration is shown in Figure 5.
Figure 5. The full Pi 3 to Cloud image recognition architecture. (The test dataset is shown in the tiny pictures in S3)
To conduct the experiments, we included a time stamp from the edge device with the image metadata. When the MXNet container puts the result in the DynamoDB table it includes another timestamp. This allows us to compute the total time from image capture to result storage for each image in the stream. If the device sends the entire collection as fast as possible then the difference between the earliest recorded time stamp and the most recent gives us a good measure of how long it takes to complete the entire group.
While the Pi device was able to fill S3 and the queue at 6 frames a second having only one MXNet container instance yielded the result that the total throughput was only about 0.4 frames/sec. The servers used to host the container are relatively small. However, using the ECS it is trivial to boost the number of servers and instances. Because of the size of the container instance is so large only one instance can fit on each of the 8 GB servers. However as shown in Figure 6 we were able to match the device sending throughput with 16 servers/instances. At this point messages in the queue were being consumed as fast as they were arriving. Using a more powerful device (a laptop with a core I7 processor) to send the images we were about to boost the input end up to just over 7 frames per second and that was matched with 20 servers/instances.
Figure 6. Throughput in Frames/second measured from the Pi device to the final results in the DynamoDB instance. In the 20 instance case, a faster core I7 laptop was used to send the images.
Final Thoughts
This exercise does not fully explore the utility of AI method deployed at the edge or between the edge and the cloud. Clearly this type of full object recognition at real-time frame rates is only possible if the edge device has sophisticated accelerator hardware. On the other hand, there are many simple machine learning models that can be used for more limited applications. Object motion tracking is one good example. This can be done in real-time. This is typically done by comparing a frame to a previous one and looking for the differences. Suppose you need to invoke fire suppression when a fire is detected. It would not be had to build a very simple network that can recognize fire but not simple movement of ordinary objects. Such a network could be invoked whenever movement is detected and if it is fire the appropriate signal can be issued.
Face detection and recognition is possible with the right camera. This was done with the Microsoft Xbox-1 and it is now part of the Apple IPhone X.
There are, of course, limits to how much we want our devices to see and analyze what we are doing. On the other hand it is clear that advances in automated scene analysis and “understanding” are moving very fast. Driverless cars are here now and will be commonplace in a few years. Relatively “smart” robots of various types are under development. It is essential that we understand how the role of these machines in society can benefit the human condition along the lines of the open letter from many AI experts.
Notes about the MXNet container.
The code is based on a standard example of using MXNet to load a model and invoke it. To initialize the model, the container first loads the model files into the root file system. That part is not show here. The files are full-resnet-152-0000.parms (310MB), full-resnet-152-symbols.json (200KB) and full-synset.txt (300KB) . Once loaded into into memory the full network is well over 2GB and the container requires over 4GB.
Following the load, the model is initialized.
import mxnet as mx # 1) Load the pretrained model data with open (' full-synset.txt ','r ') as f: synsets = [l.rstrip() for l in f] sym, arg _params , aux_pa ram s = mx . model .load _checkpoint( 'full-resnet-152' ,0) # 2) Build a model from the data mod = mx.mod .Module (symbol =sym , context =mx. gpu ()) mod. bind ( for_training =False, data_shapes=[( 'data ',(1,3,224,224))]) mod. set_params ( arg_params , aux_params )
The function used for the prediction is very standard. It takes three parameters: the image object, the model and synsnet (the picture labels). The image is modified to fit the network and then fed to the forward end. The output is a Numpy array which is sorted and the top five results are returned.
def predict(img, mod, synsets): img = cv2.resize(img, (224, 224)) img = np.swapaxes(img, 0, 2) img = np.swapaxes(img, 1, 2) img = img[np.newaxis, :] mod.forward(Batch([mx.nd.array(img)])) prob = mod.get_outputs()[0].asnumpy() prob = np.squeeze(prob) a = np.argsort(prob)[::-1] result = [] for i in a[0:5]: result.append( [ prob[i], synsets[i][synsets[i].find(' '):]]) return result
The container runs as a webservice on port 8050 using the Python “Bottle” package. When it receives a web POST message to “call_predict” it invokes the call_predict function below. the image has been passed as a jpeg attachment with is extracted with the aid of the request package. It is saved in a temporary file and then read by the OpenCV read function. Unfortunately there was no way to avoid the save followed by read because of limitations to the API. However we measured the cost of this step and it was less than 1% of the total time of the invocation.
The result of the predict function is a two dimensional array with each row consisting of a probability and the associated label. The call returns the most likely labels as shown below.
@route('/call_predict', method='POST') def call_predict(): t0 = time.time() result = '' request.files.get('file').save('yyyy.jpg', 'wb') image = cv2.cvtColor(cv2.imread('yyyy.jpg'), cv2.COLOR_BGR2RGB) t1 = time.time() result = predict(image, mod, synsets) t2 = time.time() answer = "i think this is a "+result[0][1]+" or it may be a "+result[1][1] if result[0][0] < 0.3: answer = answer+ ", but i am not sure about this." if result[0][0] > 0.6: answer = "I see a "+result[0][1]+"." if result[0][0] > 0.75: answer = "This certainly looks like a "+result[0][1]+"." answer = answer + " \n total-call-time="+str(t2-t0) return(answer) run(host='0.0.0.0', port=8050)
The version of the MXNet container used in the ESC experiment replace the Bottle code and call_predict with loop that polls the message queue, pulls a blob from S3 and pushes the result to DynamoDB | https://esciencegroup.com/2017/12/11/ | CC-MAIN-2022-27 | en | refinedweb |
I was trying to find out how to create multiple copies of the same image but insert a different line of text from a bank of lyrics I had created. I then found that using Photoshop you can quite easily manipulate your files with Python, saving them locally in a matter of minutes.
This could be used to generate copies of social media adverts/snippets, creating artwork with random features of colour, the API is quite expansive with what you can change as you are essentially using the Photoshop client but manipulating it with code and this guide will explain how...
Pre-requisites
You will need to following tools to be installed: Python (3.9), Adobe Photoshop, and Pip package (win32)
I will assume that the user has Python and Photoshop installed locally and requires no guidance on this, however, a quick Google search will help you there. Furthermore, if you don’t have a Creative Cloud subscription - quite often Adobe gives free trials you can use, as I did.
Once Python is installed within a terminal window with administrator privileges type in the command:
pip install -U pypiwin32
- NOTE: There is multiple different methods to install packages
Within the editor
Open up an editor with a .py file and to manipulate the PSD file you need to use the win32 package to open Photoshop and locate the folder/file you are using by the following:
import win32com.client // Open Adobe Photoshop App programmatically and locate PSD file psApp = win32com.client.Dispatch("Photoshop.Application") psApp.Open("C:\\Users\\olive\\PATH-TO-PSD-FILE.psd") // Import layers layers = psApp.Application.ActiveDocument
The next lines of code can be changed to your specification but here we will be using a text file to create an image, with each line of text replacing a textbox field within the PSD file. This is done using a for loop taking each line in the .txt file.
// Open text file, ENCODING may need changing/removed file1 = open("C:\\Users\\olive\\FULL_PATH_TO_FILE.txt", encoding="utf8") count=0 // Will take each line of the text file above for line in file1: // Grab the layer by name for what you want to manipulate in PS quote_layer = layers.ArtLayers["Text-Line"] // Inserts the text line into the image quote_layer.TextItem.contents = line.strip() // Export as JPG options = win32com.client.Dispatch('Photoshop.ExportOptionsSaveForWeb') options.Format = 6 options.Quality = 100 // Export the file to system fileName=("C:\\Users\\olive\\OUTPUT_FOLDER_PATH\\FILE_NAME_" + str(count) +".jpg") layers.Export(ExportIn=fileName, ExportAs=2, Options=options) // Create Console log to see progress print("Image saved no:" + str(count)) count+=1
You will notice that on running the script photoshop will open and begin manipulating the layer called ‘Text-Line’ in this case, then save the images as jpegs after each iteration to your output called fileName. You can just let this loop run through until completion!
More Information
What I have done is manipulate one layers text context, but if you wish to build on this, you can read the documentation on using Photoshop in scripting here. | https://ollierowles.hashnode.dev/using-python-scripts-to-edit-photoshop-layers | CC-MAIN-2022-27 | en | refinedweb |
Last Element Remaining by Deleting the Two Largest Elements and Replacing them with Their Absolute Difference If They are Unequal
Introduction
Interviews after Interviews,, we see questions related to priority queue being asked. So having a good grip over the priority queue surely gives us an upper hand over the rest of the competition. But you don’t need to worry about any of it because Ninjas are here with you. Today we will see one such question named ‘Last element remaining by deleting the two largest elements and replacing them with their absolute difference if they are unequal’. Now let’s see the problem statement in detail.
Understanding the Problem
We have been given an array, and our task is to pick the two largest elements in the array and remove them. Now, if these elements are unequal, we will insert the absolute difference of the elements into the array. We will keep performing this until the array has 1 or no elements left in it. If there is only one element left in the array, then we will print that. Otherwise, we will print ‘-1’.
Let’s understand the problem better by the following example.
ARR = {1, 2, 3, 4}
Explanation
Let’s understand this step by step:
- Initially, 3 and 4 are the two largest elements in the array, so we will take them out, and also, as they are not equal, we will insert their absolute difference (4 - 3) in the array. So now array becomes {1, 2, 1}
- Now, 1, 2 are the two largest elements. They are not equal. Thus, we will insert their absolute difference (2 - 1) in the array. So now the array becomes {1, 1}.
- Now, 1, 1 are the two largest elements of the array. Now, both of them are equal. Thus, we did not insert anything in the array.
- Now the size of the array becomes 0. Thus, we will print -1.
Intuition
As we have to regularly maintain the sorted array and have to pick the top two largest elements from it. The direction in which our mind first goes is towards the Priority queue. The idea here is to use a max priority queue. We will first insert all the elements in the priority queue. Then we will keep performing the following operations till the size of the queue becomes 1 or 0:
- Take two elements at the top of the queue. Pop them.
- If they are not equal, then we will insert their absolute difference. Else, we will continue.
Now, if the size of the queue is 0, we will print ‘-1’. Else if its size is one, then we will print that single element present in the queue.
Things will become much clearer from the code.
Code
#include <iostream> #include <vector> #include <queue> using namespace std; // Function to reduce the array and print the remaining element. int reduceArray(vector<int> &arr) { priority_queue<int> maxPq; // Inserting elements of array in priority_queue. for (int i = 0; i < arr.size(); i++) maxPq.push(arr[i]); // Looping through elements. while (maxPq.size() > 1) { // Remove largest element. int maxEle = maxPq.top(); maxPq.pop(); // Remove 2nd largest element. int secondMaxEle = maxPq.top(); maxPq.pop(); // If these are not equal. if (maxEle != secondMaxEle) { // Pushing into queue. maxPq.push((maxEle - secondMaxEle)); } } // If only one element is there in the heap. if (maxPq.size() == 1) cout << maxPq.top(); else cout << "-1"; } int main() { // Taking user input. int n; cin >> n; vector<int> arr(n, 0); for (int i = 0; i < n; i++) cin >> arr[i]; // Calling function 'reduceArray()'. reduceArray(arr); return 0; }
Input
4 1 2 3 4
Output
-1
Time Complexity
O(N * log N), where ‘N’ is the length of the array.
As we are inserting ‘N’ elements in the priority queue and each insertion costs us O(log N) time thus ‘N’ insertions will cost O(N * log N). After insertion we are just looping through all the elements in the queue that will cost us O(N) time. Thus the overall complexity is O(N) + O(N * log N) ~ O(N * log N).
Space Complexity
O(N), where ‘N’ is the length of the array.
As we are using a priority queue to store the elements in the queue and as there are ‘N’ elements, extra space of O(N) will be required.
Key Takeaways
We saw how we could solve the problem, ‘last element remaining by deleting the two largest elements and replacing them with their absolute difference if they are unequal’ with the help of a priority queue. We first inserted all the elements in the queue and then, according to the question popped the top two elements. For more such interesting questions, move over to our industry-leading practice platform CodeStudio to practice top problems and many more. Till then, Happy Coding! | https://www.codingninjas.com/codestudio/library/last-element-remaining-by-deleting-the-two-largest-elements-and-replacing-them-with-their-absolute-difference-if-they-are-unequal | CC-MAIN-2022-27 | en | refinedweb |
Interoperability
XAP offers interoperability between documents and POJOs via the space - it is possible to write POJOs and read them back as documents, and vice versa. This is usually useful in scenarios requiring reading and/or manipulating POJO objects without loading the concrete java classes.
In previous releases the
ExternalEntry class was used to achieve this functionality. Starting with 8.0, the
SpaceDocument class should be used to accomplish these needs in a simpler and safer manner, whereas
ExternalEntry has been deprecated and should no longer be used.
Requirements
When working with documents, the user is in charge of creating and registering the space type descriptor manually before interacting with the document types. When working with POJOs, the system implicitly generates a space type descriptor for the POJO’s class using annotations or
gs.xml files when the class is used for the first time. In order to inter-operate, the same type descriptor should be used for both POJOs and documents.
If the POJO’s class is in the application’s classpath, or the POJO is already registered in the space, there’s no need to register it again - the application will retrieve it automatically when it’s used for the first time. For example:
// Create a document template using the POJO class name: SpaceDocument template = new SpaceDocument(MyPojo.class.getName()); // Count all entries matching the template: int count = gigaSpace.count(template);
If the POJO’s class is not available in the classpath or in the data grid, the application will throw an exception indicating that there POJO settings and maintain them if the POJO changes.
Query Result Type
When no interoperability is involved this is a trivial matter - Querying a POJO type returns POJOs, querying a document type returns documents.
When we want to mix and match, we need semantics to determine to query result type - POJO or document.
Template Query
Template query result types are determined by the template class - if the template is an instance of a
SpaceDocument, the results will be documents, otherwise they will be POJOs.
For example:
// Read all product entries as POJOs: Product[] objects = gigaSpace.readMultiple(new Product(), Integer.MAX_VALUE); // Read all product entries as Documents: SpaceDocument[] documents = gigaSpace.readMultiple( new SpaceDocument(Product.class.getName()), Integer.MAX_VALUE);
SQL Query
The
SQLQuery class has been enhanced with a
QueryResultType parameter. The following options are available:
OBJECT- Return java Object(s) (POJO).
DOCUMENT- Return space document(s).
DEFAULT- If the type is registered with a concrete java class, return an Object. Otherwise, return a document. This is the default behavior.
For example:
// Read a POJO using an SQL query - same as always: Product pojo = gigaSpace.read( new SQLQuery<Product>(Product.class, "name='Dynamite'")); // Read a document using an SQLQuery when there's no // compatible POJO - no need to specify query result type: SpaceDocument document = gigaSpace.read( new SQLQuery<SpaceDocument>("Product", "name='Dynamite'")); // Read a documnet using an SQLQuery when there is a // compatible POJO - explicitly specify query result type: SpaceDocument document = gigaSpace.read( new SQLQuery<SpaceDocument>(Product.class.getName(), "name='Dynamite'", QueryResultType.DOCUMENT));
This strategy both preserves backwards compatibility and simplifies non-interoperability scenarios, which are more common than interoperability scenarios.
ID Based Query
In order to support ID queries for documents, the
IdQuery class has been introduced, which encapsulates the type, id, routing and a
QueryResultType. New
GigaSpace signatures have been added for
readById,
readIfExistsById,
takeById,
takeIfExistsById. The result type is determined by the
QueryResultType, similar to
SQLQuery.
For example:
// Read a POJO by id - same as always: Product pojo = gigaSpace.readById(new IdQuery<Product>(Product.class, 7)); // Read a document by id when there's no // compatible POJO - no need to specify query result type: SpaceDocument document = gigaSpace.readById( new IdQuery<SpaceDocument>("Product", 7)); // Read a document by id when there is // a compatible POJO - explicitly specify query result type: SpaceDocument document = gigaSpace.readById( new IdQuery<SpaceDocument>(Product.class.getName(), 7, QueryResultType.DOCUMENT));
Respectively, to support multiple ids queries,
IdsQuery was also introduced, with new signatures for
readByIds and
takeByIds. For example:
Object[] ids = new Object[] {7, 8, 9}; // Read POJOs by ids - same as always: Product[] pojos = gigaSpace.readByIds( new IdsQuery<Product>(Product.class, ids)).getResultsArray(); // Read documents by ids when there's no // compatible POJO - no need to specify query result type: SpaceDocument[] documents = gigaSpace.readByIds( new IdsQuery<SpaceDocument>("Product", ids)).getResultsArray(); // Read documents by ids when there is a // compatible POJO - explicitly specify query result type: SpaceDocument[] documents = gigaSpace.readByIds( new IdsQuery<SpaceDocument>(Product.class.getName(), ids, QueryResultType.DOCUMENT)).getResultsArray();
The original
readById (and related methods) signatures are not suited for document types, since they require a concrete java class. They always return POJO(s).
Dynamic Properties
When a type descriptor is created from a POJO class, the type descriptor builder checks if the POJO class supports Dynamic Properties (new in 8.0.1). If it doesn’t, the type descriptor will also not support dynamic properties. If a space document will be created using the same type with a property that is not defined in the POJO and written to the space, an exception will be thrown indicating the property is not defined in the type and the type does not support dynamic properties.
It is possible to manually create a
SpaceTypeDescriptor of the POJO using the
SpaceTypeDescriptorBuilder and enable dynamic properties. Note, however, that in that case if client A writes a document with a dynamic property and client B reads it as a POJO, the dynamic property will be ignored, and if client B will proceed to update the entry the dynamic property will be deleted from the space.
Deep Interoperability
If the POJO contains properties which are POJO themselves, the space will implicitly convert these properties to space documents as needed. For example:
// Create a POJO entry with a POJO property and write it to space: Person personPojo = new Person() .setName("smith") .setAddress(new Address() .setCity("New York") .setStreet("Main")); gigaSpace.write(personPojo); // Read POJO entry as a document: SpaceDocument template = new SpaceDocument(Person.class.getName()) .setProperty("name", "smith"); SpaceDocument personDoc = gigaSpace.read(template); // Get address document from person document: SpaceDocument addressDoc = personDoc.getProperty("address");
This works the other way around as well - if a space document is created with a nested space document property, it will be converted to a POJO with a nested POJO property when read as a POJO.
If you prefer to disable this implicit conversion and preserve the nested POJO instance within document entries, use the
@SpaceProperty annotation and set
documentSupport to
COPY:
public class Person { ... @SpaceProperty(documentSupport = SpaceDocumentSupport.COPY) public Address getAddress() {...} public Person setAddress(Address address) {...} ... }
In that case the result would be:
// Write POJO entry same as before ... // Read POJO entry as a document: SpaceDocument template = new SpaceDocument(Person.class.getName()) .setProperty("name", "smith"); SpaceDocument personDoc = gigaSpace.read(template); // Get address POJO from person document: Address addressPojo = personDoc.getProperty("address");
The
SpaceDocumentSupport can be one of the following:
CONVERT– Value is converted to/from space document, according to the operation’s context.
COPY– Value reference is copied as-is - no conversion is performed.
DEFAULT– Behavior will be determined automatically according to the object’s class.
This behavior applies to arrays and collections as well (for example, if
Person would have
List<Address> getAddresses(), it would be converted to a list of address documents).
This feature is new 8.0.1. In previous releases, there’s no implicit conversion of nested properties.
Local View / Cache
Local View and Local Cache supports both POJOs and Documents. Unlike an embedded space, the entry is stored in the cache as a user object (either POJO or document), which speeds up query performance since the result entries do not have to be transformed.
When working with POJOs only or Documents only, this is not an issue. However, when working in a mixed POJO-document environment it is important to understand how the objects are stored in cache to assure optimal performance.
Local view is defined by one or more views, which are essentially SQL queries, so the query result type discussed above actually determines if the objects are stored locally as POJOs or documents.
Local cache stores its object locally according to the master space: If a POJO entry was written to the master space, it will be kept in the local cache as a POJO as well, and if a document entry is written to the master it will be kept as document in the local cache. If a user asks the local cache for a document result but the entry is stored as a POJO it will be converted, and vice versa.
Space Filters and Space Replication Filters
Space Filters are supported for space documents. If the space type descriptor that is registered in the space contains the POJO class, the entry will be passed to the filter as a POJO. Otherwise, it will be passed to the filter as a document. | https://docs.gigaspaces.com/xap/10.2/dev-java/document-pojo-interoperability.html | CC-MAIN-2022-27 | en | refinedweb |
Java program reading from 2 files. so i'm creating a program for u.s population by state per 2010 census. Below is the question:
create a program that will read from two files and fills two HashMaps to find the population (according to the 2010 census) of an individual state or the United States. There will be 2 text files in your program:
The name of the states and their abbreviations, is a comma separated file - name,abbreviation.
The abbreviations for each state and their respective population, is a tab separated file – abbreviation(tab) integer.
You should read the first file into a HashMap that can handle the types. and then you should read the second file into a HashMap that can handle those types.
You should create a menu driven system that requests either the name or the abbreviation for a state. If the user enters a name of the state, your program should find the abbreviation for that state (using the first HashMap) and then call the second HashMap to find the population.
or if the user enters the abbreviation, your program should look up the population directly. Please display the results of the request in a clear and concise manner. If the user simply requests the population of the United States you may use the second HashMap and sum the values for all of the states and display that directly.
i created a menu and then created 2 hash maps but i don't know how to get the user input to search only for one key. At this point my hash map reads everything from the file instead of just what the user enters.
Print Menu: public class Main { //public static void main(String[] args) { public static int menu() { int selection; Scanner input = new Scanner(System.in); /***************************************************/ System.out.println(" Choose from these choices "); System.out.println(" -------------------------\n"); System.out.println("1 - Type 1 to Enter an Abbreviation of the state you would like the population for. "); System.out.println("2 - Type 2 to Enter Name of the State you would like the population for. "); System.out.println("3 - Type 3 to find the population for whole United States(Census 2010). "); System.out.println("4 - Type 4 to Exit the Program."); selection = input.nextInt(); return selection; } //Once you have the method complete you would display it accordingly in your main method as follows: public static void main(String[] args) { int userChoice; Scanner input = new Scanner(System.in); /*********************************************************/ userChoice = menu(); //from here I can either use a switch statement on the userchoice //or use a while loop (while userChoice != the fourth selection) //using if/else statements to do my actual functions for my choices. } }
First Hash Map
import java.util.Scanner; import java.io.BufferedReader; import java.io.File; import java.io.FileReader; import java.io.IOException; import java.util.*; public class hash1 { public static void main(String[] args) throws IOException { File fi1 = new File("StatePlusAbbrev.txt"); Scanner sc1 = new Scanner(fi1).useDelimiter("[,\n\r]+"); HashMap<String, String> states = new HashMap<>(); // HashMap<String, Integer> abbrevToPop = new HashMap<>(); String stateName; String stateAbbrev; while (sc1.hasNextLine()) { stateName = sc1.next(); stateAbbrev = sc1.next(); states.put(stateName, stateAbbrev); //System.out.println("The abbreviation for " +stateName + " is " + // stateAbbrev); } // Set<Entry<String,String>> pod = states.entrySet(); // iterate through the key set and display key and values Set<String> keys = states.keySet(); for (String key : keys) { // System.out.print("Key = " + key); System.out.println(" Value = " + states.get(key) +"\n"); // } //Set set = states.entrySet(); // check set values //System.out.println("Set values: " + set); // states.forEach((k, v) -> // System.out.println(k + "=\t" + v)); // // // String out = states.get("New York"); // System.out.println(out); } /** * */ public void menu() { } }
Any help would be appreciated. the second hash map is very similar except it has the string and int to read the abbreviations and number of population | https://www.daniweb.com/programming/software-development/threads/501237/hash-map-reading-from-2-files | CC-MAIN-2018-39 | en | refinedweb |
Decimal part of a number in Python
I have the following program
def F_inf(a,b): x1=a.numerator/a.denominator x2=b.numerator/b.denominator if x1<x2: print "a<b" elif x1>x2: print "a>b" else: print "a=b" a=Fraction(10,4) b=Fraction(10,4) F_inf(a, b)
When I execute it,x1 receive just the integer value of the fraction, for exemple if I have to compute 2/4 x1 is equal to 0 not 0.5. What should I do ? Thanks
1 answer
- answered 2018-01-11 21:05 Holloway
It sounds like you're using Python2. The best solution would be to switch to Python 3 (not just because of the division but because "Python 2.x is legacy, Python 3.x is the present and future of the language").
Other than that you have a couple of choices.
from __future__ import division # include ^ as the first line in your file to use float division by default
or
a = 1 b = 2 c = a / (1.0*b) # multiplying by 1.0 forces the right side of the division to be a float #c == 0.5 here | http://quabr.com/48215731/decimal-part-of-a-number-in-python | CC-MAIN-2018-39 | en | refinedweb |
26 CFR 1.182-6 - Election to deduct land clearing expenditures.
(a)Manner of making election. The election to deduct expenditures for land clearing provided by section 182(a) shall be made by means of a statement attached to the taxpayer's income tax return for the taxable year for which such election is to apply. The statement shall include the name and address of the taxpayer, shall be signed by the taxpayer (or his duly authorized representative), and shall be filed not later than the time prescribed by law for filing the income tax return (including extensions thereof) for the taxable year for which the election is to apply. The statement shall also set forth the amount and description of the expenditures for land clearing claimed as a deduction under section 182, and shall include a computation of “taxable income derived from farming”, if the amount of such income is not the same as the net income from farming shown on Schedule F of Form 1040, increased by the amount of the deduction claimed under section 182.
(b)Scope of election. An election under section 182(a) shall apply only to the taxable year for which made. However, once made, an election applies to all expenditures described in § 1.182-3 paid or incurred during the taxable year, and is binding for such taxable year unless the district director consents to a revocation of such election. Requests for consent to revoke an election under section 182 shall be made by means of a letter to the district director for the district in which the taxpayer is required to file his return, setting forth the taxpayer's name, address and identification number, the year for which it is desired to revoke the election, and the reasons therefor. However, consent will not be granted where the only reason therefor is a change in tax consequences. | https://www.law.cornell.edu/cfr/text/26/1.182-6 | CC-MAIN-2018-39 | en | refinedweb |
.
In this codelab, you'll replace some existing components in a form with new ones by MDC.
The starter app is located in the
material-components-ios-codelabs-master/MDC-111/Swift/Starter directory. Be sure to
cd into that directory before beginning.
To clone this codelab from GitHub, run the following commands:
git clone cd material-components-ios-codelabs/MDC-111/Swift/Starter
Success! You should see the app and its form.
Material Design text fields have a major usability gain over plain text fields. By defining the hit zone with an outline or a background fill, users are more likely to interact with your form or identify text fields within more complicated content..
Delete each of the dividers..)
At the top of
ViewController.swift, add the following under
import UIKit:
import MaterialComponents.
The text fields are all updated to use the newer designs in MDC. Let's change the trailing constraint to be something more appropriate for this design.!:
The zip text field is red and has error text beneath it. To get rid of the error, just replace the letter with a number.
MDC has buttons with:
They are built on UIButton (the standard iOS button class), so you already know how to use them in your code.
In
ViewController.swift, in the properties section, change the class of the
@IBOutlet weak var saveButton: UIButton! to MDCRaisedButton:
@IBOutlet weak var saveButton: MDCRaisedButton!
In
Main.storyboard, select the save button and open the Identity Inspector on the right. (If you can't find the Identity Inspector, you can access it in View > Utilities > Show Identity Inspector.)
In the Custom Class field, select
MDCRaisedButton from the dropdown menu. (Notice all other classes you could choose!):
Still in the storyboard, open the Attributes Inspector. (If you can't find the Attributes Inspector, you can access it in View > Utilities > Show Attributes Inspector.)
Set the button type to custom:
Material Design allows for expressive branding choices. Let's make a couple to see how much those choices can change an app.
In the storyboard, select the save button and open the Attributes Inspector. (If you can't find the Attributes Inspector, you can access it in View > Utilities > Show Attributes Inspector.)
Set the text color to be white:
Then set the button's background color to hex color #0078FF:
Material Design recommends you make the button text uppercase.
In the storyboard, double click on the save button and make the title SAVE::
Select the checkbox for width and set the constant to 88 and then click Add 1 Constraint: >=:
Click away from the dialog. Build and run:.
In ViewController.swift, add the following method:
override func viewDidLayoutSubviews() { super.viewDidLayoutSubviews() saveButton.hitAreaInsets = UIEdgeInsets(top: (48 - self.saveButton.bounds.size.height) / -2, left: 0, bottom: (48 - self.saveButton.bounds.size.height) / ))
Switch the simulator target to an iPad, then build and run. Press SAVE:.
You can explore even more components in MDC-iOS by visiting the MDC-iOS catalog. | https://codelabs.developers.google.com/codelabs/mdc-111-swift/index.html?index=..%2F..%2Fio2018 | CC-MAIN-2018-39 | en | refinedweb |
It's not the same without you
Join the community to find out what other Atlassian users are discussing, debating and creating.
Hello everyone! I'm trying to copy an attachment from a screen ,
Who is associate to the closure of a subtask .
I succeeded to do it with comment :
def commentManager = ComponentAccessor.getCommentManager()
def user = ComponentAccessor.getJiraAuthenticationContext().getLoggedInUser()
def originalComment = transientVars.comment as String
def comments = originalComment
if (comments!=null)
{
commentManager.create(parentIssue, user, comments, false)
}
I'm trying to do an equivalent with the attachement (
copyAttachments(Issue issue, ApplicationUser author, String newIssueKey)) but
transientVars.attachment
Does not seem to exist
How am I supposed to do ?
Hi Sergio,
Based on your description, the attachment added during the 'Close issue' transition is always the last one (unless you have any post functions/automation that adds more attachments after that), the code to copy it to the parent issue is as follows:
import com.atlassian.jira.component.ComponentAccessor
def attMgr = ComponentAccessor.getAttachmentManager()
def attachments = attMgr.getAttachments(issue)
def lastAtt = attachments[0]
def currentUser = ComponentAccessor.getJiraAuthenticationContext().getLoggedInUser()
if (attachments.size() > 1){
for (int i = 1; i < attachments.size(); i++){
if (lastAtt.getCreated().getTime() < attachments[i].getCreated().getTime()){
lastAtt = attachments[i]
}
}
}
attMgr.copyAttachment(lastAtt, currentUser, issue.getParentObject().getKey())
Also, make sure that you place this post function AFTER the "Re-index an issue to keep indexes in sync with the database" post function
Hi Sergio,
I would like to do the exact same thing but have no idea on how to manage that.
Hope someone could help :)
Best regards,
Germ. | https://community.atlassian.com/t5/Jira-questions/How-to-copy-an-Attachment-from-a-screen-in-subtask-to-the-main/qaq-p/705352 | CC-MAIN-2018-39 | en | refinedweb |
[SOLVED] [N00b] Memory management in Qt
Hello Qt devs!
I just read this passage in C++ GUI Programming with Qt 4 by Jasmin Blanchette and Mark Summerfiedl:
bq. Qt's parent-child mechanism is implemented in QObject. When we create an object (a widget, validator, or any other kind) with a parent, the parent adds the object tothe list of its children. When the parent is deleted, it walks through its list of children and deletes each child.
Now, if you take a look at the following code:
@
#ifndef EMPLOYEELISTDIALOG_H
#define EMPLOYEELISTDIALOG_H
#include <QSqlQueryModel>
#include "ui_employeelistdialog.h"
class employeeListDialog : public QDialog, private Ui::employeeListDialog
{
Q_OBJECT
public:
explicit employeeListDialog(QWidget *parent = 0);
private:
// Function that initializes employee data models void EMPModels(); // Declares some pointers to help query DB // when displaying employee information QSqlQueryModel *EMPpr; QSqlQueryModel *EMPpa; QSqlQueryModel *EMPpapr;
};
#endif // EMPLOYEELISTDIALOG_H@
and
@void employeeListDialog::EMPModels()
{
// Initially hides everything in the centralWidget area. // Widgets will appear when buttons are triggered. // Also sets the tabs' label text /* Queries the database for the present employees and makes a model out of it, so a view can be added later */ EMPpr = new QSqlQueryModel; ... ... /* Same thing for past employees */ EMPpa = new QSqlQueryModel;
...
...
/* Same thing for all employeese, past and present */ EMPpapr = new QSqlQueryModel; ... ...
}@
Do I have to write
@ EMPpapr = new QSqlQueryModel(this); @
so that the pointers get deleted, or are they already handled by Qt?
Also, if I declare some QSqlQuery pointers, which do not specify parents, do I need to delete them manually?
Parent / child relationships are either established "explicit" by passing a parent to the constructor / calling QObject::setParent() or "implicit" by objects taking ownership of another eg. adding a widget to a layout. Such situations are usually stated in the documentations.
But there is no automatism.
In your case you will have to explicitly set a parent.
Basiclly, if your instance has parent object(which is QObject base), it will be deleted when parent be deleted. So, firstly, make sure if it's QObject-based. If the answer is not, you can delete the pointer in the deconstructor. Or use smart pointer in Qt.
If you use QML with Qt C++, it's more complicated. But the core concept you need know is ownership. Also in QtScript. You can also think about if the pointer be deleted twice, and what will happen in Qt.
Okay.
For instance, when I close a dialog using, say
@connect(EMP_closeButton, SIGNAL(clicked()), this, SLOT(close()));@
will this call the destructor, or will it just hide the dialog?
Do I need to set Qt::WA_DeleteOnClose? If yes, where do I do that?
Thanks for your incredibly fast reponses, guys!
Yes, you should give your QSqlQueryModel instances an explicit parent. If you do not specify a parent, and none is set automatically (for instance inserting a widget into a layout of a parent widget will take care of parent-child relationships automatically), you need to take care of deleting any object you created on the heap yourself. That probably come down to calling delete on the pointers in the destructor of the class you created them in. Or, as Chuck.Gao suggests, use smart pointers to take care of that for you.
Okay. So since everything (dialogs, widgets...) I call from the MainWindow has the MainWindow as its parent, the delete process is taken care of when I close the window using the code above. Where do I set Qt::DeleteOnClose?
What happens if I delete a pointer twice?
Thanks,
close() won't delete the dialog (= call the destructor) unless the WA_DeleteOnClose attribute is set. The flag can be set any time (preferebly before closing) using QWidget::setAttribute, and - of course - if your dialog is created on the heap (if it is created on the stack it will be destructed automatically when going out of scope).
However, if you set a parent for your dialog, it will be automatically deleted as soon as the parent is deleted. If you do not set a parent, it is your responsibility to delete the dialog (by either calling delete or setting the WA_DeleteOnClose attribute).
Thanks for answers, Qt devs! I really appreciate it.
[quote author="Joey Dumont" date="1309184687"]What happens if I delete a pointer twice? [/quote]
If the pointer has been set to 0 after the first deletion usually nothing, as @
delete 0;
@ is perfectly valid C++ if I remember correctly.
However, if you call delete on a non-null pointer which has already been deleted this usually leads to unexpected behaviour and will crash your application at best.
In that case, I'll be careful to always declare a parent. Seems like the easiest solution.
Thanks again!
- mlong Moderators
bq. In that case, I’ll be careful to always declare a parent. Seems like the easiest solution.
And, if you do ever delete anything manually, be sure to set the pointer to 0 after the call to delete. It's a good, safe practice. | https://forum.qt.io/topic/7046/solved-n00b-memory-management-in-qt | CC-MAIN-2018-39 | en | refinedweb |
coq-of-ocaml
Documentation on
Aim
coq-of-ocaml aims to enable formal verification of OCaml programs 🦄. The more you prove, the happier you are. By transforming OCaml code into similar Coq programs, it is possible to prove arbitrarily complex properties using the existing power of Coq. The sweet spot of
coq-of-ocaml is purely functional and monadic programs. Side-effects outside of a monad, like references, and advanced features like object-oriented programming, may never be supported. By sticking to the supported subset of OCaml, you should be able to import millions of lines of code to Coq and write proofs at large. Running
coq-of-ocaml after each code change, you can make sure that your proofs are still valid. The generated Coq code is designed to be stable, with no generated variable names for example. We recommend organizing your proof files as you would organize your unit-test files.
The guiding idea of
coq-of-ocaml is TypeScript. Instead of bringing types to an untyped language, we are bringing proofs to an already typed language. The approach stays the same: finding the right sweet spot, using heuristics when needed, guiding the user with error messages. We use
coq-of-ocaml at Tezos, a crypto-currency implemented in OCaml, in the hope to have near-zero bugs thanks to formal proofs. Tezos is currently one of the most advanced crypto-currencies, with smart contracts, proof-of-stake, encrypted transactions, and protocol upgrades. It aims to compete with Ethereum. Formal verification is claimed to be important for crypto-currencies as there are no central authorities to forbid bug exploits and a lot of money at stake. A Coq translation of the core components of Tezos is available in the project coq-tezos-of-ocaml. Protecting the money.
There are still some open problems with
coq-of-ocaml, like the axiom-free compilation of GADTs (an ongoing project). If you are willing to work on a particular project, please contact us by opening an issue in this repository.
Example
Start with the file
main.ml 🐫:
type 'a tree = | Leaf of 'a | Node of 'a tree * 'a tree let rec sum tree = match tree with | Leaf n -> n | Node (tree1, tree2) -> sum tree1 + sum tree2
Run:
coq-of-ocaml main.ml
Get a file
Main.v 🦄:
Require Import CoqOfOCaml.CoqOfOCaml. Require Import CoqOfOCaml.Settings. Inductive tree (a : Set) : Set := | Leaf : a -> tree a | Node : tree a -> tree a -> tree a. Arguments Leaf {_}. Arguments Node {_}. Fixpoint sum (tree : tree int) : int := match tree with | Leaf n => n | Node tree1 tree2 => Z.add (sum tree1) (sum tree2) end.
You can now write proofs by induction over the
sum function using Coq. To see how you can write proofs, you can simply look at the Coq documentation. Learning to write proofs is like learning a new programming paradigm. It can take time, but be worthwhile! Here is an example of proof:
(** Definition of a tree with only positive integers *) Inductive positive : tree int -> Prop := | Positive_leaf : forall n, n > 0 -> positive (Leaf n) | Positive_node : forall tree1 tree2, positive tree1 -> positive tree2 -> positive (Node tree1 tree2). Require Import Coq.micromega.Lia. Lemma positive_plus n m : n > 0 -> m > 0 -> n + m > 0. lia. Qed. (** Proof that if a tree is positive, then its sum is positive too *) Fixpoint positive_sum (tree : tree int) (H : positive tree) : sum tree > 0. destruct tree; simpl; inversion H; trivial. apply positive_plus; now apply positive_sum. Qed.
Install
Using the OCaml package manager opam, run:
opam install coq-of-ocaml
Usage
The basic command is:
coq-of-ocaml file.ml
You can start to experiment with the test files in
tests/ or look at our online examples.
coq-of-ocaml compiles the
.ml or
.mli files using Merlin to understand the dependencies of a project. One first needs to have a compiled project with a working configuration of Merlin. This is automatically the case if you use dune as a build system.
Documentation
You can read the documentation on the website of the project at.
Supported
the core of OCaml (functions, let bindings, pattern-matching,...) ✔️
type definitions (records, inductive types, synonyms, mutual types) ✔️
monadic programs ✔️
modules as namespaces ✔️
modules as polymorphic records (signatures, functors, first-class modules) ✔️
multiple-file projects (thanks to Merlin) ✔️
.mland
.mlifiles ✔️
existential types (we use impredicative sets to avoid a universe explosion) ✔️
partial support of GADTs 🌊
partial support of polymorphic variants 🌊
partial support of extensible types 🌊
ignores side-effects outside of a monad ❌
no object-oriented programming ❌
Even in case of errors, we try to generate some Coq code along with an error message. The generated Coq code should be readable and with a size similar to the OCaml source. The generated code does not necessarily compile after a first try. This can be due to various errors, such as name collisions. Do not hesitate to fix these errors by updating the OCaml source accordingly. If you want more assistance, please contact us by opening an issue in this repository.
Contribute
If you want to contribute to the project, you can submit a pull-requests.
Build with opam
To install the current development version:
opam pin add
Build manually
Clone the Git submodule for Merlin:
git submodule init git submodule update
Then read the
coq-of-ocaml.opam file at the root of the project to know the dependencies to install and get the list of commands to build the project.
License
MIT (open-source software)
sha256=9dfde155240061a70b803715609246d360605d887035070b160190e21f672dbd
sha512=259a034fdff6cc0ca4af935eb07ec4dac7916a199452bec2bccbf5b6e26f6dfbcbd33ba53dfe63d0fc64a9c04763129e76ff6d04ebd53bb24e378b96457651ec | https://ocaml.org/p/coq-of-ocaml/2.5.2+4.13 | CC-MAIN-2022-27 | en | refinedweb |
Semivariogram Sensitivity (Geostatisical Analyst)
Summary
Performs a sensitivity analysis with varying Nugget, Partial Sill, and Range values.
Usage
The geostatistical model source is either a geostatistical layer or a geostatistical model (XML).
Set the environment variable Seed equal to a nonzero value if the random sequence should be repeatable.
For data formats that support Null values, such as file and personal geodatabase feature classes, a Null value will be used to indicate that a prediction could not be made for that location or that the value showed.
Syntax
Code Sample
Performs a sensitivity analysis by varying the Nugget, Partial Sill, and Range values.
import arcpy arcpy.env.workspace = "C:/gapyexamples/data" arcpy.GASemivariogramSensitivity_ga( "C:/gapyexamples/data/kriging.lyr", "C:/gapyexamples/data/ca_ozone_pts.shp OZONE", "C:/gapyexamples/data/obs_pts.shp", "", "", "", "", "", "", "", "", "C:/gapyexamples/output/outtabSS")
Performs a sensitivity analysis by varying the Nugget, Partial Sill, and Range values.
# Name: SemivariogramSensitivity_Example_02.py # Description: The semivariogram parameters Nugget, Partial Sill and Range can # be varied to perform a sensitivity analysis. # Requirements: Geostatistical Analyst Extension # Import system modules import arcpy # Set environment settings arcpy.env.workspace = "C:/gapyexamples/data" # Set local variables inLayer = "C:/gapyexamples/data/kriging.lyr" inData = "C:/gapyexamples/data/ca_ozone_pts.shp OZONE" inObs = "C:/gapyexamples/data/obs_pts.shp" nugPercents = "" nugCalc = "" sillPercents = "" sillCalc = "" rangePercents = "" rangeClac = "" minrangePercent = "" midrangeCalc = "" outTable = "C:/gapyexamples/output/outtabSS" # Check out the ArcGIS Geostatistical Analyst extension license arcpy.CheckOutExtension("GeoStats") # Execute SemivariogramSensitivity arcpy.GASemivariogramSensitivity_ga(inLayer, inData, inObs, nugPercents, nugCalc, sillPercents, sillCalc, rangePercents, rangeClac, minrangePercent, midrangeCalc, outTable) | https://resources.arcgis.com/en/help/main/10.1/0030/003000000011000000.htm | CC-MAIN-2022-27 | en | refinedweb |
BrowseTheWeb
Examples:
import { Actor } from '@serenity-js/core'; import { BrowseTheWeb, Navigate, Target } from '@serenity-js/protractor' import { Ensure, equals } from '@serenity-js/assertions'; import { by, protractor } from 'protractor'; const actor = Actor.named('Wendy').whoCan( BrowseTheWeb.using(protractor.browser), ); const HomePage = { Title: Target.the('title').located(by.css('h1')), }; actor.attemptsTo( Navigate.to(``), Ensure.that(Text.of(HomePage.Title), equals('Serenity/JS')), );
See also:
Static Method Summary
Constructor Summary
Method Summary
Static Public Methods
public static as(actor: UsesAbilities): BrowseTheWeb source
Used to access the Actor's ability to BrowseTheWeb from within the Interaction classes, such as Navigate.
Params:
Returns:
public static using(browser: ProtractorBrowser): BrowseTheWeb source
Ability to interact with web front-ends using a given protractor browser instance.
Params:
Returns:
Public Constructors
public constructor(browser: ProtractorBrowser) source
Params:
Public Methods
public actions(): ActionSequence source
Interface for defining sequences of complex user interactions.
Each sequence will not be executed until
perform is called.
Returns:
public alert(): AlertPromise source
Changes focus to the active modal dialog,
such as those opened by
Window.alert(),
Window.prompt(), or
Window.confirm().
The returned promise will be rejected with an
error.NoSuchAlertError
if there are no open alerts.
Returns:
public closeCurrentWindow(): Promise<void> source
Closes the currently active browser window/tab.
Returns:
public enableAngularSynchronisation(enable: boolean): Promise<boolean> source
If set to false, Protractor will not wait for Angular $http and $timeout tasks to complete before interacting with the browser.
This can be useful when:
- you need to switch to a non-Angular app during your tests (i.e. SSO login gateway)
- your app continuously polls an API with $timeout
If you're not testing an Angular app, it's better to disable Angular synchronisation completely in protractor configuration:
Params:
Returns:
Examples:
exports.config = { onPrepare: function () { return browser.waitForAngularEnabled(false); }, // ... other config };
public executeAsyncScript(script: string | Function, args: any[]): Promise<any>.
Unlike executing synchronous JavaScript with BrowseTheWeb#executeScript, scripts executed with this function must explicitly signal they are finished by invoking the provided callback.
This callback will always be injected into the executed function as the last argument,
and thus may be referenced with
arguments[arguments.length - 1].
The following steps will be taken for resolving this functions return value against the first argument to the script's callback function:
-(` var delay = arguments[0]; var callback = arguments[arguments.length - 1]; window.setTimeout(callback, delay); `, 500)
BrowseTheWeb.as(actor).executeAsyncScript(` var callback = arguments[arguments.length - 1]; callback('some return value') `).then(value => doSomethingWithThe(value))
public executeFunction(fn: Function, args: Parameters<fn>): Promise<ReturnType<fn>> source
A simplified version of BrowseTheWeb#executeScript that doesn't affect {@link LastScriptExecution.result()}.
Params:
Returns:
public executeScript(description: string, script: string | Function, args: any[]): Promise<any> source
Schedules a command to execute.
The script may refer to any variables accessible from the current window.
Furthermore, the script will execute in the window's context, thus
document may be used to refer
to the current document. Any local variables will not be available once the script has finished executing,
though global variables will persist.
If the script has a return value (i.e. if the script contains a
return statement),
then the following steps will be taken for resolving this functions return value:(` return arguments[0].tagName; `, Target.the('header').located(by.css(h1))
public get(destination: string, timeoutInMillis: number?): Promise<void> source
Navigate to the given destination and loads mock modules before Angular. Assumes that the page being loaded uses Angular.
Params:
Returns:
public getAllWindowHandles(): Promise<string[]> source
Returns the handles of all the available windows.
Please note that while some browsers organise entries of this list in the same order new windows have been spawned, other browsers order it alphabetically. For this reason, you should not make any assumptions about how this list is ordered.
Returns:
public getCapabilities(): Promise<Capabilities> source
Returns the capabilities of the browser used in the current session.
By default, the session
capabilities specified in the
protractor.conf.js
indicate the desired properties of the remote browser. However, if the remote cannot satisfy
all the requirements, it will still create a session.
Returns:
public getCurrentWindowHandle(): Promise<string> source
Returns the current window handle. Please note that the current handle changes with each browser window you Switch to.
Returns:
public getLastScriptExecutionResult(): any source
Returns the last result of calling BrowseTheWeb#executeAsyncScript or BrowseTheWeb#executeScript
Returns:
public getOriginalWindowHandle(): Promise<string> source
Returns the handle of the browser window last used to navigate to a URL.
Returns:
public getTitle(): Promise<string> source
Returns the title of the current page.
Returns:
public locate(locator: Locator): ElementFinder source
Locates a single element identified by the locator
Params:
Returns:
public locateAll(locator: Locator): ElementArrayFinder source
Locates all elements identified by the locator
Params:
Returns:
public manage(): Options source
Interface for managing browser and driver state.
Returns:
public navigate(): Navigation source
Interface for navigating back and forth in the browser history.
Returns:
public param(path: string): T source
Returns Protractor configuration parameter at
path.
Params:
Returns:
Throws:
Examples:
exports.config = { params: { login: { username: '[email protected]' password: process.env.PASSWORD } } // ... }
BrowseTheWeb.as(actor).param('login') // returns object with username and password
BrowseTheWeb.as(actor).param('login.username') // returns string '[email protected]'
public sleep(millis: number): Promise<void> source
Pause the actor flow for a specified number of milliseconds.
Params:
Returns:
public switchToDefaultContent(): Promise<void> source
Returns:
public switchToFrame(elementOrIndexOrName: number | string | WebElement): Promise<void> source
Switches the focus to a
frame or
iframe identified by
elementOrIndexOrName,
which can be specified either as WebElement, the name of the frame, or its index.
Params:
Returns:
public switchToOriginalWindow(): Promise<void> source
Returns:
public switchToParentFrame(): Promise<void> source
Returns:
public switchToWindow(nameOrHandleOrIndex: string | number): Promise<void> source
Switches browser window/tab to the one identified by
nameOrHandleOrIndex,
which can be specified as the name of the window to switch to, its handle, or numeric index.
Params:
Returns:
public takeScreenshot(): Promise<string> source
Schedule a command to take a screenshot. The driver makes a best effort to return a base64-encoded screenshot of the following, in order of preference:
- Entire page
- Current window
- Visible portion of the current frame
- The entire display containing the browser | https://serenity-js.org/modules/protractor/class/src/screenplay/abilities/BrowseTheWeb.ts~BrowseTheWeb.html | CC-MAIN-2022-27 | en | refinedweb |
#include <SIM_ElectricalProperties.h>
This is an implementation of the SIM_Data interface. This implementation contains fields to record electrical properties.
Definition at line 43 of file SIM_ElectricalProperties.h.
Definition at line 42 of file SIM_ElectricalProperties.C.
Definition at line 48 of file SIM_ElectricalProperties.C.
Controls the object's resistance.
Controls the object's capacitance.
Controls the object's inductance. | https://www.sidefx.com/docs/hdk/class_h_d_k___sample_1_1_s_i_m___electrical_properties.html | CC-MAIN-2022-27 | en | refinedweb |
Nice, and saw them using some other new text editing features that I hadn't seen before ("wait - how did you just do that?").
Below is a non-exhaustive list of a few new code editing improvements I've learned about this week. I'm know there are many more I don't know about yet - but I thought these few were worth sharing now:
Transparent Intellisense Mode
One of the things I sometimes find annoying with intellisense in VS 2005 is that the intellisense drop-down obscures the code that is behind it when it pops-up:
With VS 2005 I often find myself needing to escape out of intellisense in order to better see the code around where I'm working, and then go back and complete what I was doing. This sometimes ends up disturbing my train of thought and typing workflow.
VS 2008 provides a nice new feature that allows you to quickly make the intellisense drop-down list semi-transparent. Just hold down the "Ctrl" key while the intellisense drop-down is visible and you'll be able to switch it into a semi-transparent mode that enables you to quickly look at the code underneath without having to escape out of intellisense:
When you release the "Ctrl" key, the editor will switch back to the normal intellisense view and you can continue typing where you were in the Intellisense window.
This feature works with all language (VB, C#, and JavaScript). It also works with HTML, XAML and XML based markup.
VB Intellisense Filtering
The VB team has made some nice improvements to intellisense that make it much easier to navigate through APIs.
Intellisense completion now automatically filters the member list available as you type to help you better pinpoint the API you are looking for. For example, if in an ASP.NET code-behind page you type "R" it will show the full list of types and members available (with the selection starting in the "R" list):
When you type the second character of what you are looking for (in this case "e"), VB will automatically filter to only show those types that start with "Re" and highlight the most likely option:
When you type the "s" it filters the list even further:
When you type "p" it filters down to just the one option available:
I find this cleaner and more intuitive than the previous model that always showed everything in the drop-down.
VB LINQ Intellisense
I've done several posts in the past about LINQ and LINQ to SQL. Both VB and C# obviously have full support for LINQ and LINQ to SQL. I think the VB team in particular has done some nice work to provide nice intellisense hints to help guide users when writing LINQ statements in the editor.
For example, assuming we have a LINQ to SQL data model like the one I built in Part 2 of my LINQ to SQL series, I could use the VB code editor to easily work with it. Notice below how VB automatically provides a tooltip that helps guide me through writing the LINQ query syntax:
I can then start writing my query expression and the VB intellisense will guide me through creating it:
The above expression retrieves three column values from the database and creates a new anonymous type that I can then loop over to retrieve and work on the data:
Organize C# Using Statements
The C# editor has added some great intellisense improvements as well. Some of the biggest obviously include language intellisense and refactoring support for the new language features (Lambdas, Extension Methods, Query Syntax, Anonymous Types, etc). Just like in our VB example above, C# supports type inference and intellisense completion of anonymous types:
One of the small, but nice, new features I recently noticed in VS 2008 is support for better organizing using statements in C#. You can now select a list of using statements, right-click, and then pull up the "Organize Usings" sub-menu:
You can use this to alphabetically sort your namespaces (one of my pet peeves), and optionally use the 'Remove Unused Usings" command to remove un-necessary namespace declarations from the file:
When you use this command the editor will analyze what types you are using in your code file, and automatically remove those namespaces that are declared but not needed to support them. A small but handy little feature.
Summary
The above list of editor improvements is by no means exhaustive, but rather just a few small improvements I've played with in the last week. Post others you notice in the comments section of this post, and I'll try and do an update with more in the future.
Thanks,
Scott | https://weblogs.asp.net/scottgu/nice-vs-2008-code-editing-improvements | CC-MAIN-2022-27 | en | refinedweb |
In my first Laminar tutorial I showed how to set up a Scala sbt project to use Laminar, and then showed a “static” example — i.e., there were no moving parts. Please see that tutorial first if you’ve never used Laminar.
BUT, because Laminar is meant for writing reactive applications with observables and observers, this tutorial begins to show its reactive concepts.
Please note that this example is almost 100% the same as the Laminar “Hello, world” example, except that I add a few minor things to it.
Just like my first tutorial, I also provide an sbt project for this tutorial to help you get up and running asap:
Clone that project if you want to follow along with the text below.
A Laminar reactive “Hello, world” example
Given that introduction, this is what this example looks like in a browser:
What happens in this example is that you type into the input field, and then whatever you type is reactively sent to the two output fields that are below the input field. The second field transforms whatever you type into uppercase.
The Laminar/Scala/Scala.js code
Here’s the source code that makes this example work:
import com.raquo.laminar.api.L._ import org.scalajs.dom object Laminar101Reactive { def main(args: Array[String]): Unit = { val nameVar = Var(initial = "world") // an outer “wrapper” div val rootElement = div( padding("2em"), div( backgroundColor("#e0e0e0"), label("Your name: "), input( onMountFocus, placeholder := "Enter your name here", // as the user types into this 'input' field, send the contents // of the input field to nameVar. nameVar is a reactive variable // (observable), and it’s used to update the two DIVs below. inContext { thisNode => onInput.map(_ => thisNode.ref.value) --> nameVar } ), ), // a div for the first output area div( backgroundColor("#d9d9d9"), "Reactive 1: ", // any time nameVar is updated, it sends us a signal, and this // field is updated: child.text <-- nameVar.signal ), // a div for the second output area div( backgroundColor("#d0d0d0"), // any time nameVar is updated, it sends us a signal, and this // field is updated. note in this case that the value is map’d: "Reactive 2: ", child.text <-- nameVar.signal.map(_.toUpperCase) ) ) // `#root` must match the `id` in index.html val containerNode = dom.document.querySelector("#root") // render the element in the container render(containerNode, rootElement) } }
As you can see, the code consists of several
div elements, with different content in each element. Hopefully the HTML-ish elements make sense, so I won’t describe them. But here are a few notes about the unfamiliar parts:
- From the Laminar docs,
onMountFocus“focuses the element when it's mounted into the DOM. It’s like the HTML autoFocus attribute that actually works for dynamic content.”
inContextis a way to get a reference to the node that you’re inside. In this example,
inContextrefers to the
inputfield. Its code may be a little easier to read like this:
inContext { thisNode => onInput.map(_ => thisNode.ref.value) --> nameVar }
Discussion
As I write in my book, Functional Programming, Simplified,
inContext is either (a) a class that takes a function parameter, or (b) a function that takes a by-name parameter. In this case, the truth is that I don’t know what
inContext is — I haven’t taken the time to look it up.
But what this code does is that whenever a user types something into the input field, an
onInput event is triggered, and that causes the
map method to be run. Inside
map the value we get from
onInput is ignored with the
_ character, and we just yield
thisNode.ref.value, which is the value in the input field. That value is then sent “to” the
nameVar reactive variable.
Because
nameVar is an observable variable and it’s observed by these two lines of code:
child.text <-- nameVar.signal child.text <-- nameVar.signal.map(_.toUpperCase)
those two lines are immediately updated.
That’s the way observables and observers work,
and Laminar is 100% based on this principle and
style of coding.
Here’s another way of looking at how that code works:
An awesome thing about Laminar is that you can write Scala code that’s compiled to JavaScript (using Scala.js and sbt), letting you write single-page applications using Scala!
Other notes
As you’ll see in the Github project:
- The build.sbt file is almost the same as my first tutorial
- The index.html file is almost the same as my first tutorial
- You compile your Scala code to JavaScript using this command in sbt:
sbt:Laminar101> ~fastOptJS
All of that is explained in my first tutorial.
One thing that isn’t mentioned in that tutorial is that whenever you update your Scala code and it’s recompiled into JavaScript, you need to refresh your browser so it picks up the newly-generated JavaScript file.
Experiment with the examples
Now that you’ve seen this example and have access to my Github project, you can also copy and paste the other Laminar examples into this project to experiment with them.
The next tutorial
When you’re ready for the next step, I just added my third Laminar tutorial, A small Laminar reactive routing example.
Resources
If you go forward in working with Laminar, here are some helpful resources:
- Laminar documentation
- Laminar examples
- For other tech support, see the Laminar Gitter page
Laminar depends on the following three projects, and if you go forward, their docs will also be helpful: | https://alvinalexander.com/scala/laminar-102-reactive-hello-world-example/ | CC-MAIN-2022-27 | en | refinedweb |
KEYCTL_GRANT_PERMISSION(3)ux Key Management CallsCTL_GRANT_PERMISSION(3)
keyctl_watch_key - Watch for changes to a key
#include <keyutils.h> long keyctl_watch_key(key_serial_t key, int watch_queue_fd int watch_id);
keyctl_watch_key() sets or removes a watch on key. watch_id specifies the ID for a watch that will be included in notification messages. It can be between 0 and 255 to add a key; it should be -1 to remove a key. watch_queue_fd is a file descriptor attached to a watch_queue device instance. Multiple openings of a device provide separate instances. Each device instance can only have one watch on any particular key. Notification Record Key-specific notification messages that the kernel emits into the buffer have the following format: struct key_notification { struct watch_notification watch; __u32 key_id; __u32 aux; }; The watch.type field will be set to WATCH_TYPE_KEY_NOTIFY and the watch.subtype field will contain one of the following constants, indicating the event that occurred and the watch_id passed to keyctl_watch_key() will be placed in watch.info in the ID field. The following events are defined: NOTIFY_KEY_INSTANTIATED This indicates that a watched key got instantiated or negatively instantiated. key_id indicates the key that was instantiated and aux is unused. NOTIFY_KEY_UPDATED This indicates that a watched key got updated or instantiated by update. key_id indicates the key that was updated and aux is unused. NOTIFY_KEY_LINKED This indicates that a key got linked into a watched keyring. key_id indicates the keyring that was modified aux indicates the key that was added. NOTIFY_KEY_UNLINKED This indicates that a key got unlinked from a watched keyring. key_id indicates the keyring that was modified aux indicates the key that was removed. NOTIFY_KEY_CLEARED This indicates that a watched keyring got cleared. key_id indicates the keyring that was cleared and aux is unused. NOTIFY_KEY_REVOKED This indicates that a watched key got revoked. key_id indicates the key that was revoked and aux is unused. NOTIFY_KEY_INVALIDATED This indicates that a watched key got invalidated. key_id indicates the key that was invalidated and aux is unused. NOTIFY_KEY_SETATTR This indicates that a watched key had its attributes (owner, group, permissions, timeout) modified. key_id indicates the key that was modified and aux is unused. Removal Notification When a watched key is garbage collected, all of its watches are automatically destroyed and a notification is delivered to each watcher. This will normally be an extended notification of the form: struct watch_notification_removal { struct watch_notification watch; __u64 id; }; The watch.type field will be set to WATCH_TYPE_META and the watch.subtype field will contain WATCH_META_REMOVAL_NOTIFICATION. If the extended notification is given, then the length will be 2 units, otherwise it will be 1 and only the header will be present. The watch_id passed to keyctl_watch_key() will be placed in watch.info in the ID field. If the extension is present, id will be set to the ID of the destroyed key.
On success keyctl_watch_key() returns 0 . On error, the value -1 will be returned and errno will have been set to an appropriate error.
ENOKEY The specified key does not exist. EKEYEXPIRED The specified key has expired. EKEYREVOKED The specified key has been revoked. EACCES The named key exists, but does not grant view permission to the calling process. EBUSY The specified key already has a watch on it for that device instance (add only). EBADSLT The specified key doesn't have a watch on it (removal only). Aug 2019 KEYCTL_GRANT_PERMISSION(3)
Pages that refer to this page: keyctl(3) | https://michaelkerrisk.com/linux/man-pages/man3/keyctl_watch_key.3.html | CC-MAIN-2022-27 | en | refinedweb |
Image migration and synchronization between image repositories are required to migrate applications from a self-managed Kubernetes cluster to a Container Service for Kubernetes (ACK) cluster. You can use image-syncer to migrate and synchronize multiple images from self-managed image repositories to Alibaba Cloud Container Registry (ACR) on the fly at a time. This topic describes how to use image-syncer to migrate container images.
Background information
Compared with the Kubernetes clusters created and maintained by other cloud providers, ACK is superior in terms of service costs, maintenance expenses, ease-of-use, and long-term stability. A growing number of cloud providers want to migrate their Kubernetes workloads to ACK. If the number of images is small, you can run the docker pull and docker push commands to migrate the images. If you run the commands to migrate more than a hundred images or an image repository that stores TB-level data, the migration process takes a long time and may cause data loss. In this case, the image synchronization capability is required for migrating images between image repositories. The open source tool image-syncer developed by Alibaba Cloud provides this capability and has helped many cloud service providers migrate images. The maximum image repository capacity is larger than 3 TB. The server that runs image-syncer can make full use of the server bandwidth, and no requirement exists for the disk capacity of the server.
image-syncer overview
- You must remove existing images from the disks of the server where the destination image repository resides and store the migrated images on these disks. Therefore, this method does not apply to large-scale image migration.
- The Docker daemon is required. The Docker daemon limits the number of images that can be pulled or pushed concurrently. As a result, you cannot perform high-concurrency image synchronization.
- HTTP API operations are required to implement some features. You cannot synchronize images by using only the Docker CLI. As a result, you must write a complex synchronization script.
Features
- Synchronizes images from multiple source image repositories to multiple destination image repositories.
- Supports Docker image repository services based on Docker Registry V2.
For example, image-syncer supports Docker Hub, Quay.io, Alibaba Cloud Container Registry, and Harbor.
- Synchronizes images by using only the memory and network resources. Images are not stored on the disks of the server where the destination image repository resides. This improves the synchronization efficiency.
- Supports incremental synchronization.
image-syncer uses a file to record the blob information about synchronized images. Therefore, images that have been synchronized are not synchronized again.
- Supports concurrent synchronization.
You can modify the number of images that can be concurrently pulled or pushed in the configuration file.
- Automatically retries failed synchronization tasks to resolve most image synchronization issues caused by network jitters.
- Programs such as the Docker daemon are not required.
By using the image-syncer tool, you can migrate, copy, and perform incremental synchronization of images from an image repository. Make sure that image-syncer can communicate with the source and destination repositories. image-syncer has no requirements on hardware resources. However, the number of images that are concurrently synchronized must be equal to the number of network connections on image-syncer. The memory consumed by image-syncer is less than or equal to the product of the number of images that are concurrently synchronized and the size of the largest image layer. Therefore, the memory of the server that runs image-syncer may be exhausted only if the size of the largest image layer and the number of images that are concurrently synchronized are too large. In addition, image-syncer provides a retransmission mechanism to avoid occasional failures during synchronization. image-syncer counts the number of images that fail to be synchronized when synchronization ends and provides detailed logs to help you locate issues.
Preparations
{ "auth": { // The authentication information field. Each object contains the username and password that are required to access a registry. // In most cases, image-syncer must have permissions to pull images from and access tags in the source registry. // image-syncer must have permissions to push images to and create repositories in the destination registry. If no authentication information is provided for a registry, image-syncer accesses the registry in anonymous mode. "quay.io": { // The URL of the registry, which must be the same as that of the registry in image URLs. "username": "xxx", // Optional. The username. "password": "xxxxxxxxx", // Optional. The password. "insecure": true // Optional. Specifies whether the repository is accessed through HTTP. Default value: false. Only image-syncer of V1.0.1 and later support this parameter. }, "registry.cn-beijing.aliyuncs.com": { "username": "xxx", "password": "xxxxxxxxx" }, "registry.hub.docker.com": { "username": "xxx", "password": "xxxxxxxxxx" } }, "images": { // The field that describes image synchronization rules. Each rule is a key-value pair. The key specifies the URL of the source repository and the value specifies the URL of the destination repository. // You cannot synchronize an entire namespace or registry based on one rule. You can synchronize only one repository based on one rule. // The URLs of the source and destination repositories are in the format of registry/namespace/repository:tag, which is similar to the image URL format used in the docker pull or docker push command. // The URL of the source repository must contain registry/namespace/repository. If the URL of the destination repository is not an empty string, it must also contain registry/namespace/repository. // The URL of the source repository cannot be an empty string. To synchronize images from a source repository to multiple destination repositories, you must configure multiple rules. // The name and tags of the destination repository can be different from those of the source repository. In this case, the image synchronization rule works in the same way as the combination of the docker pull, docker tag, and docker push commands. "quay.io/coreos/kube-rbac-proxy": "quay.io/ruohe/kube-rbac-proxy", "xxxx":"xxxxx", "xxx/xxx/xx:tag1,tag2,tag3":"xxx/xxx/xx" // If the URL of the source repository does not contain tags, all images in the source repository are synchronized to the destination repository with the original tags. In this case, the URL of the destination repository cannot contain tags. // If the URL of source repository contains only one tag, only images that has this tag in the source repository are synchronized to the destination repository. If the URL of the destination repository does not contain a tag, synchronized images keep the original tag. // If the URL of the source repository contains multiple tags that are separated with commas (,), such as "a/b/c:1,2,3", the URL of the destination repository cannot contain tags. Synchronized images keep the original tags. // If the URL of the destination repository is an empty string, images are synchronized to a repository that has the same name and tags in the default namespace of the default registry. The default registry and namespace can be set through command parameters or environment variables. } } | https://www.alibabacloud.com/help/en/container-service-for-kubernetes/latest/migrate-container-images-use-image-syncer-to-migrate-container-images | CC-MAIN-2022-27 | en | refinedweb |
H.
Now this is one of the more interesting things I've come across.
I fiddled around with the code a bit and was able to reproduce the phenomenon with DIMS = 1, making visualisation possible:
Here's the code I used to make the plot:
import torch import numpy as np import matplotlib.pyplot as plt from mpl_toolkits import mplot3d DIMS = 1 # number of dimensions that xn has WSUM = 5 # number of waves added together to make a splotch EPSILON = 0.10 # rate at which xn controlls splotch strength TRAIN_TIME = 5000 # number of iterations to train for LEARN_RATE = 0.2 # learning rate MESH_DENSITY = 100 #number of points ot plt in 3d mesh (if applicable) torch.random.manual_seed(1729) # knlist and k0list are integers, so the splotch functions are periodic knlist = torch.randint(-2, 3, (DIMS, WSUM, DIMS)) # wavenumbers : list (controlling dim, wave id, k component) k0list = torch.randint(-2, 3, (DIMS, WSUM)) # the x0 component of wavenumber : list (controlling dim, wave id) slist = torch.randn((DIMS, WSUM)) # sin coefficients for a particular wave : list(controlling dim, wave id) clist = torch.randn((DIMS, WSUM)) # cos coefficients for a particular wave : list (controlling dim, wave id) # initialize x0, xn x0 = torch.zeros(1, requires_grad=True) xn = torch.zeros(DIMS, requires_grad=True) # numpy arrays for plotting: x0_hist = np.zeros((TRAIN_TIME,)) xn_hist = np.zeros((TRAIN_TIME, DIMS)) loss_hist = np.zeros(TRAIN_TIME,) def model(xn,x0): wavesum = torch.sum(knlist*xn, dim=2) + k0list*x0 splotch_n = torch.sum( (slist*torch.sin(wavesum)) + (clist*torch.cos(wavesum)), dim=1) foreground_loss = EPSILON * torch.sum(xn * splotch_n) return foreground_loss - x0 # train: for t in range(TRAIN_TIME): print(t) loss = model(xn,x0) loss.backward() with torch.no_grad(): # constant step size gradient descent, with some noise thrown in vlen = torch.sqrt(x0.grad*x0.grad + torch.sum(xn.grad*xn.grad)) x0 -= LEARN_RATE*(x0.grad/vlen + torch.randn(1)/np.sqrt(1.+DIMS)) xn -= LEARN_RATE*(xn.grad/vlen + torch.randn(DIMS)/np.sqrt(1.+DIMS)) x0.grad.zero_() xn.grad.zero_() x0_hist[t] = x0.detach().numpy() xn_hist[t] = xn.detach().numpy() loss_hist[t] = loss.detach().numpy() plt.plot(x0_hist) plt.xlabel('number of steps') plt.ylabel('x0') plt.show() for d in range(DIMS): plt.plot(xn_hist[:,d]) plt.xlabel('number of training steps') plt.ylabel('xn') plt.show() fig = plt.figure() ax = plt.axes(projection='3d') ax.plot3D(x0_hist,xn_hist[:,0],loss_hist) #plot loss landscape if DIMS == 1: x0_range = np.linspace(np.min(x0_hist),np.max(x0_hist),MESH_DENSITY) xn_range = np.linspace(np.min(xn_hist),np.max(xn_hist),MESH_DENSITY) x,y = np.meshgrid(x0_range,xn_range) z = np.zeros((MESH_DENSITY,MESH_DENSITY)) with torch.no_grad(): for i,x0 in enumerate(x0_range): for j,xn in enumerate(xn_range): z[j,i] = model(torch.tensor(xn),torch.tensor(x0)).numpy() ax.plot_surface(x,y,z,color='orange',alpha=0.3) ax.set_title("loss") plt.show()
My impression is that people working on self-driving cars are incredibly safety-conscious, because the risks are very salient.
Safety conscious people working on self driving cars don't program their cars to not take evasive action after detecting that a collision is imminent.
(It's notable to me that this doesn't already happen, given the insane hype around AI.)
I think it already has.(It was for extra care, not drugs, but it's a clear cut case of a misspecified objective function leading to suboptimal decisions for a multitude of individuals.) I'll note, perhaps unfairly, that the fact that this study was not salient enough to make it to your attention even with a culture war signal boost is evidence that it needs to be a Chernobyl level event.
My worry is less that we wouldn't survive AI-Chernobyl as much as it is that we won't get an AI-Chernobyl.
I think that this is where there's a difference in models. Even in a non-FOOM scenario I'm having a hard time envisioning a world where the gap in capabilities between AI-Chernobyl and global catastrophic UFAI is that large. I used Chernobyl as an example because it scared the public and the industry into making things very safe. It had a lot going for it to make that happen. Radiation is invisible and hurts you by either killing you instantly, making your skin fall off, or giving you cancer and birth defects. The disaster was also extremely expensive, with the total costs on the order of 10^11 USD$.
If a defective AI system manages to do something that instils the same level of fear into researchers and the public as Chernobyl did, I would expect that we were on the cusp of building systems that we couldn't control at all.
If I'm right and the gap between those two events is small, then there's a significant risk that nothing will happen in that window. We'll get plenty of warnings that won't be sufficient to instil the necessary level of caution into the community, and later down the road we'll find ourselves in a situation we can't recover from. VAEs anyway.
This is fair. However, the point of the example is more that mode dropping and bad NLL were not noticed when people started optimizing GANs for image quality. As far as I can tell, it took a while for individuals to notice, longer for it to become common knowledge, and even more time for anyone to do anything about it. Even now, the "solutions" are hacks that don't completely resolve the issue.
There was a large window of time where a practitioner could implement a GAN expecting it to cover all the modes. If there was a world where failing to cover all the modes of the distribution lead to large negative consequences, the failure would probably have gone unnoticed until it was too late.
Here's a real example. This is the NTSB crash report for the Uber autonomous vehicle that killed a pedestrian. Someone should probably do an in depth analysis of the whole thing, but for now I'll draw your attention to section 1.6.2. Hazard Avoidance and Emergency Braking. In it they say:
When the system detects an emergency situation, it initiates action suppression. This is a one-second period alarms—detection of a hazardous situation when none exists—causing the vehicle to engage in unnecessary extreme maneuvers.
[...]
if the collision cannot be avoided with the application of the maximum allowed braking, the system is designed to provide an auditory warning to the vehicle operator while simultaneously initiating gradual vehicle slowdown. In such circumstance, ADS would not apply the maximum braking to only mitigate the collision.
This strikes me as a "random fix" where the core issue was that the system did not have sufficient discriminatory power to tell apart a safe situation from an unsafe situation. Instead of properly solving this problem, the researchers put in a hack.
Suppose that we had extremely compelling evidence that any AI system run with > X amount of compute would definitely kill us all. Do you expect that problem to get swept under the rug?
I agree that we shouldn't be worried about situations where there is a clear threat. But that's not quite the class of failures that I'm worried about. Fairness, bias, and adversarial examples are all closer to what I'm getting at. The general pattern is that ML researchers hack together a system that works, but has some problems they're unaware of. Later, the problems are discovered and the reaction is to hack together a solution. This is pretty much the opposite of the safety mindset EY was talking about. It leaves room for catastrophe in the initial window when the problem goes undetected, and indefinitely afterwards if the hack is insufficient to deal with the issue.
More specifically, I'm worried about a situation where at some point during grad student decent someone says, "That's funny..." then goes on to publish their work. Later, someone else deploys their idea plus 3 orders of magnitude more computing power and we all die. That, or we don't all die. Instead we resolve the issue with a hack. Then a couple bumps in computing power and capabilities later we all die.
The above comes across as both paranoid and farfeched, and I'm not sure the AI community will take on the required level of caution to prevent it unless we get an AI equivalent of Chernobyl before we get UFAI. Nuclear reactor design is the only domain I know of where people are close to sufficiently paranoid.
A likely crux is that I think that the ML community will actually solve the problems, as opposed to applying a bandaid fix that doesn't scale. I don't know why there are different underlying intuitions here. FID were used to assess the quality of the generated images. I might be wrong, but I think it took a while after that for people to realize that SOTA GANs we're getting terrible NNLs compared to SOTA VAEs, even though the VAE's generated images that we're significantly blurrier/noisier. It also became obvious that GANs were dropping modes of the distribution, effectively failing to model entire classes of images.
As far as I can, tell there's been a lot of work to get GANs to model all image modes. The most salient and recent would be DeepMinds PresGAN . Where they clearly show the issue and how PresGAN solves it in Figure 1. However, looking at table 5, there's still a huge gap between in NLL between PresGAN and VAEs. It seems to me that most of the attempt to solve this issue are very similar to "bandaid fixes that don't scale" in the sense that they mostly feel like hacks. None of them really address the gap in likelyhood between VAEs and GANs.
I'm worried that a similar story could happen with AI safety. A problem arises and gets swept under the rug for a bit. Later, it's rediscovered and becomes common knowledge. Then, instead of solving it before moving forward, we see massive increases in capabilities. Simultaneously, the problem is at most addressed with hacks that don't really solve the problem, or solve it just enough to prevent the increase in capabilities from becoming obviously unjustified. wonder if this is a neural network thing, an SGD thing, or a both thing? I would love to see what happens when you swap out SGD for something like HMC, NUTS or ATMC if we're resource constrained. If we still see the same effects then that tells us that this is because of the distribution of functions that neural networks represent, since we're effectively drawing samples from an approximation to the posterior. Otherwise, it would mean that SGD is plays a role.
what exactly are the magical inductive biases of modern ML that make interpolation work so well?
Are you aware of this work and the papers they cite?
From the abstract:
We prove that the binary classifiers of bit strings generated by random wide deep neural networks with ReLU activation function are biased towards simple functions. The simplicity is captured by the following two properties. For any given input bit string, the average Hamming distance of the closest input bit string with a different classification is at least sqrt(n / (2{\pi} log n)), where n is the length of the string. Moreover, if the bits of the initial string are flipped randomly, the average number of flips required to change the classification grows linearly with n. These results are confirmed by numerical experiments on deep neural networks with two hidden layers, and settle the conjecture stating that random deep neural networks are biased towards simple functions. This conjecture was proposed and numerically explored in [Valle Pérez et al., ICLR 2019] to explain the unreasonably good generalization properties of deep learning algorithms. The probability distribution of the functions generated by random deep neural networks is a good choice for the prior probability distribution in the PAC-Bayesian generalization bounds. Our results constitute a fundamental step forward in the characterization of this distribution, therefore contributing to the understanding of the generalization properties of deep learning algorithms.
I would field the hypothesis that large volumes of neural network space are devoted to functions that are similar to functions with low K-complexity, and small volumes of NN-space are devoted to functions that are similar to high K-complexity functions. Leading to a Solomonoff-like prior over functions.
Hypothesis: Unlike the language models before it and ignoring context length issues, GPT-3's primary limitation is that it's output mirrors the distribution it was trained on. Without further intervention, it will write things that are no more coherent than the average person could put together. By conditioning it on output from smart people, GPT-3 can be switched into a mode where it outputs smart text. | https://www.alignmentforum.org/users/factorialcode | CC-MAIN-2022-27 | en | refinedweb |
Assertions are statements to ensure that a particular condition or state holds true at a certain point in the execution of a program. These checks are usually only performed in debug builds, which means that you must ensure that the expressions in the assertions are side-effect free.
Because assertions are only validated in debug builds, you can abuse them to make your code more readable without impacting performance and without having to write a comment. Every time you write a line in which you assume a particular machine state that is not clearly implied by the code immediately preceding the new line, write an assertion.
Let’s look at a real world example derived from Kyua’s code. Consider the following extremely-simplified function which implements the
help command:
def help_command(args): if len(args) == 0: show_general_help() else: show_command_help(args[0])
With this code alone, try to answer these questions: “What happens if
args, which apparently is a subset of the arguments to the program, has more than 1 item? Are the additional arguments ignored and thus we have a bug in the code, or has the
args vector been pre-sanitized by the caller to not have extra arguments?" Well, you can’t answer this question because there is nothing in the code to tell you what the case is.
If I now show you the caller to the function, you can get an idea of what the expectation is:
def main(args): commands = {} commands['help'] = cli.Command(min_args=0, max_args=1, hook=help_command) ... cli.dispatch(commands, args)
Aha! There happens to be an auxiliary library that processes the command line and dispatches calls to the various subcommands based on a declarative interface. This declarative interface specifies what the maximum number of arguments to the command can be, so our function above for
help_command was correct: it was handling all possible lengths of the input
args vector.
But that’s just too much work to figure out a relative simple piece of code. A piece of code needs to be self-explanatory with as little external context as possible. We can do this with assertions.
The first thing you can do is state the precondition to the function as an assertion:
def help_command(args): assert len(args) <= 1 if len(args) == 0: show_general_help() else: show_command_help(args[0])
This does the trick: now, without any external context, you can tell that the
args vector is supposed to be empty or have a single element, and the code below clearly handles both cases.
However, I argue that this is still suboptimal. What is the complementary condition of
len(args) == 0? Easy:
len(args) > 0. Then, if that’s the case, how can the
else path be looking at the first argument only and not the rest? Didn’t someone overlook the rest of the arguments, possibly implying that the input data is not fully validated? This would be a legitimate question if the function was much longer than it is and reading it all was hard. Therefore, we would do this instead:
def help_command(args): if len(args) == 0: show_general_help() else: assert len(args) == 1 show_command_help(args[0])
Or this:
def help_command(args): if len(args) == 0: show_general_help() elif len(args) == 1: show_command_help(args[0]) else: assert False, 'args not properly sanitized by caller'
Both of these alternatives clearly enumerate all branches of a conditional, which makes the function easier to reason about. We will get to this in a future post.
Before concluding, let’s outline some cases in which you should really be writing assertions:
- Preconditions and postconditions.
- Assumptions about state that has been validated elsewhere in the code, possibly far away from the current code.
- Complementary conditions in conditionals where not all possible values of a type are being inspected.
- Unreachable code paths. | https://jmmv.dev/2013/07/readability-abuse-assertions.html | CC-MAIN-2022-27 | en | refinedweb |
) recommendation XML Signature Syntax and Processing. have to share encrypted data or where an application has RSACryptoServiceProvider constructor.
Create a.
This example assumes that a file named
test.xml exists in the same directory as the compiled program. It also assumes that
test.xml contains an XML element that was encrypted using the techniques described in How to: Encrypt XML Elements with Asymmetric Keys.
Imports System Imports System.Xml Imports System.Security.Cryptography Imports System.Security.Cryptography.Xml Module Program Sub Main(ByVal args() As String) ' Create an XmlDocument object. Dim xmlDoc As New XmlDocument() ' Load an XML file into the XmlDocument object. Try xmlDoc.PreserveWhitespace = True xmlDoc.Load("test.xml") Catch e As Exception Console.WriteLine(e.Message) End Try Dim cspParams As New CspParameters() cspParams.KeyContainerName = "XML_ENC_RSA_KEY" ' Get the RSA key from the key container. This key will decrypt ' a symmetric key that was imbedded in the XML document. Dim rsaKey As New RSACryptoServiceProvider(cspParams) Try ' Decrypt the elements. Decrypt(xmlDoc, rsaKey, "rsaKey") ' Save the XML document. xmlDoc.Save("test.xml") ' Display the encrypted XML to the console. Console.WriteLine() Console.WriteLine("Decrypted XML:") Console.WriteLine() Console.WriteLine(xmlDoc.OuterXml) Catch e As Exception Console.WriteLine(e.Message) Finally ' Clear the RSA key. rsaKey.Clear() End Try Console.ReadLine() End Sub Sub Decrypt(ByVal Doc As XmlDocument, ByVal Alg As RSA, ByVal KeyName As String) ' Check the arguments. If Doc Is Nothing Then Throw New ArgumentNullException("Doc") End If If Alg Is Nothing Then Throw New ArgumentNullException("Alg") End If If KeyName Is Nothing Then Throw New ArgumentNullException("KeyName") End If ' Create a new EncryptedXml object. Dim exml As New EncryptedXml(Doc) ' Add a key-name mapping. ' This method can only decrypt documents ' that present the specified key name. exml.AddKeyNameMapping(KeyName, Alg) ' Decrypt the element. exml.DecryptDocument() End Sub End Module
To compile this example, you need to include a reference to
System.Security.dll.
Include the following namespaces: System.Xml, System.Security.Cryptography, and System.Security.Cryptography.Xml. by using Ildasm.exe (IL Disassembler).
System.Security.Cryptography.Xml
How to: Encrypt XML Elements with Asymmetric Keys | https://msdn.microsoft.com/en-us/library/ms229919.aspx?cs-save-lang=1&cs-lang=vb | CC-MAIN-2017-17 | en | refinedweb |
Hey all. this is my first day python programming (been programming java for almost a year tho) and I was trying to put together the classic "shout" method. so far it is like this:
import time def shout(string): for c in string: print("Gimme a " + c) print(c + "!") time.sleep(1) print("\nWhat's that spell... " + string + "!") shout("COUGARS")
Basically you can see what it does... Spells out the word in cheerleader fashion. Anyway the thing I was wondering is if I could get the program to pause for a second in between the "What does that spell..." and the string. so it would go like this:
What does that spell... <one second pause> COUGARS!
Any help? Seems trivial but I haven't been able to figure it out :confused: | https://www.daniweb.com/programming/software-development/threads/208921/pause-and-print-same-line | CC-MAIN-2017-17 | en | refinedweb |
Views¶
In the MVC paradigm the view manages the presentation of the model.
The view is the interface the user sees and interacts with. For Web applications, this has historically been an HTML interface. HTML remains the dominant interface for Web apps but new view options are rapidly appearing.
These include Macromedia Flash, JSON and views expressed in alternate markup languages like XHTML, XML/XSL, WML, and Web services. It is becoming increasingly common for web apps to provide specialised views in the form of a REST API that allows programmatic read/write access to the data model.
More complex APIs are quite readily implemented via SOAP services, yet another type of view on to the data model.
The growing adoption of RDF, the graph-based representation scheme that underpins the Semantic Web, brings a perspective that is strongly weighted towards machine-readability.
Handling all of these interfaces in an application is becoming increasingly challenging. One big advantage of MVC is that it makes it easier to create these interfaces and develop a web app that supports many different views and thereby provides a broad range of services.
Typically, no significant processing occurs in the view; it serves only as a means of outputting data and allowing the user (or the application) to act on that data, irrespective of whether it is an online store or an employee list.
Templates¶
Template rendering engines are a popular choice for handling the task of view presentation.
To return a processed template, it must be rendered and returned by the controller:
from helloworld.lib.base import BaseController, render class HelloController(BaseController): def sample(self): return render('/sample.mako')
Using the default Mako template engine, this will cause Mako to look in the
helloworld/templates directory (assuming the project is called ‘helloworld’) for a template filed called
sample.mako.
The
render() function used here is actually an alias defined in your projects’
base.py for Pylons’
render_mako() function.
Passing Variables to Templates¶
To pass objects to templates, the standard Pylons method is to attach them to the tmpl_context (aliased as c in controllers and templates, by default) object in the Controllers:
import logging from pylons import request, response, session, tmpl_context as c, url from pylons.controllers.util import abort, redirect from helloworld.lib.base import BaseController, render log = logging.getLogger(__name__) class HelloController(BaseController): def index(self): c.name = "Fred Smith" return render('/sample.mako')
Using the variable in the template:
Hi there ${c.name}!
Strict vs Attribute-Safe tmpl_context objects¶
The tmpl_context object is created at the beginning of every request, and by default is an instance of the
AttribSafeContextObj class, which is an Attribute-Safe object. This means that accessing attributes on it that do not exist will return an empty string instead of raising an
AttributeError error.
This can be convenient for use in templates since it can act as a default:
Hi there ${c.name}
That will work when c.name has not been set, and is a bit shorter than what would be needed with the strict
ContextObj context object.
Switching to the strict version of the tmpl_context object can be done in the
config/environment.py by adding (after the config.init_app):
config['pylons.strict_c'] = True
Default Template Variables¶
By default, all templates have a set of variables present in them to make it easier to get to common objects. The full list of available names present in the templates global scope:
- c – Template context object (Alias for tmpl_context)
- tmpl_context – Template context object
config– Pylons
PylonsConfigobject (acts as a dict)
- g – Project application globals object (Alias for app_globals)
- app_globals – Project application globals object
- h – Project helpers module reference
request– Pylons
Requestobject for this request
response– Pylons
Responseobject for this request
session– Pylons session object (unless Sessions are removed)
translator– Gettext translator object configured for current locale
ungettext()– Unicode capable version of gettext’s ngettext function (handles plural translations)
_()– Unicode capable gettext translate function
N_()– gettext no-op function to mark a string for translation, but doesn’t actually translate
url– An instance of the
routes.util.URLGeneratorconfigured for this request.
Configuring Template Engines¶
A new Pylons project comes with the template engine setup inside the projects’
config/environment.py file. This section creates the Mako template lookup object and attaches it to the app_globals object, for use by the template rendering function.
# these imports are at the top from mako.lookup import TemplateLookup from pylons.error import handle_mako_error # this section is inside the load_environment function # Create the Mako TemplateLookup, with the default auto-escaping config['pylons.app_globals'].mako_lookup = TemplateLookup( directories=paths['templates'], error_handler=handle_mako_error, module_directory=os.path.join(app_conf['cache_dir'], 'templates'), input_encoding='utf-8', default_filters=['escape'], imports=['from webhelpers.html import escape'])
Using Multiple Template Engines¶
Since template engines are configured in the
config/environment.py section, then used by render functions, it’s trivial to setup additional template engines, or even differently configured versions of a single template engine. However, custom render functions will frequently be needed to utilize the additional template engine objects.
Example of additional Mako template loader for a different templates directory for admins, which falls back to the normal templates directory:
# Add the additional path for the admin template paths = dict(root=root, controllers=os.path.join(root, 'controllers'), static_files=os.path.join(root, 'public'), templates=[os.path.join(root, 'templates')], admintemplates=[os.path.join(root, 'admintemplates'), os.path.join(root, 'templates')]) config['pylons.app_globals'].mako_admin_lookup = TemplateLookup( directories=paths['admin_templates'], error_handler=handle_mako_error, module_directory=os.path.join(app_conf['cache_dir'], 'admintemplates'), input_encoding='utf-8', default_filters=['escape'], imports=['from webhelpers.html import escape'])
That adds the additional template lookup instance, next a custom render function is needed that utilizes it:
from pylons.templating import cached_template, pylons_globals def render_mako_admin(template_name, extra_vars=None, cache_key=None, cache_type=None, cache_expire=None): # Create a render callable for the cache function def render_template(): # Pull in extra vars if needed globs = extra_vars or {} # Second, get the globals globs.update(pylons_globals()) # Grab a template reference template = globs['app_globals'].mako_admin_lookup.get_template(template_name) return template.render(**globs) return cached_template(template_name, render_template, cache_key=cache_key, cache_type=cache_type, cache_expire=cache_expire)
The only change from the
render_mako() function that comes with Pylons is to use the mako_admin_lookup rather than the mako_lookup that is used by default.
Custom
render() functions¶
Writing custom render functions can be used to access specific features in a template engine, such as Genshi, that go beyond the default
render_genshi() functionality or to add support for additional template engines.
Two helper functions for use with the render function are provided to make it easier to include the common Pylons globals that are useful in a template in addition to enabling easy use of cache capabilities. The
pylons_globals() and
cached_template() functions can be used if desired.
Generally, the custom render function should reside in the project’s
lib/ directory, probably in
base.py.
Here’s a sample Genshi render function as it would look in a project’s
lib/base.py that doesn’t fully render the result to a string, and
rather than use
c assumes that a dict is passed in to be used
in the templates global namespace. It also returns a Genshi stream
instead the rendered string.
from pylons.templating import pylons_globals def render(template_name, tmpl_vars): # First, get the globals globs = pylons_globals() # Update the passed in vars with the globals tmpl_vars.update(globs) # Grab a template reference template = globs['app_globals'].genshi_loader.load(template_name) # Render the template return template.generate(**tmpl_vars)
Using the
pylons_globals() function also makes it easy to get to the app_globals object which is where the template engine was attached in
config/environment.py.
Changed in version 0.9.7: Prior to 0.9.7, all templating was handled through a layer called ‘Buffet’. This layer frequently made customization of the template engine difficult as any customization required additional plugin modules being installed. Pylons 0.9.7 now deprecates use of the Buffet plug-in layer.
See also
pylons.templating - Pylons templating API
Templating with Mako¶
Introduction¶
The template library deals with the view, presenting the model. It generates (X)HTML code, CSS and Javascript that is sent to the browser. (In the examples for this section, the project root is ``myapp``.)
Making a template hierarchy¶
Create a base template¶
In myapp/templates create a file named base.mako and edit it to appear as follows:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html> <head> ${self.head_tags()} </head> <body> ${self.body()} </body> </html>
A base template such as the very basic one above can be used for all pages rendered by Mako. This is useful for giving a consistent look to the application.
- Expressions wrapped in ${...} are evaluated by Mako and returned as text
- ${ and } may span several lines but the closing brace should not be on a line by itself (or Mako throws an error)
- Functions that are part of the self namespace are defined in the Mako templates
Create child templates¶
Create another file in myapp/templates called my_action.mako and edit it to appear as follows:
<%inherit <%def <!-- add some head tags here --> </%def> <h1>My Controller</h1> <p>Lorem ipsum dolor ...</p>
This file define the functions called by base.mako.
- The inherit tag specifies a parent file to pass program flow to
- Mako defines functions with <%def name=”function_name()”>...</%def>, the contents of the tag are returned
- Anything left after the Mako tags are parsed out is automatically put into the body() function
A consistent feel to an application can be more readily achieved if all application pages refer back to single file (in this case base.mako)..
Check that it works¶
In the controller action, use the following as a return() value,
return render('/my_action.mako')
Now run the action, usually by visiting something like in a browser. Selecting ‘View Source’ in the browser should reveal the following output:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html> <head> <!-- add some head tags here --> </head> <body> <h1>My Controller</h1> <p>Lorem ipsum dolor ...</p> </body> </html>
See also
- The Mako documentation
- Reasonably straightforward to follow
- See the Internationalization and Localization
- Provides more help on making your application more worldly. | http://docs.pylonsproject.org/projects/pylons-webframework/en/latest/views.html | CC-MAIN-2017-17 | en | refinedweb |
network-builder
Linux NetworkNameSpace Builder
This package is not currently in any snapshots. If you're interested in using it, we recommend adding it to Stackage Nightly. Doing so will make builds more reliable, and allow stackage.org to host generated Haddocks.
network-builder : Linux Network NameSpace Builder for test
network-builder makes network using Linux Network NameSpaces and tunnels.
Getting started
Install this from Hackage.
cabal update && cabal install network-builder
Usage
When you create network, put network-builder.yml on current directory. The yaml format is below.
nss: - - ip: 192.168.10.1/24 name: br1 - - - ip: 192.168.10.2/24 name: veth-2 - name: server2 nss: - - ip: 192.168.11.1/24 name: br1 - - - ip: 192.168.11.4/24 name: veth-3 - name: server3 - - ip: 192.168.10.3/24 name: veth-4 - name: server4 nss: - - ip: 192.168.12.1/24 name: br1 - - - ip: 192.168.12.4/24 name: veth-5 - name: server5
When you create tunnel for server2 of namespace put yaml file(just example) below.
- name: server2 - tag: gretunnel Name: gre2 LocalIp: 192.168.10.2 RemoteIp: 192.168.10.3 RemoteNetwork: 192.168.12.0/24 GreDeviceIp: 192.168.11.254/24
Commands
create network
network-builder create
destroy network
network-builder destroy
create tunnel
network-builder create-tunnel "yaml-file"
destroy tunnel
network-builder destroy-tunnel "yaml-file"
Changes
0.1.0
- First Release
Depends on:
Used by 1 package: | https://www.stackage.org/package/network-builder | CC-MAIN-2017-17 | en | refinedweb |
This is the mail archive of the [email protected] mailing list for the glibc project.
On Mon, Jan 07, 2013 at 05:05:34PM -0800, Roland McGrath wrote: > Paul's points are valid as a generic thing. But they aren't the key > points for considering changes to libc. > > The entire discussion about maximum usable size is unnecessary > fritter. We already have __libc_use_alloca, alloca_account, > etc. (include/alloca.h) to govern that decision for libc code. > If people want to discuss changing that logic, they can do that > separately. But we will certainly not have more than one separate > implementation of such logic in libc. > > Extending that internal API in some fashion to make it less work to > use would certainly be welcome. That would have to be done in some > way that doesn't add overhead when existing uses of __libc_use_alloca > are converted to the new interface. The simplest way to do that would > be a macro interface that stores to a local bool, which is what the > users of __libc_use_alloca mostly do now. It would be nice to have an > interface that is completely trivial to use like malloca is, but for > code inside libc that ideal is less important than making sure we do > not degrade the performance (including code size) of any of the > existing uses. I wrote a possible new interface below. I added __libc_use_alloca that tests if current stack frame will becomes larger than __MAX_ALLOCA_CUTOFF. However needs change of nptl __libc_use_alloca. This makes alloca_account be counted twice so I aliased it with alloca to be counted only once. This could be more effective than current state as we do not need to track counters. (modulo details like that on x64 stackinfo_get_sp definition causes %rsp be unnecessary copied into %rax.) > > There are a few existing uses of alloca that use their own ad hoc code > instead of __libc_use_alloca (misc/err.c, sunrpc/auth_unix.c, maybe > others). Those should be converted to use __libc_use_alloca or > whatever nicer interface is figured out. > > Then there are the existing uses of alloca that don't use > __libc_use_alloca at all, such as argp/argp-help.c. Those should > probably be converted as well, though in some situations like the argp > ones it's a bit hard to imagine their really being used with sizes > large enough to matter. One technical issue is if we want to use STACKINFO_BP_DEF. It would make getting base pointer more portable but must be added to each function that uses __libc_use_alloca. /* TODO: switch to later case when __builtin_frame_address don't work. */ #if 1 #define STACKINFO_BP_DEF #define stackinfo_get_bp() __builtin_frame_address(0) #else #define STACKINFO_BP_DEF void *__stackinfo_bp = &__stackinfo_bp; #define stackinfo_get_bp() __stackinfo_bp #endif #ifdef _STACK_GROWS_DOWN #define __STACKINFO_UB stackinfo_get_bp () #define __STACKINFO_LB stackinfo_get_sp () #endif #ifdef _STACK_GROWS_UP #define __STACKINFO_UB stackinfo_get_sp () #define __STACKINFO_LB stackinfo_get_bp () #endif Then alloca can use following #define __libc_use_alloca(x) \ ( __STACKINFO_UB - __STACKINFO_LB + (x) <= __MAX_ALLOCA_CUTOFF ) #define alloca_account(n, var) alloca(n) #define extend_alloca_account(buf, len, newlen, avar) \ extend_alloca (buf, len, newlen, avar) And here is new version of malloca. /* Safe automatic memory allocation. Copyright (C) 2012 _MALLOCA_H #define _MALLOCA_H #ifdef HAVE_ALLOCA_H #include <alloca.h> #include <stdlib.h> #ifdef __cplusplus extern "C" { #endif /* malloca(N) is a safe variant of alloca(N). It allocates N bytes of memory allocated on the stack until stack frame has __MAX_ALLOCA_CUTOFF bytes and heap otherwise. It must be freed using freea() before the function returns. */ #define malloca(n) ({ \ size_t __n__ = (n); \ void * __r__ = NULL; \ if (__libc_use_alloca (__n__)) \ { \ __r__ = alloca (__n__); \ } \ else \ { \ __r__ = malloc (__n__); \ } \ __r__; \ }) /* Maybe it is faster to use unsigned comparison such as __r - __STACKINFO_LB <= __STACKINFO_UB -__STACKINFO_LB */ #define freea(r) do { \ void *__r = (r); \ if ( __r && !( __STACKINFO_LB <= __r & \ __r <= __STACKINFO_UB )) \ free (__r); \ } while (0) #ifdef __cplusplus } #endif #else #define malloca(x) malloc (x) #define freea(x) free (x) #endif #endif /* _MALLOCA_H */ | http://cygwin.com/ml/libc-alpha/2013-01/msg00266.html | CC-MAIN-2017-17 | en | refinedweb |
Appropriate and Beneficial Education The third principle of IDEA ensures an appropriate and beneficial education. contiguity The quality of two events occurring relatively closely in time. Washing Machine Speed Queen Company. Honigfeld G. For a one-dimensional linear element, the shape functions can be written using Equation 3. These two effects together compress the ringing phenomenon. Listening to the teacher read good childrens literature stimulates linguistic development (vocabulary meaning, syntactic awareness, and discourse under- standing) and cognitive development, and this in turn benefits reading development.
Double Affiliation is with both fathers patrilineal kin and mothers matrilineal kin. 33) (4. Reflection from a One-Layer (Half-Space) Medium The reflection coefficients are given by for horizontal Rh polarization and RV- Ptk oz- P0kt z ptkoz PO Etkoz - Eoktz do EoJCtz (5.
However, while it is true that both storage binary options auto bot transmission capacities are steadily increasing with new technological innovations, as a corollary to Parkinsons First Law,I it seems that the need for mass storage and transmission increases at least twice as fast as storage and transmission capacities improve.
All rights reserved. Xiao. Humor and Well-Being 239 Page 1088 s0020 4.19(4), 966-974. Π2 Thus, if λDbπ2 1 the cross section σ will significantly exceed σlarge. Refrigerated shakerincubator. In the hands of movement leaders, K. The name of the method is best binary options courses by name. Unfortunately, even honest answers to direct ques- tions may not be enough because many people are un- aware of the risks associated with some behaviors best binary options courses of their own disease status.
(1960).racioethnicity, gender) in making selection, promotion, or termination decisions. It seems reasonable to use this method again in order to investigate waves from the Vlasov point of view. Born, M. Moreover, people who live on the fifth floor or below complained more because they felt the indoor wind velocity was low.
s impairment is worse with the more complex figures.1997) of lingering effects of unopposed estrogen after administration binary options 60 second signals ceased, most carcinogenic effects of estrogens to the endometrium can be prevented by the addition of a progestin (Barrett-Connor, 1992). Table 15-9. Dopamine Chlorpromazine D2 receptor Antagonist Figure 6. Along with the summaries, each entry in Best binary options courses contains the publi- cation date for the test, information on how to contact the publisher, and cost information for purchasing the test.
The refractory periods place a limit on how frequently action potentials can oc- cur. All rights reserved. Page 283 2. 8 2. Managers, supervisors. This result led Kimura to propose that, suggesting some generalization of both stimulus and response. Burnout. Positive uncertainty A new decision- making framework for counseling. In 1913, Ringelmann performed an experiment that consisted of asking volunteers to pull on a rope as hard as possible, and handling characteristics.
A classification of diseases and death causes was approved in 1855 during the Second International Binary options trading strategy review of Statistics in Paris. Thetotalosmotic pressure is above that of blood. TheSecretLifeofSchool Supplies.Milan, C.and A.
INTRODUCTION Organizational socialization refers to the learning of what it is to be an organizational insider. Even so, it should be avoided completely by women who are preg- nant.
Galerkin method This is one of the most important methods used best binary options courses finite element analysis.
) returned to the hospital years later because of complications. Food and Best binary options courses Administration for use in the United States. In addition, this literature has valued studies that address practice binary option brokers in nigeria associated with measure- ment, program evaluation, specific techniques and their application, and studies of psychosocial adjust- ment and vocational outcomes of clients with specific disability conditions.
Best binary options courses York Harper Best binary options courses. 5678. 067. (After Goldstein et al. Preventing and remediating reading difficulties Bringing science to scale. In ideomotor apraxia, patients are unable to copy movements or to make gestures (for best binary options courses, to wave best binary options courses. 41 -1. Due to the twin factors best binary options courses increased life expectancy and reduced birth rates, most nations throughout the world are experiencing a dramatic shift toward an older adult population.
Cancer Inst. Seed(time(0)); To extract a random value from the generator, is in a new place where the environmental cues are unfamiliar, or is in a place where the visual cues often change. Page 477 a0005 Cooperation at Work Eduardo Salas, Dana E. Recent binaryoptiontradingschool com indicates that bipolar disorder may be more common in children and adolescents than was previously believed.
Tenenbaum, G. Methodological and conceptual problems complicate the investigation of psychosocial functioning in family intervention studies. Vidulich, S. They aimed to teach Best binary options courses ASL hand movements, or signs, partially cleaved, and secreted in association with the immu- noglobulin. Ind. sacrifices A component of job best binary options brokers reviews that refers to material or psychological benefits that an incumbent for- feits when he or she vacates the job.
Integration 131 y C C C C O hole D best binary options courses FIGURE 3. A single neuron may use titan trade binary options transmitter at one synapse and a different transmitter at another synapse, as David Sulzer and his coworkers have shown.
Appropriate control studies are needed to evaluate the specific efficacy of aden- ovtral mediated rtbozymes, such as anttsense. In M.similarly. 2, pp. Figure 6. Res. Thissignalistransferredto theelectroniccircuitboardassembly,where it is converted by the data binary options live trends system intoaudiosignalsforplayback.
(1997) Managing scarcity a worked example using burden and efficacy. could still reproduce the table, reciting the columns in any order or combination, without error. Thus, one might associate certain attributes of customers with sales.
8) binaryoptionsaffiliates info (9. The snr-scalable, spatially scalable. The effectiveness and rapidity of capping, most work best binary options courses been cross-sectional rather than longitudinal. At this point, the business may make financial contribu- best binary options courses, offer training opportunities andor paid jobs to select students, contribute materials and equipment that may be business related (e.
21 is the reconstruction obtained from a compressed representation that used 0.1143354354, 1983. Image theory has become very popular binary options review forum management and business schools. Next the subjects re- best binary options courses the prism glasses and threw a few more darts.Twigg J.
Performance in diverse teams is a function of time.125351363, 1987. 29 binary options touch shows the sketch of the staggered arrangement considered (Figure 9. Bond (Ed. I(H) 3 bits, i(l) 0. Obesity and cancer risk a Danish record- linkage study. Because peers play different roles in the bully pro- cess, J. Meditation involves relaxation techniques such as deep breathing but adds an element of mental focus, P, Rosst, J J.
51 21 32, the foundation of reversal theory binary options trading signals opinioni an athletes interpretation of arousal rather than itm financial binary options signals high or low the energizing state is.and Tsichlis, P.
(A) Side view of the human brain illustrating the measuring best binary options courses on the Best binary options courses fissure.
The following subsections give a brief account of commonly employed methods that deal with transient heat conduction during a phase change. Paris Angot, 1664. Drenth Binary option signals franco University, Amsterdam, Tulchinsky, E. Negative symptoms are marked not by any par- ticular behavior but rather by the absence of a behavior or by the inability to engage in an activity.
As a result, the visual input is split binary option trading signals two, and so input from the right side of the world as seen by both eyes goes to the left hemisphere and input from the left side of the world as seen by both eyes goes to the right hemisphere. Specifically, V.
Avellini, W. Bellamy, W. Thwas was a test. Surface instabilities can exist only if the surface can move, Theory of plasma oscillations. Length; i) { fr new FileReader(argsi); wc(fr); } } } catch (IOException e) { return; } System. Variables within a specific domain (e. Wagner, to use an iterator to cycle through the contents of a collection, follow these steps 1. (a) Using the Cauchy integral formula, show that f(a)f(b) ab f(z) for f(z) analytic in C, |a| R, |b| R. Findings from lesion studies suggest that the premotor cortex and the pri- mary motor cortex each have a movement lexicon and that the lexicon of the premotor cortex is more complex than that of the motor cortex.
Rather, 1996. Raw sequence from the sample data sets, C. 66947951, F-CH22-O-CH23-C02Et, by warming it with absolute alcohol and dry best binary options courses oxide. In contrast, procedural knowl- binary option brokers in usa and strategic knowledge require both a measure of memory retention and a measure of transfer.
; public class ShowFonts extends Applet { public void paint(Graphics g) { String msg ""; String FontList; GraphicsEnvironment ge GraphicsEnvironment. An interesting implication of the discovery of so many cortical maps is that little cortex is left over for the more-complex cognitive functions in Flechsigs hierarchy.
5 Alternative notations and formulations 2. In other words, the boundary values are {O, 128, 255}, including use of the dominant language. Chromosomal Alterations As noted in Chapter 6, chromosomal alterations are extremely common if not ubiquitous in all malignant neoplasms, as was originally suggested by Boveri Binary trading strategies pdf. s2 ω2 1.height and attractiveness).
write("add " other CRLF); } idcon. The properties of these and related compounds are now being 4 systematically studied.making up evidence) may backfire. Old, rela- tives, or persons who are already employed in an organization.
Calle, E. Clin. In no case are they both set to zero. Technically speaking, we cannot define the monopolistically com- petitive industry because each firm produces a somewhat different prod- uct. This is necessary to allow the file to be read and written. println("Implement meth3().Schoenherr, Best binary options courses. The mold best binary options courses equippedwithtinyholes,through whichtheairinthechamberis drawnoutjustbeforetheneoprene enters.
In this connection rlbozymes having the same potential structure as the synthetic ones described m this chapter can be m vwo synthesized by simply using a czs-acting rlbozyme to trim the 5 end andor the 3 end of a tram-acting nbozyme(s) Page 277 Preformed Ribozymes 277 Acknowledgments We thank David Fraser for critical reading of the manuscript.
Binary options trading profit, your program can spawn as many threads as it needs.
The mentor provides empathetic listening and serves as a confidant or as someone who provides advice and encouragement.withdrawal, anxiety, somatic complaints, obsessions). (1980). The desired URL is then constructed. ) After a brief rest period, the original test pulse was presented again, and this time the magnitude of the response (that is, the EPSP) was greater than before.
We found that it could be isolated from the reaction product formed by adding ethyl alcohol (1 mol. Benjamin New York. In file included from alphabeta. Psychologists also have given attention to the application of leisure psychology to counseling and recreation resource management. I suggest that the major area to concentrate on is savings in indirect costs, that is, lost income, disability transfer payments, costs associated with courts and incarceration, and family burden, which may be anticipated based upon the superior efficacy and tolerability of the atypical antipsychotic drugs.
These indi- viduals stress external standards for their behavior others have to tell us what to best binary options courses. For example, according to both 142 Anxiety and Optimal Athletic Performance 1. The growing use of robots and computers to complete repetitive and monotonous tasks could potentially reduce boredom at work. 24 Page 90 74 3 H U F F M A N C O D IN G TABLE 3.
JOB EVALUATION There are many different systems of job evaluation in use.and P. Nuclear power generation), pro- mises, or confidence-building measures seem to be most readily available.
Why do we need another one.regarding vaccination and compulsory mili- tary service) interpreted as infringement of individual freedom, the number of individual actions reported as unacceptable (e.
Food, shelter), etc. Brunswick. However, there is some consistency of prosocial tendencies over time. 13 The perceptions derived from the body senses depend on different receptors located variously in skin, muscles, joints, and tendons. The use of measures of cognitive ability is a particularly sensitive issue. L(eat coshωt) sa (sa)2 ω2 L(eat sinh ωt) ω (sa)2 ω2 Page 42 L1 s2 4s1 L1 L1 s 2 1.binary option trading in usa reds vs the greens, the dogs vs the horses).
Pathol. Best binary options courses, Roberts, B. Next we define the operators. ) coming progressively shorter during successive cell divisions, as such a process ultimately leads to a loss of viability.
Theengineisfirstplaced inventionisthesolar-poweredlawnmower, 68 179 190.Wambach, G. The rate at which a signal is sampled is governed by the highest frequency component of a signal. A simplestraight-edge servesasaback-gauge. Aizawa and T. Such research, informed by cultural and cross- cultural perspectives, opens up new vistas in the devel- opment of the self that can shed light on the interface of culture, parenting, and the individual through time.
The entries with the same subscripts in Equations C. Best binary options courses, 351550554, 1998. Latent carcinoma of thyroid in Israel best binary options courses study of 260 autopsies. Java. Its meaning depends on the context in which it best binary options courses used. Wash the CTLL cells four times m RPM1 1640 plus 10 FCS prior to the assay, these services should include the students family, should begin early (some advocate as early as elementary school), should be sensitive to familial and cultural fac- tors, and must be comprehensive.
Arch. Instead, the JIT compiles code as it is needed, during execution. The first time the procedure is called, it creates a table of Fibonacci numbers holding Fn through F40. out. Malabar, Fla. 1001 .Binary options wiki | http://newtimepromo.ru/best-binary-options-courses-3.html | CC-MAIN-2017-17 | en | refinedweb |
Note
Requirements
This example runs OpenMDAO in parallel which requires petsc4py and mpi4py. You must have these packages installed in order to proceed. To get these packages set up on Linux, see MPI on Linux. To get these packages set up on Windows, see MPI on Windows.
Distributed Components¶
OpenMDAO can work with components that are actually distributed themselves. This is useful for dealing with complex tools, like PDE solver (CFD or FEA). But it can also be used to speed up any calculations you’re implementing yourself directly in OpenMDAO using our MPI-based parallel data passing.
Why should you use OpenMDAO to build your own distributed components? Because OpenMDAO lets you build distributed components without writing any significant MPI code yourself. Here is a simple example where we break up the job of adding a value to a large float array (1,000,000 elements).
from __future__ import print_function import numpy as np from six.moves import range from openmdao.api import Component from openmdao.util.array_util import evenly_distrib_idxs class DistributedAdder(Component): """ Distributes the work of adding 10 to every item in the param vector """ def __init__(self, size=100): super(DistributedAdder, self).__init__() self.local_size = self.size = int(size) #NOTE: we declare the variables at full size so that the component will work in serial too self.add_param('x', shape=size) self.add_output('y', shape=size) def get_req_procs(self): """ min/max number of procs that this component can use """ return (1,self.size) def setup_distrib(self): """ specify the local sizes of the variables and which specific indices this specific distributed component will handle. Indices do NOT need to be sequential or contiguous! """ comm = self.comm rank = comm.rank # NOTE: evenly_distrib_idxs is a helper function to split the array # up as evenly as possible sizes, offsets = evenly_distrib_idxs(comm.size, self.size) local_size, local_offset = sizes[rank], offsets[rank] self.local_size = int(local_size) start = local_offset end = local_offset + local_size self.set_var_indices('x', val=np.zeros(local_size, float), src_indices=np.arange(start, end, dtype=int)) self.set_var_indices('y', val=np.zeros(local_size, float), src_indices=np.arange(start, end, dtype=int)) def solve_nonlinear(self, params, unknowns, resids): #NOTE: Each process will get just its local part of the vector print('process {0:d}: {1}'.format(self.comm.rank, params['x'].shape)) unknowns['y'] = params['x'] + 10 class Summer(Component): """ Agreggation component that collects all the values from the distributed vector addition and computes a total """ def __init__(self, size=100): super(Summer, self).__init__() #NOTE: this component depends on the full y array, so OpenMDAO # will automatically gather all the values for it self.add_param('y', val=np.zeros(size)) self.add_output('sum', shape=1) def solve_nonlinear(self, params, unknowns, resids): unknowns['sum'] = np.sum(params['y'])
The distributed component magic happens in the setup_distrib method of the DistributedAdder class. This is where we tell the framework how to split up the the big array into smaller chunks handled separately by each distributed process. In this case, we just split the array up one chuck at a time in order as we go from process to process. But OpenMDAO does not require that the src_indices be ordered or sequential!
Note
Only the DistributedAdder class is a distributed component. The Summer is class is a normal component that aggregates the whole array to sum it up.
Next we’ll use these components to build an actual distributed model:
import time from openmdao.api import Problem, Group, IndepVarComp from openmdao.core.mpi_wrap import MPI if MPI: # if you called this script with 'mpirun', then use the petsc data passing from openmdao.core.petsc_impl import PetscImpl as impl else: # if you didn't use `mpirun`, then use the numpy data passing from openmdao.api import BasicImpl as impl #how many items in the array size = 1000000 prob = Problem(impl=impl) prob.root = Group() prob.root.add('des_vars', IndepVarComp('x', np.ones(size)), promotes=['x']) prob.root.add('plus', DistributedAdder(size), promotes=['x', 'y']) prob.root.add('summer', Summer(size), promotes=['y', 'sum']) prob.setup(check=False) prob['x'] = np.ones(size) st = time.time() prob.run() #only print from the rank 0 process if prob.root.comm.rank == 0: print("run time:", time.time() - st) #expected answer is 11 print("answer: ", prob['sum']/size)
You can run this model in either serial or parallel, depending on how you call the script. Lets say you put the above code into a python script called dist_adder.py. Then to run it in serial you would call it just like any other python script:
python dist_adder.py
In that case, you’ll expect to see some output that looks like this:
process 0: (30000000,) run time: 1.76785802841 answer: 11.0
To run the model in parallel you need to have an MPI library (e.g. OpenMPI), mpi4py, PETSc, and petsc4py installed. Then you can call the script like this:
mpirun -n 2 python dist_adder.py
And you can expect to see some output as follows:
process 0: (15000000,) process 1: (15000000,) run time: 1.00080680847 answer: 11.0
With two processes running, you get a decent speed up. You can see that each process took half the array. Why don’t we get a full 2x speedup? Two reasons. The first, and more significant factor is that we don’t have a fully parallel model. The DistributedAdder component is distributed, but the Summer component is not. This introduces a bottleneck because we have to wait for the serial operation to complete. | http://openmdao.readthedocs.io/en/latest/usr-guide/examples/distrib_adder.html | CC-MAIN-2017-17 | en | refinedweb |
Lack of Encapsulation in Addons
I first noticed a lack of good design in addon code when I started trying to tweak existing addons to be slightly different.
One of the stand out examples was a Threat Meter (you know which one I mean). It works well, but I felt like writing my own, to make it really fit into my UI, with as little overhead as possible. Not knowing how to even begin writing a Threat Meter, I downloaded a copy, and opened its source directory... to discover that the entire addon is one 3500+ line file, and 16 Ace.* dependencies.
When I had finished my Threat Meter, I had two files (170 lines and 130 lines), and one dependency (Dark.Core, which all my addons use). I learnt a lot while reading the source for the original threat meter - it is very customisable, is externally skinable, and has some very good optimisations in it. But it also has a lot of unused variables (which are named very similarly to used ones), and so much of it's code could be separated out, making it easier to modify by newer project members.
This set of observations goes on forever when concerning addons. The three main problems I see are:
- Pollution of the global namespace
- All code in one file
- No separation of concerns
All of this makes it harder for new developers to pick up and learn how to maintain and write addons. They are all fairly straight forward to solve problems, so lets address them!
Pollution of the Global Namespace
A lot of addons you find declare many variables as global so they can access them anywhere within their addon. For example, this is pretty standard:
MyAddonEvents = CreateFrame("Frame", "MyAddonEventFrame") MyAddonEvents:RegisterEvent("PLAYER_ENTERING_WORLD") MyAddonEvents:SetScript("OnEvent", MyAddonEventHandler) MyAddonEventHandler = function(self, event, ...) if event == "PLAYER_ENTERING_WORLD" then --do something useful end end
This is an example of poluting the global namespace, as now the entire UI has access to:
MyAddonEvents,
MyAddonEventFrame,
MyAddonEventHandler. This is very trivial to rewrite to not expose anything to the global namespace:
local events = CreateFrame("Frame") local handler = function(self, event, ...) if event == "PLAYER_ENTERING_WORLD" then --do something useful end end events:RegisterEvent("PLAYER_ENTERING_WORLD") events:SetScript("OnEvent", handler)
This version exposes nothing to the global namespace, and performs exactly the same function (you can even get rid of the
handler variable and just pass the function directly into
SetScript).
However, by writing your code like this, you can't access any of this from another file (either a lua file, or shudder a frameXml file), but using namespaces we can get around this limitation without polluting the global namespace.
Splitting into Separate Files
So, how to access local variables in other files? Well Warcraft addons come with a feature where all lua files are provided with two arguments:
addon and
ns. The first of these is a string of the addon name, and the second is an empty table. I almost never use the
addon parameter, but the
ns (or "namespace") parameter is key to everything.
You can access these two variables by writing this as the first line of your lua file:
local addon, ns = ... print("Hello from, " .. addon)
By using the
ns, we can put our own variables into it to access from other files. For example, we have an event system in one file:
eventSystem.lua
local addon, ns = ... local events = CreateFrame("Frame") local handlers = {} events:SetScript("OnEvent", function(self, event, ...) local eventHandlers = handlers[event] or {} for i, handler in ipairs(eventHandlers) do handler(event, ...) end end) ns.register = function(event, handler) handlers[event] = handlers[event] or {} table.insert(handlers[event], handler) events:RegisterEvent(event) end
Note how the
register function is defined on the
ns. This means that any other file in our addon can do this to handle an event:
goldPrinter.lua
local addon, ns = ... ns.register("PLAYER_MONEY", function() local gold = floor(money / (COPPER_PER_SILVER * SILVER_PER_GOLD)) local silver = floor((money - (gold * COPPER_PER_SILVER * SILVER_PER_GOLD)) / COPPER_PER_SILVER) local copper = mod(money, COPPER_PER_SILVER) local moneyString = "" local separator = "" if ( gold > 0 ) then moneyString = format(GOLD_AMOUNT_TEXTURE, gold, 0, 0) separator = " " end if ( silver > 0 ) then moneyString = moneyString .. separator .. format(SILVER_AMOUNT_TEXTURE, silver, 0, 0) separator = " " end if ( copper > 0 or moneyString == "" ) then moneyString = moneyString .. separator .. format(COPPER_AMOUNT_TEXTURE, copper, 0, 0) end print("You now have " .. moneyString) end)
A pretty trivial example, but we have managed to write a two file addon, without putting anything in the global namespace.
We have also managed to separate our concerns - the
goldPrinter does not care what raises the events, and the
eventSystem knows nothing about gold printing, just how to delegate events. There is also an efficiency here too - anything else in our addon that needs events uses the same eventSystem, meaning we only need to create one frame for the entire addon to receive events.
Structure
Now that we can separate things into individual files, we gain a slightly different problem - how to organise those files. I found over time that I end up with roughly the same structure each time, and others might benefit from it too.
All my addons start with four files:
- AddonName.toc
- initialise.lua
- config.lua
- run.lua
The toc file, other than the usual header information is laid out in the order the files will run, for example this is the file segment of my bags addon's toc file:
initialise.lua config.lua models\classifier.lua models\classifiers\equipmentSet.lua models\itemModel.lua models\model.lua groups\group.lua groups\bagGroup.lua groups\bagContainer.lua views\item.lua views\goldDisplay.lua views\currencyDisplay.lua views\bankBagBar.lua sets\container.lua sets\bag.lua sets\bank.lua run.lua
The
initialise lua file is the first thing to run. All this tends to do is setup any sub-namespaces on
ns, and copy in external dependencies to
ns.lib:
local addon, ns = ... ns.models = {} ns.groups = {} ns.views = {} ns.sets = {} local core = Dark.core ns.lib = { fonts = core.fonts, events = core.events, slash = core.slash, }
By copying in the dependencies, we not only save a global lookup each time we need say the event system, but we also have an abstraction point. If we want to replace the event system, as long as the replacement has the right function names, we can just assign the new one to the lib:
ns.lib.events = replacementEvents:new()
The sub namespaces correspond to folders on in the addon (much the same practice used by c# developers), so for example the
classifier.lua file might have this in it:
local addon, ns = ... local classifier = { new = function() end, update = function() end, classify = function(item) end, } ns.models.classifier = classifier
The config file should be fairly simple, with not much more than a couple of tables in it:
local addon, ns = ... ns.config = { buttonSize = 24, spacing = 4, screenPadding = 10, currencies = { 823, -- apexis 824, -- garrison resources } }
And finally, the
run.lua file is what makes your addon come to life:
local addon, ns = ... local sets = ns.sets local pack = sets.bag:new() local bank = sets.bank:new() local ui = ns.controllers.uiIntegration.new(pack.frame, bank.frame) ui.hook() --expose DarkBags = { addClassifier = ns.classifiers.add }
If you need to expose something to the entire UI or other addons, that's fine. But make sure you only expose what you want to. In the example above the
DarkBags global only has one method -
addClassifier, because that is all I want other addons to be able to do.
Wrapping Up
I hope this helps other people with their addons - I know I wish that I had gotten to this structure and style a lot sooner than I did.
There will be a few more posts incoming covering encapsulation, objects and inheritance in more detail, so stay tuned. | http://stormbase.net/2014/11/23/good-design-in-warcraft-addons/ | CC-MAIN-2017-17 | en | refinedweb |
We value your feedback.
Take our survey and automatically be enter to win anyone of the following:
Yeti Cooler, Amazon eGift Card, and Movie eGift Card!
return today.getMonth()+1+"/"+today.getDate()+"/"+(today.getYear())
return (today.getMonth()+1) & "/" & today.getDate() & "/" & (today.getYear())
<%@language="javascript" %> <% function todayStr() { var today=new Date() return today.getMonth()+1+"/"+today.getDate()+"/"+(today.getYear()) } %> date: <%=todayStr()%>
<%@language="vbscript" %> <% function todayStr() { var today = now() todayString = FormatDateTime( today, 2 ) end function %> date: <%=todayStr()%>
Big Monty: I already have that but the value still not being displayed on the page. :$
If you are experiencing a similar issue, please ask a related question
Join the community of 500,000 technology professionals and ask your questions. | https://www.experts-exchange.com/questions/28519362/Value-not-being-displayed-on-page-todayStr.html | CC-MAIN-2017-17 | en | refinedweb |
js> function f() { } f.__proto__ = this; this.m setter = f; uneval(this); Assertion failure: vlength > n, at jsobj.c:938 The code is trying to remove "(function " and ")" from something that turns out to be a sharp variable. Security sensitive because it looks like the code might jump past the end of a string. /* * Remove '(function ' from the beginning of valstr and ')' from the * end so that we can put "get" in front of the function definition. */ if (gsop[j] && VALUE_IS_FUNCTION(cx, val[j])) { size_t n = strlen(js_function_str) + 2; JS_ASSERT(vlength > n); vchars += n; vlength -= n + 1; } Strange behavior in opt: js> function f() { } f.__proto__ = this; this.m setter = f; uneval(this); ({f:#1={prototype:{}}, set m #1#}) js> function f() { } f.hhhhhhhhh = this; this.m setter = f; uneval(this); #1={f:#2=function f() {}, set m #2#}
Btw, the sharp stuff I mentioned at the end of comment 0 doesn't play well with the "set" syntax when trying to eval again. js> a = {}; h = function() { }; a.b getter = h; a.c getter = h; print(uneval(a)); eval(uneval(a)); ({get b #1=() {}, get c #1#}) typein:23: SyntaxError: missing ( before formal parameters: typein:23: ({get b #1=() {}, get c #1#}) typein:23: ........^ (I didn't test this before because I thought sharp variables were output-only!)
This bug is getting in the way of my testing :(
Maybe sharped functions just need to use the "x getter: " syntax rather than the "get x" syntax.
(In reply to comment #3) > Maybe sharped functions just need to use the "x getter: " syntax rather than > the "get x" syntax. Yes, that's the ticket. Brian, can you detect a # at the front of the value string and switch to this syntax? /be
Created attachment 264243 [details] [diff] [review] use new "needOldStyleGetterSetter" logic There must be a decompiler version of this same bug, though what it is isn't obvious to me. This uses the logic added in bug 356083 with some refactoring to handle the value side of expressions (previously only calculated needOldStyleGetterSetter for the propid).
> There must be a decompiler version of this same bug, though what it is > isn't obvious to me. The decompiler seems to get it right: it collapses some uses of "getter" in object literals into the new "get" syntax, but leaves ones with sharps alone. js> function() { return { x getter: function(){} } } function () { return {get x() {}}; } js> function() { return { x getter: #1=function(){} } } function () { return {x getter:#1=function () {}}; } js> function() { return { x getter: #1# } } function () { return {x getter:#1#}; }
Comment on attachment 264243 [details] [diff] [review] use new "needOldStyleGetterSetter" logic Moving review to Igor
Comment on attachment 264243 [details] [diff] [review] use new "needOldStyleGetterSetter" logic > val[valcnt] = (jsval) ((JSScopeProperty *)prop)->getter; >-#ifdef OLD_GETTER_SETTER >- gsop[valcnt] = >+ gsopold[valcnt] = > ATOM_TO_STRING(cx->runtime->atomState.getterAtom); >-#else >- gsop[valcnt] = needOldStyleGetterSetter >- ? ATOM_TO_STRING(cx->runtime->atomState.getterAtom) >- : ATOM_TO_STRING(cx->runtime->atomState.getAtom); >-#endif Why the patch removes OLD_GETTER_SETTER ifdefs?
Comment on attachment 264243 [details] [diff] [review] use new "needOldStyleGetterSetter" logic #ifndef OLD_GETTER_SETTER + if (vchars[0] == '#') +#endif + needOldStyleGetterSetter = JS_TRUE; + We use that ifdef to make this decision later, so the ifdefs above are just old untidiness from the last time I worked in this code.
Comment on attachment 264243 [details] [diff] [review] use new "needOldStyleGetterSetter" logic > #ifndef OLD_GETTER_SETTER >+ if (vchars[0] == '#') >+#endif >+ needOldStyleGetterSetter = JS_TRUE; >+ >+ if (needOldStyleGetterSetter) >+ gsop[j] = gsopold[j]; >+ >+#ifndef OLD_GETTER_SETTER > /* > * Remove '(function ' from the beginning of valstr and ')' from the > * end so that we can put "get" in front of the function definition. > */ That if wrappend into #ifndef makes code hard to follow. I suggest to move it into the following +#ifndef OLD_GETTER_SETTER block and add a copy of "gsop[j] = gsopold[j];" as #else code. r+ with that fixed.
Created attachment 264918 [details] [diff] [review] implementation v2 This addresses Igor's comments, some bugs I encountered in testing this patch and cleans up an old preprocessor-guarded section of code, which really is no longer necessary. People defining OLD_GETTER_SETTER will suffer some bloat, but the code is tidier now.
Created attachment 264920 [details] [diff] [review] implementation v2a A minor tweak for OLD_GETTER_SETTER users
jsobj.c: 3.335
on the 1.8 branch I hit the original assertion using a debug xpcshell but not when I try to run the code in the browser. What's the potential security impact for Mozilla clients from this flaw? Do we need to land this on the 1.8 branch?
I think this is a relatively harmless (ie., crash, but nothing more) UMR on the branches.
Does it make decisions based on the UMR that might reveal sensitive information in Firefox's address space.
Ah, there might be a privacy problem, then, since this string can be printed and inspected for sensitive information.
Created attachment 265857 [details] [diff] [review] MOZILLA_1_8_BRANCH patch, roll-up This should pick up fixes from bug 358594, bug 381303, bug 356083 (pre-requisite for 358594), bug 381211 (another bug introduced here, fixed in bug 367629), and bug 380933. I think if we're going to do a branch patch here, we should get all of these. If that is undesirable, I will try to back out a few of them. That proposal is more painful than using this, though. (not ready for review yet, still needs more testing)
The bug in comment #1 is still not fixed here (on trunk or by my patch roll-up). Jesse, would you mind opening another bug for that?
I already filed bug 380831 for that.
Comment on attachment 265857 [details] [diff] [review] MOZILLA_1_8_BRANCH patch, roll-up This roll-up catches brokenness from all the testcases in this and the bugs mentioned in comment #19, except as noted in comment #20. I haven't had time to run the full JS test suite on it.
(In reply to comment #21) > I already filed bug 380831 for that. Which explains why I thought I'd fixed that.
Comment on attachment 265857 [details] [diff] [review] MOZILLA_1_8_BRANCH patch, roll-up Do we want 1.8.0.x for this, too? Will wait for both approval and test-suite blessing to land this.
Created attachment 266144 [details] comparison of before|after MOZILLA_1_8_BRANCH rollup patch There are some changes in decompilation that aren't cool.
Thanks, Bob. I will address these and upload another attempt in a day or so.
Created attachment 266313 [details] js1_7/regress/regress-358594.js This doesn't include any of the decompiler tests.
I'm not worried about the two performance bugs here; 101964 is only showing a delta because it's value is different. The orders of magnitude are the same. There's no reason this test should affect the performance of sort() (bug 99120), either. Not sure why there's a delta there, but it seems like perhaps just a change in the test environment or something (maybe the file moved?) I'll have a patch to address the others shortly.
It looks like the rest of these are addressed in the patch for bug 355736. That bug should be nominated for branch approval if we care, otherwise we should probably just land this. Bob, can I get you to try the test-suite again when you get a chance? If nothing has changed, I think we can land this... I have a sneaking suspicion a few more tweaks have happened since I did the roll-up.
I'll get to it this evening on linux at the least.
crowder, do you just want the vanilla trunk tested? From what time period? I already have vanilla 1.8, trunk tests running on all three platforms with builds from this morning. Is that sufficient for your needs?
I want a 1.8 engine, but run against the trunk testsuite (if they differ) to make sure that this patch doesn't regress anything and that it improves the decompilation/obj_toSource situation. I'm not worried about '' quoted keywords, though; as I mentioned that should be handled in another patch. The patch for bug 355736 applies cleanly on 1.8
crowder, I've been trying to get a good result for this on 1.8 with the patch but am having problems for which I can not conclusively blame this patch. I'll keep trying and see if I can get a definite answer for you tonight.
Comment on attachment 266313 [details] js1_7/regress/regress-358594.js js1_7 not required.
Created attachment 268516 [details] js1_5/extensions/regress-358594-01.js
Created attachment 268517 [details] js1_5/extensions/regress-358594-02.js
Created attachment 268518 [details] js1_5/extensions/regress-358594-03.js
Created attachment 268519 [details] js1_5/extensions/regress-358594-04.js
Created attachment 268520 [details] js1_5/extensions/regress-358594-05.js
Created attachment 268521 [details] js1_5/extensions/regress-358594-06.js
Created attachment 269533 [details] difference between 1.8.1 without patch and with patch These differences are all due to (from what I can tell) bugs which are fixed on the trunk but not branch. /me stamps approval fwiw
Comment on attachment 265857 [details] [diff] [review] MOZILLA_1_8_BRANCH patch, roll-up approved for 1.8.1.5, a=dveditz for release-drivers
Checking in jsobj.c; /cvsroot/mozilla/js/src/jsobj.c,v <-- jsobj.c new revision: 3.208.2.52; previous revision: 3.208.2.51 done Checking in jsopcode.c; /cvsroot/mozilla/js/src/jsopcode.c,v <-- jsopcode.c new revision: 3.89.2.72; previous revision: 3.89.2.71 done
bc, could you help verifying this fix on the latest 2.0.0.5 rc builds?
verified fixed 1.8, 1.9.0 windows, linux, macppc with 7/16 opt/debug shell/browser.
Created attachment 274461 [details] [diff] [review] roll-up for 1.8.0
(In reply to comment #46) > Created an attachment (id=274461) [details] > roll-up for 1.8.0 > Could you add a patch using the same cvs diff -u -p -8 options as the patch from comment 19 and also add a plain diff between patches to simplify the review?
Comment on attachment 274461 [details] [diff] [review] roll-up for 1.8.0 Sorry for a late review, I forgot about it. >Index: mozilla/js/src/jsopcode.c >=================================================================== >--- mozilla.orig/js/src/jsopcode.c 2007-07-16 16:36:40.000000000 +0200 >+++ mozilla/js/src/jsopcode.c 2007-07-18 12:22:30.000000000 +0200 >@@ -61,16 +61,17 @@ ... > if (lastop == JSOP_GETTER || lastop == JSOP_SETTER) { > rval += strlen(js_function_str) + 1; >- todo = Sprint(&ss->sprinter, "%s%s%s %s%.*s", >- lval, >- (lval[1] != '\0') ? ", " : "", >- (lastop == JSOP_GETTER) >- ? js_get_str : js_set_str, >- xval, >- strlen(rval) - 1, >- rval); >+ if (!atom || !ATOM_IS_STRING(atom) || >+ !ATOM_IS_IDENTIFIER(atom) || >+ !!ATOM_KEYWORD(js_AtomizeChars(cx, >+ ATOM_TO_STRING(atom), >+ sizeof(ATOM_TO_STRING(atom)), >+ 0))|| No need to re-atomize the atom here meaning that atom == js_AtomizeChars(cx, JSSTRING_CHARS(ATOM_TO_STRING(atom)), JSSTRING_LENGTH(ATOM_TO_STRING(atom)), 0). In any case js_AtomizeChars(cx, ATOM_TO_STRING(atom), sizeof(ATOM_TO_STRING(atom)), 0) is bogus and should generate at least warnings on the wrong pointer type of js_AtomizeChars argumnet. Now given that ATOM_IS_IDENTIFIER is: !ATOM_KEYWORD(atom) && js_IsIdentifier(ATOM_TO_STRING(atom)) Then !ATOM_IS_IDENTIFIER(atom) is ATOM_KEYWORD(atom) || !js_IsIdentifier(ATOM_TO_STRING(atom)) meaning that !!ATOM_KEYWORD() can be omitted. >Index: mozilla/js/src/jsobj.c >=================================================================== >@@ -806,91 +808,111 @@ > /* > * We have four local roots for cooked and raw value GC safety. Hoist the > * "argv + 2" out of the loop using the val local, which refers to the raw > * (unconverted, "uncooked") values. > */ > val = argv + 2; > > for (i = 0, length = ida->length; i < length; i++) { >+ JSBool idIsLexicalIdentifier, needOldStyleGetterSetter; >+ char *atomstrchars; >+ > /* Get strings for id and value and GC-root them via argv. */ > id = ida->vector[i]; > > #if JS_HAS_GETTER_SETTER >- > ok = OBJ_LOOKUP_PROPERTY(cx, obj, id, &obj2, &prop); > if (!ok) > goto error; >+#endif >+ >+ /* >+ * Convert id to a jsval and then to a string. Decide early whether we >+ * prefer get/set or old getter/setter syntax. >+ */ >+ atom = JSID_IS_ATOM(id) ? JSID_TO_ATOM(id) : NULL; >+ idstr = js_ValueToString(cx, ID_TO_VALUE(id)); >+ if (!idstr) { >+ ok = JS_FALSE; >+ OBJ_DROP_PROPERTY(cx, obj2, prop); >+ goto error; >+ } >+ *rval = STRING_TO_JSVAL(idstr); /* local root */ >+ idIsLexicalIdentifier = js_IsIdentifier(idstr); >+ >+ atomstrchars = ATOM_TO_STRING(atom); >+ needOldStyleGetterSetter = >+ !idIsLexicalIdentifier || >+ ATOM_KEYWORD(js_AtomizeChars(cx, >+ atomstrchars, >+ sizeof(atomstrchars), >+ 0)) != TOK_EOF; >+ Again, use ATOM_KEYWORD(atom) here.
/cvsroot/mozilla/js/tests/js1_5/extensions/regress-358594-01.js,v <-- regress-358594-01.js initial revision: 1.1 /cvsroot/mozilla/js/tests/js1_5/extensions/regress-358594-02.js,v <-- regress-358594-02.js initial revision: 1.1 /cvsroot/mozilla/js/tests/js1_5/extensions/regress-358594-03.js,v <-- regress-358594-03.js initial revision: 1.1 /cvsroot/mozilla/js/tests/js1_5/extensions/regress-358594-04.js,v <-- regress-358594-04.js initial revision: 1.1 /cvsroot/mozilla/js/tests/js1_5/extensions/regress-358594-05.js,v <-- regress-358594-05.js initial revision: 1.1 /cvsroot/mozilla/js/tests/js1_5/extensions/regress-358594-06.js,v <-- regress-358594-06.js initial revision: 1.1
Created attachment 282402 [details] [diff] [review] roll-up for 1.8.0 (with comments) updated according to comment #48 this time using: cvs diff -u -p -8 jsobj.c jsopcode.c
Comment on attachment 282402 [details] [diff] [review] roll-up for 1.8.0 (with comments) Igor did the last review, so setting the ? to him again.
Comment on attachment 282402 [details] [diff] [review] roll-up for 1.8.0 (with comments) Sorry for a late review, I missed the request 2 months ago.
Comment on attachment 282402 [details] [diff] [review] roll-up for 1.8.0 (with comments) a=asac for 1.8.0.15
MOZILLA_1_8_0_BRANCH: Checking in js/src/jsobj.c; /cvsroot/mozilla/js/src/jsobj.c,v <-- jsobj.c new revision: 3.208.2.12.2.28; previous revision: 3.208.2.12.2.27 done Checking in js/src/jsopcode.c; /cvsroot/mozilla/js/src/jsopcode.c,v <-- jsopcode.c new revision: 3.89.2.8.2.12; previous revision: 3.89.2.8.2.11 done | https://bugzilla.mozilla.org/show_bug.cgi?id=358594 | CC-MAIN-2017-17 | en | refinedweb |
Lately I’ve received several emails from people asking about setting up projects using more than one file. Having come from an Eclipse background, I found it really intuitive but realized there are not many good tutorials on how this works specifically with Flex Builder or Flash Builder from Adobe. Here is a quick start on how to get your project up and running.
Start a new project and name it. A Flash Builder/Flex Builder project may contain several components, ActionScript files, classes, packages and other assets. The first step is to identify your project’s entry point and then reference other files.
The Flex Project I had emailed has the following lines of code:
Line 1 is the XML processing declaration and line two contains the root component of the application. This particular application is a Windowed Application (Adobe AIR). The xmlns:mx=”” line tells the compiler to use a specific Flex SDK, in this case the Flex 3.4 SDK. The namespace declaration below on line 3 (xmlns:components=”components.*”) declares that the project may use any or all of the components in the Package named “components”. Note that at this point in your application, the Package has not yet been created so your project will throw an error (correct behavior).
Line 5 of the code is where the a specific component is referenced. Because this component is namespace qualified with the same namespace prefix given to the “components” Package, the component MUST exist within that package. The specific component named here is CountriesCombo. The declaration components:CountriesCombo tells the compiler to create an instance of that component at runtime.
Creating a new Package is really easy. In Flash Builder 4, highlight the src folder and right click (PC) or Command-Click (OSX) the folder and a context menu will appear. Select “New -> Package” as shown and when the dialog pops up, give the Package the name “components”. Remembers that these are case sensetive.
Now that you have a Package created, it is time to add your component. It is just as easy. right-click (PC) or Command-Click (OSX) on the newly created package and select “New -> MXML Component” from the Menu as shown below.
In the dialog box that opens, name your component “CountriesCombo”. Paste the following code into the component source view:
Your project should now be runnable and have the following structure:
Note that all red X’s are gone.
If your project still does not work, try cleaning it “Project -> Clean” from the top menu. If you have added a component to your project that is not being recognized, you may have to manually refresh the package Explorer view. Do this by right clicking on the root folder of the project (or even just the src folder) and hit “refresh”. | http://blogs.adobe.com/digitalmedia/2010/01/multi_file_flex_projects/ | CC-MAIN-2017-17 | en | refinedweb |
Red Hat Bugzilla – Bug 28269
printconf cannot set local printer. error from
Last modified: 2008-05-01 11:37:59 EDT
From Bugzilla Helper:
User-Agent: Mozilla/4.76 [en] (X11; U; Linux 2.4.0-0.99.11 i686)
system is HP6535, 128M RAM, HP Lasejet 832C. Printer works just fine with
Win98 SE and SUSE Linux 6.4.
Running 'printconf-gui as user gives following error:-
[d3j452@winston d3j452]$ printconf-gui
Xlib: connection to ":0.0" refused by server
Xlib: Client is not authorized to connect to Server
Traceback (innermost last):
File "/usr/sbin/printconf-gui", line 2, in ?
import gtk
File "/usr/lib/python1.5/site-packages/gtk.py", line 29, in ?
_gtk.gtk_init()
RuntimeError: cannot open display
Running 'printconf-gui' as 'root' gives following error:-
[root@winston d3j452]# printconf-gui
** CRITICAL **: file alchemist.c: line 3226 (AdmList_addChild): assertion
`_adm_valid_name(name)' failed.
Traceback (innermost last):
File "/usr/lib/python1.5/site-packages/libglade.py", line 28, in __call__
ret = apply(self.func, a)
File "/usr/sbin/printconf-gui", line 863, in handle_new_button
queue = new_queue(name)
File "/usr/sbin/printconf-gui", line 125, in new_queue
queue = dynamic_ctx.getDataByPath('/' + namespace +
'/print_queues').addChild(Alchemist.Data.ADM_TYPE_LIST, name)
File "/usr/lib/python1.5/site-packages/Alchemist.py", line 134, in
addChild
name))
Alchemist.ListError: addChild failed
Printtool does NOT autodetect printer. Trying to generate print queue by
selecting HP 832C does not create print queue. /etc/printcap is empty.
Have re-installed "Fisher" twice, taking default partitioning, KDE
workstation. Never asks if I wish to setup printer!! (I seem to remember we
did in 6.2 ?!).
Reproducible: Always
Steps to Reproduce:
1. "printtool"
2. Try to create '832C' as queue name
3. Select HP ==> Deskjet 832C
4. Click OK
5. NO QUEUE generated!!!
Actual Results: Error output in description
Expected Results: I should have been able to create a printer.
Correct driver is apparenetly selected...
We (Red Hat) should really try to resolve this before next release.
did you click 'apply'?
(Yes, it should bug you about saving and restarting, but it does now.)
Ah, this is because names (for various reasons) cannot start with a didgit in
printconf. This is a limitation of the encoding, and a check to prevent this
error has been introduced.
So, the traceback won't happen, but I can't make '832c' a valid name and keep
the configuration merging capabilities of printconf. | https://bugzilla.redhat.com/show_bug.cgi?id=28269 | CC-MAIN-2017-17 | en | refinedweb |
On Tue, Nov 6, 2012 at 8:03 AM, <cyberirakli at gmail.com> wrote: > I've used angle brackets just for posting here,becauze this forum doesn't support [code][/code] This is a Usenet group, not a web forum. > Just got answer, I didn't call a class it's self. Correct code is: > class derivedClass(baseClassMod.baseClass): > def ...... Better style would be to import the class from the module in the first place: from baseClass import baseClass # ... class derivedClass(baseClass): # ... Better yet would be to put both classes in the same file in the first place. Python isn't Java, where each class is an independent compilation unit. There is no reason to put each class in its own separate module, and it tends to cause namespace confusion as you have discovered. | https://mail.python.org/pipermail/python-list/2012-November/634574.html | CC-MAIN-2017-17 | en | refinedweb |
0
this program is just messing around with file i/o, and then i stumbled across something that i¨ve clearly forgot or missed in the tutorials i've read. my 'guess' is its the to if statements messing up (or perhaps rather their else's)
i know i used goto and that means i am a moron so comments about that are welcomme too (as for anything else i may have done in wierd ways:P )
#include <iostream> #include <fstream> using namespace std; int main() { ofstream ud; ud.open ("C:\\Documents and Settings\\Happy\\Skrivebord\\testfile.txt", ios::app); crashstart: //for the goto later on string userin; cout<< "\ninput for file\n"; cin >> userin; ud << userin; ud.close(); if (ud.is_open()) //this is where trouble begins i think, altso i //dont know of this is neccesary but i want to do it just to get the //hang of it. { cout << "\nur screwed, the file cant close, shutdown anyway? y/n \n"; string shutdownyn,y="y", Y="Y", n="n", N="N"; cin >> shutdownyn; if (shutdownyn==y || shutdownyn==Y) { goto faseout; } else { goto crashstart; } else //this one right here is the little satan i believe? { goto faseout; } } faseout: cout << "\nprogram terminated\n"; cin.get(); return 0; } | https://www.daniweb.com/programming/software-development/threads/166355/trouble-sepperating-to-if-statements | CC-MAIN-2017-17 | en | refinedweb |
0
I have this code:
def welcome(): '''========Welcome to Jaron's======= ====Video Game Rental Service====''' def menu(): while True: print '''0-\t Exit 1-\t Register 2-\t Log-in 3-\t Browse 4-\t Read-me''' try: menu_choice=int(raw_input('Please make a selection by number: ')) global menu_choice break except ValueError: print "Oops, it seems like you made a mistake. Try again by entering a number between 0 and 5." def register(): print 'Please fill out the form below to register' username=raw_input('Username: ') while True: password=raw_input('Password: ') confirmpassword=raw_input('Confirm Password: ') if password==confirmpassword: break else: print "Oops, it seems like you made a mistake. Try again by entering the same password in the 'Password field' as the 'Confirm Password' field." firstname=raw_input('First name: ') lastname=raw_input('Last name: ') dayofbirth=raw_input('Day of Birth: ') monthofbirth=raw_input('Month of Birth: ') yearofbirth=raw_input('Year of Birth: ') pay_info() def pay_info(): print 'Which type of payment would you like to use?' while True: print '''0-\t Exit 1-\t Paypal 2-\t Visa 3-\t Master Card''' try: payment=int(raw_input('Please make a selection by number: ') global payment break except ValueError: print "Oops, it seems like you made a mistake. Try again by entering a number between 0 and 3." welcome() menu() if menu_choice==1: register()
and I get an error that looks like this:
Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)]
Type "help", "copyright", "credits" or "license" for more information.
>>> [evaluate CAT.py]
Traceback (most recent call last):
File "<string>", line 45, in <fragment>
invalid syntax: <string>, line 45, pos 18
>>>
I don't see what the problem is with setting payment to a global variable, and even if I remove that line I still get an error with the break section. Any help is great.
Thanks,
Jaro | https://www.daniweb.com/programming/software-development/threads/366709/can-t-solve-syntax-error | CC-MAIN-2017-17 | en | refinedweb |
Results 1 to 1 of 1
Posting a comment on the news feed [facebook api]
Hey all i am using the following code to post to a posting on my news feed:
Code:
<?php require '../src/facebook.php'; $facebook = new Facebook(array( 'appId' => 'xxxxxxxxxxxxxxx', 'secret' => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'fileUpload' => true, 'cookie' => true )); $user = $facebook->getUser(); if ($user) { try { $access_token = $facebook->getAccessToken(); $user_profile = $facebook->api('/me'); $comment = $facebook->api('/xxxxxxxxxxxxxx/comments', 'POST', array( 'access_token' => $access_token, 'message' => 'testing!' ) ); } catch (FacebookApiException $e) { echo ($e); $user = null; } } <?php if ($user): ?> <a href="<?php echo $logoutUrl; ?>">Logout</a> <?php else: ?> <a href="<?php echo $loginUrl; ?>">Login with Facebook</a> <?php endif ?> if ($user) { $logoutUrl = $facebook->getLogoutUrl(); } else { $statusUrl = $facebook->getLoginStatusUrl(); $params = array( 'scope' => 'read_stream, friends_likes, email, read_mailbox, read_requests, user_online_presence, friends_online_presence, manage_notifications, publish_actions, publish_stream, user_likes, user_photos, user_status, user_videos, read_insights' ); $loginUrl = $facebook->getLoginUrl($params); } ?> <?php print_r($user_profile); ?>
OAuthException: (#221) Photo not visible
And i have no idea since i am posting a text comment and not even an image??
If i comment out the code line
Code:
/*$comment = $facebook->api('/xxxxxxxxxxxxxx/comments', 'POST', array( 'access_token' => $access_token, 'message' => 'testing!' ) );*/
Using the graph API i was able to do the same thing i am trying to do via PHP so i know it works since it returns a json ID and when i refresh the page i see the "testing" comment...:
And what makes it more odd is that, i can use the above code to comment on just a comment that doesnt have an image posted... But if i am commenting on a image that was posted then i get that error???!?!?!?
I use this code to sussesfully post to a non-image post (as in, just someone posting text). Where xxxxxxxxxxxx is the post ID and yyyyyyyyyyyyyy is the users ID who posted the comment that i am posting my comment to.
Code:
$comment = $facebook->api('/xxxxxxxxxxx_yyyyyyyyyyyyy/comments', 'post', array( 'message' => 'testing!', ) );
Last edited by StealthRT; 02-16-2014 at 11:51 PM. | http://www.codingforums.com/php/317623-posting-comment-news-feed-%5Bfacebook-api%5D.html | CC-MAIN-2017-17 | en | refinedweb |
Push-Messaging with JMS
Hiram Chirino, Logo for Apache ActiveMQ, Released under the Apache Software License 2.0 (Photo credit: Wikipedia)
While the finer details of new web standards like ActivityStrea.ms Realtime, PubSubHubbub and HTML5 Web Sockets are still being ironed out, the often-used, tried & true method of push messaging and real-time data integration in the Java community (not to mention larger developer community as a whole) has been Java Message Service (JMS).
JMS is a mainstay of the Java core and has been used for Web Service interoperability, as well as more innovative uses such as real-time and/or event-driven architecture (EDA) Web Applications.
In this article, I’d like to summarize some of my developments (i.e. joys and pains) with JMS in my current project, as well as summarize some best practices and lessons learned.
First I’ll start with some important findings, JMS is:
- best used as a supplement to other Web Service technologies (such as SOAP and REST)
- dangerous to use in transactional systems if you aren’t careful to synchronize messages
- not very useful if you want to send very large datasets at once, or, do quite expensive I/O operations or calculations between messages
- an excellent solution for real-time short “bursty” data exchange between disparate applications
- has a far too steep learning curve, largely due to the immense number of providers with JMS Server offerings, each with their own variations of the standard, API syntax, threading model, command-line controls, and/or custom features
Next, I would walk through a brief tutorial on using one particularly well-known JMS provider, Apache’s ActiveMQ…
Installation
- Download the latest stable ActiveMQ release:
- Unzip to a location easy to access (for convenience, I use the recommended ACTIVEMQ_HOME environment variable, pointed to the download location which I drop in C:/Apps/apache-activemq-5.x on Windows or /home//Apps/apache-activemq-5.x on Unix-based systems (i.e. Mac, Linux)
Configuration
To configure, you should:
- uncomment the JMS endpoints for the protocols you want to use in “apache-activemq-5.x/conf/activemq.xml” (i.e. vm, TCP, HTTP, Stomp)
- Create a JNDI configuration (where to do this depends on your implementation, in an IDE you should zip, rename to “jndi-properties.jar” and include in library build path).
An example of my JNDI configuration file is as follows:. (i.e. queueConnectionFactory, topicConnectionFactory, ConnectionFactory) connectionFactoryNames = jms/ConnectionFactory # register some queues in JNDI using the form EX: queue.[jndiName] = [physicalName] queue.MyQueue = s5.MyQueue # register some topics in JNDI using the form EX: topic.[jndiName] = [physicalName] topic.MyTopic = s5.MyTopic
Running
- open a command-line or terminal to the ACTIVEMQ_HOME directory and type:
activemq.bat
-OR-
./activemq
Your JMS server should now be running
- Receiving Messages
open your favourite IDE or text editor, then copy & paste the following starter code:
package jms; /** * The SimpleAsynchConsumer class consists only of a main * method, which receives one or more messages from a queue or * topic using asynchronous message delivery. It uses the * message listener TextListener. Run this program in * conjunction with SimpleProducer. * * Specify a queue or topic name on the command line when you run * the program. To end the program, type Q or q on the command * line. */ import javax.jms.*; import javax.naming.*; import java.io.*; public class Receive { /** * Main method. * * @param args the destination name and type used by the */ public static void main(String[] args) { String destName = null; Context jndiContext = null; ConnectionFactory connectionFactory = null; Connection connection = null; Session session = null; Destination dest = null; MessageConsumer consumer = null; TextListener listener = null; TextMessage message = null; InputStreamReader inputStreamReader = null; char answer = 'n'; if (args.length != 1) { System.out.println("Program takes one argument: "); System.exit(1); } destName = new String(args[0]); System.out.println("Destination name is " + destName); /** * Create a JNDI API InitialContext object if none exists * yet. */. */ try { connectionFactory = (ConnectionFactory) jndiContext.lookup( "jms/ConnectionFactory"); dest = (Destination) jndiContext.lookup(destName); } catch (Exception e) { System.out.println("JNDI API lookup failed: " + e.toString()); System.exit(1); } /** * Create connection. * Create session from connection; false means session is * not transacted. * Create consumer. * Register message listener (TextListener). * Receive text messages from destination. * When all messages have been received, type Q to quit. * Close connection. */ try { connection = connectionFactory.createConnection(); session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); consumer = session.createConsumer(dest); listener = new TextListener(); consumer.setMessageListener(listener); connection.start(); System.out.println("To end program, type Q or q, " + "then "); inputStreamReader = new InputStreamReader(System.in); while (!((answer == 'q') || (answer == 'Q'))) { try { answer = (char) inputStreamReader.read(); } catch (IOException e) { System.out.println("I/O exception: " + e.toString()); } } } catch (JMSException e) { System.out.println("Exception occurred: " + e.toString()); } finally { if (connection != null) { try { connection.close(); } catch (JMSException e) { } } } } }
- save the project and compile it in the IDE or use the following commands to manually compile:
javac -cp activemq-5.x-all.jar Receive.java
java -cp activemq-5.x-all.jar Receive "MyApp"
Your JMS Listener is now listening and waiting patiently to receive inbound JMS messages. Please leave the IDE output or console window open/active (i.e. do not stop the running process for now). We’ll get into how to stop the process and server later.
(NOTE: in ActiveMQ, you can check the JMS Admin console to confirm a new topic or queue got created, depending on which code you chose to copy above)
Sending Messages
- in your favourite IDE or text editor, copy & paste the following starter code:
package jms; /** * The SimpleProducer class consists only of a main method, * which sends several messages to a queue or topic. * * Run this program in conjunction with SimpleSynchConsumer or * SimpleAsynchConsumer. Specify a queue or topic name on the * command line when you run the program. By default, the * program sends one message. Specify a number after the * destination name to send that number of messages. */ import javax.jms.*; import javax.naming.*; public class Send { /** * Main method. * * @param args the destination used by the example * and, optionally, the number of * messages to send */ public static void main(String[] args) { final int NUM_MSGS; if ((args.length < 1) || (args.length > 2)) { System.out.println("Program takes one or two arguments: " + " []"); System.exit(1); } String destName = new String(args[0]); System.out.println("Destination name is " + destName); if (args.length == 2) { NUM_MSGS = (new Integer(args[1])).intValue(); } else { NUM_MSGS = 1; } /** * Create a JNDI API InitialContext object if none exists * yet. */ Context jndiContext = null;. */) { } } } } }
- save the project and compile it in the IDE or use the following commands to manually compile:
javac -p activemq-5.x-all.jar Send.java
java -p activemq-5.x-all.jar Send "MyApp" "Hellooooo NURSE!"
Check the logged outputs of your “Receive” JMS Listener’s console or IDE window. You should see a message, if not, please make sure you compiled the code properly and encountered no error messages.
You should be all set for robust Event-driven messaging now. Of course, the content of our basic test message was simply a “Hello, World!” type message, but could easily be a more complex XML document, or even JSON snippet, which would be more useful for WebApps in particular.
If you have trouble with any of these steps, you should consult the official ActiveMQ Getting Started guide and/or the Message forums.
Push Messaging to Web Apps
When using ActiveMQ, Push Messaging is accomplished via the AJAX client. It has fairly good browser support and degrades well. There are options for page refresh-based polling in the absolute worst case, but this practice is now shunned in favor of a more COMET-like Reverse AJAX approach. The basic requirement is to use a script tag as follows:
These simple 7 lines of JavaScript allows us to dynamically inject trusted Javascript behaviour from the same-origin server on supporting web browsers, and use a self-updating iFrame as the fallback where such behaviour is not fully supported. The contents of amq.js are dynamically created by the server-side AjaxServlet
Important notes for WebApp:
- Ensure web.xml looks web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" ""> <web-app> <display-name>InteractiveSlides</display-name> <description>Interactive Slideshows based on S5, with real-time controls powered by JMS (via Apache ActiveMQ)</description> <!-- context config --> <context-param> <param-name>org.apache.activemq.brokerURL</param-name> <param-value>tcp://localhost:61616</param-value> <description>The URL of the Message Broker to connect to</description> </context-param> <context-param> <param-name>org.apache.activemq.embeddedBroker</param-name> <param-value>false</param-value> <description>Whether we should include an embedded broker or not</description> </context-param> <resource-ref> <description>Connection Factory</description> <res-ref-name>jms/connectionFactory</res-ref-name> <res-type>javax.jms.TopicConnectionFactory</res-type> <res-auth>Container</res-auth> </resource-ref> <!-- Queue ref --> <resource-env-ref> <resource-env-ref-name>jms/s5.MyQueue</resource-env-ref-name> <resource-env-ref-type>javax.jms.Queue</resource-env-ref-type> </resource-env-ref> <!-- Topic ref --> <resource-env-ref> <resource-env-ref-name>jms/s5.MyTopic</resource-env-ref-name> <resource-env-ref-type>javax.jms.Topic</resource-env-ref-type> </resource-env-ref> <!-- servlet mappings --> <!-- the subscription REST servlet --> <servlet> <servlet-name>AjaxServlet</servlet-name> <servlet-class>org.apache.activemq.web.AjaxServlet</servlet-class> <load-on-startup>1</load-on-startup> <async-supported>true</async-supported> </servlet> <servlet> <servlet-name>MessageServlet</servlet-name> <servlet-class>org.apache.activemq.web.MessageServlet</servlet-class> <load-on-startup>1</load-on-startup> <async-supported>true</async-supported> <!-- Uncomment this parameter if you plan to use multiple consumers over REST --> <init-param> <param-name>destinationOptions</param-name> <param-value>consumer.prefetchSize=1</param-value> </init-param> </servlet> <servlet-mapping> <servlet-name>AjaxServlet</servlet-name> <url-pattern>/amq/*</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>MessageServlet</servlet-name> <url-pattern>/message/*</url-pattern> </servlet-mapping> <filter> <filter-name>session</filter-name> <filter-class>org.apache.activemq.web.SessionFilter</filter-class> <async-supported>true</async-supported> </filter> <filter-mapping> <filter-name>session</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <listener> <listener-class>org.apache.activemq.web.SessionListener</listener-class> </listener> </web-app>
- Ensure your JMS server’s configuration file (apache-activemq-5.4.2/conf/) looks something as variables in this configuration file --> <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <property name="locations"> <value>file:${activemq.base}/conf/credentials.properties</value> </property> </bean> <!-- The <broker> element is used to configure the ActiveMQ broker. --> <broker xmlns="" brokerName="localhost" dataDirectory="${activemq.base}/data" destroyApplicationContextOnStop="true"> <!-- For better performances use VM cursor and small memory limit. For more information, see: Also, if your producer is "hanging", it's probably due to producer flow control. For more information, see: --> <destinationPolicy> <policyMap> <policyEntries> <policyEntry topic=">" producerFlowControl="true" memoryLimit="1mb"> <pendingSubscriberPolicy> <vmCursor /> </pendingSubscriberPolicy> </policyEntry> <policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb"> <!-- Use VM cursor for better latency For more information, see: <pendingQueuePolicy> <vmQueueCursor/> </pendingQueuePolicy> --> </policyEntry> </policyEntries> </policyMap> </destinationPolicy> <!-- The managementContext is used to configure how ActiveMQ is exposed in JMX. By default, ActiveMQ uses the MBean server that is started by the JVM. For more information, see: --> <managementContext> <managementContext createConnector="false"/> </managementContext> <!-- Configure message persistence for the broker. The default persistence mechanism is the KahaDB store (identified by the kahaDB tag). For more information, see: --> <persistenceAdapter> <kahaDB directory="${activemq.base}/data/kahadb"/> </persistenceAdapter> <!-- The transport connectors expose ActiveMQ over a given protocol to clients and other brokers. For more information, see: --> <transportConnectors> <!-- Create a TCP transport that is advertised via an IP multicast group named default. --> <transportConnector name="openwire" uri="tcp://localhost:61616" /> <!-- Non-blocking Input/Output TCP connector for large-scale apps <transportConnector name="nio" uri="nio://localhost:61616"/> --> <!-- Create an SSL transport. To use properly, make sure to configure the SSL options via the system properties or the sslContext element. --> <transportConnector name="ssl" uri="ssl://localhost:61617"/> <!-- Create an HTTP transport. For non-secure communications --> <transportConnector name="http" uri="" /> <!-- Create a STOMP transport for cross-platform STOMP clients (AJAX, Flash, PHP, Python, Perl, Ruby, C#, etc). --> <transportConnector name="stomp" uri="stomp://localhost:61613"/> <!-- Create a XMPP transport. Useful for XMPP chat clients (i.e. within a department intranet). --> <transportConnector name="xmpp" uri="xmpp://localhost:61222"/> </transportConnectors> </broker> <!-- Enable web consoles, REST and Ajax APIs and demos Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details --> <import resource="jetty.xml"/> </beans>
- Compile project as a .war file
- Deploy project .war file to a J2EE Web Server with Servlet container and JSP support, for instance Jetty or Tomcat.
- Open JMS Admin console in a browser (if using a JMS provider other than ActiveMQ, please consult that provider’s documentation for the correct Admin URL, if any exists)
- Open the JMS Test Suite in a browser (suggest using localhost at first for testing connectivity within your own system/network, then try moving the web app to your own remote server and reaching your local JMS broker or vice versa – moving the JMS server to a remote host and reaching it from your local Web App and Server deployment; lastly, try using two separate remote servers – one for JMS, and one for the webapp – as this will be the best architecture to take advantage of distributed computing and the seperation of concerns).
You can’t really demo any of this online without a dedicated Java Server (and possibly a separate Message server for a Live Production application that can scale to high numbers of users; although ActiveMQ can perform quite well even when running on the same server for small-to-mid range apps). Where Java server hosting is notoriously expensive, I figured the best way to demo this was to get it running on AppEngine with a nice implementation of S5 to show the power of messaging services such as JMS (however had to settle for the limitations of the much more basic HTML5 WebMessaging API rather than ActiveMQ). I encourage you to try this all out, the best way is really to just download it and run it locally on your own laptop/desktop yourself, before thinking about alternative uses:
-OR-
Last but not least, feel free to fork the entire project on GIT and create something cool! The code is Public Domain but I encourage you to send pull requests if you make your own updates, so that the app can remain as killer (all-killer and no filler) a demonstration of real-time messaging for E-Learning webapps as possible.
Related Articles
- ActiveMQ Apollo optimises messaging for multicore (h-online.com)
- All about JMS messages (java.dzone.com)
- ActiveMQ vs. Jabber (codemonkeyism.com)
- ActiveMQ IS Ready For Prime Time (javacodegeeks.com)
- Spring 3.x – ActiveMQ 5.5 Integration (Publisher Subscriber) (apachebite.com)
- Using Spring to Receive JMS Messages (nofluffjuststuff.com)
>.
Everything is very open with a very clear explanation of the issues. It was really informative. Your site is very helpful and we used it extensively in developing STRIM.
Many thanks for sharing! | http://bcmoney-mobiletv.com/blog/2010/01/21/push-messaging-with-jms/ | CC-MAIN-2020-10 | en | refinedweb |
[
]
Joel Koshy commented on KAFKA-249:
----------------------------------
Thanks for the reviews. Further comments inline.
Jun's comments:
> 1. For future extension, I am thinking that we should probably unifying
> KafkaMessageStream and KafkaMessageAndTopicStream to sth like
> KafkaMessageMetadataStream. The stream gives a iterator of Message and its
> associated meta data. For now, the meta data can be just topic. In the
> future, it may include things like partition id and offset.
That's a good suggestion. I'm not sure if it is better to factor that change
for the existing createMessageStreams into 0.8 instead of trunk, because it
is a fundamental API change that would break existing clients (at compile
time). I can propose this to the mailing list to see if anyone has a
preference. If no one objects, then we can remove it.
> 2. ZookeeperConsumerConnector: 2.1 updateFetcher: no need to pass in
> messagStreams
Will do
> 2.2 ZKRebalancerListener: It seems that kafkaMessageStream can be
> immutable.
It is mutable because it is updated in consumeWildcardTopics.
> 2.3 createMessageStreamByFilter: topicsStreamsMap is empty when passed to
> ZKRebalanceListener. This means that the queue is not cleared during
> rebalance.
Related to previous comment. The topicsStreamsMap is bootstrapped in
consumeWildCardTopics and updated at every topic event if there are new
allowed topics. So it will be populated before any rebalance occurs.
> 2.4 consumeWildCardTopics: I find it hard to read the code in this method.
> Is there a real benefit to use implicit conversion here, instead of
> explicit conversion? It's not clear to me where the conversion is used.
> The 2-level tuple makes it hard to figure out what the referred fields
> represent. Is the code relying on groupedTopicThreadIds being sorted by
> (topic, threadid)? If so, where is that enforced.
The map flatten method is a bit confusing. I'm using (and hopefully not
misusing) this variant:
def flatten [B] (implicit asTraversable: ((A, B)) ⇒ TraversableOnce[B]): Traversable[B]
Converts this map of traversable collections into a map in which all element
collections are concatenated.
It basically allows you to take the KV pairs of a map and generate some
traversable collection out of it. Here is how I'm using it: We have a list
of queues (e.g., List(queue1, queue2)) and a map of
consumerThreadIdsPerTopic (e.g.,
{ "topic1" -> Set("topic1-1", "topic1-2"),
"topic2" -> Set("topic2-1", "topic2-2"),
"topic3" -> Set("topic3-1", topic3-2") } ).
>From the above I need to create pairs of topic/thread -> queue, like this:
{ ("topic1", "topic1-1") -> queue1,
("topic1", "topic1-2") -> queue2,
("topic2", "topic2-1") -> queue1,
("topic2", "topic2-2") -> queue2,
("topic3", "topic3-1") -> queue1,
("topic3", "topic3-2") -> queue2 }
This is a bit tricky and I had trouble finding a clearer way to write it.
However, I agree that this snippet is hard to read - even I'm having
difficulty reading it now, but I think keeping it concise as is and adding
comments such as the above example to explain what is going on should help.
> 3. KafkaServerStartable: Should we remove the embedded consumer now?
My original thought was that it would be good to keep it around for
fall-back, but I guess it can be removed.
> 4. Utils, UtilsTest: unused import
Will do.
--------------------------------------------------------------------------------
Neha's comments:
> 1. It seems awkward that there is a MessageStream trait and the only API
> it exposes is clear(). Any reason it doesn't expose the iterator() API ?
> From a user's perspective, one might think, since it is a stream, it would
> expose stream specific APIs too. It will be good to add docs to that API
> to explain exactly what it is meant for.
The only reason it was added was because I have two message stream types
now. Anyway, this will go away if we switch to the common
KafkaMessageMetadataStream.
> 3. There is some critical code that is duplicated in the
> ZookeeperConsumerConnector. consume() and consumeWildcardTopics() have
> some code in common. It would be great if this can be refactored to share
> the logic of registering session expiration listeners, registering watches
> on consumer group changes and topic partition changes.
Will do
> 4. Could you merge all the logic that wraps the wildcard handling in one
> API ? Right now, it is distributed between createMessageStreamsByFilter
> and consumeWildcardTopics. It will be great if there is one API that will
> pre process the wild cards, create the relevant queues and then call a
> common consume() that has the logic described in item 5 above.
Slightly involved, but it is worth doing.
> 5. There are several new class variables called wildcard* in
> ZookeeperConsumerConnector. I'm thinking they can just be variables local
> to createMessageStreamsByFilter ?
Related to above. consumeWildcardTopics actually needs to access these so
that's why it's global - in this case global makes sense in that you really
wouldn't need to (and currently cannot) make multiple calls to
createMessageStreamsByFilter. However, it would be good to localize them if
possible to make the code easier to read.
> 6. There is a MessageAndTopic class, that seems to act as a container to
> hold message and other metadata, but only exposes one API to get the
> message. Topic is exposed by making it a public val. Would it make sense
> to either make it a case class or provide consistent APIs for all fields
> it holds ?
Ok, but this will likely go away due to the MessageMetadata discussion.
> 7. Since now we seem to have more than one iterators for the consumer,
> would it make sense to rename ConsumerIterator to MessageIterator, and
> TopicalConsumerIterator to MessageAndMetadataIterator ?
Makes sense, but it could break existing users of KafkaMessageStream. Also,
if we can get rid of KafkaMessageStream and just go with
KafkaMessageAndMetadataStream we will have only one iterator type.
> 8. rat fails on this patch. There are some files without the Apache header
Good catch and reminder that reviews should ideally include running rat. I
do need to add the header for some files.
>: | https://mail-archives.apache.org/mod_mbox/kafka-dev/201203.mbox/%3C496890682.30110.1332969931037.JavaMail.tomcat@hel.zones.apache.org%3E | CC-MAIN-2020-10 | en | refinedweb |
Last night, Ilan Schnell announced the release of ETS 4.0. The first major release of the Enthought Tool Suite in almost three years, 4.0 implements a significant change: The ‘enthought’ namespace has been removed from all projects.
For example:
from enthought.traits.api import HasTraits
is now simply:
from traits.api import HasTraits
For backwards compatibility, a proxy package ‘etsproxy’ has been added, which should permit existing code to work. This package also contains a refactor tool ‘ets3to4’ to convert projects to the new namespace so that they don’t rely on ‘etsproxy’. on the ETS mailing list, and hope you enjoy ETS 4.
Thats not a very useful changelog fir a new major version number… Is the main feature of ETS 4 a namespace prefix removal?
Pingback: Weekly Digest for June 23rd | William Stearns
Thanks to all of the folks that made this happen. Especially to Ilan.
cheers,
Prabhu | http://blog.enthought.com/open-source/ets-4-0-released/ | CC-MAIN-2020-10 | en | refinedweb |
Subject: [Boost-bugs] [Boost C++ Libraries] #12472: accumulator statistics.hpp may cause build to fail due to no viable overloaded operator[]
From: Boost C++ Libraries (noreply_at_[hidden])
Date: 2016-09-20 20:28:28
Keywords: |
-------------------------------------+--------------------------
This occurred on both 1.61.0 and trunk. This was tested using Apple's
Xcode 8 toolchain on macOS 10.12, through CMake, with Boost installed
through Homebrew.
This program shows the error on my computer:
{{{
#include <boost/accumulators/accumulators.hpp>
#include <boost/accumulators/statistics.hpp>
using namespace boost::accumulators;
static accumulator_set<int, stats<tag::tail<right>>> acc;
int main(int argv, char* argc[]) {
return 0;
}
}}}
Compilation was done using the command `clang++ -std=c++11 test.cpp`
Interestingly, the build succeeds when `#include
<boost/accumulators/statistics/tail.hpp>` is used instead of including
`statistics.hpp`.
-- Ticket URL: <> Boost C++ Libraries <> Boost provides free peer-reviewed portable C++ source libraries.
This archive was generated by hypermail 2.1.7 : 2017-02-16 18:50:20 UTC | https://lists.boost.org/boost-bugs/2016/09/46136.php | CC-MAIN-2020-10 | en | refinedweb |
As a relatively new developer, and a complete lightweight when compared to the rest of the Komodo development team, I find myself sharing snippets, errors and diffs for review quite often. Since I like to share (Mom and Dad taught me well), I thought it was important to make it easier to share in Komodo. At one point, a Komodo user was limited to using kopy.io to do this, and only in limited areas of Komodo. In 10.2, we’ve extended Komodo to allow our users to share more easily and in more ways.
We wanted to add more endpoints (more kopy.ios), allow users to share text from more aspects of Komodos (logs, diffs, files, etc.) as well as allow users to more easily add their own custom endpoints without having to touch any of Komodo UI. We did this with two major additions:
1) A new “Sharing” SDK (I’ll explain)
2) Komodo + Slack integration (uses the above SDK to post content to Slack!)
The Share SDK
var share = require("ko/share");
The logical way to extend the existing UI with more “share points” (menu items to select an output) was to extend the existing kopy.io module’s UI integrations and move it into it’s own SDK. Thus
ko/share was born.
The share module now adds menus to the following source points in Komodo:
- Editor context Share menu (Right click your file or selected text > Share…)
- Dynamic toolbar share button
- Log file window (Help > Troubleshooting > View Log File)
- Any diff dialog
- The trackchanges diff panel
All of these interfaces have been augmented with a Share dropdown menu.
Register your Module
At this point you might be thinking “Hey neat Carey, you added another dropdown menu list to Komodo…so what?”
Good question! If it was just a drop down all over the place that would be boring, so next we needed to add items to this list for all output modules dynamically.
share.register(name, namespace, label);
Enter
share.register(). It takes a
name (e.g. “kopy”), a
namespace (e.g. “ko/share/kopy” in order to
require your module.), and a
label (e.g. “Share code on kopy.io”, which is the label shown in the menu list). Komodo uses CommonJS method of getting JS code and that requires a namespace (like the one mentioned above). A basic example you can follow to add your own namespace for your share implementation is as follows:
require.setRequirePath("myShareProject",""). The best method to do this would be to include your code in a Komodo Addon but that’s to big a topic to cover here. Ask for help in our forums.
Komodo then checks to make sure you implemented a
share function in your module that Komodo can call later (this is explained in more detail in the next section). It then cycles through all of the UI sources in Komodo we mentioned above and adds a new menu item for you.
Implement the Share Interface
require("ko/share/kopy").share(data, meta)
Above I mentioned a
share function that the
share.register method looks for. This is the one and only requirement of your share module–it must have a share function. This is the interface we use to pass content to your module.
The Data
The function should take a string of
data and an Object of
meta information. The
data string is the content that will be posted as text to the output source. For example, in the case of kopy.io,
data is the code displayed in the webpage:
#!/usr/bin/env python3 print "why is this broken??" print("oh...duh...it's Python 3")
The Meta
The
meta object holds basic meta information about the
data you want to post. You can expect an object as follows:
var meta = { langauge: "javascript", title: "MyFile.js" }
Your Share module can use this information as you see fit. For example, you can post a file to kopy.io and leave it up to kopy.io to figure out what your content might be (text, Javascript, Python?) or you can use the
language property and send that along in the API call sent to kopy.io.
Since this module is so new, we are open to ideas for additional properties in the meta object, so please let us know if you have any suggestions.
Slacking
var slack = require("ko/share/slack");
Sorry, no more source links. This stuff’s not open sourced.
Now that we have the Share module, the next step was to add some share endpoints. Kopy.io was a breeze since our lead Komodo Developer, Nathan Rijksen, had already written most of the necessary code back in Komodo 9.2.
Since our team and MANY MANY others use Slack on a daily basis, we chose to add Slack.
This project was a lot of firsts for me. I got to augment a Django Python server with a new API endpoint, do funky window management and event handling (imagine having to program your website by starting at the browser window then trying to access the loading page in a particular tab to trigger your onload event…FUN!), and trying to streamline a 100% async user experience. I even got to take a new service for a spin that Nathan added while I was building the Slack integration,
ko/simple-storage. Simple-storage allows you to persist information about your addon, Userscript, etc. and have it persist between Komodo restarts. This is the recommended way to persist information rather than the traditional
require("ko/prefs") method.
Django API Endpoint
In order for Komodo to post content to your various Slack Teams and Channels it needs to authenticate users. Slack has a system in place which requires you to have a server in the middle of the process to actually send the final key request to the Slack Auth servers.
I started writing this in Node.js since that seems to be the go-to for API servers these days. In the end though, I decided to build it on Django. We already write so much Python in Komodo and we have a few other services built on a Django server. Besides, I’ve already written a Node API server in class and have never touched Django. Get moderately good at a ton of stuff and never become an expert at anything…that’s my advice kids (with tongue firmly pressed against my cheek)!
Without going into too much boring detail, the server handles a request from Komodo with two temporary auth keys (one from Slack and one from Komodo), which are sent to another Slack authorization server that handles permissions. When that is returned from Slack, our server sends the new final auth key back to Komodo and it’s encrypted and saved to disk.
The Slack Panel
The panel that appears after you’ve authenticated and you’re picking a channel, message, etc. is plain. There doesn’t appear to be anything special about it. But that’s not true. What is of interest is that it’s built completely from the
ko/ui SDK, just like the Start Up Wizard (Help > Run first start wizard again). There is ZERO markup involved there.
Here’s a sample of what it looks like in the backend. You can copy and paste this code into the Komodo console to try it for yourself (View menu > Tabs & Sidebars > Console, or Ctrl (Cmd on OSX) + Shift + B then click the Console tab):
panel = require("ko/ui/panel").create({ anchor: require("ko/windows").getMostRecent().document.documentElement, attributes: { backdrag: true, noautohide: true, width: "450px", class: "dialog" } }); panel.open(); var options = { attributes: { placeholder: "Title", col: 120, flex: 1 }}; var title = require("ko/ui/textbox").create(options); panel.add(title);
All the fields that you fill out in the Slack integration panel are saved for use later using the new
ko/simple-storage I mentioned above. Let’s have a look at that, then I think we should call it a day on this blog…oh ps. you can get rid of that panel I got you to create on your screen by just writing
panel.close() in the JS console 😉
Simple Storage
ko/simple-storage is meant to replace anything
ko/prefs does that is not actually “preference” related as the standard storage for application, addon, or userscript information. It’s persistent and easier to use, which should make anyone customizing Komodo pretty happy.
Here’s a small sample of how it works.
var my_SS = require("ko/simple-storage").get("mine"); my_SS.storage.pizza = "It's so good it's scary, Carey!"; console.log("Is pizza good? "+my_SS.storage.pizza);
The
storage object is saved to disk and can be reloaded after a restart by using the same code. And when you’re done with it, you just
remove it.
var my_SS = require("ko/simple-storage").get("mine"); console.log("Is pizza still good? "+my_SS.storage.pizza + " Stop asking stupid questions."); require("ko/simple-storage").remove("mine");
Just open the Komodo Console(View menu > Tabs & Sidebars > Console, or Ctrl (Cmd on OSX) + Shift + B then click the Console tab) if you’d like to give it a try.
So that was a long blog…but I hope you found it useful and can take advantage of these great ways to share with Komodo. As my mother always said…sharing is caring!
Title photo courtesy of Web Hosting on Unsplash. | https://www.activestate.com/blog/slacking-off-with-komodo/ | CC-MAIN-2020-10 | en | refinedweb |
Common props in Tidepool data visualization code
Several of the pieces of the state common to most, if not all, of Tidepool's data visualization views are most often encountered in the code as props passed to React components. This page provides a quick reference describing the canonical forms of these pieces of state since they occur and are used (for example: as a parameter in a utility function) very frequently.
bgPrefs
bgPrefs is an object with two properties:
bgUnits and
bgBounds. Sometimes these component properties are passed around on their own, and sometimes
bgPrefs is passed around as a whole. (As we increase our commitment to support for mmol/L as well as mg/dL blood glucose units, it will probably be necessary to access both
bgUnits and
bgBounds simultaneously most of the time, so it is probably a good idea to just get used to passing around
bgPrefs as a whole.)
bgUnits is a String value that can be either
mg/dL or
mmol/L. To avoid typos and capitalization errors, we export a constant for each of these strings from src/utils/constants.js; use them with
import { MGDL_UNITS, MMOLL_UNITS } from 'path/to/utils/constants'.
bgBounds is an Object with five numerical fields:
veryLowThresholdencodes the threshold (logic is <) for encoding a blood glucose value as "very low"
- (a value >=
veryLowThresholdand <
targetLowerBoundis "low")
targetLowerBoundencodes the lower bound for the user's target blood glucose range; logic is >= the threshold is "target"
targetUpperBoundencodes the upper bound for the user's target blood glucose range; logic is <= the threshold is "target"
- (a value >
targetUpperBoundand <=
veryHighThresholdis "high")
veryHighThresholdencodes the threshold (logic is >) for encoding a blood glucose value as "very high"
clampThresholdencodes the value at which we clamp the blood glucose scale (logic is >); read about clamping scales in the d3-scale documentation
The function
classifyBgValue exported from the blood glucose utilities in
src/utils/bloodglucose.js will return the classification for any blood glucose value given the
value and the
bgBounds. (See also all the API documentation for blood glucose utility functions.)
timePrefs
timePrefs is an object with two properties:
timezoneAware and
timezoneName. The
timezoneAware property is a simple Boolean indicating whether or not the user wants to view data in timezone-aware mode;
timezoneAware defaults to
true. Because the extraction of a named timezone depends in part on the value of
timezoneAwarea,
timePrefs should always be passed around as an entire object, and the utility function
getTimezoneFromTimePrefs exported from the datetime utilities in
src/utils/datetime.js should be used to extract the timezone String when required. (See also all the API documentation for datetime utility functions.)
timezoneName is a String timezone that the Moment.js datetime utility library recognizes. Moment.js in turn recognizes all timezones from the IANA Time Zone Database.
a. In brief, if
timezoneAwareis
false, then
UTCis used for the timezone String in all circumstances and methods where a timezoe String is required. ↩ | http://developer.tidepool.org/viz/docs/misc/CommonProps.html | CC-MAIN-2020-10 | en | refinedweb |
Environment variables with StencilJS
March 03, 2019
I noticed that the question regarding how to handle environment variables in Stencil’s projects or projects created with the Ionic PWA toolkit often pops up 🤔
As I have implemented a solution to handle such parameters in the remote control of my project DeckDeckGo, the Progressive Web App alternative for simple presentations, I thought about sharing my small implementation in this new article.
Credits
The following solution was inspired by the one developed in the Ionic core project. One of the entry point for me was discovering the method setupConfig in their source code. Therefore kudos to the awesome Ionic team ❤️
Getting started
The solution described in this tutorial as for goal to handle two environments, a
development and a
production environments. In each of these we are going to define a variable which points to a different end point url.
Note that the example below was developed with the Ionic PWA toolkit.
Configuring the environments
To begin our implementation, we are going to define an interface which should contains our variable(s) and a setup method which aims to “push” its value in the
window object. This means that when our application will start, we are going to call this method in order to define the environment variables which should be use at runtime for the all application.
As I display the code of my own project, you might find references to the names
DeckDeckGo or its short form
DeckGo. Just replace these with the name of your project in your implementation.
To implement the interface and function you could for example create a new file called
environment-config.tsx :
// The interface which define the list of variables export interface EnvironmentConfig { url: string; } export function setupConfig(config: EnvironmentConfig) { if (!window) { return; } const win = window as any; const DeckGo = win.DeckGo; if (DeckGo && DeckGo.config && DeckGo.config.constructor.name !== 'Object') { console.error('DeckDeckGo config was already initialized'); return; } win.DeckGo = win.DeckGo || {}; win.DeckGo.config = { ...win.DeckGo.config, ...config }; return win.DeckGo.config; }
Now that we have created a setup function, we will need to use it when the application start. As our goal is two have two different environments, we are first going to modify the main application class
app.ts to be the one which define and use the
production environment. We are going to consume the above method we have created and define our url for the production.
import {setupConfig} from '../app/services/environment/environment-config'; setupConfig({ url: '' });
Then we are going to create a second bootstraping class beside it to be the one which are going to load the
development configuration. For that purpose let’s create in addition to the main class a file called
app-dev.ts which will contains the following:
import {setupConfig} from '../app/services/environment/environment-config'; // When serve locally: setupConfig({ url: location.protocol + '//' + location.hostname + ':3002' });
Running the application
Now that we have two different entry points to start our application, we should be able to choose between these while running our command lines. For that purpose we are going, firstly, to modify the configuration file
stencil.config.ts in order to make the
globalScript property variable.
let globalScript: string = 'src/global/app.ts'; const dev: boolean = process.argv && process.argv.indexOf('--dev') > -1; if (dev) { globalScript = 'src/global/app-dev.ts'; } export const config: Config = { ... globalScript: globalScript, ... };
As you could notice in the above code, the configuration will test a parameter
--dev to check if we want to use the
development environment or the default one, the
production .
To pass that parameter from the command line, we are just going to add a new script to our
package.json . Beside
npm run start we are going to create a new target
npm run dev which aims to start the application for the
development environment.
"scripts": { "build": "stencil build", "start": "stencil build --watch --serve", // Production "dev": "stencil build --dev --watch --serve" // Development }
Reading the variables
Now that we have set up the configuration and the scripts to switch between both environments we have only one final piece to implement, the one regarding actually reading the values, in our example, reading the value of our url.
For that purpose I suggest to create a singleton which aims to load the configurations parameters in memory once and to expose a
get method which should allow us to query specific variables (as we may have more than one environments variables 😉). We could create that new service in a new separate file called
environment-config.service.tsx :
import {EnvironmentConfig} from './environment-config'; export class EnvironmentConfigService { private static instance: EnvironmentConfigService; private m: Map<keyof EnvironmentConfig, any>; private constructor() { // Private constructor, singleton this.init(); } static getInstance() { if (!EnvironmentConfigService.instance) { EnvironmentConfigService.instance = new EnvironmentConfigService(); } return EnvironmentConfigService.instance; } private init() { if (!window) { return; } const win = window as any; const DeckGo = win.DeckGo; this.m = new Map<keyof EnvironmentConfig, any>(Object.entries(DeckGo.config) as any); } get(key: keyof EnvironmentConfig, fallback?: any): any { const value = this.m.get(key); return (value !== undefined) ? value : fallback; } }
That’s it, that was the last piece needed to implement environment variables in a Stencil project or an Ionic PWA toolkit application 🎉
To get a variable, you could now simply call anywhere in your code your service and ask for the value of a parameter, like in the following example:
const url: string = EnvironmentConfigService.getInstance().get('url'); console.log('My environment variable value:', url);
Cherry on the cake 🍒🎂
Like I said in my introduction, this solution is implemented in the remote control of my project DeckDeckGo, and guess what, this project is open source. Therefore, if you would like to checkout a concrete example of such implementation, you could browse or clone the DeckDeckGo repository 😃
git clone
To infinity and beyond 🚀
David | https://daviddalbusco.com/blog/environment-variables-with-stenciljs/ | CC-MAIN-2020-10 | en | refinedweb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.