lambdacube-compiler

LambdaCube 3D is a DSL to program GPUs
Maintained by [email protected]
Module documentation for 0.5.0.1

LambdaCube 3D is a domain specific language and library that makes it possible to program GPUs in a purely functional style.

Changes

v0.6
- new features
  - support mutual constant and function definitions
  - support pattern type annotations
  - support guards + where
  - support view patterns
  - support pattern guards
  - support as-patterns
  - implement pattern match reachability and exhaustiveness warnings
  - support parsing only
  - support printing desugared source code
- improvements
  - allow pattern match on tuple types
  - implement constraint kinds (useful for type classes)
  - improve pretty printing
  - better presentation of types in editor tooltips
  - better error messages (e.g. for mismatching operator fixities)
  - speed up the builtin interpreter in the compiler
- bugfixes
  - fix local function handling
  - fix parens around operators
  - fix parsing of operator definitions
  - fix parsing of sections
  - fix parsing of literals
  - fix switching to type namespace after @
  - fix a bug in escape code parsing
- documentation
  - reorganise and clean up the compiler sources
  - begin to write developer's guide
  - documentation on pattern match compilation
- dependencies
  - use megaparsec 5.0
  - switch to ansi-wl-pprint
  - allow newer base, optparse-applicative and QuickCheck libraries
- other
  - move the TODOs to Trello:
  - work on prototypes
    - Reducer.hs -- experiment with lazy evaluation in the ST monad
    - ShiftReducer.hs -- experiment with lazy evaluation purely, with incremental GC
    - LamMachine.hs -- experiment with lazy evaluation purely, with incremental GC (next version)
    - Inspector.hs -- a tool for inspecting the state of LamMachine, intended for debugging/visualizing lazy evaluation
    - LamMachineV2.hs -- experiment with lazy evaluation in the ST monad, with explicit, generational GC

v0.5
- compiler
  - support local pattern matching functions
  - support recursive local definitions
  - more polymorphic type for equality constraints: (~) :: forall a . a -> a -> Type
  - tuples are represented as heterogeneous lists
  - support one-element tuple syntax: (( element ))
  - reduction: don't overnormalize (String -/-> [Char])
  - compiler optimization: names have Int identifiers
- libraries/OpenGL API
  - take advantage of heterogeneous lists (simplified and more complete type family instances)
  - need to explicitly denote one-element attribute tuples
  - set programmable point size with ProgramPointSize
  - use lists instead of streams
  - rename
    - fetch --> fetch; fetchArrays --> fetchArrays
    - zeroComp --> zero; oneComp --> one
- codegen
  - generate functions in shaders (previously functions were inlined)
  - normalized variable names in the generated pipeline
  - optimization: remove duplicate shader programs
  - pretty printed shader programs
  - include compiler version in the generated pipeline as a string info field
- testenv
  - performance benchmarks (time and memory consumption)
- other
  - parsec dependency changed to megaparsec
  - registered on stackage too (next to HackageDB)

v0.4 - tagged on Feb 5, 2016
- testenv
  - language feature tests framework
- other
  - released on HackageDB

v0.3 - tagged on Jan 18, 2016
- compiler
  - complete rewrite from scratch
  - use De Bruijn indices instead of String names
  - pattern match compilation
  - compositional type inference is replaced by a zipper-based approach which plays better together with dependent types
- libraries/OpenGL API
  - interpolation handling is decoupled from vertex shader descriptions
  - introduce Stream data type; use just two types of streams instead of 4
- testenv
  - use Travis CI (continuous integration) with a docker image
  - timeout for tests

first DSL compiler - tagged on Jun 14, 2015
- supports a fair amount of Haskell98 language features
- partially supports GADTs and type families
- supports explicit type application
- supports row polymorphism and swizzling
- uses compositional typing for better error messages
- OpenGL API provided in attached Builtins and Prelude modules
- generates LambdaCube3D IR (intermediate representation)

Depends on: aeson, ansi-wl-pprint, async, base, base64-bytestring, bytestring, containers, deepseq, directory, exceptions, filepath, JuicyPixels, lambdacube-compiler, lambdacube-ir, megaparsec, monad-control, mtl, optparse-applicative, patience, pretty-show, process, QuickCheck, semigroups, tasty, tasty-quickcheck, text, time, vect, vector, websockets

Used by 2 packages:
https://www.stackage.org/package/lambdacube-compiler
Bugzilla – Bug 203
Vector4f result = a1*v1 + a2*v2 compiled like ass by eigen3; much better with eigen2
Last modified: 2014-03-20 10:16:09 UTC

This test program:

    #include <Eigen/Core>
    using namespace Eigen;

    void foo(float a1, const Vector4f& v1,
             float a2, const Vector4f& v2,
             Vector4f& result)
    {
        asm volatile("#begin");
        result = a1*v1 + a2*v2;
        asm volatile("#end");
    }

compiled like this with eigen3 and gcc 4.4.5 x86-64 linux:

    $ g++ -c -S -O2 -I eigen derf.cpp -DNDEBUG -o derf.s

gives this crappy assembly:

    #APP
    # 9 "derf.cpp" 1
        #begin
    # 0 "" 2
    #NO_APP
        xorps   %xmm2, %xmm2
        movss   %xmm1, %xmm2
        pshufd  $0, %xmm2, %xmm1
        xorps   %xmm2, %xmm2
        mulps   (%rsi), %xmm1
        movss   %xmm0, %xmm2
        pshufd  $0, %xmm2, %xmm0
        mulps   (%rdi), %xmm0
        addps   %xmm1, %xmm0
        movaps  %xmm0, (%rdx)
    #APP
    # 11 "derf.cpp" 1
        #end
    # 0 "" 2
    #NO_APP

while with eigen2, it gives this good assembly:

    #APP
    # 9 "derf.cpp" 1
        #begin
    # 0 "" 2
    #NO_APP
        shufps  $0, %xmm1, %xmm1
        shufps  $0, %xmm0, %xmm0
        mulps   (%rsi), %xmm1
        mulps   (%rdi), %xmm0
        addps   %xmm1, %xmm0
        movaps  %xmm0, (%rdx)
    #APP
    # 11 "derf.cpp" 1
        #end
    # 0 "" 2
    #NO_APP

The first bad revision is:

    changeset: 1396:ab39cda02b30
    user:      Gael Guennebaud <[email protected]>
    date:      Fri Aug 07 11:09:34 2009 +0200
    summary:   * implement a second level of micro blocking (faster for small sizes)

And then the asm changes again at this revision, to give the current asm:

    changeset: 1593:402e5f111006
    parent:    1561:cf40d4554411
    user:      Gael Guennebaud <[email protected]>
    date:      Thu Sep 17 23:18:21 2009 +0200
    summary:   fix #53: performance regression, hopefully I did not resurected another

So, the regression is introduced by the 'smart' implementation of pset1(), that is, when you replace _mm_set1_ps by _mm_set_ss + shufps. The comment says that's to work around a GCC bug whereby it implements _mm_set1_ps using multiple moves, which is slow. I can reproduce that with -m32, but not in 64bit. However, still with -m32, the 'fix' does not seem to help at all.
Created attachment 109 [details]: revert to just using _mm_set1_p[sd]

This patch fixes the problem on x86-64 and does not change much on x86-32 (asm remains bad on x86-32).

OK, pushed this patch. After all, it's just removing a weird workaround that creates problems, and I can't reproduce any improvement from using this workaround. Feel free to reintroduce it if you know what exactly it fixes, but please make sure that it doesn't hurt perf in other cases (here it hurt on linux/gcc4.4/x86-64).

There remains the problem that _mm_set1_ps compiles very poorly with gcc/i386 (i.e. -m32). I don't know what to do about it. Maybe inline asm again? pshufd is really faster than shufps.

Using asm does the job without introducing such a regression:

    Changeset: 60ca549abed6
    User:      ggael
    Date:      2014-03-20 10:14:26
    Summary:   Makes gcc to generate a pshufd instruction for pset1
http://eigen.tuxfamily.org/bz/show_bug.cgi?id=203
ManualClock: a manually advanced clock.

#include <ManualClock.h>

A manually advanced clock. Subclass this to provide time to the core lib. Implements gnash::VirtualClock.

- Construct a manual clock.
- Advance the clock by the given amount of milliseconds. Referenced by gnash::MovieTester::advanceClock().
- Return the number of milliseconds elapsed since start. NOTE: a 32-bit unsigned int has an upper limit of 4294967295, which means about 49 days before overflow. Implements gnash::VirtualClock.
- Restart the clock. Implements gnash::VirtualClock.
http://gnashdev.org/doc/html/classgnash_1_1ManualClock.html
Full Description

Arguing that the .NET Framework simply blows away the archaic tools previously available to web programmers, the authors predict that many Visual Basic programmers who successfully avoided Web programming in the past will now bring their expertise to the Web. However, even experienced web programmers will greatly benefit from the authors' thorough coverage of the ASP.NET namespaces and their clear coverage of the ADO.NET classes most important to Web applications that use relational databases for data storage. All developers will benefit from the authors' extensive practical advice (based on their unique professional backgrounds) about how to produce high-quality code and how to create professional, usable websites. After reading Programming the Web with Visual Basic .NET, you'll understand how to build and deploy top-quality, professionally designed, highly usable web applications using Visual Basic .NET.
http://www.apress.com/microsoft/vb-net/9781590590270
1-7 DEBUGGING AND PROGRAM VALIDATION
*************************************

(Thanks to Sergio Gelato for the good comments and wealth of information, thanks to Dan Pop for the important information, and to Craig Burley for the helpful comments)

+----------------------------------------------------------+
|  THE BEST WAY TO DEBUG A PROGRAM IS TO MAKE NO MISTAKES  |
+----------------------------------------------------------+

Using compiler options
----------------------
If you write programs in the 'right' way, you will have few syntax errors; a good compiler (called with the right options) will flag them for you, and you will correct them easily.

It is important to emphasize that most compilers are rather lenient by default, and must be invoked with special options in order to get many useful warnings.

Some of these compiler options enable compile-time checks such as:

  1) Disabling implicit type declarations
  2) Flagging standard violations

Others enable various run-time checks such as:

  1) Checking array bounds
  2) Trapping floating-point exceptions
  3) Special variable initializations that are supposed to produce floating-point exceptions if uninitialized variables are used

It is recommended that you create an alias (VMS global symbol) as a shorthand for these long switch sequences.

Recommended compiler options for debugging
============================================================
  FORTRAN/WARNING=ALL/STANDARD/CHECK=ALL              (VMS)
  f77 -u -ansi -fnonstd -C -XlistE -Xlisto /dev/tty   (SunOS)
  f77 -u -w0 -strictIEEE -trapuv -C                   (IRIX)
  f77 -u -std -fpe4 -check format \                   (DUNIX)
      output_conversion overflow underflow \
      -automatic -C
  f77 -u -std -fp4 -check overflow \                  (ULTRIX)
      underflow -automatic -C
  cf77 -Wf-enih-m0                                    (UNICOS)
  xlf -C -qflttrap=inv:ov:zero:en:imp                 (AIX)
  f77 -C +FPVZOuiD                                    (HP-UX)

A note on the SunOS compiler
----------------------------
The Xlist option triggers global program checking.
While there is no way to just toggle the checking, one can use:

  f77 $basicopts -XlistE -Xlisto /dev/tty

to get just the errors sent to the terminal, and:

  -Xlistwar320 -Xlistwar315 -Xlistwar314 -Xlistwar357
  -Xlistwar359 -Xlistwar391 -Xlistwar338 -Xlistwar355
  -Xlistwar380 -Xlistwar381 -Xlistwar205 -Xlistwar370

to turn off the messages that you will probably want to turn off.

A note on the xlf compiler
--------------------------
In some cases, it may be appropriate to use -qextchk as well. This outputs extra information about the types of subprogram arguments that the binder can then check at link time. Occasionally one may need to stretch the rules about the types of arguments (a routine may expect a COMPLEX array, but the caller may have declared the array as REAL (2,*)), in which case -qextchk may have to be omitted.

Current versions (3.1 and later) of XL Fortran also support the (recommended) -qlanglvl=77std, -qlanglvl=90std, -qlanglvl=90pure. Choose the one that is most appropriate to your code. When writing new code, try to have no warnings with -qlanglvl=90pure.

A note on the HP-UX f77 compiler
--------------------------------
The recommended options for f77 are:

  -C          (bounds checking, done at compile time)
  +FPVZOuiD   (preferred FP options, done at link time)

The preferred FP options are: trap overflow, divide by zero, and invalid operand; allow abrupt underflow if supported by the hardware; ignore inexact and underflow faults. You may have to say -Wl,+FPVZOuiD if you link with the f77 command.

A note on the VMS compiler
--------------------------
The VMS options here are overkill, and may produce unneeded error messages.

A note on the UNICOS f77 compiler
---------------------------------
UNICOS switches may not work when running parallel.
Using list files for correcting syntax errors
---------------------------------------------
When compiling, the compiler rapidly throws a list of errors at you; out of context they are hard to understand, and even harder to remember when you return to the editor. Using a windowing terminal is helpful, but you can do nicely without one.

Ask the compiler to generate a LIST FILE, then call your editor and put the list file in one EDITING BUFFER and your program in another. See the compiler options chapter for a table of compiler switches on various operating systems. The list file may have the same name as the source file, with FILE EXTENSION '.l', '.L' or '.LIS'.

Static code analyzers
---------------------
There are static tools that can help you flush out 'deeper' bugs. The Fortran FAQ is a good place to look for them. For example, you can get FTNCHEK by anonymous ftp from netlib (see the section on getting useful routines), and install it easily. It is supposed to be rather primitive compared with other products, but you may be surprised at the results.

Try the following switches:

  Purpose         Switches
  ---------       --------------------------------------
  More info       -calltree -crossref -reference -symtab
  Portability     -portability -f77 -sixchar
  Declarations    -declare
  Numerics        -division
  Common blocks   -volatile
  List file       -list

Other interesting checks are performed by default. The '-' character serves as a switch prefix on both VMS and UNIX. It is sometimes useful to turn off an option that produces too many spurious warnings, e.g. -notruncation -nopure, and occasionally even -nopretty.

Preparing for testing
---------------------
Then comes the real thing: you will have to test your program and find the errors that are the result of faulty reasoning - the program logic errors.

Check all code again before testing, NOT immediately after you wrote it, but when your mind is clear. You should then find all those little slips of the mind.
If you didn't do it already, now is the last time to apply defensive programming. A good and easy way is to check all variables upon entering a procedure for obvious errors; e.g. variables that should be positive/non-zero (array indexes, string lengths, etc.) should be checked and a warning issued if they fail the test.

Data input is best done inside special routines, so the above rule covers that case also, but of course the check should then be done upon exiting the routine.

If possible, it's recommended that you test the more complex routines separately, i.e. with a suitable driver - a simple main program that produces the correct input for the routine and helps you analyze the resulting output. The extra work involved in preparing the driver is worth it; debugging is much easier when you don't have to take complex inter-routine interactions into consideration.

Testing the program
-------------------
Whenever possible, routines should be tested individually for conformance to their specification. Prepare a test suite that probes the working of the various interesting combinations of inputs, trying to exercise all possible execution paths within the routine.

A good way to test a complete program is to have a 'verbose mode' in your program - a 'mode' that writes all key results to a FILE. You get into this mode when you supply a special value (say 1) to an input variable called, for example, 'DEBUG'. The DEBUG variable is passed to ALL subprograms; in every 'interesting' place the value of DEBUG is checked, and if it equals 1 you write the last result to file.

Some programmers use different 'debug levels', e.g. (DEBUG .EQ. 1) writes part of the information, (DEBUG .EQ. 2) writes more info, etc. Some programmers develop a 'subsystem classification': each value of the DEBUG variable corresponds to another part of the program, and when you pass the value assigned to some subsystem, all info relevant to this subsystem will be written to the 'debug file'.
Verbose mode statements can be easily removed in the final version of the program if you put them inside 'preprocessor if statements', like this cpp example:

  #ifdef DEBUG
        IF (DEBUG .EQ. 1) WRITE(*,*) ' X= ', X
  #endif

If you define DEBUG to the preprocessor, the FORTRAN 'if' statement will be compiled; otherwise it will not.

A remark: "#ifdef" and "#endif" are nonstandard, but can usually be supported on almost any system, since C preprocessors are nearly ubiquitous at this point. See the chapter on preprocessors.

Another way is to use the non-standard 'd lines' (debug lines) feature; some compilers can be instructed with a compiler switch to ignore or use lines that begin with the letter 'D' (in the first column).

+--------------------------------------------------------------------+
|  TRY INPUTS AS DIFFERENT AS POSSIBLE IN VERBOSE MODE, AND LOOK     |
|  CLOSELY AT THE RESULTS WRITTEN IN THE DEBUG FILE                  |
|                                                                    |
|  IF POSSIBLE, COMPARE RESULTS WITH OTHER RESULTS KNOWN TO BE GOOD  |
+--------------------------------------------------------------------+

Tracing bugs
------------
The first rule about debugging is staying cool: treat the misbehaving program as an intellectual exercise; ignore schedules and bosses.

1) You got some strange results that made you think there is a bug. Think again: are you sure they are not the correct output for some special input?

2) If you are not sure what causes the bug, DON'T try semi-random code modifications; that seldom works. Your aim should be to gather as much information as possible!

3) If you have a modular program, each part does a clearly defined task, so properly placed 'debug statements' can ISOLATE the malfunctioning procedure/code-section.

4) If you are familiar with a debugger, use it, but be careful not to get carried away by the many options and start playing.
Some common pitfalls
--------------------

Data-types
==========
 1) Using implicit variable declarations
 2) Using a non-intrinsic function without a type declaration
 3) Incompatible argument lists in CALL and the routine definition
 4) Incompatible declarations of a common block
 5) Using constants and parameters of incorrect (smaller) type
 6) Assuming that untyped integer constants get typed properly
 7) Assuming intrinsic conversion-functions take care of result type

Arithmetic
==========
 1) Using constants and parameters of incorrect type
 2) Careless use of automatic type promotions
 3) Assuming that dividing two integers will give a floating-point result (like in Pascal, where there is a special operator for integer division)
 4) Assuming integer exponents (e.g. 2**(-3)) are computed as floating-point numbers
 5) Using floating-point comparison tests; .EQ. and .NE. are particularly risky
 6) Loops with a REAL or DOUBLE PRECISION control variable
 7) Assuming that the MOD function with REAL arguments is exact
 8) Assuming real-to-integer assignment will work in all cases

Miscellaneous
=============
 1) Code lines longer than the allowed maximum
 2) Common blocks losing their values while the program runs
 3) Aliasing of dummy arguments and common block variables, or other dummy arguments in the same subprogram invocation
 4) Passing constants to a subprogram that modifies them
 5) Bad DO-loop parameters (see the DO loops chapter)
 6) TABs in input files - what you see is not what you get!
General
=======
 1) Assuming variables are initialized to zero
 2) Assuming variables keep their value between the execution of a RETURN statement and subsequent invocations
 3) Letting array indexes go out of bounds
 4) Depending on assumptions about the computing order of subexpressions
 5) Assuming short-circuit evaluation of expressions
 6) Using trigonometric functions with large arguments on some machines
 7) Inconsistent use of physical units

Remarks on the pitfalls list
============================
See the chapter on FORTRAN pitfalls for a fuller explanation. Some of these pitfalls can be located by the compiler (see the beginning of this chapter) or a static code checker like FTNCHEK; others may need careful debugging.

The improved syntax of Fortran 90 eliminates some of these pitfalls, e.g. loops with a floating-point control variable; other pitfalls will be detected by a compiler conforming to this standard. Note that a Fortran 90 compiler will often only be able to help you if you make use of the stricter checking features of the new standard:

 1) IMPLICIT NONE
 2) Explicit interfaces
 3) INTENT attributes
 4) PRIVATE attributes for module-wide data that should not be accessible to the outside

Return to contents page
http://www.ibiblio.org/pub/languages/fortran/ch1-7.html
Jaxb2.1.10 xjc generated classes extend JAXBElement<Type>
May 16, 2012 - 12:03

I am migrating from JAXB1 to JAXB2 (specifically JAXB RI 2.1.10) using JDK 1.6 (u31). Portion of my xsd:

  Information about the scorer

This generated public class ScorerId extends JAXBElement, which is creating a host of issues for me. The get/setScorerId which expected a string in the original code now fail. Please can you help me. I am stuck.
https://www.java.net/forum/topic/glassfish/metro-and-jaxb/jaxb2110-xjc-generated-classes-extend-jaxbelementtype
web interface for Metalsmith

Smithsonian is a web interface for Metalsmith. If you are already using Metalsmith, adopting Smithsonian could not be easier. Smithsonian extends Metalsmith so the exact same plugin/middleware system works; just swap out Metalsmith for Smithsonian.

```js
var Metalsmith = require('metalsmith');

Metalsmith(__dirname)
  .use(markdown())
  .use(templates('handlebars'))
  .build();
```

..becomes..

```js
var Smithsonian = require('smithsonian');

Smithsonian(__dirname)
  .use(markdown())
  .use(templates('handlebars'))
  .build()        // still builds as expected
  .listen(8080);  // listening on localhost:8080
```

Note: Smithsonian calls build() internally when listen() is called.

```
npm install smithsonian
```

Smithsonian is really just a basic file explorer that only works with a Metalsmith source directory. It does not serve the built files; use http-server for that. Smithsonian is really useful for remote deploys and as an administration interface. Smithsonian is like an extremely minimal CMS, but for Metalsmith.

Say you have Metalsmith building static content behind Nginx. Expose Smithsonian (preferably backed by forever) in the Nginx config and you now have an easily accessible administration tool to create, edit, and delete source files. No need to build locally and deploy with git or any other manual tool.

Building a simple blog for a company/client? As long as they can handle YAML being at the top of the file, Smithsonian is good enough to hand off to clients.

Smithsonian only exposes the plugin system of Metalsmith, which is just use() and build(). All other method calls outside of Metalsmith plugins will need to use Smithsonian.metalsmith, which is Smithsonian's Metalsmith instance. Smithsonian also exposes options(), which can override Smithsonian defaults (see Configuration), and listen(), which will start the web server that serves the interface.

Smithsonian is very configurable. Need authentication? See auth and authKeys. Don't want to serve from root? See namespace. Don't like the default CSS? See static.
Don't like the default HTML? See views, viewEngine, and viewOptions. Don't like pretty much anything? Change it.

- appName String: displays in the navbar and footer. Good for custom branding like project or company names.
- auth Boolean: whether to use authentication
- authKeys Object: hash containing username/password
- autoBuild Boolean: whether or not to build on every file create, update, or delete
- credits Boolean: whether to show links to Smithsonian on Github
- extension String: what file extension to use for new files
- filename Function: generates a filename, given the user-supplied name and the extension
- filedata Function: generates the default file contents, given the user-supplied name and the extension
- handleError Function: handles any error in Smithsonian, mostly filesystem errors
- namespace String: if set to, say, "/admin", would serve everything Smithsonian through localhost:8080/admin/
- sessionKeys Array[String]: used in initializing Express's cookie-session
- static String: the directory serving the favicon.ico and CSS files
- views String: maps to Express's #set 'views'; the directory for template files
- viewEngine String: maps to Express's #set 'view engine'
- viewOptions Object: maps to Express's #set 'view options'

Here are all of the override-able defaults:

```js
appName: 'Smithsonian',
auth: false,
authKeys: {
  username: 'admin',
  password: 'password'
},
autoBuild: false,
credits: true,
extension: 'md',
filename: function(name, ext) {
  var d, date, desc, m, y;
  date = new Date();
  y = date.getFullYear();
  m = date.getMonth() + 1;
  d = date.getDate();
  desc = name.toLowerCase().split(' ').join('-');
  return "" + y + "-" + m + "-" + d + "-" + desc + "." + ext;
},
filedata: function(name) {
  var timestamp;
  timestamp = (function() {
    var d, date, hr, m, mn, sc, y;
    date = new Date();
    y = date.getFullYear();
    m = date.getMonth() + 1;
    d = date.getDate();
    hr = date.getHours() + 1;
    mn = date.getMinutes() + 1;
    sc = date.getSeconds() + 1;
    return "" + y + "-" + m + "-" + d + " " + hr + ":" + mn + ":" + sc;
  })();
  return "---\nlayout: post\ntitle: \"" + name + "\"\ndate: " + timestamp + "\n---";
},
handleError: function(err) {},
namespace: '',
sessionKeys: ['undercover', 'renegade'],
"static": path.normalize(__dirname + '/..' + '/public'),
views: path.normalize(__dirname + '/..' + '/view'),
viewEngine: 'jade',
viewOptions: {
  layout: false,
  self: true
}
```

Thanks for using Smithsonian. Or for at least reading this far down into the README.
https://www.npmjs.com/package/smithsonian
Wiring Java Applications with Spring

If you are not building an application from scratch, but must bow to the constraints of an existing code base, you can wire dependencies using existing factory objects with Spring's MethodInvokingFactoryBean. To do this, you simply define properties called "targetMethod" and "targetClass" to specify the name of the class and the static method to invoke to retrieve an instance.

A key benefit to creating beans via an ApplicationContext is that you have the ability to invoke 'lifecycle' methods on the objects. ApplicationContext (by way of extending BeanFactory) implements two lifecycle interfaces: InitializingBean and DisposableBean.

So, suppose you wanted to set up a connection pool in SalesManagerJdbcDAO using the dbProperties map we specified in the configuration file. You could do this by coding an init() method on this object, and then adding the following markup for the bean:

<bean id="salesManagerDAO" class="examples.SalesManagerJdbcDAO"
      init-method="init"/>

After the bean has been instantiated, meaning all properties have been set, Spring automatically invokes the init-method 'init()' for you. Likewise, if you wanted to dispose of the resources by invoking a destroy() method, you could do so by adding an additional attribute:

<bean id="salesManagerDAO" class="examples.SalesManagerJdbcDAO"
      init-method="init" destroy-method="destroy"/>

Building on the connection pool scenario, suppose you decided that you would rather not add these lifecycle methods directly to salesManagerDAO, but prefer to externalize those services as a separate class. You first write this Java class:

// Initialize and destroy the DAO
public class DaoConfigurer {

    public void init() {
        SalesManagerDAO salesManagerDAO =
            (SalesManagerDAO) ctx.getBean("salesManagerDAO");
        Map dbProperties = salesManagerDAO.getDbProperties();
        // perform initialization work...
        ...
    }

    public void destroy() {
        ...
    }
}

As you've seen, Spring automatically detects and coordinates the initialization of dependencies specified with <ref-bean>.
However, DaoConfigurer does not have a reference to the DAO instance. Rather, it obtains a reference from the ApplicationContext inside the init method. Unless something is done to prevent it, an error may result if the ApplicationContext attempts to create DaoConfigurer before it creates the DAO bean. Spring's solution to this problem is the depends-on attribute. Essentially, it tells Spring to stop what it is doing and instantiate the dependent object first. Here's the correct entry for this in applicationContext.xml:

<bean id="daoConfigurer" class="examples.DaoConfigurer"
      depends-on="salesManagerDAO"/>

Although it's beyond the scope of this article, the ApplicationContext provides many other features. One of these is the capability to specify bean 'post processors' to provide a callback mechanism to Spring-managed beans after they have been created. These callbacks enable you to do various things, such as perform additional 'initialization' tasks after a bean's instantiation. One example of a bean post processor is the PropertyPlaceholderConfigurer, which allows you to replace bean values with those specified in a properties file.

Last, it was mentioned that Spring provides utilities for obtaining the ApplicationContext singleton for use in your code. Unlike JNDI, Spring doesn't tie you to J2EE. You can use Spring in just about any Java-related project. However, if you are writing a J2EE application, you can make use of Spring's ContextLoader to access your ApplicationContext registry. This can be achieved by adding a listener, ContextLoaderListener, to a web.xml file:

<listener>
    <listener-class>
        org.springframework.web.context.ContextLoaderListener
    </listener-class>
</listener>

An additional <context-param>, named contextConfigLocation, may be added to specify the location of your ApplicationContext's configuration file. It is only necessary if the file is somewhere other than /WEB-INF/applicationContext.xml, which is where Spring looks by default.
You may also split your configuration data among multiple files. In this case, simply delimit the list of files in contextConfigLocation. Once you have configured web.xml, there are a number of ways to obtain the context after application startup. For example, if you are using the Struts framework, Spring provides an ActionSupport class that extends Struts' Action. ActionSupport contains a method called getWebApplicationContext() that returns the ApplicationContext for your use.

Conclusion

This article provided an introduction to the Spring Framework, and more specifically to Spring's most powerful feature: its ability to wire applications through Inversion of Control. With Spring, it is easy to create an application that is both robust and flexible to future changes. Please keep in mind that Spring is much more than just an Inversion of Control container and offers a wide array of features to support enterprise development. You will find extensive documentation at the Spring Web site to assist you in your work.
http://www.developer.com/java/other/article.php/10936_3504506_3/Wiring-Java-Applications-with-Spring.htm
Post removed at user's request. GOOD question... I only use Option #1 because thats what Java taught me.. It makes sense in that its not direct code involved within the framework, but at the same time, with option #2, you are saying that your namespace is using other namespaces.. I want to hear more on other peoples thoughts... Jake Internally we use option 2. I seem to remember there being a good rationale for making that decision when we did, but it escapes me now I'll ask around and see if i can come up with any justification for it... We switched to using Option 2 after a situation where a namespace that was being referenced wasn't working. Actually we use option 1 for System. references and option 2 for our own dll references. I wasn't directly involved but I remember our lead developer expressing astonishment that you could even do option 2 and that it got around the problem one of our developers was experiencing. I'll try and find out more on Monday. Post removed at user's request. The way I had thought that the using directive worked was essentially saying, if you dont find it in the current namespace look in these name spaces for it. Or in the case of using Data = System.Data is a way to shorten the name space. Also looking at the IL there really are no using, all the types have fully qualified namespaces. So I would think that it is a matter of preference. i've always done the first one, but i wondered about doing it the other way. i never tried it, but I assume they can go anywhere in the file. - irascian wrote: We switched to using Option 2 after a situation where a namespace that was being referenced wasn't working. Actually we use option 1 for System. references and option 2 for our own dll references. It turns out that there's a small semantic difference between the two. This can cause problems when you have a class with the same name as it's enclosing namespace (this is, in general, a really bad practice because it just confuses everyone. 
But it does happen). As an example, consider this class:

namespace NamespaceMangler
{
    public class NamespaceMangler
    {
        public static void Mangle() {}
    }
}

Things get confusing when the static Mangle() member needs to be called from types that exist in another namespace. One way to get this code to compile is to use Option 2 without a namespace-qualified type, like so:

// If you put the using statement out here, you'll get
// a compiler error.
namespace AnotherNamespace
{
    // Putting it here works, though...
    using NamespaceMangler;

    public class AnotherClass
    {
        public AnotherClass()
        {
            NamespaceMangler.Mangle();
        }
    }
}

If you were to use Option 1 and put the using outside of the namespace declaration, you'd get a compiler error complaining that NamespaceMangler does not declare a type called Mangle. The poor compiler has become confused, thinking that you meant NamespaceMangler-the-namespace instead of NamespaceMangler-the-type.

Interestingly enough, if you use Option 2 but attempt to clarify things by namespace-qualifying the type name, you'll get a compiler error. For example, this code will not compile:

namespace AnotherNamespace
{
    using NamespaceMangler;

    public class AnotherClass
    {
        public AnotherClass()
        {
            NamespaceMangler.NamespaceMangler.Mangle();
        }
    }
}

The part of the C# spec that governs this behavior is Section 10.8, paragraph 4. Apparently, it's confusing enough to motivate the return of the global namespace resolution operator for C# 2.0, a.k.a. the "double snakebite" operator (::). Prefixing a type name with :: will ensure that namespace resolution begins at the global namespace, thereby avoiding this whole bloody mess entirely.

The bottom line: don't create types with the same name as their enclosing namespace! -steve

- stevem wrote:. Day late and dollar short to be the hero on this one, but to back up stevem, here is a little bit of code I use to create "template" files. I have an exe that uses this class to create .cs files for me.
Saves me the typing and gets right to the code. <note>First time posting code, forgive me if it comes off formatted poorly</note>

using System;
using System.CodeDom;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

namespace CodeGenerator
{
    /// <summary>
    /// Summary description for DOM.
    /// </summary>
    public class DOM
    {
        private string fileName;
        private CodeCompileUnit codeCompileUnit;
        private CodeNamespace codeNamespace;
        private CodeDomProvider codeDomProvider;
        private ICodeGenerator generator;

        public DOM()
        {
            codeCompileUnit = new CodeCompileUnit();
            codeNamespace = new CodeNamespace();
            codeNamespace.Name = "DefaultNamespace";
        }

        public string Namespace
        {
            get { return this.codeNamespace.Name; }
            set { this.codeNamespace.Name = value; }
        }

        public string FileName
        {
            get { return this.fileName; }
            set { this.fileName = value; }
        }

        public void AddUsingStatement(string import)
        {
            CodeNamespaceImport newImport = new CodeNamespaceImport(import);
            this.codeNamespace.Imports.Add(newImport);
        }

        public void AddType(string type)
        {
            CodeTypeDeclaration newType = new CodeTypeDeclaration(type);
            this.codeNamespace.Types.Add(newType);
            CodeConstructor constructor = new CodeConstructor();
            constructor.Name = type;
            constructor.Attributes = MemberAttributes.Public;
            this.codeNamespace.Types[0].Members.Add(constructor);
        }

        public void AddMember(string type, string name)
        {
            CodeMemberField codeMemberField = new CodeMemberField(type, name.ToLower());
            this.codeNamespace.Types[0].Members.Add(codeMemberField);
            CodeMemberProperty codeMemberProperty = new CodeMemberProperty();
            codeMemberProperty.Name = name;
            codeMemberProperty.Type = new CodeTypeReference(type);
            codeMemberProperty.Attributes = MemberAttributes.Public;
            codeMemberProperty.GetStatements.Add(new CodeMethodReturnStatement(new CodeFieldReferenceExpression(new CodeThisReferenceExpression(), name.ToLower())));
            codeMemberProperty.SetStatements.Add(new CodeAssignStatement(new CodeFieldReferenceExpression(new
CodeThisReferenceExpression(), name.ToLower()), new CodePropertySetValueReferenceExpression()));
            this.codeNamespace.Types[0].Members.Add(codeMemberProperty);
        }

        public void GenerateFile()
        {
            codeDomProvider = new CSharpCodeProvider();
            generator = codeDomProvider.CreateGenerator(this.fileName);
            codeCompileUnit.Namespaces.Add(codeNamespace);
            System.IO.StreamWriter streamWriter = new System.IO.StreamWriter(this.fileName, false);
            CodeGeneratorOptions generatorOptions = new CodeGeneratorOptions();
            generatorOptions.BracingStyle = "C";
            generator.GenerateCodeFromCompileUnit(this.codeCompileUnit, streamWriter, generatorOptions);
            streamWriter.Close();
        }
    }
}

- stevem wrote: The bottom line: don't create types with the same name as their enclosing namespace!

Can somebody inform the TabletPC team of this rule!? The Ink class in the Microsoft.Ink namespace is really annoying :< Change its name for Longhorn!?

Post removed at user's request.
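As an aside, the pattern the snippet above implements (describe a class in memory, then emit source text to a file) is easy to sketch in other languages. The following Python code is a purely illustrative analogue, not a translation of the C# CodeDom API; every name in it is invented here:

```python
# Hypothetical, minimal analogue of the CodeDom template generator above:
# describe a class's members in memory, then emit source text.

def generate_class(namespace, class_name, members):
    """Emit Python source for a class with property-backed fields.

    members is a list of (type_hint, Name) pairs, mirroring AddMember above.
    """
    lines = ["# Namespace: %s" % namespace, "class %s:" % class_name]
    lines.append("    def __init__(self):")
    for mtype, name in members:
        # Backing field, like the CodeMemberField in the C# version.
        lines.append("        self._%s = None  # %s" % (name.lower(), mtype))
    for _, name in members:
        # Read-only property, like the CodeMemberProperty getter.
        lines.append("    @property")
        lines.append("    def %s(self):" % name.lower())
        lines.append("        return self._%s" % name.lower())
    return "\n".join(lines)

source = generate_class("CodeGenerator", "Person", [("str", "Name"), ("int", "Age")])
print(source)
```

The emitted text could then be written to a .cs-style template file exactly as the GenerateFile method does with a StreamWriter.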
https://channel9.msdn.com/Forums/TechOff/5948-C-Best-Practice-Question
CC-MAIN-2015-35
en
New-SmbShare cmdlet.

Parameters

-AsJob

-CachingMode<CachingMode>

-CATimeout<UInt32>
Specifies the continuous availability timeout for the share.

-ChangeAccess<String[]>
Specifies which user will be granted modify permission to access the share. Multiple users can be specified by using a comma-separated list.

-Temporary

The Microsoft.Management.Infrastructure.CimInstance object is a wrapper class that displays Windows Management Instrumentation (WMI) objects. The path after the pound sign (#) provides the namespace and class name for the underlying WMI object. The MSFT_SmbShare object represents the new SMB share.

Examples

EXAMPLE 1
This example creates a new SMB share.

EXAMPLE 2
This example creates a new encrypted SMB share.

Related topics
https://technet.microsoft.com/en-us/library/jj635722(v=wps.620).aspx
sigh... -.-

When row is at 0 the row goes to 50. When column is at 0 the column goes to 0. When row is at 1 the row goes to 100. When the column is at 1 the column goes to 50. The row is always ahead of the...

Is Y also computed outside the loop? It says that: The i'th row of blocks has its upper edge aligned with i (row) * ROWSEPARATION + ROWZERO. The j'th column of blocks has its left edge...

for every row equal to 0 to BLOCKROWS
  for every column equal to 0 to NBLOCKS
    create x
    create y
    create colour
    perform addbrick (list created)
    perform row * ROWSEPARATION + ROWZERO
    perform...

Do you mean the variables in i and j? If so, i is the row and j is the column.

ohhh, so create row, create column; 6) and 7) are computed together and added to the list?

for every i equal to 0 to BLOCKROWS
  for every j equal to 0 to NBLOCKS
    create x and y to compute location
    create colour
    perform addbrick
    create i
    create j
    perform i * j
    add (i * j) to list...

for every i equal to 0 to BLOCKROWS
  for every j equal to 0 to NBLOCKS
    perform addbrick
    perform i added to j
    add (i added to j) to location of list

i and j are put on a grid, j depends on i, so.. for i, for j, do i * j? Still not sure, and then put into the list?

The row is vertically separated and then moved to a location. The column is enlarged at that row's location.

ROWZERO is the vertical offset, in pixels, from the top of the screen to the top of the first row of blocks. ROWSEPARATION is the vertical separation between rows. So for every i on the upper...

Maybe that is iterated through every i and put into the block, same with the j'th column.

That's exactly where I'm lost, I have no idea what to do with that code. I just know i * ROWSEPARATION + ROWZERO is supposed to be implemented, I have no idea where though, same with the j'th column.
for i
  do i = i * ROWSEPARATION + ROWZERO
for j
  do j = j * BLOCKWIDTH

"do things needed for this row", "do things needed for this column" is missing, is that a typo? And also, for "do things needed for this row", would it be i = i * ROWSEPARATION + ROWZERO; or is there a...

Yes they do, but I didn't make it keep track of the row and column, or is that automatic because it's a for-loop?

public static java.util.List<acm.graphics.GRect> createPlayfield() {
    Random r = new Random();
    List<GRect> l = null;
    int x = 0;
    int y = ROWZERO - BLOCKHEIGHT;
    Color c =...

oh, maybe for 4) I do for(GRect e : l) this.add(e);

4) I'm not sure.. I have to place the GRects onto the grid, probably would be l.add(i,j) or l.add(x,y) because of the coordinates? Maybe l.setLocation(x,y)... 3) it would be...

Also, since the grid has a size, would I need to put a loop for that?

public static java.util.List<acm.graphics.GRect> createPlayfield() {
    Random r = new Random();
    List<GRect> l = null;
    int x = 0;
    int y = ROWZERO - BLOCKHEIGHT;
    Color...

This should give you a good example of how the methods are used in this program.

import acm.graphics.*;
import acm.program.*;
import java.awt.Color;
import java.util.Random;
import...

You're thinking way too complicated, there are no arrays in this at all. This program is based on the game Breakout. The package imported...

1) Create a static method (createPlayfield)
2) Each GRect is randomly coloured
3) Each GRect is of size BLOCKWIDTH x BLOCKHEIGHT
4) The blocks are placed in a grid
5) The grid is of size...
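The coordinate arithmetic being discussed in this thread can be sketched outside Java. Here is a hedged Python illustration; the constant values are guesses chosen only to match the numbers quoted above (row 0 at 50, row 1 at 100, column 1 at 50), not the actual assignment's values:

```python
# Hypothetical constants standing in for the assignment's named values.
ROWZERO = 50        # vertical offset to the top of the first row
ROWSEPARATION = 50  # vertical distance between successive rows
BLOCKWIDTH = 50     # width of one block

def block_position(i, j):
    """Upper-left corner of the block in row i, column j."""
    x = j * BLOCKWIDTH                 # j'th column: left edge at j * BLOCKWIDTH
    y = i * ROWSEPARATION + ROWZERO    # i'th row: top edge at i * ROWSEPARATION + ROWZERO
    return x, y

# The nested loops from the pseudocode, collecting one position per brick.
positions = [block_position(i, j) for i in range(2) for j in range(2)]
print(positions)
```

This reproduces the observations quoted earlier in the thread: (0, 0) maps to x = 0, y = 50, and (1, 1) maps to x = 50, y = 100.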
http://www.javaprogrammingforums.com/search.php?s=3a66fbe96d32739885e522b16636ba4b&searchid=1724789
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>   /* for exit() */
#include <sys/select.h>
#include <unistd.h>

void *f (void *foo)
{
  char buf[128];
  //pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, NULL);
  while (1) {
    read (0, buf, sizeof(buf));
  }
}

int main (void)
{
  pthread_t t;
  pthread_create (&t, NULL, f, NULL);
  sleep (1);
  pthread_cancel (t);
  pthread_join (t, NULL);
  exit(0);
}

read() is not behaving as a cancellation point; only setting the cancel type to asynchronous permits this testcase to terminate. We do have the pthread_setcanceltype glibc/libpthread hook in the forward structure, but we are not using it: the LIBC_CANCEL_ASYNC macros are void, and we're not using them in the mig msg call either.

Provenance

IRC, OFTC, #debian-hurd, 2013-04-15

<paravoid> so, let me say a few things about the bug in the first place
<paravoid> the package builds and runs a test suite
<paravoid> the second test in the test suite blocks forever
<paravoid> a blocked pthread_join is what I see
<paravoid> I'm unsure why
<paravoid> have you seen anything like it before?
<youpi> whenever the thread doesn't actually terminate, sure
<youpi> what is the thread usually blocked on when you cancel it?
<paravoid> this is a hurd-specific issue
<paravoid> works on all other arches
<youpi> could be just that all other archs have more relaxed behavior
<youpi> thus the question of what exactly is supposed to be happening
<youpi> apparently it is inside a select?
<youpi> it seems select is not cancellable here
<pinotree> wasn't the patch you sent?
<youpi> no, my patch was about signals
<youpi> not cancellation
<pinotree> k
<youpi> (even if that could be related, of course)
<paravoid> how did you see that?
<paravoid> what's the equivalent of strace?
<youpi> thread 3 is inside _hurd_select
<paravoid> thread 1 is blocked on join
<paravoid> but the code is
<paravoid> if(gdmaps->reload_thread_spawned) {
<paravoid> pthread_cancel(gdmaps->reload_tid);
<paravoid> pthread_join(gdmaps->reload_tid, NULL);
<paravoid> }
<paravoid> so cancel should have killed the thread
<youpi> cancelling a thread is a complex matter
<youpi> there are cancellation points
<youpi> e.g. a thread performing while(1); can't be cancelled
<paravoid> thread 3 is just a libev event loop
<youpi> yes, "just" calling poll, the most complex system call of unix :)
<youpi> paravoid: anyway, don't look for a bug in your program, it's most likely a bug in glibc, thanks for the report
<paravoid> I think it all boils down to a problem cancelling a thread in poll()
<youpi> yes
<youpi> paravoid: ok, actually with the latest libc it does work
<paravoid> oh?
<youpi> where latest = not uploaded yet :/
<paravoid> did you test this on exodar?
<youpi> pinotree: that's the libpthread_cancellation.diff I guess
<paravoid> because I commented out the join :)
<youpi> paravoid: in the root, yes
<youpi> well, I tried my own program
<paravoid> oh, okay
<youpi> which is indeed hanging inside select (or just read) in the chroot
<youpi> but not in the root
<pinotree> ah, richard's patch
<paravoid> url?
<youpi> I've installed the build-dep in the root, if you want to try
<paravoid> tried in root, still fails
<youpi> could you keep the process running?
<paravoid> done
<youpi> Mmm, but the thread running gdmaps_reload_thread never set the cancel type to async?
<youpi> that said I guess read and select are supposed to be cancellation points
<youpi> thus cancel_deferred should be working, but they are not
<youpi> it seems it's cancellation points which have just not been implemented
<youpi> (they happen to be one of the most obscure things in posix)

IRC, freenode, #hurd, 2013-04-15

<youpi> but yes, there is still an issue, with PTHREAD_CANCEL_DEFERRED
<youpi> how calls like read() or select() are supposed to test cancellation?
<pinotree> iirc there are the LIBC_CANCEL_* macros in glibc
<pinotree> eg sysdeps/unix/sysv/linux/pread.c
<youpi> yes
<youpi> but in our libpthredaD?
<pinotree> could it be we lack the libpthread → glibc bridge of cancellation stuff?
<youpi> we do have pthread_setcancelstate/type forwards
<youpi> but it seems the default LIBC_CANCEL_ASYNC is void
<pinotree> i mean, so when you cancel a thread, you can get that cancel status in libc proper, just like it seems done with LIBC_CANCEL_* macros and nptl
<youpi> as I said, the bridge is there
<youpi> we're just not using it in glibc
<youpi> I'm writing an open_issues page

IRC, freenode, #hurd, 2013-04-16

<braunr> youpi: yes, we said some time ago that it was lacking
https://www.gnu.org/software/hurd/open_issues/libpthread_cancellation_points.html
(For more resources related to this topic, see here.)

Installing SciPy

SciPy is the scientific Python library and is closely related to NumPy. In fact, SciPy and NumPy used to be one and the same project many years ago. In this recipe, we will install SciPy.

How to do it...

In this recipe, we will go through the steps for installing SciPy.

Installing from source: If you have Git installed, you can clone the SciPy repository using the following command:

git clone
python setup.py build
python setup.py install --user

This installs to your home directory and requires Python 2.6 or higher. Before building, you will also need to install the following packages on which SciPy depends:

BLAS and LAPACK libraries
C and Fortran compilers

There is a chance that you have already installed this software as a part of the NumPy installation.

Installing SciPy on Linux: Most Linux distributions have SciPy packages.

Installing SciPy on Mac OS X: Apple Developer Tools (XCode) is required, because it contains the BLAS and LAPACK libraries. It can be found either in the App Store, or on the installation DVD that came with your Mac, or you can get the latest version from Apple Developer's connection at. Make sure that everything, including all the optional packages, is installed. You probably already have a Fortran compiler installed for NumPy. The binaries for gfortran can be found at.

Installing SciPy using easy_install or pip: Install with either of the following two commands:

sudo pip install scipy
easy_install scipy

Installing on Windows: If you have Python installed already, the preferred method is to download and use the binary distribution. Alternatively, you may want to install the Enthought Python distribution, which comes with other scientific Python software packages.

Check your installation: Check the SciPy installation with the following code:

import scipy
print scipy.__version__
print scipy.__file__

This should print the correct SciPy version.

How it works...
Most package managers will take care of any dependencies for you. However, in some cases, you will need to install them manually. Unfortunately, this is beyond the scope of this book. If you run into problems, you can ask for help at:

The #scipy IRC channel of freenode, or
The SciPy mailing lists at

Installing PIL

PIL, the Python Imaging Library, is a prerequisite for the image processing recipes in this article.

How to do it...

Let's see how to install PIL.

Installing PIL on Windows: Install using the Windows executable from the PIL website.

Installing on Debian or Ubuntu: On Debian or Ubuntu, install PIL using the following command:

sudo apt-get install python-imaging

Installing with easy_install or pip: At the time of writing this book, it appeared that the package managers of Red Hat, Fedora, and CentOS did not have direct support for PIL. Therefore, please follow this step if you are using one of these Linux distributions. Install with either of the following commands:

easy_install PIL
sudo pip install PIL

Resizing images

In this recipe, we will load a sample image of Lena, which is available in the SciPy distribution, into an array. The picture in question is completely safe for work. We will resize the image using the repeat function. This function repeats an array, which in practice means resizing the image by a certain factor.

Getting ready

A prerequisite for this recipe is to have SciPy, Matplotlib, and PIL installed.

How to do it...

Load the Lena image into an array. SciPy has a lena function, which can load the image into a NumPy array:

lena = scipy.misc.lena()

Some refactoring has occurred since version 0.10, so if you are using an older version, the correct code is:

lena = scipy.lena()

Check the shape. Check the shape of the Lena array using the assert_equal function from the numpy.testing package—this is an optional sanity check test:

numpy.testing.assert_equal((LENA_X, LENA_Y), lena.shape)

Resize the Lena array.
Resize the Lena array with the repeat function. We give this function a resize factor in the x and y direction:

resized = lena.repeat(yfactor, axis=0).repeat(xfactor, axis=1)

Plot the arrays. We will plot the Lena image and the resized image in two subplots that are a part of the same grid. Plot the Lena array in a subplot:

matplotlib.pyplot.subplot(211)
matplotlib.pyplot.imshow(lena)

The Matplotlib subplot function creates a subplot. This function accepts a 3-digit integer as the parameter, where the first digit is the number of rows, the second digit is the number of columns, and the last digit is the index of the subplot starting with 1. The imshow function shows images. Finally, the show function displays the end result.

Plot the resized array in another subplot and display it. The index is now 2:

matplotlib.pyplot.subplot(212)
matplotlib.pyplot.imshow(resized)
matplotlib.pyplot.show()

The following screenshot is the result with the original image (first) and the resized image (second):

The following is the complete code for this recipe:

import scipy.misc
import sys
import matplotlib.pyplot
import numpy.testing

# This script resizes the Lena image from Scipy.
if(len(sys.argv) != 3):
    print "Usage python %s yfactor xfactor" % (sys.argv[0])
    sys.exit()

# Loads the Lena image into an array
lena = scipy.misc.lena()

# Lena's dimensions
LENA_X = 512
LENA_Y = 512

# Check the shape of the Lena array
numpy.testing.assert_equal((LENA_X, LENA_Y), lena.shape)

# Get the resize factors
yfactor = float(sys.argv[1])
xfactor = float(sys.argv[2])

# Resize the Lena array
resized = lena.repeat(yfactor, axis=0).repeat(xfactor, axis=1)

# Check the shape of the resized array
numpy.testing.assert_equal((yfactor * LENA_X, xfactor * LENA_Y), resized.shape)

# Plot the Lena array
matplotlib.pyplot.subplot(211)
matplotlib.pyplot.imshow(lena)

# Plot the resized array
matplotlib.pyplot.subplot(212)
matplotlib.pyplot.imshow(resized)
matplotlib.pyplot.show()

How it works...
The repeat function repeats arrays, which, in this case, resulted in changing the size of the original image. The Matplotlib subplot function creates a subplot. The imshow function shows images. Finally, the show function displays the end result.

See also

The Installing SciPy recipe
The Installing PIL recipe

Creating views and copies

It is important to know when we are dealing with a shared array view, and when we have a copy of the array data. A slice, for instance, will create a view. This means that if you assign the slice to a variable and then change the underlying array, the value of this variable will change. We will create an array from the famous Lena image, copy the array, create a view, and, at the end, modify the view.

Getting ready

The prerequisites are the same as in the previous recipe.

How to do it...

Let's create a copy and views of the Lena array:

Create a copy of the Lena array:

acopy = lena.copy()

Create a view of the array:

aview = lena.view()

Set all the values of the view to 0 with a flat iterator:

aview.flat = 0

The end result is that only one of the images shows the Playboy model. The other ones get censored completely:

The following is the code of this tutorial showing the behavior of array views and copies:

import scipy.misc
import matplotlib.pyplot

lena = scipy.misc.lena()
acopy = lena.copy()
aview = lena.view()

# Plot the Lena array
matplotlib.pyplot.subplot(221)
matplotlib.pyplot.imshow(lena)

# Plot the copy
matplotlib.pyplot.subplot(222)
matplotlib.pyplot.imshow(acopy)

# Plot the view
matplotlib.pyplot.subplot(223)
matplotlib.pyplot.imshow(aview)

# Plot the view after changes
aview.flat = 0
matplotlib.pyplot.subplot(224)
matplotlib.pyplot.imshow(aview)
matplotlib.pyplot.show()

How it works...

As you can see, by changing the view at the end of the program, we changed the original Lena array. This resulted in having three blue (or black if you are looking at a black and white image) images—the copied array was unaffected.
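The copy/view distinction is easy to verify on a tiny array without loading any image. This is a standalone sketch (not code from the book) showing that mutating a view reaches the base array, while a copy stays untouched:

```python
import numpy as np

a = np.arange(4)   # base array: [0, 1, 2, 3]
acopy = a.copy()   # independent data
aview = a.view()   # shares its data buffer with a

# Zeroing the view through the flat iterator also zeroes a,
# exactly as the recipe's aview.flat = 0 censors the Lena array.
aview.flat = 0

print(a)       # the base array was modified through the view
print(acopy)   # the copy keeps the original values
```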
It is important to remember that views are not read-only.

Flipping Lena

We will be flipping the SciPy Lena image—all in the name of science, of course, or at least as a demo. In addition to flipping the image, we will slice it and apply a mask to it.

How to do it...

The steps to follow are listed below:

Plot the flipped image. Flip the Lena array around the vertical axis using the following code:

matplotlib.pyplot.imshow(lena[:,::-1])

Plot a slice of the image. Take a slice out of the image and plot it. In this step, we will have a look at the shape of the Lena array. The shape is a tuple representing the dimensions of the array. The following code effectively selects the left-upper quadrant of the Playboy picture:

matplotlib.pyplot.imshow(lena[:lena.shape[0]/2, :lena.shape[1]/2])

Apply a mask to the image. Apply a mask to the image by finding all the values in the Lena array that are even (this is just arbitrary for demo purposes). Copy the array and change the even values to 0. This has the effect of putting lots of blue dots (dark spots if you are looking at a black and white image) on the image:

mask = lena % 2 == 0
masked_lena = lena.copy()
masked_lena[mask] = 0

All these efforts result in a 2 by 2 image grid, as shown in the following screenshot:

The following is the complete code for this recipe:

import scipy.misc
import matplotlib.pyplot

# Load the Lena array
lena = scipy.misc.lena()

# Plot the Lena array
matplotlib.pyplot.subplot(221)
matplotlib.pyplot.imshow(lena)

# Plot the flipped array
matplotlib.pyplot.subplot(222)
matplotlib.pyplot.imshow(lena[:,::-1])

# Plot a slice array
matplotlib.pyplot.subplot(223)
matplotlib.pyplot.imshow(lena[:lena.shape[0]/2,:lena.shape[1]/2])

# Apply a mask
mask = lena % 2 == 0
masked_lena = lena.copy()
masked_lena[mask] = 0
matplotlib.pyplot.subplot(224)
matplotlib.pyplot.imshow(masked_lena)
matplotlib.pyplot.show()

See also

The Installing SciPy recipe
The Installing PIL recipe

Fancy indexing

In this tutorial, we will apply
fancy indexing to set the diagonal values of the Lena image to 0. This will draw black lines along the diagonals, crossing it through, not because there is something wrong with the image, but just as an exercise. Fancy indexing is indexing that does not involve integers or slices, which is normal indexing.

How to do it...

We will start with the first diagonal:

Set the values of the first diagonal to 0. To set the diagonal values to 0, we need to define two different ranges for the x and y values:

lena[range(xmax), range(ymax)] = 0

Set the values of the other diagonal to 0. To set the values of the other diagonal, we require a different set of ranges, but the principles stay the same:

lena[range(xmax-1,-1,-1), range(ymax)] = 0

At the end, we get this image with the diagonals crossed off, as shown in the following screenshot:

The following is the complete code for this recipe:

import scipy.misc
import matplotlib.pyplot

# This script demonstrates fancy indexing by setting values
# on the diagonals to 0.

# Load the Lena array
lena = scipy.misc.lena()
xmax = lena.shape[0]
ymax = lena.shape[1]

# Fancy indexing
# Set values on diagonal to 0
# x 0-xmax
# y 0-ymax
lena[range(xmax), range(ymax)] = 0

# Set values on other diagonal to 0
# x xmax-0
# y 0-ymax
lena[range(xmax-1,-1,-1), range(ymax)] = 0

# Plot Lena with diagonal lines set to 0
matplotlib.pyplot.imshow(lena)
matplotlib.pyplot.show()

How it works...

We defined separate ranges for the x values and y values. These ranges were used to index the Lena array. Fancy indexing is performed based on an internal NumPy iterator object. The following three steps are performed:

The iterator object is created.
The iterator object gets bound to the array.
Array elements are accessed via the iterator.

Indexing with a list of locations

Let's use the ix_ function to shuffle the Lena image. This function creates a mesh from multiple sequences.

How to do it...
We will start by randomly shuffling the array indices:

Shuffle array indices. Create a random indices array with the shuffle function of the numpy.random module:

def shuffle_indices(size):
    arr = numpy.arange(size)
    numpy.random.shuffle(arr)
    return arr

Plot the shuffled indices:

matplotlib.pyplot.imshow(lena[numpy.ix_(xindices, yindices)])

What we get is a completely scrambled Lena image, as shown in the following screenshot:

The following is the complete code for the recipe:

import scipy.misc
import matplotlib.pyplot
import numpy.random
import numpy.testing

# Load the Lena array
lena = scipy.misc.lena()
xmax = lena.shape[0]
ymax = lena.shape[1]

def shuffle_indices(size):
    arr = numpy.arange(size)
    numpy.random.shuffle(arr)
    return arr

xindices = shuffle_indices(xmax)
numpy.testing.assert_equal(len(xindices), xmax)
yindices = shuffle_indices(ymax)
numpy.testing.assert_equal(len(yindices), ymax)

# Plot Lena
matplotlib.pyplot.imshow(lena[numpy.ix_(xindices, yindices)])
matplotlib.pyplot.show()

Indexing with booleans

Boolean indexing is indexing based on a boolean array and falls into the category of fancy indexing.

How to do it...

We will apply this indexing technique to an image:

Image with dots on the diagonal. This is in some way similar to the Fancy indexing recipe in this article. This time we select modulo 4 points on the diagonal of the image:

def get_indices(size):
    arr = numpy.arange(size)
    return arr % 4 == 0

Then we just apply this selection and plot the points:

Set to 0 based on value.
Select array values between quarter and three-quarters of the maximum value and set them to 0:

lena2[(lena > lena.max()/4) & (lena < 3 * lena.max()/4)] = 0

The plot with the two new images will look like the following screenshot:

The following is the complete code for this recipe:

import scipy.misc
import matplotlib.pyplot
import numpy

# Load the Lena array
lena = scipy.misc.lena()

def get_indices(size):
    arr = numpy.arange(size)
    return arr % 4 == 0

# Plot Lena
lena2 = lena.copy()

# Between quarter and 3 quarters of the max value
lena2[(lena > lena.max()/4) & (lena < 3 * lena.max()/4)] = 0
matplotlib.pyplot.subplot(212)
matplotlib.pyplot.imshow(lena2)
matplotlib.pyplot.show()

How it works...

Because boolean indexing is a form of fancy indexing, the way it works is basically the same. This means that indexing happens with the help of a special iterator object.

See also

The Fancy Indexing recipe

Stride tricks for Sudoku

The ndarray class has a strides field, which is a tuple indicating the number of bytes to step in each dimension when going through an array. Let's apply some stride tricks to the problem of splitting a Sudoku puzzle into the 3 by 3 squares of which it is composed. For more information see.

How to do it...

Define the Sudoku puzzle array. Let's define the Sudoku puzzle array. This one is filled with the contents of an actual, solved Sudoku puzzle:

] ])

Calculate the strides. The itemsize field of ndarray gives us the size in bytes of each array element. Using the itemsize, calculate the strides:

strides = sudoku.itemsize * numpy.array([27, 3, 9, 1])

Split into squares.
Now we can split the puzzle into squares with the as_strided function of the numpy.lib.stride_tricks module:

squares = numpy.lib.stride_tricks.as_strided(sudoku, shape=shape, strides=strides)
print(squares)

This prints separate Sudoku squares:

[[[[2 8 7]
   [9 5 4]
   [6 1 3]]
  [[1 6 5]
   [7 3 2]
   [8 4 9]]
  [[9 4 3]
   [1 6 8]
   [7 5 2]]]
 [[[8 7 9]
   [4 2 1]
   [3 6 5]]
  [[6 5 1]
   [3 9 8]
   [4 2 7]]
  [[2 3 4]
   [6 7 5]
   [8 9 1]]]
 [[[1 9 8]
   [5 4 2]
   [7 3 6]]
  [[5 7 3]
   [9 1 6]
   [2 8 4]]
  [[4 2 6]
   [3 8 7]
   [5 1 9]]]]

The following is the complete source code for this recipe:

import numpy

] ])

shape = (3, 3, 3, 3)
strides = sudoku.itemsize * numpy.array([27, 3, 9, 1])
squares = numpy.lib.stride_tricks.as_strided(sudoku, shape=shape, strides=strides)
print(squares)

How it works...

We applied stride tricks to decompose a Sudoku puzzle into its constituent 3 by 3 squares. The strides tell us how many bytes we need to skip at each step when going through the Sudoku array.

Broadcasting arrays

Without knowing it, you might have broadcasted arrays. In a nutshell, NumPy tries to perform an operation even though the operands do not have the same shape. In this recipe, we will multiply an array and a scalar. The scalar is "extended" to the shape of the array operand and then the multiplication is performed. We will download an audio file and make a new version that is quieter.

How to do it...

Let's start by reading a WAV file:

Reading a WAV file. We will use standard Python code to download an audio file of Austin Powers called "Smashing, baby". SciPy has a wavfile module, which allows you to load sound data or generate WAV files. If SciPy is installed, then we should have this module already. The read function returns a data array and sample rate. In this example, we only care about the data:

sample_rate, data = scipy.io.wavfile.read(WAV_FILE)

Plot the original WAV data. Plot the original WAV data with Matplotlib. Give the subplot the title Original.
matplotlib.pyplot.subplot(2, 1, 1)
matplotlib.pyplot.title("Original")
matplotlib.pyplot.plot(data)

Create a new array. Now we will use NumPy to make a quieter audio sample. It's just a matter of creating a new array with smaller values by multiplying with a constant. This is where the magic of broadcasting occurs. At the end, we need to make sure that we have the same data type as in the original array, because of the WAV format:

newdata = data * 0.2
newdata = newdata.astype(numpy.uint8)

Write to a WAV file. This new array can be written into a new WAV file as follows:

scipy.io.wavfile.write("quiet.wav", sample_rate, newdata)

Plot the new WAV data. Plot the new data array with Matplotlib:

matplotlib.pyplot.subplot(2, 1, 2)
matplotlib.pyplot.title("Quiet")
matplotlib.pyplot.plot(newdata)
matplotlib.pyplot.show()

The result is a plot of the original WAV file data and a new array with smaller values, as shown in the following screenshot:

The following is the complete code for this recipe:

import scipy.io.wavfile
import matplotlib.pyplot
import urllib2
import numpy

response = urllib2.urlopen(' austinpowers/smashingbaby.wav')
print response.info()
WAV_FILE = 'smashingbaby.wav'
filehandle = open(WAV_FILE, 'w')
filehandle.write(response.read())
filehandle.close()
sample_rate, data = scipy.io.wavfile.read(WAV_FILE)
print "Data type", data.dtype, "Shape", data.shape

matplotlib.pyplot.subplot(2, 1, 1)
matplotlib.pyplot.title("Original")
matplotlib.pyplot.plot(data)

newdata = data * 0.2
newdata = newdata.astype(numpy.uint8)
print "Data type", newdata.dtype, "Shape", newdata.shape
scipy.io.wavfile.write("quiet.wav", sample_rate, newdata)

matplotlib.pyplot.subplot(2, 1, 2)
matplotlib.pyplot.title("Quiet")
matplotlib.pyplot.plot(newdata)
matplotlib.pyplot.show()

Summary

NumPy has very efficient arrays that are easy to use due to their powerful indexing mechanism. This fame of efficient arrays is partly due to the ease of indexing.
Thus, in this article we have demonstrated advanced indexing tricks using images.
! Very cool challenge, Eric! I too have put together my own algorithm-oriented developer challenge while we were in "interview / hiring" mode a few months back. I was never satisfied with the approach of asking insanely technical questions and getting back canned answers, so I wanted to present a unique, non-standard way of gauging interviewee skills where one simply cannot fake it. I have not subjected any candidate to this test but it's something I have on the back-burner. I would be very interested in your thoughts on this exercise, sir! Here's my first stab at it. Admittedly it does some extra work because it regenerates the "indent" each time, but on the plus side, it's aware of where it is in the tree so it could do additional things with that information. I opted for an iterative approach because I think it's easier to debug and so forth. static class Dumper { const string BarSep = " "; const string BranchSep = "─"; const string Bar = "│"; const string Branch = "├"; const string LastBranch = "└"; struct NodeAndLevel { public Node Node; public List<bool> Level; } static public string Dump(Node root) { Stack<NodeAndLevel> stack = new Stack<NodeAndLevel>(); stack.Push(new NodeAndLevel { Node = root, Level = new List<bool>() }); StringBuilder builder = new StringBuilder(); while(stack.Count > 0) { NodeAndLevel l = stack.Pop(); foreach (var graphic in l.Level.SelectCheckLast<bool, string>(DumpSingleLevel)) { builder.Append(graphic); } builder.AppendLine(l.Node.Text); foreach (var nodeAndLevel in l.Node.Children.SelectCheckLast((isLast, child) => new NodeAndLevel { Node = child, Level = new List<bool>(l.Level) { isLast } }).Reverse()) { stack.Push(nodeAndLevel); } return builder.ToString(); static string DumpSingleLevel(bool isBranch, bool isLast) { if(isBranch) return (isLast ? LastBranch : Branch) + BranchSep; return (isLast ? 
BarSep : Bar) + BarSep; static IEnumerable<TOut> SelectCheckLast<TIn, TOut>(this IList<TIn> source, Func<bool, TIn, TOut> selector) { if(source.Count == 0) yield break; for (int i = 0; i < source.Count - 1; i++) yield return selector(false, source[i]); yield return selector(true, source[source.Count - 1]); } Why not have the Dump method return an IEnumerable<string> to imply a line-by-line solution, intended for generic output. Is this intended for console output or file output? Do we have a console maximum column width with which to impose word-wrapping logic upon? That would certainly allow for "prettier" formatting line-by-line so you can describe the output in terms of word-wrapped lines where each line would be able to start with the appropriate "box drawing" characters to indicate proper tree depth rather than an assumed count of spaces, or worse, no spaces and relying on the default console line wrapping. Good questions. This is actually not just an idle exercise; I wrote this challenge because I wrote this code myself for a real purpose. I am writing a code analysis tool in C# that requires that I build up a number of large, complicated trees in memory. I wanted a way to be able to dump a whole or partial tree as a string at once into the debugger "text viewer" window, so that I could rapidly ensure that the tree was the shape I expected. The trees will typically be shallow and broad, so I am not too worried about word-wrap. - Eric My attempt. I haven't looked at your code at all yet, so I'm pretty curious where it differs. My intent was to go for obvious correctness. Since recursion naturally drives you to mostly ignore everything except the current node and its children, I chose to make each node responsible for printing its own name and all its descendents, nothing more. 
However, I allowed myself to stray from that (a bit more cleverness here, instead of obviousness), by having each node also handle indentation that doesn't really have anything to do with it. This is so that I can keep appending characters left-to-right, instead of using a far more complicated system to insert characters at arbitrary positions. I didn't miss having parent pointers at all. I do a depth-first recursion, and whenever I do that I carry any information from the parent just by passing it through the method parameters. My other design considerations are hopefully apparent from my code comments. sealed class Dumper { public static string Dump(Node root) { // A StringBuilder is more convenient than manual concatenation. StringBuilder sb = new StringBuilder(); Dump(root, sb, ""); return sb.ToString(); } // We're taking a depth-first recursive solution, because that naturally follows the // structure of both the Node class and the desired output string. If it does not perform // well, or blows the stack, it would be easy enough to convert it to an iterative form. // We have the recursive method append its results to the StringBuilder we pass it, so that // we don't allocate an arbitrarily large amount of StringBuilders. // // The indentation string functions as a stack of characters to add on each line. // We do not otherwise have enough information to tell if we should print '│' characters, // because it depends on the amount of children our ancestors have. // The immutability of the String class makes it ideal for this purpose, since we do not // have to worry about popping anything off the stack when returning to lower levels in the // recursion. private static void Dump(Node node, StringBuilder builder, string indentation) var children = node.Children; builder.AppendLine(node.Text); // If we have no children at all, we're done. 
if (children.Count == 0) return; for (int i = 0; i < children.Count - 1; i++) { // Indent appropriately to the depth of the current Node. builder.Append(indentation); // For every child that is not the last, print "├─" . builder.Append("├─"); // Then print the child, increasing the indentation by "│ ". Dump(children[i], builder, indentation + "│ "); // The child will entirely take care of all the lines that contain its own // children, so the current child has now been entirely handled. } // Indent appropriately to the depth of the current Node. builder.Append(indentation); // For the last child, print "└─" instead of "├─". builder.Append("└─"); // We already have a line of │ connecting all our children. We have no children left, // so now we indent with only spaces. Dump(children[children.Count - 1], builder, indentation + " "); } Recursive because it's short and simple. static public string Dump(Node root) StringBuilder sb = new StringBuilder(); DoDump(sb, "", "", root); return sb.ToString(); static private void DoDump(StringBuilder sb, string prefixRoot, string prefixChild, Node root) sb.Append(prefixRoot); sb.Append(root.Text); sb.Append('\n'); for (int i = 0; i != root.Children.Count; ++i) if (i == root.Children.Count - 1) // Final child DoDump(sb, prefixChild + "└─", prefixChild + " ", root.Children[i]); else // Not final child DoDump(sb, prefixChild + "├─", prefixChild + "│ ", root.Children[i]); quick and dirty. sealed class Dumper{ static public string Dump(Node root) { TextWriter writer = new StringWriter(); Action<Node> requestParentWrite = n => {}; // no-op DFS(root, requestParentWrite, writer); return writer.ToString(); }/* ... 
*/ private static void DFS(Node n, Action<Node> requestParentWrite, TextWriter writer) { requestParentWrite(n); writer.WriteLine(n.Text); string nonDirectChildren = "│ "; Action<Node> newRequestParentWrite = (actual) => { requestParentWrite(actual); if (n.Children.Contains(actual)) { if (n.Children.Last() == actual) { writer.Write("└"); nonDirectChildren = " "; } else { writer.Write("├"); } writer.Write("─"); } else { writer.Write(nonDirectChildren); } }; for (int i = 0; i < n.Children.Count; i++) DFS(n.Children[i], newRequestParentWrite, writer); }} I note two assumptions here. First, that repeatedly searching the child list for a particular node is efficient; if the tree is very broad and shallow then this becomes a quadratic algorithm. And second, that nodes are not re-used. In immutable trees it is commonplace to re-use nodes; what happens if the same node is referred to in both the first and second positions of a parent with two children? - Eric I'm surprised no-one has posted the shortest meets-the-literal-specification solution: static public string Dump(Node root){ return "a\n├─b\n│ ├─c\n│ │ └─d\n│ └─e\n│ └─f\n└─g\n ├─h\n │ └─i\n └─j\n";} Do you by any chance write video card drivers? - Eric Here's mine: I decided that I didn't want to pass any information to lower levels, so each level of recursion indents the whole subtree that was returned, simply because the correct tree "growing" out of the simple single indents seems the most elegant to me. I used a trailing loop to make the code a bit more flexible: we have an IList, but I wanted to make sure it would work if it was IEnumerable instead. And finally, I used iterator blocks because they allow me to write this recursive code that makes sense to me, but which in the end gets turned into (effectively) a single loop over the nodes. 
static public string Dump(Node root) foreach (string s in DumpLines(root)) sb.AppendLine(s); static public IEnumerable<string> DumpLines(Node root) yield return root.Text; IEnumerable<string> last = null; foreach (Node node in root.Children) if (last != null) { foreach (string line in Indent(last, "├─", "│ ")) { yield return line; } } last = DumpLines(node); if (last != null) foreach (string line in Indent(last, "└─", " ")) yield return line; private static IEnumerable<string> Indent(IEnumerable<string> lines, string first, string rest) bool isFirst = true; foreach (string line in lines) if (isFirst) yield return first + line; isFirst = false; else yield return rest + line; } Straightforward DFS recursive solution. In order to maintain state, I pass along a boolean array "isLastPath", which contains an entry for every node in the current path (excluding the root) - true if that ancestor is the last child of its parent, false otherwise. I wrote this. (Sorry that the language is not C#, though translation should be obvious.) def show(text, blist) if blist.size == 0 puts text else s = blist[0..-2].map{|b| b ? "| " : " "}.join puts(s + (blist[-1] ? "+" : "\\") + "-" + text) end end def dump(node, blist = []) show(node.text, blist) blist.push true for n in node.children blist[-1] = n != node.children.last dump(n, blist) blist.pop Ah, I am ashamed, for your solution is simpler. String being an immutable value type beats using a list of bool. Also, I notice that you did not separate the iterating and printing logic, but there's no point to it in such a small example. Here's the rest of the code, for testing. class Node attr_accessor :text, :children def initialize(text, *children) @text, @children = text, children end n = Node.new("a", Node.new("b", Node.new("c", Node.new("d")), Node.new("e", Node.new("f"))), Node.new("g", Node.new("h", Node.new("i")), Node.new("j"))) dump(n) I actually wrote this very thing a few months ago for a project that I was working on. 
(I also wrote something similar last year, which was just for binary trees, and had the parent on the middle left and the children above right and below right.) Here's my version adapted to this excersize: static public string Dump(Node root) StringBuilder sb = new StringBuilder(); DumpCore(root, sb, string.Empty, string.Empty); return sb.ToString(); static private void DumpCore(Node node, StringBuilder sb, string initialPrefix, string followingPrefix) sb.Append(initialPrefix); sb.Append(node.Text); sb.AppendLine(); if (node.Children.Count == 0) return; string nextInitialPrefix = followingPrefix + "├─"; string nextFollowingPrefix = followingPrefix + "│ "; string lastInitialPrefix = followingPrefix + "└─"; string lastFollowingPrefix = followingPrefix + " "; for (int childIndex = 0; childIndex < node.Children.Count; childIndex++) if (childIndex < node.Children.Count - 1) DumpCore(node.Children[childIndex], sb, nextInitialPrefix, nextFollowingPrefix); else DumpCore(node.Children[childIndex], sb, lastInitialPrefix, lastFollowingPrefix); I went with the recursive solution, because it was the most obvious. I was just going for simplicity. I never even considered parent pointers. My approach was basically to draw a tree in notepad and then figure out the essense of the problem and create the simplist design I could that embodied that essense. I should read my code before I post it! I should definitely have taken the if-statement out of the loop. I'm guessing that is an unfortunate artifact of an early unsuccessful design. I tried to come up with some ideas for how you could use a parent pointer, but I couldn't think of anything useful to do with it off the top of my head. 
I was able to come up with a purely functional approach to the problem, though (with a bit of augmentation to LINQ): static class Dumper const string vbar = "│ ", hbar = "─", branch = "├" + hbar, corner = "└" + hbar, blank = " "; static public string Dump(Node root) return Dump(root, Enumerable.Empty<bool>()); static public string Dump(Node root, IEnumerable<bool> isLastPath) return string.Join("", // draw vertical bars for parent nodes isLastPath.Take(isLastPath.Count() - 1).Select(isLast => isLast ? blank : vbar) // draw connector for current node .Concat(isLastPath.Any() ? (isLastPath.Last() ? corner : branch) : "") // text for this node .Concat(root.Text) // new line .Concat(Environment.NewLine) // recurse for child nodes .Concat(root.Children.Select((node, index) => Dump(node, isLastPath.Concat(index == root.Children.Count - 1))))); // sadly, LINQ doesn't include the "return" part of the IEnumerable monad, so we make a Concat that accepts a scalar public static IEnumerable<T> Concat<T>(this IEnumerable<T> list, T item) foreach (T i in list) yield return i; yield return item; Coincidentally, the other Gabe came up with a similar solution (using isLast) while I was writing mine. Here's mine, without looking at yours or any of the others just yet (posting it on PasteBin because I am afraid of what your blog is going to do to the formatting): My design criteria: * It should be a small amount of code. As much as I love coding, I hate code. * It should not take me long to write. (I considered using LINQ to objects, but I ruled it out because I was not confident that I could do it quickly without hitting a snag - in particular I was worried about whether it would be easy to treat the last child specially without having to write an entire extra method, and about how I would assemble the string. I think both are possible, but I could have easily seen me wasting 20 minutes looking stuff up.) I'm not sure I did very well on these counts. 
The only thing worth mentioning there is that my first attempt was wrong and printed extraneous vertical bars to the left of children of a last child. When I fixed that I ended up with an if statement in the middle of the loop to check whether we were at the last child and if so tweak our prefix and children's prefixes. I didn't like the extra indentation and the assignments in different branches. After some humming and hawing I decided that two uses of the conditional operator were preferable to the if statement, since it reduced the number of assignments and the indentation of the code. I find it's slightly nicer to read, but that might be very subjective. Hm no iterative BFS so far? Here you are. static public string Dump(Node root) StringBuilder output = new StringBuilder(); foreach(string line in subTreePicture(root)) output.Append(line); output.Append('\n'); return output.ToString(); private struct NodesAndPrefix public LinkedListNode<string> listNodeToAddAfter; public Node treeNode; public string prefix; public NodesAndPrefix(LinkedListNode<string> listNodeToAddAfter, Node treeNode, string prefix) this.listNodeToAddAfter = listNodeToAddAfter; this.treeNode = treeNode; this.prefix = prefix; static private IEnumerable<string> subTreePicture(Node root) LinkedList<string> thePicture = new LinkedList<string>(); Queue<NodesAndPrefix> queueOfNodesToProcess = new Queue<NodesAndPrefix>(); LinkedListNode<string> listNode = thePicture.AddLast(root.Text); queueOfNodesToProcess.Enqueue(new NodesAndPrefix(listNode, root, "")); while(queueOfNodesToProcess.Count > 0) NodesAndPrefix nextItem = queueOfNodesToProcess.Dequeue(); LinkedListNode<string> nodeToAddAfter = nextItem.listNodeToAddAfter; IList<Node> children = nextItem.treeNode.Children; int lastIndex = children.Count - 1; for(int i = 0; i < lastIndex; ++i) nodeToAddAfter = thePicture.AddAfter(nodeToAddAfter, nextItem.prefix + "├─" + children[i].Text); queueOfNodesToProcess.Enqueue(new NodesAndPrefix(nodeToAddAfter, 
children[i], nextItem.prefix + "│ ")); if(lastIndex >= 0) nextItem.prefix + "└─" + children[lastIndex].Text); children[lastIndex], nextItem.prefix + " ")); return thePicture;
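The prefix-passing recursion that several of the solutions above use translates almost line for line to other languages. Here is a hedged Python sketch of the same idea; the Node class is a minimal stand-in for the challenge's C# class, not part of the original post:

```python
class Node:
    def __init__(self, text, *children):
        self.text = text
        self.children = list(children)

def dump(node, prefix_root="", prefix_child=""):
    # Each call emits its own line, then recurses. The last child gets
    # the corner glyph, and its subtree gets blank padding instead of a
    # vertical bar -- the same trick as the two-prefix C# solutions.
    lines = [prefix_root + node.text]
    for i, child in enumerate(node.children):
        last = i == len(node.children) - 1
        head, tail = ("└─", "  ") if last else ("├─", "│ ")
        lines += dump(child, prefix_child + head, prefix_child + tail)
    return lines

tree = Node("a",
            Node("b", Node("c", Node("d")), Node("e", Node("f"))),
            Node("g", Node("h", Node("i")), Node("j")))
print("\n".join(dump(tree)))
```

Returning a list of lines rather than one big string also answers the earlier question about an IEnumerable<string> interface: the caller decides how to join or wrap.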
NAME
nanosleep - high-resolution sleep

SYNOPSIS
#include <time.h>

int nanosleep(const struct timespec *req, struct timespec *rem);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
nanosleep(): _POSIX_C_SOURCE >= 199309L

DESCRIPTION
nanosleep() suspends the execution of the calling thread until either at least the time specified in *req has elapsed, or a signal is delivered to the thread. If the call is interrupted by a signal handler, nanosleep() returns -1, sets errno to EINTR, and, if rem is not NULL, writes the remaining time into the structure pointed to by rem.

CONFORMING TO
POSIX.1-2001.

NOTES
In order to support applications requiring much more precise pauses (e.g., in order to control some time-critical hardware), nanosleep() would handle pauses of up to 2 ms by busy waiting with microsecond precision.

SEE ALSO
sched_setscheduler(2), sleep(3), timer_create(2), usleep(3), time(7)

COLOPHON
This page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
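The req/rem interface described above can be exercised from Python through ctypes. This is a sketch, not part of the man page: the library name libc.so.6 and the struct layout (two C longs) are assumptions that hold on typical Linux/glibc systems.

```python
import ctypes
import time

# Mirror of struct timespec from <time.h>, assuming Linux/glibc where
# both fields fit in a C long.
class Timespec(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long),
                ("tv_nsec", ctypes.c_long)]

libc = ctypes.CDLL("libc.so.6", use_errno=True)

req = Timespec(tv_sec=0, tv_nsec=50_000_000)  # request: 50 ms
rem = Timespec()                              # filled in if interrupted

start = time.monotonic()
ret = libc.nanosleep(ctypes.byref(req), ctypes.byref(rem))
elapsed = time.monotonic() - start

# 0 on success; -1 with errno == EINTR if a signal handler interrupted
# the sleep, in which case rem holds the remaining time.
print(ret, elapsed)
```

The measured elapsed time should always be at least the requested 50 ms, since nanosleep() guarantees a minimum, not an exact, pause.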
Introduction To day we will create a customized split application in Metro Style Apps. There are three type of application templates available in Metro Apps, Blank Application, Grid Application and Split Application. Here we are creating a split application with the use of blank application templates. In this application we will use a group of six IT institutes in six split boxes and each split box associated with its detail grid that will show the information about the institute where the user clicks. In the following we are including the entire code of the XAML file and code behind file to create this mini application.:Code : <Page x:Class="App1.MainPage" IsTabStop="false" xmlns="" xmlns:x="" xmlns:local="using:App7" xmlns:d="" xmlns:mc="" mc: <Grid> <Grid x: <Grid.ColumnDefinitions> <ColumnDefinition Width=".333*"></ColumnDefinition> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height=".333*"></RowDefinition> </Grid.RowDefinitions> <TextBlock Grid.</TextBlock> <Button x: <Image Source="ims.jpg" Width="200" Height="200"></Image> </Button> <TextBlock Text="IMS Ghaziabad" FontSize="20" Grid.</TextBlock> <Button x: <Image Source="jss.jpg" Width="200" Height="200"></Image> <TextBlock Text="JSS Noida" FontSize="20" Grid.Column="2" Grid.Row="1" <Button x: <Image Source="abes.jpg" Width="200" Height="200"></Image> <TextBlock Text="ABES Ghaziabad" FontSize="20" Grid.Column="3" Grid.Row="1" <Button x:Name="b4" Grid.Column="1" Grid.Row="2" Width="200" Height="200" <Image Source="dit.jpg" Width="200" Height="200"></Image> <TextBlock Text="DIT Dehradun" FontSize="20" Grid.Column="1" Grid.Row="2" <Button x:Name="b5" Grid.Column="2" Grid.Row="2" Width="200" Height="200" <Image Source="kiet.jpg" Width="200" Height="170"></Image> <TextBlock Text="KIET Ghaziabad" FontSize="20" Grid.Column="2" Grid.Row="2" <Button x: <Image Source="its.jpg" Width="200" Height="200"></Image> <TextBlock Text="ITS Ghaziabad" FontSize="20" Grid.Column="3" Grid.Row="2" </Grid> <Grid 
x: <Grid.ColumnDefinitions> <ColumnDefinition Width=".333*"></ColumnDefinition> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height=".333*"></RowDefinition> </Grid.RowDefinitions> <Image x:</Image> <Image x:</Image> <Image x:</Image> <Image x:</Image> <Image x:</Image> <Image x:</Image> <TextBlock x:</TextBlock> <Button Grid.</Button> </Grid> </Grid> </Page> Step 4 : The MainPage.xaml.cs file is as in the following code; namespace App1 { public sealed partial class MainPage : Page { public MainPage() { this.InitializeComponent(); } protected override void OnNavigatedTo(NavigationEventArgs e) private void Button_Click_1(object sender, RoutedEventArgs e) if (b1.IsPressed) { grd1.Visibility = Windows.UI.Xaml.Visibility.Collapsed; grd2.Visibility = Windows.UI.Xaml.Visibility.Visible; img1.Visibility = Windows.UI.Xaml.Visibility.Visible; txt1.Visibility = Windows.UI.Xaml.Visibility.Visible; txt1.Text = "IMS, Engineering Sciences and Journalism through its three educational campuses equipped with state of art infrastructure. IMS has attained a unique and a highly respectable place amongst the best professional education institutions in India."; } if (b2.IsPressed) img2.Visibility = Windows.UI.Xaml.Visibility.Visible; txt1.Text = "JSS Academy of Technical Education (JSSATE), NOIDA was established to meet the ever growing demand for trained professional manpower for industries and to serve as training ground for students from this region in the Engineering profession. 
The Academy."; if (b3.IsPressed) img3.Visibility = Windows.UI.Xaml.Visibility.Visible; txt1.Text = "Academy of Business and Engineering Sciences (ABES), with captivating state of art campus having aesthetically lush green, serene and capitulating landscape of eco-friendly environment and situated on Delhi Hapur by-."; if (b4.IsPressed) grd2.Visibility = Windows.UI.Xaml.Visibility.Visible; img4.Visibility = Windows.UI.Xaml.Visibility.Visible; txt1.Text = "DIT is a leading technical institute offering Undergraduate and Postgraduate programmes in several streams of Engineering, IT, Management, Architecture and Pharmacy. This is the oldest self financed professional institute of uttarakhand which was established in the year 1998."; if (b5.IsPressed) img5.Visibility = Windows.UI.Xaml.Visibility.Visible; txt1.Text = "The mission of KIETIET. Thus KIET will act as a catalyst in creating excellence in technical education. The reach for excellence would also ensure fin our mission to serve the society by producing dedicated professionals with appropriate knowledge and skills capable of providing imaginative and technologically informed solutions to industry, academia and other professions."; if (b6.IsPressed) img6.Visibility = Windows.UI.Xaml.Visibility.Visible; txt1.Text = "Welcome to I.T.S-The Education Group. The I.T.S Group was founded in the year 1995. Since then the I.T.S has grown impressively and achieved widespread recognition from corporate, academia , and professional circles. At I.T.S we are committed to provide a value driven culture along with creating a professional order. The I.T.S as a group is large and diversified group and imparts knowledge in field of Management, IT, Dentistry, Bio-technology, Physiotherapy, Pharmacy, and Engineering. The I.T.S group has more than 700 highly qualified and experienced faculty members in their respective functional areas. About 8000 students are enrolled in various courses in four campuses. 
The Group runs two hospitals and is one of two groups in India which have two dental colleges."; } private void Button_Click_2(object sender, RoutedEventArgs e) grd2.Visibility = Windows.UI.Xaml.Visibility.Collapsed; grd1.Visibility = Windows.UI.Xaml.Visibility.Visible; } } Step 5 : After running this code the output looks like this: Click on the institute that you want information about. With the help of the Back button you can return to the main screen and select another institute. ©2015 C# Corner. All contents are copyright of their authors.
Introduction

In this article we will create a "Hello World" Windows Store (Metro) app using C#. It's a simple app that opens the Windows picture library.

Step 1: We will select the "C#" Windows Store Blank App template.

Step 2: We will add a simple button control with a click event. If we right-click the "Button_Click_1" event name in XAML and click "Navigate to Event Handler", it will redirect to the code-behind file of this method. It's similar to double-clicking the button in our existing Windows/Web UI. It will result in the UI being as shown below:

Step 3: Navigate to the button click event and add the following code to show the alert message using the WinRT API. And we will see the result as shown below:

Step 4: Using the WinRT API, we will use the Pickers object to select the file with the ".jpg" format. We will see the result as below:

Summary

Normally we would need the System.IO namespace to do Windows file-related operations, but here we are able to do that using the WinRT API in C#.
Many developers have run into this dilemma: a CORBA client needs to obtain the services of a Distributed Component Object Model (DCOM) server, or vice versa. The common solution is to use a COM/CORBA bridge; however, this answer is fraught with failure points. Consider what it entails: you have just introduced a complex new piece of software in the midst of two already complicated pieces (the CORBA ORB and the COM infrastructure). The bridge's complexity results from the intricate back-and-forth translation that it must complete from CORBA's Internet Inter-ORB Protocol (IIOP) to DCOM's Object Remote Procedure Call (ORPC). Any changes to these protocols mean changes to the bridge. What if I told you that SOAP can alleviate the problem? Interested?

Read the whole series on SOAP:
- Part 1: An introduction to SOAP basics
- Part 2: Use Apache SOAP to create SOAP-based applications
- Part 3: Create SOAP services in Apache SOAP with JavaScript
- Part 4: Dynamic proxies make Apache SOAP client development easy

Inside SOAP

As I mentioned above, SOAP uses XML as the data-encoding format. The idea of using XML is not original to SOAP and is actually quite intuitive. XML-RPC and ebXML use XML as well. See Resources for references to Websites where you can find more information. Consider the following Java interface:

Listing 1

public interface Hello {
    public String sayHelloTo(String name);
}

A client calling the sayHelloTo() method with a name would expect to receive a personalized "Hello" message from the server. Now imagine that RMI, CORBA, and DCOM do not exist yet and it is up to you to serialize the method call and send it to the remote machine. Almost all of you would say, "Let's use XML," and I agree. Accordingly, let's come up with a request format to send to the server.
Assuming that we want to simulate the call sayHelloTo("John"), I propose the following:

Listing 2

<?xml version="1.0"?>
<Hello>
 <sayHelloTo>
  <name>John</name>
 </sayHelloTo>
</Hello>

I've made the interface name the root node. I've also made the method and parameter names nodes as well. Now we must deliver this request to the server. Instead of creating our own TCP/IP protocol, we'll defer to HTTP. So, the next step is to package the request into the form of an HTTP POST request and send it to the server. I will go into the details of what is actually required to create this HTTP POST request in a later section of this article. For now let's just assume that it is created. The server receives the request, decodes the XML, and sends the client a response, again in the form of XML. Assume that the response looks as follows:

Listing 3

<?xml version="1.0"?>
<Hello>
 <sayHelloToResponse>
  <message>Hello John, How are you?</message>
 </sayHelloToResponse>
</Hello>

The root node is still the interface name Hello. But this time, instead of just the method name, the node name, sayHelloToResponse, is the method name plus the string Response. The client knows which method it called, and to find the response to that method it simply looks for an element with that method name plus the string Response. I have just introduced you to the roots of SOAP. Listing 4 shows how the same request is encoded in SOAP:

Listing 4

<SOAP-ENV:Envelope
 xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xmlns:xsd="http://www.w3.org/2001/XMLSchema"
 SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
 <SOAP-ENV:Header>
 </SOAP-ENV:Header>
 <SOAP-ENV:Body>
  <ns1:sayHelloTo xmlns:ns1="urn:Hello">
   <name xsi:type="xsd:string">John</name>
  </ns1:sayHelloTo>
 </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

Looks slightly more complicated, doesn't it? Actually it's similar to what we did before with a few enhancements added in for extensibility. First, note how the SOAP document is neatly organized into an Envelope (the root node), a header section, and a body.
The header section is used to encapsulate data that is not tied to a specific method itself, but instead provides context knowledge, such as a transaction ID and security information. The body section contains the method-specific information. In Listing 2, the homegrown XML only had a body section. Second, note the heavy use of XML namespaces. SOAP-ENV maps to the SOAP envelope namespace, while xsi and xsd map to the XML Schema instance and XML Schema data type namespaces, respectively. Those are standard namespaces that all SOAP documents have. Finally, in Listing 4 the interface name (i.e., Hello) is no longer the node name as it was in Listing 2. Rather it refers to a namespace, ns1. Also, along with the parameter value, the type information is also sent to the server. Note the value of the envelope's encodingStyle attribute. It is set to http://schemas.xmlsoap.org/soap/encoding/. That value informs the server of the encoding style used to encode -- i.e., serialize -- the method; the server requires that information to successfully deserialize the method. As far as the server is concerned, the SOAP document is completely self-describing. The response to the preceding SOAP request would be as follows:

Listing 5

<SOAP-ENV:Envelope
 xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xmlns:xsd="http://www.w3.org/2001/XMLSchema">
 <SOAP-ENV:Body>
  <ns1:sayHelloToResponse xmlns:ns1="urn:Hello">
   <return xsi:type="xsd:string">Hello John, How are you doing?</return>
  </ns1:sayHelloToResponse>
 </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

Listing 5 resembles the request message in Listing 4. In the code above, the method parameters are gone; in their place the body carries the return value -- which in this example is the personalized "Hello" message. The document's format has tremendous flexibility built in. For example, the encoding style is not fixed but instead specified by the client. As long as the client and server agree on this encoding style, it can be any valid XML. Plus, separating the call context information means that the method doesn't concern itself with that information. Major application servers in the market today follow that same philosophy.
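Because the request in Listing 4 is plain XML, any language with an XML library can produce it. The following is a hedged sketch in Python using only the standard library; the SOAP-ENV and xsi URIs are the conventional SOAP 1.1 and XML Schema values, and the urn:Hello interface namespace is an assumption made here for illustration:

```python
import xml.etree.ElementTree as ET

# Conventional SOAP 1.1 / XML Schema namespace URIs; "urn:Hello" is a
# made-up namespace for the Hello interface, assumed for this sketch.
SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"
XSI = "http://www.w3.org/2001/XMLSchema-instance"
NS1 = "urn:Hello"

def build_say_hello_to(name):
    # Envelope -> Body -> ns1:sayHelloTo -> name, mirroring Listing 4.
    env = ET.Element("{%s}Envelope" % SOAP_ENV)
    body = ET.SubElement(env, "{%s}Body" % SOAP_ENV)
    call = ET.SubElement(body, "{%s}sayHelloTo" % NS1)
    param = ET.SubElement(call, "name")
    param.set("{%s}type" % XSI, "xsd:string")
    param.text = name
    return ET.tostring(env, encoding="unicode")

request = build_say_hello_to("John")
print(request)
```

In a real client, this string would become the payload of an HTTP POST; here we only build the document, since the transport details come later in the article.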
Earlier, I indicated that context knowledge could include transaction and security information, but context knowledge could cover almost anything. Here's an example of a SOAP header with some transaction information:

Listing 6

<SOAP-ENV:Header>
 <t:Transaction xmlns:t="some-URI" SOAP-ENV:mustUnderstand="1">
  5
 </t:Transaction>
</SOAP-ENV:Header>

The namespace t maps to some application-specific URI. Here 5 is meant to be the transaction ID of which this method is a part. Note the use of the SOAP envelope's mustUnderstand attribute. It is set to 1, which means that the server must either understand and honor the transaction request or must fail to process the message; the SOAP specification mandates that.

When good SOAP requests go bad

Just because you use SOAP does not mean that all your requests will succeed all the time. Things can go wrong in many places. For example, the server may not honor your request because it can't access a critical resource such as a database. Let's return to our "Hello" example and add a silly constraint to it: "It is not valid to say hello to someone on Tuesday." So on Tuesdays, even though the request sent to the server is valid, the server will return an error response to the client. This response would be similar to the following:

Listing 7

<SOAP-ENV:Envelope
 xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
 <SOAP-ENV:Body>
  <SOAP-ENV:Fault>
   <faultcode>SOAP-ENV:Server</faultcode>
   <faultstring>Server Error</faultstring>
   <detail>
    <e:myfaultdetails xmlns:e="some-URI">
     <message>
      Sorry, my silly constraint says that I cannot say hello on Tuesday.
     </message>
     <errorcode>
      1001
     </errorcode>
    </e:myfaultdetails>
   </detail>
  </SOAP-ENV:Fault>
 </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

Let's focus on the Fault element, which is defined in the SOAP envelope namespace. All SOAP servers must always return any error condition in that element, which is always a direct child of the Body element. Without exception, the Fault element must have faultcode and faultstring elements.
The faultcode is a code that can identify problems; client-side software uses faultcode for algorithmic processing, as the SOAP specification calls it. The SOAP specification defines a small set of fault codes that you can use. The faultstring, on the other hand, is meant for human consumption. The code snippet in Listing 7 also shows a detail element. Since the error occurred while processing the SOAP message's body section, the detail element must be present. As you'll see later, if the error occurs while processing the header, detail must not be present. In Listing 7, the application used that element to provide a more detailed explanation of the nature of the error, namely that it was not allowed to say hello on Tuesdays. An application-specific error code is present as well. There is also a semioptional element called faultactor that I have not shown in the error message. I call it semioptional because it must be included if the error message was sent by a server that was not the request's end-processing point, i.e., an intermediate server. SOAP does not specify any situation in which the faultactor element must not be included. In Listing 7, the fault resulted from the method invocation itself, and the application processing the method caused it. Now let's take a look at another type of fault: one that is generated when the server is unable to process the header information. As an example, assume that all hello messages must be generated in the context of a transaction. That request would look similar to this: Listing 8
http://www.javaworld.com/article/2075167/soa/clean-up-your-wire-protocol-with-soap--part-1.html
Understanding Kubernetes GVR

What is the GVR in Kubernetes? It stands for Group Version Resource, and this is what drives the Kubernetes API Server structure. We will cover exactly what the terminology means for Groups, Versions, Resources (and Kinds) and how they fit into the Kubernetes API.

Kind

Kinds in Kubernetes relate to the object you are trying to interact with. A pod or deployment would be your Kind. There are three categories of Kinds:

- Objects: These are your pods, endpoints, deployments, etc.
- Lists: These are collections of one or more Kinds. Examples would be a pod list or a node list.
- Special Purpose: These are used as specific actions on objects, or as non-persistent objects. Examples would be /binding or /scale.

Group

A group is simply a collection of Kinds. You can have Kinds such as ReplicaSets, StatefulSets, and Deployments which are all part of the apps group. One thing to note is that you can have Kinds living in multiple groups: a Kind may start off in one group as an alpha version and, as it matures, be moved into another group.

Version

Versions allow Kubernetes to release groups as tagged versions. Here are the versions that Kubernetes has available:

- Alpha: This is usually disabled by default since it should only be used for testing. You may see these labeled as v1alpha1.
- Beta: This is enabled by default. However, there is no guarantee that any further beta or stable releases will be backwards compatible. You may see these labeled as v1beta1.
- Stable: These have reached maturity and will be around for further releases. You may see these labeled as v1.

A group can exist within any of these versions, if not all of them. A group usually starts off in Alpha, then moves on to Beta, and eventually Stable.

Resource

The resource is an identifier that receives and returns its corresponding Kind. Resources also expose CRUD actions for that Kind.
API URI

Now, with a base understanding, let's look at the URI for Deployment creation:

POST /apis/apps/v1/namespaces/{namespace}/deployments

The breakdown of the URI is as follows: apps is the group, v1 is the version, and deployments is the resource. If you want further actions on a resource, there are further endpoints available. Here is getting a specific deployment and its status:

GET /apis/apps/v1/namespaces/{namespace}/deployments/{name}
GET /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status

Now, there may be some resources that are cluster-wide, such as nodes or namespaces. These can be grouped into a GVK (Group Version Kind) where the namespace is omitted, as opposed to the namespace being part of the resource path in a GVR.

GET /api/v1/nodes

To summarize

This should give you a little more insight into how the API Server's APIs are designed with their URI structure, along with a new appreciation for terms such as kinds, groups, and resources that you may see in the YAML definitions you write for Kubernetes. If you want some more information about this topic, here are some useful links.
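The URI patterns above can be captured in a small helper function. The following Python sketch is purely illustrative (it is not part of any official Kubernetes client library); it only encodes the path conventions described in this post.

```python
# Illustrative sketch: build Kubernetes API paths from a group, version,
# resource, and optional namespace/name. Not part of any official client.
def api_path(group, version, resource, namespace=None, name=None):
    # The legacy "core" group (empty string) lives under /api,
    # while named groups such as "apps" live under /apis.
    parts = ["/api", version] if group == "" else ["/apis", group, version]
    if namespace:  # namespaced resources, e.g. deployments
        parts += ["namespaces", namespace]
    parts.append(resource)
    if name:  # a specific object rather than the whole collection
        parts.append(name)
    return "/".join(parts)

print(api_path("apps", "v1", "deployments", namespace="default", name="web"))
# -> /apis/apps/v1/namespaces/default/deployments/web
print(api_path("", "v1", "nodes"))
# -> /api/v1/nodes
```

The second call shows the cluster-wide case from above: no namespace segment, and the core group maps to the bare /api prefix.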
https://ddymko.medium.com/understanding-kubernetes-gvr-e7fb94093e88
I've created a lambda expression at runtime, and I want to evaluate it - how do I do that? I just want to run the expression by itself, not against any collection or other values. At this stage, once it's created, I can see that it is of type Expression<Func<bool>>, with a value of {() => "MyValue".StartsWith("MyV")}.

I thought at that point I could just call var result = Expression.Invoke(expr, null); against it, and I'd have my boolean result. But that just returns an InvocationExpression, which in the debugger looks like {Invoke(() => "MyValue".StartsWith("MyV"))}. I'm pretty sure I'm close, but can't figure out how to get my result! Thanks.

Try compiling the expression with the Compile method, then invoking the delegate that is returned:

using System;
using System.Linq.Expressions;

class Example
{
    static void Main()
    {
        Expression<Func<Boolean>> expression = () => "MyValue".StartsWith("MyV");
        Func<Boolean> func = expression.Compile();
        Boolean result = func();
    }
}

As Andrew mentioned, you have to compile an Expression before you can execute it. The other option is to not use an Expression at all, which would look like this:

Func<Boolean> MyLambda = () => "MyValue".StartsWith("MyV");
var Result = MyLambda();

In this example, the lambda expression is compiled when you build your project, instead of being transformed into an expression tree. If you are not dynamically manipulating expression trees or using a library that uses expression trees (Linq to Sql, Linq to Entities, etc.), then it can make more sense to do it this way.
https://expressiontree-tutorial.net/knowledge-base/1856215/getting-the-result-from-an-expression
Segmentation Fault

Hi, I was working on a GUI in PyQt5, and I seem to often get Segmentation Faults. It did give a backtrace once, which is here: My entire code: (requires) I wasn't sure what happened so I asked some people and they said it's likely an issue with PyQt5, so.. here I am. Basically, the program randomly exits with Segmentation fault (core dumped) at random times. The program is meant to receive text and add it to a QLineEdit. PyQt5 version: 5.11.3

Okay, first off you are probably biting off way more than you should with this basic program, so instead of trying so much at once let us make it even more basic -- I would have tested this myself but do not have the minecraft stuff loaded, so you will have to test it instead -- by doing the following:

import sys
from time import sleep

from PyQt5.QtWidgets import QApplication

from minecraft import authentication
from minecraft.exceptions import YggdrasilError
from minecraft.networking.connection import Connection
from minecraft.networking.packets import Packet, clientbound, serverbound
from minecraft.compat import input

def handle_join_game(join_game_packet):
    print('Connected:', join_game_packet)

def print_chat(chat_packet):
    print("Chat :", chat_packet.json_data)
    sleep(2)  # I had to add this or it instantly crashed

def main():
    print('Start Main')
    try:
        connection = Connection(options_addr, options_port, username=options_user)
    except Exception as err:
        print("ERROR 1:", err)
        sys.exit()
    try:
        connection.register_packet_listener(handle_join_game, clientbound.play.JoinGamePacket)
    except Exception as err:
        print("ERROR 2:", err)
        sys.exit()
    try:
        connection.register_packet_listener(print_chat, clientbound.play.ChatMessagePacket)
    except Exception as err:
        print("ERROR 3:", err)
        sys.exit()
    try:
        connection.connect()
    except Exception as err:
        print("ERROR 4:", err)
    sys.exit()

if __name__ == '__main__':
    app = QApplication([])
    ex = main()
    sys.exit(app.exec_())

What you are trying to do with the above is figure out where the error is occurring and perhaps get a better idea of what that error actually is. Having looked at the dump and the logic you have in place, my guess is that it is not actually a pyqt5 error exactly but perhaps more an error contained within the Minecraft process, or a misunderstanding of how to handle what you are getting back from Minecraft -- this is meant to help you determine both. Now if the above runs without crashing and you are getting what you expect back within those various calls, then post again with sample data of what you are getting back and I will check the rest of the code, but again my guess is the issue resides in Minecraft or in not fully understanding (aka correctly handling) what you are getting back from Minecraft. Still, if you catch the error, at least we should have a slightly better idea of exactly what the error is this way, and that should help as well.

Hi, I did test out your code above - it simply ran and outputted "Start Main". However, I tried to go back to the original version as much as I could:

def handle_join_game(join_game_packet):
    print('Connected:', join_game_packet)

def print_chat(chat_packet):
    print("Chat :", chat_packet.json_data)
    sleep(2)  # I had to add this or it instantly crashed

def main():
    try:
        connection = Connection(options_addr, options_port, username=options_user)
    except Exception as e:
        print("conn error")
    connection.register_packet_listener(handle_join_game, clientbound.play.JoinGamePacket)
    connection.register_packet_listener(print_chat, clientbound.play.ChatMessagePacket)
    connection.connect()

and the output that I was expecting was

Username set to testing249399
Connected: 0x25 JoinGamePacket(entity_id=3343, game_mode=0, dimension=0, difficulty=3, max_players=20, level_type='flat', reduced_debug_info=False)
Chat : {"text":""}
Chat : {"text":""}
Chat : {"text":""}

(This should be a replica of what you have above without try/excepts - I'll see if any error at all happens; not sure why yours isn't working.) There should be a tcp connection established until the client quits. The messages at this point should be something along the lines of endless {"text":""}. Nothing has thrown an error so far.

Okay, well, start adding in elements of your full program in the smallest chunks feasible and keep encapsulating things with the try/except -- eventually you should catch your error where it is occurring, and once you have that it might be easier to figure out how to fix the issue.

Hi, I believe that the problem is during

def print_chat(chat_packet):
    self.textbox.append(chat_packet.json_data)

It only exits with a seg fault if the values are appended - however, it happens if any value is appended. The program runs fine without issues if no value is appended... however I have no clue on fixes for this. Do you have any recommendations? (The lib can perfectly return the values, I tried printing them and it works - it's just when anything is appended to that box that there's a seg fault at different times.)

Okay @KCocco what I would suggest is the following:

def print_chat(chat_packet):
    try:
        self.textbox.append(chat_packet.json_data)
    except Exception as err:
        print("Append Error:", err)
        print("Append Data :", "[" + chat_packet.json_data + "]")

The key here is that it appears this is where the error is, and since it's intermittent it makes me feel it's more a data issue of some sort (again, something coming through that is not expected); as such we print the error along with the data that we are shipping to the append routine. The "[ ]" are to make sure there are no hidden characters preceding or following the regular textual string we assume is contained within this packet. Once we see what the string contains, then we ought to be able to duplicate the error by simply duplicating that string and sending it through the append routine.
If this does not work, then perhaps it's the packet just before the packet that triggers the error that is the issue -- in this case do the following:

newPacket = "[" + chat_packet.json_data + "]"
self.textbox.append(newPacket)

This way you can look at the contents of the string and determine if any hidden characters are corrupting it and causing the issue.

@Denni said in Segmentation Fault:

self.textbox.append(newPacket)

Hi, I've tried the solution you've suggested. However, I still get segfaults without any exceptions thrown. Also - this time I got a copy of the crash log, which I've uploaded here: Maybe it can be useful to you. Also, here's the stacktrace:

Okay, first off, not sure if this is any part of the issue, but from my understanding pyqt5 (which I am assuming you are using) does not play well with anything earlier than python 3.7 (and it appears you are using python 3.5) -- now I know you can perhaps get it to work with python 3.5, but that does not mean it will be 100% stable, and this might be a symptom of that -- so my suggestion at this point is to make sure you are using pyqt5 on python 3.7+ --- and if you can do a clean install of the latest version of python 3.7 along with pyqt5, that would be best ..... while it might not happen, I have run into situations where having earlier versions of platform software causes issues - note I am currently upgrading a python 2.7 / pyqt4 project to python 3.7 / pyqt5 and I made sure to load nothing but python 3.7 and pyqt5 on the development machine ....
I use a different machine to run the py2.7/qt4 stuff.

Note, keep in mind this kind of bug is not only catastrophic, as you have seen, but hard to track down -- especially since I cannot run it myself using your environment. This means you have to try and think outside the box a bit and figure out how to catch the error if possible -- normally a try/except within code (if placed properly) would catch the error and keep it from being catastrophic, so not sure why it's not catching it unless it's not placed in the right spot --- try encapsulating all of your code elements within try/except blocks -- aka start each function there-abouts with a try and end it there-abouts with an except printing an error statement that clearly lets you know where you are at -- the issue might be cropping up outside our context without being realized -- then again, if it's a deep enough issue perhaps a try/except will not catch it, but it's always worth a try (pun intended) ;)

P.S. It appears that there have been several of these crash reports for py3.5 using pyqt5, and they all "seem" to have to do with graphic rendering -- that is just my quick glance out on the internet using the crash designated [ python3.5 crashed with SIGSEGV in QTextEngine::shapeTextWithHarfbuzzNG() ]

Hi, I didn't expect to have that many issues with an older version of python... welp. Either way, thanks for your help :).

Yeah, sometimes you do and sometimes you do not -- it is a crap shoot... box cars or snake eyes do come up from time to time. When investigating this I saw that I could use earlier versions of python, but frankly there are reasons that they make changes to the software, and using an earlier version (if you do not have to) is just asking for trouble (imho). Further, I had noticed in my quick research that while pyqt5 has been tweaked to work with earlier versions of python, it was not designed to.
As a final note -- when developing something new I almost always make sure I am using the latest, most up-to-date tools, especially since in today's rapidly advancing technological environment hardware actually becomes obsolete in about 5 years or so, and software, while it will sometimes last longer, has also been changing fairly quickly. For instance, Python 2.7 will no longer be supported come next year, and we are only on Python 3.7. So it behooves you to get the latest and greatest stable platforms to do any new coding on or with. If you need more assistance once you have upgraded, do drop a line; if I am about, I would enjoy lending you a hand.

Hey @KCocco, your project interested me so I started looking into it, and I came across this and thought you might be interested. Granted, I think you are using Ubuntu, but I have found that there are similar aspects from one OS to another, so maybe there is something here you can use to help you. I also see you might be using an earlier version of python -- still, all-things-considered, I would simply get the source code in bits and pieces and begin updating it to 3.7 and pyqt5 ... which is kind of what I plan to do ... so perhaps we can help one another. I was going to send you an email but could not, so figured I would try it this way.
https://forum.qt.io/topic/103631/segmentation-fault
I want to create an SMS sending application using PHP with an SMS API. In my response message I can see a success code and success message, but my problem is that my log file says:

Passed Exception exception 'SMSServiceException' with message 'Format of the address is invalid'

I think the problem starts when I hash-code the phone number, and I don't know how to do that. Please help me. I have tried converting the phone number using md5(), but the result is the same.

my index.php

$jsonData = array(
    'requestId' => '',
    'message' => 'thismsg hello',
    'password' => '25c8db49905003e3347ad861546fce1a',
    'sourceAddress' => '77000',
    'deliveryStatusRequest' => '1',
    'chargingAmount' => '2.00',
    'destinationAddresses' => ['88b7a1e8dbf419a2c0835b4f33d06c1a'], // this is converted with md5
    'applicationId' => 'APP_051000',
    'encoding' => '0',
    'version' => '1.0',
    'binaryHeader' => ''
);

my .log file result:

[01-Feb-2019 08:57:34 Asia/Colombo] Message received msg_header hello
[01-Feb-2019 08:57:35 Asia/Colombo] Passed Exception exception 'SMSServiceException' with message 'Format of the address is invalid.' in /ophielapp/lib/SMSSender.php:58
Stack trace:
#0 /ophielapp/lib/SMSSender.php(46): SMSSender->handleResponse(Object(stdClass))
#1 /ophielapp/lib/SMSSender.php(34): SMSSender->sendRequest('{"applicationId...')
#2 /ophielapp/sms.php(66): SMSSender->sendMessage('hello', '77000')
#3 {main}

I want to correctly send the phone number into smsSender.php with a hash code. If you want more details of the other PHP files, I can provide them.
result in my browser: (image of the response)

this is my smsSender.php

<?php
require_once 'SMSServiceException.php';

class SMSSender {
    private $applicationId, $serverURL;

    public function __construct($serverURL, $applicationId, $password) {
        $this->applicationId = $applicationId;
        $this->password = $password;
        $this->serverURL = $serverURL;
    }

    public function broadcastMessage($message) {
        return $this->sendMessage($message, array('tel:all'));
    }

    public function sendMessage($message, $addresses) {
        if (empty($addresses))
            throw new SMSServiceException('Format of the address is invalid.', 'E1325');
        else {
            $jsonStream = (is_string($addresses)) ? $this->resolveJsonStream($message, array($addresses)) : (is_array($addresses) ? $this->resolveJsonStream($message, $addresses) : null);
            return ($jsonStream != null) ? $this->sendRequest($jsonStream) : false;
        }
    }

    private function sendRequest($jsonStream) {
        $opts = array('http' => array('method' => 'POST', 'header' => 'Content-type: application/json', 'content' => $jsonStream));
        $context = stream_context_create($opts);
        $response = file_get_contents($this->serverURL, 0, $context);
        return $this->handleResponse(json_decode($response));
    }

    private function handleResponse($jsonResponse) {
        $statusCode = $jsonResponse->statusCode;
        $statusDetail = $jsonResponse->statusDetail;
        if (empty($jsonResponse))
            throw new SMSServiceException('Invalid server URL', '500');
        else if (strcmp($statusCode, 'S1000') == 0)
            return true;
        else
            throw new SMSServiceException($statusDetail, $statusCode);
    }

    private function resolveJsonStream($message, $addresses) { // $addresses is an array
        $messageDetails = array('message' => $message, 'destinationAddresses' => $addresses);
        $applicationDetails = array('applicationId' => $this->applicationId, 'password' => $this->password);
        $jsonStream = json_encode($applicationDetails + $messageDetails);
        return $jsonStream;
    }

    public function get_location_stream($addresse) {
        $reqDetails = array(
            'applicationId' => $this->applicationId,
            'password' => $this->password,
            'serviceType' => 'IMMEDIATE',
            'subscriberId' => $addresse
        );
        return json_encode($reqDetails);
    }

    public function getlocation($addresse) {
        //$jsonStream = get_location_stream($addresse);
        $jsonStream = array(
            'applicationId' => $this->applicationId,
            'password' => $this->password,
            'serviceType' => 'IMMEDIATE',
            'subscriberId' => $addresse
        );
        print_r($jsonStream);
        json_encode($jsonStream);
        $opts = array('http' => array('method' => 'POST', 'header' => 'Content-Type: application/json', 'content' => $jsonStream));
        $context = stream_context_create($opts);
        $response = file_get_contents('', 0, $context);
        echo $response;
        //return $this->location_response(json_decode($response));
        return json_decode($response);
    }

    public function location_response($jsonResponse) {
        $statusCode = $jsonResponse->statusCode;
        if (empty($jsonResponse)) {
            throw new SMSServiceException('Invalid server URL', '500');
        } else if (strcmp($statusCode, 'S1000') == 0) {
            return array(
                $jsonResponse->longitude,
                $jsonResponse->latitude,
                $jsonResponse->horizontalAccuracy,
                $jsonResponse->freshness,
                $jsonResponse->messageId
            );
        } else {
            throw new SMSServiceException($statusDetail, $statusCode);
        }
    }

    public function getResponse($addresse) {
        $jsonStream = array(
            'applicationId' => $this->applicationId,
            'password' => $this->password,
            'serviceType' => 'IMMEDIATE',
            'subscriberId' => $addresse
        );
        $opts = array('http' => array('method' => 'POST', 'header' => 'Content-Type: application/json', 'content' => json_encode($jsonStream)));
        $context = stream_context_create($opts);
        $response = file_get_contents('', 0, $context);
        return json_decode($response);
    }
}
?>

destinationAddresses should be passed as a string, not an array:

'destinationAddresses' => '88b7a1e8dbf419a2c0835b4f33d06c1a',
https://morioh.com/p/21595a556966
Acyclic Steps for Dart

Disclaimer: This is not an officially supported Google product.

Package acyclic_steps enables the definition of steps with acyclic dependencies on other steps and the evaluation of such steps. A step is a function (optionally async) which produces a result (or side effect). A step may depend on other steps, but cyclic dependencies will produce a compile-time error. When a step is evaluated, the dependencies for the step are evaluated first. To the extent permitted by dependency constraints, the steps depended upon will run concurrently. Steps can also be overridden to inject an initial value, or a mock/fake object during testing. The result from a Step is cached in the Runner object that evaluated the Step; this ensures that steps will not be repeated.

package:acyclic_steps was written to facilitate complex projects with many components that depend on other components to be initialized. This is frequently the case for servers, where one step might be to set up a database connection, while other steps depend upon the database connection. It is also frequently desirable to be able to override the database connection step during testing, to use a different database or even a different database driver. The package is also intended to be useful for evaluation of complex task graphs, where tasks may depend on the results of previous tasks.

Example

import 'dart:async' show Future;

import 'package:acyclic_steps/acyclic_steps.dart';

/// A step that provides a message, this is a _virtual step_ because it
/// doesn't have an implementation instead it throws an error. Hence, to
/// evaluate a step that depends on [messageStep] it is necessary to
/// override this step, by injecting a value to replace it.
final Step<String> messageStep = Step.define('message').build(
  () => throw UnimplementedError('message must be overridden with input'),
);

/// A step that provides date and time
final dateTimeStep = Step.define('date-time').build(
  () => DateTime.now().toString(),
);

/// A step which has side effects.
final Step<void> printStep = Step.define(
  'print',
)
    // Dependencies:
    .dep(messageStep)
    .dep(dateTimeStep)
    // Method to build the step
    .build((
  msg, // result from evaluation of messageStep
  time, // result from evaluation of dateTimeStep
) async {
  await Future.delayed(Duration(milliseconds: 100));
  print('$msg at $time');
});

Future<void> main() async {
  final r = Runner();
  // Override [messageStep] to provide an input value.
  r.override(messageStep, 'hello world');
  // Evaluate [printStep] which in turn evaluates [dateTimeStep], and re-uses
  // the overridden value for [messageStep].
  await r.run(printStep);

  // When testing it might be desirable to override the [dateTimeStep] to
  // produce the same output independent of time. To do this we must create a
  // new runner:
  final testRunner = Runner();
  testRunner.override(messageStep, 'hello world');
  testRunner.override(dateTimeStep, '2019-11-04 09:47:37.461795');
  // Now we can be sure that [dateTimeStep] evaluates to something predictable
  assert(await testRunner.run(dateTimeStep) == '2019-11-04 09:47:37.461795');
  // This will print a fixed time, useful when testing.
  await testRunner.run(printStep);
}

Libraries

- acyclic_steps - The package:acyclic_steps/acyclic_steps.dart library enables the definition and execution of acyclic graphs of dependent steps.
- step_builder -
https://pub.dev/documentation/acyclic_steps/latest/
Python and Data Science from Scratch With Real-Life Exercises

What you'll learn

- Learn the skills for collecting, shaping, storing, managing, and analyzing data with Python
- The rise of data science needs will create 11.5 million job openings by 2026
- Learn in-demand data science careers
- Learn to use Python professionally
- Learn to use Python 3
- Learn to use Object Oriented Programming
- Free software and tools used during the course
- You will be able to work with Python functions, namespaces and modules
- Apply the Python knowledge you get from this course in coding exercises and real-life scenarios
- Build a portfolio with your Python skills
- Fundamentals of the Pandas library
- Installation of Anaconda and how to use Anaconda
- Using Jupyter notebook for Python and data science
- NumPy arrays
- Combining DataFrames, data munging, and how to deal with missing data
- How to use the Matplotlib library and start your journey in data visualization
- Whether you're interested in machine learning, data mining, or data analysis, Udemy has a course for you.
- OAK offers highly-rated data science courses that will help you learn how to visualize and respond to new data, as well as develop innovative new technologies
- Python instructors on OAK Academy specialize in everything from software development to data analysis, and are known for their effectiveness.
- Python is a multi-paradigm language, which means that it supports many programming approaches, along with procedural and functional programming styles
- Data science is everywhere. Better data science practices are allowing corporations to cut unnecessary costs, automate computing, and analyze markets.
- Data science is the key to getting ahead in a competitive global climate.
- Data science uses algorithms to understand raw data. The main difference between data science and traditional data analysis is its focus on prediction.
- Data scientists use machine learning to discover hidden patterns in large amounts of raw data to shed light on real problems.
- Data science requires lifelong learning, so you will never really finish learning.
- Python is a popular language that is used across many industries and in many programming disciplines. DevOps engineers use Python to script websites.
- Python is a general programming language used widely across many industries and platforms. One common use of Python is scripting, which means automating tasks.
- Python has a simple syntax that makes it an excellent programming language for a beginner to learn.
- Python is a widely used, general-purpose programming language, but it has some limitations, because Python is an interpreted, dynamically typed language.
- It is possible to learn data science on your own, as long as you stay focused and motivated. Luckily, there are a lot of online courses and boot camps available.
- Some people believe that it is possible to become a data scientist without knowing how to code, but others disagree.
- A data scientist requires many skills. They need a strong understanding of statistical analysis and mathematics, which are essential pillars of data science.
- The demand for data scientists is growing. We do not just have data scientists; we have data engineers, data administrators, and analytics managers.
Requirements

- No prior data science, Python, Pandas, or NumPy knowledge is required
- Free software and tools are used during the course
- Basic computer knowledge
- Desire to learn data science
- Motivation to learn the programming language with the second largest number of job postings among all others
- Curiosity for Python programming
- Desire to learn Python
- Desire to work on data science projects
- Desire to learn data science from scratch
- Desire to learn Python, Pandas, and NumPy
- LIFETIME ACCESS, course updates, new content, anytime, anywhere, on any device
- Nothing else! It's just you, your computer and your ambition to get started today

Description

Welcome to my "Python and Data Science from Scratch With Real Life Exercises" course: Python data science with Python programming, NumPy, Pandas, and Matplotlib, diving into data science with Python projects.

OAK Academy offers highly-rated data science courses that will help you learn how to visualize and respond to new data, as well as develop innovative new technologies. Whether you're interested in machine learning, data mining, or data analysis, Udemy has a course for you. Data science is everywhere.

"Python and Data Science from Scratch With Real Life Exercises!" is a straightforward course for the Python programming language. In the course, you will get down-to-earth explanations with hands-on projects. With this course, you will learn Python programming step by step. I made Python 3 programming simple and easy with exercises, challenges, and lots of real-life examples. We will open the door of the data science world and move deeper.
You will learn the fundamentals of Python and its beautiful libraries such as NumPy, Pandas, and Matplotlib step by step. Throughout the course, we will teach you how to use Python to analyze data, create beautiful visualizations, and use powerful machine learning algorithms, and we will also do a variety of exercises to reinforce what we have learned in this Python for Data Science course. This Python and Data Science course is for everyone!

As a science, data science progresses by creating new algorithms to analyze data and validate current methods.

What does a data scientist do?

Data scientists use machine learning to discover hidden patterns in large amounts of raw data to shed light on real problems. This requires several steps. First, they must identify a suitable problem. Next, they determine what data are needed to solve such a situation and figure out how to get the data. Once they obtain the data, they need to clean the data. The data may not be formatted correctly, it might have additional unnecessary data, it might be missing entries, or some data might be incorrect. Data scientists must, therefore, make sure the data is clean before they analyze it. To analyze the data, they use machine learning techniques to build models. Once they create a model, they test, refine, and finally put it into production.

What are the most popular coding languages for data science?

Python is the most popular programming language for data science. It is a universal language that has a lot of libraries available. It is also a good beginner language. R is also popular; however, it is more complex and designed for statistical analysis. It might be a good choice if you want to specialize in statistical analysis. You will want to know either Python or R, plus SQL. SQL is a query language designed for relational databases.
Data scientists deal with large amounts of data, and they store a lot of that data in relational databases. Those are the three most-used programming languages. Other languages such as Java, C++, JavaScript, and Scala are also used, albeit less so. If you already have a background in those languages, you can explore the tools available in those languages. However, if you already know another programming language, you will likely be able to pick up Python quickly. How long does it take to become a data scientist? This answer, of course, varies. The more time you devote to learning new skills, the faster you will learn. It will also depend on your starting place. If you already have a strong base in mathematics and statistics, you will have less to learn. If you have no background in statistics or advanced mathematics, you can still become a data scientist; it will just take a bit longer. Data science requires lifelong learning, so you will never really finish learning. A better question might be, "How can I gauge whether I know enough to become a data scientist?" Challenge yourself to complete data science projects using open data. The more you practice, the more you will learn, and the more confident you will become. Once you have several projects that you can point to as good examples of your skillset as a data scientist, you are ready to enter the field. How can I learn data science on my own? It is possible to learn data science on your own, as long as you stay focused and motivated. Luckily, there are a lot of online courses and boot camps available. Start by determining what interests you about data science. If you gravitate to visualizations, begin learning about them. Starting with something that excites you will motivate you to take that first step. If you are not sure where you want to start, try starting with learning Python. It is an excellent introduction to programming languages and will be useful as a data scientist.
Begin by working through tutorials or Udemy courses on the topic of your choice. Once you have developed a base in the skills that interest you, it can help to talk with someone in the field. Find out what skills employers are looking for and continue to learn those skills. When learning on your own, setting practical learning goals can keep you motivated. Does data science require coding? The jury is still out on this one. Some people believe that it is possible to become a data scientist without knowing how to code, but others disagree. A lot of algorithms have been developed and optimized in the field. You could argue that it is more important to understand how to use the algorithms than how to code them yourself. As the field grows, more platforms are available that automate much of the process. However, as it stands now, employers are primarily looking for people who can code, and you need basic programming skills. The data scientist role is continuing to evolve, so that might not be true in the future. The best advice would be to find the path that fits your skill set. What skills should a data scientist know? A data scientist requires many skills. They need a strong understanding of statistical analysis and mathematics, which are essential pillars of data science. A good understanding of these concepts will help you understand the basic premises of data science. Familiarity with machine learning is also important. Machine learning is a valuable tool to find patterns in large data sets. To manage large data sets, data scientists must be familiar with databases. Structured query language (SQL) is a must-have skill for data scientists. However, nonrelational databases (NoSQL) are growing in popularity, so a greater understanding of database structures is beneficial. The dominant programming language in data science is Python — although R is also popular. A basis in at least one of these languages is a good starting point. Finally, data scientists need to be able to communicate their findings.
Is data science a good career? The demand for data scientists is growing. We do not just have data scientists; we have data engineers, data administrators, and analytics managers. The jobs also generally pay well. This might make you wonder if it would be a promising career for you. A better understanding of the type of work a data scientist does can help you understand if it might be the path for you. First and foremost, you must think analytically. Data science is about gaining a more in-depth understanding of info through data. Do you fact-check information and enjoy diving into the statistics? Although the actual work may be quite technical, the findings still need to be communicated. Can you explain complex findings to someone who does not have a technical background? Many data scientists work in cross-functional teams and must share their results with people with very different backgrounds. What is Python? Python is a general-purpose, object-oriented, high-level programming language. Whether you work in artificial intelligence or finance, or are pursuing a career in web development or data science, Python is one of the most important skills you can learn. What are the limitations of Python? Python is a widely used, general-purpose programming language, but it has some limitations. Because Python is an interpreted language, it is slower than compiled languages. Udemy’s online courses are a great place to start if you want to learn Python on your own. No prior knowledge is needed! No prior knowledge is needed to learn Python, and Python code is easy for beginners to understand. What will you learn? In this course, we will start from the very beginning and go all the way to programming with hands-on examples. We will first learn how to set up a lab and install the needed software on your machine.
Then during the course, you will learn the fundamentals of Python development, like:
- Variables, data types, numbers, and strings
- Conditionals and loops
- Functions and modules
- Lists, dictionaries, and tuples
- File operations
- Object-oriented programming
- How to use Anaconda and Jupyter Notebook
- Fundamental things about the Matplotlib library, such as Pyplot, Pylab, and Matplotlib concepts
- What Figure, Subplot, and Axes are
- How to do figure and plot customization
Dive into my "Python and Data Science from Scratch With Real Life Exercises" course now! See you in the course!
https://www.udemy.com/course/python-and-data-science-from-scratch-with-reallife-exercises/?referralCode=54C89B52CAF64DD2D876
CC-MAIN-2022-40
en
refinedweb
rosdoc generation for wiki

I use the following command to generate a package's documentation:

    rosdoc_lite [name-of-pkg]

When inspecting the resulting offline documentation, I noticed that it's different from the one autogenerated for the ROS wiki under the Code API tab ([name-of-pkg]/html/index.html). In particular, the online version has all the auxiliary srv:: and cfg:: namespaces (and possibly others) hidden. Two questions:
- Is there a way to generate documentation offline which is the same as the online one?
- If not, what exactly are the differences between the two?
https://answers.ros.org/question/54243/rosdoc-generation-for-wiki/
Mixing SDK and NDK

Android app development is split into two very distinct worlds. On the one side, there's the Android SDK, which is what the bulk of Android apps are developed in. The SDK is based on the Java Runtime and the standard Java APIs, and it provides a very high-level development experience. Traditionally, the Java language or Kotlin would be used to develop in this space. And then there's the Android NDK, which sits at a much lower level and allows writing code directly for the native CPUs (e.g. ARM or x86). This code works against lower-level APIs provided by Android and the underlying Linux operating system that Android is based on, and traditionally one would use a low-level language such as C to write code at this level. The Java Native Interface, or JNI, allows the two worlds to interact, making it possible for SDK-level JVM code to call NDK-level native functions, and vice versa. Elements makes it really easy to develop apps that mix SDK and NDK, in several ways:
- A shared language for SDK and NDK
- Easy bundling, with Project References
- Automatic generation of JNI imports
- Mixed Mode Debugging

A Shared Language for SDK and NDK

The first part is the most obvious and trivial. Since Elements decouples language from platform, whatever the language of choice is, you can use it to develop both the JVM-based SDK portion of your app and the native NDK part. No need to fall back to a low-level language like C for the native extension.

Easy Bundling of NDK Extensions, with Project References

Once you have an SDK-based app and one or more native extensions in your project, you can bundle the extension(s) into your final .apk simply by adding a conventional Project Reference to them, for example by dragging the extension project onto the app project in Fire or Water.
Even though the two projects are of a completely different type, the EBuild build chain takes care of establishing the appropriate relationship and adding the final NDK binaries into the "JNI" subfolder of your final .apk.

Automatic Generation of JNI Imports

Establishing a project reference to your NDK extension also automatically generates JNI imports for any APIs you expose from your native project. All you need to do is mark your native methods with the JNIExport aspect, as such:

Oxygene:

    [JNIExport(ClassName := 'com.example.myandroidapp.MainActivity')]
    method HelloFromNDK(env: ^JNIEnv; this: jobject): jstring;
    begin
      result := env^^.NewStringUTF(env, 'Hello from NDK!');
    end;

C#:

    [JNIExport(ClassName = "com.example.myandroidapp.MainActivity")]
    public jstring HelloFromNDK(^JNIEnv env, jobject thiz)
    {
        return (**env).NewStringUTF(env, "Hello from NDK!");
    }

Swift:

    @JNIExport(ClassName = "com.example.myandroidapp.MainActivity")
    public func HelloFromNDK(_ env: ^JNIEnv, _ this: jobject) -> jstring {
        return (**env).NewStringUTF(env, "Hello from NDK!");
    }

Java:

    @JNIExport(ClassName = "com.example.myandroidapp.MainActivity")
    public jstring HelloFromNDK(^JNIEnv env, jobject thiz) {
        return (**env).NewStringUTF(env, "Hello from NDK!");
    }

As part of the build, the compiler will generate a source file with import stubs for any such APIs, and inject that into your main Android SDK project. That source file will contain Partial Classes (or Extensions, in Swift parlance) matching the namespace and class name you specified.
All you need to do (in Oxygene, C# or Java) is to mark your own implementation of the Activity as partial (__partial in Java; no action is needed in Swift), and the new methods implemented in your NDK extension will automatically be available to your code:

Oxygene:

    namespace com.example.myandroidapp;

    type
      MainActivity = public partial class(Activity)
      public
        method onCreate(savedInstanceState: Bundle); override;
        begin
          inherited;
          // Set our view from the "main" layout resource
          ContentView := R.layout.main;
          HelloFromNDK;
        end;
      end;

    end;

C#:

    namespace com.example.myandroidapp
    {
        public partial class MainActivity : Activity
        {
            public override void onCreate(Bundle savedInstanceState)
            {
                base(savedInstanceState);
                // Set our view from the "main" layout resource
                ContentView = R.layout.main;
                HelloFromNDK();
            }
        }
    }

Swift:

    public class MainActivity : Activity {
        override func onCreate(_ savedInstanceState: Bundle) {
            super(savedInstanceState)
            // Set our view from the "main" layout resource
            ContentView = R.layout.main
            HelloFromNDK()
        }
    }

Java:

    package com.example.myandroidapp;

    public __partial class MainActivity : Activity {
        public override void onCreate(Bundle savedInstanceState) {
            base(savedInstanceState);
            // Set our view from the "main" layout resource
            ContentView = R.layout.main;
            HelloFromNDK();
        }
    }

Of course you can use any arbitrary class name in the JNIExport aspect; it does not have to match an existing type in your SDK project. If you do that, rather than becoming available as part of your Activity (or whatever other class), the imported APIs will be on a separate class you can just instantiate. To see the generated imports, search your build log for "JNI.pas" to get the full path to the file that gets generated and injected into your project. You can also just invoke "Go to Definition" (^⌥D in Fire, Ctrl+Alt+D in Water) on a call to one of the methods to open the file, as the IDE will treat it as a regular part of your project.
(The same, by the way, is also true of the R.java file generated by the build, which defines the R class that gives access to all your resources.)

Mixed Mode Debugging

Finally, Elements allows you to debug both your SDK app and its embedded NDK extensions at the same time. You can set breakpoints in both Java and native code, and explore both sides of your app and how they interact. All of this is controlled by two settings, but Fire and Water, our IDEs, automate the process for you so you don't even have to worry about them yourself.

First, there's the "Support Native Debugging" option in your NDK project. It's enabled by default for the Debug configuration in new projects, and it instructs the build chain to deploy the LLDB debugger library as part of your native library (and have it, in turn, bundled into your .apk). This is what allows the debugger to attach to the NDK portion of your app later.

Secondly, there's the "Debug Engine" option in your SDK project. It defaults to "Java", for JVM-only debugging, but as soon as you add a Project Reference to an NDK extension to your app, it will switch to "Both" (again, only for the Debug configuration), instructing the Elements Debugger to start both JVM and native debug sessions when you launch your app. This, of course, works both in the Emulator and on the device.

See Also
- Android SDK and Android NDK
- Debugging Android Projects
- JNIExport Aspect
- Java Native Interface
- Video: Mixed Mode Android Apps
- Blog post: Debugging Mixed-Mode Android Apps

Note that the video and blog post above were created before the automatic generation of JNI Imports was available, so they still mention having to define the import manually.
https://docs.elementscompiler.com/Platforms/Android/Mixing/
C++ With the Command Line

Authors: Benjamin Qi, Hankai Zhang, Anthony Wang, Nathan Wang, Nathan Chen, Michael Lan, Arpan Banerjee

OS-specific instructions for installing and running C++ via the command line.

Prerequisites: Command Line Basics

General ...

Linux ...

Mac

Should be mostly the same as Linux ... Open the Terminal application and familiarize yourself with some basic commands. Upgrade to zsh if you haven't already.

Windows ...

Installing g++

USACO (and most contests) use GCC's g++ to compile and run your code. You'll need g++ specifically to use the #include <bits/stdc++.h> header file; see Running Code Locally for details.

On Linux

GCC is usually preinstalled on most Linux distros. You can check if it is installed with

    whereis g++

If it is not preinstalled, you can probably install it using your distro's package manager.

On Mac

Install XCode command line tools.

    xcode-select --install

If you previously installed these you may need to update them:

    softwareupdate --list   # list updates
    softwareupdate -i -a    # installs all updates

After this step, clang should be installed (try running clang --version in Terminal).

You should be able to compile with g++-#, where # is the version number (e.g., 10). Running the following command

    g++-10 --version

should display something like this:

    g++-10 (Homebrew GCC 10.2.0_2) 10.2.0
    Copyright (C) 2020 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions. There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

If you want to be able to compile with just g++, write a shell alias! Put the following line into your shell's rc file (~/.bashrc if you use bash, and ~/.zshrc if you use zsh).

    alias g++=g++-10

Once you do so, g++ --version should now output the same thing as g++-10 --version.
Note: avoid overriding the system g++ with symlinking or hard-linking, as that will almost surely cause problems. Don't worry if you don't know what those terms mean.

On Windows

Simpler: Mingw-w64 (Minimalist GNU for Windows). Harder: Windows Subsystem for Linux (WSL). If you're already accustomed to the Linux command line, this might be the best option for you. Windows Subsystem for Linux, commonly referred to as WSL, runs the Linux kernel (or an emulation layer, depending on which version you use) within your Windows installation. This allows you to use Linux binaries without needing to use Linux as your main operating system. Many people use WSL (such as Anthony), but it can be difficult to properly set up. If you want to code in (neo)vim, you can install WSL and code through WSL bash.

To install the necessary tools after setting up WSL, you can run the following commands.

On Debian-based distributions like Ubuntu:

    sudo apt-get install build-essential

On Arch-based distributions like Arch Linux:

    sudo pacman -Sy base-devel

You can find many tutorials on how to style up WSL and make it feel more cozy. The first step is to use a proper terminal and not the default one that Windows provides. An easy-to-use option is Windows Terminal, which can be found on the Microsoft Store.

C++ with the Command Line

Basics of Compiling & Running

Consider a simple program such as the following, which we'll save in name.cpp.

    #include <iostream>
    using namespace std;

    int main() {
        int x;
        cin >> x;
        cout << "FOUND " << x << "\n";
    }

It's not hard to compile & run a C++ program. First, open up Powershell on Windows, Terminal on Mac, or your distro's terminal in Linux. We can compile name.cpp into an executable named name with the following command:

    g++ name.cpp -o name

Then we can execute the program:

    ./name

If you type some integer and then press enter, then the program should produce output.
We can write both of these commands in a single line:

    g++ name.cpp -o name && ./name

Note that && ensures that ./name only runs if g++ name.cpp -o name finishes successfully.

Redirecting Input & Output

If you want to read standard input from inp.txt, use the following:

    ./name < inp.txt

If you want to write standard output to out.txt, then use the following:

    ./name > out.txt

They can also be used in conjunction, as shown below:

    ./name < inp.txt > out.txt

See Input & Output for how to do file input and output within the program.

Compiler Options (aka Flags)

Use compiler flags to change the way GCC compiles your code. Usually, we use something like the following in place of g++ name.cpp -o name:

    g++ -std=c++17 -O2 name.cpp -o name -Wall

- -std=c++17 allows you to use features that were added to C++ in 2017. USACO recently upgraded from C++11 to C++17.
- You should always compile with these flags.

Adding Shortcuts (Mac)

For Users of Linux & Windows: the process is similar for Linux. If you're on Windows, you can use an IDE to get these shortcuts, or you can install WSL (mentioned above).

Retyping the compiler flags above can get tedious. You should define shortcuts so you don't need to type them every time!

First, create your .zshrc if it doesn't already exist.

    touch ~/.zshrc

Open your .zshrc with a text editor,

    open ~/.zshrc

or some text editor (e.g., Sublime Text with subl).

    subl ~/.zshrc

You can add aliases and functions here, such as the following to compile and run C++ on Mac.

    co() { g++ -std=c++17 -O2 -o "${1%.*}" $1 -Wall; }
    run() { co $1 && ./${1%.*} & fg; }

Now you can easily compile and run name.cpp from the command line with co name.cpp && ./name or run name.cpp. Note that all occurrences of $1 in the function are replaced with name.cpp, while ${1%.*} removes the file extension from $1 to produce name. What is & fg for?
Let prog.cpp denote the following file:

    #include <iostream>
    #include <vector>
    using namespace std;

    int main() {
        vector<int> v;
        cout << v[-1];
    }

According to the resource above, the & fg is necessary for getting zsh on Mac to display crash messages (such as segmentation fault). For example, consider running the prog.cpp above with run prog.cpp. If & fg is removed from the run command above then the terminal displays no message at all. Leaving it in produces the following (ignore the first two lines):

    [2] 30594
    [2]  - running    ./${1%.*}
    zsh: segmentation fault  ./${1%.*}

Measuring Time & Memory Usage (Mac)

For example, suppose that prog.cpp consists of the following:

    #include <bits/stdc++.h>
    using namespace std;

    const int BIG = 1e7;
    int a[BIG];

    int main() {
        int sum = 0;
        for (int i = 0; i < BIG; ++i) sum += a[i];
        cout << sum;
    }

Then co prog.cpp && gtime -v ./prog gives the following:

    Command being timed: "./prog"
    User time (seconds): 0.01
    System time (seconds): 0.01
    Percent of CPU this job got: 11%
    Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.22
    Average shared text size (kbytes): 0
    Average unshared data size (kbytes): 0
    Average stack size (kbytes): 0
    Average total size (kbytes): 0
    Maximum resident set size (kbytes): 40216
    Average resident set size (kbytes): 0
    Major (requiring I/O) page faults: 91
    Minor (reclaiming a frame) page faults: 10088
    Voluntary context switches: 3
    Involuntary context switches: 38
    Swaps: 0
    File system inputs: 0
    File system outputs: 0
    Socket messages sent: 0
    Socket messages received: 0
    Signals delivered: 0
    Page size (bytes): 4096
    Exit status: 0

Note that 10^7 int values require about 4 * 10^7 bytes ≈ 40,000 kilobytes of memory, which is close to the maximum resident set size of 40216 kbytes in the above output, as expected.

Adjusting Stack Size (Mac)

Warning! This section might be out of date.
Let A.cpp denote the following program:

    #include <iostream>
    using namespace std;

    int res(int x) {
        if (x == 200000) return x;
        return res(x + 1);
    }

    int main() {
        cout << res(0) << "\n";
    }

If we compile and run this with g++ A.cpp -o A && ./A, this outputs 200000. However, changing 200000 to 300000 gives a segmentation fault. Similarly,

    #include <iostream>
    using namespace std;

    int main() {
        int arr[2000000];
        cout << arr[0] << "\n";
    }

runs, but changing 2000000 to 3000000 also gives a segmentation fault. This is because the stack size on Mac appears to be limited to 8 megabytes by default. Note that USACO does not have a stack size limit, aside from the usual 256 MB memory limit. Therefore, code that crashes locally due to a stack overflow error may still pass on the USACO servers. To get your code running locally, use one of the methods below.

Warning! This matters particularly for contests such as Facebook Hacker Cup where you submit the output of a program you run locally.

Method 1

    ulimit -s 65532

will increase the stack size to about 64 MB. Unfortunately, this doesn't work for higher numbers.

Method 2

To get around this, we can pass a linker option. According to the manual for ld (enter man ld in Terminal), the option -stack_size size does the following:

    Specifies the maximum stack size for the main thread in a program. Without this
    option a program has a 8MB stack. The argument size is a hexadecimal number with
    an optional leading 0x. The size should be a multiple of the architecture's page
    size (4KB or 16KB).

So including -Wl,-stack_size,0x10000000 as part of your compilation command will set the maximum stack size to 0x10000000 bytes = 256 megabytes, which is usually sufficient. However, running the first program above with 200000 replaced by 1e7 still gives an error. In this case, you can further increase the maximum stack size (e.g., changing 0x10000000 to 0xF0000000). On Windows, adding -Wl,--stack,268435456 as a part of your compilation flags should do the trick.
The 268435456 corresponds to 268435456 bytes, or 256 megabytes. If you are using Windows PowerShell, make sure to wrap it in quotations (like so: "-Wl,--stack,268435456"), since commas are considered to be special characters.
https://usaco.guide/general/cpp-command/
In earlier versions of Visual Studio it was very difficult to do multiple file uploads because we had to write many lines of code, but in ASP.Net 4.5 Visual Studio provides the feature to enable the upload of multiple files at a time without writing much code. So let us learn about the ASP.Net 4.5 File Upload control step-by-step.

File Upload Control

The File Upload control displays a text box control and a browse button that enables users to select a file to upload to the server. The following are some of the common properties of the ASP.Net File Upload control:

- AllowMultiple: Gets or sets a true or false value that specifies whether multiple files can be selected for upload; if true then it allows multiple file uploads. The default is false.
- FileBytes: Gets an array of the bytes in a file that is specified using a System.Web.UI.WebControls.FileUpload control.
- FileContent: Gets a System.IO.Stream object that points to a file to upload using the System.Web.UI.WebControls.FileUpload control.
- FileName: Gets the name of a file on a client to upload using the System.Web.UI.WebControls.FileUpload control.
- HasFile: Gets a value indicating whether the System.Web.UI.WebControls.FileUpload control contains a file.
- HasFiles: Gets a value indicating whether the System.Web.UI.WebControls.FileUpload control contains files.
- PostedFile: Gets the underlying System.Web.HttpPostedFile object for a file that is uploaded using the System.Web.UI.WebControls.FileUpload control.
- PostedFiles: Gets the collection of uploaded files.

The following are some of the methods of the ASP.Net File Upload control:

- AddAttributesToRender: Adds the HTML attributes and styles of a System.Web.UI.WebControls.FileUpload control to render to the specified System.Web.UI.HtmlTextWriter object.
- OnPreRender: Raises the System.Web.UI.Control.PreRender event for the System.Web.UI.WebControls.FileUpload control.
- Render: Sends the System.Web.UI.WebControls.FileUpload control content to the specified System.Web.UI.HtmlTextWriter object, which writes the content to render on the client.
- SaveAs: Saves the contents of an uploaded file to a specified path on the Web server.

Now let us demonstrate the preceding explanation by creating a sample web application as follows:

1. "Start" - "All Programs" - "Microsoft Visual Studio 2010".
2. "File" - "New WebSite" - "C#" - "Empty WebSite" (to avoid adding a master page).
3. Provide the web site a name such as "UsingMultiUpload" or another as you wish and specify the location.
4. Then right-click on Solution Explorer, select "Add New Item" and add a Web Form.
5. Drag and drop one Button, a Label and a FileUpload control onto the <form> section of the Default.aspx page.

Now the default.aspx page source code will look as follows (control names such as FileUpload1, btnUpload and lblMsg are used here and are referenced from the code-behind below):

    <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>

    <html xmlns="http://www.w3.org/1999/xhtml">
    <head runat="server">
        <title>Article by Vithal Wadje</title>
    </head>
    <body bgcolor="gray">
        <form id="form1" runat="server">
            <br />
            <br />
            <div style="color: white">
                <h4>Article for</h4>
                <table>
                    <tr>
                        <td>Select Files</td>
                        <td>
                            <asp:FileUpload ID="FileUpload1" runat="server" />
                        </td>
                        <td></td>
                        <td>
                            <asp:Button ID="btnUpload" runat="server" Text="Upload" OnClick="btnUpload_Click" />
                        </td>
                    </tr>
                </table>
            </div>
            <asp:Label ID="lblMsg" runat="server"></asp:Label>
        </form>
    </body>
    </html>

Now set the File Upload control's AllowMultiple property to true, as in the following:

    <asp:FileUpload ID="FileUpload1" runat="server" AllowMultiple="true" />

Create a folder in Solution Explorer by right-clicking the project (here it is assumed to be named UploadedFiles), in which to save the uploaded files:
The UI will look such as follows: /> /> Now click on the Browse Button and select multiple files by Pressing the Ctrl button of the keybord as in the following: /> Now click on Open after selecting the files then the File Upload control saves the comma (,) separated file paths multiple files with a minimal amount of code and effort. Note - Do a proper validation such as if it has a file or not of the File Upload control when implementing. - For more details and explanation, download the Uploaded Zip file UploadMultipleFiles.zip Summary From all the preceding examples you have learned how to Upload multiple files. I hope this article is useful for all readers, if you have a suggestion then please contact me.
https://www.compilemode.com/2015/05/uploading-multiple-files-using-Asp-Net-4-5.html
Introduction

A distributed system consists of multiple components located on different networked computers, which communicate and coordinate their actions by passing messages to one another. Fault-tolerant applications can continue operating despite system, hardware, and network faults of one or more components. Idempotency in Web APIs ensures that the API works correctly (as designed) even when consumers (clients) send the same request multiple times. To simplify the integration of idempotency in an API project, we could use the IdempotentAPI open-source NuGet library. IdempotentAPI implements an ASP.NET Core attribute (filter) to handle the HTTP write operations (POST and PATCH) so that they take effect only once for the given request data and idempotency key.

In July 2021, we saw how the IdempotentAPI v0.1.0-beta in .NET Nakama (2021, July 4) provides an easy way to develop idempotent Web APIs in .NET Core. Since then, with the community’s help, several issues and improvements have been identified and implemented. The complete journey of the IdempotentAPI is available in the CHANGELOG.md file. Now, IdempotentAPI 1.0.0-RC-01 is available with many improvements 🎉✨. In the following sections, we will see the complete features, details regarding the improvements, the available NuGet packages, and instructions to start using the IdempotentAPI library quickly.

Features

- ⭐ Simple: Support idempotency in your APIs easily with three simple steps 1️⃣2️⃣3️⃣.
- 🔍 Validations: Performs validation of the request’s hash-key to ensure that the cached response is returned for the same combination of idempotency key and request, to prevent accidental misuse.
- 🌍 Use it anywhere!: IdempotentAPI targets .NET Standard 2.0, so we can use it in any compatible .NET implementation (.NET Framework, .NET Core, etc.). Click here to see the minimum .NET implementation versions that support each .NET Standard version.
- ⚙ Configurable: Customize the idempotency to your needs, with configuration options (see the GitHub repository for more details) and Logging Level configuration.
- 🔧 Caching implementation based on your needs:
  - 🏠 DistributedCache: A built-in caching implementation based on the standard IDistributedCache interface.
  - 🦥 FusionCache: A high-performance and robust cache with an optional distributed 2nd layer and advanced features.
  - … or you could use your own implementation 😉
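As a quick illustration of the "simple steps" above, a controller action decorated with the idempotency filter could look like the following sketch. The attribute and option names follow the project's README at the time of writing, so treat them as assumptions to verify against the package version you install; the controller and model names are made up for the example.

    using IdempotentAPI.Filters;
    using Microsoft.AspNetCore.Mvc;

    [ApiController]
    [Route("[controller]")]
    [Consumes("application/json")]
    [Produces("application/json")]
    public class PaymentsController : ControllerBase
    {
        // Repeated POSTs carrying the same IdempotencyKey header (and the
        // same request body) within the expiration window receive the cached
        // response instead of re-executing the action.
        [HttpPost]
        [Idempotent(ExpireHours = 48)]
        public IActionResult Create([FromBody] PaymentRequest payment)
        {
            // ... perform the side effect exactly once ...
            return Ok(new { Status = "Created" });
        }
    }

    public class PaymentRequest
    {
        public decimal Amount { get; set; }
    }

Clients then send an IdempotencyKey header (e.g. a GUID they generate per logical operation) and can safely retry the request after a timeout or network failure.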
Configuration Options (see the GitHub repository for more details) Logging Level configuration. 🔧 Caching Implementation based on your needs. 🏠 DistributedCache: A build-in caching based on the standard IDistributedCache interface. 🦥 FusionCache: A high-performance and robust cache with an optional distributed 2nd layer and advanced features. … or you could use your implementation 😉 Improvement Details Improving Concurrent Requests Handling The standard IDistributedCache interface doesn’t support a command to GetOrSet a cached value with atomicity. However, it defines the Get and Set methods. In our previous implementation, we used these two methods without locking (i.e. without grouping into a single logical operation). As a result, we had an issue with concurrent requests with the same idempotency key. The problem was that the controller action could be executed multiple times. As we can observe in Figure 1, this issue happens when the API Server receives a second request (with the same idempotency key) before we flag the first idempotency key as Inflight (i.e., execution in progress). Thus, racing conditions occur when setting idempotency key as Inflight. Figure 1. – The issue of executing the controller more than once when concurrent requests with the same idempotency key are performed. To overcome this issue, we defined the IIdempotencyCache interface and implemented the GetOrSet method, which performs a lock (locally) for each idempotency key (see code below). In Figure 2, we can see how we used the GetOrSet method to execute the controller action only once on concurrent requests with the same idempotency key. The idea is to use GetOrSet method to set an Inflight object with a dynamic unique id per request when the Get method returns Null (it doesn’t have a value) as a single logical operation. The second call of the GetOrSet will wait for the first call to complete. 
Thus, only the execution that reads back its own unique id can continue with the execution of the controller action.

Figure 2. – Concurrent requests with the same idempotency key execute the controller action only once.

public byte[] GetOrSet(
    string key,
    byte[] defaultValue,
    object? options = null,
    CancellationToken token = default)
{
    if (key is null)
    {
        throw new ArgumentNullException(nameof(key));
    }

    if (options is not null && options is not DistributedCacheEntryOptions)
    {
        // The options argument is present but has the wrong type.
        throw new ArgumentException(
            $"Expected {nameof(DistributedCacheEntryOptions)}.", nameof(options));
    }

    using (var valuelocker = new ValueLocker(key))
    {
        byte[] cachedData = _distributedCache.Get(key);
        if (cachedData is null)
        {
            _distributedCache.Set(key, defaultValue, (DistributedCacheEntryOptions?)options);
            return defaultValue;
        }
        else
        {
            return cachedData;
        }
    }
}

Caching as an Implementation Detail

To overcome the issue of concurrent requests with the same idempotency key, we defined the IIdempotencyCache interface and implemented the GetOrSet method. This is implemented in our built-in DistributedCache caching project, which is based on the standard IDistributedCache interface. Our implementation provides basic caching functionality. However, by defining the IIdempotencyCache interface, the IdempotentAPI logic becomes independent of the caching implementation. Thus, we can support other caching implementations with advanced features, such as FusionCache.

FusionCache is a high-performance and robust caching .NET library with an optional distributed 2nd layer and advanced features, such as a fail-safe mechanism, cache stampede prevention, fine-grained soft/hard timeouts with background factory completion, extensive customizable logging, and more.

BinaryFormatter is Obsolete

The BinaryFormatter serialization methods became obsolete in ASP.NET Core 5.0. In the IdempotentAPI project, BinaryFormatter was used in the Utils.cs class for serialization and deserialization.
As a result, our library did not work in .NET 5.0 and later versions unless we enabled the BinaryFormatterSerialization option in the .csproj file. The recommended action, based on the .NET documentation, is to stop using BinaryFormatter and use a JSON or XML serializer instead. In our case, we used the Newtonsoft JsonSerializer, which can include type information when serializing JSON and read that type information when deserializing, recreating the target object with the original types. In the following JSON example, we can see how the type information is included in the data.

{
  "$type": "System.Collections.Generic.Dictionary`2[[System.String, System.Private.CoreLib],[System.Object, System.Private.CoreLib]], System.Private.CoreLib",
  "Request.Method": "POST",
  "Response.StatusCode": 200,
  "Response.Headers": {
    "$type": "System.Collections.Generic.Dictionary`2[[System.String, System.Private.CoreLib],[System.Collections.Generic.List`1[[System.String, System.Private.CoreLib]], System.Private.CoreLib]], System.Private.CoreLib",
    "myHeader1": {
      "$type": "System.Collections.Generic.List`1[[System.String, System.Private.CoreLib]], System.Private.CoreLib",
      "$values": [ "value1-1", "value1-2" ]
    },
    "myHeader2": {
      "$type": "System.Collections.Generic.List`1[[System.String, System.Private.CoreLib]], System.Private.CoreLib",
      "$values": [ "value2-1", "value2-1" ]
    }
  }
}

Quick Start

Step 1: Register the Caching Storage

Storing (caching) data is necessary for idempotency. Therefore, the IdempotentAPI library needs an implementation of IIdempotencyCache to be registered in the Program.cs or Startup.cs file, depending on the style used (.NET 6.0 or older). IIdempotencyCache defines the caching storage service for the idempotency needs. Currently, we support the following two implementations (see the table below). However, you can use your own implementation 😉.
Both implementations support IDistributedCache either as the primary caching storage (registration required) or as a secondary one (registration optional). Thus, we can define our caching storage service through IDistributedCache, such as in-memory, SQL Server, Redis, NCache, etc. See the Distributed caching in ASP.NET Core article for more details about the available framework-provided implementations.

IdempotentAPI.Cache implementations:
- DistributedCache (Default): supports concurrent requests ✔; primary cache: IDistributedCache; 2nd-level cache: ❌; advanced features: ❌.
- FusionCache: supports concurrent requests ✔; primary cache: memory cache; 2nd-level cache: ✔ (IDistributedCache); advanced features: ✔.

Choice 1 (Default): IdempotentAPI.Cache.DistributedCache

Install the IdempotentAPI.Cache.DistributedCache via the NuGet UI or the NuGet package manager console.

// Register an implementation of the IDistributedCache.
// For this example, we are using a Memory Cache.
services.AddDistributedMemoryCache();

// Register the IdempotentAPI.Cache.DistributedCache.
services.AddIdempotentAPIUsingDistributedCache();

Choice 2: IdempotentAPI.Cache.FusionCache

Install the IdempotentAPI.Cache.FusionCache via the NuGet UI or the NuGet package manager console. To use the advanced FusionCache features (2nd-level cache, fail-safe, soft/hard timeouts, etc.), configure the FusionCacheEntryOptions based on your needs (for more details, visit the FusionCache repository).

// Register the IdempotentAPI.Cache.FusionCache.
// Optionally: Configure the FusionCacheEntryOptions.
services.AddIdempotentAPIUsingFusionCache();

Tip: To use the 2nd-level cache, we should register an implementation of IDistributedCache and register the FusionCache serialization (NewtonsoftJson or SystemTextJson). For example, check the following code:

// Register an implementation of the IDistributedCache.
// For this example, we are using Redis.
services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "YOUR CONNECTION STRING HERE, FOR EXAMPLE: localhost:6379";
});

// Register the FusionCache Serialization (e.g. NewtonsoftJson).
// This is needed for the 2nd-level cache.
services.AddFusionCacheNewtonsoftJsonSerializer();

// Register the IdempotentAPI.Cache.FusionCache.
// Optionally: Configure the FusionCacheEntryOptions.
services.AddIdempotentAPIUsingFusionCache();

The Idempotent attribute can be applied either on a controller’s class or on each action separately. The following two sections describe these two cases. First, however, we should define the Consumes and Produces attributes on the controller in both cases.

using IdempotentAPI.Filters;

Using the Idempotent Attribute on a Controller’s Class

By using the Idempotent attribute on the API controller’s class, all POST and PATCH actions will work as idempotent operations (requiring the IdempotencyKey header).

[ApiController]
[Route("[controller]")]
[Consumes("application/json")] // We should define this.
[Produces("application/json")] // We should define this.
[Idempotent()]
public class ExampleController : ControllerBase // (controller name illustrative)
{
    [HttpPost]
    public IActionResult Post([FromBody] SimpleRequest simpleRequest)
    {
        // ...
    }
}

NuGet Packages

- IdempotentAPI: The implementation of the IdempotentAPI library.
- IdempotentAPI.Cache: Defines the caching abstraction (IIdempotencyCache) on which IdempotentAPI is based.
- IdempotentAPI.Cache.DistributedCache: The default caching implementation, based on the standard IDistributedCache interface.
- IdempotentAPI.Cache.FusionCache: Supports caching via the FusionCache third-party library.

Summary

The IdempotentAPI 1.0.0-RC-01 is available with many improvements 🎉✨. With the community’s help, several issues and improvements have been identified and implemented. I want to take this opportunity to thank @apchenjun, @fjsosa, @lvzhuye, @RichardGreen-IS2, and @william-keller for your support, ideas, and time to improve this library. Any help in coding, suggestions, questions, giving a GitHub star, etc., is welcome 😉.
If you are using this library, don’t hesitate to contact me. I would be happy to know your use case 😁.
https://online-code-generator.com/net-idempotentapi-1-0-0-release-candidate/
CC-MAIN-2022-40
en
refinedweb
Advanced setup

The Installation guide describes the easiest ways to run CKEditor builds in your project and the Custom builds guide explains how to add or remove features from the build or change the webpack configuration. In this guide, we would like to show you ways to integrate CKEditor 5 with your application more closely. Thanks to that, you will be able to optimize the bundling process of your project and customize the builds in a more convenient way.

# Requirements

In order to start developing CKEditor 5 you will require:

- Node.js 12.0.0+
- npm 5.7.1+ (note: some npm 5+ versions were known to cause problems, especially with deduplicating packages; upgrade npm when in doubt)
- Git

# Bundler

CKEditor 5 is currently built using webpack@4. All builds, examples and demos are generated using this bundler. It should also be possible to build CKEditor using other bundlers (if they are configured properly), such as Rollup or Browserify, but these setups are not officially supported yet. Also, the @ckeditor/ckeditor5-dev-webpack-plugin that allows you to localize the editor is only available for webpack. More work on this subject will be done in the future. Therefore, a prerequisite to this guide is that you are using webpack as your build tool.

# Scenario 1: Integrating existing builds

This is the simplest scenario. It assumes that you want to use one of the existing builds “as-is” (you can, of course, still configure the rich text editor). It also gives the fastest build times.
First, install the build of your choice from npm:

npm install --save @ckeditor/ckeditor5-build-classic

Now, import the editor build into your code:

// Using ES6 imports:
import ClassicEditor from '@ckeditor/ckeditor5-build-classic';

// Or CJS imports:
const ClassicEditor = require( '@ckeditor/ckeditor5-build-classic' );

And use it:

ClassicEditor
    .create( document.querySelector( '#editor' ) )
    .then( editor => {
        console.log( editor );
    } )
    .catch( error => {
        console.error( error );
    } );

Since you are using an already built editor (so a result of passing CKEditor 5 source through webpack), you do not need any additional webpack configuration. In this case CKEditor works as a ready-to-use library.

# Scenario 2: Building from source

This scenario allows you to fully control the building process of CKEditor. This means that you will not actually use the builds anymore, but instead build CKEditor from source directly into your project. This integration method gives you full control over which features will be included and how webpack will be configured.

Similar results to what this method allows can be achieved by customizing an existing build and integrating your custom build like in scenario 1. This will give faster build times (since CKEditor will be built once and committed), however, it requires maintaining a separate repository and installing the code from that repository into your project (e.g. by publishing a new npm package or using tools like Lerna). This makes it less convenient than the method described in this scenario.

First of all, you need to install the source packages that you will use. If you base your integration on one of the existing builds, you can take them from that build’s package.json file (see e.g. the classic build’s package.json). At this moment you can choose the editor creator and the features you want. Copy these dependencies to your package.json and call npm install to install them.
The dependencies (or devDependencies) section of package.json should look more or less like this:

"dependencies": {
    // ...

    "@ckeditor/ckeditor5-adapter-ckfinder": "^x.y.z",
    "@ckeditor/ckeditor5-autoformat": "^x.y.z",
    "@ckeditor/ckeditor5-basic-styles": "^x.y.z",
    "@ckeditor/ckeditor5-block-quote": "^x.y.z",
    "@ckeditor/ckeditor5-easy-image": "^x.y.z",
    "@ckeditor/ckeditor5-editor-classic": "^x.y.z",
    "@ckeditor/ckeditor5-essentials": "^x.y.z",
    "@ckeditor/ckeditor5-heading": "^x.y.z",
    "@ckeditor/ckeditor5-image": "^x.y.z",
    "@ckeditor/ckeditor5-link": "^x.y.z",
    "@ckeditor/ckeditor5-list": "^x.y.z",
    "@ckeditor/ckeditor5-paragraph": "^x.y.z",
    "@ckeditor/ckeditor5-theme-lark": "^x.y.z",
    "@ckeditor/ckeditor5-upload": "^x.y.z"

    // ...
}

The second step is to install the dependencies needed to build the editor. The list may differ if you want to customize the webpack configuration, but this is a typical setup:

npm install --save \
    @ckeditor/ckeditor5-dev-webpack-plugin \
    @ckeditor/ckeditor5-dev-utils \
    postcss-loader@3 \
    raw-loader@3 \
    style-loader@1 \
    webpack@4 \
    webpack-cli@3

# Webpack configuration

You can now configure webpack. There are a couple of things that you need to take care of when building CKEditor 5:

- Handling the CSS files of the CKEditor theme. They are included in the CKEditor 5 sources using import 'path/to/styles.css' statements, so you need proper loaders.
- Similarly, you need to handle bundling the SVG icons, which are also imported directly into the source. For that you need the raw-loader.
- Finally, to localize the editor you need to use the @ckeditor/ckeditor5-dev-webpack-plugin webpack plugin.

The minimal configuration, assuming that you use the same methods of handling assets as CKEditor 5 builds, will look like this:

const CKEditorWebpackPlugin = require( '@ckeditor/ckeditor5-dev-webpack-plugin' );
const { styles } = require( '@ckeditor/ckeditor5-dev-utils' );

module.exports = {
    plugins: [
        // ...
        new CKEditorWebpackPlugin( {
            // See the editor localization guide.
            language: 'pl'
        } )
    ],

    module: {
        rules: [
            {
                test: /ckeditor5-[^/\\]+[/\\]theme[/\\]icons[/\\][^/\\]+\.svg$/,
                use: [ 'raw-loader' ]
            },
            {
                test: /ckeditor5-[^/\\]+[/\\]theme[/\\].+\.css$/,
                use: [
                    'style-loader',
                    {
                        loader: 'postcss-loader',
                        options: styles.getPostCssConfig( {
                            themeImporter: {
                                themePath: require.resolve( '@ckeditor/ckeditor5-theme-lark' )
                            },
                            minify: true
                        } )
                    }
                ]
            }
        ]
    }
};

# Webpack Encore

If you use Webpack Encore, you can use the following configuration:

const CKEditorWebpackPlugin = require( '@ckeditor/ckeditor5-dev-webpack-plugin' );
const { styles } = require( '@ckeditor/ckeditor5-dev-utils' );

Encore.
    // ... your configuration ...

    .addPlugin(
        new CKEditorWebpackPlugin( {
            // See the editor localization guide.
            language: 'pl'
        } )
    )

    // Use raw-loader for CKEditor 5 SVG files.
    .addRule( {
        test: /ckeditor5-[^/\\]+[/\\]theme[/\\]icons[/\\][^/\\]+\.svg$/,
        loader: 'raw-loader'
    } )

    // Configure other image loaders to exclude CKEditor 5 SVG files.
    .configureLoaderRule( 'images', loader => {
        loader.exclude = /ckeditor5-[^/\\]+[/\\]theme[/\\]icons[/\\][^/\\]+\.svg$/;
    } )

    // Configure the PostCSS loader.
    .addLoader( {
        test: /ckeditor5-[^/\\]+[/\\]theme[/\\].+\.css$/,
        loader: 'postcss-loader',
        options: styles.getPostCssConfig( {
            themeImporter: {
                themePath: require.resolve( '@ckeditor/ckeditor5-theme-lark' )
            }
        } )
    } );

# Running the editor – method 1

You can now import all the needed plugins and the creator directly into your code and use it there. The easiest way to do so is to copy it from the src/ckeditor.js file available in every build repository.
import ClassicEditorBase from '@ckeditor/ckeditor5-editor-classic/src/classiceditor';
// ... plus the imports of the plugins listed in builtinPlugins below.

export default class ClassicEditor extends ClassicEditorBase {}

ClassicEditor.builtinPlugins = [
    EssentialsPlugin,
    UploadAdapterPlugin,
    AutoformatPlugin,
    BoldPlugin,
    ItalicPlugin,
    BlockQuotePlugin,
    EasyImagePlugin,
    HeadingPlugin,
    ImagePlugin,
    ImageCaptionPlugin,
    ImageStylePlugin,
    ImageToolbarPlugin,
    ImageUploadPlugin,
    LinkPlugin,
    ListPlugin,
    ParagraphPlugin
];

ClassicEditor.defaultConfig = {
    toolbar: {
        items: [
            'heading', '|',
            'bold', 'italic', 'link',
            'bulletedList', 'numberedList',
            'imageUpload', 'blockQuote',
            'undo', 'redo'
        ]
    },
    image: {
        toolbar: [ 'imageStyle:full', 'imageStyle:side', '|', 'imageTextAlternative' ]
    },
    language: 'en'
};

This module will export an editor creator class which has all the plugins and configuration that you need already built in. To use such an editor, simply import that class and call the static .create() method like in all the examples.

import ClassicEditor from './ckeditor';

ClassicEditor
    // Note that you do not have to specify the plugin and toolbar configuration — using defaults from the build.
    .create( document.querySelector( '#editor' ) )
    .then( editor => {
        console.log( 'Editor was initialized', editor );
    } )
    .catch( error => {
        console.error( error.stack );
    } );

# Running the editor – method 2

The second variant of running the editor is to use the creator class directly, without creating an intermediary subclass. The above code would translate to:

import ClassicEditor from '@ckeditor/ckeditor5-editor-classic/src/classiceditor';
// ... plus the plugin imports.

ClassicEditor
    .create( document.querySelector( '#editor' ), {
        // The plugins are now passed directly to .create().
        plugins: [
            EssentialsPlugin,
            AutoformatPlugin,
            BoldPlugin,
            ItalicPlugin,
            BlockQuotePlugin,
            HeadingPlugin,
            ImagePlugin,
            ImageCaptionPlugin,
            ImageStylePlugin,
            ImageToolbarPlugin,
            EasyImagePlugin,
            ImageUploadPlugin,
            LinkPlugin,
            ListPlugin,
            ParagraphPlugin,
            UploadAdapterPlugin
        ],

        // So is the rest of the default configuration.
        toolbar: [
            'heading',
            'bold', 'italic', 'link',
            'bulletedList', 'numberedList',
            'imageUpload', 'blockQuote',
            'undo', 'redo'
        ],
        image: {
            toolbar: [ 'imageStyle:full', 'imageStyle:side', '|', 'imageTextAlternative' ]
        }
    } )
    .then( editor => {
        console.log( editor );
    } )
    .catch( error => {
        console.error( error );
    } );

# Building

Finally, you can build your application. Run webpack on your project and the rich text editor will be a part of it.

# Option: Minifying JavaScript

Webpack 4 introduced the concept of modes. It comes with two predefined modes: development and production. The latter automatically enables uglifyjs-webpack-plugin which takes care of JavaScript minification. Therefore, it is enough to execute webpack with the --mode production option or set mode: 'production' in your webpack.config.js to optimize the build.

Prior to version 1.2.7, uglifyjs-webpack-plugin had a bug which caused webpack to crash with the following error: "TypeError: Assignment to constant variable.". If you experience this error, make sure that your node_modules contains an up-to-date version of this package (and that webpack uses this version).

CKEditor 5 Builds use Terser instead of uglifyjs-webpack-plugin because the latter seems to be unmaintained.

# Option: Extracting CSS

One of the most common requirements is to extract the CKEditor 5 CSS to a separate file (by default it is included in the output JavaScript file). To do that, you can use mini-css-extract-plugin:

npm install --save \
    mini-css-extract-plugin \
    css-loader

And add it to your webpack configuration:

const MiniCssExtractPlugin = require( 'mini-css-extract-plugin' );

module.exports = {
    // ...

    plugins: [
        // ...
        new MiniCssExtractPlugin( {
            filename: 'styles.css'
        } )
    ],

    module: {
        rules: [
            {
                test: /ckeditor5-[^/\\]+[/\\]theme[/\\]icons[/\\][^/\\]+\.svg$/,
                use: [ 'raw-loader' ]
            },
            {
                test: /ckeditor5-[^/\\]+[/\\]theme[/\\].+\.css$/,
                use: [
                    MiniCssExtractPlugin.loader,
                    'css-loader',
                    {
                        loader: 'postcss-loader',
                        options: styles.getPostCssConfig( {
                            themeImporter: {
                                themePath: require.resolve( '@ckeditor/ckeditor5-theme-lark' )
                            },
                            minify: true
                        } )
                    }
                ]
            }
        ]
    }
};

Webpack will now create a separate file called styles.css which you will need to load manually into your HTML (using the <link rel="stylesheet"> tag).

# Option: Building to ES5 target

CKEditor 5 is written in ECMAScript 2015 (also called ES6). All browsers in which CKEditor 5 is currently supported have sufficient ES6 support to run CKEditor 5. Thanks to that, CKEditor 5 Builds are also published in the original ES6 format.

However, it may happen that your environment requires ES5. For instance, if you use tools like the original UglifyJS which do not support ES6+ yet, you may need to transpile the CKEditor 5 source to ES5. This will create ~80% bigger builds but will ensure that your environment can process CKEditor 5 code.

In the production mode webpack uses uglifyjs-webpack-plugin which supports ES6+ code. This is because it does not use the original UglifyJS plugin (which does not support ES6+), but instead it uses the uglify-es package. We recommend upgrading your setup to webpack@4 and its built-in modes, which allows you to avoid transpiling the source to ES5.

In order to create an ES5 build of CKEditor 5 you can use Babel:

npm install --save babel-loader @babel/core @babel/preset-env regenerator-runtime

Then, add this item to the webpack module.rules section:

module: {
    rules: [
        {
            test: /ckeditor5-[^\/\\]+[\/\\].+\.js$/,
            use: [
                {
                    loader: 'babel-loader',
                    options: {
                        presets: [ require( '@babel/preset-env' ) ]
                    }
                }
            ]
        },
        // ...
    ]
}

And load regenerator-runtime (needed to make ES6 generators work after transpilation) by adding it as the first entry point:

entry: [
    require.resolve( 'regenerator-runtime/runtime.js' ),

    // Your entries...
]

This setup ensures that the source code is transpiled to ES5. However, it does not ensure that all ES6 polyfills are loaded. Therefore, if you would like to, for example, give bringing IE11 compatibility a try, make sure to also load babel-polyfill.

The babel-preset-env package lets you choose the environment that you want to support and transpiles ES6+ features to match that environment’s capabilities. Without configuration it will produce ES5 builds.

# Scenario 3: Using two different editors

The ability to use two or more types of rich text editors on one page is a common requirement. For instance, you may want to use the classic editor next to a couple of inline editors.

Do not load two builds on one page. This is a mistake which leads to:

- Code duplication. Both builds share up to 99% of the code, including CSS and SVGs. By loading them twice you make your page unnecessarily heavy.
- Duplicated CSS may lead to conflicts and, thus, a broken UI of the editors.
- The translation repository gets duplicated entries which may cause loading incorrect strings with translations.

# Solutions

If you want to load two different editors on one page you need to make sure that they are built together (once). This can be achieved in at least two ways:

- Integrating CKEditor 5 from source directly into your application. Since you build your application once, the editors that you use will be built together, too.
- Creating a “super build” of CKEditor 5. Instead of creating a build which exports just one editor, you can create a build which exports two or more at the same time.

# Creating “super builds”

There is no limit on how many editor classes a single build can export. By default, the official builds export a single editor class only. However, they can easily import more.
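The export pattern behind such a super build can be sketched in plain JavaScript (the editor classes below are empty stand-ins, not the real @ckeditor/ckeditor5-editor-* classes):

```javascript
// Stand-ins for the real editor base classes.
class ClassicEditor {}
class InlineEditor {}

// One shared plugin list and configuration, assigned to both classes so
// that both constructors behave the same way in the super build.
const plugins = [ /* EssentialsPlugin, ParagraphPlugin, ... */ ];
const config = { language: 'en' };

ClassicEditor.builtinPlugins = plugins;
InlineEditor.builtinPlugins = plugins;
ClassicEditor.defaultConfig = config;
InlineEditor.defaultConfig = config;

// The single object that webpack will expose as the global variable.
const CKEDITOR = { ClassicEditor, InlineEditor };

console.log( Object.keys( CKEDITOR ) ); // [ 'ClassicEditor', 'InlineEditor' ]
```

Because both classes come out of the same bundle, the shared code (plugins, CSS, SVGs, translations) exists only once on the page.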
You can start by forking (or copying) an existing build as described in the “Creating custom builds” guide. Let’s say you forked and cloned the ckeditor5 repository and want to add InlineEditor to the classic build:

git clone -b stable git@github.com:<your-username>/ckeditor5.git
cd ckeditor5/packages/ckeditor5-build-classic
npm install

Now it is time to add the missing editor package and install it:

npm install --save-dev @ckeditor/ckeditor5-editor-inline

Once all the dependencies are installed, modify webpack’s entry point, which is the src/ckeditor.js file. For now it was exporting just a single class:

// The editor creator to use.
import ClassicEditorBase from '@ckeditor/ckeditor5-editor-classic/src/classiceditor';

// ...

export default class ClassicEditor extends ClassicEditorBase {}

// Plugins to include in the build.
ClassicEditor.builtinPlugins = [
    // ...
];

// Editor configuration.
ClassicEditor.defaultConfig = {
    // ...
};

Let’s make it export an object with two classes: ClassicEditor and InlineEditor. To make both constructors work in the same way (load the same plugins and default configuration) you also need to assign the builtinPlugins and defaultConfig static properties to both of them:

// The editor creators to use.
import ClassicEditorBase from '@ckeditor/ckeditor5-editor-classic/src/classiceditor';
import InlineEditorBase from '@ckeditor/ckeditor5-editor-inline/src/inlineeditor';

// ...

class ClassicEditor extends ClassicEditorBase {}
class InlineEditor extends InlineEditorBase {}

// Plugins to include in the build.
const plugins = [
    // ...
];

ClassicEditor.builtinPlugins = plugins;
InlineEditor.builtinPlugins = plugins;

// Editor configuration.
const config = {
    // ...
};

ClassicEditor.defaultConfig = config;
InlineEditor.defaultConfig = config;

export default {
    ClassicEditor,
    InlineEditor
};

Since you now export an object with two properties (ClassicEditor and InlineEditor), it is also reasonable to rename the global variable to which webpack will assign this object. So far it was called ClassicEditor. A more adequate name now would be, for example, CKEDITOR. This variable is defined in webpack.config.js in the output.library setting:

diff --git a/webpack.config.js b/webpack.config.js
index c57e371..04fc9fe 100644
--- a/webpack.config.js
+++ b/webpack.config.js
@@ -21,7 +21,7 @@ module.exports = {
 	output: {
 		// The name under which the editor will be exported.
-		library: 'ClassicEditor',
+		library: 'CKEDITOR',

 		path: path.resolve( __dirname, 'build' ),
 		filename: 'ckeditor.js',

Once you have changed the src/ckeditor.js and webpack.config.js files, it is time to rebuild the build:

yarn run build

Finally, when webpack finishes compiling your super build, you can change the samples/index.html file to test both editors:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>CKEditor 5 – super build</title>
    <style>
        body {
            max-width: 800px;
            margin: 20px auto;
        }
    </style>
</head>
<body>

<h1>CKEditor 5 – super build</h1>

<div id="classic-editor">
    <h2>Sample</h2>
    <p>This is an instance of the <a href="">classic editor build</a>.</p>
</div>

<div id="inline-editor">
    <h2>Sample</h2>
    <p>This is an instance of the <a href="">inline editor build</a>.</p>
</div>

<script src="../build/ckeditor.js"></script>
<script>
    CKEDITOR.ClassicEditor
        .create( document.querySelector( '#classic-editor' ) )
        .catch( err => {
            console.error( err.stack );
        } );

    CKEDITOR.InlineEditor
        .create( document.querySelector( '#inline-editor' ) )
        .catch( err => {
            console.error( err.stack );
        } );
</script>

</body>
</html>
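As a final standalone sanity check (not part of the original guide), the loader test expressions used throughout this setup can be exercised in plain Node to confirm they match only CKEditor 5 theme files; the file paths below are typical examples, not files this guide creates:

```javascript
// The same regular expressions used in the webpack rules for CKEditor 5
// SVG icons and theme CSS.
const svgRule = /ckeditor5-[^/\\]+[/\\]theme[/\\]icons[/\\][^/\\]+\.svg$/;
const cssRule = /ckeditor5-[^/\\]+[/\\]theme[/\\].+\.css$/;

// A CKEditor icon, a CKEditor theme stylesheet, and an application asset
// that must NOT be handled by these rules.
const iconPath = 'node_modules/@ckeditor/ckeditor5-basic-styles/theme/icons/bold.svg';
const themeCssPath = 'node_modules/@ckeditor/ckeditor5-theme-lark/theme/theme.css';
const appSvgPath = 'src/assets/logo.svg';

console.log( svgRule.test( iconPath ) );     // true
console.log( cssRule.test( themeCssPath ) ); // true
console.log( svgRule.test( appSvgPath ) );   // false
```

This is the reason the Encore configuration above also excludes the same pattern from the generic image loader: without the exclusion, application image loaders and raw-loader would both try to process the editor icons.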
https://ckeditor.com/docs/ckeditor5/22.0.0/builds/guides/integration/advanced-setup.html
CC-MAIN-2020-40
en
refinedweb
In this tutorial you will learn how to use the as3flickrlib library to create a Flash Flickr photo viewer. Step 1: The as3flickrlib There are many libraries available for Flex developers that interface with Flickr. The as3flickrlib library was created by Adobe and is the library that we'll use to create this photo viewing application. You'll need to download a copy of the as3flickrlib code for yourself, as well as the as3corelib library (as3flickrlib depends on as3corelib). Both can be obtained from here. Step 2: TweenMax You'll also need the TweenMax library. TweenMax is a tweening library, which allows us to easily change the properties of an object over time. You can get TweenMax here. Step 3: New Project Create a new Flex web project and add the three libraries mentioned above to the Source Path of the application. Step 4: Wrapper Class This application works by taking the images loaded from Flickr and adding them to the main Application object (i.e. the object that is created by the MXML file). When you load an image off the web it is returned to you as a Bitmap. While the Bitmap class extends the DisplayObject class (which is what the addChild function requires), Flex will only allow those classes that extend the UIComponent class to be added as a child of the main Application object, and the Bitmap does not extend the UIComponent. The compiler will not flag adding a Bitmap to the Application object via the addChild function as an error, but you will get an exception at run time. Still, it would be nice to be able to add the Bitmap objects as children of the Application object. We need to create a small wrapper class that does extend the UIComponent class (so it can be added to the Application), but also adds a Bitmap as a child of itself. That wrapper class is called DisplayObjectUIComponent. 
package
{
    import flash.display.DisplayObject;

    import mx.core.UIComponent;

    public class DisplayObjectUIComponent extends UIComponent
    {
        public function DisplayObjectUIComponent( displayObject:DisplayObject )
        {
            super();

            explicitHeight = displayObject.height;
            explicitWidth = displayObject.width;

            addChild( displayObject );
        }
    }
}

Step 5: New MXML File

Now we need to create the MXML file.

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
    layout="absolute"
    backgroundGradientAlphas="[1.0, 1.0]"
    backgroundGradientColors="[#000000, #5B5B5B]"
    creationComplete="onComplete()">
    ...
</mx:Application>

This is the shell of the MXML file. Most of the code is the same as the empty template that is created when you open a new Flex application in Flex Builder. In addition, we've specified the background colors (with the backgroundGradientAlphas and backgroundGradientColors attributes) and set the onComplete function to be called when the Application object has created itself (with the creationComplete attribute).

Step 6: mx:Script Tag

The code that will do the job of downloading and displaying the Flickr images needs to be contained in an mx:Script tag. The <![CDATA[ ... ]]> tag just allows us to write code without having to worry about special characters like greater-than and less-than (< and >) being interpreted as part of the XML document.

<mx:Script>
    <![CDATA[
        ...
    ]]>
</mx:Script>

Step 7: Import Classes

We need to import some classes for use within our application.

import mx.collections.ArrayCollection;
import mx.controls.Alert;

import com.adobe.webapis.flickr.*;
import com.adobe.webapis.flickr.events.*;

import gs.TweenMax;
import gs.easing.*;

Step 8: Define Constants

Next we need to define some constants that will control how our application works.
private static const SEARCH_STRING:String = "sunset";
private static const MAX_RESULTS:int = 50;
private static const API_KEY:String = "your key goes here";
private static const TRANSITION_TIME:Number = 1;
private static const DISPLAY_TIME:Number = 3;

- SEARCH_STRING defines the query that will be sent to Flickr. In essence, we'll be querying Flickr for images much like you would query Google for web pages. We have set the query to "sunset" here, but this string could be anything, like "kittens", "mountains", "cars" etc.
- MAX_RESULTS defines how many images Flickr will return once it has been queried.
- API_KEY is your own Flickr API key, which you can apply for here.
- TRANSITION_TIME defines how quickly the images will fade into each other, in seconds. Here we've set the transition to take 1 second.
- DISPLAY_TIME defines how long each image will be displayed before the next image is loaded. Here we've set each image to be displayed for 3 seconds.

Step 9: Define Variables

We need to define a few variables for our application.

private var photos:ArrayCollection = null;
private var currentImage:int = 0;
private var displayImage:Bitmap = null;
private var backgroundImage:Bitmap = null;

- The photos variable is a collection of the photo definitions sent back by Flickr. It's important to note that Flickr does not actually send back the photos themselves, but only the information needed to find the URL of each photo, which then has to be downloaded separately.
- The currentImage variable maintains an index into the photos collection, so we know which photo needs to be displayed next.
- The displayImage and backgroundImage variables are references to the Bitmap objects that are created by loading the Flickr images.

Step 10: Policy Files

By default a Flash application can only load resources from its own domain.
In order to load resources from another domain (like Flickr), the owner of that domain needs to provide a policy file, usually called crossdomain.xml, that lets the Flash runtime know that it's OK to load their resources. This policy file needs to be loaded before any attempts are made to load the resources. Flickr hosts its images on a number of servers, so here we load the policy file of each of these servers. If you don't perform this step you'll get an exception when trying to load images off these domains.

Security.loadPolicyFile("");
Security.loadPolicyFile("");
Security.loadPolicyFile("");
Security.loadPolicyFile("");

Step 11: onComplete Function

When the Flex application has finished creating itself, the onComplete function will be called (this is what we specified in Step 5). The onComplete function is the entry point of the application.

private function onComplete():void
{
    var service:FlickrService = new FlickrService(API_KEY);
    service.addEventListener(FlickrResultEvent.PHOTOS_SEARCH, onPhotosSearch);
    service.photos.search("", SEARCH_STRING, "any", "", null, null, null, null, -1, "", MAX_RESULTS, 1);
}

The first thing we need to do is create a new instance of the FlickrService class. The FlickrService object is our gateway to Flickr, and we use it to submit our search for sunset images. You need to supply the Flickr API key (from Step 8) to the FlickrService constructor.

var service:FlickrService = new FlickrService(API_KEY);

Next we attach a function to the FlickrResultEvent.PHOTOS_SEARCH event. This function will be called when Flickr has returned some information about a search. Here we attach the onPhotosSearch function.

service.addEventListener(FlickrResultEvent.PHOTOS_SEARCH, onPhotosSearch);

Finally we perform the actual search itself. The search function has a lot of parameters that can be used to narrow a search down to a specific user, date, title and more.
We're only interested in finding photos with the tag sunset, and so supply either a null, empty string or -1 for these other parameters.

service.photos.search("", SEARCH_STRING, "any", "", null, null, null, null, -1, "", MAX_RESULTS, 1);

Step 12: onPhotosSearch Function

The onPhotosSearch function is called when Flickr has returned some information about our search.

private function onPhotosSearch(event:FlickrResultEvent):void
{
    if (event.success)
    {
        var photoList:PagedPhotoList = event.data.photos;
        photos = new ArrayCollection(photoList.photos);
        loadNextImage();
    }
    else
    {
        Alert.show("Flickr call failed. Did you update the API Key?");
    }
}

We first need to determine if the call to Flickr was successful. This is done by checking the event.success flag. If this is true, Flickr has successfully returned some information about the photos we queried it for. If event.success is false then the call failed. This usually happens because the API key that was supplied was incorrect.

if (event.success) { ... } else { ... }

If the call was successful we need to get access to the collection of photo data that was returned.

var photoList:PagedPhotoList = event.data.photos;

The PagedPhotoList then contains the details of the photos themselves, which we save in the photos collection.

photos = new ArrayCollection(photoList.photos);

At this point the photos collection contains a list of photo details which can then be used to load the actual photo images. From here on we'll just be downloading images, from the URLs we construct using the information in the photos collection, without any more special calls to the Flickr API. To start the photo album, we need to call the loadNextImage function.

loadNextImage();

If there was a problem calling Flickr the user is notified with an Alert window.

Alert.show("Flickr call failed.
Did you update the API Key?");

Step 13: loadNextImage Function

Now that we have the details of the photos that relate to our search, we need to actually download the images so they can be displayed. This is done by the loadNextImage function.

private function loadNextImage():void
{
    var imageURL:String = '' + photos[currentImage].server + '/' + photos[currentImage].id + '_' + photos[currentImage].secret + '_m.jpg';
    ++currentImage;
    currentImage %= photos.length;
    var request:URLRequest = new URLRequest(imageURL);
    var loader:Loader = new Loader();
    loader.contentLoaderInfo.addEventListener(Event.COMPLETE, switchImages);
    loader.load(request);
}

Remember that I said that the call to Flickr does not actually return the images themselves? What it does return is the information needed to construct the URL that we can use to download each image. By using the server, id and secret information of the photos we can create the full URL of the image. Each image is available in a number of resolutions. We pick which size of image we are downloading by the suffix of the URL. The _m suffix indicates that we are downloading a medium sized version of the image. Other suffixes can be found here, which allow you to download more or less detailed versions of the images.

var imageURL:String = '' + photos[currentImage].server + '/' + photos[currentImage].id + '_' + photos[currentImage].secret + '_m.jpg';

Now that we've constructed the image URL, we increment the currentImage variable so the next time loadNextImage is called we'll pull down the next image in the search list.

++currentImage;
currentImage %= photos.length;

Next we have to actually load the image. We create a new URLRequest object (supplying the URL that we created above to the constructor), a new Loader object, and attach the switchImages function to the Loader's Event.COMPLETE event.
var request:URLRequest = new URLRequest(imageURL);
var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, switchImages);

Finally, we load the image from Flickr by calling the Loader's load function.

loader.load(request);

Step 14: switchImages Function

The switchImages function is called when we've loaded a new image from Flickr.

private function switchImages(event:Event):void
{
    displayImage = event.currentTarget.content;
    displayImage.smoothing = true;
    displayImage.width = this.width;
    displayImage.height = this.height;
    displayImage.alpha = 0;
    this.addChild(new DisplayObjectUIComponent(displayImage));
    TweenMax.to(displayImage, TRANSITION_TIME, {alpha:1, ease:Linear, onComplete:imageTweenComplete});
    if (backgroundImage != null)
        TweenMax.to(backgroundImage, TRANSITION_TIME, {alpha:0, ease:Linear});
}

The Bitmap object that is returned by the loading process is saved in the displayImage variable.

displayImage = event.currentTarget.content;

This new Bitmap is then initialized so that it is smoothed (to help with the pixelization that can occur when you scale up small images), resized to fill the window, and made completely transparent by setting its alpha to 0.

displayImage.smoothing = true;
displayImage.width = this.width;
displayImage.height = this.height;
displayImage.alpha = 0;

We then add the Bitmap to the Application via a new instance of the DisplayObjectUIComponent class that we described in Step 4.

this.addChild(new DisplayObjectUIComponent(displayImage));

At this point we have the new image added as a child of the Application object. It isn't visible though, because we've set its alpha to 0. What we want to do is fade this new image into view by increasing its alpha value, while at the same time fading out the last image by decreasing its alpha value. This is where the TweenMax library comes in. We make a call to the TweenMax.to function, and TweenMax then takes care of modifying the alpha values for us.
By setting the onComplete parameter to imageTweenComplete we schedule the imageTweenComplete function to be called once this tweening operation is complete. We do need to check if the backgroundImage variable is null, because when the first image is loaded there is no existing backgroundImage that it is displayed on top of.

TweenMax.to(displayImage, TRANSITION_TIME, {alpha:1, ease:Linear, onComplete:imageTweenComplete});
if (backgroundImage != null)
    TweenMax.to(backgroundImage, TRANSITION_TIME, {alpha:0, ease:Linear});

Step 15: imageTweenComplete Function

The imageTweenComplete function is called when a newly loaded image has been faded into view by TweenMax.

private function imageTweenComplete():void
{
    if (backgroundImage != null)
        this.removeChild(backgroundImage.parent);
    backgroundImage = displayImage;
    displayImage = null;
    TweenMax.delayedCall(DISPLAY_TIME, loadNextImage);
}

Once the displayImage has been faded in, the backgroundImage is removed from the application and the displayImage becomes the backgroundImage. The displayImage is then set to null.

if (backgroundImage != null)
    this.removeChild(backgroundImage.parent);
backgroundImage = displayImage;
displayImage = null;

We then use TweenMax to schedule a call to the loadNextImage function. This starts the cycle of loading a new image and fading it in again.

TweenMax.delayedCall(DISPLAY_TIME, loadNextImage);

Conclusion

Using Flickr with Flash does require a few steps, but once you get your head around the Flickr API, finding the Flickr image URLs and loading the images from Flickr (taking the Flash security restrictions into consideration), it's quite easy to use these images to create an appealing photo album. This particular example could be used to add an animated photo album to a web page, and by changing the SEARCH_STRING variable you can display different types of images.
You could even pass FlashVars to the Flash application to determine which images are displayed without having to recompile the application. You could also modify the service.photos.search function to return only your own photos, or those that you have tagged specifically. Thanks for reading.
https://code.tutsplus.com/tutorials/build-a-photo-viewer-using-flex-and-the-flickr-api--active-1918
Given an array, determine whether it can be sorted by swapping two elements or by reversing one contiguous subsegment; if both operations would work, choose swap. Link

Complexity: time complexity is O(N), space complexity is O(1).

Execution: I wrote this code 4 years ago and I already have no clue what it does. It is linear time and it works even today. Today, I would have broken this logic down into smaller functions with clear purpose. If you have solved this in a readable way, please share!

Solution (Python 2):

def getReverseAction(arr):
    is_sorted = True
    low_idx = 0
    high_idx = len(arr)-1
    while (low_idx < high_idx and arr[low_idx] < arr[low_idx+1]):
        low_idx += 1
    if low_idx == high_idx:
        print "yes"
        return
    while (high_idx > 0 and arr[high_idx] > arr[high_idx-1]):
        high_idx -= 1
    #print "low", low_idx, arr[low_idx]
    #print "high", high_idx, arr[high_idx]
    if low_idx == 0 or arr[high_idx] > arr[low_idx-1]:
        #print "high index swapable"
        if arr[high_idx] < arr[low_idx+1] or low_idx+1 == high_idx:
            #print "high index swapable"
            if high_idx == len(arr)-1 or arr[low_idx] < arr[high_idx+1]:
                #print "low index swapable"
                if arr[low_idx] > arr[high_idx-1] or low_idx == high_idx-1:
                    #print "low index swapable"
                    low_idx_runner = low_idx+1
                    while (low_idx_runner < high_idx and arr[low_idx_runner] < arr[low_idx_runner+1]):
                        low_idx_runner += 1
                    if low_idx_runner == high_idx-1 or low_idx == high_idx-1:
                        print "yes"
                        print "swap", low_idx+1, high_idx+1
                        return
    low_idx_runner = low_idx+1
    while (low_idx_runner < high_idx and arr[low_idx_runner] > arr[low_idx_runner+1]):
        low_idx_runner += 1
    if low_idx_runner == high_idx:
        if low_idx == 0 or arr[high_idx] > arr[low_idx-1]:
            if high_idx == len(arr)-1 or arr[low_idx] < arr[high_idx+1]:
                print "yes"
                print "reverse", low_idx+1, high_idx+1
                return
    print "no"

if __name__ == '__main__':
    n = input()
    arr = map(int, raw_input().split())
    getReverseAction(arr)
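In the spirit of the request above, here is one possible readable rewrite in Python 3 (my own sketch, not taken from the original post). It reports the same "yes"/"swap"/"reverse"/"no" answers by locating the out-of-order prefix and suffix, then testing a single swap before a single reversal:

```python
def almost_sorted(arr):
    """Readable sketch of the Almost Sorted check; returns the output lines."""
    n = len(arr)
    target = sorted(arr)
    # i is the leftmost index that breaks ascending order, j the rightmost.
    i = 0
    while i < n - 1 and arr[i] < arr[i + 1]:
        i += 1
    if i == n - 1:
        return ["yes"]  # already sorted
    j = n - 1
    while j > 0 and arr[j] > arr[j - 1]:
        j -= 1
    # Try swapping the two offending elements first (the required tie-break).
    swapped = arr[:i] + [arr[j]] + arr[i + 1:j] + [arr[i]] + arr[j + 1:]
    if swapped == target:
        return ["yes", "swap %d %d" % (i + 1, j + 1)]
    # Otherwise try reversing the offending segment.
    reversed_segment = arr[:i] + arr[i:j + 1][::-1] + arr[j + 1:]
    if reversed_segment == target:
        return ["yes", "reverse %d %d" % (i + 1, j + 1)]
    return ["no"]
```

Note that this sketch trades the original's O(1) extra space for clarity: it copies the array, so it uses O(N) extra space.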
https://nerdprogrammer.com/hackerrank-almost-sorted-solution/
Prometheus binary format metrics data structures for Python client libraries

Project description

prometheus_metrics_proto

The prometheus_metrics_proto package provides Prometheus Protobuf data structures and a set of helper functions to assist generating Prometheus Protocol Buffer format metrics and serializing them in preparation for network transfer. The collection of metrics and the management of Summary Quantiles and Histogram Buckets are outside the scope of functionality provided by this package. An example of a project using prometheus_metrics_proto is aioprometheus, which uses it within the BinaryFormatter. The Protocol Buffer specification used by prometheus_metrics_proto was obtained from the Prometheus client model repo.

Install

$ pip install prometheus_metrics_proto

Example

The prometheus_metrics_proto package provides helper functions to assist with generating Prometheus metrics objects. The example below shows how these functions can be used to construct metrics and encode them into a format suitable to send to Prometheus server in a response.

#!/usr/bin/env python
"""
This script demonstrates the high level helper functions used to assist
creating various metrics kinds as well as how to encode the metrics into
a form that can be sent to Prometheus server.
"""
import prometheus_metrics_proto as pmp


def main():
    # Define some labels that we want added to all metrics. These labels are
    # independent of the instance labels that define a metric as unique.
    # These could be used to add hostname, app name, etc.
    const_labels = {"host": "examplehost", "app": "my_app"}

    # Create a Counter metric to track logged in users. This counter has
    # 5 separate instances.
    # We'll make use of the optional const_labels argument to add extra
    # constant labels.
    # We will also add a timestamp to the metric instances.
    # We will request that the labels be sorted.
    cm = pmp.create_counter(
        "logged_users_total",
        "Logged users in the application.",
        (
            ({"country": "sp", "device": "desktop"}, 520),
            ({"country": "us", "device": "mobile"}, 654),
            ({"country": "uk", "device": "desktop"}, 1001),
            ({"country": "de", "device": "desktop"}, 995),
            ({"country": "zh", "device": "desktop"}, 520),
        ),
        timestamp=True,
        const_labels=const_labels,
        ordered=True,
    )

    # Create a Gauge metric, similar to the counter above.
    gm = pmp.create_gauge(
        "logged_users_total",
        "Logged users in the application.",
        (
            ({"country": "sp", "device": "desktop"}, 520),
            ({"country": "us", "device": "mobile"}, 654),
            ({"country": "uk", "device": "desktop"}, 1001),
            ({"country": "de", "device": "desktop"}, 995),
            ({"country": "zh", "device": "desktop"}, 520),
        ),
        timestamp=True,
        const_labels=const_labels,
        ordered=True,
    )

    # Now let's create a Summary and Histogram metric object. These forms
    # of metrics are slightly more complicated.
    #
    # Remember, the collection of metrics and the management of Summary
    # Quantiles and Histogram Buckets are outside the scope of
    # functionality provided by this package.
    #
    # The following examples assume they are taking the data values from
    # a management library that can also emit the sum and count fields
    # expected for both Summary and Histogram metrics.

    # Create a Summary metric. The values for a summary are slightly
    # different to a Counter or Gauge. They are composed of a dict
    # representing the various quantile values of the metric. The count
    # and sum are expected to be present in this dict.
    sm = pmp.create_summary(
        "request_payload_size_bytes",
        "Request payload size in bytes.",
        (
            ({"route": "/"}, {0.5: 4.0, 0.9: 5.2, 0.99: 5.2, "sum": 25.2, "count": 4}),
            (
                {"route": "/data"},
                {0.5: 4.0, 0.9: 5.2, 0.99: 5.2, "sum": 25.2, "count": 4},
            ),
        ),
        timestamp=True,
        const_labels=const_labels,
        ordered=True,
    )

    # Create a Histogram metric. The values for a histogram are slightly
    # different to a Counter or Gauge. They are composed of a dict
    # representing the various bucket values of the metric. The cumulative
    # count and sum values are expected to be present in this dict.
    #
    # Libraries managing buckets typically add a POS_INF upper bound to
    # catch values beyond the largest bucket bound. Simulate this behavior
    # in the data below.
    POS_INF = float("inf")

    hm = pmp.create_histogram(
        "request_latency_seconds",
        "Request latency in seconds.",
        (
            (
                {"route": "/"},
                {5.0: 3, 10.0: 2, 15.0: 1, POS_INF: 0, "count": 6, "sum": 46.0},
            ),
            (
                {"route": "/data"},
                {5.0: 3, 10.0: 2, 15.0: 1, POS_INF: 0, "count": 6, "sum": 46.0},
            ),
        ),
        timestamp=True,
        const_labels=const_labels,
        ordered=True,
    )

    # Serialize a sequence of metrics into a payload suitable for network
    # transmission.
    input_metrics = (cm, gm, sm, hm)
    payload = pmp.encode(*input_metrics)
    assert isinstance(payload, bytes)

    # De-serialize the payload into a sequence of MetricFamily objects.
    recovered_metrics = pmp.decode(payload)

    # Confirm that the round trip re-produced the same number of metrics
    # and that the metrics are identical.
    assert len(recovered_metrics) == len(input_metrics)
    for recovered_metric, input_metric in zip(recovered_metrics, input_metrics):
        assert recovered_metric == input_metric

    for metric in input_metrics:
        print(metric)


if __name__ == "__main__":
    main()

If you simply want to access the Prometheus Protocol Buffer objects directly and generate instances yourself, simply import them from the package as follows:

from prometheus_metrics_proto import (
    COUNTER,
    GAUGE,
    SUMMARY,
    HISTOGRAM,
    Bucket,
    Counter,
    Gauge,
    Histogram,
    LabelPair,
    Metric,
    MetricFamily,
    Summary,
    Quantile)

License

This project is released under the MIT license.

Background

Creating metrics that can be ingested by Prometheus is relatively simple, but does require knowledge of how they are composed. The Prometheus server expects to ingest MetricFamily objects when it scrapes an endpoint exposing Protocol Buffer format data.
A MetricFamily object is a container that holds the metric name, a help string and Metric objects. Each MetricFamily within the same exposition must have a unique name. A Metric object is a container for a single instance of a specific metric type. Valid metric types are Counter, Gauge, Histogram and Summary. Each Metric within the same MetricFamily must have a unique set of LabelPair fields. This is commonly referred to as multi-dimensional metrics.

Development

Get the source

$ git clone git@github.com:claws/prometheus_metrics_proto.git
$ cd prometheus_metrics_proto

Setup

The best way to work on prometheus_metrics_proto is to create a virtual env. This isolates your work from other projects' dependencies and ensures that any commands are pointing at the correct tools. You may need to explicitly specify which Python to use if you have multiple Pythons available on your system (e.g. python3, python3.8).

$ python3 -m venv venv --prompt pmp
$ source venv/bin/activate
(pmp) $
(pmp) $ pip install pip --upgrade

The following steps assume you are operating in a virtual environment. To exit the virtual environment simply type deactivate.

Install Development Environment

Rules in the convenience Makefile depend on the development dependencies being installed. Install the development dependencies using pip. Then install the prometheus_metrics_proto package (and its normal dependencies) in a way that allows you to edit the code after it is installed so that any changes take effect immediately.

(pmp) $ pip install -r requirements.dev.txt
(pmp) $ pip install -e .

Familiarise yourself with the convenience Makefile rules by running make without any rule specified.

$ make

Code Style

This project uses the Black code style formatter for consistent code style. A Makefile convenience rule is available to apply code style compliance.

(pmp) $ make style

Test

The easiest method to run all of the unit tests is to run the make test rule from the top level directory.
This runs the standard library unittest tool, which discovers all the unit tests and runs them.

(pmp) $ make test

or

(pmp) $ make test-verbose

Coverage

A Makefile convenience rule is available to check how much of the code is covered by tests.

(pmp) $ make coverage

The test code coverage report can be found in htmlcov/index.html.

Regenerate

The project has placed the code stub (prometheus_metrics_pb2.py), generated by the Google Protocol Buffers code generation tool, under source control. If this file needs to be regenerated in the future use the following procedure:

(pmp) $ make regenerate

Release Process

The following steps are used to make a new software release:

Ensure that the version label in __init__.py is updated.

Create the distribution. This project produces an artefact called a pure Python wheel. Only Python3 is supported by this package.

(pmp) $ make dist

Test the distribution. This involves creating a virtual environment, installing the distribution in it and running the tests. These steps have been captured for convenience in a Makefile rule.

(pmp) $ make dist-test

Upload to PyPI.

(pmp) $ make dist-upload

Create and push a repo tag to Github.

git tag YY.MM.MICRO -m "A meaningful release tag comment"
git tag  # check release tag is in list
git push --tags origin master

Github will create a release tarball at: {username}/{repo}/tarball/{tag}.tar.gz
https://pypi.org/project/prometheus-metrics-proto/
I'm trying to hook BIOS Int 13h to add my custom functionality to it and hijack some of the existing one. The old Int 13h vector is stored in a global variable. When the interrupt handler is called, DS is set to some value that doesn't match the original data segment of the caller, so accessing the caller's global variables turns into a headache. What is the best practice for chaining interrupt handlers? The hook is installed this way:

#ifdef __cplusplus
# define INTARGS ...
#else
# define INTARGS unsigned bp, unsigned di, unsigned si,\
                 unsigned ds, unsigned es, unsigned dx,\
                 unsigned cx, unsigned bx, unsigned ax
#endif

void interrupt (far *hackInt13h)(INTARGS) = NULL;
void interrupt (far *biosInt13h)(INTARGS) = (void interrupt (far *)(INTARGS))0xDEADBEEF;

void main(void)
{
    struct REGPACK reg;

    biosInt13h = getvect(0x13);
    hackInt13h = int13h;
    setvect(0x13, hackInt13h);

    // Calling CAFE
    reg.r_ax = 0xCAFE;
    intr(0x13, &reg);
    printf("Cafe returned: 0x%04x\n", reg.r_ax);

    // Resetting FDD just to check interrupt handler chaining
    reg.r_ax = 0;
    reg.r_dx = 0;
    intr(0x13, &reg);
    printf("CF=%i\n", reg.r_flags & 0x01);

    setvect(0x13, biosInt13h);
}

Int 13h hook code:

P286
.MODEL TINY

_Data SEGMENT PUBLIC 'DATA'
    EXTRN _biosInt13h:FAR
_Data ENDS

_Text SEGMENT PUBLIC 'CODE'
    PUBLIC _int13h
_int13h PROC FAR
    pusha
    cmp AX, 0CAFEh
    jnz chain
    popa
    mov AX, 0BEEFh
    iret
chain:
    popa
    call far ptr [_biosInt13h] ; <-- at this moment DS points to outer space
                               ; and _biosInt13h is not valid
_int13h ENDP
_Text ENDS
END

I'm using Borland C++, if it matters.

Thanks guys, I've found the solution! The first thing I'd missed is moving the variable to the code segment and explicitly specifying that segment. The second is using a hacked (pushed on the stack) return address and retf instead of call, which would push the real return address on the stack. No need to pushf explicitly 'cause the flags are already on the stack after int, and the flags will be popped on iret no matter whether it happens in my handler or in the chained one.
P286
.MODEL TINY

_Text SEGMENT PUBLIC 'CODE'
    EXTRN _biosInt13h:FAR ; This should be in CODE 'cause CS is the only reliable segreg
    PUBLIC _int13h
_int13h PROC FAR
    pusha
    cmp AX, 0CAFEh
    jnz chain
    popa
    mov AX, 0BEEFh
    iret
chain:
    popa
    push word ptr cs:[_biosInt13h + 2] ; Pushing chained handler SEG on stack
    push word ptr cs:[_biosInt13h]     ; Pushing chained handler OFFSET on stack
    retf ; ...actually this is a JMP FAR to the address on the stack
_int13h ENDP
_Text ENDS
END

User contributions licensed under CC BY-SA 3.0
https://windows-hexerror.linestarve.com/q/so63763426-Interrupt-handler-chaining-in-real-mode
Since making the jump to developing all of our new production mobile/web apps with Bullet Train for feature flags, one of the latest benefits I've come to realise is being able to toggle your app to simulate tedious and complicated scenarios.

The problem: wasting time replicating app scenarios

As a developer, I have often spent hours replicating issues or developing a new feature that requires the app to be in a certain state. For example, improving onboarding would require me to sign up to my site with around a million Mailinator emails. This pain point is perhaps the main reason we also employ end-to-end automated tests on the frontend using Nightwatch JS.

The solution: develop your app "simulation first"

Developing new projects with feature flags in mind changed the way I thought about implementing new features. Now, whenever a new feature gets developed I have to make the application flexible enough to behave well with and without it. With Bullet Train I then have a really easy way to simulate that feature being on and off, or to change its settings. This idea resulted in me questioning: what if I could toggle scenarios rather than just features? Now, alongside a feature, I create simulation flags that, when enabled, fabricate data and conditions that force my application into a certain state. Perhaps the biggest gain I saw from this recently was in a client-facing meeting: I was able to reel off a number of edge case scenarios and show exactly how the app would react. Previously this process would have been a lot less fluid; being able to quickly demonstrate each scenario meant that the client was able to immediately see and give feedback without losing their train of thought.

A practical example

Rather than waffle on at a high level about how much I enjoy the idea, here's an end-to-end example of how I added the ability to simulate loads of data in one of our more recent projects. This helps us test both UI performance and how the UI handles wrapping onto new lines.
This GIF shows me changing the value of a "data_multiplier" feature remotely; then, when I open my app, it acts as if the API gave me x times the number of items. Here's how I achieved this:

Step 1 - Initialise bullet-train

I created a Bullet Train project, then copied and pasted the JavaScript snippet.

npm i bullet-train-client --save; // or react-native-bullet-train in my case

import bulletTrain from 'bullet-train-client'; // or react-native-bullet-train in my case

bulletTrain.init({
    environmentID: Project.bulletTrain,
    onChange: controller.loaded
});

Step 2 - Create the simulation feature

In my project, I created a feature called "data_multiplier" and set its initial value to 0. Whenever I get a list of projects from the API in my mobile application, I check for this flag, and if it has a value I just duplicate the data.

if (bulletTrain.getValue("data_multiplier")) {
    _.map(_.times(bulletTrain.getValue("data_multiplier") - 1), (i) => {
        projects = projects.concat(res.map((item) => {
            return Object.assign({}, item, {
                id: item.id + "" + i,
                key: item.key + i,
                name: item.name + " Copy " + i
            })
        }));
    });
}

This will obviously depend on what frameworks you're using, but I've found it's best to implement the idea at the point where the app receives data.

Use cases

Developing this way predominantly saves time in testing and future development by making your applications really flexible. In my opinion, this should be implemented in any scenario that you find cumbersome to replicate, so it will obviously depend on the business logic of your application. Having said that, here are a few common use cases I've started to simulate in new projects:

- A "new user" on login, to always show onboarding
- The browser or device being offline
- Device support (e.g. no GPS)
- Enabling / disabling ads

Let me know if you employ a similar approach on your projects and how useful you've found it. Happy Developing!

Posted by: Kyle Johnson. I drink coffee and make things.
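The data-multiplier trick described above is framework-agnostic. As a minimal illustration, here is the same pattern sketched in Python; the FLAGS dict is a hypothetical in-memory stand-in for a remote flag service like Bullet Train, not its real API:

```python
# Hypothetical in-memory flag store; in a real app these values would be
# fetched from a remote feature-flag service.
FLAGS = {"data_multiplier": 3}

def get_value(flag_name, default=0):
    return FLAGS.get(flag_name, default)

def apply_data_multiplier(projects):
    """Duplicate API results so the app behaves as if it got N times the data."""
    multiplier = get_value("data_multiplier")
    if not multiplier:
        return projects  # simulation off: pass the real data through untouched
    out = list(projects)
    for i in range(multiplier - 1):
        # Give each synthetic copy a distinct id and a visibly longer name,
        # which also exercises UI wrapping.
        out.extend(
            dict(item, id="%s-%d" % (item["id"], i), name="%s Copy %d" % (item["name"], i))
            for item in projects
        )
    return out
```

As in the article, the duplication happens at the point where the app receives data, so the rest of the UI code never knows the extra items are synthetic.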
Discussion

Ah, that reminds me, I should write about the tech and product benefits of having feature flags and A/B tests. The downside is complexity, a lot of it. Each new flag adds exponential complexity across all related non-exclusive features (e.g. one button, 2 positions, 2 colors). So do not abuse this function; clear out unused flags as fast as possible.

Awesome, I'll be sure to read it if you post it on here! Yeah, totally agree with this, I would only use the config side (colours etc.) on something meaningful. Also, I tend to slowly remove flags as they become more stable/mature. Having stagnant flags is definitely something to avoid.

I kinda already talked about them, but only from the A/B tests perspective. There are a lot of other good usages for flags, like soft rollouts/releases, dual writes and so on.

Yeah totally, I was genuinely surprised how using them totally changed my dev experience at times, especially being able to do smaller releases.

What are your thoughts on Launch Darkly? I thought they were the leader in this space, and they aren't mentioned at all in this article.

Hey, the post was mainly to discuss the idea of using feature flags for simulating scenarios rather than going off on too much of a tangent comparing services. I've evaluated a few in depth (Airship HQ was another big one); the problem I have is that a lot of them are clearly targeted towards enterprise clients rather than solo developers/startups. Also, I feel a trick was missed with a few of them: they offer flags and multivariant features, but they don't have a simple remote config flag like Firebase's Remote Config. Unfortunately, Firebase Remote Config doesn't work for web, which I highlighted here: github.com/firebase/firebase-js-sd...
https://practicaldev-herokuapp-com.global.ssl.fastly.net/kylessg/using-feature-flags-for-client-demos-and-simulating-complex-scenarios-1gih
Codes that run well on my local machine will not compile on your system. I have actually tried several problems, but ALL get Compile Error!!!!!!! My code is very simple, and I submit through the web page, not through email. Is there anyone who can explain why there are so many compile errors?

Too much on this online judge system

Post here if you don't find any other place for your post. But please, stay on-topic: algorithms, programming or something related to this web site and its services.
Moderator: Board moderators
6 posts • Page 1 of 1

Erik wrote: Hello, I receive emails in reply to every submission. If there was a compile error it tells you the error messages. Cu, Erik

The system claims that `fixed` is undeclared, which would not happen if it had successfully included <iomanip.h>:

#include<stdio.h>
#include<math.h>
#include<iomanip.h>
using namespace std;

int main(){
    double Ha,Hb,Hc;
    int wrong,i=0;
    cin>>wrong;
    while(i<wrong){
        cin>>Ha>>Hb>>Hc;
        if(1/Hb+1/Hc<=1/Ha || 1/Ha+1/Hc<=1/Hb || 1/Ha+1/Hb<=1/Hc)
            {cout<<"These are invalid inputs!"<<endl; i++;}
        else
            cout<<fixed<<setprecision(3)<<sqrt(1/((1/Hb+1/Hc+1/Ha)*(1/Hb+1/Hc-1/Ha)*(1/Ha+1/Hc-1/Hb)*(1/Ha+1/Hb-1/Hc)))<<endl;
    }
    return 0;
}

Here are the compiler error messages:

05465393_24.c: In function `int main()':
05465393_24.c:12: `fixed' undeclared (first use this function)
05465393_24.c:12: (Each undeclared identifier is reported only once
05465393_24.c:12: for each function it appears in.)

Erik wrote: Hi, take a look here: Cu, Erik

Thank you very much Erik, my first AC!!
https://onlinejudge.org/board/viewtopic.php?f=12&t=16097
Yet again, intern season is coming to a close, and so it's time to look back at what the interns have achieved in their short time with us. I'm always impressed by what our interns manage to squeeze into the summer, and this year is no different. There is, as you can imagine, a lot of ground to cover. With 45 interns between our NY, London and Hong Kong offices, there were a lot of exciting projects. Rather than trying to do anything even remotely exhaustive, I'm just going to summarize a handful of interesting projects, chosen to give a sense of the range of the work interns do. The first project is about low-level networking: building the bones of a user-level TCP/IP stack. The second is more of a Linux-oriented security project: building out support for talking to various kernel subsystems via netlink sockets, to help configuration and management of firewalls. And the last is a project that I mentored, which has to do with fixing some old design mistakes in Incr_dom, our framework for building efficient JavaScript web UIs in OCaml. (You should remember, every intern actually gets two projects, so this represents just half of what an intern might do here in a summer.)

Reimplementing TCP/IP

Trading demands a lot in performance terms from our networking gear and networking code. Much of this has to do with how quickly exchanges generate market data. The US equity markets alone can peak at roughly 5 million messages per second, and volumes on the options markets are even higher. For that reason, we end up using some pretty high-performance 10G (and 25G) network cards. But fast hardware isn't enough; it's hard to get really top-notch networking performance while going through the OS kernel. For that reason, several of these cards have user-space network stack implementations to go along with them. But these implementations are a mixed bag. They work well, but the subtle variations in behavior between vendors make it hard to build portable code.
And the need for these user-space layers to fit to traditional networking APIs means that it’s hard to get the maximum performance that is achievable by the hardware. For this reason, we’ve been finding ourselves spending more time writing directly to lower-level, frame-oriented APIs that are exported by these cards. That’s relatively straightforward for a stateless protocol like UDP, but TCP is a different beast. That’s where intern Sam Kim came in. He spent half the summer reading over a copy of TCP/IP Illustrated (volumes 1 and 2!), and building up a user-space TCP implementation in pure OCaml. He was able to leverage our existing APIs (and, critically, the testing framework we had in place for such protocols) to build up a new implementation of the protocol, optimized for our environment of fast local LANs. And he wrote a lot of tests, helping exercise many different aspects of the code. This is not a small amount of work. TCP is a complex protocol, and there’s a lot of details to learn, including connection setup, retransmission, and congestion control. One of the more exciting moments of this project was at the end, when, after doing all the testing, we connected Sam’s implementation to a real network card and ran it. After some small mistakes in wiring it up (not Sam’s mistakes, I should mention!) it worked without a hitch, and kept on working after he added a bunch of induced packet drops. Surely there’s more work to do on the implementation, but it’s an auspicious start. Talking to the Kernel via Netlink We have an in-house, Linux-based firewall solution called nap-enforcer, which relies on the built-in stateful firewall functionality in Linux’s netfilter subsystem. Part of this stateful firewall support is the ability to keep track of the protocol state of connections going through the firewall, i.e., connection tracking, or conntrack for short. Conntrack is necessary for the correct handling of stateful protocols, like FTP. 
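To give one small flavor of the details a TCP implementation has to get right, consider retransmission backoff. The sketch below (my own illustration, unrelated to the OCaml implementation described above) computes the classic doubling retransmission schedule in the style of RFC 6298:

```python
def retransmit_schedule(initial_rto, max_tries):
    """Times (seconds after the first send) at which an unacked segment is
    re-sent, doubling the retransmission timeout (RTO) after each try, in the
    style of RFC 6298's exponential backoff."""
    times = []
    elapsed, rto = 0.0, initial_rto
    for _ in range(max_tries):
        elapsed += rto
        times.append(elapsed)
        rto *= 2  # back off: each timeout doubles the wait for the next try
    return times
```

For example, with a 1-second initial RTO and three tries, retransmissions happen 1, 3 and 7 seconds after the first send. Real stacks also have to derive the initial RTO from measured round-trip times and reset the backoff once an acknowledgment arrives.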
When troubleshooting firewall issues, it's helpful to be able to inspect and modify the tables that carry this state. We also want to be able to subscribe to events from conntrack and generate log messages for interesting changes, like a connection being opened or closed. This functionality can be controlled via a netlink socket, a special kind of socket that enables message-oriented communication between userspace processes and the kernel.

Initially, we built nap-enforcer on top of the command-line conntrack utility. This worked well enough at first, but it doesn't work well for subscribing to streams of events, and conntrack itself has some issues: it's easy to crash, and it's inconsistent in its behavior, which makes it hard to use.

Cristian Banu's project was to fix this by writing an OCaml library that lets us talk directly to various kernel subsystems (primarily conntrack) over netlink sockets. This is trickier than it might seem. Some of these interfaces are rather poorly documented, and the existing C libraries don't always offer very convenient APIs, so a large part of the job was reading the Linux kernel code to understand what is really happening, and then figuring out a convenient and type-safe way to make this functionality available in OCaml.

The resulting library offers a generic and safe high-level interface to netlink sockets, plus some abstractions built on top for specific netlink-based protocols. One tricky corner of a high-level netlink API is providing a safe interface for constructing valid netlink messages without making assumptions about the higher-level protocol. Cristian's library wraps those computations in an Atkey-style indexed monad, which guarantees that the underlying C library (libmnl) is used safely and that the resulting message is valid at the generic netlink level.

Cristian also worked out a way to have repeatable automated tests for the netlink library under our build system, jenga.
This is a non-trivial problem, because most of these kernel APIs require root access and kernel modules that aren't loaded by default. His solution involves running tests in a network namespace with an owning user namespace that maps the unprivileged user running the test suite to the root user. This allows the test cases to use otherwise privileged network-related system calls, but only on the subset of network resources governed by the testing namespace.

The project is not yet finished, but the results are very promising, and we hope to move this to production over the next few months.

Streamlining Incr_dom

For a while now, we've been using a library we developed internally, called Incr_dom, for building web front-ends in OCaml. You can think of Incr_dom as a variation on React or the Elm Architecture, except with a different approach to performance. A key feature of React and Elm is that they let you express your UI via simple data-oriented models, plus simple functions that do things like compute the view you want to present, typically in the form of a so-called virtual DOM.

What Incr_dom adds to the mix is a lot of power to optimize the computations that need to be done when doing things like computing the view given the current value of the model. (Elm and React both have nice approaches to this as well, though they err on the side of an easier-to-use optimization framework that isn't as powerful.) This is important to us because of the nature of our business: trading applications often have complex, fast-changing models, and being able to render those efficiently is of central importance.

That's why Incr_dom is built on Incremental, a library whose entire purpose is optimization. Incremental is good at constructing, well, incremental computations, i.e., computations that only need to do a small amount of work when their inputs change in small ways.
The key is that Incremental lets you write your code so that it reads like a simple all-at-once computation, but executes like a hand-tuned incremental one. Incremental computations are very useful when constructing UIs in this style, since your data model doesn't typically change all at once. I've written more than a few blog posts about the basic ideas, and since then, we actually had some interns do much of the work of getting it up and running. But that initial design had some sharp edges that we didn't know how to fix. And that's where Jeanne Luning Prak's project this year came in.

The key problem with the original design was something called the "derived model". To understand where the derived model comes into play, you need to know a bit more about Incr_dom. An Incr_dom app needs to know how to do more than just render its model. Here's a simplified version of the interface that a simple Incr_dom app needs to satisfy, which shows a bit more of the necessary structure.

module type App = sig
  type model
  type action

  val view
    :  model Incr.t
    -> schedule:(action -> unit)
    -> Vdom.t Incr.t

  val apply_action : model -> action -> model
end

The view function is what we described above. It takes as its input an incremental model and returns an incremental virtual-dom tree. Note that it also takes a function argument, called schedule, whose purpose is to allow the virtual-dom to have embedded callbacks that can in turn trigger actions that update the model. This is essentially how you wire up a particular behavior to, say, a button click. Those actions are then applied to the model using the provided apply_action function.

This all works well enough for cases where the required optimization is fairly simple. But it has real limitations, because the apply_action function, unlike the view function, isn't incremental.
To see why this is important, imagine you have a web app that's rendering a bunch of data in a table, where that table is filtered and sorted inside of the browser. The filtering and sorting can be done incrementally in the view function, so that changing data can be handled gracefully. But ideally, you'd like the apply_action function to have access to some of the same data computed by view. In particular, if you define an action that moves you to the next row, the identity of that row depends on how the data has been sorted and filtered. And you don't want to recompute this data every time someone wants to move from one row to the next.

In the initial design, we came up with a somewhat inelegant solution, which was to add a new type, the derived model, which is computed incrementally and then shared between the view and apply_action functions. The resulting interface looks something like this:

module type App = sig
  type model
  type derived_model
  type action

  val derive : model Incr.t -> derived_model Incr.t

  val view
    :  model Incr.t
    -> derived_model Incr.t
    -> schedule:(action -> unit)
    -> Vdom.t Incr.t

  val apply_action : model -> derived_model -> action -> model
end

And this works. You can now structure your application so that the information that both the view and the action-application function need to know can be shared in this derived model. But while it works, it's awkward. Most applications don't need a derived model, but once any component needs to use it, every intermediate part of your application has to think about and handle the derived model as well.

I came into the summer with a plan for how to resolve this issue. On some level, what we really want is a compiler optimization.
Ideally, both view and apply_action would be incremental functions, say, with this signature:

module type App = sig
  type model
  type action

  val view
    :  model Incr.t
    -> schedule:(action -> unit)
    -> Vdom.t Incr.t

  val apply_action : model Incr.t -> action Incr.t -> model Incr.t
end

Then, both apply_action and view can independently compute what they need to know about the row structure, and do so incrementally. At that point there's only one problem left: these computations are incremental, but they're still being duplicated. But that's easy enough to fix, I thought: we can do some form of clever common-subexpression elimination. The basic idea was to cache some computations in such a way that when view and apply_action tried to compute the very same thing, they would end up with a single copy of the necessary computation graph, rather than two. This turned out to be complicated for a few reasons, one of them being the rather limited nature of JavaScript's support for weak references, which are needed to avoid memory leaks.

Luckily, Jeanne had a better idea. Rather than some excessively clever computation-sharing, we could just change the shape of the API. Instead of having separate functions for view and apply_action, we would have one function that computed both. To that end, she created a new type, a Component.t, which holds both the view and the apply_action logic. The type is roughly this:

module Component : sig
  type ('model, 'action) t =
    { view : Vdom.t
    ; apply_action : 'action -> 'model
    }
end

And now, the app interface looks like this:

module type App = sig
  type model
  type action

  val create
    :  model Incr.t
    -> schedule:(action -> unit)
    -> (model, action) Component.t Incr.t
end

Because create is a single function, it can, behind the scenes, structure the computation any way it wants, and so can share work between the computation of the view and the computation of the action-application function.
This turned out to be a really nice design win, totally eliminating the concept of the derived model and making the API a lot simpler to use. And she's gotten to see the full lifecycle of the project: figuring out how best to fix the API, implementing the change, testing it, documenting it, and figuring out how to smash the tree to upgrade everyone to the new world.

And actually, this is only about half of what Jeanne did in this half of the summer. Her other project was to write a syntax extension that creates a special kind of incremental pattern-match, which has applications for any use of Incremental, not just for UIs. That should maybe be the subject of another blog post.

Apply to be an intern!

I hope this gives you a sense of the nature and variety of the work that interns get to do, as well as a sense of the scope and independence that they get in choosing how to tackle these problems. If this sounds like a fun way to spend the summer, you should apply!

And in case you're wondering: no, you don't need to be a functional programming wizard, or to have ever programmed in OCaml, or to know anything about finance or trading, to be an intern. Most of our interns come in with none of that, and they still do great things!
https://blog.janestreet.com/what-the-interns-have-wrought-2018/
Get USB Drive Serial Number on Windows in C++

Getting the serial number of a USB device in Windows is a lot harder than it should be. (But it's a lot easier than getting the USB serial number on OS X!) It is relatively simple to get USB information if you have the device handle of the USB device itself. And you can get information about a mounted volume pretty easily. But matching up a mounted volume with a USB device is tricky and annoying.

There are many code examples on the net showing how to do it in C# or Visual Basic, and some showing how to do it through WMI, a Windows operating system service that is slow and unreliable. Here we'll do it directly in C++.

First, here is the code:

#include <WinIOCtl.h>
#include <api/usbioctl.h>
#include <Setupapi.h>

DEFINE_GUID( GUID_DEVINTERFACE_USB_DISK,
   0x53f56307L, 0xb6bf, 0x11d0, 0x94, 0xf2,
   0x00, 0xa0, 0xc9, 0x1e, 0xfb, 0x8b );

void getDeviceInfo( int vol )
{
   UsbDeviceInfo info;

   // get the device handle
   char devicePath[7] = "\\\\.\\@:";
   devicePath[4] = (char)( vol + 'A' );
   HANDLE deviceHandle = CreateFile(
      devicePath, 0, FILE_SHARE_READ | FILE_SHARE_WRITE,
      NULL, OPEN_EXISTING, 0, NULL );
   if ( deviceHandle == INVALID_HANDLE_VALUE )
      return;

   // to get the device number
   DWORD volumeDeviceNumber = getDeviceNumber( deviceHandle );
   CloseHandle( deviceHandle );

   // Get a device interface info set handle
   // for all devices attached to the system.
   HDEVINFO hDevInfo = SetupDiGetClassDevs(
      &GUID_DEVINTERFACE_USB_DISK, NULL, NULL,
      DIGCF_PRESENT | DIGCF_DEVICEINTERFACE );
   if ( hDevInfo == INVALID_HANDLE_VALUE )
      return;

   // Get a context structure for the device interface
   // of a device information set.
   BYTE Buf[1024];
   PSP_DEVICE_INTERFACE_DETAIL_DATA pspdidd =
      (PSP_DEVICE_INTERFACE_DETAIL_DATA)Buf;
   SP_DEVICE_INTERFACE_DATA spdid;
   SP_DEVINFO_DATA spdd;
   spdid.cbSize = sizeof( spdid );

   DWORD dwIndex = 0;
   while ( true )
   {
      if ( !SetupDiEnumDeviceInterfaces( hDevInfo, NULL,
              &GUID_DEVINTERFACE_USB_DISK, dwIndex, &spdid ))
         break;

      // Ask for the required buffer size, then fetch the interface
      // detail, which contains the device path.
      DWORD dwSize = 0;
      SetupDiGetDeviceInterfaceDetail( hDevInfo, &spdid, NULL,
                                       0, &dwSize, NULL );
      if (( dwSize != 0 ) && ( dwSize <= sizeof( Buf )))
      {
         pspdidd->cbSize = sizeof( *pspdidd );
         ZeroMemory( &spdd, sizeof( spdd ));
         spdd.cbSize = sizeof( spdd );
         if ( SetupDiGetDeviceInterfaceDetail( hDevInfo, &spdid,
                 pspdidd, dwSize, &dwSize, &spdd ))
         {
            // Open the device and compare its device number with
            // the one we got for the mounted volume.
            HANDLE hDrive = CreateFile(
               pspdidd->DevicePath, 0,
               FILE_SHARE_READ | FILE_SHARE_WRITE,
               NULL, OPEN_EXISTING, 0, NULL );
            if ( hDrive != INVALID_HANDLE_VALUE )
            {
               DWORD usbDeviceNumber = getDeviceNumber( hDrive );
               if ( usbDeviceNumber == volumeDeviceNumber )
               {
                  printf( "%s", pspdidd->DevicePath );
               }
               CloseHandle( hDrive );
            }
         }
      }
      dwIndex++;
   }
   SetupDiDestroyDeviceInfoList( hDevInfo );
   return;
}

You pass in the volume number. This is just the drive letter, represented as an integer: drive "A:" is zero. So the first thing we do is create a device path. For example, if the mounted volume you want the serial number for is "F:", you'd pass in 5, and construct the device path \\.\F:.

Next you get a device handle for that volume using CreateFile(). Originally this function was meant to create regular file system files, but today it can be used to open handles to devices of all kinds. Each device type is represented by a different device path.

Next, you get the device number. When a volume is mounted, it is associated with a device, and this function returns that device's number. (Why the OS doesn't just give you a device path here is a mystery.) The device numbers will be low, typically under 10. Don't be surprised. I was. You get the device number by calling DeviceIoControl() with the handle to your device:

DWORD getDeviceNumber( HANDLE deviceHandle )
{
   STORAGE_DEVICE_NUMBER sdn;
   sdn.DeviceNumber = -1;
   DWORD dwBytesReturned = 0;
   if ( !DeviceIoControl( deviceHandle,
           IOCTL_STORAGE_GET_DEVICE_NUMBER,
           NULL, 0, &sdn, sizeof( sdn ),
           &dwBytesReturned, NULL ))
   {
      // handle error, e.g. a bad handle
      return (DWORD)-1;
   }
   return sdn.DeviceNumber;
}

There is an old Windows API, in setupapi.h, that was designed to help write installers.
It has since been superseded by newer installation APIs, but you can still use it to enumerate devices. Basically, you pass in the GUID of the type of device interface you want; in this case we want to enumerate USB flash disks, and that GUID is defined at the top of the file. You set up the enumeration with SetupDiGetClassDevs(), iterate over the devices with SetupDiEnumDeviceInterfaces(), and close the iterator with SetupDiDestroyDeviceInfoList() when you are done. Then for each device you get the device name and number using SetupDiGetDeviceInterfaceDetail(), and match up the device number of each device with the one you got for the volume.

When you find a match, you have the device path for your actual USB flash drive. At that point you could start querying the device itself using functions like DeviceIoControl(), but in this case the information we want is coded right into the device path. Here is a typical device path for a USB flash disk:

\\?\usbstor#disk&ven_cbm&prod_flash_disk&rev_5.00#31120000dc0ce201&0#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}

The device path for a flash disk starts with "usbstor", the name of the Windows driver for USB mass storage. The vendor id, product id, product revision and serial number are all embedded in the path. The following regular expressions will extract this information from the device path:

ven_([^&#]+)      // vendor id
prod_([^&#]+)     // product id
rev_([^&#]+)      // revision id
&[^#]*#([^&#]+)   // serial number

Next, here is a method to recognize whether a volume is removable media (e.g.
like a USB or FireWire disk):

bool isRemovableMedia( int vol )
{
   char rootPath[5] = "@:\\";
   rootPath[0] = (char)( vol + 'A' );

   char szDosDeviceName[MAX_PATH];
   char dosDevicePath[3] = "@:";

   // get the drive type
   UINT DriveType = GetDriveType( rootPath );
   if ( DriveType != DRIVE_REMOVABLE )
      return false;

   dosDevicePath[0] = (char)( vol + 'A' );
   QueryDosDevice( dosDevicePath, szDosDeviceName, MAX_PATH );
   if ( strstr( szDosDeviceName, "\\Floppy" ) != NULL )
   {
      // it's a floppy
      return false;
   }
   return true;
}

20 Responses

Good article, thanks. Can you provide C++ code using regular expressions to extract the serial number?

I use my own regular expression class, and to work it requires a whole bunch of my supporting libraries, so it wouldn't make sense to post it here. The STL now supports regular expressions: try #include <regex> and the std::regex class. There is a tutorial on it here:

What if the USB device is not a drive? What if it's a mouse? Or is everything on USB handled as a "drive"? Without going into all the gory details, I have put some additional circuitry into an old USB mouse to monitor some external events by whether the computer detects the old mouse or not. Now, all I need is the software component, under Windows, that would monitor that old mouse. Of course, the thing is, I need to make sure the program monitors the CORRECT USB device. Another problem is that I am a Unix/Linux person (and could do this with a simple script under Linux), but this aspect of Windows is an alien environment for me. I have installed the free Visual C++ Express, and what you mentioned in your article sounds good, but I don't know if I'll be able to apply it to a mouse instead of a drive. What do you think?

Excellent post! How would I go about getting the serial number of a fixed disk drive? Thanks.

Great post! One question: where is getDeviceNumber defined? I can't get this to compile because of that function.

That is one of my own functions.
I'll update the article with the implementation for that. But it's pretty simple: you get it from a call to DeviceIoControl().

select DeviceID from Win32_USBHub where Name='USB Mass Storage Device'
I found that on my USB drives, this string includes a unique id for the drive.

How do I include the three header files (#include <WinIOCtl.h>, #include <api/usbioctl.h>, #include <Setupapi.h>)? I didn't find any source for these three files. Please help me…

Do you mean the three headers from the first code listing? Like "WinIOCtl.h"? These are standard Windows header files. You should have them if you install Microsoft's C++ compiler (Visual Studio). It comes with the Windows Platform SDK. These should be included with that.

#include <api/usbioctl.h> comes from the Windows Driver Kit, not from Visual Studio. Reshma should download the WDK from the link that you gave. The other two are in the SDK.

Although your GUID is correct, you don't have to define it yourself:

DEFINE_GUID( GUID_DEVINTERFACE_USB_DISK,
   0x53f56307L, 0xb6bf, 0x11d0, 0x94, 0xf2,
   0x00, 0xa0, 0xc9, 0x1e, 0xfb, 0x8b );

Instead, you can get it from the WDK. To use the WDK, this looks redundant but you have to do it this way:

#include "ap/\Ntddstor.h"
#include "initguid.h"
#include "api/Ntddstor.h"

UsbDeviceInfo info; isn't needed. In order to compile, just delete it ^_^

I will try to correct an editing error in my previous reply. If the owner edits my previous reply and deletes this one, that will be good.

#include "api/Ntddstor.h"
#include "initguid.h"
#include "api/Ntddstor.h"

(The redundancy is that Ntddstor.h has to be included twice. The pathname shouldn't be messed up like I did.)

Sorry, I have to edit this again.
After this:

#include "api/Ntddstor.h"
#include "initguid.h"
#include "api/Ntddstor.h"

use GUID_DEVINTERFACE_DISK instead of GUID_DEVINTERFACE_USB_DISK.

In the device info in your example, Windows has set the device's serial number to 31120000dc0ce201. I bet the device's actual serial number might be 31120000DC0CE201. Windows's case insensitivity is OK for Windows, but it might not be OK for some application that needs to know the real serial number. Actually, Windows did more than just be case insensitive. In filenames, Windows is case insensitive in matching names, but when writing names in the first place it preserves the case that the user used when creating the file. If you look in Windows Explorer or a dir command, You CAN Still See Names like tHis. But Windows gives us the device's serial number as if it were all lower case. Does anyone know how to find the real serial number?

Can this code work in a Windows 8.1 Metro Style app?

Could not get this to work. Do you have the source code? I get an LNK2019 error; maybe I failed to add some libs?

I was trying the same in Python. When I saw your post, I tried to compile it (I am new to all programming; I installed MinGW) with this command:

g++ findserial.cpp -o findserial.out

I get the following error:

findserial.cpp:2:26: fatal error: api/usbioctl.h: No such file or directory
compilation terminated.

Can anyone please help?

In case you need the serial number to be case sensitive, you can utilize the following code to extract it from the registry. The parameter usbDeviceId is expected to be in the format "USB\\VID_XXXX&PID_XXXX" and can be obtained using CM_Get_Device_ID.
Although my example uses Qt, it might help:

QString ExtractSerialNumberFromRegistry( const QString& usbDeviceId )
{
    QString serial;
    QStringList parts = usbDeviceId.split("\\");
    if (parts.size() == 3)
    {
        QString parentNode("SYSTEM\\CurrentControlSet\\Enum\\%1\\%2");
        HKEY hKey;
        long result = RegOpenKeyEx(
            HKEY_LOCAL_MACHINE,
            reinterpret_cast<LPCWSTR>(
                parentNode.arg(parts.at(0)).arg(parts.at(1)).utf16()),
            0, KEY_READ, &hKey);
        if (result == ERROR_SUCCESS)
        {
            wchar_t* subkey = new wchar_t[255];
            DWORD subkey_length = 255;
            DWORD counter = 0;
            while (result != ERROR_NO_MORE_ITEMS)
            {
                subkey_length = 255;
                result = RegEnumKeyEx(hKey, counter, subkey, &subkey_length,
                                      0, NULL, NULL, NULL);
                if (result == ERROR_SUCCESS)
                {
                    // find best match using the uppercase serial
                    // nested in the usb device id
                    const QString currSerial = QString::fromWCharArray(subkey);
                    if (parts.last().toUpper() == currSerial.toUpper())
                        serial = currSerial;
                }
                ++counter;
            }
            delete[] subkey;
            subkey = 0;
            RegCloseKey(hKey);
        }
    }
    return serial;
}

By the way, removing the "api/usbioctl.h" include and the line with "UsbDeviceInfo" seems to work fine also. That looks like a spurious dependency.
https://oroboro.com/usb-serial-number/
Serious software development calls for performance optimization. When you start optimizing application performance, you can't escape looking at profilers. Whether monitoring production servers or tracking the frequency and duration of method calls, profilers run the gamut. In this article, I'll cover the basics of using a Python profiler, breaking down the key concepts and introducing the various libraries and tools for each key concept in Python profiling.

First, I'll list each key concept in Python profiling. Then I'll break each key concept into three key parts:

- definition and explanation
- tools that work for generic Python applications
- application performance monitoring (APM) tools that fit

APM tools are ideal for profiling the entire life cycle of transactions for web applications. Most of the APM tools probably aren't written in Python, but they work well regardless of the language your web app is written in.

Before we begin, note that I'll focus only on Python 3 examples, because Python 2.7 is scheduled to be retired on January 1, 2020. The code examples in this post will therefore use python3 as the Python 3 executable. With that structure in mind, let's begin!

Tracing

Formally, tracing is a special use case of logging in order to record information about a program's execution. Because this use case is so similar to event logging, the differences between event logging and tracing aren't clear-cut. Event logging tends to be ideal for systems administrators, whereas software developers are more concerned with tracing to debug software programs. Here's a one-liner for thinking about tracing: it's when software developers use logging to record information about a software execution. In the open source Python standard library, the trace and faulthandler modules cover basic tracing.

Generic Python option: trace module

The Python docs for the trace module don't say much, but the Python Module of the Week (PyMOTW) has a succinct description that I like.
It says that trace will "follow Python statements as they are executed." The purpose of the trace module is to "monitor which statements and functions are executed as a program runs to produce coverage and call-graph information." I don't want to get into too many details with trace, so I'll use some of the excellent examples in PyMOTW and leave you to dive deeper if you want. Here is the sample code from PyMOTW (updated for Python 3), split across a main script and a recurse module:

# trace_example/main.py
from recurse import recurse

def main():
    print('This is the main program.')
    recurse(2)
    return

def not_called():
    print('This function is never called.')

if __name__ == '__main__':
    main()

# trace_example/recurse.py
def recurse(level):
    print('recurse(%s)' % level)
    if level:
        recurse(level - 1)
    return

You can do several things with trace:

- Produce a code coverage report to see which lines are run or skipped over (python3 -m trace --count trace_example/main.py).
- Report on the relationships between functions that call one another (python3 -m trace --listfuncs trace_example/main.py | grep -v importlib).
- Track which function is the caller (python3 -m trace --listfuncs --trackcalls trace_example/main.py | grep -v importlib).

You can dig into more of the details in the Python Module of the Week documentation.

Generic Python option: faulthandler module

By contrast, faulthandler has slightly better Python documentation. It states that its purpose is to dump Python tracebacks explicitly on a fault, after a timeout, or on a user signal. It also works well with other system fault handlers like Apport or the Windows fault handler. Both the faulthandler and trace modules provide basic tracing abilities and can help you debug your Python code. For more profiling statistics, see the next section. If you're a beginner to tracing, I recommend you start simple with trace.

Open source APM options

For APM options, there are tools like Jaeger and Zipkin. Although they're not written in Python, they work well for web and distributed applications.
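Stepping back to faulthandler for a moment, here's a minimal sketch of how it can be used. The on-demand dump needs a real file descriptor, so the example writes to a temporary file; the variable names are my own, not from the faulthandler documentation.

```python
import faulthandler
import tempfile

# Enable the handler so fatal errors (SIGSEGV, SIGFPE, ...) dump a
# Python traceback instead of the process dying silently.
faulthandler.enable()

# faulthandler can also dump the current traceback on demand.  It writes
# directly to a file descriptor, so StringIO won't do; use a real file.
with tempfile.TemporaryFile(mode="w+") as f:
    faulthandler.dump_traceback(file=f)
    f.seek(0)
    trace_text = f.read()

print(trace_text)
```

Running this prints a stack dump for each thread, which is the same output you'd get automatically on a crash once `faulthandler.enable()` has been called.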
Jaeger officially supports Python, is part of the Cloud Native Computing Foundation, and has more extensive deployment documentation. For these reasons, I recommend starting with Jaeger if you want to trace requests in a distributed web architecture. If it doesn't suit your tracing needs in a distributed system, then you can look at Zipkin.

What part of the code should I profile?

Now let's delve into profiling specifics. The term "profiling" is mainly used for performance testing, and the purpose of performance testing is to find bottlenecks by doing deep analysis. So you can use tracing tools to help you with profiling. Recall that tracing is when software developers log information about a software execution. Therefore, logging performance metrics is also a way to perform profiling analysis. But we're not restricted to tracing. As profiling gains mindshare in the mainstream, we now have tools that perform profiling directly.

Now the question is, what parts of the software do we profile (measure performance metrics for)? Typically, we profile:

- methods or functions (most common)
- lines (similar to method profiling, but line by line)
- memory (memory usage)

Before I go into each of these and provide the generic Python and APM options, let's explore what metrics to use for profiling and the profiling techniques themselves.

What metrics should I profile?

Speed (time)

Typically, one thing we want to measure when profiling is how much time is spent executing each method. When we use a method profiling tool like cProfile (which is available in the Python language), the timing metrics for methods can show you statistics such as the number of calls (shown as ncalls), total time spent in the function (tottime), time per call (tottime/ncalls, shown as percall), cumulative time spent in a function (cumtime), and cumulative time per call (the quotient of cumtime over the number of primitive calls, shown as percall after cumtime).
The specific timing metrics may vary from tool to tool, but generally, you can expect something similar to cProfile's choice of timing metrics in similar tools.

Calls (frequency)

Another metric to consider when profiling is the number of calls made on the method. If a method has an acceptable speed but is called so frequently that it becomes a huge time sink, you would want to know this from your profiler. For example, cProfile highlights the number of function calls and how many of those are native calls.

Method and line profiling

Most profiling tutorials will tell you how to track a method's timing metrics. That's also what I recommend you start with, especially if you're a beginner to profiling. Line profiling, as the name suggests, means profiling your Python code line by line. The most common metrics used for line profiling are timing metrics. Think of it as similar to method profiling, but more granular. If you're a beginner, start with profiling methods first. When you're comfortable with profiling methods and you need to profile lines, then feel free to proceed as such.

Generic Python option: cProfile and profile modules

Both cProfile and profile are modules available in the Python 3 language. The numbers produced by these modules can be formatted into reports via the pstats module. Here's an example of cProfile showing the numbers for a script:

import cProfile
import re
cProfile.run('re.compile("foo|bar")')

      197 function calls (192 primitive calls) in 0.002 seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1    0.000    0.000    0.001    0.001 <string>:1(<module>)
   ...

As you can see, the timing metrics covered under "Speed (time)" (such as ncalls and tottime) appear in this cProfile output as well. The profile module gives similar results with similar commands. Typically, you switch to profile if cProfile isn't available.

APM options

Most APM tools are pretty fully-fledged monitoring tools.
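The cProfile report shown above can also be collected and formatted programmatically with the pstats module, which is handy when you want to sort by a particular column or capture the report as a string. A sketch, using a deliberately slow recursive function of my own invention as the profiling target:

```python
import cProfile
import io
import pstats

def fib(n):
    # Deliberately slow recursive Fibonacci, so the profiler has
    # plenty of function calls to count.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

profiler = cProfile.Profile()
profiler.enable()
fib(18)
profiler.disable()

# Render the statistics into a string, sorted by cumulative time,
# showing only the top 5 entries.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats(pstats.SortKey.CUMULATIVE).print_stats(5)
report = stream.getvalue()
print(report)
```

The report contains the same ncalls/tottime/cumtime columns as the command-line output, with fib dominating the cumulative time as expected.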
They'll typically provide line and method profiling. Timing metrics are first-class citizens in these tools. I won't list the tools here, because almost all of them have these features.

Memory profiling

Another common component to profile is memory usage. The purpose is to find memory leaks and optimize the memory usage in your Python programs. In terms of generic Python options, the most recommended tools for memory profiling in Python 3 are the pympler and objgraph libraries.

Generic Python option: Pympler library

Pympler's documentation offers more details. You can use pympler to:

- determine how much memory specific Python objects consume,
- identify whether objects got leaked out of scope, and
- track the lifetime of objects of certain classes.

The documentation gives explicit examples. Here, I want to highlight the example I find most useful: tracking the lifetime of objects for classes. In that example, the total measured memory footprint is 1.42 MB, with 1,000 active nodes averaging 200B in size. There are many tutorials in the pympler documentation, including one to track memory usage in Django with the Django Debug Toolbar.

Generic Python option: objgraph library

According to its creator, objgraph's purpose is to help find memory leaks. As Marius Gedminas said, "The idea was to pick an object in memory that shouldn't be there and then see what references are keeping it alive." I'd say that Marius emphasized making the visualization better in objgraph than in other memory profiling tools. And that's its strength. Marius once demonstrated how objgraph helps find memory leaks, but I won't reproduce it here due to space constraints.

APM options

There's no APM tool that specializes in memory profiling.

Deterministic profiling versus statistical profiling

When we do profiling, we need to monitor the execution. That in itself may affect the underlying software being monitored.
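Before getting into that tradeoff, here is a small illustration of the memory-profiling idea discussed above. This sketch uses the standard library's tracemalloc module rather than pympler or objgraph, purely so it runs with no third-party dependencies; the allocation being measured is an arbitrary example of mine:

```python
import tracemalloc

# Start recording allocations made by the Python memory allocator.
tracemalloc.start()

# Allocate roughly a megabyte so there is something to measure.
blocks = [bytes(1000) for _ in range(1000)]

# A snapshot lets us attribute memory to the source lines that
# allocated it, much like pympler attributes it to classes.
snapshot = tracemalloc.take_snapshot()
top_stats = snapshot.statistics("lineno")
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print("current =", current, "bytes; peak =", peak, "bytes")
print(top_stats[0])
```

The same question pympler's ClassTracker answers ("which objects are holding my memory?") is answered here at the granularity of source lines instead of classes.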
Either we monitor all the function calls and exception events, or we use random sampling and deduce the numbers. The former is known as deterministic profiling, and the latter is statistical profiling. Of course, each method has its pros and cons. Deterministic profiling can be highly precise, but its extra overhead may affect its accuracy. Statistical profiling has less overhead in comparison, with the drawback being lower precision.

cProfile, which I covered earlier, uses deterministic profiling. Let's look at another open source Python profiler that uses statistical profiling: pyinstrument.

Generic Python profiler: pyinstrument

Pyinstrument differentiates itself from other typical profilers in two ways. First, it emphasizes that it uses statistical profiling instead of deterministic profiling. It argues that while deterministic profiling can give you more precision than statistical profiling, the extra precision requires more overhead. The extra overhead may affect the accuracy and lead to optimizing the wrong part of the program. Specifically, it states that using deterministic profiling means that "code that makes a lot of Python function calls invokes the profiler a lot, making it slower." This is how results get distorted and the wrong part of the program gets optimized.

Second, pyinstrument differentiates itself by offering "full-stack recording." Let's compare it with cProfile. cProfile typically measures a list of functions and then orders them by the time spent in each function. By contrast, pyinstrument is designed such that it will track, for example, the reason every single function gets called during a web request; hence, the full-stack recording feature. This makes pyinstrument ideal for popular Python web frameworks like Flask and Django.

And full-stack recording is exactly the last concept I'm going to cover.

Full-stack recording

Arguably, all the various APM tools out in the market can be said to have the feature of full-stack recording.
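To make the deterministic-versus-statistical distinction above concrete, here is a toy deterministic profiler (my own illustration, not from the article) built on the standard library's sys.setprofile hook. The interpreter invokes the hook on every single call event, which is exactly the per-call overhead that the quoted pyinstrument argument refers to:

```python
import sys
from collections import Counter

call_counts = Counter()

def tracer(frame, event, arg):
    # Invoked by the interpreter for every call event; this per-call
    # cost is the overhead that deterministic profilers accept.
    if event == "call":
        call_counts[frame.f_code.co_name] += 1

def helper(i):
    return i * i

def busy():
    return sum(helper(i) for i in range(100))

sys.setprofile(tracer)
busy()
sys.setprofile(None)

print(call_counts["helper"])  # 100
print(call_counts["busy"])    # 1
```

A statistical profiler like pyinstrument instead wakes up at fixed intervals and records the current call stack, so tight loops of cheap Python calls pay no per-call penalty.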
The idea behind full-stack recording is that, as a request progresses through each layer in the stack, we want to see in which layer of the stack the performance bottleneck occurs. Of course, sometimes the slowness can occur outside your Python script. Earlier, I covered an open source Python profiler option: pyinstrument. Here, I'll cover other well-known APM options.

APM options

You can divide the APM options into two types:

- open source and Python-specific
- hosted and Python-specific

For a Python-specific open source APM, you can check out Elastic APM. Python-specific hosted APM options include New Relic, AppDynamics, and Scout. The hosted APM options are similar to Stackify's own Retrace. Retrace, however, is a one-stop shop, replacing several other tools, and only charges by usage. On top of profiling your application code, these tools also trace your web requests. You can see how your web request consumes wall-clock time through the technology stack, including database queries and web server requests. This makes these options great as profiling tools if you have a web or distributed application.

Bonus section: profile viewers

Strictly speaking, profile viewers aren't profilers, but they can help turn your profiling statistics into a more visually pleasing display. One example is SnakeViz, which is a browser-based graphical viewer for the output of Python's cProfile module. One thing I like about SnakeViz is that it presents the profile as a sunburst diagram.

Another option to better display your cProfile statistics is tuna. Tuna handles runtime and import profiles, and it uses D3 and Bootstrap as the underlying technologies for display.

Conclusion

I've covered all the major concepts, running from tracing to profile viewers, in the area of Python profiling. So, use this post to pick the level and area of profiling you want to do. I recommend starting small and easy if you've never done profiling before.
Once you get the hang of it, you can experiment with more complex tooling. Good luck optimizing!

How to Use Python Profilers: Learn the Basics - March 7, 2019
https://stackify.com/how-to-use-python-profilers-learn-the-basics/
Sending SMS Messages with Visual Basic

Introduction

We all know what an SMS is (hopefully!), but sadly, SMS is said to be a dying medium of communication. I honestly do not think it is, because most countries are still battling with proper internet connectivity, and internet prices are quite expensive, especially in South Africa. Anyway, enough rambling… Today, I will show you how to send and receive an SMS with your Windows Phone. Let's get straight down to business.

Our Project

Figure 1: Our Design

You may name your objects anything you like; but keep in mind that my object names may be different from yours. If you want to follow my example to the letter, here is the resulting XAML code for this design:

```xaml
<Page x:
   <Grid>
      <Button x:
      <TextBox x:
      <TextBox x:
      <Button x:
   </Grid>
</Page>
```

Code

First, let me start with sending an SMS. Add the following code to add the appropriate namespaces and their functionalities to your project:

```vb
Imports System
Imports Windows.Devices.Sms
```

This simply imports SMS communication capabilities into your project. Speaking about capabilities, you should also add this capability to your project. To add any Capability to a project, follow these steps:

- Double-click Package.appxmanifest in the Solution Explorer, as shown in Figure 2.

Figure 2: Solution Explorer

- This will produce the screen shown in Figure 3.

Figure 3: Settings

- Select the Capabilities tab and choose the appropriate capability from the items in the list.
- You could also click Project, Properties and click the Package Manifest… button, as shown in Figure 4.

Figure 4: Project Properties

If this is your first time playing with Windows Phone apps, have a thorough look through each of the tabs and play around a little. Here are a few articles that can assist you to get started with Windows Phone apps:

Add a modular variable.
This is a variable object that will be used by all the objects on the page:

```vb
Private sdDevice As SmsDevice
```

Add the following code behind the button labeled Send:

```vb
Private Async Sub btnSend_Click(sender As Object, _
      e As RoutedEventArgs) Handles btnSend.Click
   If sdDevice Is Nothing Then
      Try
         sdDevice = Await SmsDevice.GetDefaultAsync()
      Catch ex As Exception
         Return
      End Try
   End If

   Try
      Dim stmMessage As New SmsTextMessage()
      stmMessage.To = txtSendTo.Text
      stmMessage.Body = txtBody.Text
      Await sdDevice.SendMessageAsync(stmMessage)
   Catch err As Exception
      sdDevice = Nothing
   End Try
End Sub
```

You can say "Thank You, Windows.Devices.Sms Namespace." Why? Well, I honestly did not think it would be so easy. Okay, it is not easy code, but it is not too overly complicated either! What happens in this sub is set out in the following list:

- The Sub itself has been changed to Async, because it deals with an asynchronous process. If you have not heard about Async yet, I suggest you read the following article: Async Programming with Visual Basic and Windows 8 / 8.1
- It then tries to get hold of a valid SMS-capable device. If it cannot find a valid device, it will do nothing; else, it will continue to the next Try & Catch block. If you have never encountered a Try & Catch block before, I recommend this (somewhat old) article: Handling Exceptions in Visual Basic
- Inside the last Try & Catch block, it simply sets up the SMS message to be sent. I provided the number to which the SMS should be sent, as well as the body of the SMS. Then, I send the message.
- Easy as pie!

Receiving SMSs

Add the following modular variable:

```vb
Private blnListening As Boolean
```

This flag indicates that the device is listening for incoming messages.
Add the next code segment behind the btnRecieve button's Click event:

```vb
Private Async Sub btnReceive_Click(ByVal sender As Object, _
      ByVal e As RoutedEventArgs) Handles btnRecieve.Click
   If sdDevice Is Nothing Then
      Try
         sdDevice = Await SmsDevice.GetDefaultAsync()
      Catch ex As Exception
         Return
      End Try
   End If

   If Not blnListening Then
      Try
         AddHandler sdDevice.SmsMessageReceived, _
            AddressOf sdDevice_SmsMessageReceived
         blnListening = True
      Catch ex As Exception
         sdDevice = Nothing
      End Try
   End If
End Sub
```

Also not too complicated. There is one spanner in the works, however. Notice the AddHandler line? This line attaches a handler for the SmsMessageReceived event; the handler is the actual method that will show the SMS details (note: not the content, just the details, such as where the message is from). I may talk about reading SMSs later. Add the sdDevice_SmsMessageReceived event handler now:

```vb
Private Async Sub sdDevice_SmsMessageReceived(ByVal sender As _
      SmsDevice, ByVal args As SmsMessageReceivedEventArgs)
   Await Dispatcher.RunAsync(Windows.UI.Core. _
         CoreDispatcherPriority.Normal, Sub()
      ' Get the message from the event args.
      Try
         Dim stMessage As SmsTextMessage = args.TextMessage
         ReceivedFromText.Text = stMessage.From
         ReceivedMessageText.Text = stMessage.Body
      Catch ex As Exception
      End Try
   End Sub)
End Sub
```

Not rocket science. If you have not yet read my article about Async programming that I mentioned earlier, I suggest you do so now.

Reading an SMS

Add another button to your page and name it btnRead. Give it a Content value of Read. This button will be used to read the physical SMS that has been received.
Add its code now:

```vb
Private Async Sub btnRead_Click(ByVal sender As Object, _
      ByVal e As RoutedEventArgs) Handles btnRead.Click
   If sdDevice Is Nothing Then
      Try
         sdDevice = Await SmsDevice.GetDefaultAsync()
      Catch ex As Exception
         Return
      End Try
   End If

   txtSendTo.Text = ""
   txtBody.Text = ""

   Try
      Dim id As UInteger
      If id >= 1 AndAlso (id <= sdDevice.MessageStore.MaxMessages) Then
         Dim strMessage As ISmsMessage = _
            Await sdDevice.MessageStore.GetMessageAsync(id)
         Dim strTextMessage As ISmsTextMessage = _
            TryCast(strMessage, ISmsTextMessage)

         If strTextMessage Is Nothing Then
            If TypeOf strMessage Is SmsBinaryMessage Then
               strTextMessage = _
                  SmsTextMessage.FromBinaryMessage(TryCast(strMessage, _
                  SmsBinaryMessage))
            End If
         End If

         If strTextMessage IsNot Nothing Then
            txtSendTo.Text = strTextMessage.From
            txtBody.Text = strTextMessage.Body
         End If
      End If
   Catch ex As Exception
      sdDevice = Nothing
   End Try
End Sub
```

You should know the drill by now! First, it determines whether or not we have a valid SMS device. If it is dealing with a valid SMS device, it will continue to the next steps; else, it will not do anything. Because space is very limited, I decided not to put too many controls onto the page. So, excuse the fact that it keeps on using txtSendTo and txtBody, although for a different purpose. The text gets cleared, and then it establishes the total number of messages received. If there are more than 0 received, it determines what type of message it is. If it is a binary message, it converts the binary message to a text message and simply displays it.

Conclusion

I hope you have enjoyed this article. Until next time, cheers!

About the Author

Hannes du Preez is a Microsoft MVP for Visual Basic for the seventh year in a row.
https://www.codeguru.com/columns/vb/sending-sms-messages-with-visual-basic.html
Broadband Adapter support.

#include <sys/cdefs.h>

Broadband Adapter support. This file contains declarations related to support for the HIT-0400 "Broadband Adapter". There's not really anything that users will generally have to deal with in here.

Don't block waiting for the transfer.

Wait, if needed, on transfer.

Receive packet callback function type. When a packet is received by the BBA, the callback function will be called to handle it.

Retrieve the MAC Address of the attached BBA. This function reads the MAC Address of the BBA and places it in the buffer passed in. The resulting data is undefined if no BBA is connected.

Set the ethernet packet receive callback. This function sets the function called when a packet is received by the BBA. Generally, this inputs into the network layer.

Transmit a single packet. This function transmits a single packet on the bba, waiting for the link to become stable, if requested.
http://cadcdev.sourceforge.net/docs/kos-2.0.0/broadband__adapter_8h.html
When you instantiate a COM object, you are actually working with a proxy known as the Runtime Callable Wrapper (RCW). The RCW is responsible for managing the lifetime requirements of the COM object and translating the methods called on it into the appropriate calls on the COM object. When the garbage collector finalizes the RCW, it releases all references to the object it was holding. For situations in which you need to release the COM object without waiting for the garbage collector to finalize the RCW, you can use the static ReleaseComObject method of the System.Runtime.InteropServices.Marshal type.

The following example demonstrates how to change your MSN Instant Messenger friendly name using C# via COM Interop:

```csharp
// RenameMe.cs - compile with:
//   csc RenameMe.cs /r:Messenger.dll
// Run RenameMe.exe "new name" to change your name
// as it is displayed to other users.
// Run TlbImp.exe "C:\Program Files\Messenger\msmsgs.exe"
// to create Messenger.dll
using System;
using Messenger;

class MSNFun
{
    static void Main(string[] args)
    {
        MsgrObject mo = new MsgrObject();
        IMsgrService ims = mo.Services.PrimaryService;
        ims.FriendlyName = args[0];
    }
}
```

You can also work with COM objects using the reflection API. This is more cumbersome than using TlbImp.exe, but is handy in cases in which it's impossible or inconvenient to run TlbImp.exe. To use COM through reflection, you have to get a Type from Type.GetTypeFromProgID( ) for each COM type you want to work with. Then, use Activator.CreateInstance( ) to create an instance of the type. To invoke methods or set or get properties, use the reflection API, which is covered in Chapter 13:

```csharp
using System;
using System.Reflection;

public class ComReflect
{
    public static void Main()
    {
        object obj_msword; // Microsoft Word Application

        Type wa = Type.GetTypeFromProgID("Word.Application", true);
        // Create an instance of Microsoft Word
        obj_msword = Activator.CreateInstance(wa);
        // Use the reflection API from here on in...
    }
}
```
http://etutorials.org/Programming/C+in+a+nutshell+tutorial/Part+II+Programming+with+the+.NET+Framework/Chapter+18.+Integrating+with+COM+Components/18.2+Exposing+COM+Objects+to+C/
On Fri, Sep 28, Thorsten Kukuk wrote:
> ;-)

Quick hack (nearly untested):

```diff
diff -u -r1.22 pam_group.c
--- modules/pam_group/pam_group.c	16 Jun 2006 06:35:16 -0000	1.22
+++ modules/pam_group/pam_group.c	28 Sep 2007 13:58:42 -0000
@@ -329,6 +329,13 @@
 	  return FALSE;
 	}
     }
+
+  /* Ok, we know that b is a substring from A and does not contain
+     wildcards, but now the length of both strings must be the same,
+     too. */
+  if (strlen (a) != strlen(b))
+    return FALSE;
+
   return ( !len );
 }
```

-- 
Thorsten Kukuk, Project Manager/Release Manager SLES
SUSE LINUX Products GmbH, Maxfeldstr. 5, D-90409 Nuernberg
GF: Markus Rex, HRB 16746 (AG Nuernberg)
http://listman.redhat.com/archives/pam-list/2007-September/msg00058.html
Translation(s): none

Introduction

A lot of git and git-buildpackage commands need more than one option, are needed seldom, and are hard to remember. This page is supposed to be a reminder, not a detailed explanation. Like: you remember you had to use a git-buildpackage command, but can't find it in your notes any more (and neither online).

git-buildpackage

clone from gitorious for rebuilding

```
gbp-clone --pristine-tar [email protected]:debian-diaspora/ruby-columnize.git
```

By cloning like this you will be able to build the package with git-buildpackage, for example if you want to have a look at someone else's package.

import new upstream version

If there is a working debian/watch file:

```
git-import-orig --pristine-tar --uscan
```

If you need to download a new upstream tarball manually:

```
git-import-orig --pristine-tar path/to/<new-upstream-version>.tar.gz
```

git simple push

To push to gitorious run

```
git push --all
git push --tags
```

A typical gotcha is to "git push origin master" instead (and/or to forget the --tags push).

git commit --amend

To make it short: it can help you if you edit the same file a few times to figure out what to do, but will keep your commit log clean. The manpage man git-commit is elaborate.

add further remotes

Example: You have your package at alioth, and now you upgrade it to a new upstream version and want to push your changes to the gitorious/debian-diaspora team repo to show it to others. Try the following (but don't expect it to work perfectly, and don't do this the other way around, to alioth, until you are sure how it works). Another example is that you want to have a backup of your repo at another git server, for example a local version of gitolite.
```
gbp-clone --pristine-tar [email protected]:/git/pkg-ruby-extras/ruby-xpath.git
```

and your remotes will be:

```
git remote -v
origin  [email protected]:/git/pkg-ruby-extras/ruby-xpath.git (fetch)
origin  [email protected]:/git/pkg-ruby-extras/ruby-xpath.git (push)
```

Now add another remote for the gitorious/debian-diaspora repo:

```
git remote add gitorious [email protected]:debian-diaspora-repos/ruby-xpath.git
```

git remote -v will confirm that it was created; now push to gitorious. "gitorious" is just the handle you give it; you can choose whatever makes the most sense to you.

```
git push --all gitorious
git push --tags gitorious
```
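As a worked aside on the git commit --amend tip above, here is a sketch (in a throwaway repository, with placeholder identity settings) of folding a follow-up edit into the previous commit so the log stays clean:

```shell
set -e
# Work in a throwaway repository so nothing real is touched.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"

echo "first draft" > notes.txt
git add notes.txt
git commit -q -m "Add notes"

# Edit the same file again ...
echo "second draft" > notes.txt
git add notes.txt
# ... and fold the fix into the previous commit instead of adding a new one.
git commit -q --amend --no-edit

git rev-list --count HEAD   # still exactly one commit
```

Note that amending rewrites the commit, so only do this to commits you haven't pushed yet.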
https://wiki.debian.org/Diaspora/Packaging/git
Ahoy there,

Thanks for taking the time to read this. I'm in a programming class and this is my first adventure into the subject. I'm working on a code that is supposed to be a number guessing game. Here's where I am at so far:

```java
import java.util.*;
import java.io.*;

public class number_guessing_game
{
    public static void main(String[] args)
    {
        BufferedReader input = new BufferedReader(new InputStreamReader(System.in));
        double attempts = 0;
        double random = 0;
        double guess = 0;
        String go = "";
        Random generator = new Random();
        int answer = generator.nextInt(10) + 1;

        while (go.equals("yes"))
        {
            System.out.println("How many chances do you want? ");
            try
            {
                attempts = Double.parseDouble(input.readLine());
            }
            catch (IOException E)
            {
                System.out.println(E);
            }
            System.out.println("What's the magic number? (1-10)? ");
            try
            {
                guess = Double.parseDouble(input.readLine());
            }
            catch (IOException E)
            {
                System.out.println(E);
            }
            if (guess == answer)
            {
                System.out.println("Congrats! You're guess is right");
            }
            else if (guess != answer)
            {
                System.out.println("Sorry, that is not correct! Would you like to try again?");
                try
                {
                    go = input.readLine();
                }
                catch (IOException E)
                {
                    System.out.println(E);
                }
            }
        }
    }
}
```

I exit nano, compile with no exceptions, but when I go to run it, I just get the next line:

```
student21@slisdl1:~/LS590/Labs/Lab4> javac number_guessing_game.java
student21@slisdl1:~/LS590/Labs/Lab4> java number_guessing_game
student21@slisdl1:~/LS590/Labs/Lab4>
```

Any ideas? I've gone over it a ton and still can't put my finger on the problem. Thank you in advance for any suggestions
http://www.javaprogrammingforums.com/whats-wrong-my-code/14332-beginner-stuck-implementing-while-loop-compiles-fine-but-still-wont-run.html
In the .NET world there hasn't been much choice in web server technology aside from IIS and all the caveats that come with it. IIS has been around for a long time now, longer than ASP.NET itself, and for a junior programmer, tackling it and its years' worth of libraries can be quite a daunting task. Another barrier is System.Web, a monolithic assembly that contains everything under the sun, all tightly coupled into one namespace and often coupled to IIS.

With more and more processing moving onto the client, servers have stopped processing and returning HTML and are instead just returning data for the client to parse and present. Modern approaches such as node.js require minimal effort to act as a web server, containing only what is needed to build the application and nothing else.

OWIN

The Open Web Interface for .NET (OWIN) specification aims to provide a similar experience for .NET. This open standard provides a simple API for web servers and frameworks to interact with. The mission statement comes from the guys at owin.org.

I find that a good way of viewing OWIN is as an abstraction layer between .NET web servers and web applications. This API has minimal dependencies on other frameworks, which allows applications to be moved between hosts and platforms with minimal effort.

The request/response flow acts as a pipeline, with all requests having to pass through the various middleware components before reaching the application. In fact, the request may never reach your application at all, with the middleware responding for you (e.g. when using an authentication middleware component).

Katana

Katana is the Microsoft implementation of the OWIN specification and can be self-hosted or integrated with the IIS pipeline.
Katana components use the Microsoft.Owin.* namespaces and currently include support for Microsoft/ASP.NET technologies such as SignalR and ASP.NET Identity, as well as a large number of authentication/authorization packages, most notably enabling simple external authentication with providers such as Google.

IIS

By no means am I saying not to use IIS; it's still fantastic at what it does, but there are now alternatives for when IIS is a bit overkill for your requirements. As I said before, Katana and IIS can be used together (e.g. in current MVC projects) and System.Web can still be called upon. Just know your requirements and plan accordingly.

ASP.NET vNext

Microsoft's experience with Katana has had a big effect on the direction of ASP.NET 5. ASP.NET 5 still uses OWIN, but the internals have had a complete rewrite with some breaking changes (for instance, some of the basic method and variable names have been changed).

Further Reading

I'll make some more posts about the basics of Katana middleware components and how requests/responses are handled, but good places to start are:

- OWIN Specification v1.0.0 - The original OWIN specification. There is also a draft v1.1.0 available.
- ASP.NET - An Overview of Project Katana - A good intro to Katana and basic OWIN middleware. However, do take some of the ASP.NET team's implementation documentation with a pinch of salt (e.g. do not put business logic & data access in your presentation layer. Also stay away from some of their OAuth implementations).
- Pluralsight - ASP.NET MVC 5 Fundamentals - Scott Allen's Pluralsight course has a very good chapter on OWIN and Katana implementation.
https://www.scottbrady91.com/Katana/OWIN-Katana-Introduction
For an n by n real matrix A, Hadamard's upper bound on the determinant is

\det(A)^2 \leq \prod_{i=1}^n \sum_{j=1}^n a_{ij}^2

where a_ij is the element in row i and column j. See, for example, [1].

How tight is this upper bound? To find out, let's write a little Python code to generate random matrices and compare their determinants to Hadamard's bounds. We'll take the square root of both sides of Hadamard's inequality to get an upper bound on the absolute value of the determinant.

Hadamard's inequality is homogeneous: multiplying the matrix A by λ multiplies both sides by λ^n. We'll look at the ratio of Hadamard's bound to the exact determinant. This has the same effect as generating matrices to have a fixed determinant value, such as 1.

```python
from scipy.stats import norm
from scipy.linalg import det
import matplotlib.pyplot as plt
import numpy as np

# Hadamard's upper bound on determinant squared
def hadamard(A):
    return np.prod(np.sum(A**2, axis=1))

N = 1000
ratios = np.empty(N)
dim = 3

for i in range(N):
    A = norm.rvs(size=(dim, dim))
    ratios[i] = hadamard(A)**0.5 / abs(det(A))

plt.hist(ratios, bins=int(N**0.5))
plt.show()
```

In this simulation the ratio is very often around 25 or less, but occasionally much larger, 730 in this example. It makes sense that the ratio could be large; in theory the ratio could be infinite, because the determinant could be zero. The error is frequently much smaller than the histogram might imply, since a lot of small values are binned together. I modified the code above to print quantiles and ran it again.

```python
print(min(ratios), max(ratios))
qs = [0.05, 0.25, 0.5, 0.75, 0.95]
print([np.quantile(ratios, q) for q in qs])
```

This printed

```
1.0022 1624.9836
[1.1558, 1.6450, 2.6048, 5.7189, 32.49279]
```

So while the maximum ratio was 1624, the ratio was less than 2.6048 half the time, and less than 5.7189 three quarters of the time. Hadamard's upper bound can be very inaccurate; there's no limit on the relative error, though you could bound the absolute error in terms of the norm of the matrix.
However, very often the relative error is moderately small.

More posts on determinants

[1] Courant and Hilbert, Methods of Mathematical Physics, Volume 1.

One thought on "Hadamard's upper bound on determinant"

Random Comments: Since the determinant is just the (signed) volume of a box spanned by (say) the row space, treating the rows as orthogonal (even if they aren't) gives exactly that bound. Since the Hadamard bound is asymmetric between rows and columns, you could get a slightly tighter bound by considering the minimum of the Hadamard bounds of A and transpose(A). I suspect it doesn't help much.
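The commenter's min-of-both-bounds idea is easy to try by hand. Here is a dependency-free sketch (the 3x3 matrix is my own example, not from the post) that computes the row- and column-based Hadamard bounds and checks them against the exact determinant:

```python
import math

# A small fixed matrix (my own example, not from the post)
A = [[2.0, -1.0, 0.0],
     [1.0,  3.0, 1.0],
     [0.0,  2.0, 2.0]]

def hadamard_bound(M):
    # square root of the product, over rows, of the sum of squared entries
    return math.prod(sum(x * x for x in row) for row in M) ** 0.5

def det3(M):
    # cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

row_bound = hadamard_bound(A)
col_bound = hadamard_bound([list(col) for col in zip(*A)])  # bound on transpose(A)
tighter = min(row_bound, col_bound)
exact = abs(det3(A))

print(exact)  # 10.0
print(round(row_bound, 3), round(col_bound, 3))
```

For this matrix the row bound is sqrt(440) ≈ 20.98 and the column bound is sqrt(350) ≈ 18.71, so the transpose does tighten the bound a bit, though both are well above the exact value of 10, consistent with the commenter's suspicion.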
https://www.johndcook.com/blog/2020/07/22/hadamard-inequality/
Name

HashMap<K,V>

Synopsis

This class implements the Map interface using an internal hashtable. It supports all optional Map methods, allows key and value objects of any types, and allows null to be used as a key or a value. Because HashMap is based on a hashtable data structure, the get( ) and put( ) methods are very efficient.

HashMap is much like the Hashtable class, except that the HashMap methods are not synchronized (and are therefore faster), and HashMap allows null to be used as a key or a value. If you are working in a multithreaded environment, or if compatibility with previous versions of Java is a concern, use Hashtable. Otherwise, use HashMap.

If you know in advance approximately how many mappings a HashMap will contain, you can improve efficiency by specifying initialCapacity when you call the HashMap( ) constructor. The initialCapacity argument times the loadFactor argument should be greater than the number of mappings the HashMap will contain. A good value for loadFactor is 0.75; this is also the default value. See Map for details on the methods of HashMap. See also TreeMap and HashSet.

Figure 16-24. java.util.HashMap<K,V>

```java
public class HashMap<K,V> extends AbstractMap<K,V>
        implements Map<K,V>, Cloneable, Serializable {
    // Public Constructors
    public HashMap( );
    public HashMap(int initialCapacity);
    public HashMap(Map<? extends K,? extends V> m);
    public HashMap(int initialCapacity ...
```
https://www.oreilly.com/library/view/java-in-a/0596007736/re624.html
Introduction

The BackgroundColor property specifies the color the background will be cleared to every frame. To prevent the background from clearing every frame, set the Alpha of the BackgroundColor to 0. See more information on this topic below.

Code Example

The following changes the BackgroundColor in response to user input. Add the following using statement:

```csharp
using FlatRedBall.Input;
```

Add the following to Update:

```csharp
if (InputManager.Keyboard.KeyPushed(Keys.Up))
{
    Camera.Main.BackgroundColor = Microsoft.Xna.Framework.Graphics.Color.Red;
}
if (InputManager.Keyboard.KeyPushed(Keys.Down))
{
    Camera.Main.BackgroundColor = Microsoft.Xna.Framework.Graphics.Color.Blue;
}
```

Transparent Background Color

When a Camera is drawn, the first thing that is done is that its DestinationRectangle is painted with the background color. If you have multiple Cameras which overlap and you'd like Cameras which are on top not to write over what's already been drawn by previous Cameras, you can set the BackgroundColor to a color that has an alpha of 0. The BackgroundColor for any Camera except the default one contained in the SpriteManager is transparent. If you are making a split-screen game, you will need to change the background color to something non-transparent.

Drawing before FlatRedBall is Drawn

If you would like to draw to the screen before FlatRedBall draws, you can do so, but you must do the following:

- Set the Camera's Color's Alpha to 0
- Turn off RenderTargets.
https://flatredball.com/documentation/api/flatredball/flatredball-camera/flatredball-camera-backgroundcolor/
Python library implementing proximal operators to allow solving non-smooth, constrained convex problems with proximal algorithms.

Project description

PyProximal

:vertical_traffic_light: :vertical_traffic_light: This library is under early development. Expect things to constantly change until version v1.0.0. :vertical_traffic_light: :vertical_traffic_light:

Objective: solve problems of the form

min ||x - y||_2^2 + ||Dx||_1

Moreover, many of the problems that are solved in PyLops can now also be solved by means of proximal algorithms!

Project structure

This repository is organized as follows:

- pyproximal: python library containing various orthogonal projections, proximal operators, and solvers
- pytests: set of pytests
- testdata: sample datasets used in pytests and documentation
- docs: sphinx documentation
- examples: set of python script examples for each proximal operator to be embedded in documentation using sphinx-gallery
- tutorials: set of python script tutorials to be embedded in documentation using sphinx-gallery

Getting started

You need Python 3.8 or greater. Note: versions prior to v0.3.0 also work with Python 3.6 or greater; however, they require the scipy version to be lower than v1.8.0.

From PyPi

If you want to use PyProximal within your codes, install it in your Python environment by typing the following command in your terminal:

```
pip install pyproximal
```

Open a python terminal and type:

```python
import pyproximal
```

From Github

You can also directly install from the master node (although this is not recommended):

```
pip install git+
```

Contributing

Feel like contributing to the project? Adding new operators or a tutorial? We advise using the Anaconda Python distribution to ensure that all the dependencies are installed via the Conda package manager. Follow the instructions below and read carefully the CONTRIBUTING file before getting started.

1. Fork and clone the repository

Execute the following command in your terminal:

```
git clone
```

2. Install PyProximal in a new Conda environment (pyproximal)

Documentation

The official documentation of PyProximal is available here.

Contributors

- Matteo Ravasi, mrava87
- Nick Luiken, NickLuiken
- Eneko Uruñuela, eurunuela
- Marcus Valtonen Örnhag, marcusvaltonen

Project details

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pyproximal/
The Common Data tutorial has an example where a DataClient is created from a client application and stored in the runtime data share. This common data is then accessed by a deployed service using the ServiceContext.GetDataClient method. In our application, we wish to have the computational results generated by the service uploaded to the runtime data share as well. However, HPCServiceHost.exe is throwing an error while trying to access the DataClient class (namespace Microsoft.Hpc.Scheduler.Session.Data). Following is the snippet:

```csharp
DataClient dclient = DataClient.Create(instanceData.DataServer, instanceData.DataID);
dclient.WriteAll<T[]>(tobject.ToArray(), true);
```

Error:

We have added the runtime dependency in the app.config of the service class library:

```xml
-10.0.0.0" newVersion="10.0.0.0" />
      </dependentAssembly>
   </assemblyBinding>
</runtime>
```

This code was working perfectly fine with HPC Pack 2012, and we are facing this error when trying to upgrade the application to HPC Pack 2019. The HPC Pack 2019 documentation mentions just the following change with respect to the DataClient class:

"If your service calls ServiceContext.GetDataClient to obtain a reference of DataClient before, no code change is needed. Instead if your service explicitly uses DataClient.Open, you'll need to change the call to a new API if your service is running on a non-domain joined node."
https://social.microsoft.com/Forums/en-US/35195c3f-a8fb-486f-b238-852bad05e4d7/does-hpc-2019-pack-allow-deployed-soa-service-to-write-data-in-runtime-data-share?forum=windowshpcdevs
CC-MAIN-2020-50
en
refinedweb
Suppose there are N cars that are going to the same destination along a one-lane road. The destination is 'target' miles away. Each car i has a constant speed value speed[i] (in miles per hour), and its initial position is position[i] miles towards the target along the road. A car can never pass another car ahead of it, but it can catch up to it and drive bumper to bumper at the same speed. Here the distance between these two cars is ignored - they are assumed to have the same position. A car fleet is some non-empty set of cars driving at the same position and same speed. If one car catches up to a car fleet right at the destination point, it will still be considered as one car fleet. So we have to find how many car fleets will arrive at the destination. So if the target is 12, the position array is [10,8,0,5,3] and the speed array is [2,4,1,1,3], then the output will be 3. This is because the cars starting at 10 and 8 become a fleet, meeting each other at 12. Now the car starting at 0 doesn't catch up to any other car, so it is a fleet by itself. Again the cars starting at 5 and 3 become a fleet, meeting each other at 6. To solve this, we will follow these steps −
- pair each starting position with its speed and sort the pairs in increasing order of position
- for each car, compute the time it needs to reach the target on its own: (target - position) / speed
- scan the sorted cars while keeping these times on a stack: whenever the time on top of the stack (a car behind) is less than or equal to the current car's time, that car behind catches up and merges into the current car's fleet, so pop it and decrease the fleet count
- start the count at N; the value left after the scan is the number of fleets
Let us see the following implementation to get better understanding −
#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
   int carFleet(int t, vector<int>& p, vector<int>& s) {
      vector<pair<double, double>> v;
      int n = p.size();
      for (int i = 0; i < n; i++) {
         v.push_back({p[i], s[i]});
      }
      int ret = n;
      sort(v.begin(), v.end());
      stack<double> st;
      for (int i = 0; i < n; i++) {
         double temp = (t - v[i].first) / v[i].second;
         while (!st.empty() && st.top() <= temp) {
            ret--;
            st.pop();
         }
         st.push(temp);
      }
      return ret;
   }
};
int main() {
   vector<int> v1 = {10, 8, 0, 5, 3};
   vector<int> v2 = {2, 4, 1, 1, 3};
   Solution ob;
   cout << (ob.carFleet(12, v1, v2));
}
Input: target = 12, position = [10,8,0,5,3], speed = [2,4,1,1,3]
Output: 3
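The same idea can be re-implemented compactly in Python. This sketch is my own illustration, not part of the original tutorial; it counts fleet leaders directly instead of decrementing from N:

```python
def car_fleet(target, position, speed):
    # Time each car needs to reach the target unobstructed,
    # ordered by starting position (closest to the target last).
    times = [(target - p) / s for p, s in sorted(zip(position, speed))]
    fleets = 0
    lead_time = float("-inf")  # arrival time of the fleet directly ahead
    for t in reversed(times):  # walk from the car nearest the target backwards
        if t > lead_time:      # cannot catch the fleet ahead -> new fleet leader
            fleets += 1
            lead_time = t
        # otherwise this car merges into the fleet ahead
    return fleets

print(car_fleet(12, [10, 8, 0, 5, 3], [2, 4, 1, 1, 3]))  # -> 3
```

A car strictly slower to the target than the fleet ahead of it starts a new fleet; anything else merges, which is exactly the stack condition in the C++ version.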
https://www.tutorialspoint.com/car-fleet-in-cplusplus
CC-MAIN-2020-50
en
refinedweb
AttributeError("module 'pandas' has no attribute 'read_csv'") I am new to Python and I have been stuck on a problem for some time now. I recently installed the module pandas and at first, it worked fine. However, for some reason it keeps saying AttributeError("module 'pandas' has no attribute 'read_csv'"). I have looked all over StackOverflow and the consensus is that there is likely another file in my CWD with the same name, but I believe I don't have one. Even if I create a new project and call it, for example, Firstproject.py, and immediately import pandas as pd, I get the error. I would appreciate the help. I can provide more info if required. Your problem is this: the command import pandas as pd in your case didn't import the genuine pandas module, but some other one - and in that other one the read_csv() function is not defined. Highly likely you have in your project directory (or in your current directory) a file with the name "pandas.py". And - highly likely - you called the pd.read_csv() function in it. Rename this file, and you will be happy again. (Highly likely.)
Your best bet is to type "pandas" in your console, and you will be able to see where your "pandas" name originated from: >>> pandas <module 'pandas' from '/some-path/site-packages/pandas/__init__.py'> There is also the possibility that you used this name for your own script, e.g. read_csv.py, so pandas itself gets confused about what to import; if your script is named pandas.py or csv.py, rename it to something else, like test_csv_read.py. Also remove any files in the path named read_csv.pyc or csv.pyc. Another reported fix: when you downloaded Python, it may have installed the 32-bit version automatically; delete it, download the 64-bit version instead, and the problem may be solved :)
Comments: - I suggest visiting How to Ask in order to get a better insight on how to ask a solid question. Additionally, some code, errors, and a slightly better explanation are necessary in order for others to offer you help (they first need to understand what the issue is). Hang in there. You'll be great at this. - Esketit... pls check the answers given below, if any of them help. - Edit your question and show the full traceback. - Hey pygo, this is not the case. - Hey pygo, it happens even when I use one line of code which just imports pandas, i.e. import pandas as pd - Shall I upload a screenshot? - @Esketit, sure, that will help. - Sorry, I can't, it won't let me, but I have this: Backend TkAgg is interactive backend. Turning interactive mode on. AttributeError("module 'pandas' has no attribute 'read_csv'") Stack trace: > File "c:\users(my name was here)\source\repos\what the hell\what the hell\what_the_hell.py", line 1, in <module> > import pandas as pd Loaded 'main' The program 'python.exe' has exited with code -1 (0xffffffff).
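The diagnosis suggested in the answers (check where the name pandas actually resolves, and whether a same-named file sits in the working directory) can be scripted. The helpers below are my own illustration; the function names are not from the thread:

```python
import importlib
import os

def import_origin(module_name):
    """Return the file a module was actually loaded from (None for built-ins)."""
    mod = importlib.import_module(module_name)
    return getattr(mod, "__file__", None)

def shadowed_by_local_file(module_name):
    """True if a file like ./<module_name>.py exists in the current directory
    and may therefore shadow the installed package (the script's own directory
    normally comes first on sys.path)."""
    return os.path.exists(os.path.join(os.getcwd(), module_name + ".py"))
```

If import_origin("pandas") prints a path inside your own project instead of site-packages, that file is the culprit: rename it and delete any stale .pyc next to it.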
http://thetopsites.net/article/52677658.shtml
CC-MAIN-2020-50
en
refinedweb
Hi everyone, I am performing cell-positive screening assays on DAB-stained whole IHC tissue slides (CD8+). I hope you can help me with the step-by-step use of the tissue classifier, specifically how to train the software to differentiate stroma from the rest of the tissue, identify the stroma as a new annotation, and then apply positive cell detection with DAB. Thank you! Can the pixel classifier or other QuPath feature be used to create a new annotation of interest (stroma) and apply DAB positive cell detection? You can use the pixel classifier to identify different regions of interest, in which you could then apply positive cell detection. To do so: - Go to Classify > Pixel classification > Train pixel classifier. - Play around with the parameters to find a classification that suits you (don't forget to annotate at least one annotation for each of 2 different classes!); you can also use the Ignore* class. - Save your classifier. - Click on Create objects, Full image, and choose Annotations as the New object type. - You now have annotations for all your different classes; you might want to clean them up a bit by getting rid of annotations that are not necessary (this is up to you). You can then run your Positive cell detection with your annotations as the 'parent annotation'. More info on the Pixel classifier here. Note: to select only annotations with a specific classification, you can do this in your script editor: def myClass = "Tumor" selectObjectsByClassification(myClass) EDIT: More convenient method in the script sample, as pointed out to me.
https://forum.image.sc/t/can-the-pixel-classifier-or-other-qupath-feature-be-used-to-create-a-new-annotation-of-interest-stroma-and-apply-dab-positive-cell-detection/44950
CC-MAIN-2020-50
en
refinedweb
Introduction This article shall describe the construction of three custom controls; each is used to format its text content to be either all upper case, all lower case, title case, or normal (as typed) case, regardless of the format of the input. Such controls may be useful if it is necessary to load the control's text from a source in which the values are in the wrong case; for example, if one were to load a ListBox from a column in a database where all of the values were stored as all upper case strings but the desire was to display the text using title case, the Case List control contained in the sample project will make the conversion once the values are loaded into its list. Figure 1: The Case Controls in Use. Getting Started The Case Controls solution contains two projects. The first project is called "CaseControls" and it contains three extended controls (the RichTextBox, the ListBox, and the ComboBox). Each of the controls was extended such that the modified version offers the option of formatting the text contained in the control into one of four options (Upper Case, Lower Case, Normal Case, and Title Case). The second project is called "TestCaseControl" and it is provided to demonstrate use of the controls in a Win Forms application. Figure 2: Solution Explorer with Both Projects Visible. The Case Controls Project Code: Case Text The CaseText control is an extended version of the RichTextBox control; this control was extended such that text sent to the control or keyed directly into it is immediately formatted into one of the four available options (Upper Case, Lower Case, Normal Case, or Title Case).
In use, the developer may drop the control onto a form and set a single property, "TextCase", that is used by the control to determine how to format the text. The control is built in a single class entitled "CaseText". The class begins with the following imports, namespace, and class declarations:

using System;
using System.Collections;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.Globalization;
using System.Threading;

namespace CaseControls
{
    /// <summary>
    /// Extend the RichTextBox control
    /// </summary>
    public class CaseText : RichTextBox
    {

The class is declared to inherit from the RichTextBox control. By inheriting from the RichTextBox control, all of the functionality of that control is included. After declaring the class, the next section of code is used to declare an enumeration defining the case mode options, a private member variable used to hold the currently selected text case mode, and a public property used to set or retrieve the selected case mode:

        /// <summary>
        /// Enumeration of case type options
        /// </summary>
        public enum CaseType
        {
            Normal,
            Title,
            Upper,
            Lower
        }

        /// Set the current case type for the control;
        /// default to normal case
        private CaseType mCaseType = CaseType.Normal;

        /// property used to maintain current case type
        public CaseType TextCase
        {
            get { return mCaseType; }
            set
            {
                mCaseType = value;
                UpdateTextCase();
            }
        }

The next block of code contains the default constructor and the component initialization code; since this control is intended to serve as either a textbox or a richtextbox, the control's constructor contains a bit of code to make the control look more like a standard textbox control when it is created (the multi-line property is set to false and the height is set to 20).
The initialize component method also includes the addition of a text changed event handler:

        /// Default Constructor
        public CaseText()
        {
            InitializeComponent();
            this.Text = string.Empty;
            this.Multiline = false;
            this.Height = 20;
        }

        /// Initialization/Event Handlers
        private void InitializeComponent()
        {
            this.SuspendLayout();
            //
            // CaseText
            //
            this.TextChanged += new System.EventHandler(this.CaseText_TextChanged);
            this.ResumeLayout(false);
        }

The last bit of code required by the control is the text changed event handler, which is used merely to call a method that updates the case of the text based upon the selected case mode property. Since this method is called whenever the text changed event fires, the textbox will update the case as the user types. When the UpdateTextCase method is called, the method stores the text currently contained in the control and it stores the position of the insert cursor. The copy of the text placed in the string variable is operated on within the method and then is used to replace the text contained in the control. The position of the insert cursor is stored so that the cursor may be restored to its original position after the text has been replaced. This supports edits made to sections of the string other than the end or beginning of the string.
        /// Call the Update Text Case function each time
        /// the text changes
        /// <param name="sender"></param>
        /// <param name="e"></param>
        private void CaseText_TextChanged(object sender, EventArgs e)
        {
            UpdateTextCase();
        }

        /// Depending upon the Case Type selected,
        /// process the textbox accordingly
        private void UpdateTextCase()
        {
            string sControlText = this.Text;
            int cursorPosition = this.SelectionStart;

            switch (this.TextCase)
            {
                case CaseType.Lower:
                    // convert to all lower case
                    this.Text = this.Text.ToLower();
                    break;
                case CaseType.Normal:
                    // do nothing, leave as entered
                    break;
                case CaseType.Title:
                    // convert to title case
                    string sTemp = this.Text.ToLower();
                    CultureInfo ci = Thread.CurrentThread.CurrentCulture;
                    TextInfo ti = ci.TextInfo;
                    this.Text = ti.ToTitleCase(sTemp);
                    break;
                case CaseType.Upper:
                    // convert to all upper case
                    this.Text = this.Text.ToUpper();
                    break;
                default:
                    break;
            }

            // move to the correct position in the string
            this.SelectionStart = cursorPosition;
        }

The code used in the CaseList and CaseCombo controls is very similar and is all included in the download; for that reason, I won't describe it here in this document. The only major difference between the code used in those controls is that the Update Text methods are made public in the list controls so that the user may invoke the method whenever the list is created or changed. Whenever the user invokes the method, the update method will loop through the text in the collection and update the case of each list item. Code: Test Case Control This project is used to test the custom controls. The project contains a single Windows form. The form contains four of each type of custom control, each of which is intended to demonstrate one of the case mode options.
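Before moving on to the test project, the core switch in UpdateTextCase can be mirrored as a small pure function. The Python sketch below is my own illustration (note that Python's str.title() differs in some details from .NET's TextInfo.ToTitleCase, for example around apostrophes):

```python
from enum import Enum

class CaseType(Enum):
    NORMAL = "normal"
    TITLE = "title"
    UPPER = "upper"
    LOWER = "lower"

def update_text_case(text, case_type):
    """Mirror of the control's UpdateTextCase switch statement."""
    if case_type is CaseType.LOWER:
        return text.lower()
    if case_type is CaseType.UPPER:
        return text.upper()
    if case_type is CaseType.TITLE:
        # like the C# version: lower-case first, then apply title casing
        return text.lower().title()
    return text  # NORMAL: leave the text as entered
```

The same mapping drives both the text control (on every keystroke) and the list controls (once per item when the update method is invoked).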
The form class begins with the default imports and class declaration:

using System.Collections.Generic;

namespace TestCaseControl
{
    public partial class Form1 : Form
    {

In the form constructor, the list type controls are all populated manually using strings formatted contrary to what is desired for display. For example, if the custom listbox control is set to display upper case text, the text submitted to the control's list is passed in using lower case or mixed case strings. After each list is loaded, the control's update text case method is invoked to reformat the case used in the list items:

        public Form1()
        {
            InitializeComponent();

            // C O M B O B O X   E X A M P L E //

            // load an Upper Case list with these items
            caseCombo1.Items.Add("popeye");
            caseCombo1.Items.Add("olive oil");
            caseCombo1.Items.Add("brutus");
            caseCombo1.Items.Add("whimpy");
            caseCombo1.Items.Add("sweet pea");
            // update the case of these list items
            caseCombo1.UpdateListTextCase();

            // load a Lower Case list with these items
            caseCombo2.Items.Add("CHOCOLATE CREAM");
            caseCombo2.Items.Add("Rasberry Truffle");
            caseCombo2.Items.Add("PINEAPPLE Sling");
            caseCombo2.Items.Add("COCONUT HEaRt");
            caseCombo2.Items.Add("VANILLA ICE Cream");
            // update the case of these list items
            caseCombo2.UpdateListTextCase();

            // load a Normal Case list with these items
            caseCombo3.Items.Add("George S. Patton");
            caseCombo3.Items.Add("Mikhail Miloradovich");
            caseCombo3.Items.Add("Bernard Montgomery");
            caseCombo3.Items.Add("Carl von Clausewitz");
            caseCombo3.Items.Add("Sun Tzu");
            caseCombo3.UpdateListTextCase();

            // load a Title Case list with these items
            caseCombo4.Items.Add("john lennon");
            caseCombo4.Items.Add("paul mc cartney");
            caseCombo4.Items.Add("ringo starr");
            caseCombo4.Items.Add("george harrison");
            caseCombo4.Items.Add("peter best");
            caseCombo4.UpdateListTextCase();

            // L I S T B O X   E X A M P L E //

            // load an Upper Case list with these items
            caseList1.Items.Add("popeye");
            caseList1.Items.Add("olive oil");
            caseList1.Items.Add("brutus");
            caseList1.Items.Add("whimpy");
            caseList1.Items.Add("sweet pea");
            caseList1.UpdateListTextCase();

            // load a Lower Case list with these items
            caseList2.Items.Add("CHOCOLATE CREAM");
            caseList2.Items.Add("Rasberry Truffle");
            caseList2.Items.Add("PINEAPPLE Sling");
            caseList2.Items.Add("COCONUT HEaRt");
            caseList2.Items.Add("VANILLA ICE Cream");
            // update the case of these list items
            caseList2.UpdateListTextCase();

            // load a Normal Case list with these items
            caseList3.Items.Add("George Patton");
            caseList3.Items.Add("Mikhail Miloradovich");
            caseList3.Items.Add("Bernard Montgomery");
            caseList3.Items.Add("Carl von Clausewitz");
            caseList3.Items.Add("Sun Tzu");
            caseList3.UpdateListTextCase();

            // load a Title Case list with these items
            caseList4.Items.Add("john lennon");
            caseList4.Items.Add("paul mc cartney");
            caseList4.Items.Add("ringo starr");
            caseList4.Items.Add("george harrison");
            caseList4.Items.Add("peter best");
            caseList4.UpdateListTextCase();
        }

The only other code in the form class is a set of button event handlers that pass an improperly formatted string to each of the four custom case text controls:

        /// Button event handlers used to send
        /// formatted strings to each of the
        /// CaseText controls on the page. Each
        /// example is set to a different type
        /// of case (Upper, Lower, Normal, and Title)
        private void btnToUpper_Click(object sender, EventArgs e)
        {
            caseText1.Text = lblToUpperCase.Text;
        }

        private void btnToLower_Click(object sender, EventArgs e)
        {
            caseText2.Text = lblToLowerCase.Text;
        }

        private void btnToNormal_Click(object sender, EventArgs e)
        {
            caseText3.Text = lblToNormalCase.Text;
        }

        private void btnToTitle_Click(object sender, EventArgs e)
        {
            caseText4.Text = lblToTitleCase.Text;
        }

Summary This article was intended to demonstrate an approach to building a set of custom controls that could be used to reformat the case of their text or list item text; the purpose of such controls would be to allow improperly formatted text obtained from an external source to be properly displayed in the context of a Windows application without the need to modify the source of the text. Such a control may be useful if one is, for example, attempting to display data obtained from a database where it is not stored in the proper format (e.g., the column contains all upper case strings but the desire is to display them as title case or all lower case strings).
https://www.c-sharpcorner.com/article/enforce-text-case-with-custom-controls/
CC-MAIN-2020-50
en
refinedweb
Rename a wiki (Redirected from Rename a wiki domain) Domain rename draft (2015) TODO: Refactor this page to three parts - what to do to prepare; what to do at the execution (deployment); what to do after deployment (including testing.) This page deals with moving a wiki from one domain to another. - Deal with site language rename if relevant (i.e. probably not for a special wiki) - Make sure that Names.php, Messages*.php and */i18n/*.json in core have the new names and codes. - Make sure that langdb.yaml in UniversalLanguageSelector has new names and codes and redirects from the old names. (For als -> gsw there's also an extra redirection in UniversalLanguageSelector, so remove it when needed.) - Add new wiki domain to DNS zone (operations/dns.git) - Update operations/puppet.git - Update/add apache config (operations/puppet.git modules/mediawiki/files/apache/sites) to send new domain to MediaWiki (wikimedia.org subdomains etc. Normal project domains should already be covered by a wildcard) - Add new domain to RESTbase config (operations/puppet.git modules/restbase/templates/config.yaml.erb) - MediaWiki config (operations/mediawiki-config.git) to make new domain work as an alias: - Map new wiki domain to database name (multiversion/MWMultiVersion.php, setSiteInfoForWiki function) - If the database suffix was not 'wiki', the site code will not be 'wikipedia' so make sure that gets changed too (see ee.wikimedia.org as an example) - If moving a wiki from the wrong ISO code, also move the wgLanguageCode entry in wmf-config/InitialiseSettings.php up - You may have to purge some pages from Varnish (action=purge or SquidUpdate::purge) if they were used for testing the domain name before the mediawiki-config change went through, before you start seeing sane redirect behaviour for the new domain.
- If WikimediaMaintenance's dumpInterwiki script had an alias in reverse of the rename, remove it like 236929, deploy the commit and run updateinterwikicache on the deployment master host - TODO: Populate sites table (cache?) from wikidata (see task T111822) - Change MediaWiki config again to set wgServer/wgCanonicalServer to new domain, update the langlist file if relevant (for SiteMatrix - see task T111876) - If renaming a special site you may wish to deal with wgSiteName, wgMetaNamespace, etc. at this point - Changing namespace names requires an alias for the old name. - Update copy of the site matrix at mediawiki/services/parsoid.git lib/sitematrix.json so that v2+ APIs work on the new domain (v1 used database names) like 236831 - Update MassMessage delivery lists on meta to use the new domain (see task T111895) - Another operations/puppet.git update - Add entry redirecting from old domain to new domain to operations/puppet.git modules/mediawiki/files/apache/sites/redirects/redirects.dat, run refreshDomainRedirects and submit result to gerrit Remove old domain from RESTBase config - TODO: Work out with RESTBase devs what should actually happen. Note that currently /api/rest_v1/ on our domains does not follow Apache redirects. - ContentTranslation: - Remove the unnecessary redirection from SiteMapper - the ContentTranslationDomainCodeMapping global variable in extension.json. (like 236795) - After the migration, test that publishing to the new domain language works. (e.g. task T111818) - After the migration, test that loading a source article from the new domain works. (e.g. task T111850) - After the migration, test that link and category adaptation work (e.g.
task T112285) - Eliminate existing interwiki links that point to the old domain, and make sure that href points to the new domain - For language-projects this is likely to be via dumpInterwiki.php in WikimediaMaintenance (like task T111853) - For special wikis it's likely to be at Interwiki map - Test that the langlinks API query works. (like task T112426) - If needed, rename the relevant messages and message keys in the following message files: - WikimediaMessages/i18n/wikimediaprojectnames - WikimediaMessages/i18n/interwikisearchresults - If needed, update $MassMessageWikiAliases with a map from the old to the new domain so that delivery lists keep working. Database rename draft (2011) Assumptions - The language code of a wiki shall be renamed. The wiki keeps its "class" (e.g. wikipedia, wikiversity, etc). - The languagecodes "old" and "new" will be used to name the old and new wikis. "newwiki" and "oldwiki" are the names of the new and old databases. Approach - Add the new language code and the language name to languages/Names.php - Create messages/MessagesNew.php and, if necessary, classes/LanguageNew.php - Add the language code to /home/wikipedia/common/langlist - Set oldwiki to read-only - Create a mysql dump of the database oldwiki - Create a mysql dump of the extstores. Very old wikis will have text records stored in different extstores! - php maintenance/addwiki.php --wiki=oldwiki new newwiki newwiki new.wikipedia.org - Set newwiki to read-only - Import the database dump of the old wiki into the new database. - Import the extstore dump into the new exstore database. - Change all settings in InitialiseSettings.php and, if necessary, in flaggedrevs.php etc pp - Move the images from /mnt/upload6/project/old to /mnt/upload6/project/new and from /mnt/thumbs/project/old to /mnt/thumbs/project/new - ??? Do we have to do any changes for SUL? 
- Update the interwiki cache: php maintenance/dumpInterwiki.php -o cache/interwiki.cdb - sync-common-all - Add the new wiki in the DNS zone - Set newwiki to read-write - Set up redirects from old.project.org to new.project.org in /home/wikipedia/conf/redirects.conf - If no other wikis with language code old exist, remove old from /home/wikipedia/common/langlist - Remove oldwiki from all *.dblist files - update Lucene search system - clean page caches for all projects since the interwiki is incorrect in Squid caches (redirection can handle it though) - need a script to verify the configuration before and after (compare $_GLOBALS ? ), or use if( $wgDbName == 'oldwiki' || $wgDbName == 'newwiki' ) - synchronize the operation with toolserver. We might want them to do the same operation. - make sure search engines handle the redirection correctly (they should forget about the old pages; maybe use 301 Moved Permanently) - this procedure should be fully tested - write a backup plan so we can easily apply it if anything goes wrong. - optionally clear obsolete memcached keys
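The "need a script to verify the configuration before and after" item above could start as a plain mapping diff. The Python sketch below is only illustrative (the real check would run against PHP's $GLOBALS on the cluster):

```python
def diff_settings(before, after):
    """Return (added, removed, changed) setting names between two config snapshots."""
    added = sorted(set(after) - set(before))
    removed = sorted(set(before) - set(after))
    changed = sorted(k for k in set(before) & set(after) if before[k] != after[k])
    return added, removed, changed
```

Dump the effective settings while the wiki is still oldwiki, dump them again after switching to newwiki, and diff the two: only the deliberately renamed keys (wgDBname, wgServer, and the like) should appear in the changed list.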
https://wikitech.wikimedia.org/wiki/Rename_a_wiki_domain
CC-MAIN-2020-50
en
refinedweb
#include <sys/time.h> int utimes(const char *file, struct timeval *tvp); If tvp is NULL, the access and modification times are set to the current time. A process must be the owner of the file or have write permission for the file to use utimes in this manner. If tvp is not NULL, it is assumed to point to an array of two timeval structures. The access time is set to the value of the first member, and the modification time is set to the value of the second member. Only the owner of the file or the privileged user may use utimes in this manner. In either case, the ``inode-changed'' time of the file is set to the current time. X/Open Portability Guide Issue 4, Version 2 (Spec-1170).
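Python's os.utime exposes the same semantics described above, which makes them easy to demonstrate: calling it with no times corresponds to passing a NULL tvp. This sketch is an analogy for illustration, not part of the manual page:

```python
import os

def set_file_times(path, atime=None, mtime=None):
    """Set a file's access/modification times, mirroring utimes(2).

    With both times None, behave like utimes(file, NULL): use the current time.
    Otherwise fill in any missing value from the file's existing timestamps.
    """
    if atime is None and mtime is None:
        os.utime(path)  # tvp == NULL case: current time
        return
    st = os.stat(path)
    os.utime(path, (atime if atime is not None else st.st_atime,
                    mtime if mtime is not None else st.st_mtime))
```

As on the C side, setting explicit times requires ownership or appropriate privilege, while the NULL form only requires write permission.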
http://osr507doc.xinuos.com/cgi-bin/man?mansearchword=utimes&mansection=S&lang=en
CC-MAIN-2020-50
en
refinedweb
凸度偏差与收益率曲线 - Convexity Bias and the Yield Curve - INTRODUCTION - BASICS OF CONVEXITY [^1] - CONVEXITY, YIELD CURVE AND EXPECTED RETURNS - HISTORICAL EVIDENCE ABOUT CONVEXITY AND BOND RETURNS - APPENDIX A. HOW DOES CONVEXITY VARY ACROSS NONCALLABLE TREASURY BONDS? - APPENDIX B. RELATIONS BETWEEN VARIOUS VOLATILITY MEASURES - LITERATURE GUIDE 著:Antti Ilmanen 译:徐瑞龙 Convexity Bias and the Yield Curve 凸度偏差与收益率曲线 INTRODUCTION 引言 Few fixed-income assets' values are linearly related to interest rate levels; most bonds' price-yield curves exhibit positive or negative convexity. Market participants have long known that positive convexity can enhance a bond portfolio's performance. Therefore, convexity differentials across bonds have a significant effect on the yield curve's shape and on bond returns. This report describes these effects and presents empirical evidence of their importance in the US Treasury market. 只有少数固定收益资产的价值与收益率水平呈线性关系;大多数债券的价格-收益率曲线呈现或正或负的凸度。市场参与者早就知道,正的凸度可以增强债券投资组合的表现。因此,债券之间凸度的差异对收益率曲线的形状和债券回报有显着的影响。本报告描述了这些影响,并提出了其在美国国债市场中重要性的经验证据。 For a given level of expected returns, many investors are willing to accept lower yields for more convex bond positions. Long-term bonds are much more convex than short-term bonds because convexity increases very quickly as a function of duration. Because of the value of convexity, long-term bonds can have lower yields than short-term bonds and yet, offer the same near-term expected returns. Thus, the convexity differentials across bonds tend to make the Treasury yield curve inverted or "humped". We refer to the impact of such convexity differentials on the yield curve shape as the convexity bias. Our historical analysis shows that the bias is small at the front end of the curve, but it can be quite large at the long end. 
对于给定水平的预期回报,许多投资者愿意接受较低的收益率来获得凸度更大的债券头寸。长期债券比短期债券凸度更大,因为凸度随着久期的增加而快速增加。由于凸度的价值,在提供相同近期预期回报的前提下,长期债券的收益率可能比短期债券低。因此,债券之间的凸度差异往往会使国债收益率曲线倒挂或“弯曲”。我们将这种由于凸度差异对收益率曲线形状产生的影响称作为凸度偏差。我们的历史分析表明,曲线短端的凸度偏差很小,但长端可能相当大。 Convexity bias can also be viewed from another perspective —— the value of convexity as a part of the expected bond return. Widely used relative value tools in the Treasury market, such as yield to maturity and rolling yield, assign no value to convexity. In this report, we show how yield-based expected return measures can be adjusted to include the value of convexity. The value of convexity depends crucially on the yield volatility level; the larger the yield shift, the more beneficial positive convexity is. In contrast, the rolling yield is a bond's expected holding-period return given one scenario, an unchanged yield curve. Thus, the rolling yield implicitly assumes zero volatility and ignores the value of convexity, making it a downward-biased measure of near-term expected bond return. To counteract this problem, we can simply add up the two sources of expected return. A bond's convexity-adjusted expected return is equal to the sum of its rolling yield and the value of convexity. Figure 1 shows that, at long durations, the convexity-adjusted expected returns can be substantially different from the yield-based expected returns. (We describe the construction of this figure further in the report.) 也可以从另一个角度来看待凸度偏差——凸度的价值作为债券预期回报的一部分。国债市场中广泛使用的相对价值工具,如到期收益率和滚动收益率,不涉及凸度的价值。在本报告中,我们展示了如何调整预期回报度量来包括凸度价值。凸度价值取决于收益率的波动率水平;收益率偏移越大,正凸度越有价值。相比之下,滚动收益率是指收益率曲线不变的情况下债券的持有期预期回报。因此,滚动收益率隐含地假定零波动率,并忽略凸度价值,使其成为近期债券预期回报的下偏度量。为了纠正这个问题,我们可以简单地加上预期回报的两个来源。债券的凸度调整预期回报等于其滚动收益率与凸度价值之和。图1显示,在长久期端,凸度调整的预期回报可能与基于收益率的预期回报有显著差异。(我们在报告中进一步描述这个特性的构造。) Figure 1 Three Alternative Expected Return Curves, as of 1 Sep 95 In the section "Basics of Convexity", we define convexity, describe how it varies across bonds and discuss the relation between volatility and the value of convexity. 
We then examine convexity's impact on the yield curve shape and on expected returns and explain why we advocate the use of convexity-adjusted expected returns in the evaluation of duration-neutral barbell-bullet trades. Finally, we present historical evidence about convexity's impact on realized long-term bond returns and on the performance of a barbell-bullet trade. 在“凸度基础”一节中,我们定义凸度,描述债券之间凸度的差异,并讨论波动率与凸度价值之间的关系。然后,我们研究凸度对收益率曲线形状和预期回报的影响,并解释为什么我们主张在久期中性的杠铃-子弹交易的评估中使用凸度调整的预期回报。最后,我们给出凸度对实现的长期债券回报和杠铃-子弹交易表现影响的历史证据。 While this report focuses on convexity's impact on the yield curve (and on bond returns), we stress that the convexity bias is not the only determinant of the yield curve shape. Positive bond risk premia tend to offset the negative impact of convexity, making the yield curve slope upward, at least at short durations. Moreover, the market's expectations about future rate changes can make the yield curve take any shape. This report is the fifth part of a series titled Understanding the Yield Curve; earlier reports in this series describe how the market's rate expectations and the required bond risk premia influence the curve shape. 虽然本报告着重于凸度对收益率曲线(以及债券回报)的影响,但我们强调凸度偏差不是收益率曲线形状的唯一决定因素。正的债券风险溢价倾向于抵消凸度的负面影响,使收益率曲线向上倾斜(至少是在短久期端)。此外,市场对未来收益率变化的预期也可以使收益率曲线发生变化。本报告是《理解收益率曲线》系列的第五部分,本系列的早期报告描述了市场的收益率预期和债券风险溢价如何影响曲线形状。 BASICS OF CONVEXITY [1] 凸度基础 What Is Convexity and How Does It Vary Across Treasury Bonds? 什么是凸度,以及如何因债券不同? Convexity refers to the curvature (nonlinearity) in a bond's price-yield curve. All noncallable bonds exhibit varying degrees of positive convexity. When a price-yield curve is positively convex, a bond's price rises more for a given yield decline than it falls for a similar yield increase. It is often stated that positive convexity can only improve a bond portfolio's performance. 
Figure 2, which shows the price-yield curve of a 30-year zero, illustrates in what sense this statement is true: A linear approximation of a positively convex curve always lies below the curve. That is, a duration-based approximation of a bond's price change for a given yield change will always understate the bond price. The error is small for small yield changes but large for large yield changes. We can approximate the true price-yield curve much better by adding a quadratic (convexity) term to the linear approximation. Thus, a bond's percentage price change (\(100 * \Delta P / P\)) for a given yield change is: [2]

\(100 * \Delta P / P \approx -duration * \Delta y + 0.5 * convexity * (\Delta y)^2\),  (1)

Figure 2 Price-Yield Curve of a 30-Year Zero

where duration = \(-(100/P) * (dP/dy)\), convexity = \((100/P) * (d^2P/dy^2)\), \(\Delta y\) is the yield change, and yields are expressed in percentage terms.

In general, the most important determinants of bond convexity are the option features attached to bonds. Bonds with embedded short options often exhibit negative convexity. The negative convexity arises because the borrower's call or prepayment option effectively caps the bond's price appreciation potential when yields decline. However, this report does not analyze bonds with option features. For noncallable bonds, convexity depends on duration and on the dispersion of cash flows (see Appendix A for details).
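As a quick numerical check on Equation (1), the sketch below computes a 30-year zero's duration and convexity from finite differences and compares the two-term approximation with an exact repricing for a 200-basis-point yield drop. The semiannual-compounding price convention and the `zero_price` helper are our own illustrative assumptions, not part of the report:

```python
def zero_price(y, years, face=100.0):
    """Price of a zero at an annual yield of y percent, semiannual compounding."""
    return face / (1 + y / 200.0) ** (2 * years)

# 30-year zero at an 8% yield
P = zero_price(8.0, 30)

# numerical derivatives with respect to the yield in percentage points
h = 0.01
dP = (zero_price(8.0 + h, 30) - zero_price(8.0 - h, 30)) / (2 * h)
d2P = (zero_price(8.0 + h, 30) - 2 * P + zero_price(8.0 - h, 30)) / h**2

duration = -(100.0 / P) * dP     # ~28.8 for this zero
convexity = (100.0 / P) * d2P    # ~8.5

# Equation (1): approximate percentage price change for a 200bp yield drop
dy = -2.0
approx = -duration * dy + 0.5 * convexity * dy**2
exact = 100.0 * (zero_price(6.0, 30) - P) / P   # the approximation understates this
```

The computed convexity also lands close to the zeros' rule of thumb, duration squared divided by 100; the remaining gap between `approx` and `exact` is the higher-order error that even the quadratic term leaves behind.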
Figure 3 shows the convexity of zero-coupon bonds as a function of (modified) duration. Convexity not only increases with duration, it increases at an accelerating rate. For zeros, a good rule of thumb is that convexity equals the square of duration (divided by 100).[3] Convexity also increases with the dispersion of cash flows. A barbell portfolio of a short-term zero and a long-term zero has more dispersed cash flows than a duration-matched intermediate-term bullet zero. Of all bonds with the same duration, a zero has the smallest convexity because it has no cash flow dispersion. As discussed in Appendix A, a coupon bond's or a portfolio's convexity can be viewed as the sum of a duration-matched zero's convexity and the additional convexity caused by cash flow dispersion.

Figure 3 Convexity of Zeros as a Function of Duration

Volatility and the Value of Convexity

Convexity is valuable because of a basic characteristic of positively convex price-yield curves that we alluded to earlier: A given yield decline raises the bond price more than a yield increase of equal magnitude reduces it. Even if investors know nothing about the direction of rates, they can expect gains to be larger than losses because of the nonlinearity of the price-yield curve. Figure 2 illustrated that convexity has little impact on the bond price if the yield shift is small, but a big impact if the yield shift is large. The more convex the bond and the larger the absolute magnitude of the yield shift, the greater the realized value of convexity is.
We do not know in advance how large the realized yield shift will be, but we can measure its expected magnitude with a volatility forecast.[4] If we expect high near-term yield volatility, we expect a high value of convexity.

The value of convexity is a nebulous concept; it may be hard for investors to see how higher volatility can increase expected returns. We try to make the concept more concrete and intuitive with the following example. Figure 4 compares the expected value of a 30-year zero in a world of certainty and in a world of uncertainty. In a world of certainty, investors know that a bond's yield will remain unchanged at 8%; thus, there is no volatility and convexity has no value. In the second case, we introduce uncertainty in the simplest possible way: The bond's yield either moves to 10% or to 6% immediately, with equal probability. That is, investors do not know in which direction the rates are moving (on average, they expect no change), but they do know that the rates will shift up or down by 200 basis points. Note that the two possible final bond prices (y = 10%, P = $5.40 and y = 6%, P = $17.00) are higher than those implied by a linear approximation. The expected bond price is an average of the two possible final prices: \(E(P) = 0.5 * \$5.40 + 0.5 * \$17.00 = \$11.20\). This expected price is higher than the price given no yield change (y = 8%, P = $9.50). The $1.70 price difference reflects the expected value of convexity; the bond's expected price is $1.70 higher if volatility is 200 basis points than if volatility is 0 basis points.
Thus, higher volatility enhances the (expected) performance of positively convex positions.[5]

Figure 4 Value of Convexity in the Price-Yield Curve of a 30-Year Zero

The impact of volatility is very clear in the spread behavior between positively and negatively convex bonds (noncallable government bonds versus callable bonds or mortgage-backed securities). It is more subtle in the spread behavior within the government bond market where all bonds exhibit positive convexity. When volatility is high, the yield curve tends to be more humped and is more likely to be inverted at the long end, widening the spreads between duration-matched barbells and bullets and between duration-matched coupon bonds and zeros.

CONVEXITY, YIELD CURVE AND EXPECTED RETURNS

Convexity Bias: The Impact of Convexity on the Curve Shape

We have demonstrated that positive convexity is a valuable property for a fixed-income asset and that different-maturity bonds exhibit large convexity differences. Now we will show that these convexity differences give rise to offsetting yield differences across maturities. Investors tend to demand less yield for more convex positions because they have the prospect of enhancing their returns as a result of convexity. In particular, Figure 3 showed that long-term bonds exhibit very high convexity.
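The Figure 4 arithmetic is easy to reproduce. In this sketch the semiannual-compounding convention is our assumption; it gives prices slightly off the report's rounded $5.40/$17.00/$9.50, so the value of convexity comes out near $1.66 rather than exactly $1.70:

```python
def zero_price(y, years, face=100.0):
    """Price of a zero at an annual yield of y percent, semiannual compounding."""
    return face / (1 + y / 200.0) ** (2 * years)

p_flat = zero_price(8.0, 30)     # ~ $9.51, the certainty case
p_up = zero_price(10.0, 30)      # ~ $5.35, yield rises 200bp
p_down = zero_price(6.0, 30)     # ~ $16.97, yield falls 200bp

# equal-probability up/down scenarios
expected_price = 0.5 * p_up + 0.5 * p_down    # ~ $11.16
value_of_convexity = expected_price - p_flat  # ~ $1.66 (report: $1.70, rounded)
```

The gain in the down-scenario exceeds the loss in the up-scenario, which is exactly why the expected price sits above the unchanged-yield price.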
Because of their high convexity, these bonds can offer lower yields than short-term bonds and still offer the same near-term expected returns.

We isolate the impact of convexity on the yield curve shape, or the convexity bias[6], by presenting a hypothetical situation where the other influences on the curve shape are neutral. Specifically, we assume that all bonds have the same expected return (8%) and that the market expects the short-term rates to remain at the current (8%) level, and we examine the behavior of the spot curve and the curve of one-year forward rates. With no bond risk premia and no expected rate changes, one might expect these curves to be horizontal at 8%. Instead, Figure 5 shows that they slope down at an increasing pace because lower yields are needed to offset the convexity advantage of long-duration bonds (and thus to equate the near-term expected returns across bonds). Note the symmetry between the curve shapes in Figures 3 and 5.

Figure 5 Pure Impact of Convexity on the Yield Curve Shape

Where did the numbers in Figure 5 come from? Unlike the real world, where the spot rates are the easiest to observe, in this example, we take the expected returns as given and work our way back to forward rates and then to spot rates.
Given our assumption that the market has no directional views about the yield curve, each zero earns the near-term expected return from the rolling yield[7] and from convexity:[8]

near-term expected return \(\approx\) rolling yield + value of convexity,  (2)

where value of convexity \(\approx 0.5 * convexity * (Vol(\Delta y))^2\).

Using our assumption that all bonds have a convexity-adjusted expected return of 8% and using some volatility assumption (which determines the value of convexity), we can back out the rolling yields for various-maturity zeros from Equation (2). Our volatility assumption of 100 basis points means roughly that we expect all rates to move 100 basis points (up or down) from their current level over the next year. For example, if the convexity of a long zero is 2.25 (see footnote 3), the value of convexity is approximately \(0.5 * 2.25 * 1^2 = 1.125\%\). The zero's rolling yield is then 6.875%, but its annualized near-term expected return is 8%, by assumption. For coupon bonds, which have smaller convexities, the value of convexity is much smaller. The final step in constructing Figure 5 is to compute the spot curve from the curve of one-year forward rates (the rolling yield curve).

Convexity bias is simply the negative of the value of convexity, or \(-0.5 * convexity * (Vol(\Delta y))^2\). Figure 5 shows that the convexity bias, by itself, tends to make the yield curve inverted, especially at long durations.
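Under these assumptions, backing rolling yields out of Equation (2) is one line of arithmetic. A minimal sketch, where the 8% expected return and 100-basis-point volatility are the report's hypothetical inputs:

```python
EXPECTED_RETURN = 8.0   # convexity-adjusted expected return in percent, same for all zeros
VOL = 1.0               # Vol(dy): basis-point yield volatility, in percentage points

def value_of_convexity(convexity, vol=VOL):
    # value of convexity ~ 0.5 * convexity * Vol(dy)^2
    return 0.5 * convexity * vol**2

def rolling_yield(convexity):
    # Equation (2) rearranged: rolling yield = expected return - value of convexity
    return EXPECTED_RETURN - value_of_convexity(convexity)

# long zero with convexity 2.25 (footnote 3 rule of thumb)
ry = rolling_yield(2.25)   # 8% - 1.125% = 6.875%
```

A bond with no convexity keeps the full 8% as rolling yield, which is why the hypothetical forward curve in Figure 5 starts at 8% and falls away at an increasing pace.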
However, actual yield curves rarely invert as they do in this hypothetical example, in which we assumed, in particular, that all bonds across the curve have the same near-term expected return and the same basis-point yield volatility. We now relax each of these two assumptions, one at a time. First, convexity is not the only influence on the curve shape. The typical historical yield curve shape is upward sloping, probably reflecting positive bond risk premia (the fact that investors require higher expected returns for long-term bonds than for short-term bonds). At the front end of the curve, the convexity bias is so small that it does not offset the impact of positive bond risk premia. At the long end, the convexity bias can be so large that the yield curve becomes inverted in spite of positive risk premia. Figure 6 shows that in the presence of positive risk premia, convexity bias tends to make the yield curve humped rather than inverted. In this figure, we use historical average returns of various maturity subsectors to proxy for expected returns.

Figure 6 Impact of Convexity with Positive Bond Risk Premia

As explained earlier, the value of convexity increases with yield volatility. Thus far we have assumed that yield volatility is equally high across the curve. Figure 7 shows that historically, the term structure of volatility has often been inverted — long-term rates have been less volatile than short-term rates. Therefore, the value of convexity does not increase quite as the square of duration, even though convexity itself does.
However, the value of convexity does increase quite quickly with duration even when the volatility term structure is taken into account; its inversion only dampens the rate of increase (see Figure 8).

Figure 7 Historical Term Structure of (Basis-Point) Yield Volatility

Figure 8 Value of Convexity Given Various Volatility Structures

The levels and shapes of the volatility term structures are very different in Figure 7, depending on the sample period. In the 1980s — and especially at the beginning of the decade — yield volatilities were very high and the term structure of volatility was inverted. In the 1990s, volatilities have been lower and the term structure of volatility has been flat or humped. It is difficult to choose the appropriate sample period for computing the yield volatility, and Figure 8 shows that this choice will have a significant impact on the estimated value of convexity. Our view is that the relevant choice is between the 1983-95 and the 1990-95 sample periods because we do not expect to see again the volatility levels experienced in 1979-82 — at least not without clear warning signs.
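The dampening effect is easy to illustrate with the zeros' convexity rule of thumb and a stylized volatility schedule. The specific numbers here (a 1% flat volatility, and a decline of one basis point per year of duration in the inverted case) are our own hypothetical inputs, not the report's estimates:

```python
def zero_convexity(duration):
    # rule of thumb for zeros: convexity ~ duration^2 / 100
    return duration**2 / 100.0

def value_of_convexity(duration, vol):
    # vol is the basis-point yield volatility in percentage points
    return 0.5 * zero_convexity(duration) * vol**2

durations = [5, 10, 20, 30]
flat_vol = [value_of_convexity(d, 1.0) for d in durations]
inverted_vol = [value_of_convexity(d, 1.0 - 0.01 * d) for d in durations]
```

Even under the inverted schedule, the value of convexity keeps rising with duration; the inversion only slows the growth relative to the flat-volatility case, as Figure 8 shows.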
This period coincided with a different monetary policy regime in which the Federal Reserve targeted the money supply and tolerated much higher yield volatility than after October 1982.[9]

Instead of sample-specific historical volatilities, we could use implied volatilities from current option prices (based on the cap-curve, options on various futures contracts, OTC options on individual on-the-run bonds) to compute the (expected) value of convexity. The main reason that we have not done this is that such implied volatilities are not available for all maturities. In addition, it is not clear from empirical evidence that implied volatilities predict future yield volatilities any better than historical volatilities do.

In Appendix B, we describe the various volatility measures used in this report and discuss the relations between them. In particular, we emphasize that the option prices are typically quoted in relative yield volatilities (\(Vol(\Delta y / y)\)) rather than in the basis-point volatilities (\(Vol(\Delta y)\)) that we use. For example, a 13% implied volatility quote has to be multiplied by the yield level, say 7%, to get the basis-point volatility (91 basis points = 0.91% = 13% * 7%).

The Impact of Convexity on Expected Bond Returns

Figure 8 shows that positive convexity can be quite valuable, especially in a high-volatility environment.
However, yield-based measures of expected bond return assign no value to convexity. For example, the rolling yield is a bond's holding-period return given one scenario (an unchanged yield curve), essentially assuming no rate uncertainty. Because volatility can only be positive, the rolling yield is a downward-biased measure of expected return for bonds with positive convexity.[10] Fortunately, it is possible to add the impact of rate uncertainty (the expected value of convexity) to rolling yields. Equation (2) showed that if the base case expectation is an unchanged yield curve, a bond's near-term expected return is simply the sum of the rolling yield and the value of convexity.[11] This relation holds approximately for coupon bonds as well as for zeros.

In Figure 9, we calculate three expected return measures (yield, rolling yield, convexity-adjusted expected return) and the value of convexity on September 1, 1995 for six Treasury par bonds and four long-duration zeros (estimated from the Salomon Brothers Treasury Model curve which represents off-the-run bonds). In addition, we describe two barbell positions that can be compared with duration-matched bullets. Figure 1 showed graphically the three alternative expected return curves as a function of duration.

Figure 9 Expected One-Year Returns on Various Bonds as of 1 Sep 95

We use maturity-specific historical volatilities from the 1990-95 period to proxy for expected volatility, and we use a one-year horizon.
These choices give one illustration of the ideas developed in this report; we stress that it is possible to use other volatility measures or other horizons. In particular, Figure 7 shows that the volatility estimates would be much higher if we extended our sample period to the 1980s. (The par bonds' yield volatilities are similar to those of the on-the-run bonds in Figure 7.) For a given yield curve, these higher volatility estimates could more than double the estimated value of convexity and, thus, increase the convexity-adjusted expected returns. Using a one-year horizon makes the notation easier because the value of convexity is expressed in annualized terms, as are yields and volatilities. If we used a three-month horizon, all three expected return measures and the value of convexity would be roughly one fourth of the numbers in Figure 9. For example, if a 30-year par bond's convexity is 2.57 and the annual volatility is 82 basis points, the quarterly volatility is approximately 41 basis points (\(82 / \sqrt{4}\)), and the quarterly value of convexity is \(0.5 * 2.57 * 0.41^2 = 0.22\%\) (\(\approx 0.88\% / 4\)), or 22 basis points.
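The horizon scaling works because basis-point volatility grows with the square root of the horizon while the value of convexity grows with its variance. Reproducing the 30-year par bond example with the rounded inputs quoted in the text:

```python
import math

convexity = 2.57
annual_vol = 0.82   # 82 basis points, in percentage points

quarterly_vol = annual_vol / math.sqrt(4)             # ~0.41 percentage points
annual_value = 0.5 * convexity * annual_vol**2        # ~0.86%
quarterly_value = 0.5 * convexity * quarterly_vol**2  # ~0.22%, i.e. annual_value / 4
```

Halving the volatility quarters the variance, so the quarterly value of convexity is exactly one fourth of the annual value.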
More generally, the value of convexity can partly explain the rolling yield curve's typical concave (humped) shape, but even the convexity-adjusted expected return curve inverts after 25 years. The longest-maturity zeros appear to have genuinely low expected returns, perhaps reflecting their liquidity advantage and financing advantage.

One advantage of this analysis is that it gives an improved view of the overall reward-risk trade-off in the government bond market. Until the 1970s, fixed-income investors evaluated this reward-risk trade-off by plotting bond yields on their maturities. Eventually investors learned that the rolling yield measures near-term expected return better than yield and that duration measures risk better than maturity.[12] In the mid-1980s, investors became familiar with the concept of convexity (see Literature Guide), although few have incorporated it formally into their expected return measures. However, convexity-adjusted expected returns are even better expected return measures than rolling yields — and the adjustment is reasonably simple. To move all the way to mean-variance analysis, as advocated by modern portfolio theory, we should adjust bond durations by their yield volatilities; then, Figure 1 would plot bonds' expected returns on their return volatilities. Of course, convexity-adjusted expected returns are not perfect; for example, if investors can predict yield curve reshapings consistently, they can construct even better expected return measures.
In addition, our analysis helps investors to interpret varying yield curve shapes, and more directly, it gives them tools to evaluate relative value trades between duration-matched barbells and bullets and between duration-matched coupon bonds and zeros. This is the topic of the next subsection.

Applications to Barbell-Bullet Analysis

A barbell-bullet trade involves the sale of an intermediate bullet bond and the purchase of a barbell portfolio of a short-term bond and a long-term bond. Often the trade is weighted so that it is cash-neutral and duration-neutral; that is, one unit of the intermediate bond is sold, a duration-weighted amount of the long bond is bought and the remaining proceeds from the sale are put into "cash" (a short-term bond that matures at the end of horizon). For simplicity, we will only study such barbells in this report. In Appendix A, we explain that a barbell portfolio has a convexity advantage over a duration-matched bullet because the barbell's duration varies more (inversely) with the yield level. Figure 3 provides another illustration of the convexity difference between barbells and bullets. If we draw a straight line between any two points on the zeros' convexity-duration curve, each point on this line corresponds to a barbell portfolio (with varying weights of the long-term and the short-term zero). The convexity of this barbell is the market-value-weighted average of the component bonds' convexities.
Because the connecting straight line always lies above the zeros' convexity-duration curve, the barbell's convexity is always higher than that of a duration-matched bullet. Furthermore, the maximum convexity pick-up for any duration occurs when we connect the shortest and longest zeros.

In a similar way, we can connect any two points in Figure 5 and find that the rolling yield of any barbell is below the rolling yield of a duration-matched bullet. More generally, the rolling yield curve (as well as the yield curve) almost always has a concave shape as a function of duration; that is, the curve increases at a decreasing rate or decreases at an increasing rate. Therefore, a rolling yield disadvantage tends to offset the convexity advantage of a barbell-bullet trade. If an investor wants to evaluate the relative cheapness of a barbell-bullet trade, he needs to compare two numbers, the rolling yield give-up and the convexity pick-up. The advantage of the convexity-adjusted expected return is that it provides a single number to measure the attractiveness of these trades. For example, the ones-30s barbell in Figure 9 has a 71-basis-point rolling yield give-up relative to the ten-year bullet (= 6.23% - 6.94%), but how does this give-up compare with the convexity pick-up (1.38 versus 0.67)? The numbers in the last column show that the barbell still has a 51-basis-point give-up (= 6.70% - 7.21%) when measured in terms of convexity-adjusted expected returns and given our volatility forecasts.
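The chord argument can be verified with the zeros' convexity rule of thumb. The 1-, 10- and 30-year durations below are illustrative choices of our own; any duration-matched pairing gives a barbell convexity above the bullet's:

```python
def zero_convexity(duration):
    # rule of thumb for zeros: convexity ~ duration^2 / 100
    return duration**2 / 100.0

d_short, d_long, d_bullet = 1.0, 30.0, 10.0

# weights that match the barbell's duration to the bullet's duration
w_long = (d_bullet - d_short) / (d_long - d_short)
w_short = 1.0 - w_long

# the barbell's convexity is the weighted average of its components' convexities,
# i.e. a point on the chord above the convexity-duration curve
barbell_convexity = w_long * zero_convexity(d_long) + w_short * zero_convexity(d_short)
bullet_convexity = zero_convexity(d_bullet)
```

With these inputs the barbell's convexity is 2.8 against the bullet's 1.0, a direct consequence of the convexity-duration curve bending upward.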
Incidentally, the shorter barbell in Figure 9 even picks up rolling yield over the duration-matched five-year bullet; this exceptional situation reflects the convex shape in parts of the rolling yield curve in Figure 1.

The performance of a duration-neutral barbell-bullet trade depends on curve reshaping, on parallel curve shifts and on the initial yields: (1) The trade profits from curve flattening and loses from curve steepening (between the two longer bonds); (2) the trade is constructed to be neutral to small parallel curve shifts, but the barbell profits from large shifts in either direction because of its convexity advantage; and (3) the initial rolling yield give-up is greater the more curved (concave) the yield curve is. Such a shape may be caused by the market's expectations of curve flattening or of high volatility, either of which would generate capital gains for the trade in the future.

Typical barbell-bullet trades are more curve flattening trades than convexity trades. The following break-even analysis illustrates this point. Consider the long barbell-bullet trade in Figure 9. It consists of selling a ten-year par bond (rolling yield 6.94%) and buying a barbell of the 30-year par bond (rolling yield 6.67%) and the one-year bond (rolling yield 5.73%), with a one-year investment horizon.
Thus, at the end of horizon, the components will be a nine-year bond, a 29-year bond and cash. The constraints that the trade is duration-neutral and cash-neutral require weights 0.53 and 0.47 for the long bond and the short bond. Given the duration-neutral weighting of the barbell, the rolling yield give-up is 71 basis points (= 0.53 * 6.67% + 0.47 * 5.73% - 6.94%). We isolate the flattening and convexity effects in the trade by asking two questions:

- How much would the yield spread between tens and 30s (or more exactly, between nines and 29s at the end of horizon) have to narrow to offset this give-up, if no parallel shifts occur?
- How large must the parallel shifts be to make the convexity advantage offset this give-up, if no curve reshaping occurs?

A little math shows that the necessary break-even changes are an 11-basis-point spread narrowing (curve flattening) and a 138-basis-point parallel shift. Historical experience suggests that the former event is more plausible than the latter: Over the past 15 years, the tens-30s spread narrowed by at least 11 basis points in a year 30% of the time, while the ten-year yield level shifted by more than 138 basis points in a year only 17% of the time. Thus, it is more likely that a given rolling yield disadvantage is offset via curve flattening than via the barbell's convexity advantage. However, the relative roles of curve-reshaping and convexity vary across different barbell-bullet trades.
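The give-up and the convexity break-even can be reproduced from the Figure 9 inputs. Because we use the rounded weights and convexities quoted in the text, our break-even shift comes out near 142 basis points rather than the report's 138:

```python
import math

ry_bullet, ry_long, ry_short = 6.94, 6.67, 5.73   # rolling yields, percent
w_long, w_short = 0.53, 0.47                      # duration- and cash-neutral weights

# barbell rolling yield minus bullet rolling yield: ~ -0.71%
give_up = w_long * ry_long + w_short * ry_short - ry_bullet

# break-even parallel shift: 0.5 * (barbell cvx - bullet cvx) * shift^2 = |give_up|
cvx_pickup = 1.38 - 0.67                          # convexities from Figure 9
breakeven_shift = math.sqrt(2 * abs(give_up) / cvx_pickup)   # percentage points
```

The 11-basis-point flattening break-even, by contrast, needs the end-of-horizon durations of the nine- and 29-year bonds, which Figure 9 does not quote, so we leave it out of the sketch.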
The reshaping effects are clearly more important at shorter durations (between most coupon bonds), while convexity can be more important at longer durations (between very long zeros). It follows that the time-variation in the rolling yield spread between barbell and bullet coupon bonds — or in the yield curve curvature below the ten-year duration — depends more on the market's changing expectations about future curve flattening/steepening than on its changing volatility expectations.

The convexity aspect of the previous example illustrates the similarity between a barbell-bullet trade and a purchase of a long option straddle (a purchase of a call and a put with the same strike price and exercise date). Figure 10 shows the almost U-shaped pattern that is familiar from option analysis. The rolling-yield disadvantage corresponds to the long call and put positions' initial cost (premium), which large market movements in either direction would offset. The trade would only be profitable if the yield level increased or declined by at least 138 basis points, assuming parallel yield shifts. If the yield curve does not move at all from the initial level, the maximum loss (71 basis points) occurs.
Of course, Figure 10 ignores the substantial curve-reshaping risk in this trade.[13]

Figure 10 The Payoff Profile of a Barbell-Bullet Trade, Assuming Parallel Yield Shifts

Another way to measure the cheapness of the barbell-bullet trade is to compute its implied yield volatility and compare it with the implied volatility in option markets. We can back out an implied volatility number for each barbell-bullet trade based on the observable rolling yield spread and convexity difference, if we assume that the duration-matched barbell and bullet earn the same expected returns and that the rolling yield spread reflects only the value of convexity — and no curve-flattening expectations.[14] In that case, high curvature (concavity) in the yield curve and high bullet-barbell rolling yield spreads indicate high implied volatility. In contrast, if the yield curve is a convex function of duration, barbells pick up yield and convexity and the implied volatility is negative — typically an indication of the market's strong expectations about near-term curve steepening.

HISTORICAL EVIDENCE ABOUT CONVEXITY AND BOND RETURNS

The intuition behind convexity-adjusted expected returns is that if investors care about expected return rather than yield, they will rationally accept lower yields and rolling yields from more convex bonds. In this sense, convexity is priced: It influences bond yields.
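Under those assumptions, the implied volatility solves 0.5 * convexity pick-up * vol² = rolling yield spread. A sketch using the long trade's rounded Figure 9 numbers as illustrative inputs:

```python
import math

def implied_bp_vol(ry_spread, convexity_pickup):
    """Basis-point yield volatility implied by a barbell-bullet rolling yield spread.

    Solves 0.5 * convexity_pickup * vol**2 = ry_spread. A negative spread (the
    barbell picks up rolling yield) has no real solution, so we return None.
    """
    if ry_spread < 0:
        return None
    return math.sqrt(2 * ry_spread / convexity_pickup)

# ones-30s barbell vs ten-year bullet: 71bp give-up, convexity pick-up 1.38 - 0.67
vol = implied_bp_vol(0.71, 1.38 - 0.67)   # ~1.41 percentage points, i.e. ~141bp
```

Comparing this ~141-basis-point implied figure with the much lower historical volatilities of Figure 7 conveys the same message as the break-even analysis: the trade's pricing embeds more than just volatility expectations.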
However, a more subtle question is whether convexity also influences expected returns that are not directly observable. It is possible that the rolling yield disadvantage exactly offsets the convexity advantage so that two bond positions with the same duration but different convexities have the same near-term expected return. It is also possible that convexity is such a desirable characteristic, because of the insurance-type payoff pattern, that the market (investors in the aggregate) accepts lower expected returns for more convex bonds. Finally, it is possible that current-income seekers dominate the marketplace, leading to a price premium (lower expected returns) for higher-yielding, less convex bonds. The jury is still out on this question. The evidence from historical bond returns that we present below suggests that more convex positions earn somewhat lower returns in the long run than less convex positions.

In this final section, we examine the historical performance of a long-term bond position and of a wide barbell-bullet position between January 1980 and December 1994, focusing on the impact of convexity on realized returns. The first strategy involves always investing in the on-the-run 30-year Treasury bond; this strategy is long convexity by holding a long-duration bond. The second strategy involves rolling over a fives-thirties flattening trade each month. Specifically, we sell short the on-the-run five-year Treasury bond each month and buy a barbell of the 30-year bond and one-month bill.
The trade is duration-matched to horizon; that is, the weight of the 30-year bond in the barbell is such that the barbell and the bullet have the same expected duration at the end of the month. A little algebra shows that the weight is the ratio of the five-year bond's duration to the 30-year bond's duration (at horizon). Although the trade is cash-neutral and duration-neutral, it is long convexity because a barbell is more convex than a bullet.

We first show some summary statistics of various bond positions in Figure 11 but focus on the last two columns. The bullet has roughly a 100-basis-point higher average return and average yield than the duration-matched barbell.[15] Thus, the barbell's convexity pick-up (0.69 versus 0.19) and the impact of yield curve reshaping do not offset its initial yield give-up. However, the barbell does have clearly lower return volatility than the bullet, reflecting the lower yield volatility of the 30-year bond than the five-year bond.

Figure 11 Description of Various On-the-Run Bond Positions, 1980-94

We can decompose any bond's holding-period return into four parts: the yield impact; the duration impact; the convexity impact; and a residual term. Recall from Equation (1) that duration and convexity effects can approximate a bond's instantaneous return well. Over time, a bond also earns some income from coupons or from price accrual; we estimate this income from a bond's yield.
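The duration-matching algebra above can be sketched in a few lines. The horizon durations below (4.0 for the five-year bullet, 10.0 for the 30-year bond, zero for the bill) are hypothetical round numbers, not the text's data:

```python
def barbell_weight(bullet_duration, long_duration, short_duration=0.0):
    """Weight of the long bond in a duration-matched barbell.

    Solves w * long_duration + (1 - w) * short_duration = bullet_duration.
    With a one-month bill whose duration at horizon is roughly zero, this
    reduces to the ratio of the bullet's duration to the long bond's duration.
    """
    return (bullet_duration - short_duration) / (long_duration - short_duration)

# Hypothetical horizon durations: five-year bullet 4.0, 30-year bond 10.0.
w = barbell_weight(4.0, 10.0)
print(w)  # 0.4 -> 40% in the 30-year bond, 60% in the bill
```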
Thus, we approximate a bond's holding-period return by Equation (3).[16] The difference between the actual return and its three-term approximation is the residual term; if the approximation is good, the residual should be relatively small. We split the 30-year bond's monthly returns into four components and describe the average behavior and volatility of each component in the top panel of Figure 12.[17]

The return volatility numbers in the top panel of Figure 12 show that in any given month, the duration impact largely drives the long bond's return; it is the source behind 99% of the monthly return fluctuations. However, yield increases and decreases tend to offset each other over time, having little impact on long-term average returns.[18] Over our 15-year sample period, the long bond's average return reflects more the average yield (91%) and less the convexity (14%) and duration (-5%) effects. The residual term has a small mean and volatility, indicating that the approximation in Equation (3) works well. Subperiod analysis shows that over three-year horizons, the duration effect can still have a significant positive or negative impact; the 1983-85 and 1989-91 subperiods were clearly bull markets and the three other subperiods were bear markets. In contrast, the yield and convexity effects are always positive (by construction). The convexity impact was largest in the early 1980s when yield volatility was very high. During the whole sample, the annualized convexity impact was 148 basis points. In the 1990s, it was about half of that.
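The four-way split can be sketched as follows, assuming Equation (3) has the form \(h \approx y*\Delta t - Dur*\Delta y + 0.5*Cx*(\Delta y)^2\) plus a residual (consistent with the description above, though the exact form is not reproduced in this excerpt); the bond inputs are hypothetical:

```python
def decompose_return(actual_return, yield_pct, duration, convexity, dy, dt=1/12):
    """Split a holding-period return into yield, duration, convexity and
    residual components, following a three-term approximation
        h ~= y*dt - Dur*dy + 0.5*Cx*dy**2
    with yields and returns in percent, dy the yield change in percentage
    points, and dt the fraction of a year held.
    """
    yield_impact = yield_pct * dt
    duration_impact = -duration * dy
    convexity_impact = 0.5 * convexity * dy ** 2
    residual = actual_return - (yield_impact + duration_impact + convexity_impact)
    return {"yield": yield_impact, "duration": duration_impact,
            "convexity": convexity_impact, "residual": residual}

# Hypothetical month: a long bond yielding 8%, duration 11, convexity 2.2,
# yields fall 20bp (dy = -0.20) and the bond returns 2.95%.
parts = decompose_return(2.95, 8.0, 11.0, 2.2, -0.20)
# residual is small when the approximation works well
print({k: round(v, 3) for k, v in parts.items()})
```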
Similarly, we can split the five-year bullet's and the duration-matched barbell's monthly returns into four components based on Equation (3). The lower panel of Figure 12 describes the average behavior and volatility of their difference, which can be viewed as a duration-matched and cash-neutral barbell-bullet trade. Again, the volatility numbers show that most of the monthly fluctuations (99%) come from the duration impact. The trade is duration-neutral; thus, the duration impact refers to the capital gains or losses caused by curve reshaping. That is, although \(Dur_{Barbell} = Dur_{Bullet}\), the duration impacts of the barbell and the bullet differ unless the yield changes are parallel (\(-Dur_{Barbell} * \Delta y_{Barbell} \neq -Dur_{Bullet} * \Delta y_{Bullet}\)). Over the whole sample, these effects tend to cancel out, and the average return depends largely (90%) on initial yields. The barbell has a 105-basis-point lower average annual return than the bullet, mainly because of its yield disadvantage (-95 basis points) and partly due to losses caused by the curve steepening (-36 basis points); these are only partly offset by the barbell's convexity advantage (30 basis points). In four out of five subperiods, the bullet outperformed the barbell, suggesting that a barbell's convexity advantage is rarely sufficient to offset the negative carry over a multiyear period.[19] In addition, the impact of curve-reshaping is larger, in absolute magnitude, than the convexity impact in each subperiod. Again, the residual has a small mean and volatility; thus, the approximation in Equation (3) appears to work well.
Figure 12 Decomposing Returns to Yield, Duration and Convexity Effects

Figure 12 describes the impact of convexity, and two other effects, on realized bond returns. While characterization of past returns is sometimes useful, most investors are more interested in the future impact of convexity. If volatility and convexity were constant, we could use the historical average convexity impact to proxy for the expected value of convexity. However, volatility and convexity vary over time. Figure 13 shows the behavior of convexity, the rolling 20-day historical volatility and the (expected) value of convexity of the 30-year bond between 1980 and 1994. (Recent historical volatility is often used as an estimate for near-term future volatility.) Convexity has increased as yields declined, but the volatility level has declined even more, except for spikes after the 1987 stock market crash and after the Fed's tightening in spring 1994. In the early 1980s, convexity was worth several hundred basis points for the 30-year bond, while more recently, the value of convexity has rarely exceeded 100 basis points. Such variation implies that any estimates of the value of convexity are only as good as the underlying estimates of future volatility. Therefore, when computing convexity-adjusted expected returns, investors should use the information in the current yield curve combined with their best forecasts of the near-term yield volatility.
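As a rough sketch of how such forecasts combine, the expected value of convexity is \(0.5 * Cx * Vol(\Delta y)^2\); the convexity and volatility inputs below are hypothetical:

```python
def value_of_convexity(convexity, bp_vol_pct):
    """Expected return pick-up from convexity, in percent per period:
    0.5 * Cx * E[(dy)^2] ~= 0.5 * Cx * Vol(dy)**2,
    with yields in percent (100bp = 1.0)."""
    return 0.5 * convexity * bp_vol_pct ** 2

# Hypothetical long zero: convexity 4.0, forecast annual yield volatility 100bp.
print(value_of_convexity(4.0, 1.0))  # 2.0 -> about 200 basis points a year
```

Halving the volatility forecast cuts the value of convexity by a factor of four, which is why the estimate is so sensitive to the volatility input.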
Figure 13 Convexity and Volatility of the 30-Year Bond Over Time

APPENDIX A. HOW DOES CONVEXITY VARY ACROSS NONCALLABLE TREASURY BONDS?

For bonds with known cash flows, convexity depends on the bond's duration and on the dispersion of the bond's cash flows. The longer the duration, the higher the convexity (for a given cash flow dispersion), and the more dispersed the cash flows, the higher the convexity (for a given duration). In this subsection, we discuss the algebra and the intuition behind these relations. We begin by analyzing zero-coupon bonds. The price of an n-year zero is

\[P = \frac{100}{(1+y/100)^n} \qquad (4)\]

where P is the bond's price, y is its annually compounded yield, expressed in percent, and n is its maturity. Taking the derivative of price with respect to yield reveals that

\[\frac{dP}{dy} = \frac{-n}{(1+y/100)^{n+1}} = \frac{-n}{1+y/100} * \frac{P}{100} \qquad (5)\]

The second equality holds because \(1/(1+y/100)^n = P / 100\), based on Equation (4). Multiplying both sides of Equation (5) by \((-100/P)\) gives the definition of (modified) duration:

\[Dur = -\frac{dP}{dy} * \frac{100}{P} = \frac{n}{1+y/100} \qquad (6)\]

For zeros, maturity (n) equals Macaulay duration (T). Thus, Equation (6) confirms the familiar relation between modified duration and Macaulay duration: \(Dur = T / (1+y/100)\), given annual compounding.
Taking the second derivative of price with respect to yield reveals that

\[\frac{d^2P}{dy^2} = \frac{n(n+1)}{100*(1+y/100)^{n+2}} = \frac{n^2+n}{(1+y/100)^2} * \frac{P}{100^2} \qquad (7)\]

Multiplying both sides by \((100/P)\) gives the definition of convexity (Cx):

\[Cx = \frac{d^2P}{dy^2} * \frac{100}{P} = \frac{n^2+n}{100*(1+y/100)^2} \qquad (8)\]

Expressed in terms of Macaulay duration, a zero's convexity is \((T^2+T)/[100*(1+y/100)^2]\). For long-term bonds, the square of duration is much larger than duration itself; hence the rule of thumb that the convexity of zeros increases as the square of duration divided by 100. For example, for a zero with modified duration of 20 and yield of 8%, convexity is approximately 4.2 (\(= (20^2 + 20/1.08)/100 \approx 20^2/100 = 4.0\)).

The relation between the convexity and duration of zeros, illustrated in Figure 3, is simply a mathematical fact. With Figure 14 we try to offer some intuition as to why long-term bonds have much more nonlinear (convex) price-yield curves than short-term bonds. This figure shows price as a function of yield for various-maturity zeros. All curves are downward sloping but not linear. However large the discounting term \((1+y/100)^n\) is, prices cannot become negative as long as \(y > 0\). Intuitively, high convexity (that is, a large change in the slope of the price-yield curve) is needed to keep bond prices positive if the price-yield curve is initially very steep. Otherwise the linear approximation of the long bond's price-yield curve would hit zero very fast (at a yield of 11% for a 30-year zero in Figure 14 versus at a yield of 43% for a three-year zero).
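The closed forms in Equations (6) and (8) can be checked numerically by finite-differencing the zero's price function; a small self-contained sketch:

```python
def zero_price(y, n):
    """Price of an n-year zero per 100 face, yield y in percent (Eq. 4)."""
    return 100.0 / (1.0 + y / 100.0) ** n

def modified_duration(y, n):
    """Dur = n / (1 + y/100), from Equation (6)."""
    return n / (1.0 + y / 100.0)

def convexity(y, n):
    """Cx = (n^2 + n) / [100 * (1 + y/100)^2], from Equation (8)."""
    return (n * n + n) / (100.0 * (1.0 + y / 100.0) ** 2)

# Check against central finite differences of the price function.
y, n, h = 8.0, 21.6, 1e-4   # n = 21.6 gives modified duration 20 at 8%
p = zero_price(y, n)
dP = (zero_price(y + h, n) - zero_price(y - h, n)) / (2 * h)
d2P = (zero_price(y + h, n) - 2 * p + zero_price(y - h, n)) / h ** 2

assert abs(-dP * 100 / p - modified_duration(y, n)) < 1e-6
assert abs(d2P * 100 / p - convexity(y, n)) < 1e-4
print(round(modified_duration(y, n), 2), round(convexity(y, n), 2))  # ~20.0, ~4.19
```

This reproduces the worked example above: the exact convexity is about 4.19, close to the rule-of-thumb value of 4.0.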
Figure 14 Price-Yield Curves of Zeros with Various Maturities and Their Linear Approximations

For a given duration, convexity increases with the dispersion of cash flows. A barbell portfolio of a short-term zero and a long-term zero has more dispersed cash flows than a duration-matched bullet intermediate-term zero. The bullet, in fact, has no cash flow dispersion. The barbell exhibits more convexity because of the inverse relation between yield level and portfolio duration. A given yield rise reduces the present value of the longer cash flow more than it reduces that of the shorter cash flow, and the decline in the longer cash flow's relative weight shortens the barbell's duration, limiting losses if yields rise further. (Recall that the Macaulay duration of a portfolio is the present-value-weighted average duration of its constituent cash flows.) Of all bonds with the same duration, a zero has the smallest convexity because it has no cash flow dispersion. Thus, its Macaulay duration does not vary with the yield level.

In fact, a coupon bond's or a portfolio's convexity can be viewed as a sum of a duration-matched zero's convexity and additional convexity caused by cash flow dispersion.
That is, the convexity of a bond portfolio with a Macaulay duration T is:

\[Cx = \frac{T^2+T}{100*(1+y/100)^2} + \frac{dispersion^2}{100*(1+y/100)^2} \qquad (9)\]

where the first term on the right-hand side equals a duration-matched zero's convexity (see Equation (8)) and "dispersion" is the standard deviation of the maturities of the portfolio's cash flows about their present-value-weighted average (the Macaulay duration).[20]

Figure 15 illustrates the convexity difference between a bullet (a 30-year zero) and a duration-matched barbell portfolio of ten-year and 50-year zeros. We use such an extreme example and a hypothetical 50-year bond only to make the difference in the two price-yield curve shapes visually discernible. If the yield curve is flat at 8% and can undergo only parallel yield shifts, the barbell will, at worst, match the bullet's performance (if yields stay at 8%) and, at best, outperform the bullet substantially (if yields shift up or down by a large amount). Clearly, high positive convexity is a valuable characteristic. In fact, because it is valuable, the situation in Figure 15 is unrealistic. If the flat curve / parallel shifts assumption were literally true, investors could earn riskless arbitrage profits by being long the barbell and short the bullet. In reality, market prices adjust so that the yield curve is typically concave rather than flat (that is, the barbell has a lower yield than the bullet), and nonparallel shifts such as curve steepening can make the bullet outperform the barbell.
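The decomposition above (a duration-matched zero's convexity plus a dispersion term) can be sketched for a flat curve. The 50/50 present-value weights for the 10- and 50-year barbell are a simplifying assumption chosen to give a Macaulay duration of 30, mirroring the Figure 15 example:

```python
def portfolio_convexity(weights_times, y):
    """Convexity of a portfolio of zero-coupon cash flows at a flat yield y
    (percent): the duration-matched zero's convexity plus a dispersion term.
    weights_times is a list of (pv_weight, maturity_years) pairs.
    """
    T = sum(w * t for w, t in weights_times)               # Macaulay duration
    var = sum(w * (t - T) ** 2 for w, t in weights_times)  # dispersion squared
    denom = 100.0 * (1.0 + y / 100.0) ** 2
    return (T * T + T) / denom + var / denom

# Bullet: a single 30-year zero. Barbell: 10- and 50-year zeros weighted
# 50/50 so that both have Macaulay duration 30 (illustrative PV weights).
bullet = portfolio_convexity([(1.0, 30.0)], 8.0)
barbell = portfolio_convexity([(0.5, 10.0), (0.5, 50.0)], 8.0)
print(round(bullet, 2), round(barbell, 2))  # barbell convexity is higher
```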
Figure 15 Price-Yield Curves of a Barbell and a Bullet with the Same Duration (30 Years)

APPENDIX B. RELATIONS BETWEEN VARIOUS VOLATILITY MEASURES

Equation (1) shows that \(0.5*Cx*(\Delta y)^2\) approximates the impact of convexity on a bond's percentage price changes. Thus, the expected value of convexity \(\approx 0.5*Cx*E(\Delta y)^2\). Now we discuss relations between \(E(\Delta y)^2\) and some volatility measures. The variance of basis-point yield changes is defined as

\[Var(\Delta y) = E[(\Delta y - E(\Delta y))^2] \qquad (10)\]

Because yield changes are mostly unpredictable, it is reasonable to assume that \(E(\Delta y) \approx 0\). Therefore, \(Var(\Delta y) \approx E(\Delta y - 0)^2 = E(\Delta y)^2\). The volatility of yield changes (\(Vol(\Delta y)\)) is often measured by standard deviation, the square root of variance. Thus,

\[Vol(\Delta y) = \sqrt{Var(\Delta y)} \approx \sqrt{E(\Delta y)^2} \qquad (11)\]

As long as \(E(\Delta y) \approx 0\), volatility is roughly proportional to the expected absolute magnitude of the yield change, \(E(|\Delta y|)\). Note that it makes sense to assume that \(E(|\Delta y|)\) is positive even when \(E(\Delta y) = 0\). Even if an investor thinks that the current yield curve is the best forecast for next year's yield curve, he can think that the curve is likely to move up or down by, say, 100 basis points from the current level over the next year.
In fact, it would be extreme to assume that \(E(|\Delta y|) = 0\); this assumption would imply zero volatility (no rate uncertainty).

Next we show that for zero-coupon bonds the value of convexity is proportional to the variance of returns. Both yields and returns are expressed in percent. Short-term fluctuations in bonds' holding-period returns (h) mostly reflect the duration impact (\(-Dur * \Delta y\)) because the yield and convexity impacts are either so stable or so small that they contribute little to the return variance (see Equation (3) and Figure 12). Therefore,

\[Var(h) \approx Var(-Dur * \Delta y) = Dur^2 * Var(\Delta y) \approx 100 * Cx * Var(\Delta y) \qquad (12)\]

The relation \(Cx \approx Dur^2 / 100\) is explained below Equation (8). A comparison of Equations (11) and (12) shows that the value of convexity for zeros is approximately equal to the variance of returns divided by 200. Interestingly, also the difference between an arithmetic mean and a geometric mean is approximately equal to the variance of returns divided by 200.[21] It appears that a duration extension enhances convexity and increases the (arithmetic) expected return, but the ensuing increase in volatility drags down the geometric mean and offsets the convexity advantage.

Equation (12) illustrates the relation between a bond's return volatility and yield volatility. We finish by stressing the distinction between the volatility of basis-point yield changes \(Vol(\Delta y)\) and the volatility of relative yield changes \(Vol(\Delta y / y)\).
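The variance-divided-by-200 result can be illustrated with a small simulation in which returns reflect only the duration impact, \(h = -Dur * \Delta y\); the duration and volatility inputs are hypothetical:

```python
import random

# Monte Carlo check that, for a zero, the value of convexity
# 0.5 * Cx * Var(dy) matches Var(h) / 200 when returns reflect only the
# duration impact, h = -Dur * dy. Illustrative parameters.
random.seed(0)
dur = 20.0                 # modified duration of a long zero
cx = dur ** 2 / 100.0      # rule-of-thumb convexity, Cx ~= Dur^2 / 100
dys = [random.gauss(0.0, 1.0) for _ in range(50_000)]   # 100bp yield vol

mean = sum(dys) / len(dys)
var_dy = sum((d - mean) ** 2 for d in dys) / len(dys)
var_h = dur ** 2 * var_dy   # Var(-Dur*dy) = Dur^2 * Var(dy)

value_of_convexity = 0.5 * cx * var_dy
print(round(value_of_convexity, 3), round(var_h / 200, 3))  # the two agree
```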
The volatility quotes in option markets and in Bloomberg or Yield Book typically refer to \(Vol(\Delta y / y)\), while our analysis focuses on \(Vol(\Delta y)\).

In Figure 7, we use the historical basis-point yield volatility to proxy for the expected basis-point yield volatility. Alternatively, we could compute the historical relative yield volatility and multiply it by the current yield level. The latter approach would be appropriate if the relative yield volatility is believed to be constant over time, making the basis-point yield volatility vary one-for-one with the yield level. Empirically, this has not been the case in the United States since 1982 (see footnote 9).

LITERATURE GUIDE

On the Basics of Convexity

- Garbade, Bond Convexity and its Implications for Immunization, Bankers Trust Co., March 1985.
- Klotz, Convexity of Fixed-Income Securities, Salomon Brothers Inc, October 1985.
- Ho, Strategic Fixed-Income Investment, 1990.
- Diller, "Parametric Analysis of Fixed Income Securities," in Fixed Income Analytics, ed. by Dattatreya, 1991.
- Fabozzi, Bond Markets, Analysis and Strategies, 1993.
- Tuckman, Fixed Income Securities, 1995.

On the Impact of Convexity on the Yield Curve Shape

- Cox, Ingersoll, and Ross, "A Re-examination of Traditional Hypotheses about the Term Structure of Interest Rates," Journal of Finance, 1981.
- Campbell, "A Defense of Traditional Hypotheses about the Term Structure of Interest Rates," Journal of Finance, 1986.
- Diller, "The Yield Surface," in Fixed Income Analytics, ed. by Dattatreya, 1991.
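The conversion between the two quoting conventions is a one-liner, \(Vol(\Delta y) \approx y * Vol(\Delta y / y)\); the 15% relative volatility and 6% yield level below are illustrative numbers:

```python
def bp_vol_from_relative(relative_vol, yield_level):
    """Convert a quoted relative yield volatility (e.g. 0.15 = 15%) into
    basis-point yield volatility: Vol(dy) ~= y * Vol(dy / y),
    with the yield level in percent."""
    return yield_level * relative_vol

# Hypothetical quote: 15% relative volatility at a 6% yield level.
print(round(bp_vol_from_relative(0.15, 6.0), 4))  # 0.9 percentage points, i.e. 90bp
```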
- Litterman, Scheinkman, and Weiss, "Volatility and the Yield Curve," Journal of Fixed Income, 1991.
- Christensen and Sorensen, "Duration, Convexity, and Time Value," Journal of Portfolio Management, 1994.

On the Impact of Convexity in the Context of Multi-Factor Term Structure Models

- Brown and Schaefer, "Interest Rate Volatility and the Shape of the Term Structure," working paper, London Business School, 1993.
- Gilles, "Forward Rates and Expected Future Short Rates," working paper, Federal Reserve Board, 1994.

On the Empirical Impact of Convexity on Bond Yields and Returns

- Kahn and Lochoff, "Convexity and Exceptional Return," Journal of Portfolio Management, 1990.
- Lacey and Nawalkha, "Convexity, Risk, and Returns," Journal of Fixed Income, 1993.

This section provides a brief overview of convexity. Readers who are not familiar with this concept may want to read first a text with a more extensive discussion, such as Klotz (1985) or Tuckman (1995). ↩︎

Equation (1) is based on a two-term Taylor series expansion of a bond's price as a function of its yield, divided by the price. The Taylor series can be used to approximate the bond price with any desired level of accuracy. A duration-based approximation is based on a one-term Taylor series expansion; it only uses the first derivative of the price function (\(dP/dy\)). The two-term Taylor series expansion also uses the second derivative (\(d^2P/dy^2\)) but ignores higher-order terms. In Equation (1), the word "convexity" is used narrowly for the difference between the two-term approximation and the linear approximation, but the word is sometimes used more broadly for the whole difference between the true price-yield curve and the linear approximation. Given the price-yield curves of Treasury bonds and typical yield volatilities in the Treasury market, the two-term approximation in Equation (1) is quite accurate.
As an "eyeball test," we note that Figure 2 shows the most nonlinear price-yield curve among noncallable Treasury bonds and yet, the two-term approximation is visually indistinguishable from the true price-yield curve within a 300-basis-point yield range. ↩︎

The convexity of a given security can be quoted in many ways, depending, in part, on the way that yields are quoted. If yields are expressed in percent (200 basis points = 2%), as in Equation (1), the convexity of a long zero with a duration of 15 is quoted as roughly 2.25 (= \(15^2 / 100\)). However, if yields are expressed in decimals (200 basis points = 0.02), the same bond's convexity is quoted as 225 (= \(15^2\)). We decided to use the former method of expressing yields and quoting convexity because it is more common in practice. (For careful readers, we point out that in Appendix A of Overview of Forward Rate Analysis, titled "Notation and Definitions Used in the Series Understanding the Yield Curve," we expressed yields in decimals and, thus, used the other quotation method for duration and convexity.) Fortunately, the quotation method does not influence convexity's impact on bond returns. The convexity impact of a 200-basis-point yield change on the long zero's return is approximately \(0.5 * convexity * (\Delta y_{percent})^2 = 0.5 * 2.25 * 2^2 = 4.5\%\). We get the same result if the yield change is expressed in decimals and convexity is scaled accordingly: \(0.5 * (100 * convexity) * (\Delta y_{decimal})^2 = 0.5 * 225 * 0.02^2 = 0.045\), or 4.5%.
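The footnote's invariance claim is easy to verify with the numbers it gives:

```python
# Convexity impact of a 200bp move is the same under either quoting
# convention (yields in percent vs. in decimals), per the footnote above.
dur = 15.0
cx_percent = dur ** 2 / 100.0        # 2.25, yields quoted in percent
cx_decimal = dur ** 2                # 225, yields quoted in decimals

impact_percent = 0.5 * cx_percent * 2.0 ** 2      # dy = 2 (percent)
impact_decimal = 0.5 * cx_decimal * 0.02 ** 2     # dy = 0.02 (decimal)
print(round(impact_percent, 6), round(impact_decimal * 100, 6))  # both 4.5 (% return)
```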
Equation (1) shows that the impact of convexity on percentage price changes can be approximated by \(0.5 * convexity * (\Delta y)^2\). The expected value of convexity is, therefore, \(0.5 * convexity * E(\Delta y)^2\). Appendix B shows that \(E(\Delta y)^2\) is roughly equal to the squared volatility of basis-point yield changes, \((Vol(\Delta y))^2\). ↩︎

This example suggests that scenario analysis is one way to incorporate the value of convexity into expected returns. If we compare the average expected bond price from two rate scenarios (+/-2%) to the expected price given one scenario, the difference will be positive for positively convex bonds (if the scenarios are not biased). In reality, more than two possible rate scenarios exist, but the same intuition holds; the expected value of convexity depends on volatility (also if it is computed from 500 yield curve scenarios instead of two). ↩︎

Our use of the term "convexity bias" is slightly different from its use in a recent article "A Question of Bias," by Burghardt and Hoskins, Risk, March 1995.
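The two-scenario comparison in this footnote can be sketched directly; the 20-year zero at 8% is a hypothetical example, not the footnote's own instrument:

```python
def zero_price(y, n):
    """Price of an n-year zero per 100 face, yield y in percent."""
    return 100.0 / (1.0 + y / 100.0) ** n

# Two-scenario illustration of the value of convexity: the average price
# across +/-200bp scenarios exceeds the price in the unchanged scenario
# for a positively convex bond. Hypothetical 20-year zero at 8%.
base = zero_price(8.0, 20)
avg = 0.5 * (zero_price(6.0, 20) + zero_price(10.0, 20))
print(avg > base)  # True: convexity lifts the average scenario price
```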
In that article, convexity bias refers to the difference between the forward price and the futures price in the Eurodollar market. This bias also reflects varying degrees of curvature in the price-yield curves of different fixed-income assets; the mark-to-market system makes the futures' price-yield curve linear, while the forward price is a convex function of yield. ↩︎

The rolling yield is a bond's holding-period return given an unchanged yield curve. If a downward-sloping yield curve remains unchanged, long-term bonds earn their initial yields and negative rolldown returns (because they "roll up the curve" as their maturities shorten). An n-year zero-coupon bond's rolling yield over the next year is equal to the one-year forward rate between n-1 and n. For details, see Market's Rate Expectations and Forward Rates, Salomon Brothers Inc, June 1995. ↩︎

Here is an intuitive "proof." A bond's expected holding-period return can be split into a part that reflects an unchanged yield curve (the rolling yield) and a part that reflects expected changes in the yield curve. The second part can be approximated by taking expectations of Equation (1). If we expect the yield curve to remain unchanged, as a base case, but allow for positive volatility, the duration impact will be zero, leaving only the value of convexity. (Some modifications are needed because Equation (1) holds instantaneously for constant-maturity rates, while the actual bond price changes occur over a horizon.)
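The rolling-yield identity in the footnote above can be sketched from annually compounded spot rates; the 7.8%/8.0% curve points are hypothetical:

```python
def one_year_forward(spot_n_minus_1, spot_n, n):
    """One-year forward rate between n-1 and n, in percent, from annually
    compounded spot rates. Per the footnote, this equals the n-year zero's
    rolling yield over the next year if the curve is unchanged."""
    growth = (1 + spot_n / 100) ** n / (1 + spot_n_minus_1 / 100) ** (n - 1)
    return (growth - 1) * 100

# Hypothetical upward-sloping curve points: 9-year at 7.8%, 10-year at 8.0%.
# The forward (and hence the rolling yield) exceeds the 10-year spot rate.
print(round(one_year_forward(7.8, 8.0, 10), 2))
```

On a flat curve the forward collapses to the spot rate, which is the sanity check used below.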
Whenever the period 1979-82 is included in a historical sample, the estimated volatilities will be much higher, the term structures of volatility will be more inverted and basis-point yield volatilities will appear to be more "level-dependent" than if the sample period begins after 1982. In many countries outside the United States, the inversion and the level-dependency also have been apparent features of the volatility structure recently. These features seem to become stronger if the central bank subordinates the short-term rates to be tools for some other monetary policy goal, such as money supply (United States 1979-82) or currency stability (for example, countries in the European Monetary System). Figure 7 also illustrates interesting findings about the term structure of volatility in the 1990s. The shape is humped, not inverted, because the intermediate-term yields have been more volatile than either the short-term or long-term yields. Moreover, yield volatility is not just a function of duration; it also depends on a bond's cash flow distribution. For a given duration, zeros have exhibited greater yield volatility than coupon bonds. This pattern probably reflects the coupon bonds' diversification benefits (unlike zeros, these bonds have cash flows in many parts of the yield curve that are imperfectly correlated) as well as the humped shape of the volatility structure.
This point is most easily seen by considering a horizontal yield curve. All bonds have the same yields and rolling yields, but their expected returns are not the same. Long-term bonds are more convex than short-term bonds; thus, they have higher near-term expected returns. ↩︎

Our empirical analysis in Market's Rate Expectations and Forward Rates indicates that it is reasonable to take today's yield curve as the base forecast for the future yield curve. Therefore, the rolling yield can proxy for a bond's near-term expected return (assuming zero volatility). Other hypotheses about the yield curve behavior would lead to other expected return proxies than the rolling yield, but the value of convexity could be added to any such proxy. For example, if the implied forward curve were the best forecast for the future yield curve, the near-term expected return of each bond would be the sum of the near-term riskless rate and the (bond-specific) value of convexity. Or, if investors have strong subjective expectations about curve-reshaping, the impact of such expectations can be easily added to the convexity-adjusted expected returns, as a third term on the right-hand side of Equation (2). ↩︎

Total Return Management, Martin L.
Leibowitz, Salomon Brothers Inc, 1979, and Understanding Duration and Volatility, Salomon Brothers Inc, September 1985, among other papers, made the concepts of rolling yield and duration widely known among bond investors. ↩︎

The barbell-bullet trade that we analyze over a static one-year horizon is comparable to a strategy of buying and holding a straddle. Readers familiar with options know that the profitability of this strategy depends solely on the starting and ending yield levels and not on the yield path during the horizon. Option traders may use this strategy if they expect yields to end up far away from the current levels. It is useful to contrast this option strategy with another option strategy: buying a delta-hedged straddle and rebalancing the position dynamically throughout the horizon. The profitability of this strategy depends on the level of volatility (yield path) during the horizon and not on the ending yield level. Option traders initiate this strategy (and "go long volatility") when they think that the current implied volatility is "too low." If the realized volatility turns out to be higher than the initial implied volatility, the trade makes money from profitable rebalancing trades, even if the ending yield is the same as the starting yield. These two option positions are analogous to two types of barbell-bullet strategies. In the first type (our example), the barbell and the bullet are duration-matched to horizon and no rebalancing occurs. In the second type, the trade is duration-matched instantaneously and the match is rebalanced frequently.
The appropriate strategy in a given situation depends on several factors, including the following: (1) whether the investor has a particular view about the likely horizon yields (for example, "far away from the current level") or about the implied volatility during the horizon; (2) whether the investor tolerates some duration drift (because in the first case, the duration would drift during the year) or has a strict duration target; and (3) whether the investor expects rates to be mean-reverting (in which case he may want to rebalance and lock in the convexity gains after significant rate movements). ↩︎

The assumption of no curve-flattening expectations is realistic when describing the long-run average behavior of the yield curve, but may be unrealistic at times, especially if the Fed has recently begun easing or tightening. Because the performance of the barbell-bullet trade depends more on the curve reshaping than on convexity effects, curvature (the rolling yield spread between a barbell and a bullet) provides very noisy implied volatility estimates. Thus, it might be more useful to try to extract the market's curve-flattening expectations from the curvature by subtracting the value of convexity (based on, say, the implied volatility from option prices) from the rolling yield spread.
↩︎

The bullet's outperformance is consistent with the finding in Part 3 of this series, Does Duration Extension Enhance Long-Term Expected Returns?, that historical average returns do not increase linearly with duration. Instead, the average return curve is concave, indicating that the intermediate-term bonds earn higher average returns than duration-matched pairs of short-term bonds and long-term bonds. ↩︎

Why is the first term on the right-hand side of Equation (3) yield and not rolling yield? Equation (3) is the correct way to approximate a bond's holding-period return when we study actual bond-specific yield changes (which can be viewed as the sum of the rolldown yield changes and the changes in constant-maturity rates). In this case, the rolldown return is a part of the duration and convexity impact. Alternatively, if we studied in Equation (3) the changes in constant-maturity rates (which do not include the rolldown yield change), we should include the rolldown return in the first term on the right-hand side; it would be rolling yield instead of yield. ↩︎

The percentage contributions of average returns in Figure 12 add up to 100% because we use an approximate method of annualizing monthly returns (multiplying by 12). In contrast, the percentage contributions of volatilities do not add up to 100% because volatilities are not additive (whether annualized or not).
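The Equation (3) footnote above describes the return decomposition into yield, duration, and convexity impacts. A quick numeric sketch makes the three terms concrete; the yield, duration, convexity, and yield-change inputs below are made up for illustration, not taken from the text:

```python
# One-period holding return approximated as in Equation (3):
#   h ~ y * dt - duration * dy + 0.5 * convexity * dy**2
y = 0.07          # 7% yield (assumed)
duration = 5.0    # modified duration in years (assumed)
convexity = 35.0  # convexity (assumed)
dy = -0.005       # the bond's yield falls 50bp over the period
dt = 1.0 / 12     # one-month horizon

yield_impact = y * dt                       # carry accrued over the month
duration_impact = -duration * dy            # gain from the yield decline
convexity_impact = 0.5 * convexity * dy**2  # always non-negative

h = yield_impact + duration_impact + convexity_impact
print("approx. holding-period return: %.4f" % h)
```

With these assumed inputs the duration impact dominates, and the convexity impact is small but positive, as the footnote's decomposition suggests.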
↩︎

A careful reader may find it puzzling that the average duration impact on bond returns is negative over a sample period when the bond yields declined, on average. There are two explanations. First, the duration impact is a product of duration and yield changes, and it turns out that yield declines (from high yield levels) tended to coincide with relatively short durations, while yield increases (from low yield levels) tended to coincide with long durations. Thus, yield increases are "weighted" more heavily than yield declines. Second, historical yield changes that are based on a time series of on-the-run yield levels can be misleading because they ignore the impact of changing on-the-run bonds. For example, if a new bond is issued on August 15, the on-the-run yield change from July 31 to August 31 compares the yields of different bonds, the old one and the new one. Typically, the old bond loses some of its liquidity premium; thus, its end-of-month yield tends to be higher than that of the new bond, a pattern hidden in the on-the-run yield level series. For the analysis in Figure 12, we create a clean series of yield changes that always compares the beginning- and end-of-month yields of one bond. The average monthly yield change in the clean series is one basis point higher than in the unadjusted series. ↩︎

One should not generalize these findings about wide barbells to narrower barbells.
The yield curve exhibits less curvature in the intermediate sector than between the extreme front end and long end. For example, a barbell-bullet trade from fives to twos and tens tends to have a much smaller yield give-up than the trade from fives to cash and thirties, and a smaller convexity pick-up. ↩︎

Stan Kogelman derived Equation (9) in "Dispersion: An Important Component of Convexity and Performance," an unpublished research piece, Salomon Brothers Inc, 1986. ↩︎

The arithmetic mean (AM) and geometric mean (GM) are computed using the following equations: \(AM = (h_1 + h_2 + \cdots + h_K) / K\) and \(GM = ([(1 + h_1 /100) * (1 + h_2 /100) * \cdots * (1 + h_K /100)]^{1/K} - 1) * 100\), where \(h_k\) is the one-period holding-period return at time k, and K is the sample size. It can be shown that \(GM = AM - Var(h)/200\). ↩︎
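The variance-drag relationship \(GM = AM - Var(h)/200\) in the last footnote is easy to check numerically; the return series below is made up for illustration:

```python
# Arithmetic vs. geometric mean of percentage holding-period returns,
# and the approximation GM ~ AM - Var(h)/200.
h = [10.0, -5.0, 7.0, 3.0]  # hypothetical one-period returns, in percent
K = len(h)

am = sum(h) / K

gross = 1.0
for hk in h:
    gross *= 1.0 + hk / 100.0
gm = (gross ** (1.0 / K) - 1.0) * 100.0

var = sum((hk - am) ** 2 for hk in h) / K  # population variance
approx_gm = am - var / 200.0

print("AM = %.4f, GM = %.4f, AM - Var/200 = %.4f" % (am, gm, approx_gm))
```

For this series the exact geometric mean and the approximation agree to well within a basis point, illustrating why return volatility drags the geometric mean below the arithmetic mean.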
News: .NET Framework 2.0 and Visual Studio 2005 development

- 28 Oct'07 Case Study: IdeaBlade DevForce object mapping. A CTO employed IdeaBlade DevForce 3.0 ORM tools in a .NET 2.0 application using C# in order to achieve better abstraction in a software architecture.
- 31 Mar'07 Microsoft unveils VB 2005 'Power Packs'. The company has released a third Visual Basic 2005 add-in, known as a Power Pack, and is soliciting suggestions for future tools.
- 05 Jan'06 MS offering sample source code in VS 2005 starter kits. Microsoft is offering source code for a variety of different applications in downloadable Visual Studio 2005 Starter Kits.
- 02 Nov'05 Chrome extends Pascal to the .NET platform. RemObjects' Chrome for Visual Studio extends Object Pascal programming to Visual Studio .NET 2003 and 2005.
- 26 Oct'05 Expert asks if Microsoft Visual Studio 2005 is too feature rich. Author Charles Petzold last week decried Visual Studio's "insistence on writing code" for developers in a speech entitled "Does Visual Studio Rot the Mind?" to the NYC .NET developers group.
- 10 Aug'05 VB 2005 'My' Space set to help developers. With VB.NET 2003, programmers were challenged to navigate around the namespace hierarchy. Help in the form of the My namespace framework is on the way.
elm-jsonapi decodes any JSON API compliant payload and provides helper functions for working with the results. This library only provides base functionality for decoding payloads and working with the results. A more sophisticated wrapper which includes content negotiation with servers can be found here.

JSON API specifies a format with which resources related to the document's primary resource(s) are "side-loaded" under a key called included. This library abstracts the structure of the document and reconstructs the resource graph for you; use the relatedResource and relatedResourceCollection functions to traverse the graph from any given resource to its related resources.

See the documentation at:

This module can be used with elm 0.18.x. It can be tested with elm-test 0.18.2

import Http
import Json.Decode exposing ((:=))
import JsonApi
import JsonApi.Decode
import JsonApi.Resources
import JsonApi.Documents
import Task exposing (..)

type alias User =
    { username : String
    , email : String
    }

userDecoder : Json.Decode.Decoder User
userDecoder =
    Json.Decode.object2 User
        ("username" := Json.Decode.string)
        ("email" := Json.Decode.string)

getUserResource : String -> Task Http.Error (JsonApi.Document)
getUserResource query =
    Http.get JsonApi.Decode.document ("" ++ query)

extractUsername : JsonApi.Document -> Result String User
extractUsername doc =
    JsonApi.Documents.primaryResource doc
        `Result.andThen` (JsonApi.Resources.attributes userDecoder)

import JsonApi.Encode as Encode
import JsonApi.Resources as Resources
import Json.Encode exposing (Value)

encodeLuke : Result String Value
encodeLuke =
    Resources.build "jedi"
        |> Resources.withAttributes
            [ ( "first_name", string "Luke" )
            , ( "last_name", string "Skywalker" )
            ]
        |> Resources.withAttributes
            [ ( "home_planet", string "Tatooine" ) ]
        |> Resources.withRelationship "father" { id = "vader", resourceType = "jedi" }
        |> Resources.withRelationship "sister" { id = "leia", resourceType = "princess" }
        |> Resources.withUuid "123e4567-e89b-12d3-a456-426655440000"
        |> Result.map Encode.clientResource

elm-jsonapi is currently under development. I use waffle.io and Github Issues to track new features and bugs. If there's a feature you'd like to see, please submit an issue! If you'd like to contribute yourself, please reach out to me or submit a pull request for the relevant issue.
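The side-loading scheme described above is not Elm-specific. A minimal Python sketch (hypothetical helper, not this library's API) shows the kind of lookup that relatedResource performs against the document's included list:

```python
def related_resource(document, resource, relationship_name):
    """Resolve a resource's relationship against the side-loaded
    "included" list, analogous to elm-jsonapi's relatedResource."""
    ident = resource["relationships"][relationship_name]["data"]
    for included in document.get("included", []):
        if (included["type"], included["id"]) == (ident["type"], ident["id"]):
            return included
    raise LookupError("related resource not side-loaded: %r" % ident)

doc = {
    "data": {
        "type": "jedi", "id": "luke",
        "relationships": {"father": {"data": {"type": "jedi", "id": "vader"}}},
    },
    "included": [
        {"type": "jedi", "id": "vader", "attributes": {"first_name": "Anakin"}},
    ],
}
father = related_resource(doc, doc["data"], "father")
print(father["attributes"]["first_name"])  # -> Anakin
```

The relationship object stores only a type/id pair; the full resource lives once in the top-level included array, which is what lets the library reconstruct the resource graph without duplication.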
https://package.frelm.org/repo/720/2.2.2
CC-MAIN-2020-16
en
refinedweb
This post is the first of a series; click here for the next post.

Introduction

What is this? Who are you?

I'm Jacob, a Google AI Resident. When I started the residency program in the summer of 2017, I had a lot of experience programming, and a good understanding of machine learning, but I had never used Tensorflow before. I'm writing this blog post as a message-in-a-bottle to my former self: it's the introduction that I wish I had been given before starting on my journey. Hopefully, it will also be a helpful resource for others.

What was missing?

In the three years since its release, Tensorflow has cemented itself as a cornerstone of the deep learning ecosystem. However, it can be non-intuitive for beginners, especially compared to define-by-run neural network libraries like PyTorch or DyNet.

Many introductory Tensorflow tutorials exist, for doing everything from linear regression, to classifying MNIST, to machine translation. These concrete, practical guides are great resources for getting Tensorflow projects up and running, and can serve as jumping-off points for similar projects. But for the people who are working on applications for which a good tutorial does not exist, or who want to do something totally off the beaten path (as is common in research), Tensorflow can definitely feel frustrating at first.

This post is my attempt to fill this gap. Rather than focusing on a specific task, I take a more general approach, and explain the fundamental abstractions underpinning Tensorflow. With a good grasp of these concepts, deep learning with Tensorflow becomes intuitive and straightforward.

Target Audience

This tutorial is intended for people who already have some experience with both programming and machine learning, and want to pick up Tensorflow.
For example: a computer science student who wants to use Tensorflow in the final project of her ML class; a software engineer who has just been assigned to a project that involves deep learning; or a bewildered new Google AI Resident (shout-out to past Jacob). If you'd like a refresher on the basics, here are some resources. Otherwise: let's get started!

Understanding Tensorflow

Tensorflow Is Not A Normal Python Library

Most Python libraries are written to be natural extensions of Python. When you import a library, what you get is a set of variables, functions, and classes, that augment and complement your "toolbox" of code. When using them, you have a certain set of expectations about how they behave. In my opinion, when it comes to Tensorflow, you should throw all that away. It's fundamentally the wrong way to think about what Tensorflow is and how it interacts with the rest of your code.

A metaphor for the relationship between Python and Tensorflow is the relationship between Javascript and HTML. Javascript is a fully-featured programming language that can do all sorts of wonderful things. HTML is a framework for representing a certain type of useful computational abstraction (in this case, content that can be rendered by a web browser). The role of Javascript in an interactive webpage is to assemble the HTML object that the browser sees, and then interact with it when necessary by updating it to new HTML.

Similarly to HTML, Tensorflow is a framework for representing a certain type of computational abstraction (known as "computation graphs"). When we manipulate Tensorflow with Python, the first thing we do with our Python code is assemble the computation graph. Once that is done, the second thing we do is to interact with it (using Tensorflow's "sessions"). But it's important to keep in mind that the computation graph does not live inside of your variables; it lives in the global namespace.
As Shakespeare once said: "All the RAM's a stage, and all the variables are merely pointers."

First Key Abstraction: The Computation Graph

In browsing the Tensorflow documentation, you've probably found oblique references to "graphs" and "nodes". If you're a particularly savvy browser, you may have even discovered this page, which covers the content I'm about to explain in a much more accurate and technical fashion. This section is a high-level walkthrough that captures the important intuition, while sacrificing some technical details.

So: what is a computation graph? Essentially, it's a global data structure: a directed graph that captures instructions about how to calculate things. Let's walk through an example of how to build one. In the following figures, the top half is the code we ran and its output, and the bottom half is the resulting computation graph.

import tensorflow as tf

Graph: Predictably, just importing Tensorflow does not give us an interesting computation graph. Just a lonely, empty global variable. But what about when we call a Tensorflow operation?

Code:

import tensorflow as tf
two_node = tf.constant(2)
print two_node

Output:

Tensor("Const:0", shape=(), dtype=int32)

Graph: Would you look at that! We got ourselves a node. It contains the constant 2. Shocking, I know, coming from a function called tf.constant. When we print the variable, we see that it returns a tf.Tensor object, which is a pointer to the node that we just created. To emphasize this, here's another example:

Code:

import tensorflow as tf
two_node = tf.constant(2)
another_two_node = tf.constant(2)
two_node = tf.constant(2)
tf.constant(3)

Graph: Every time we call tf.constant, we create a new node in the graph. This is true even if the node is functionally identical to an existing node, even if we re-assign a node to the same variable, or even if we don't assign it to a variable at all.
In contrast, if you make a new variable and set it equal to an existing node, you are just copying the pointer to that node and nothing is added to the graph:

Code:

import tensorflow as tf
two_node = tf.constant(2)
another_pointer_at_two_node = two_node
two_node = None
print two_node
print another_pointer_at_two_node

Output:

None
Tensor("Const:0", shape=(), dtype=int32)

Graph: Okay, let's liven things up a bit:

Code:

import tensorflow as tf
two_node = tf.constant(2)
three_node = tf.constant(3)
sum_node = two_node + three_node ## equivalent to tf.add(two_node, three_node)

Graph: Now we're talking - that's a bona-fide computational graph we got there! Notice that the + operation is overloaded in Tensorflow, so adding two tensors together adds a node to the graph, even though it doesn't seem like a Tensorflow operation on the surface.

Okay, so two_node points to a node containing 2, three_node points to a node containing 3, and sum_node points to a node containing… +? What's up with that? Shouldn't it contain 5? As it turns out, no. Computational graphs contain only the steps of computation; they do not contain the results. At least…not yet!

Second Key Abstraction: The Session

If there were March Madness for misunderstood TensorFlow abstractions, the session would be the #1 seed every year. It has that dubious honor due to being both unintuitively named and universally present – nearly every Tensorflow program explicitly invokes tf.Session() at least once. The role of the session is to handle the memory allocation and optimization that allows us to actually perform the computations specified by a graph. You can think of the computation graph as a "template" for the computations we want to do: it lays out all the steps. In order to make use of the graph, we also need to make a session, which allows us to actually do things; for example, going through the template node-by-node to allocate a bunch of memory for storing computation outputs.
In order to do any computation with Tensorflow, you need both a graph and a session. The session contains a pointer to the global graph, which is constantly updated with pointers to all nodes. That means it doesn't really matter whether you create the session before or after you create the nodes.

After creating your session object, you can use sess.run(node) to return the value of a node, and Tensorflow performs all computations necessary to determine that value.

Code:

import tensorflow as tf
two_node = tf.constant(2)
three_node = tf.constant(3)
sum_node = two_node + three_node
sess = tf.Session()
print sess.run(sum_node)

Output:

5

Graph: Wonderful! We can also pass a list, sess.run([node1, node2,...]), and have it return multiple outputs:

Code:

import tensorflow as tf
two_node = tf.constant(2)
three_node = tf.constant(3)
sum_node = two_node + three_node
sess = tf.Session()
print sess.run([two_node, sum_node])

Output:

[2, 5]

Graph: In general, sess.run() calls tend to be one of the biggest TensorFlow bottlenecks, so the fewer times you call it, the better. Whenever possible, return multiple items in a single sess.run() call instead of making multiple calls.

Placeholders & feed_dict

The computations we've done so far have been boring: there is no opportunity to pass in input, so they always output the same thing. A more worthwhile application might involve constructing a computation graph that takes in input, processes it in some (consistent) way, and returns an output. The most straightforward way to do this is with placeholders. A placeholder is a type of node that is designed to accept external input.

Code:

import tensorflow as tf
input_placeholder = tf.placeholder(tf.int32)
sess = tf.Session()
print sess.run(input_placeholder)

Output:

Traceback (most recent call last):
...
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype int32
[[Node: Placeholder = Placeholder[dtype=DT_INT32, shape=<unknown>, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

Graph: …is a terrible example, since it throws an exception. Placeholders expect to be given a value. We didn't supply one, so Tensorflow crashed.

To provide a value, we use the feed_dict attribute of sess.run().

Code:

import tensorflow as tf
input_placeholder = tf.placeholder(tf.int32)
sess = tf.Session()
print sess.run(input_placeholder, feed_dict={input_placeholder: 2})

Output:

2

Graph: Much better. Notice the format of the dict passed into feed_dict. The keys should be variables corresponding to placeholder nodes from the graph (which, as discussed earlier, really means pointers to placeholder nodes in the graph). The corresponding values are the data elements to assign to each placeholder – typically scalars or Numpy arrays.

Third Key Abstraction: Computation Paths

Let's try another example involving placeholders:

Code:

import tensorflow as tf
input_placeholder = tf.placeholder(tf.int32)
three_node = tf.constant(3)
sum_node = input_placeholder + three_node
sess = tf.Session()
print sess.run(three_node)
print sess.run(sum_node)

Output:

3
Traceback (most recent call last):
...
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_2' with dtype int32
[[Node: Placeholder_2 = Placeholder[dtype=DT_INT32, shape=<unknown>, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

Graph: Why does the second call to sess.run() fail? And why does it raise an error related to input_placeholder, even though we are not evaluating input_placeholder? The answer lies in the final key Tensorflow abstraction: computation paths. Luckily, this one is very intuitive.
When we call sess.run() on a node that is dependent on other nodes in the graph, we need to compute the values of those nodes, too. And if those nodes have dependencies, we need to calculate those values (and so on and so on…) until we reach the "top" of the computation graph where nodes have no predecessors.

Consider the computation path of sum_node: All three nodes need to be evaluated to compute the value of sum_node. Crucially, this includes our un-filled placeholder and explains the exception!

In contrast, consider the computation path of three_node: Due to the graph structure, we don't need to compute all of the nodes in order to evaluate the one we want! Because we don't need to evaluate placeholder_node to evaluate three_node, running sess.run(three_node) doesn't raise an exception.

The fact that Tensorflow automatically routes computation only through nodes that are necessary is a huge strength of the framework. It saves a lot of runtime on calls if the graph is very big and has many nodes that are not necessary. It allows us to construct large, "multi-purpose" graphs, which use a single, shared set of core nodes to do different things depending on which computation path is taken. For almost every application, it's important to think about sess.run() calls in terms of the computation path taken.

Variables & Side Effects

So far, we've seen two types of "no-ancestor" nodes: tf.constant, which is the same for every run, and tf.placeholder, which is different for every run. There's a third case that we often want to consider: a node which generally has the same value between runs, but can also be updated to have a new value. That's where variables come in. Understanding variables is essential to doing deep learning with Tensorflow, because the parameters of your model fall into this category.
During training, you want to update your parameters at every step, via gradient descent; but during evaluation, you want to keep your parameters fixed, and pass a bunch of different test-set inputs into the model. More than likely, all of your model's trainable parameters will be implemented as variables.

To create variables, use tf.get_variable(). The first two arguments to tf.get_variable() are required; the rest are optional. They are tf.get_variable(name, shape). name is a string which uniquely identifies this variable object. It must be unique relative to the global graph, so be careful to keep track of all names you have used to ensure there are no duplicates. shape is an array of integers corresponding to the shape of a tensor; the syntax of this is intuitive – just one integer per dimension, in order. For example, a 3x8 matrix would have shape [3, 8]. To create a scalar, use an empty list as your shape: [].

Code:

import tensorflow as tf
count_variable = tf.get_variable("count", [])
sess = tf.Session()
print sess.run(count_variable)

Output:

Traceback (most recent call last):
...
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value count

Alas, another exception. When a variable node is first created, it basically stores "null", and any attempts to evaluate it will result in this exception. We can only evaluate a variable after putting a value into it first. There are two main ways to put a value into a variable: initializers and tf.assign(). Let's look at tf.assign() first:

Code:

import tensorflow as tf
count_variable = tf.get_variable("count", [])
zero_node = tf.constant(0.)
assign_node = tf.assign(count_variable, zero_node)
sess = tf.Session()
sess.run(assign_node)
print sess.run(count_variable)

Output:

0

Graph: tf.assign(target, value) is a node that has some unique properties compared to nodes we've seen so far:

- Identity operation. tf.assign(target, value) does not do any interesting computations, it is always just equal to value.
- Side effects. When computation "flows" through assign_node, side effects happen to other things in the graph.
In this case, the side effect is to replace the value of count_variable with the value stored in zero_node.
- Non-dependent edges. Even though the count_variable node and the assign_node are connected in the graph, neither is dependent on the other. This means computation will not flow back through that edge when evaluating either node. However, assign_node is dependent on zero_node; it needs to know what to assign.

"Side effect" nodes underpin most of the Tensorflow deep learning workflow, so make sure you really understand what's going on here. When we call sess.run(assign_node), the computation path goes through assign_node and zero_node.

Graph: As computation flows through any node in the graph, it also enacts any side effects controlled by that node, shown in green. Due to the particular side effects of tf.assign, the memory associated with count_variable (which was previously "null") is now permanently set to equal 0. This means that when we next call sess.run(count_variable), we don't throw any exceptions. Instead, we get a value of 0. Success!

Next, let's look at initializers:

Code:

import tensorflow as tf
const_init_node = tf.constant_initializer(0.)
count_variable = tf.get_variable("count", [], initializer=const_init_node)
sess = tf.Session()
print sess.run(count_variable)

Output:

Traceback (most recent call last):
...
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value count

Okay, what happened here? Why didn't the initializer work? The answer lies in the split between sessions and graphs. We've set the initializer property of get_variable to point at our const_init_node, but that just added a new connection between nodes in the graph. We haven't done anything about the root of the exception: the memory associated with the variable node (which is stored in the session, not the graph!) is still set to "null". We need the session to tell the const_init_node to actually update the variable.

Code:

import tensorflow as tf
const_init_node = tf.constant_initializer(0.)
count_variable = tf.get_variable("count", [], initializer=const_init_node)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
print sess.run(count_variable)

Output:

0.

Graph: To do this, we added another, special node: init = tf.global_variables_initializer(). Similarly to tf.assign(), this is a node with side effects. In contrast to tf.assign(), we don't actually need to specify what its inputs are! tf.global_variables_initializer() will look at the global graph at the moment of its creation and automatically add dependencies to every tf.initializer in the graph. When we then evaluate it with sess.run(init), it goes to each of the initializers and tells them to do their thang, initializing the variables and allowing us to run sess.run(count_variable) without an error.

Variable Sharing

You may encounter Tensorflow code with variable sharing, which involves creating a scope and setting "reuse=True". I strongly recommend that you don't use this in your own code. If you want to use a single variable in multiple places, simply keep track of your pointer to that variable's node programmatically, and re-use it when you need to. In other words, you should have only a single call of tf.get_variable() for each parameter you intend to store in memory.

Optimizers

At last: on to the actual deep learning! If you're still with me, the remaining concepts should be extremely straightforward.
In deep learning, the typical "inner loop" of training is as follows:

- Get an input and true_output
- Compute a "guess" based on the input and your parameters
- Compute a "loss" based on the difference between your guess and the true_output
- Update the parameters according to the gradient of the loss

Let's put together a quick script for a toy linear regression problem:

Code:

import tensorflow as tf

### build the graph
## first set up the parameters
m = tf.get_variable("m", [], initializer=tf.constant_initializer(0.))
b = tf.get_variable("b", [], initializer=tf.constant_initializer(0.))
init = tf.global_variables_initializer()

## then set up the computations
input_placeholder = tf.placeholder(tf.float32)
output_placeholder = tf.placeholder(tf.float32)

x = input_placeholder
y = output_placeholder
y_guess = m * x + b

loss = tf.square(y - y_guess)

## finally, set up the optimizer and minimization node
optimizer = tf.train.GradientDescentOptimizer(1e-3)
train_op = optimizer.minimize(loss)

### start the session
sess = tf.Session()
sess.run(init)

### perform the training loop
import random

## set up problem
true_m = random.random()
true_b = random.random()

for update_i in range(100000):
    ## (1) get the input and output
    input_data = random.random()
    output_data = true_m * input_data + true_b

    ## (2), (3), and (4) all take place within a single call to sess.run()!
    _loss, _ = sess.run([loss, train_op], feed_dict={input_placeholder: input_data, output_placeholder: output_data})
    print update_i, _loss

### finally, print out the values we learned for our two variables
print "True parameters: m=%.4f, b=%.4f" % (true_m, true_b)
print "Learned parameters: m=%.4f, b=%.4f" % tuple(sess.run([m, b]))

Output:

0 2.3205383
1 0.5792742
2 1.55254
3 1.5733259
4 0.6435648
5 2.4061265
6 1.0746256
7 2.1998715
8 1.6775116
9 1.6462423
10 2.441034
...
99990 2.9878322e-12
99991 5.158629e-11
99992 4.53646e-11
99993 9.422685e-12
99994 3.991829e-11
99995 1.134115e-11
99996 4.9467985e-11
99997 1.3219648e-11
99998 5.684342e-14
99999 3.007017e-11
True parameters: m=0.3519, b=0.3242
Learned parameters: m=0.3519, b=0.3242

As you can see, the loss goes down to basically nothing, and we wind up with a really good estimate of the true parameters. Hopefully, the only part of the code that is new to you is this segment:

## finally, set up the optimizer and minimization node
optimizer = tf.train.GradientDescentOptimizer(1e-3)
train_op = optimizer.minimize(loss)

But, now that you have a good understanding of the concepts underlying Tensorflow, this code is easy to explain! The first line, optimizer = tf.train.GradientDescentOptimizer(1e-3), is not adding a node to the graph. It is simply creating a Python object that has useful helper functions. The second line, train_op = optimizer.minimize(loss), is adding a node to the graph, and storing a pointer to it in variable train_op. The train_op node has no output, but has a very complicated side effect: train_op traces back through the computation path of its input, loss, looking for variable nodes. For each variable node it finds, it computes the gradient of the loss with respect to that variable. Then, it computes a new value for that variable: the current value minus the gradient times the learning rate. Finally, it performs an assign operation to update the value of the variable. So essentially, when we call sess.run(train_op), it does a step of gradient descent on all of our variables for us. Of course, we also need to fill in the input and output placeholders with our feed_dict, and we also want to print the loss, because it’s handy for debugging.

Debugging with tf.Print

As you start doing more complicated things with Tensorflow, you’re going to want to debug. In general, it’s quite hard to inspect what’s going on inside a computation graph.
You can’t use a regular Python print statement, because you never have access to the values you want to print – they are locked away inside the sess.run() call. To elaborate, suppose you want to inspect an intermediate value of a computation. Before the sess.run() call, the intermediate values do not exist yet. But when the sess.run() call returns, the intermediate values are gone!

Let’s look at a simple example.

Code:

import tensorflow as tf
two_node = tf.constant(2)
three_node = tf.constant(3)
sum_node = two_node + three_node
sess = tf.Session()
print sess.run(sum_node)

Output:

5

This lets us see our overall answer, 5. But what if we want to inspect the intermediate values, two_node and three_node? One way to inspect the intermediate values is to add a return argument to sess.run() that points at each of the intermediate nodes you want to inspect, and then, after it has been returned, print it.

Code:

import tensorflow as tf
two_node = tf.constant(2)
three_node = tf.constant(3)
sum_node = two_node + three_node
sess = tf.Session()
answer, inspection = sess.run([sum_node, [two_node, three_node]])
print inspection
print answer

Output:

[2, 3]
5

This often works well, but as code becomes more complex, it can be a bit awkward. A more convenient approach is to use a tf.Print statement. Confusingly, tf.Print is actually a type of Tensorflow node, which has both output and side effects! It has two required arguments: a node to copy, and a list of things to print. The “node to copy” can be any node in the graph; tf.Print is an identity operation with respect to its “node to copy”, meaning that it outputs an exact copy of its input.
But, it also prints all the current values in the “list of things to print” as a side effect.

Code:

import tensorflow as tf
two_node = tf.constant(2)
three_node = tf.constant(3)
sum_node = two_node + three_node
print_sum_node = tf.Print(sum_node, [two_node, three_node])
sess = tf.Session()
print sess.run(print_sum_node)

Output:

[2][3]
5

One important, somewhat-subtle point about tf.Print: printing is a side effect. Like all other side effects, printing only occurs if the computation flows through the tf.Print node. If the tf.Print node is not in the path of the computation, nothing will print. In particular, even if the original node that your tf.Print node is copying is on the computation path, the tf.Print node itself might not be. Watch out for this issue! When it strikes (and it eventually will), it can be incredibly frustrating if you aren’t specifically looking for it. As a general rule, try to always create your tf.Print node immediately after creating the node that it copies.

Code:

import tensorflow as tf
two_node = tf.constant(2)
three_node = tf.constant(3)
sum_node = two_node + three_node
### this new copy of two_node is not on the computation path, so nothing prints!
print_two_node = tf.Print(two_node, [two_node, three_node, sum_node])
sess = tf.Session()
print sess.run(sum_node)

Output:

5

Here is a great resource which provides additional practical debugging advice.

Conclusion

Hopefully this post helped you get a better intuition for what Tensorflow is, how it works, and how to use it. At the end of the day, the concepts presented here are fundamental to all Tensorflow programs, but this is only scratching the surface. In your Tensorflow adventures, you will likely encounter all sorts of other fun things that you want to use: conditionals, iteration, distributed Tensorflow, variable scopes, saving & loading models, multi-graph, multi-session, and multi-core, data-loader queues, and much more.
Many of these topics I will cover in future posts. But if you build on the ideas you learned here with the official documentation, some code examples, and just a pinch of deep learning magic, I’m sure you’ll be able to figure it out! For more detail on how these abstractions are implemented in Tensorflow, and how to interact with them, take a look at my post on inspecting computational graphs. Please give me feedback in the comments (or via email) if anything discussed in this guide was unclear. And if you enjoyed this post, let me know what I should cover next! Happy training! This post is the first of a series; click here for the next post. Many thanks to Kathryn Rough, Katherine Lee, Sara Hooker, and Ludwig Schubert for all of their help and feedback when writing this post.

This page from the Chainer documentation describes the difference between define-and-run and define-by-run. ↩

In general, I prefer to make sure I already have the entire graph in place when I create a session, and I follow that paradigm in my examples here. But you might see it done differently in other Tensorflow code. ↩

Since the Tensorflow team is dedicated to backwards compatibility, there are several ways to create variables. In older code, it is common to also encounter the tf.Variable() syntax, which serves the same purpose. ↩

Name management can be made a bit easier with tf.variable_scope(). I will cover scoping in more detail in a future post! ↩

Note that tf.Print is not compatible with Colab or IPython notebooks; it prints to the standard output, which is not shown in the notebook. There are various solutions on StackOverflow. ↩
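One final recap of the optimizer section: the update that train_op performs is simply new value = current value − learning rate × gradient. For the toy loss (y − (m·x + b))², the gradients work out by hand to ∂loss/∂m = −2x(y − y_guess) and ∂loss/∂b = −2(y − y_guess), so the whole training loop can be reproduced without TensorFlow. The sketch below is framework-free illustration, not code from the post:

```python
import random

def sgd_step(m, b, x, y, lr=1e-3):
    """One gradient-descent step on loss = (y - (m*x + b))**2."""
    err = y - (m * x + b)
    grad_m = -2.0 * x * err      # d(loss)/dm
    grad_b = -2.0 * err          # d(loss)/db
    # new value = current value - learning rate * gradient
    return m - lr * grad_m, b - lr * grad_b

random.seed(0)
true_m, true_b = random.random(), random.random()
m, b = 0.0, 0.0
for _ in range(100000):
    x = random.random()
    m, b = sgd_step(m, b, x, true_m * x + true_b)

assert abs(m - true_m) < 1e-3 and abs(b - true_b) < 1e-3
```

Like the TensorFlow version, this converges because the data is noiseless: every step nudges the parameters against the gradient, and the true parameters are the loss's only fixed point.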
https://jacobbuckman.com/2018-06-25-tensorflow-the-confusing-parts-1/
WL#5223: Group Commit of Binary Log

Affects: Server-5.6 — Status: Complete

PROBLEM (SUMMARY)
-----------------
When the binary log is enabled there is a dramatic drop in performance due to the following reasons:

. the binary log does not exploit the group commit techniques;
. there are several accesses to disk, i.e. writes and flushes.

MySQL uses write-ahead logging to provide both durability and consistency. Specifically, it writes redo and, less frequently, undo changes to the log and makes sure that upon commit the changes made on behalf of a transaction are written and flushed to stable storage (e.g. disk). Notice however that the higher the number of transactions committing per second, the higher the rate at which one needs to write and flush the logs. If nothing is done the log will eventually become a performance bottleneck. To circumvent this issue one postpones for a while any access to stable storage in order to gather in memory as many commits as possible and thus issue one single write and flush for a set of transactions. This technique is named group commit and is widely used in database systems to improve their performance.

PROPOSED SOLUTION (SUMMARY)
---------------------------
We are going to:

. use the group commit technique to reduce the number of writes and flushes;

PROBLEM (DETAILS)
-----------------
See in what follows what happens when one wants to commit a transaction and the binary log is enabled. This description is based on Harrison's analysis of the performance problems associated with the current implementation and uses Innodb as the storage engine because it is the only truly transactional engine kept by MySQL:

1. Prepare Innodb:
   a) Write prepare record to Innodb's log buffer
   b) Sync log file to disk
   c) Take prepare_commit_mutex
2. "Prepare" binary log:
   a) Write transaction to binary log
   b) Sync binary log based on sync_binlog
3.
Commit Innodb:
   a) Write commit record to log
   b) Release prepare_commit_mutex
   c) Sync log file to disk
   d) Innodb locks are released
4. "Commit" binary log:
   a) Nothing necessary to do here.

There are five problems with this model:

1. Prepare_commit_mutex prevents the binary log and Innodb from group committing. The prepare_commit_mutex is used to ensure that transactions are committed in the binary log in the same order as they are committed in the Innodb logs. This is a requirement imposed by Innodb Hot Backup and we don't have the intention to change that.

2. The binary log is not prepared for group committing. Due to this mutex only one transaction executes step 2 at a time and as such the binary log cannot group a set of transactions to reduce the number of writes and flushes. Besides, the code is not prepared to exploit group commit. In this WL, we plan to remove the mutex and make the binary log exploit the group commit techniques.

3. Locks are held for the duration of 3 fsync's. MySQL uses locks to implement its consistency modes. Clearly, the higher the number of locks the lower the concurrency level. In general, it is only safe to release a transaction's locks when it has written its commit record to disk. Locks are divided in two distinct sets: shared and exclusive locks. Shared locks may be released as soon as one finds out that a transaction has carried on its activities and is willing to commit. This point happens after (1.a), when the database has obligated itself to commit the transaction if someone decides to do so.

4. There are unnecessary disk accesses, i.e. too many fsync's. Transactions are written to disk three times when the binary log is enabled. This may be improved, as the binary log is considered the source of truth and used for recovery. Currently, upon recovery, Innodb compiles a list of all transactions that were prepared and not either committed or aborted and checks the binary log to decide on their fates.
If a transaction was written to the binary log, it is committed. Otherwise, it is rolled back. Clearly, one does not need to write and flush the commit records because eventually they will be written by another transaction in the prepare phase or by Innodb's background process that every second writes and flushes Innodb's log buffers. In this WL, we postpone the write and flush at the commit phase, in order to improve performance, and make the time between periodic writes and flushes configurable. The greater the period, the greater the likelihood of increasing the recovery time. In the future, we should improve this scenario by avoiding the write and flush at the prepare phase and relying on the binary log to replay missing transactions. See WL#6305 for further details.

5. The binary log works as both a storage engine and a transaction coordinator, making it difficult to maintain and evolve. The binary log registers as a handler and gets callbacks when preparing, committing, and aborting. This allows it to write cached data to the binary log or manipulate the "transaction state" in other ways. In addition, the binary log acts as a transaction coordinator, whose purpose is to order the commits and handle 2PC commit correctly. The binary log is unsuitable as a handler and is in reality *only* a transaction coordinator. The fact that it registers as a handler causes some problems in maintenance (and potentially performance) and makes it more difficult to implement new features.

PROPOSED SOLUTION (DETAILS)
---------------------------
The work of improving performance when the binary log is enabled can be split in the following tasks:

1. Eliminate prepare_commit_mutex, or the need for it. This has to do with the ordering of the transactions in the binary log compared to the order of transactions in the Innodb logs.

2. Flush the binary log properly. Preparing and committing a transaction to the binary log does not automatically mean that the binary log is flushed.
Indeed, the entire point of performing a group commit of the binary log is to not flush the binary log with each transaction and instead improve performance by reducing the number of flushes per transaction.

3. Handle the release of the read locks so that it is possible to further improve performance. Releasing locks earlier may improve performance, especially for applications that have a high number of reads.

4. Postpone the write and flush at the commit phase. This will improve performance by reducing the number of writes and flushes when the binary log is enabled.

5. Make the binary log be only a transaction coordinator. Although this will not bring any performance improvements, it will simplify the code and ease implementation of the changes proposed in this WL.

NOTES/DISCUSSIONS
-----------------
Facebook's proof of concept showed that a > 20x performance increase was possible when (1), (2) and (4) are fixed [this was publicized on the MySQL Conference /Matz]. Fixing just one of the three doesn't give nearly the improvement, so any new model should take into account these three things.

FUTURE WORK
-----------
We also believe that WL#4925 will improve performance, particularly when hard drives are used and the operating system provides support to pre-allocate files with no overhead. See WL#4925 for further details. Bugs similar to BUG#11938382 need to be fixed because the DUMP Thread grabs mutex LOCK_log to access a hot binary log file, thus harming performance.

WL#2540: Replication event checksums
WL#4832: Improve scalability of binary logging
WL#5493: Binlog crash-safe when master crashed

HIGH LEVEL SOLUTION
-------------------
We propose to execute the following steps to commit a transaction:

1. Ask binary log (i.e. coordinator) to prepare
   a) Request to release locks earlier
   b) Prepare Innodb (Callback to (2.a))
2.
Prepare Innodb:
   a) Write prepare record to Innodb log buffer
   b) Sync log file to disk
3. Ask binary log (i.e. coordinator) to commit
   a) Lock access to flush stage
   b) Write a set of transactions to the binary log
   c) Unlock access to flush stage
   d) Lock access to sync stage
   e) Flush the binary log to disk
   f) Unlock access to sync stage
   g) Lock access to commit stage
   h) Commit Innodb (Callback to (4.a))
   i) Unlock access to commit stage
4. Commit Innodb
   a) Write a commit record to Innodb log buffer

Similar steps happen when a transaction is rolled back but still needs to write its changes to the binary log because non-transactional operations were executed. The difference is that the transaction is never prepared and a rollback is called instead of a commit. For simplicity we omit the cases for all servant calls. See additional information on these cases in what follows.

MAKING BINARY LOG TRANSACTION COORDINATOR
-----------------------------------------
The solution is, in short, to not register the binary log as a handlerton and instead extend the =TC_LOG= interface to cover the cases where critical work is done by the binary log handlerton functions. There are a few places where some work needs to be done by the binary log, and these naturally extend to other transaction coordinators:

1. When committing, transaction records can be written to the binary log, not only when it is a "real" transaction. This means that we always need to have a call to a logging function such as the =log_xid= function, and not only when it is an XA transaction. If we do that, all the binlog writing from =binlog_commit= can be moved to this logging function.

2. When rolling back a transaction, a transaction record can potentially be written to the binary log, or the caches have to be cleared to be able to execute a new transaction.

3. When setting a savepoint, the binary log needs to set a marker in the transaction cache to be able to truncate written data on a rollback.

4.
When rolling back to a savepoint, the transaction cache needs to be truncated.

5. When the connection is closed, it is necessary to clean up data for the session.

6. The commit job for the binary log is done in the =log_xid= replacement mentioned in point 1 above, but for symmetry it might make sense to introduce a =commit= function as well, or just introduce the =commit= function and let the =TC_LOG= do everything there.

Based on the existing functions in =handlerton.cc= (=ha_commit_trans=, =ha_rollback_trans=, and =ha_prepare=), we stratify the commit interface in three levels:

1. Low-level transaction functions used by the transaction coordinator. These functions commit all handlertons and reset transaction data for the thread and are called when the transaction is actually committed. This means copying information from the other functions in =handlerton.cc= and creating separate functions =ha_commit_low=, =ha_prepare_low=, and =ha_rollback_low=. These functions are used by the transaction coordinator to prepare, commit, or rollback the transactions.

2. Transaction coordinator functions: =commit=, =prepare=, and =rollback= are added. For =TC_LOG_DUMMY= and =TC_LOG_MMAP= these just call the corresponding low-level functions above, but for the binary log the low-level functions above will not be called until the transaction is successfully written to the binary log.

3. High-level transaction functions used by the server: =ha_commit_trans=, =ha_prepare_trans=, and =ha_rollback_trans=. These functions will use the transaction coordinator interface to execute 2PC, if necessary. By and large, they remain intact to an external observer (except for the name change of =ha_prepare= to =ha_prepare_trans=). The =ha_commit_trans= function contains the 2PC procedure and will call the TC_LOG functions at the appropriate times.
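The difference between the two kinds of level-2 coordinator can be sketched in a few lines. The sketch below is illustrative only (Python for brevity; only the name ha_commit_low mirrors the worklog, the classes are hypothetical): the dummy coordinator calls the low-level commit immediately, while a log-backed coordinator writes the transaction to its log first and only then commits the engines.

```python
committed = []

def ha_commit_low(txn):
    """Level 1 stand-in: the low-level commit in the storage engines."""
    committed.append(txn)
    return 0

class TCLogDummy:
    """Level 2 coordinator with no log: commit the engines right away."""
    def commit(self, txn):
        return ha_commit_low(txn)

class BinlogCoordinator:
    """Level 2 coordinator backed by a log: the engines commit only
    after the transaction has been written to the (toy) binary log."""
    def __init__(self):
        self.binlog = []
    def commit(self, txn):
        self.binlog.append(txn)    # write (and eventually sync) first
        return ha_commit_low(txn)  # only then the low-level commit

TCLogDummy().commit("trx-0")       # no coordinator log involved
tc = BinlogCoordinator()
tc.commit("trx-1")
assert committed == ["trx-0", "trx-1"]
assert tc.binlog == ["trx-1"]      # trx-1 was logged before engine commit
```

The point of the layering is visible here: both coordinators end in the same level-1 call, and the level-3 server code would only ever talk to the level-2 `commit` interface.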
At no time may a function at level N call functions at level N+1, but it is possible that a transaction coordinator decides to abort a transaction instead of committing it.

BINARY LOG GROUP COMMIT
-----------------------
This feature involves three queues associated with three stages for commit and is the base for binary log group commit.

1. Flush Stage. In this stage, the caches are copied to the binary log cache and written to disk.

2. Sync Stage. In this stage, the binary log file is synced to disk.

3. Commit Stage. In this stage, all transactions that participated in the sync stage are committed in the same order as they were written to the binary log. As aforementioned, this is a requirement imposed by Innodb Hot Backup. There will be an option to enable this behavior, and by default transactions may commit in any order, thus further improving performance.

In particular, the first transaction that reaches a stage is elected leader and the others are followers. The leader will perform the stage on behalf of the followers and itself. Followers release all latches and go waiting on a condition variable that is signalled when the leader has finished the commit stage. When entering a stage, the leader will then grab the entire queue of sessions that have queued up for the stage and process them according to the stage. When leaving a stage, the leader queues up the full queue for the next stage, and if the queue for the next stage was empty, it will be the leader for this stage as well; otherwise, it will change role and become a follower. With this strategy, stages that take a long time will accumulate many sessions while still allowing earlier stages to execute as large batches as possible. Even temporary degradations resulting from OS or hardware behaviour will allow the procedure to adapt and accumulate larger batches. For the flush queue, the leader acts a little differently.
Instead of taking the entire queue, the leader will read from the flush queue until the last transaction is unqueued or a timeout expires. For each transaction fetched from the flush queue, it writes the transaction's contents to the binary log's memory buffer. If unqueueing a transaction results in an empty queue, the leader in the flush phase will write the binary log's memory buffer to disk and proceed to the sync stage. It is necessary to take the transactions one by one, since emptying the queue will cause the next thread that enqueues to become a stage leader, which means that the currently running thread has to leave the flush stage. The idea is that by taking as many transactions as possible and including them in the sync, we will increase the number of transactions per sync and thereby improve the overall performance. The timeout puts a bound on how long one should wait until proceeding to the next phase. Otherwise, one could wait until all running transactions had committed, thus causing performance problems. Notice that when the time has expired, the entire queue (i.e. flush queue) has to be fetched, otherwise there will be no leader to take care of the next batch.

Finally, to further improve performance, shared locks are released in the prepare phase and commit records are written to memory and eventually to disk by a transaction in the prepare phase or periodically by Innodb's checkpoint process.

MAKING BINARY LOG TRANSACTION COORDINATOR
-----------------------------------------

class TC_LOG
{
  /*
    Prepare the coordinator to handle a new session. This works as a
    startup function.
  */
  virtual int open_connection(THD* thd)= 0;

  /*
    This works as a shutdown function and deinitializes any structure
    created to handle the session.
  */
  virtual int close_connection(THD* thd)= 0;

  /*
    Log a commit record of the transaction to the transaction
    coordinator log.
    When the function returns, the transaction commit is properly
    logged to the transaction coordinator log and can be committed in
    the storage engines.
  */
  virtual int commit(THD *thd, bool all)= 0;

  /*
    Log a rollback record of the transaction to the transaction
    coordinator log. When the function returns, the transaction has
    been aborted in the transaction coordinator log.
  */
  virtual int rollback(THD *thd, bool all)= 0;

  /*
    Called when a new savepoint is defined and gives the chance to
    allocate any internal structure to keep track of the savepoint, if
    necessary.
  */
  virtual int savepoint_set(THD* thd, SAVEPOINT *sv)= 0;

  /*
    Called when a savepoint is released and gives the chance to clean
    up any internal structure allocated to keep track of the
    savepoint, if necessary.
  */
  virtual int savepoint_release(THD* thd, SAVEPOINT *sv)= 0;

  /*
    Called when a transaction is rolled back to a previously defined
    savepoint and is used to throw away saved changes that are not
    necessary after the rollback.
  */
  virtual int savepoint_rollback(THD* thd, SAVEPOINT *sv)= 0;
};

class TC_LOG_DUMMY: public TC_LOG
{
  int open_connection(THD* thd) { return 0; }
  int close_connection(THD* thd) { return 0; }
  int commit(THD *thd, bool all) { return ha_commit_low(thd, all); }
};

class TC_LOG_MMAP: public TC_LOG
{
  int open_connection(THD* thd) { return 0; }
  int close_connection(THD* thd) { return 0; }
  int commit(THD *thd, bool all);
};

int TC_LOG_MMAP::commit(THD *thd, bool all)
{
  /* Get information on Xid to do a 2-PC */
  ha_commit_low(thd, all); /* Call engines to carry on the commit.
  */
  /* Release information on Xid */
}

class MYSQL_BIN_LOG: public TC_LOG
{
  int open_connection(THD* thd);
  int close_connection(THD* thd);
  int commit(THD *thd, bool all);
  int rollback(THD *thd, bool all);
  int savepoint_set(THD* thd, SAVEPOINT *sv);
  int savepoint_release(THD* thd, SAVEPOINT *sv);
  int savepoint_rollback(THD* thd, SAVEPOINT *sv);
};

int MYSQL_BIN_LOG::open_connection(THD* thd)
{
  /* Allocate memory used to store transaction's changes. */
}

int MYSQL_BIN_LOG::close_connection(THD* thd)
{
  /* Deallocate memory used to store transaction's changes. */
}

int MYSQL_BIN_LOG::commit(THD *thd, bool all)
{
  /* Call batch_commit(). */
}

int MYSQL_BIN_LOG::rollback(THD *thd, bool all)
{
  /*
    Call batch_commit() if there is any non-transactional change that
    requires to be written to the binary log.
  */
}

int MYSQL_BIN_LOG::savepoint_set(THD* thd, SAVEPOINT *sv)
{
  /* Sets a savepoint. */
  /*
    In the future, can be used to improve how the binary log handles
    savepoints. This is however out of the scope of this WL.
  */
}

int MYSQL_BIN_LOG::savepoint_release(THD* thd, SAVEPOINT *sv)
{
  /* Does nothing. */
  /*
    In the future, can be used to improve how the binary log handles
    savepoints. This is however out of the scope of this WL.
  */
}

int MYSQL_BIN_LOG::savepoint_rollback(THD* thd, SAVEPOINT *sv)
{
  /* Rolls back the binary log to a pre-defined savepoint. */
  /*
    In the future, can be used to improve how the binary log handles
    savepoints. This is however out of the scope of this WL.
  */
}

BINARY LOG GROUP COMMIT
-----------------------
Besides doing the aforementioned changes and removing the prepare_commit_mutex, we present in what follows the core of the WL.

int MYSQL_BIN_LOG::batch_commit(THD* thd, bool all)
{
  /*
    Add transactions to the flush queue. The first transaction becomes
    the leader and proceeds to the next stages. Followers will block
    and eventually will return the commit's status: success or error.
    The stage is executed under the LOCK_log.
  */
  if (change_stage(thd, Stage_manager::FLUSH_STAGE, thd, NULL, &LOCK_log))
    return finish_commit(thd->commit_error);

  /* Write itself and followers' contents to the binary log. */
  THD *flush_queue= NULL; /* Gets a pointer to the flush_queue */
  error= flush_stage_queue(&flush_queue);

  /*
    Before going into this stage, the flush_queue is copied into the
    sync_queue, then the LOCK_log is released and the LOCK_sync is
    acquired.
  */
  if (change_stage(thd, Stage_manager::SYNC_STAGE, flush_queue,
                   &LOCK_log, &LOCK_sync))
    return finish_commit(thd->commit_error);

  /* Sync the binary log according to the option sync_binlog. */
  THD *sync_queue= NULL; /* Gets a pointer to the sync_queue */
  error= sync_stage_queue(&sync_queue);

  /*
    This stage is skipped if we do not need to order the commits and
    each thread has to execute the handlerton commit instead. However,
    since we are keeping the lock from the previous stage, we need to
    unlock it if we skip the stage.
  */
  if (opt_binlog_order_commits)
  {
    if (change_stage(thd, Stage_manager::COMMIT_STAGE, sync_queue,
                     &LOCK_sync, &LOCK_commit))
      return finish_commit(thd);

    THD *commit_queue= NULL;
    /*
      Commit all transactions in the same order as they were written
      into the binary log.
    */
    error= commit_stage_queue(&commit_queue);
    mysql_mutex_unlock(&LOCK_commit);
    final_queue= commit_queue;
  }
  else
  {
    final_queue= sync_queue;
    mysql_mutex_unlock(&LOCK_sync);
  }

  /*
    Notify followers that they can carry on their activities: either
    commit themselves if opt_binlog_order_commits is false or simply
    return the result to clients.
  */
  stage_manager.signal_done(final_queue);
  return finish_commit(thd);
}

int MYSQL_BIN_LOG::flush_stage_queue(THD** queue) { }
int MYSQL_BIN_LOG::sync_stage_queue(THD** queue) { }
int MYSQL_BIN_LOG::commit_stage_queue(THD** queue) { }

ADDED OPTIONS
-------------
sql/sys_vars.cc:

static Sys_var_mybool Sys_binlog_order_commits(
       "binlog_order_commits",
       "Issue internal commit calls in the same order as transactions are"
       " written to the binary log.",
       GLOBAL_VAR(opt_binlog_order_commits),
       CMD_LINE(OPT_ARG), DEFAULT(FALSE));

static Sys_var_int32 Sys_binlog_max_flush_queue_time(
       "binlog_max_flush_queue_time",
       "The maximum time that the binary log group commit will keep reading"
       " transactions before it flush the transactions to the binary log (and"
       " optionally sync, depending on the value of sync_binlog).",
       GLOBAL_VAR(opt_binlog_max_flush_queue_time),
       CMD_LINE(REQUIRED_ARG),
       VALID_RANGE(0, 100000), DEFAULT(0), BLOCK_SIZE(1),
       NO_MUTEX_GUARD, NOT_IN_BINLOG);

storage/innobase/handler/ha_innodb.cc:

static MYSQL_SYSVAR_UINT(flush_log_at_timeout, srv_flush_log_at_timeout,
  PLUGIN_VAR_OPCMDARG,
  "Write and flush logs every (n) second.",
  NULL, NULL, 1, 0, 2700, 0);

User Documentation
==================
Changelog entry: System variable descriptions:
- log.html#sysvar_binlog_order_commits
- log.html#sysvar_binlog_max_flush_queue_time
- parameters.html#sysvar_innodb_flush_log_at_timeout

Copyright (c) 2000, 2020, Oracle Corporation and/or its affiliates. All rights reserved.
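Returning to the stage design above: the leader/follower queueing protocol — the first session into an empty stage queue becomes the leader, and the leader processes everyone queued behind it as one batch — can be illustrated with a small single-threaded sketch. This is illustrative only (Python, toy data structures), not the server's actual implementation:

```python
from collections import deque

class Stage:
    """One commit stage (flush, sync, or commit) with its session queue."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()

    def enqueue(self, txn):
        """Return True if this session becomes the stage leader,
        i.e. it arrived while the queue was empty."""
        was_empty = not self.queue
        self.queue.append(txn)
        return was_empty

    def drain(self):
        """Leader grabs the entire queue and processes it as one batch."""
        batch = list(self.queue)
        self.queue.clear()
        return batch

flush = Stage("flush")
roles = [flush.enqueue(t) for t in ("trx-1", "trx-2", "trx-3")]
assert roles == [True, False, False]          # only trx-1 leads
batch = flush.drain()                         # one binlog write...
assert batch == ["trx-1", "trx-2", "trx-3"]   # ...covers three transactions
```

This captures why slow stages help rather than hurt: while a leader is busy, the queue behind it grows, so the next leader processes a larger batch with the same number of writes and fsyncs.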
https://dev.mysql.com/worklog/task/?id=5223
setbuf, setvbuf, setvbuf_unlocked, setbuffer, setlinebuf - Assign buffering to a stream

#include <stdio.h>

void setbuf( FILE *stream, char *buffer );
int setvbuf( FILE *stream, char *buffer, int mode, size_t size );
int setvbuf_unlocked( FILE *stream, char *buffer, int mode, size_t size );
void setbuffer( FILE *stream, char *buffer, int size );
void setlinebuf( FILE *stream );

Standard C Library (libc)

Interfaces documented on this reference page conform to industry standards as follows: setbuf(), setvbuf(): XPG4, XPG4-UNIX. Refer to the standards(5) reference page for more information about industry standards and associated tags.

stream
    Specifies the input/output stream.
buffer
    Points to a character array.
mode
    Determines how the stream parameter is buffered.
size
    Specifies the size of the buffer to be used.

The setbuf() function causes the character array pointed to by the buffer parameter to be used instead of an automatically allocated buffer. Use the setbuf() function after a stream has been opened but before it is read or written. If the buffer parameter is a null pointer, input/output is unbuffered. A constant, BUFSIZ, defined in the stdio.h header file, tells how large an array is needed: char buf[BUFSIZ];

For the setvbuf() function, the mode parameter determines how the stream parameter is buffered:

_IOFBF
    Causes input/output to be fully buffered.
_IOLBF
    Causes output to be line buffered. The buffer is flushed when a new line is written, the buffer is full, or input is requested.
_IONBF
    Causes input/output to be completely unbuffered.

If the buffer parameter is not a null pointer, the array that the parameter points to is used for buffering instead of a buffer that is automatically allocated. The size parameter specifies the size of the buffer to be used. The constant BUFSIZ in the stdio.h header file is one buffer size. If input/output is unbuffered, the buffer and size parameters are ignored.
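The effect of full buffering is easy to observe in any runtime with layered I/O. As an illustration by analogy (not part of the C interface), Python's io.BufferedWriter plays the role of a fully buffered stream, with its buffer_size argument corresponding to setvbuf's size parameter: written bytes stay in memory until the buffer fills or is explicitly flushed.

```python
import io

raw = io.BytesIO()                                 # stands in for the underlying "device"
buffered = io.BufferedWriter(raw, buffer_size=64)  # fully buffered, like _IOFBF

buffered.write(b"hello")
assert raw.getvalue() == b""       # still sitting in the 64-byte buffer

buffered.flush()                   # analogous to fflush(3)
assert raw.getvalue() == b"hello"  # now the bytes have reached the device
```

A line-buffered stream would instead push data through on each newline, and an unbuffered one on every write.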
The setbuffer() function, an alternate form of the setbuf() function, is used after stream has been opened but before it is read or written. The character array buffer, whose size is determined by the size parameter, is used instead of an automatically allocated buffer. If the buffer parameter is a null pointer, input/output is completely unbuffered. The setbuffer() function is not needed under normal circumstances, since the default file I/O buffer size is optimal.

The setlinebuf() function is used to change stdout or stderr from block buffered or unbuffered to line buffered. Unlike the setbuf() and setbuffer() functions, the setlinebuf() function can be used any time the file descriptor is active.

A buffer is normally obtained from the malloc() function at the time of the first getc() or putc() function on the file, except that the standard error stream, stderr, is normally not buffered. Output streams directed to terminals are always either line buffered or unbuffered.

The setvbuf_unlocked() function is functionally identical to the setvbuf() function, except that setvbuf_unlocked() may be safely used only within a scope that is protected by the flockfile() and funlockfile() functions used as a pair. The caller must ensure that the stream is locked before these functions are used.

A common source of error is allocating buffer space as an automatic variable in a code block, and then failing to close the stream in the same block.

The setvbuf() and setvbuf_unlocked() functions return zero when successful. If they cannot honor the request, or if you give an invalid value in the mode argument, they return a nonzero value.

If the following condition occurs, the setvbuf() function sets errno to the corresponding value: [EBADF] The file descriptor that underlies stream is invalid.

Functions: fopen(3), fread(3), getc(3), getwc(3), malloc(3), putc(3), putwc(3)

Standards: standards(5)

setbuf(3)
http://nixdoc.net/man-pages/Tru64/man3/setlinebuf.3.html
CC-MAIN-2020-16
en
refinedweb
Unit Test Your Configuration Files

Peter Benjamin ・5 min read

Infrastructure as Code. Photo courtesy: @Bass Emmen

Originally published at

Overview

The era of Infrastructure-as-Code (IaC) has unlocked tremendous developer productivity and agility features. Now, as engineers, we can declare our infrastructure and environments as structured data in configuration files, such as Terraform templates, Dockerfiles, and Kubernetes manifests. However, this agility and speed of provisioning and configuring infrastructure comes with a high risk of bugs in the form of misconfigurations. Fortunately, we can solve this problem just as we solve other bugs in our products: by writing unit tests.

One such tool that can help us unit test our configuration files is conftest. What is unique about conftest is that it uses Open-Policy-Agent (OPA) and a policy language called Rego to accomplish this. This might appear difficult at first, but it will start to make sense. Let's explore 2 use-cases where we can test our configurations!

Getting Started

First, some prerequisites:
- conftest: macOS: brew install instrumenta/instrumenta/conftest
- (Optional) opa: macOS: brew install opa

Dockerfile

Let's say we want to prevent the use of certain images and/or tags (e.g. latest). We need to create a simple Dockerfile:

FROM kalilinux/kali-linux-docker:latest
ENTRYPOINT ["echo"]

Now, we need to create our first unit test file, let's call it test.rego, and place it in a directory, let's call it policy (this is configurable).
package main

disallowed_tags := ["latest"]
disallowed_images := ["kalilinux/kali-linux-docker"]

deny[msg] {
    input[i].Cmd == "from"
    val := input[i].Value
    tag := split(val[i], ":")[1]
    contains(tag, disallowed_tags[_])
    msg = sprintf("[%s] tag is not allowed", [tag])
}

deny[msg] {
    input[i].Cmd == "from"
    val := input[i].Value
    image := split(val[i], ":")[0]
    contains(image, disallowed_images[_])
    msg = sprintf("[%s] image is not allowed", [image])
}

Assuming we are in the right directory, we can test our Dockerfile:

$ ls
Dockerfile policy/
$ conftest test -i Dockerfile ./Dockerfile
FAIL - ./Dockerfile - [latest] tag is not allowed
FAIL - ./Dockerfile - [kalilinux/kali-linux-docker] image is not allowed

Just to be sure, let's change this Dockerfile to pass the test:

# FROM kalilinux/kali-linux-docker:latest
FROM debian:buster
ENTRYPOINT ["echo"]

$ ls
Dockerfile policy/
$ conftest test -i Dockerfile ./Dockerfile
PASS - ./Dockerfile - data.main.deny

"It works! But I don't understand how," I hear you thinking to yourself. Let's break the Rego syntax down:
- package main is a way for us to put some rules that belong together in a namespace. In this case, we named it main because conftest defaults to it, but we can easily do something like package docker and then run conftest test -i Dockerfile --namespace docker ./Dockerfile
- disallowed_tags & disallowed_images are just simple variables that hold an array of strings
- deny[msg] { ... } is the start of the deny rule, and it means that the Dockerfile should be rejected and the user should be given an error message msg if the conditions in the body (i.e. { ... }) are true
- Expressions in the body of the deny rule are treated as logical AND. For example:
  1 == 1                    # IF 1 is equal to 1
  contains("foobar", "foo") # AND "foobar" contains "foo"
  # This would trigger the deny rule
- input[i].Cmd == "from" checks if the Docker command is FROM. input[i] means we can have multiple Dockerfiles being tested at once.
This will iterate over them.
- The next 2 lines are assignments that just split a string and store some data in variables
- contains(tag, disallowed_tags[_]) will return true if the tag we obtained from the Dockerfile contains one of the disallowed_tags. The array[_] syntax means iterate over values
- msg := sprintf(...) creates the message we want to tell our user if this deny rule is triggered
- The second deny[msg] rule checks that the image itself is not on the blocklist.

Kubernetes

Let's say we want to ensure that all pods are running as a non-root user. We need to create our deployment:

$ mkdir -p kubernetes
$ cat <<EOF >./kubernetes/deployment.yaml
EOF

Now, we need to create our unit test:

$ mkdir -p ./kubernetes/policy
$ cat <<EOF >./kubernetes/policy/test.rego
package main

name := input.metadata.name

deny[msg] {
    input.kind == "Deployment"
    not input.spec.template.spec.securityContext.runAsNonRoot
    msg = sprintf("Containers must run as non root in Deployment %s. See:", [name])
}
EOF

And, let's run it:

$ conftest test -i yaml ./kubernetes/deployment.yaml
FAIL - ./kubernetes/deployment.yaml - Containers must run as non root in Deployment nginx-deployment. See:

This is a bit more straightforward:
- Get the metadata.name from the input (which is the Kubernetes Deployment yaml file)
- Create a deny rule that is triggered if: input.kind is Deployment and securityContext.runAsNonRoot is not set
- Return an error message to the user that containers must run as non-root and point them to the docs.

Next Steps

So, where to go from here? The Rego language is vast and it can take a bit to wrap your head around how it works. You can even send and receive HTTP requests inside Rego. I recommend reading the docs to learn more about Rego's capabilities.

I also barely scratched the surface with conftest in this blog post. The repository has a nice list of examples that you should peruse at your leisure. conftest even supports sharing policies via uploading OPA bundles to OCI-compliant registries, e.g.
conftest push ..., conftest pull ....

Lastly, if you have any questions, the OPA community is friendly and welcoming. Feel free to join the #conftest channel in OPA Slack. Happy coding!

I applaud the sentiment of unit testing configuration, however it concerns me that a test can appear more complex than the configuration under test :) Are there other tools and techniques that can be used effectively to test a deployment config?

Hi Phil, that's a great question. I don't know of many tools or testing frameworks that are general enough to be applied to any structured data. One such tool that falls under this category is terratest, which requires you to write your tests in Go (I haven't had a chance to explore it). On the other hand, there are a number of specialized testing frameworks/tools for specific types of configuration files. So, it depends on your use-case.

Cool, thanks Peter, I have reading to do!
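As a footnote to the Dockerfile policy above: the logic that the two Rego deny rules encode (reject a Dockerfile whose FROM image or tag is on a blocklist) can be sketched in plain Python. This is only an illustration of the rule logic, not how conftest/OPA evaluate policies, and it checks exact blocklist membership rather than Rego's contains().

```python
# Plain-Python sketch of the two deny rules from the Dockerfile policy:
# flag any FROM line whose tag or image is on a blocklist.
DISALLOWED_TAGS = ["latest"]
DISALLOWED_IMAGES = ["kalilinux/kali-linux-docker"]

def check_dockerfile(text):
    errors = []
    for line in text.splitlines():
        if not line.strip().lower().startswith("from "):
            continue                       # only FROM lines matter here
        ref = line.split()[1]              # e.g. "debian:buster"
        image, _, tag = ref.partition(":")
        if tag in DISALLOWED_TAGS:
            errors.append(f"[{tag}] tag is not allowed")
        if image in DISALLOWED_IMAGES:
            errors.append(f"[{image}] image is not allowed")
    return errors

print(check_dockerfile('FROM kalilinux/kali-linux-docker:latest\nENTRYPOINT ["echo"]'))
```

Running it against the blog post's first Dockerfile yields the same two failures conftest reported; against `FROM debian:buster` it returns an empty list.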
https://practicaldev-herokuapp-com.global.ssl.fastly.net/petermbenjamin/unit-test-your-configuration-files-3mnf
Wikidump infobox extractor.

Project description

Extract infoboxes from wikidumps.

To create a wikidump for a specific category or group of articles, you can use Wikipedia's special export feature. Download the .xml file, and then you can convert the xml dump to a .js file containing a list of infobox objects.

The package is able to handle a variety of infoboxes and can correctly parse lists within infoboxes, such as:

| Holding = {{ordered list
|style=text-align: left;
|1=States may not prohibit citizens from contracting insurance out of state for acts performed outside the state.
|2=States may not prohibit citizens from contracting insurance out of state by written communication, even if the property to be insured is within the state.
}}

The package can also handle multiline items (the example below is handled as one element):

|Prior=Patent application 07/479,666 filed, February 13, 1990; Examiner's rejection affirmed by Board of Patent Appeals and Interferences, ''Ex parte Zurko, et al'', July 31, 1995 (_ USPQ 2d _, Appeal No. 94-3967); request for reconsideration denied, December 1, 1995; Board decision reversed, ''In re Zurko, et al'' 111 [[F.3d]] [ 887] ([[Fed. Cir.]] 1997); reheard, Board decision reversed, 142 [[F.3d]] [ 1447] (Fed. Cir. 1998) (en banc); petition for writ of certiorari granted, {{ussc|525|961|1998|el=no}}

Infobox elements that appear on the same line as the infobox declaration are also handled:

{{infobox| above = Arizona v. California

Finally, infoboxes are matched regardless of proper spelling or capitalization (nfobox, Infobox, and infobox are all matched).

Installation

$ pip3 install wikidump-infobox-extractor

Usage

$ infodump <xml dump file path> <output file path>

Notes

Wikidumps and Wikipedia pages have a lot of errors. This package does a pretty good job of dealing with them. However, you will likely need to do some key correction after the parse. Spelling, capitalization, and relevance all need to be analyzed.
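The field splitting described above can be sketched in a few lines of Python. This is a minimal illustration, not the package's actual implementation: it handles the "|key = value" syntax and multiline items, but a real parser (like this package) must also cope with nested templates whose bodies contain "|" characters, as in the {{ordered list}} example.

```python
# Minimal sketch of splitting an infobox body into key/value pairs.
# A line starting "|key = value" opens a field; following lines without
# that shape are treated as multiline continuations of the last field.
import re

FIELD_RE = re.compile(r"^\|\s*([^=|]+?)\s*=\s*(.*)$")

def parse_infobox(text):
    fields = {}
    current = None
    for line in text.splitlines():
        m = FIELD_RE.match(line)
        if m:
            current = m.group(1)
            fields[current] = m.group(2)
        elif current is not None:
            fields[current] += "\n" + line   # multiline item
    return fields

box = """|Litigants=Younger v. Harris
|ArgueYear=1970
|Prior=Patent application 07/479,666 filed,
February 13, 1990"""
print(parse_infobox(box)["ArgueYear"])   # prints 1970
```

Here the Prior field spans two lines and is returned as one element, mirroring the multiline behavior described above.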
For instance, below are all the keys from Wikipedia's Supreme Court Cases (after I manually edited some pages on Wikipedia to remove non-Supreme Court Cases):

If you wish to check your processed infoboxes, you can do something such as:

import json

f = open("./wiki-dump-out.js", 'r')
case_dict = json.loads(f.read())
f.close()

keys = set()
for obj in case_dict:
    for key in obj:
        keys.add(key)

for key in sorted(keys):
    print(key)

Keys from Wikipedia's Supreme Court Cases

Abrogated Advocates for Appellant Advocates for Appellee ArgueDate ArgueDate1 ArgueDate2 ArgueDateA ArgueDateB ArgueDateC ArgueYear Argument Claim Concur Concurrence Concurrence/Dissent Concurrence/Dissent2 Concurrence/Dissent3 Concurrence/Dissent4 Concurrence/Dissent5 Concurrence2 Concurrence3 Concurrence4 Concurrence5 Concurrence6 DecideDate DecideYear Dissent Dissent2 Dissent3 Dissent4 Docket Docket2 Docket3 FiledDate FiledYear FullName Fullname Holding JoinConcurrence JoinConcurrence/Dissent JoinConcurrence/Dissent2 JoinConcurrence/Dissent3 JoinConcurrence/Dissent4 JoinConcurrence/Dissent5 JoinConcurrence2 JoinConcurrence3 JoinConcurrence4 JoinConcurrence5 JoinConcurrence6 JoinDissent JoinDissent2 JoinDissent3 JoinDissent4 JoinMajority JoinMajority2 JoinMajority3 JoinPlurality JoinPlurality2 LawsApplied Limited Litigants Litigants2 Litigants3 Majority Majority2 Majority3 NotParticipating Opinion OpinionAnnouncement Oral Argument OralArgument OralArguments OralReargument Outcome Overruled Overturned previous case ParallelCitations Parties PerCuriam PetitionDate PetitionYear Plurality Plurality2 Prior Procedural QuestionsPresented QuestionsPresnted ReargueDate ReargueDate2 ReargueDateA ReargueDateA2 ReargueDateB ReargueDateB2 ReargueYear ReargueYear2 Related SCOTUS Seriatim Seriatim2 Seriatim3 Seriatim4 SubmitDate SubmitYear Subsequent Superseded USPage USVol Vote above abovestyle bodystyle caption citations court data11 data13 data15 data2 data23 data24 data25 data26 data27 data3 data3class data4
data5 data6 data7 data8 data9 data9class date decided full name header1 header10 header12 header14 header2 header3 header4 header5 header6 header7 header8 header9 headerstyle image italic title judges label2 label23 label24 label25 label26 label27 label3 label4 label5 label6 label7 label8 label9 name opinions prior actions subsequent actions title

Example xml input:

<mediawiki xmlns="">
  <siteinfo>
    // ...
  </siteinfo>
  <page>
    <title>Younger v. Harris</title>
    <ns>0</ns>
    <id>1712852</id>
    <revision>
      <id>881592324</id>
      <parentid>877923066</parentid>
      <timestamp>2019-02-03T16:14:22Z</timestamp>
      <contributor>
        <username>Legalskeptic</username>
        <id>11540368</id>
      </contributor>
      <comment>added link to district court opinion</comment>
      <model>wikitext</model>
      <format>text/x-wiki</format>
      <text xml:space="preserve">{{Infobox SCOTUS case }} }}
'''''Younger v. Harris''''', 401 U.S. 37 (1971),{{ref|citation}} was a case in which the [[United States Supreme Court]] held that [[United States federal courts]] were required to [[abstention doctrine|abstain]] from hearing any [[civil rights]] [[tort]] claims brought by a person who is currently being [[prosecution|prosecuted]] for a matter arising from that claim.

==Facts==
A [[California]] statute prohibited advocating "unlawful acts of force or violence [to] effect political change." The [[defendant]], Harris, was charged with violating the statute, and he sued under [[42 U.S.C. § 1983]] to get an injunction preventing District Attorney [[Evelle J. Younger]] from enforcing the law on the grounds that it violated the free speech guarantee.

==Decision and precedent==
In an 8-1 decision, the Court held that federal courts may not hear the case until the person is [[convicted]] or found not guilty of the crime unless the defendant will suffer an irreparable injury that is "both great and immediate." Merely having to endure a criminal prosecution is no such irreparable harm.
There are three exceptions to Younger abstention: #Where the prosecution is in bad faith (i.e. the state knows the person to be innocent)—as applied in ''[[Dombrowski v. Pfister]]'';). ==Status as precedent== The doctrine was later. Moreover, the principle of abstention applies to some state administrative proceedings. In regard to the exceptions which the ''Younger'' Court articulated, later decisions make it clear that these are highly difficult to meet. #''Bad faith prosecution'': in no case since ''Younger'' was decided has the Supreme Court found there to exist bad faith prosecution sufficient to justify a federal court injunction against state court proceedings. The Court has specifically declined to find bad faith prosecution even in circumstances where repeated prosecutions had occurred. As commentator [[Erwin Chemerinsky]] states, the bad-faith prosecution exception seems narrowly limited to facts like those in ''Dombrowski''.<ref>Erwin Chemerinsky, ''Federal Jurisdiction'' (5th ed. 2007), Aspen Publishers, p.860</ref> Other scholars have even asserted that the possible range of cases which would fit the ''Dombrowski'' model and allow an exception to the no-injunction rule is so limited as to be an "empty universe."<ref>Chemerinsky, p. 859-60</ref> #''Patently unconstitutional law'': in no case since ''Younger'' was decided has the Supreme court found there to exist a patently unconstitutional law sufficient to justify a federal court injunction against state court proceedings. The Court has specifically declined to find such patent unconstitutionality in at least one case (Trainor v. Hernandez) <ref>431 US 434 (1977), [ oyez.org]</ref> #''Inadequate state forum'': the Supreme Court has found the state forum in question to be inadequate on a small number of occasions.<ref>e.g. Gerstein v. Pugh, 420 U.S. 103 (1975), [ oyez.org] Gibson v. Berryhill, 411 U.S. 
564 (1973), [ oyez.org]</ref> == See also == * [[Abstention doctrine]] * [[Anti-Injunction Act (1793)]] ==References== {{reflist}} ==External links== * {{wikisource-inline|Younger v. Harris}} * {{note|citation}}{{caselaw source | case = ''Younger v. Harris'', {{ussc|401|37|1971|el=no}} | courtlistener = | findlaw = | justia = | oyez = | loc = | googlescholar = }} [[Category:United States Supreme Court cases]] [[Category:United States Supreme Court cases of the Burger Court]] [[Category:United States Constitution Article Three case law]] [[Category:United States abstention case law]] [[Category:1971 in United States case law]]</text> <sha1>rw2jnxxjqezqnunfqwnga1xgjheawtt</sha1> </revision> </page> // ... Output [{ "title": "Younger v. Harris", }}" }, // ... ]
https://pypi.org/project/wikidump-infobox-extractor/
Introduction to the Morpheus DataFrame

Morpheus can help you scale on multi-core processor architectures and facilitate the development of performant analytical software. Come learn all about this powerful tool!

The Morpheus library is designed to facilitate the development of high-performance analytical software involving large datasets for both offline and real-time analysis on the Java Virtual Machine (JVM). The library is written in Java 8 with extensive use of lambdas but is accessible to all JVM languages.

Motivation

At its core, Morpheus provides a versatile two-dimensional memory-efficient tabular data structure called a DataFrame, similar to that first popularized in R. While dynamically typed scientific computing languages like R, Python, and Matlab are great for doing research, they are not well-suited for large-scale production systems, as they become extremely difficult to maintain and dangerous to refactor. The Morpheus library attempts to retain the power and versatility of the DataFrame concept while providing a much more type safe and self-describing set of interfaces, which should make developing, maintaining, and scaling code complexity much easier.

Another advantage of the Morpheus library is that it's extremely good at scaling on multi-core processor architectures given the powerful threading capabilities of the Java Virtual Machine. Many operations on a Morpheus DataFrame can seamlessly be run in parallel by simply calling parallel() on the entity you wish to operate on, much like with Java 8 Streams. Internally, these parallel implementations are based on the fork and join framework, and near-linear improvements in performance are observed for certain types of operations as CPU cores are added.
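The contract behind that parallel() call, running the same element-wise computation across worker threads and getting exactly the serial result, can be sketched with a quick analogue. Morpheus is a Java library; nothing below uses its API, this is just the concept in Python.

```python
# Conceptual analogue of serial vs. parallel() evaluation: the same
# transformation is fanned out across a thread pool, and the result
# must match the serial computation element for element.
from concurrent.futures import ThreadPoolExecutor

values = list(range(1, 11))

def transform(x):
    return x * x

serial = [transform(v) for v in values]

with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(transform, values))   # map preserves order

assert serial == parallel
print(parallel[:3])   # prints [1, 4, 9]
```

On the JVM, the fork/join framework mentioned above additionally splits the index range recursively, which is where the near-linear scaling for CPU-bound operations comes from.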
Capabilities

A Morpheus DataFrame is a column store structure where each column is represented by a Morpheus array, of which there are many implementations, including dense, sparse, and memory-mapped versions. Morpheus arrays are optimized and, wherever possible, are backed by primitive native Java arrays (even for types such as LocalDate, LocalDateTime, etc.), as these are far more efficient from a storage, access, and garbage collection perspective. Memory-mapped Morpheus arrays, while still experimental, allow very large DataFrames to be created using off-heap storage backed by files.

While the complete feature set of the Morpheus DataFrame is still evolving, there are already many powerful APIs to effect complex transformations and analytical operations with ease. There are standard functions to compute summary statistics, perform various types of Linear Regressions, and apply Principal Component Analysis (PCA), to mention just a few. The DataFrame is indexed in both the row and column dimension, allowing data to be efficiently sorted, sliced, grouped, and aggregated along either axis.

Morpheus at a Glance

Let's look at a simple example, a regression example, and a use case.

A Simple Example

Consider a dataset of motor vehicle characteristics accessible here. The code below loads this CSV data into a Morpheus DataFrame, filters the rows to only include those vehicles that have a power to weight ratio > 0.1 (where weight is converted into kilograms), then adds a column to record the relative efficiency between highway and city mileage (MPG), sorts the rows by this newly added column in descending order, and finally records this transformed result to a CSV file.
DataFrame.read().csv(options -> {
    options.setResource("");
    options.setExcludeColumnIndexes(0);
}).rows().select(row -> {
    double weightKG = row.getDouble("Weight") * 0.453592d;
    double horsepower = row.getDouble("Horsepower");
    return horsepower / weightKG > 0.1d;
}).cols().add("MPG(Highway/City)", Double.class, v -> {
    double cityMpg = v.row().getDouble("MPG.city");
    double highwayMpg = v.row().getDouble("MPG.highway");
    return highwayMpg / cityMpg;
}).rows().sort(false, "MPG(Highway/City)").write().csv(options -> {
    options.setFile("/Users/witdxav/cars93m.csv");
    options.setTitle("DataFrame");
});

This example demonstrates the functional nature of the Morpheus API, where many method return types are in fact a DataFrame and therefore allow this form of method chaining. In this example, the methods csv(), add(), and sort() all return a frame: in some cases, the same frame that the method operates on; in other cases, a filter or shallow copy of the frame being operated on. The first 10 rows of the transformed dataset in this example look as follows, with the newly added column appearing on the far right of the frame.

A Regression Example

The Morpheus API includes a regression interface in order to fit data to a linear model using either OLS, WLS, or GLS. The code below uses the same car dataset introduced in the previous example and regresses Horsepower on EngineSize. The code example prints the model results to standard out, which is shown below, and then creates a scatter chart with the regression line clearly displayed.
//Load the data
DataFrame<Integer,String> data = DataFrame.read().csv(options -> {
    options.setResource("");
    options.setExcludeColumnIndexes(0);
});

//Run OLS regression and plot
String regressand = "Horsepower";
String regressor = "EngineSize";
data.regress().ols(regressand, regressor, true, model -> {
    System.out.println(model);
    DataFrame<Integer,String> xy = data.cols().select(regressand, regressor);
    Chart.create().withScatterPlot(xy, false, regressor, chart -> {
        chart.title().withText(regressand + " regressed on " + regressor);
        chart.subtitle().withText("Single Variable Linear Regression");
        chart.plot().style(regressand).withColor(Color.RED).withPointsVisible(true);
        chart.plot().trend(regressand).withColor(Color.BLACK);
        chart.plot().axes().domain().label().withText(regressor);
        chart.plot().axes().domain().format().withPattern("0.00;-0.00");
        chart.plot().axes().range(0).label().withText(regressand);
        chart.plot().axes().range(0).format().withPattern("0;-0");
        chart.show();
    });
    return Optional.empty();
});

==============================================================================================
                                 Linear Regression Results
==============================================================================================
Model:                       OLS    R-Squared:                  0.5360
Observations:                 93    R-Squared(adjusted):        0.5309
DF Model:                      1    F-Statistic:              105.1204
DF Residuals:                 91    F-Statistic(Prob):        1.11E-16
Standard Error:          35.8717    Runtime(millis)                 52
Durbin-Watson:            1.9591
==============================================================================================
   Index    | PARAMETER | STD_ERROR |  T_STAT  |  P_VALUE  | CI_LOWER | CI_UPPER |
----------------------------------------------------------------------------------------------
 Intercept  |   45.2195 |   10.3119 |   4.3852 |  3.107E-5 |   24.736 |  65.7029 |
 EngineSize |   36.9633 |    3.6052 |  10.2528 | 7.573E-17 |   29.802 |  44.1245 |
==============================================================================================

UK House Price Trends

It is possible to
access all UK residential real-estate transaction records from 1995 through to current day via the UK Government Open Data initiative. The data is presented in CSV format and contains numerous columns, including such information as the transaction date, price paid, fully qualified address (including postal code), property type, lease type, and so on.

Let's begin by writing a function to load these CSV files from Amazon S3 buckets, and since they are stored one file per year, we provide a parameterized function accordingly. Given the requirements of our analysis, there is no need to load all the columns in the file, so below we only choose to read columns at index 1, 2, 4, and 11. In addition, since the files do not include a header, we rename columns to something more meaningful to make subsequent access a little clearer.

/**
 * Loads UK house price data from the Land Registry stored in an Amazon S3 bucket
 * Note the data does not have a header, so columns will be named Column-0, Column-1 etc...
 * @param year the year for which to load prices
 * @return the resulting DataFrame, with some columns renamed
 */
private DataFrame<Integer,String> loadHousePrices(Year year) {
    String resource = "";
    return DataFrame.read().csv(options -> {
        options.setResource(String.format(resource, year.getValue()));
        options.setHeader(false);
        options.setCharset(StandardCharsets.UTF_8);
        options.setIncludeColumnIndexes(1, 2, 4, 11);
        options.getFormats().setParser("TransactDate", Parser.ofLocalDate("yyyy-MM-dd HH:mm"));
        options.setColumnNameMapping((colName, colOrdinal) -> {
            switch (colOrdinal) {
                case 0: return "PricePaid";
                case 1: return "TransactDate";
                case 2: return "PropertyType";
                case 3: return "City";
                default: return colName;
            }
        });
    });
}

Below, we use this data in order to compute the median nominal price (not inflation adjusted) of an apartment for each year between 1995 through 2014 for a subset of the largest cities in the UK.
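The analysis just described (keep only apartment transactions in the cities of interest, group by city, and take the median price) can be sketched in plain Python. This is only an illustration of the logic with invented sample rows, not the Morpheus API:

```python
# Plain-Python sketch of the per-year computation: filter flats ("F"),
# group by city, record the median price. The records are made up.
from statistics import median

CITIES = {"LONDON", "MANCHESTER"}

records = [  # (price_paid, property_type, city) -- invented sample rows
    (250000, "F", "LONDON"),
    (310000, "F", "LONDON"),
    (120000, "F", "MANCHESTER"),
    (500000, "D", "LONDON"),      # not a flat: filtered out
    (90000,  "F", "SHEFFIELD"),   # not a tracked city: filtered out
]

by_city = {}
for price, prop_type, city in records:
    if prop_type == "F" and city.upper() in CITIES:
        by_city.setdefault(city.upper(), []).append(price)

medians = {city: median(prices) for city, prices in by_city.items()}
print(medians)
```

In the Morpheus version that follows, the filter becomes rows().select(...), the grouping becomes rows().groupBy("City"), and the median comes from the column's stats() interface, with the per-year loop run in parallel.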
There are about 20 million records in the unfiltered dataset between 1993 and 2014, and while it takes a fairly long time to load and parse (approximately 3.5GB of data), Morpheus executes the analytical portion of the code in about five seconds (not including load time) on a standard Apple MacBook Pro purchased in late 2013. Note how we use parallel processing to load and process the data by calling results.rows().keys().parallel().

//Create a data frame to capture the median prices of Apartments in the UK's largest cities
DataFrame<Year,String> results = DataFrame.ofDoubles(
    Range.of(1995, 2015).map(Year::of),
    Array.of("LONDON", "BIRMINGHAM", "SHEFFIELD", "LEEDS", "LIVERPOOL", "MANCHESTER")
);

//Process yearly data in parallel to leverage all CPU cores
results.rows().keys().parallel().forEach(year -> {
    System.out.printf("Loading UK house prices for %s...\n", year);
    DataFrame<Integer,String> prices = loadHousePrices(year);
    prices.rows().select(row -> {
        //Filter rows to include only apartments in the relevant cities
        final String propType = row.getValue("PropertyType");
        final String city = row.getValue("City");
        final String cityUpperCase = city != null ? city.toUpperCase() : null;
        return propType != null && propType.equals("F") && results.cols().contains(cityUpperCase);
    }).rows().groupBy("City").forEach(0, (groupKey, group) -> {
        //Group row filtered frame so we can compute median prices in selected cities
        final String city = groupKey.item(0);
        final double priceStat = group.colAt("PricePaid").stats().median();
        results.data().setDouble(year, city, priceStat);
    });
});

//Map row keys to LocalDates, and map values to be percentage changes from start date
final DataFrame<LocalDate,String> plotFrame = results.mapToDoubles(v -> {
    final double firstValue = v.col().getDouble(0);
    final double currentValue = v.getDouble();
    return (currentValue / firstValue - 1d) * 100d;
}).rows().mapKeys(row -> {
    final Year year = row.key();
    return LocalDate.of(year.getValue(), 12, 31);
});

//Create a plot, and display it
Chart.create().withLinePlot(plotFrame, chart -> {
    chart.title().withText("Median Nominal House Price Changes");
    chart.title().withFont(new Font("Arial", Font.BOLD, 14));
    chart.subtitle().withText("Date Range: 1995 - 2014");
    chart.plot().axes().domain().label().withText("Year");
    chart.plot().axes().range(0).label().withText("Percent Change from 1995");
    chart.plot().axes().range(0).format().withPattern("0.##'%';-0.##'%'");
    chart.plot().style("LONDON").withColor(Color.BLACK);
    chart.legend().on().bottom();
    chart.show();
});

The percent change in nominal median prices for apartments in the subset of chosen cities is shown in the plot below. It shows that London did not suffer any nominal house price decline as a result of the Global Financial Crisis (GFC); however, not all cities in the UK proved as resilient. What is slightly surprising is that some of the less affluent northern cities saw a higher rate of appreciation in the 2003 to 2006 period compared to London.
One thing to note is that while London did not see any nominal price reduction, there was certainly a fairly severe correction in terms of EUR and USD, since Pound Sterling depreciated heavily against these currencies during the GFC.

Visualization

Visualizing data in Morpheus DataFrames is made easy via a simple chart abstraction API with adapters supporting both JFreeChart as well as Google Charts (with others to follow by popular demand). This design makes it possible to generate interactive Java Swing charts as well as HTML5 browser-based charts via the same programmatic interface. For more details on how to use this API, see the section on visualization here and the code here. There are just a few charts below.

Maven Artifacts

Morpheus is published to Maven Central, so it can be easily added as a dependency in your build tool of choice. The codebase is currently divided into five repositories to allow each module to be evolved independently. The core module, which is aptly named morpheus-core, is the foundational library on which all other modules depend. The various Maven artifacts are as follows:

Morpheus Core

This is the foundational library that contains Morpheus arrays, DataFrames, and other key interfaces and implementations.
<dependency>
    <groupId>com.zavtech</groupId>
    <artifactId>morpheus-core</artifactId>
    <version>${VERSION}</version>
</dependency>

Morpheus Visualization

The visualization components to display DataFrames in charts and tables:

<dependency>
    <groupId>com.zavtech</groupId>
    <artifactId>morpheus-viz</artifactId>
    <version>${VERSION}</version>
</dependency>

Morpheus Quandl

The adapter to load data from Quandl:

<dependency>
    <groupId>com.zavtech</groupId>
    <artifactId>morpheus-quandl</artifactId>
    <version>${VERSION}</version>
</dependency>

Morpheus Google

The adapter to load data from Google Finance:

<dependency>
    <groupId>com.zavtech</groupId>
    <artifactId>morpheus-google</artifactId>
    <version>${VERSION}</version>
</dependency>

Morpheus Yahoo

The adapter to load data from Yahoo! Finance:

<dependency>
    <groupId>com.zavtech</groupId>
    <artifactId>morpheus-yahoo</artifactId>
    <version>${VERSION}</version>
</dependency>

And that's it!

Published at DZone with permission of Xavier Witdouck. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/introduction-to-the-morpheus-dataframe?fromrel=true
class for range index management of curve

#include <IntTools_CurveRangeSampleMapHasher.hxx>

class for range index management of curve

HashCode(): Returns a HashCode value for the Key <K> in the range 0..Upper.

IsEqual(): Returns True when the two keys are the same. Two identical keys must have the same hash code; the converse is not necessarily true.
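The contract stated above, equal keys must hash equal while equal hashes need not imply equal keys, is the standard hasher contract. It can be sketched with a small Python key type; this is illustrative only and is unrelated to the C++ class itself.

```python
# A toy curve-range key demonstrating the hasher contract:
# a == b must imply hash(a) == hash(b); the converse need not hold.
class CurveRange:
    def __init__(self, curve_id, depth):
        self.curve_id = curve_id
        self.depth = depth

    def __eq__(self, other):
        return (self.curve_id, self.depth) == (other.curve_id, other.depth)

    def __hash__(self):
        # Fold the key into a bounded bucket index, in the spirit of
        # HashCode()'s 0..Upper range.
        upper = 101
        return hash((self.curve_id, self.depth)) % upper

a, b = CurveRange(3, 2), CurveRange(3, 2)
assert a == b and hash(a) == hash(b)   # same keys -> same hash code
```

Distinct keys may still collide into the same bucket, which is why IsEqual() (here, __eq__) is needed in addition to the hash.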
https://www.opencascade.com/doc/occt-7.1.0/refman/html/class_int_tools___curve_range_sample_map_hasher.html
Graphic Thinking for Architects & Designers

THIRD EDITION

Graphic Thinking for Architects & Designers

PAUL LASEAU

JOHN WILEY & SONS, INC.
New York  Chichester  Weinheim  Brisbane  Singapore  Toronto

This book is printed on acid-free paper.

Copyright © 2001 by John Wiley & Sons. All rights reserved. Published simultaneously in Canada.

Interior Design: David Levy

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4744. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 605 Third Avenue, New York, NY 10158-0012, (212) 850-6011, fax (212) 850-6008, E-Mail: PERMREQ@WILEY.COM.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold with the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional person should be sought.

Library of Congress Cataloging-in-Publication Data:

Laseau, Paul, 1937-
  Graphic thinking for architects & designers / Paul Laseau. - 3rd ed.
    p. cm.
  Includes bibliographical references and index.
  ISBN 0-471-35292-6 (paper)
  1. Architectural drawing. 2. Communication in architectural design. 3. Architecture - Sketch-books. 4. Graphic arts. I. Title.
  NA2705.L38 2000
  720'.28'4 - dc21

Printed in the United States of America.
Contents

Foreword vi
Preface to the Third Edition vii
Preface to the First Edition viii
Acknowledgments ix
1 Introduction 1

BASIC SKILLS
2 Drawing 17
3 Conventions 39
4 Abstraction 55
5 Expression 67

APPLIED SKILLS
6 Analysis 81
7 Exploration 115
8 Discovery 141
9 Verification 163

COMMUNICATION
10 Process 179
11 Individual Design 189
12 Team Design 203
13 Public Design 217
14 Conclusion 231

Notes 237
Bibliography 239
Illustration Credits 242
Index 244

Foreword

Paul Laseau proposes two related ideas: the first is that of "graphic thinking"; the second is graphic thinking as a device for communication between the designer and the designed for. The following brief remarks are addressed to the relationship between the two ideas.

Craftsmen have always used drawings to help them visualize their ideas as they made adjustments in the continuous process of fitting parts together. Drawing under these conditions is inseparable from the work itself. Some historians say that the working drawings for the great churches of the twelfth and thirteenth centuries were drawn on boards that were later nailed into the construction.

Long ago, when the work of individual craftsmen became larger and more complex, when a cathedral rather than a chair was to be designed, dimensions had to be established so that the work of a single craftsman could be coordinated with the work of many. Drawing was introduced as a creative device for planning work.

But drawing also has other purposes. The division of labor increases productivity. Artifacts requiring several weeks of work by a single skilled craftsman are divided into smaller standardized work tasks. Production is increased as skill is eliminated. The craftsman's expression of material, design sense, and sketches are banished from the workplace. Drawings and specifications predetermine all facets of the work.

Design decisions are given to a new class of workmen who do not work with the material but instead direct the actions of others and who communicate their decisions to those who work through drawings made by draftsmen. Designing, as a separate task, has come into being. The professional designer, the professional draftsman, and the assembly line occur simultaneously as related phenomena.

Historically, building design was not so indifferent to human well-being that "communication with the people" became an issue until the act of drawing was divided into two specialized activities. The first was design drawing, in which the designer expressed his or her ideas. The second was drafting, used to instruct the builder. This all occurred some time ago, but the momentum of the change from craftsmanship to draftsmanship, brought about by the peculiar form of industrialization we have chosen to adopt, persists. It now extends to the division of labor in the designer's office. The building of great buildings is no longer the creation of master craftsmen led by a master builder but of architectural offices organized along the lines of industrial production.

The task of the architect has been divided and subdivided into an assembly line of designer, construction manager, interior designer, decorator, structural, electrical, and mechanical engineers, and draftsmen. Design decisions once made by the designer on the drawing board are now made by the programmer on computer printouts.

Design drawing began as and remains a means of generating ideas, for tapping initial concepts to be sorted out and developed, or simply as an enjoyable activity. Drafting is an eight-hour task performed daily, filling sheets of paper with precise lines dictated by others.

There are those of us who believe that industrialization could have been achieved without destroying the craftsman's skill, love, and respect for material and the joy of building. We find it even less desirable that the joy of creativity and graphic thinking that accompanies that activity should leave the designer's office for the memory bank of a computer. The built world and artifacts around us are evidence of the almost fatal error of basing design on the mindless work of the assembly line. To develop programming and operational research based on mindless design would be to continue a disastrous historic continuum. Graphic thinking is of course necessary to help rejuvenate a moribund design system. But communication "with the people" is not enough. Creativity itself must be shared, and shared with everyone from dowel knocker to "Lieber Meister." The need for graphic thinking is great, but it is greater on the workbenches of the assembly lines at River Rouge than on the desks of the chief designers of Skidmore, Owings & Merrill.

FORREST WILSON, 1980

Preface to the Third Edition

Twenty years have passed since the first publication of this book. The events of the intervening years have served to reinforce my initial assumptions and the points made by Forrest Wilson in the Foreword.

The accelerated developments in personal computers and their application to architectural design and construction have raised more forcefully the question of the role of individual thought and creativity within processes that are increasingly complex and specialized.
Will individuals experience more opportunities for expression and contribution or will their contributions be devalued because of the speed and precision of computer-driven processes? Although the Internet/web has dramatically increased individual access, two major philosophical camps still guide computer development and applications. One camp sees the computer as a way to extend and improve traditional business organization, with its segmentation of tasks and reliance on specialists. The other camp sees the computer as a way to revolutionize business by broadening the scope and impact of the individual to the benefit of both the individual and the organization. One view is of individuals supporting information; the other is of information supporting individuals.

A premise of the first edition of this book was that individual, creative thinking has a vital role in a present and future society that must cope with complex, interrelated problems. Addressing such problems depends upon a comprehensive understanding of their nature rather than shoehorning them into convenient, simplistic, theoretical models. And visual communication provides an important tool for describing and understanding complexity. Increased comprehensive, rather than specialized, knowledge in the possession of individuals should benefit both the organization and the individual. In their book, In Search of Excellence,¹ Peters and Waterman illustrated that the effectiveness of organizations depends upon an understanding of values, aspirations, and meanings that is shared by all members. We are also becoming more aware that the mental and physical health of individuals is a valid as well as practical concern of organizations.
Preface to the First Edition

In the fall of 1976, while participating in a discussion group on design communication at the University of Wisconsin-Milwaukee, I had the occasion to mention my book Graphic Problem Solving. Essentially, that book was an attempt at convincing architects to apply their freehand concept-gathering skills to nontraditional problems dealing more with the processes than the products of architecture. During the discussion, Fuller Moore stated that the graphic skills I had assumed to be part of architectural training were being neglected in the schools and that a more basic book on drawing in support of thinking was needed. Soon after, I had the chance to talk to several architects about the sketches they use to develop designs in contrast to the "finished" drawings they use in presentations. Most creative architects had developed impressive freehand sketching skills and felt comfortable sketching while thinking. Some architects drew observations or design ideas in small sketchbooks they carried with them at all times. Both the architects and the educators I interviewed expressed concern over the apparent lack of freehand graphic skills in people now entering the profession.

As I began to collect materials for this book, I wondered about the relevance of sketching in architecture. Could sketching be better applied to designing as practiced today? The answer to this question depends on an examination of the present challenges to architectural design:

1. To be more responsive to needs, a problem-solving process.
2. To be more scientific, more reliable, or predictable.

The response to these challenges was suggested by Heinz Von Foerster:

...the language of architecture is connotative language because its intent is to initiate interpretation. The creative architectural space begets creativity, new insights, new choices. It is a catalyst for cognition.
This suggests an ethical imperative that applies not only to architects but also to anyone who acts on that imperative. Act always so as to: increase, enlarge, enhance the number of choices.¹

Relating these ideas to the challenges enumerated earlier, I see two corresponding imperatives:

1. Architects should solve problems with people instead of for them by helping them understand their needs and the choices of designs that meet those needs. This is done by bringing those who use the buildings into the process of designing those buildings.
2. Architects must better understand science and how much it has in common with architecture. Jacob Bronowski pointed out that the creative scientist is more interested in exploring and expanding ideas than in establishing fixed "truths." The unique quality of human beings lies in the increase rather than the decrease of diversity.

Within this context, sketches can contribute to design, first by facilitating the exploration and diversity of each designer's thinking. Second, sketches can help open up the design process by developing communication with people instead of presenting conclusions to people.

The notion of graphic thinking grew out of the recognition that sketching or drawing can and should support the designer's thinking. I realize that some readers would be more comfortable with a book about either thinking or drawing, but I felt it was critical to deal with their interaction. Pulling them apart seemed to be like trying to understand how a fish swims by studying the fish and the water separately. I hope you will be able to bear with the rough spots in this book and find some things that will help in your work.

Acknowledgments

This book is dedicated to those architects who generously took time to discuss their use of drawings in design during my original and subsequent research.
Many of them also provided sketches to illustrate the text. Their dedication to creativity in architecture, enthusiasm for drawing, and comments about their design processes were a great help and inspiration for my work. Among these architects, I am especially indebted to David Stieglitz, Thomas Beeby, Morse Payne, Thomas Larson, Michael Gebhart, Romaldo Giurgola, James Tice, Norman Crowe, Harry Egink, Kirby Lockard, and Steven and Cathi House.

Recognition is due the following people for their particularly important contributions to this effort: Fuller Moore for first suggesting the idea. Robert McKim for his insights to visual thinking and his encouragement. Jim Anderson for vital comments on graphic communication. Karl Brown for comments and other valuable assistance. Michele Laseau for technical assistance. Jack Wyman, Ken Carpenter, Juan Bonta, Charles Sappenfield, and other present and past colleagues at the College of Architecture and Planning, Ball State University for comments and moral support. A special thanks to Forrest Wilson for his enthusiastic support at the humbling outset of this effort.

Finally, thanks must be given to my wife, Peggy, and children, Michele, Kevin, and Madeleine, for their great patience and sacrifices while I struggled with revisions. Previously published drawings were photographed by Jerry Hoffman and Steven Talley.

1 Introduction

Graphic thinking is a term I have adopted to describe thinking assisted by sketching. In architecture, this type of thinking is usually associated with the conceptual design stages of a project in which thinking and sketching work closely together as stimulants for developing ideas.
Interest in this form of thinking is promoted by a reexamination of the history of architectural design, the impact of visual communication in society, and new concepts of the role of design and designers.

There is actually a very strong tradition of graphic thinking in architecture. Looking through reproductions of the notebooks of Leonardo da Vinci, we are struck by the dynamic thinking they reflect. It is impossible to really understand or appreciate da Vinci's thinking apart from his drawings because the graphic images and the thinking are one, a unity. A closer look at these sketches reveals certain features that are instructive for anyone interested in graphic thinking.

1. There are many different ideas on one page; his attention is constantly shifting from one subject to another.
2. The way da Vinci looks at problems is diverse both in method and in scale; there are often perspectives, sections, plans, details, and panoramic views on the same page.
3. The thinking is exploratory, open-ended; the sketches are loose and fragmented while showing how they were derived. Many alternatives for extending the ideas are suggested. The spectator is invited to participate.

What a marvelous example! Here is a mind in ferment, using drawings as a means of discovery rather than as a way to impress other people. Although it is often difficult to find records of developmental sketches in historical documents, there is enough surviving evidence to indicate that the use of sketches for thinking was common to architects throughout history. Depending on the dictates of the building trades or customs, the drawing conventions varied from plan to section to elevation.
For almost two centuries, the Ecole des Beaux Arts in Paris used the plan esquisse as the foundation for its training method. With the establishment of large architectural firms in the United States, three-dimensional scale models gradually replaced drawing for the purposes of design development. The use of designing sketches further declined with the advent of professional model makers and professional renderers.

Figure 1-2 By Edwin Lutyens. Castle Drogo and British Pavilion, 1911 Exposition, Rome.
Figure 1-3 By Edwin Lutyens. Castle Drogo and British Pavilion, 1911 Exposition, Rome.
Figure 1-4 By Alvar Aalto.

There has, of course, been an intense interest in architects' drawings rekindled by exhibits like the Beaux-Arts and 200 Years of American Architectural Drawings. But the emphasis is mostly on communication of the final fixed product, and these presentation drawings tell us practically nothing about the way in which the buildings were designed. The thinking sketches are necessary to understand the step-by-step process. Yet even when the thinking sketches are available, as in the documents of the work of Le Corbusier, they are usually overlooked in favor of the renderings or photos of the finished work.
We are just beginning to appreciate the importance Le Corbusier placed on sketches. As Geoffrey Broadbent notes, "All the internal harmony of the work is in the drawings.... It is incredible that artists today should be indifferent (even hostile) to this prime mover, this 'scaffolding' of the project."¹

Figure 1-5 By Thomas Larson. The Grandberg Residence.
Figure 1-6 By Thomas Beeby. House of Virgil.

Among modern architects, Alvar Aalto has left us probably one of the best models of the graphic thinking tradition. His sketches are rapid and diverse; they deftly probe the subject. Hand, eye, and mind are intensely concentrated. The sketches record the level of development, proficiency, and clarity of Aalto's ideas. There are many other architects whose work we can turn to, particularly here in the United States, where we are experiencing a resurgence of sketching. Their drawings are inventive, diverse, and provocative. Whether they are making notes in a sketchbook or turning over concepts in the design studio, these creative designers are looking for something special over and above solving the design problem, like the gourmet who is looking for something more than food. They enjoy the eureka experience, and they enjoy the search as well. This book is really about finding things, about seeing new ideas, about discovery, and about sharing ideas and discoveries.

Figure 1-7 By Norman Jaffe.
Figure 1-8 Battle of Sety I with the Cheta.
Figure 1-9 Greek geometry.
Figure 1-10 Exploration map.

VISUAL COMMUNICATION THROUGH TIME

Throughout history, vision has had an important impact on thinking. Starting with the caveman, drawings were a way of "freezing" ideas and events outside of him and creating a history.
In many ways, the "second world" man created through his images was critical to the evolution of thinking. Man was able to separate the here and now from what could be imagined, the future. Through images, the world of the spirit, the ideal world of mythology, and compelling utopias became immediate and real. The ideals of an entire culture could be contained in one picture; the unspeakable could be shared with others. From earliest times, this visual expression of thinking has been communal. Once a concept, such as the notion of man being able to fly, was converted to an image, it was free to be reinterpreted again and again by others until the airplane was invented.

Figure 1-11 Constellation of stars.

Man used signs and symbols long before written languages were adopted. Early written languages, such as Egyptian hieroglyphics, were highly specialized sets of symbols derived from pictures. The development of geometry, combining mathematics with diagrams, made it possible to think of structure and other abstractions of reality. This led to the construction of objects or buildings of monumental scale from designs. In addition to trying to make sense of his immediate surroundings, man used drawings to reach out into the unknown. Maps reconstituted from notes and sketches of explorers sparked the imagination and stimulated new discoveries about our world and the universe.

In spite of the ascendance of written language, visual communication continues to be an essential part of the way we think. This is revealed in these phrases that liberally sprinkle our everyday conversation: "I see what you mean; take another look at the situation; put this all in perspective." Although research opinion varies, it seems generally accepted that 70 to 80 percent of what we learn is through sight.
Sight seems to be the most rapid and comprehensive of our senses for receiving information. Through centuries of conditioning, we rely on vision for an early warning of danger. Not only have we come to depend on sight as a primary means of understanding the world, but we have also learned to translate information picked up by the senses into visual clues so that, in many ways, sight is actually used as a substitute for the other senses.

Figures 1-12 to 1-17.

There is ample evidence that visual communication is becoming an even more powerful force in our lives. The most obvious example is television, through which we can explore the skies, the oceans, and the societies of our shrinking planet. We rely heavily on graphics to explain and persuade. Cartoons have become a very sophisticated means of distilling and reflecting our culture. But the most significant revolution is the shift of visual communication from the realm of specialists to that of the general public. Instantly developing film and video recorders are just the beginning of the visual tools that will become as common as the PC and the calculator.

The potential of visual communication will be tested as we begin the twenty-first century. Two overriding features are the deluge of information that we must absorb and the increasingly interactive nature of the problems we must solve. As Edward Hamilton put it, "Up...to the present age we have absorbed information in a one-thing-at-a-time, an abstract, linear, fragmented but sequential way.... Now, the term pattern...will apply increasingly in understanding the world of total-environmental stimuli into which we are moving."²
We seek patterns, not only to screen for significance of information, but also to illustrate processes or structures by which our world operates. The emerging technology for collecting, storing, and displaying different models of reality holds exciting promise. Computer-constructed satellite maps, video games, computer graphics, and the miniaturization of computing and recording equipment will open up a new era in visual communication. The full use of this new capability will be directly related to the development of our own visual thinking. "Computers cannot see or dream, nor can they create: computers are language-bound. Similarly, thinkers who cannot escape the structure of language, who are unaware that thinking can occur in ways having little to do with language, are often utilizing only a small part of their brain that is indeed like a computer." This observation by Robert McKim points out the critical issue of man-machine interaction. The new equipment is of no value in itself; it is only as good as our imagination can make it. If we are to realize the potential of visual technology, we must learn to think visually.

Figure 1-18 Conceptual sketches.

VISUAL THINKING

The study of visual thinking has developed in major part from the study of creativity within the field of psychology. The work of Rudolph Arnheim in the psychology of art has been particularly significant. In his book, Visual Thinking, he laid a basic framework for research by dissolving the artificial barrier between thinking and the action of the senses.
"By cognitive, I mean all mental operations involved in receiving, storing, and processing of information: sensory perception, memory, thinking, learning." This was a new way of understanding perception, namely, an integration of mind and senses; the focus of the study of creativity shifts from the mind or the senses to the interaction of both. Visual thinking is therefore a form of thinking that uses the products of vision: seeing, imagining, and drawing. Within the context of designing, the focus of this book is on the third product of vision, drawings or sketches. When thinking becomes externalized in the form of a sketched image, it can be said to have become graphic. There are strong indications that thinking in any field is greatly enhanced by the use of more than one sense, as in doing while seeing.

Although this book's focus is on architectural design, it is my hope that other readers will find the explanations and examples useful. The long history of architectural design has produced a great wealth of graphic techniques and imagery in response to highly complex, comprehensive, quantitative-qualitative problems. Today, architectural design attempts to deal with our total man-made environment, a problem that is personal and pressing for everyone. The graphic thinking tools used by architects to solve problems of interaction, conflict, efficiency, and aesthetics in buildings have now become important to all parts of society with its own increasingly complex problems.
Figure 1-19 Conceptual sketches.
Figure 1-20 Conceptual sketches using digital media.
Figure 1-21 Graphic thinking process.

GRAPHIC THINKING AS A COMMUNICATION PROCESS

The process of graphic thinking can be seen as a conversation with ourselves in which we communicate with sketches. The communication process involves the sketched image on the paper, the eye, the brain, and the hand. How can this apparently closed network generate ideas that are not already in the brain? Part of the answer lies in the definition of an idea. The so-called new ideas are really a new way of looking at and combining old ideas. All ideas can be said to be connected; the thinking process reshuffles ideas, focuses on parts, and recombines them.

In the diagram of the graphic-thinking process, all four parts (eye, brain, hand, and sketch) have the capability to add, subtract, or modify the information that is being passed through the communication loop. The eye, assisted by perception, can select a focal point and screen out other information. We can readily accept that the brain can add information. But the other two parts, hand and sketch, are also important to the process. A difference often exists between what we intend to draw and what actually is drawn. Drawing ability, materials, and our mood can all be sources of change. And yes, even the image on paper is subject to change. Differences in light intensity and angle, the size and distance of the image from the eye, reflectivity of paper, and transparency of media all open up new possibilities.
The potential of graphic thinking lies in the continuous cycling of information-laden images from paper to eye to brain to hand and back to the paper. Theoretically, the more often the information is passed around the loop, the more opportunities for change. In the sequence of images opposite, for example, I started with a sketch of cartoon-like bubbles to represent spaces in a house that is yet to be designed. Depending on my experience, interests, and what I am trying to do, I will see certain things in the sketch and ignore others. The resulting perceptual image segregates special-use spaces, the living room and kitchen, from several other more private or support spaces. Next, I form a mental image to further organize the spaces and give them orientation based on what I already know about the site or a southern exposure for the living room and kitchen. When this mental image is transferred to paper once more, it goes through yet another change in which the special spaces begin to take on distinctive forms.

This is, of course, an oversimplification of the process. Graphic thinking, like visual communication with the real world, is a continuous process. Information is simultaneously darting all over the network. When graphic thinking is most active, it is similar to watching a fantastic array of fireworks and looking for the one you really enjoy. Not only is it productive, it is fun. In Arnheim's words, "Far from being a passive mechanism of registration like the photographic camera, our visual apparatus copes with the incoming images in active struggle." Visual thinking and visual perception cannot be separated from other types of thinking or perception.
Verbal thinking, for example, adds more to the idea of a kitchen or living room with such qualifiers as bright, open, or comfortable. Obviously, graphic thinking is not all you need to know in order to solve problems or think creatively, but it can be a basic tool. Graphic thinking can open up channels of communication with ourselves and those people with whom we work. The sketches generated are important because they show how we are thinking about a problem, not just what we think about it.

Figure 1-23 Dialogue.

Graphic thinking takes advantage of the power of visual perception by making visual images external and explicit. By putting them on paper, we give visual images objectivity outside our brain, an existence of their own over time. As Robert McKim points out, graphic thinking, as externalized thinking:

has several advantages over internalized thought. First, direct sensory involvement with materials provides sensory nourishment, literally 'food for thought.' Second, thinking by manipulating an actual structure permits serendipity, the happy accident, the unexpected discovery. Third, thinking in the direct context of sight, touch, and motion engenders a sense of immediacy, actuality, and action. Finally, the externalized thought structure provides an object for critical contemplation as well as a visible form that can be shared with a colleague.

To the person who must regularly seek new solutions to problems, who must think creatively, these qualities of immediacy, stimulation, accident, and contemplation are very important. To these qualities I would add one more special attribute of graphic thinking: simultaneity. Sketches allow us to see a great amount of information at the same time, exposing relationships and describing a wide range of subtleties.
Sketches are direct and representative. According to Arnheim, "The power of visual language lies in its spontaneous evidence, its almost childlike simplicity.... Darkness means darkness, things that belong together are shown together, and what is great and high appears in large size and in a high location."

Figure 1-22 Evolution of images.
Figure 1-24 By David Stiegletz. Development sketches on back of a placemat, Siegler Residence.
Figure 1-25 Front of placemat, Hotel Mercur, Copenhagen.

EFFECTIVE COMMUNICATION

A standard story that many architects delight in telling describes how the most basic concept for a multimillion-dollar project was first scribbled on the back of a restaurant napkin. I have wondered why both the teller and the listener always seem to derive amusement from such a story. Perhaps the story restores confidence in the strength of the individual designer, or maybe it is the incongruity that decisions on such important matters are being made in such a relaxed, casual manner. Viewing this story in the context of graphic thinking, it is not at all surprising that inspired, inventive thinking should take place at a restaurant table. Not only are the eyes, minds, and hands of at least two persons interacting with the images on the napkin, but also they are further stimulated by conversation. Besides, these persons are separated from their day-to-day work problems; they are relaxing in a pleasant atmosphere, and with the consumption of good food, their level of anxiety is significantly reduced. They are open, ready, prepared for discovery; indeed, it would be surprising only if the most creative ideas were not born in this setting.
To be effective communicators, architects must:

1. Understand the basic elements of communication (the communicator, the receiver or audience, the medium, and the context) and their role in effectiveness.
2. Develop a graphic language from which to draw the most effective sketches for specific communication tasks.
3. Never take for granted the process of communication, and be willing to take the time to examine their effectiveness.

Basic communication theory stresses the communication loop between the communicator or sender and the receiver in order to attain maximum effectiveness. Response from the audience is essential to a speaker who wants to get his message across. The information coming from the receiver is as important as what the sender, the architect, transmits. And so we must pay very close attention to those persons with whom we hope to communicate. The best approach is to try to place oneself in their shoes. What are they expecting? What are their concerns? Equally important, we should be aware of our own motivations and concerns. Do we have an unconscious or hidden agenda?

Figure 1-26 The structure of communications.

As further chapters review the many ways graphic thinking is used in the practice of architecture, it is critical to remember that individuals cannot really be cut off from their environment or their society. The graphic thinking of one person thrives in the presence of good company and a supportive atmosphere. Seek both enthusiastically. Although the medium with which this book deals is principally freehand sketches, the basic methods are applicable to many graphic media.
But each specific medium has some unique characteristics that have special effects on communication. Experimentation with different media is the fastest route to using them effectively. Although there are books on the use of these media, there is no substitute for practice, because we all have different needs and abilities. The context for communication includes such things as location, time, duration, weather, and type of space; what took place before the communication, what will take place after. We may be able to control some of these context variables, but we cannot afford to ignore them.

Figure 1-27 Gym, St. Mary's College, C. F. Murphy Associates, architects.
Figure 1-28 Wall section, Headquarters Building, Smith, Hinchman & Grylls Associates, Inc.

THE ROLE OF GRAPHIC THINKING IN ARCHITECTURE

To realize the potential of graphic thinking in architecture, we must understand today's prevailing attitudes on the design process and the use of drawings in that process. Training in architectural schools has been primarily geared toward the attainment of finished presentation skills, while in architectural offices, the emphasis has been on turning out working drawings that clearly present the necessary directives for the contractors. In the early 1960s, A. S. Levens was able to write with confidence that:

One source of confusion in thinking about design is the tendency to identify design with one of its languages, drawing. This fallacy is similar to the confusion which would result if musical composition were to be identified with the writing of notes on a staff of five lines. Design, like musical composition, is done essentially in the mind and the making of drawings or writing of notes is a recording process.

Today, we have broader concepts of how and where design takes place, but drawings are still normally thought of as simply representations of ideas; their purpose is to explain to other people the products of our thinking, the conclusions. In response to Levens' analogy, graphic thinking treats drawings more like a piano than a score sheet. Like composition, design is possible without an instrument to provide feedback, but for most designers this is not very productive. Design thinking and design communication should be interactive; this implies new roles for graphics. As we anticipate the potential of computers and other evolving communication technologies, the concept of feedback will be key to effective use of media.

Figure 1-29

ORGANIZATION OF THE BOOK

The first major section of the book is devoted to the basic graphic thinking skills of representation and conception. The section includes four chapters dealing with drawing, the use of conventions, abstraction, and expression. My aim is to promote an awareness of the rich variety of graphic tools available for adding productivity and enjoyment to thinking activities.

The second section of the book addresses the application of graphic thinking to design processes. Its four chapters discuss analysis, exploration, discovery, and verification. Although there are some obvious applications of these uses to a number of design process models, I have purposely avoided promoting a specific design process. One of the problems with design process models is their acceptance in too simplistic a way; types of thinking or behavior are categorized, and the intermeshing of processes and ideas is ignored. Instead of categories, we need flexibility. Manipulation of graphic images, for example, might be used at many stages of designing. I still would not attempt to guess where it would be handy for a specific project. Manipulation of the stereotypes for a building could get designing started. Distortion of an elevation might reveal a new approach to detailing. Reversal of a process diagram might suggest a modification of the building program.

The third section of the book considers graphic thinking as communication in three design contexts: individual, team, and public. The emphasis is on better communication so that ideas can be shared.

This book is a collection of images, ideas, and devices that I hope are helpful and enjoyable. The approach is eclectic rather than discriminating, inclusive not exclusive, expectant not conclusive. The intent is not simply to describe examples but to convey the excitement of graphic thinking and even make it contagious. We all have special, unique capacities for thinking, which, if unlocked, could make great contributions to the solution of problems we face. Arnheim emphasizes that "Every great artist gives birth to a new universe, in which the familiar things look the way they have never before looked to anyone." This book is written in anticipation of a time when many of us will be able to give birth to our own universes.

BASIC SKILLS

2 Drawing

This chapter's focus is on the basic representation skills helpful to graphic thinking methods as presented in the remainder of this book. Developing freehand drawing skills is necessary to the attainment of graphic thinking and perceptual skills. Some might say, "I really admire good drawings and those designers who have a quick hand, but I have accepted the fact that I will never be that good." Bunk!
It just is not so! Anyone can learn to draw well. If you don't believe me, take the time to talk to people who draw very well. You will find that their first drawings were tentative. They probably took every opportunity to draw. With time and hard work, they gradually improved and never regretted the effort they made.

There are two important conditions to keep in mind when trying to develop any skill:

1. Skill comes with repetition.
2. The surest way to practice any skill is to enjoy what you are doing.

Because of the heavy emphasis on rationalization in formal education, many people mistakenly think that they can master a skill, such as drawing, simply by understanding concepts. Concepts are helpful, but practice is essential. The orchestra conductor Artie Shaw once explained why he refused all requests by parents to audition their children. He felt that the worst thing you can do to a talented child is to tell him he has talent. The greats in the music business, regardless of natural talent, became successful through hard work and a commitment to their craft. They believed in themselves but knew they would have to struggle to prove themselves to others. The focus of energy, sense of competition, and years of hard work are essential to becoming a fine musician. The knowledge that drawing and thinking are important to architecture is not sufficient. Natural drawing talent is not enough. To sustain the necessary lifetime effort of learning and perfecting graphic thinking, we need to find pleasure in drawing and thinking. We must be challenged to do it better than those architects we admire do.
Morse Payne of The Architects Collaborative once noted the influence of Ralph Rapsin on many talented designers: "To watch Ralph knock out one of his beautiful perspectives in fifteen minutes was truly inspiring. It set a goal for us that was very challenging." Fortunately, there is still a lot of respect within the architectural profession for high-quality drawing. The person who can express himself both graphically and verbally on an impromptu basis is highly valued. When hiring, offices often look for ability to communicate over ability to be original. They know that your ability to develop ideas with them is much more important in the long run than the idea that you initially bring to them.

It is possible to be an architect without having well-developed graphic thinking skills. A barber or a bartender can surely cut hair or serve drinks without being able to carry on a conversation. But the job is a lot easier if you enjoy talking with people, and you will probably do more business. I believe that graphic thinking can make design more enjoyable and more effective.

Four types of basic skills support graphic thinking: observation, perception, discrimination, and imagination. Although these are considered to be primarily thinking skills, in this chapter I have tried to show how graphic means may be used to promote these skills and attain a fundamental integration of graphics and thinking. The sequence in which the skills are addressed reflects my assumption that each thinking skill supports those that follow.
THE SKETCH NOTEBOOK

Frederick Perls held that, "People who look at things without seeing them will experience the same deficiency when calling up mental pictures, while those who...look at things squarely and with recognition will have an equally alert internal eye." Visual imagery is critical to the creative designer; he must rely on a very rich collection of visual memories. The richness of these memories depends on a well-developed and active visual perception. The sketch notebook is an excellent way of collecting visual images and sharpening perception, for it promotes seeing rather than just looking. Architects who have gotten into the sketch notebook habit quickly discover its usefulness. All I can say is to try it; you'll like it.

A sketch notebook should be small and portable, able to fit into a pocket so it can be carried anywhere. It should have a durable binding and covers so it won't come apart. Carry it with you at all times and leave it next to your bed at night (some of the best ideas come to people just before going to sleep or right upon awakening). As the name implies, it is a book for notes as well as for sketches and for reminders, recipes, or anything else you can think about. Combining verbal and graphic notes helps unite verbal and visual thinking.

Figure 2-2 By Lisa Kolber.
Figure 2-3 By Lawrence Halprin.
Figure 2-4 By Karl Brown.
Figure 2-5 By Karl Mang.
Figure 2-6 By Ronald Margolis. Old Main Building, Wayne University.
Figure 2-7 By Patrick D. Nall.
Figure 2-8 Spanish Steps, Rome.
OBSERVATION

The thousands of students who pass through architectural schools are usually told that they should learn to sketch freehand and, to a certain degree, how. Rarely are they told what they should sketch or why. Drawing cubes and other still-life exercises are an attempt to teach sketching divorced from thinking. Most students find it boring, and it drives some away from sketching for the rest of their lives. I prefer to start students with the sketching of existing buildings because:

1. They are drawing subjects in which they have a basic interest and are ready to discuss.
2. The eye and mind as well as the hand are involved; perception becomes fine-tuned, and we begin to sort out our visual experiences.
3. One of the best ways to learn about architectural design is to look closely at existing buildings and spaces.

The clearest way to demonstrate the value of freehand sketching for developing graphic thinking skills is to compare sketching with photography. Although a camera is often a useful or expedient tool, it lacks many of the attributes of sketches. Sketches have the ability to reveal our perception, therefore giving more importance to certain parts, whereas a photo shows everything with equal emphasis. In the sketch of the Spanish Steps in Rome, the focus is on the church, ellipse, and steps as organizing elements for the entire exterior space. The significant impact of the flowers in the photo has been eliminated in the sketch. The abstraction can be pushed further until there is only a pattern of light and dark, or we can focus only on certain details, such as lampposts or windows. This one scene alone is a dictionary of urban design. But you do not have to wait until you get to Rome to get started; there are lessons all around us.
Become a prospector of architectural design; build your own collection of good ideas while you learn to sketch. It is a lot of fun.

Figure 2-9 Spanish Steps, Rome.
Figure 2-9a Spanish Steps, Rome.
Figure 2-10 Window detail.
Figure 2-11 Street lamp detail.
Figure 2-12a House drawing structure.
Figure 2-12b Tones.

BUILDING A SKETCH

In his book Drawing Buildings, Richard Downer presented the most effective approach to freehand sketching I have ever come across. "The first and most important thing about drawing buildings is to realize that what you intend to draw should interest you as a subject." Next, it is important to select a vantage point that best describes your subject. Now you are ready to build the sketch by a three-step process of sketching basic structure, tones, and then details.

The basic structure sketch is most important. If the parts are not shown in their proper place and correct proportions, it makes no difference what is drawn from then on; the sketch will always look wrong. So take your time; look carefully at the subject; continually compare your sketch with what you see. Now add the tones. These represent the space-defining elements of light, shadow, and color. Again, look carefully at the subject. Where are the lightest tones; where are the darkest? The sketch is becoming more realistic. The details are added last. At this point everything is in its place, and you can really concentrate on the details one at a time. It is no longer overwhelming; you can relax and enjoy it.

Figure 2-12c Texture and color.
Figure 2-12d Finished house drawing.
Figure 2-13a Bowl drawing structure.
Figure 2-13b Tones.
Figure 2-13c Finished bowl drawing.
Figure 2-14
Figure 2-15
Figure 2-16
Figure 2-17

Structure Sketch

The most important part of a sketch, the basic line drawing, is also the most difficult skill to master. It requires a lot of practice, but I have a few suggestions that should help:

1. To help sharpen the sense of proportion needed for sketching, practice drawing squares and then rectangles that are two or three times longer on one side than on the other. Now try to find squares in a scene you are sketching. (At the beginning, this could be done with tracing paper over a photograph.)
2. Use a cross or a frame to get the parts of the sketch in their proper place, or maybe a prominent feature of the scene or subject can act as an organizer for the other parts of the sketch.
3. Although pencil can certainly be used for sketching, I prefer felt-tip or ink pens because the lines they produce are simple and clear. If a line is in the wrong place, it is quite evident. Because the line cannot be erased, it must be redrawn to get it right. This process of repetition and checking against the subject develops skill. Drawings that are so light they can be ignored or erased deny the designer the feedback essential to his improvement.
4. To gain more control over line making, try some simple exercises similar to our "idle moment" doodles. The spirals, like those above, are drawn from the outside toward the center, both clockwise and counterclockwise. Try to make them as fast as possible without letting the lines touch each other; try to get the lines close to each other. Straight hatching can be done in several directions, always striving for consistency.

Figure 2-18
Figure 2-19
Figure 2-20

Tones

Tones can be represented with different densities of hatching or combinations of cross-hatching.
The lines should be parallel and have equal spaces between them. Always remember that the main purpose of the cross-hatching is to obtain different levels of gray or darkness. Use straight strokes as if you were painting the surfaces with a brush. Erratic or irregular lines draw attention to them and distract the eye from more important things. There is no strict rule for applying tones on a sketch, but I have some preferences that seem to work well. Horizontal hatching is used on horizontal surfaces, diagonal hatching on vertical surfaces. When two vertical surfaces meet, the hatching on one is at a slightly different angle from the hatching on the other surface. Apply tones in a three-step process:

1. Indicate any texture that appears in the surface, such as the vertical boards on a barn.
2. If the texture indication does not provide the level of darkness of the subject, add the necessary additional hatching over the entire surface.
3. Now apply more hatching where any shadows fall. To show gradations of shadow, add a succession of hatches at different angles.

The refinement of tones in a drawing is achieved by looking carefully at the subject and by getting more control over the consistency of the lines. Several alternative techniques for sketching in tones are illustrated throughout this book. The one shown at the right above is a rapid method using random strokes. Designers usually develop techniques with which they feel most comfortable.

Figure 2-21
Figure 2-22

Details

Details are often the most interesting or compelling aspect of buildings. The window is an excellent example.
There, the details can be the result of a transition between two materials (brick and glass) or between two building elements (wall and opening). The wood window frame, brick arch, keystone, and windowsill make these transitions possible, and each of these details tells us more about the building. On a regular basis, I have students sketch windows, doors, or other building elements so they gain an understanding and appreciation of the contribution of details to the qualities and functions of the building. Details tell us something of needs and materials as well as our ingenuity in relating them. The sketch of the metal grating around the base of the tree explains both the needs of the tree and the use of the surface under the tree where people walk.

Figure 2-23

In most architectural scenes, there are details close to us and others farther away. We can see more of the close detail and should show in the sketch such things as screws or fasteners or fine joints and textures. As details recede in the sketch, fewer and fewer of the pieces are shown, until only the outline is visible.

Figure 2-24 San Francisco, California.
Figure 2-25 Montgomery, Alabama.

Combining Observations

With practice, structure, tones, and details can be effectively combined to capture the complete sense of a subject. Older houses of different styles are suitable subjects for practicing and developing observation skills. They are usually readily accessible and provide a variety of visual effects that can sustain your interest. Try visiting favorite houses at different times of day in order to view the impact of different lighting conditions. Walk around, approach, and retreat from the subject to capture a variety of appearances.
Building a Sketch 27 TRACING Trac ing ex isting graph ic mat erial is anoth er w ay to bu ild sketching skills. Ma king an overlay of you r ow n drawing s w ith tr acing paper is an ob vious but und er used dev ice. Rath er th an overwork a d raw ing th at is h ead ed in t he w rong directi on, make an ov er lay sh owing th e ele men ts that need to be corrected and then, in anothe r overlay, ma ke a w hol e new ske tc h incorpor ating th e ch an ges. You w ill learn more from yo ur mi stak es, and th e fina l sketc h w ill be better an d fresher. Tracing can also be do ne by lay ing a tran s pare n t s hee t with a grid ov er a draw ing or p ho to, draw ing a larger gr id, and th en transferring the draw in g square by sq uar e. A thi rd tec hniq ue uses a slide projector a nd a sm a ll m irr or to p roject images of a conveni ent size for tracing on your d rawi ng ta ble. The large sketc h on page 3 1 w as don e in this w ay. IJ o 01 1'0 0 Figure 2-26a Orig inal sketc h. n o No m att er th e rea son you th ou gh t copy ing w as im pr op er or illega l, forget it. Ma st er dr af tsm en su ch a s Leon ardo da Vinc i cop ied oth er p eo pl e's wo r k w hen th ey were learn ing to d raw. No tracing is ever th e sa m e a s th e or igina l. You w ill pi ck out some details and simplify other parts. Tracing forces you to look closel y at th e or igina l sketc h or photo an d better un der stand the su bject. DOD Figure 2-26b Overlay sketch. ) MWJrt4~~11-¥ Figure 2-26c Final sketch. M \t?RD1<... IA~LE ~ -611~ ~?etf~ I~ I'1j kt-~td.-l - up ~~~: ~J"'T r "Z061'V-. ~w; ) ""'T' i ii i i " 'm [e-V\ :7L '' ~ lIdu,y flat~ 9 1 A.('~ j. li\ol 4~~~lA-!PtUltw$ ~kj?ru . ~~LWt>r- ~ \ '>\< J :tl~ -n-ttwl~ l?l1X _f!. ~-tvu4d crr 11 11 1 ><1 wood. ~4: ( ~( d0 ~ ~ th '\1\ l?oMt{ 'f~~) Ml~D1< 130)( Fi gure 2-27 Projection table and projection box. 28 Dra wing Fi gure 2-28a Original sketch. Figure 2-28b Enlargement of sketch. Figure 2-29 Tracing after Ray Evans. 
Figure 2-30 Tracing after Ray Evans.
Figure 2-31 Sketch of Athens, Ohio.
Figure 2-32 Sketch of Athens, Ohio.
Figure 2-33 Sketch of Athens, Ohio.
Figure 2-34 a (opposite), b (above) Plan, section, and perspective of garden-court restaurant, Salzburg, Austria.

PERCEPTION

Many architects have become methodical about sketch and note taking. Gordon Cullen, the British illustrator and urban design consultant, had a major influence on the use of analytical sketches. His book Townscape is a wonderful collection of visual perceptions of the urban environment. The sketches are clear and comprehensive, impressive evidence of what can be discovered with graphic thinking. Using plans, sections, and perspectives, the sketches go beyond the obvious to uncover new perceptions. Tones are used to identify major organizers of space. (In the book, many of these tones are achieved mechanically, but they are easily rendered in sketches by hatching with grease pencil or large felt-tip markers.) The verbal categorization of urban phenomena through short titles helps to fix the visual perceptions in our memories; verbal and graphic communications are working together. And these are not complicated sketches; they are within the potential of most designers, as shown in the sketches opposite, which apply Cullen's techniques to the analysis of a small midwestern town. As John Gundelfinger puts it:

A sketchbook should be a personal diary of what interests you and not a collection of finished drawings compiled to impress with weight and number...a finished on-the-spot drawing...shouldn't be the reason you go out, for the objective is drawing and not the drawing.
I often learn more from drawings that don't work out, studying the unsuccessful attempts to see where and why I went off...can learn more than from a drawing where everything fell into place. The drawings that succeed do so in some measure because of the failures I've learned from preceding it, and so certain pitfalls were unconsciously ignored while drawing.

Figure 2-35 Waterfront, Mobile, Alabama.

Each subject may reveal new ways of seeing if we remain open to its special characteristics. It may be the redundancy of forms or a pattern of shadows; it may be an awareness of the special set of elements and circumstances that produces a particularly interesting visual experience. A sketch of the interior of a cathedral can uncover the exciting play of scale and materials. The act of drawing can dramatically heighten your visual sensitivity.

Figure 2-36 Salzburg, Austria.

Figure 2-37 Mobile, Alabama.

Figure 2-38 By Todd Calson. Westminster Cathedral.

Figure 2-39 Ohio University Quad, Athens, Ohio.

Figure 2-40 Cartoon style sketch, after Rowland Wilson.

Figure 2-41 After Saul Steinberg.

Figure 2-42 After Saul Steinberg.

Figure 2-43 After Saul Steinberg.

DISCRIMINATION

Cartoons are an important source of sketching ideas. My favorite sources are The New Yorker and Punch magazines, but there are many other sources. Cartoonists convey a convincing sense of reality with an incredible economy of means. Simple contour lines suggest detail information while concentrating on overall shapes. Michael Folkes describes some of the discipline of cartoon drawings:

...simplicity refers to the need to make the clearest possible statement.... Avoid all unnecessary detail.
Make the focal point of your picture stand out. Refrain from filling every corner with objects or shading.... Train your hand and eye to put down on paper rapidly recognizable situations in the fewest possible strokes. One significant detail is worth far more than an uncertain clutter of lines that don't really describe anything. Make dozens of small pictures...drawing directly in pen and ink so that the pen becomes a natural drawing instrument and not something that can only be used to work painfully over carefully prepared pencil lines.

The cartoon is selective or discriminating; it helps you seek out the essence of an experience.

Figure 2-44 Sketch extending a view derived from the painting, Giovanni Arnolfini and His Bride, by Jan Van Eyck.

Figure 2-45 Drawing from imagination.

Figure 2-46 Drawing from imagination.

IMAGINATION

To move from graphics in support of observation toward graphic thinking that supports designing, you must develop and stretch imagination. Here are some simple exercises to start:

1. Find a drawing, photograph, or painting of a room that shows a part of a space. On a large sheet of paper, draw the scene depicted and then extend the drawing beyond its original frame to show those parts of the room accessible only through your imagination.

2. Draw a set of objects and then draw what you believe to be the view from the backside.

3. Sketch a simple object such as a cube with distinctive markings. Then imagine that you are cutting the object and moving the parts. Draw the different new configurations.

Visual-Mental Games

An entertaining way to improve hand-eye-mind coordination and promote an ability to visualize is to play some simple games.

1.
Show a few people four or five cutouts of simple shapes arranged on a piece of paper (above, left). Out of view of the others, one person moves the cutouts while verbally describing the move. The others attempt to draw the new arrangement from the description. This is repeated a few times to see who can keep track of the position of the shapes. After mastering this exercise, have the persons drawing try to form a mental picture of each new arrangement and then try to draw only the final arrangement. In a second version of this game, an object is substituted for the cutouts, and it is manipulated, opened, or taken apart.

2. Form a circle with a small group. Each person makes a simple sketch and passes it to his right. Everyone tries to copy the sketch he has received and in turn passes the copy to the right. This continues until the final copy is passed to the creator of the original sketch. Then all sketches are arranged on a wall or table in the order they were made. This game illustrates the distinctiveness of individual visual perception (above, center).

3. Doodles, using an architectural or design theme, are another form of puzzle. Here, the objective is to provide just enough clues so the subject is obvious once the title is given (above, right).

There are many visual puzzles that exercise our visual perception. Try some of those shown opposite; look for more puzzles, or invent some of your own. In the sketches opposite, an arbitrary diagram is given and the challenge is to use it as a parti for different buildings by seeing it as standing for a section or plan view for starters.

Figure 2-50 Visual puzzles.

Figure 2-51 Exploring design based on a parti diagram.
3 Conventions

Represent: Call up by description or portrayal or imagination, figure, place, likeness of before mind or senses, serve or be meant as likeness of...stand for, be specimen of, fill place of, be substitute for.

Throughout history, representation and design have been closely linked. The act of designing grew directly out of man's desire to see what could or would be achieved before investing too much time, energy, or money. To create a clay pot meant simply working directly with your hands until the desired result was achieved. But making a gold pot required expensive material, much preparation, time, and energy. A representation, a design drawing, of the gold bowl was necessary before starting the project. Design became an important part of architectural projects simply because of their scale. Representing the imagined building permitted not only a view of the final result but the planning for labor and materials to assure completion of the project.

Figure 3-2

The representational capacity of sketches is limited. We must recognize that even with the most sophisticated techniques drawings are not a full substitute for the actual experience of an architectural environment. On the other hand, the capacity of sketches as thinking tools extends well beyond what is actually contained in the sketches. Drawings, as representations, should be seen as extensions of the person(s) who uses them to aid in thinking. As Rudolf Arnheim says:

The world of images does not simply imprint itself upon a faithfully sensitive organ. Rather, in looking at an object, we reach out for it. With an invisible finger we move through space around us, go out to distant places where things are found, touch them, catch them, scan their surfaces, trace their borders, explore their texture. It is an eminently active occupation.
I find a great variation in the degree to which architects rely on drawings to visualize designs. One probable explanation for this is experience in visualizing and with the building of these designs. For example, when architecture students look at a plan view of a room, they likely see just an abstract diagram, but some experienced architects can visualize a perspective view of the same room without having to draw it.

Figure 3-3

Some basic types of representation sketches, which I feel architects should be able to understand, are discussed in this chapter. I do not intend to present a comprehensive explanation of the construction of basic drawing conventions. There are already several good books on that subject. Rather, the emphasis will be on freehand techniques without the use of triangles, scales, and straightedges, allowing for rapid representation.

Figure 3-4 Site plan.

Figure 3-5 Axonometric.

Figure 3-6 Partial elevation.

Figure 3-7 Detail section.

There are a great number of things we can represent about a space or a building and many ways to represent them. The sketched subjects can range in scale from a building and its surrounding property to a window or a light switch. We might be interested in how it looks or how it works or how to put it together; we may be searching for clarity or character. Variations in drawings range from the concrete to the abstract, and the conventions include section or cut, elevation, perspective, axonometric, isometric, and projections. Media, technique, and style account for many of the other variations. Many of these variations are covered in later chapters. The elementary forms of representation discussed at this point are:

1. Comprehensive views - To study designs as complete systems, we must have models that represent the whole from some viewpoint.

2.
Concrete images - Dealing with the most direct experience. Abstraction is covered in Chapter 4.

3. Perceptual focus - Trying to involve the viewer in the experience signified by the drawing.

4. Freehand sketches - Decision-making in design should include the consideration of many alternatives. Representation of alternatives is encouraged by the speed of freehand sketching, whereas the tediousness of "constructed" hard-line drawings discourages it.

BUILDING A PERSPECTIVE

Figure 3-8a Setting the picture plane and viewpoint.

Figure 3-8b Starting grids.

Figure 3-8c Setting cross-grids.

Figure 3-9a Setting the picture plane and viewpoint, plan view.

Figure 3-9b Setting one grid, plan view.

Figure 3-9c Setting the cross-grids, plan view.

PERSPECTIVE

Perspective sketches have an equal standing with plan drawings, the starting point of most design education. One-point perspective is the easiest and therefore, I feel, the most useful of perspective conventions. I have found the following three-step method to be most successful:

1. Indicate the picture plane in both elevation and plan; it is usually a wall or another feature that defines the far limits of the immediate space to be viewed. Locate the point from which the space is to be viewed, or viewpoint (V.P.). Vertically, this point is usually about 5.5 feet from the bottom of the picture plane. Horizontally, it can be placed just about anywhere in the space with the understanding that parts of the space outside a 50-degree cone of vision in front of the viewer tend to be distorted in the perspective. The horizontal line drawn through the V.P. is called the horizon line.

2. Establish a grid on the floor of the space.
Draw the square grid in plan and count the number of spaces the viewer is away from the picture plane. Then, in the perspective, locate the diagonal vanishing point (D.V.P.) on the horizon line at the same distance from the viewpoint. Draw floor grid lines in the perspective in one direction coming from the viewpoint; draw a diagonal line from the diagonal vanishing point through the bottom corner of the picture plane and across the space. Where the diagonal intersects the floor grid lines running in the one direction, horizontal lines can be drawn to show the other direction of the floor grid.

3. Indicate the structure of the basic elements of the space. Continue the grid on the walls and ceiling (if appropriate). Using the grids as quick reference, place vertical planes and openings as well as significant divisions of the planes.

Figure 3-10a Definition of space.

Sketching straight lines freehand is an important skill to master for all types of graphic thinking, and practice makes perfect. Once you begin to rely on a straightedge, the work slows down. Start by concentrating on where the line begins and ends rather than on the line itself. Place a dot at the beginning and a dot where the line should end. As you repeat this exercise, let the pen drag across the paper between the two dots. This sounds pretty elementary, but it is surprising how many people have never bothered to learn how to sketch a straight line.

With the basic perspective and plan completed, the values, or tones, can now be added. The actual color of objects or planes, shade, or shadows can cause differences in values; indicating these changing values shows the interaction of light with the space, providing spatial definition.
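The rule in step 2 of the three-step perspective method (the D.V.P. sits on the horizon line at the same distance from the V.P. as the viewer stands from the picture plane) can be checked numerically. The short Python sketch below is an illustration added here, not part of the original text, and the dimensions are arbitrary: it projects floor-grid points onto the picture plane by simple perspective division.

```python
def project(x, y, z, d):
    """Project a point onto a picture plane at distance d from the eye.
    Eye at origin, looking down +z; returns (x', y') on the picture plane."""
    return (x * d / z, y * d / z)

d = 4.0   # viewer's distance from the picture plane, in grid squares (assumed)
h = 5.5   # eye height above the floor, per step 1 of the text (feet)

# Receding floor-grid lines all converge on the central vanishing point
# V.P. at (0, 0) on the horizon. Cross-grid lines at depths d, d+1, ...
# project to successively higher (less negative) y' values:
for k in range(4):
    x_img, y_img = project(1.0, -h, d + k, d)
    print(f"cross-grid line {k}: y' = {y_img:.3f}, x' of the x=1 edge = {x_img:.3f}")

# A 45-degree floor diagonal has direction (1, 0, 1); its vanishing point is
# the projection of that direction: (d * 1/1, d * 0/1). So the D.V.P. lies on
# the horizon line (y' = 0), offset from the V.P. by exactly the viewer's
# distance d from the picture plane -- the rule stated in step 2.
dvp = (d * 1.0 / 1.0, d * 0.0 / 1.0)
print("D.V.P. at", dvp)   # (4.0, 0.0)
```

Counting the offset in grid squares, as the text suggests, is the drawing-board equivalent of the last two lines.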
Conventions for casting shadows are presented when plan drawings are discussed. For now, it is enough to note that shadows are first cast in plan and then added to the perspective, using the square grid as a reference. Shade appears on objects on the side opposite to the sun or other source of light, where no direct light falls; shaded surfaces are generally lighter in tone than shadows. As in sketching existing buildings, I prefer to use parallel hatch lines to show tones (see Building a Sketch in Chapter 2). Finally, details and objects can be added. People are most important because they establish the scale of the space and involve the viewer through identification with these sketched figures. Simplicity, realistic proportions, and a sense of movement are basic to good human figures such as these. The square grids help in coordinating the placement of human figures and other objects in plan and perspective. Be sure to place people and objects where they would really be; the purpose of the sketch is to understand the space, not to camouflage it.

Figure 3-11 Casting shadows in plan.

Figure 3-10b Adding tones and shadows.

Figure 3-12 Practice drawing straight lines.

Figure 3-13 Practice drawing people.

Figure 3-10c Completing details.

Figure 3-14 Modification of a one-point perspective.

Figure 3-15 Organization of a modified perspective, after Lockard.

QUALITATIVE REPRESENTATION

At this point we are not interested in the qualities of drawing expression, such as style or techniques; this is covered in Chapter 5. By qualitative representation, I mean the representation of the qualities of a space. In his book Design Drawing, William Lockard makes a very convincing argument for the superiority of perspectives as representational drawings: "Perspectives are more qualitative than quantitative. The experiential qualities of an environment or object can be perceived directly from a perspective...The qualities of the space/time/light continuum are much better represented and understood in perspective (than by other conventions)." Perspectives have the advantage of showing the relationship of all the elements of a space in a way most similar to how we would experience it when built. Although it is true that buildings are not experienced only through perspectives, it is the best way of showing a direct visual experience of a specific space. Lockard's chapter on representation has probably the best explanation of the use of perspective sketches for representation.

Lockard illustrates a perspective view that is close to one-point perspective; it involves an imaginary second perspective point added at some distance from the sketch (see Figure 3-15). Lines running the width of the one-point perspective, parallel with the horizon line, are now slightly slanted in the direction of the imaginary second point. To make the transition from one-point perspective, the top and bottom lines of the picture plane can be given a slight slant and a new plane is established; by drawing a new diagonal, the new diagonal vanishing point can be set. A grid can also be applied to this type of perspective to help in placing objects in the space.

To represent the qualities of an imagined space, we have to know something about the qualities of spaces.
Though this seems obvious, it is often ignored. As architects, we have to look for what gives spaces their special character, the different kinds of light, color, texture, pattern, or shapes possible and how they are combined. Continual sketching in a sketch notebook is one sure way of learning about the qualities of spaces. When this knowledge is applied to the representative perspective, we must remember to convey the three-dimensional experience of the space onto a two-dimensional surface, the paper. To do this, we need to illustrate the effects of depth or distance upon those things that give the space its qualities. With an increase in depth, light seems to produce fewer gradations of tone; detail is less evident; texture and color are less vivid; outlines or edges are less sharp. Depth can also be conveyed through overlap of object or contour.

Figure 3-16a Set up of sketch perspective based on Lockard method.

Figure 3-16b Completed sketch perspective.

Figure 3-17 Parallel projections.

PARALLEL PROJECTIONS

Currently in common use, the axonometric sketch is an important alternative to the perspective, plan, and section. The axonometric is simply a projection from a plan or section in which all parallel lines in the space are shown as parallel; this is in contrast to a perspective, where parallel lines are shown as extending from a single point. The axonometric technique is traditional in Chinese drawings. Instead of placing the viewer at a single point from which to view the scene, it gives the viewer the feeling of being everywhere in front of the scene. The axonometric has the additional advantage of representing three-dimensional space while retaining the "true" dimensions of a plan and section.
This last characteristic makes an axonometric easy to draw because all three dimensions are shown at the same scale. Axonometric projections forward or backward from plans or sections are conventionally made at angles of 30, 45, or 60 degrees, but in a sketch the exact angle is not important as long as the projected lines remain parallel.

VERTICAL SECTION

A vertical cut through a space is called a section. What was said about the plan sketch also applies to the section sketch, except for the casting of shadows. With sections, we can show depth of space by applying the one-point perspective conventions explained earlier. Imagine you are looking at a cut model of the space; the point at which you look directly into the model is where the viewpoint (V.P.) will be placed. The viewpoint is used to project the perspective behind the section.

Figure 3-18 Section.

Human figures are also important for section sketches. Many designers sketch in view lines for the people; this seems to make it easier to imagine being in the space and gives some sense of what can be seen from a particular position in the space. Shadows can be indicated to see the effect of sunlight within the space.

Figure 3-19 Plan.

PLAN SECTION

Abstract plan diagrams such as the one above have many uses in the early conceptual stages of design. This is covered in depth in Chapter 4. However, many architecture students make the mistake of trying to use these plan diagrams to represent the more concrete decisions about the formation of space. Plan sketches of designed spaces must show what is enclosed and what is not, including scale, height, pattern, and detail. A plan is basically a horizontal cut or section through the space. Things that are cut, such as walls or columns, are outlined in a heavy line weight.
Things that can be seen below the place where the plan was cut are indicated in a lighter line weight. Things such as a skylight that cannot be seen because they are above the level of the cut can be shown with a heavy dashed line if desired. The first stage of a representative plan is the heavy outlining of walls, clearly showing openings. In the second stage, doors, windows, furniture, and other details are added. The third-stage sketch includes shadows to show the relative heights of planes and objects. The prevailing convention for shadows casts them on a 45-degree angle, up and to the right. The shadows need only be as long as necessary to clearly show the relative heights of the furniture, walls, etc. Finally, color, texture, or pattern can be added to explain further the character of the space.

OTHER REPRESENTATIONS

A variety of sketches based on the conventions of perspective, plan, section, and axonometric are shown on the next page. By means of sketches, we can cut open, peel back, pull apart, reconstruct, or make concrete objects transparent to see how they are arranged or constructed. These are just a few of the possible extensions of representation. As we use sketches to visualize designs, we should always be ready to invent new tools as needed.

Figure 3-20 Transparent sketch.

Figure 3-21 By Thomas Truax. Structural systems illustrations, Boston City Hall, Kallman, McKinnell & Knowles, architects.

Figure 3-22 Cut-away view, the Simon House, Barbara and Julian Neski, architects.

Figure 3-23 "Explodametric" drawing of a barn.
Figure 3-24 By Helmut Jacoby. Boston Government Service Center, Paul Rudolph, coordinating architects.

SKETCH TECHNIQUE

Figure 3-25 By Helmut Jacoby. Ford Foundation headquarters, Dinkeloo and Roche, architects.

Many architects have developed their own sketching styles in an attempt to quickly represent structure, tones, and detail with a minimum of effort. An especially effective technique is that of Helmut Jacoby, an architectural delineator of international reputation. The quick preliminary studies he uses to plan the final renderings provide remarkable clarity of spatial definition with an economy of means. Notice how, with a range of tight and loose squiggly lines, he can define surfaces, and the rapid way that he suggests people, trees, textures, and other details. The underlying structure of the sketch is usually quite simple, with white areas used to help define space and objects. Jacoby is very aware of variations in tone and the effects of shade and shadow with respect to the surrounding trees as well as the building.

Michael Gebhardt sketches with an emphasis on tones and textures, defining space more through contrasts than line work. With a looping stroke, he is able to establish a consistency that pulls the drawing together and directs attention to the subject rather than the media. In establishing your own style, be sure to examine closely the work of others that you admire; there is no need to start from scratch. Also keep in mind that the objective in sketching is speed and ease.

Figure 3-26 By Brian Lee. Automatic drawing done without looking at the paper. It encourages fluidity of line and naturalness of expression.

Figure 3-27 By Michael F. Gebhardt.
Johns-Manville World Headquarters, The Architects Collaborative.

Figure 3-28 By Bret Dodd.

Designing depends heavily upon representation; to avoid disappointment later, the designer wants to see the physical effects of his decisions. It is inevitable that a student will tell me that he is waiting until he has decided what to do before he draws it up. This is backward. In fact, he cannot decide what to do until he has drawn it. Nine times out of ten, indecisiveness is the result of lack of evidence. Furthermore, a decision implies a choice; recognizing that there is more than one possible design solution, it makes no sense whatsoever to try to determine if one isolated solution is good. Instead, the question should be whether this is the best of the known alternatives. To answer this question, we must also be able to see the other designs. The graphic thinking approach emphasizes sketches that feed thinking and thoughts that feed sketches; one is continually informing the other. For the beginning designer, these points cannot be overemphasized. There is no way to avoid the intense, comprehensive job of representation or modeling in design. The only choice left is whether to make the job easier throughout a professional career by becoming a competent illustrator now. Having said that, I would add the warning that drawing and thinking must be always open to growth. Cliches in drawing lead to cliches in thinking. As John Gundelfinger says:

I never know what a drawing will look like until it is finished. Once you do, that's security, and security is something we can all do without in a drawing.
It comes from working in a particular way or style that enables you to control any subject or situation you encounter, and once you're in control, you stop learning. The nervousness and anxiety that precede a drawing are important to the end result.

Architects who have been able to find adventure and excitement in drawing will readily attend to the great boost it gives to their design work and their thinking. Finally, I want to stress two of my prejudices regarding representative drawing. First, freehand ability is vital for effective use of representation in architectural design. You must be able to turn over ideas rapidly; to do this requires the spontaneous graphic display that rapid sketching provides. Second, attention should be paid to making the sketches faithfully represent design ideas. Avoid adding things to a drawing simply to improve the appearance of the drawing. Changes should reflect conscious changes in the design. Kirby Lockard cautions, "Remember, the best, most direct and honest persuasion for a design's acceptance should be the design itself, and all successful persuasion should be based on competent and honest representations of the design."

Figure 3-29 Design development sketches.

4 Abstraction

The design process can be thought of as a series of transformations going from uncertainty towards information. The successive stages of the process are usually registered by some kind of graphic model. In the final stages of the design process, designers use highly formalized graphic languages such as those provided by descriptive geometry.
But this type of representation is hardly suitable for the first stages, when designers use quick sketches and diagrams...It has been accepted for years that because of the high level of abstraction of the ideas which are handled at the beginning of the design process, they must be expressed necessarily by means of a rather ambiguous, loose graphic language, a private language which no one can properly understand except the designer himself...the high level of abstraction of the information which is handled must not prevent us from using a clearly defined graphic language. Such a language would register the information exactly at the level of abstraction it has, and it would facilitate communication and cooperation among designers.

-JUAN PABLO BONTA

My own version of a graphic language is based on experience with students in the design studio and research in design process communications. It is presented here because I am convinced that a clearly defined graphic language is important both to design thinking and to communication between designers.

Figure 4-2

As Robert McKim pointed out, "A language consists of a set of rules by which symbols can be related to represent larger meanings." The difference between verbal and graphic languages is both in the symbols used and in the ways in which the symbols are related. The symbols for verbal languages are largely restricted to words, whereas graphic languages include images, signs, numbers, and words. Much more significant, verbal language is sequential; it has a beginning, a middle, and an end. Graphic language is simultaneous; all symbols and their relationships are considered at the same time.
The simultaneity and complex interrelationship of reality accounts for the special strength of graphic language in addressing complex problems.

Figure 4-3a Sentence diagram.

Figure 4-3b Graphic diagram.

Figure 4-3c, d, e Graphic "sentences."

GRAMMAR

The graphic language proposed here has grammatical rules comparable to those of verbal language. The diagram of the sentence (Figure 4-3a) shows three basic parts: nouns, verbs, and modifiers such as adjectives, adverbs, and phrases. Nouns represent identities, verbs establish relationships between nouns, and the modifiers qualify or quantify the identities or the relationships between identities. In the graphic diagram (Figure 4-3b), identities are shown as circles, relationships are shown as lines, and modifiers are shown by changes in the circles or lines (heavier lines indicating more important relationships and tones indicating differences in identities). In the sentence diagram, the verb shows a relationship that the subject has to the object: the dog caught the bone. The line in the graphic diagram is bi-directional; it says that the living room is connected to the kitchen and that the kitchen is connected to the living room. Thus the graphic diagram contains many sentences, such as:

1. The very important living room has a minor relationship to the garage (Figure 4-3c).

2. The dining room must be connected to the special spaces, the kitchen and the deck (Figure 4-3d).

3. The future guesthouse will be related to the entry and indirectly to the pool (Figure 4-3e).

There are other ways of drawing "graphic sentences"; three alternatives are shown here:
56 Abstraction 1. Position- An implied gri d is used to establish rela tion sh ips between id en tit ies; th e resulting orde r som et im es m a kes the di agra m easie r to read (Figur e 4-4a ). 2. Proximity- T he degree or in tensity of th e relation sh ips of ide n tities is ind ica ted by the re lative d is tan ces betw een th em. A sign ifica n t increase in d ista nc e can im pl
You want your application to output trace information to the event log and, at the same time, control what level of information is output. Modify web.config to:

Add the EventLogTraceListener listener to the Listeners collection and make it available to your application, as shown in Example 13-18.

Add the TraceSwitch, as shown in Example 13-18.

In the classes you want to output trace information, create a TraceSwitch object using the name of the trace switch you added to the web.config file, and use the WriteIf and WriteLineIf methods of the Trace class to output the required messages, as we demonstrate in our sample application shown in Examples 13-19 through 13-21.

The technique we advocate for writing trace information to the event log involves adding the EventLogTraceListener to the listener collection in web.config. We find it useful to control the level of messages output to the event log, such as outputting only error messages or outputting error and warning messages. Controlling the level of messages that are output involves the use of switches (more about this in a minute).

As discussed in Recipe 13.5, you can add additional listeners to the TraceListeners collection via the web.config file. When a Trace.Write or Trace.WriteLine is executed, all listeners in the TraceListeners collection receive and process their output.

The support that the .NET Framework provides for TraceListeners is more powerful when coupled with switches. Switches provide the ability to control when trace information is sent to the TraceListeners configured for your application. Two switch types are provided in the .NET Framework: BooleanSwitch and TraceSwitch. The BooleanSwitch class supports two states (on and off) that turn the trace output on and off. The TraceSwitch class supports five levels (off, error, warning, info, and verbose) to provide the ability to output messages only for the configured levels.

You must first add the switch and listener information to your web.config file, as shown in Example 13-18. The switch data includes the name of the switch and the value for the switch. The switch name is the name used in your code to access the switch configuration. The value defines the message level to output, as shown in Table 13-1.

Value  Meaning
0      Output no messages
1      Output only error messages
2      Output error and warning messages
3      Output error, warning, and informational messages
4      Output all messages

To output trace messages that use the switch information, you need to create a TraceSwitch object, passing the name of the switch and a general description of the switch. After creating the TraceSwitch, you use it with the WriteIf and WriteLineIf methods of the Trace class to output your messages. The first parameter of either method defines the level for which the message should be output. In other words, if you only want the message to be output when the switch is configured for "warnings," set the first parameter to the TraceWarning property of the switch you created. The second parameter should be set to the message you want to output.

We are not outputting the trace information to the web form, as we have in other examples in this chapter, so it is unnecessary to add the Trace="true" statement to the @ Page directive in the .aspx page or to turn on application-level tracing in the web.config file.

The name used in the constructor of the TraceSwitch must match the name of the switch in the web.config file. If you fail to use the exact name defined in the web.config file, you can wind up spending a great deal of time trying to determine why your messages are not being output as expected.

In a web application, referencing the Trace class without further qualifying the namespace will actually reference the System.Web.Trace class, which does not support the WriteIf and WriteLineIf methods.

To access the Trace class in the System.Diagnostics namespace that provides the WriteIf and WriteLineIf methods, fully qualify the reference:

System.Diagnostics.Trace.WriteIf(level, Message)

Recipe 13.5

<configuration>
  ...
  <system.diagnostics>
    <switches>
      <!-- This switch controls messages written to the event log.
           To control the level of message written to the log set
           the value attribute as follows:
             "0" - output no messages
             "1" - output only error messages
             "2" - output error and warning messages
             "3" - output error, warning, and informational messages
             "4" - output all messages -->
      <add name="EventLogSwitch" value="1"/>
    </switches>
    <trace autoflush="true" indentsize="0">
      <listeners>
        <add name="EventLogTraceListener"
             type="System.Diagnostics.EventLogTraceListener, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
             initializeData="Application" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>

<%@ Page
Tracing With Output To Event Log with Trace Levels (VB)
</div>
</asp:Content>

Option Explicit On
Option Strict On

Imports System
Imports System.Diagnostics

Namespace ASPNetCookbook.VBExamples
  ''' <summary>
  ''' This class provides the code-behind for
  ''' CH13TestTracingWithLevelControlVB.aspx
  ''' </summary>
  Partial Class CH13TestTracingWithLevelControl
    generalTraceSwitch As TraceSwitch
    ")
    End Sub 'Page_Load
  End Class 'CH13TestTracingWithLevelControlVB
End Namespace

using System;
using System.Diagnostics;

namespace ASPNetCookbook.CSExamples
{
  /// <summary>
  /// This class provides the code-behind for
  /// CH13TestTracingWithLevelControlCS.aspx
  /// </summary>
  public partial class CH13TestTracingWithLevelControl)
  {
    TraceSwitch generalTraceSwitch = null;
    //");
  } // Page_Load
} // CH13TestTracingWithLevelControlCS
}
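The gating behavior described above is easy to mirror outside .NET. The sketch below reproduces the Table 13-1 level values in plain Python (the helper names are mine, not the Framework's):

```python
# Levels mirror the TraceSwitch states: off, error, warning, info, verbose
LEVELS = {"off": 0, "error": 1, "warning": 2, "info": 3, "verbose": 4}

def write_line_if(switch_value, message_level, message, log):
    """Append message to log only when the configured switch value
    admits messages of message_level (cf. Trace.WriteLineIf)."""
    if switch_value > 0 and LEVELS[message_level] <= switch_value:
        log.append(message)

log = []
switch_value = 2  # "output error and warning messages"
write_line_if(switch_value, "error", "disk failure", log)
write_line_if(switch_value, "warning", "low memory", log)
write_line_if(switch_value, "info", "request served", log)
print(log)  # ['disk failure', 'low memory']
```

With the switch set to 2, the informational message is dropped while error and warning messages pass, which is exactly what setting value="2" on the EventLogSwitch buys you.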
You don't need regular expressions. Python has a built-in string method that does what you need:

mystring.replace(" ", "_")

Replacing spaces is fine, but I might suggest going a little further to handle other URL-hostile characters like question marks, apostrophes, exclamation points, etc. Also note that the general consensus among SEO experts is that dashes are preferred to underscores in URLs.

import re

def urlify(s):
    # Remove all non-word characters (everything except numbers and letters)
    s = re.sub(r"[^\w\s]", '', s)
    # Replace all runs of whitespace with a single dash
    s = re.sub(r"\s+", '-', s)
    return s

# Prints: I-cant-get-no-satisfaction
print(urlify("I can't get no satisfaction!"))
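Since the linked question also asks about going the other way, the same built-in method covers both directions. A quick sketch (the function names are mine):

```python
def spaces_to_underscores(s):
    # str.replace swaps every occurrence, no regex needed
    return s.replace(" ", "_")

def underscores_to_spaces(s):
    # ...and the reverse
    return s.replace("_", " ")

print(spaces_to_underscores("how do i do this"))  # how_do_i_do_this
print(underscores_to_spaces("how_do_i_do_this"))  # how do i do this
```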
A good Convolutional Neural Network model requires a large dataset and a good amount of training, which is often not possible in practice. Transfer learning provides a way around this. It's a method that uses pre-trained models to obtain better results. A pre-trained model has been previously trained on a dataset and contains the weights and biases that represent the features of whichever dataset it was trained on.

There are two ways to achieve this:

The following table summarizes the method to be adopted according to your dataset properties:

Case I: Small dataset, similar data
Case II: Small dataset, different data
Case III: Large dataset, similar data
Case IV: Large dataset, different data

The following guide uses ResNet50 [1] as the pre-trained model and uses it as a feature extractor for building a ConvNet for the CIFAR10 [2] dataset. The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.

import numpy as np
import matplotlib.pyplot as plt
from scipy.misc import imresize
from keras.datasets import cifar10
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPooling2D
from keras.layers import Dropout, Flatten, GlobalAveragePooling2D
from keras.applications.resnet50 import ResNet50, preprocess_input
from keras.callbacks import ModelCheckpoint

(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train.shape, X_test.shape, np.unique(y_train).shape[0]

# one-hot encoding
n_classes = 10
y_train = np_utils.to_categorical(y_train, n_classes)
y_test = np_utils.to_categorical(y_test, n_classes)

Now, extract features from ResNet50 and save them.

# load model, dropping the top fully-connected layers
model_tl = ResNet50(weights='imagenet',
                    include_top=False,
                    input_shape=(200, 200, 3))

# resize, as the min size of image to be fed into ResNet is (197, 197, 3)
X_train_new = np.array([imresize(X_train[i], (200, 200, 3))
                        for i in range(0, len(X_train))]).astype('float32')

# preprocess data
resnet_train_input = preprocess_input(X_train_new)

# create bottleneck features for training data
train_features = model_tl.predict(resnet_train_input)

# save the bottleneck features
np.savez('resnet_features_train', features=train_features)

# resize testing data
X_test_new = np.array([imresize(X_test[i], (200, 200, 3))
                       for i in range(0, len(X_test))]).astype('float32')

# preprocess to feed it to the pre-trained ResNet50
resnet_test_input = preprocess_input(X_test_new)

# extract features
test_features = model_tl.predict(resnet_test_input)

# save features
np.savez('resnet_features_test', features=test_features)

Finally, build the model in Keras using the extracted features.

# create model
model = Sequential()
model.add(GlobalAveragePooling2D(input_shape=train_features.shape[1:]))
model.add(Dropout(0.3))
model.add(Dense(10, activation='softmax'))
model.summary()

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# save the best weights seen during training
checkpointer = ModelCheckpoint(filepath='model.best.hdf5',
                               save_best_only=True)

model.fit(train_features, y_train,
          batch_size=32, epochs=10,
          validation_split=0.2,
          callbacks=[checkpointer],
          verbose=True, shuffle=True)

# model evaluation
score = model.evaluate(test_features, y_test)
print('Accuracy on test set: {}'.format(score[1]))

The use of transfer learning is possible because the features that ConvNets learn in the first layers are independent of the dataset, and so are often transferable to different datasets.

Update: Also, check the implementation in PyTorch on GitHub at kHarshit/transfer-learning.

Footnotes:

1: ResNet-50 ↩
2: CIFAR10 dataset ↩

References:
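GlobalAveragePooling2D, used above as the first layer of the classifier head, simply averages each feature map over its spatial dimensions. A framework-free sketch of the idea (the function name is mine, not the Keras API):

```python
def global_average_pooling_2d(feature_maps):
    """Collapse a (height, width, channels) tensor, given as nested
    lists, to a vector of per-channel spatial means."""
    height = len(feature_maps)
    width = len(feature_maps[0])
    channels = len(feature_maps[0][0])
    pooled = [0.0] * channels
    for row in feature_maps:
        for pixel in row:
            for c, value in enumerate(pixel):
                pooled[c] += value
    return [total / (height * width) for total in pooled]

# A 2x2 spatial grid with 2 channels
fmap = [[[1.0, 10.0], [3.0, 10.0]],
        [[5.0, 10.0], [7.0, 10.0]]]
print(global_average_pooling_2d(fmap))  # [4.0, 10.0]
```

This is why the bottleneck features can feed a tiny classifier: each ResNet feature map is reduced to a single number before the dropout and softmax layers.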
Why not O(t*b^2)? Can we do this PRIME1 problem without the Sieve of Eratosthenes? What is the time complexity of this program? Here goes the code:

#include <iostream>
using namespace std;

void prime(int a, int b)
{
    int i;
    while (a <= b)
    {
        for (i = 2; i <= a; i++)
        {
            if (a % i == 0)
            {
                if (a == i)
                {
                    cout << a << endl;
                }
                else break;
            }
        }
        ++a;
    }
}

It has been mentioned that the pre-requisite for this course is basic knowledge of any programming language in topics like input-output, variables, operators, control flow, arrays, and functions. Keshav mentioned that the rest of the concepts, like classes, operator overloading, and pointers, are not required for contests. I want to ask whether Data Structures are important or not. Many people say that before moving on to Algorithms one must be able to implement Data Structures. There is a great video series on the YouTube channel of mycodeschool. Here is the link:

Hope it helps!
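On the complexity question: the loop above trial-divides each candidate a by every i up to a itself, so printing the primes in [a, b] costs O((b-a+1)*b) divisions in the worst case. Stopping at the square root of each candidate is enough to decide primality, which answers "can we do this without a sieve" in roughly O((b-a+1)*sqrt(b)). A sketch in Python:

```python
import math

def is_prime(n):
    # Trial division only up to the integer square root of n
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

def primes_between(a, b):
    # O((b - a + 1) * sqrt(b)) divisions in the worst case
    return [n for n in range(a, b + 1) if is_prime(n)]

print(primes_between(10, 30))  # [11, 13, 17, 19, 23, 29]
```

A sieve is still asymptotically better for dense ranges, but for PRIME1-style sparse ranges with large bounds, per-candidate trial division up to the square root is a common sieve-free approach.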
#include <STY_ResultsFilter.h>

Filter that decides which results should be retrieved with a call to getResults() in a styler. Since virtually all calls to getResults() will want results from a specific category, the constructor takes a category mask for convenience. Any custom filtering can be done by deriving from this class.

Definition at line 25 of file STY_ResultsFilter.h.

Constructor, which for convenience takes a mask for accepted override categories. The argument can be NULL for all categories.

Returns true if the filter allows the overrides from the given category.

Returns true if the filter allows the overrides from style sheet entries of lower precedence level than the given style entry. Usually the overrides from all matching entries are allowed, but sometimes an entry may block any further overrides, e.g., if a style contains a 'material' override, any lower-level overrides for 'materialParameters' need to be disregarded, since they were intended for the old material.

Reimplemented in STY_MaterialResultsFilter.
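The behavior described above (a category-mask constructor where NULL means every category, plus a lower-precedence hook that STY_MaterialResultsFilter overrides to block the cascade) can be illustrated with a small sketch. This is an illustrative Python rendering of the pattern, not the actual HDK C++ API:

```python
class ResultsFilterSketch:
    """Illustrative stand-in for the base filter; not the HDK API."""

    def __init__(self, accepted_categories=None):
        # None plays the role of NULL: accept overrides from any category.
        self.accepted = accepted_categories

    def accepts_category(self, category):
        return self.accepted is None or category in self.accepted

    def accepts_lower_precedence(self):
        # Base behaviour: let lower-precedence entries keep contributing.
        return True


class MaterialResultsFilterSketch(ResultsFilterSketch):
    # Once a 'material' override has matched, lower-level
    # 'materialParameters' overrides were meant for the old
    # material, so block the rest of the cascade.
    def accepts_lower_precedence(self):
        return False


f = ResultsFilterSketch({"material"})
print(f.accepts_category("material"))   # True
print(f.accepts_category("geometry"))   # False
print(MaterialResultsFilterSketch().accepts_lower_precedence())  # False
```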
9.2. Sequence to Sequence with Attention Mechanism

In this section, we add the attention mechanism to the sequence to sequence model introduced in Section 8.14 to explicitly select state. Fig. 9.2.1 shows the model architecture for a decoding time step. As can be seen, the memory of the attention layer consists of the encoder outputs of each time step. During decoding, the decoder output from the previous time step is used as the query; the attention output is then fed into the decoder together with the input to provide attentional context information.

Fig. 9.2.1 The second time step in decoding for the sequence to sequence model with attention mechanism.

The layer structure in the encoder and the decoder is shown in Fig. 9.2.2.

import d2l
from mxnet import nd
from mxnet.gluon import rnn, nn

9.2.1. Decoder

Now let's implement the decoder of this model. We add an MLP attention layer which has the same hidden size as the LSTM layer. The state passed from the encoder to the decoder contains three items:

- the encoder outputs of all time steps, which are used as the attention layer's memory with identical keys and values
- the hidden state of the encoder's last time step, which is used to initialize the decoder's hidden state
- valid lengths of the encoder inputs, so the attention layer will not consider encoder outputs for padding tokens

In each time step of decoding, we use the output of the last RNN layer as the query for the attention layer. Its output is then concatenated with the input embedding vector to feed into the RNN layer. Although the RNN layer's hidden state also contains history information from the decoder, the attention output explicitly selects the encoder outputs that are correlated to the query and suppresses other non-correlated information.
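The per-step attention just described (use the last RNN layer's output as the query, score each encoder output, normalize with a softmax, and take the weighted sum) can be sketched framework-free before implementing it in Gluon. Plain dot-product scoring stands in here for the MLP scorer:

```python
import math

def attention_step(query, encoder_outputs):
    """query: list of floats; encoder_outputs: list of same-length
    lists, one per source time step. Returns the context vector."""
    # Dot-product score of the query against each encoder output
    scores = [sum(q * h for q, h in zip(query, enc))
              for enc in encoder_outputs]
    # Softmax over source time steps (shift by max for stability)
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Context: attention-weighted sum of the encoder outputs
    dim = len(query)
    return [sum(w * enc[d] for w, enc in zip(weights, encoder_outputs))
            for d in range(dim)]

enc = [[1.0, 0.0], [0.0, 1.0]]
print(attention_step([10.0, 0.0], enc))  # heavily weights the first output
```

In the real decoder below this runs once per decoding time step, batched over examples, with a masked softmax so padded encoder positions receive zero weight.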
class Seq2SeqAttentionDecoder(d2l.Decoder):
    def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
                 dropout=0, **kwargs):
        super(Seq2SeqAttentionDecoder, self).__init__(**kwargs)
        self.attention_cell = d2l.MLPAttention(num_hiddens, dropout)
        self.embedding = nn.Embedding(vocab_size, embed_size)
        self.rnn = rnn.LSTM(num_hiddens, num_layers, dropout=dropout)
        self.dense = nn.Dense(vocab_size, flatten=False)

    def init_state(self, enc_outputs, enc_valid_len, *args):
        outputs, hidden_state = enc_outputs
        # Transpose outputs to (batch_size, seq_len, hidden_size)
        return (outputs.swapaxes(0, 1), hidden_state, enc_valid_len)

    def forward(self, X, state):
        enc_outputs, hidden_state, enc_valid_len = state
        X = self.embedding(X).swapaxes(0, 1)
        outputs = []
        for x in X:
            # query shape: (batch_size, 1, hidden_size)
            query = hidden_state[0][-1].expand_dims(axis=1)
            # context has same shape as query
            context = self.attention_cell(
                query, enc_outputs, enc_outputs, enc_valid_len)
            # concatenate on the feature dimension
            x = nd.concat(context, x.expand_dims(axis=1), dim=-1)
            # reshape x to (1, batch_size, embed_size + hidden_size)
            out, hidden_state = self.rnn(x.swapaxes(0, 1), hidden_state)
            outputs.append(out)
        outputs = self.dense(nd.concat(*outputs, dim=0))
        return outputs.swapaxes(0, 1), [enc_outputs, hidden_state,
                                        enc_valid_len]

Use the same hyper-parameters to create an encoder and decoder as in Section 8.14; we get the same decoder output shape, but the state structure is changed.

encoder = d2l.Seq2SeqEncoder(vocab_size=10, embed_size=8,
                             num_hiddens=16, num_layers=2)
encoder.initialize()
decoder = Seq2SeqAttentionDecoder(vocab_size=10, embed_size=8,
                                  num_hiddens=16, num_layers=2)
decoder.initialize()

X = nd.zeros((4, 7))
state = decoder.init_state(encoder(X), None)
out, state = decoder(X, state)
out.shape, len(state), state[0].shape, len(state[1]), state[1][0].shape

((4, 7, 10), 3, (4, 7, 16), 2, (2, 4, 16))

9.2.2.
Training

Again, we use the same training hyper-parameters as in Section 8.14. The training loss is similar to that of the seq2seq model, because the sequences in the training dataset are relatively short. The additional attention layer doesn't lead to a significant difference. But due to both the attention layer's computational overhead and the fact that we unroll the time steps in the decoder, this model is much slower than the seq2seq model.

embed_size, num_hiddens, num_layers, dropout = 32, 32, 2, 0.0
batch_size, num_steps = 64, 10
lr, num_epochs, ctx = 0.005, 200, d2l.try_gpu()

src_vocab, tgt_vocab, train_iter = d2l.load_data_nmt(batch_size, num_steps)
encoder = d2l.Seq2SeqEncoder(
    len(src_vocab), embed_size, num_hiddens, num_layers, dropout)
decoder = Seq2SeqAttentionDecoder(
    len(tgt_vocab), embed_size, num_hiddens, num_layers, dropout)
model = d2l.EncoderDecoder(encoder, decoder)
d2l.train_s2s_ch8(model, train_iter, lr, num_epochs, ctx)

loss 0.032, 5317 tokens/sec on gpu(0)

Lastly, we predict several samples.
RAC Attack - Oracle Cluster Database at Home/Manual of Style

In order to get a consistent look-and-feel throughout the RAC Attack Wikibook, I ask that when you make changes, you try to conform to this manual of style. This manual of style is not intended to override or replace the Wikibooks manual of style. It is a local manual of style specific to the RAC Attack wikibook. It provides style guidelines valid only within this book.

General

Conventions

- New Structure
  - The original book was using a flat structure - that is, there was only one level of sub-pages. (RAC_Attack_-_Oracle_Cluster_Database_at_Home/Your_Page)
  - Now, the introduction of a new version of the book introduced a new level: (RAC_Attack_-_Oracle_Cluster_Database_at_Home/Rac_Attack_12c/).
  - The development of the new book is in progress, so expect some changes to the main pages: they will soon point to the new version.
  - The old version of the book will soon be moved to another new level (./RAC_Attack_11g)
- Title Capitalization
  - This book uses title case. For all chapter and section titles, capitalize the first letter of every word except for articles, prepositions, and conjunctions. (This follows conventions from the Chicago Manual of Style, according to Writer's Block and Wikipedia.)
- Templates
  - This book will have a few standard templates...
  - at top, use the custom navbox for each chapter similar to the box here: Help:Categories It should list each page in that chapter with the current page in bold, and non-linked. Might be able to make this page the same as the chapter page itself... that would be cool and wizard-ly. Will have to learn some more wiki-syntax first. :)
  - {{chapnav|previous subpage|next subpage}} at bottom of every page, with prev and next linked
  - this will take care of putting the page in the book category
  - for special, non-book pages (like this one) manually include {{BookCat}}? No - even better, will make a "Special Pages" chapter (excluded from print).
- Images
  - Certain categories should be present for all images.
  - No Thumbnails
- Print Versions
  - The only supported way to get print versions of this book will be with a script that generates LaTeX which is converted to PDF and posted publicly. Unfortunately, the layout is very poor with browser-based printing, the built-in wikibooks PDF creator, and the pediapress PDF creator. The main issue is that in-line images are consistently orphaned, and there's no way to set a "keep-with-previous-paragraph" flag for any of the above mentioned options.

New Chapter Template

Every chapter page is created in the namespace of the book itself and follows this template:

<noinclude>
{{Sidebox|{{RA/TOC}}|}}
<!-- short overview of the chapter can go here -->
</noinclude>
{{RA/NO-ETOC|''Prev: [[../Download Oracle Enterprise Linux|Hardware and Windows Preparation]]''
----
----
}}
{{RA/Chapter|{{subst:SUBPAGENAME}}}}
#{{RA/A|Create VM}}
#{{RA/A|Prep for OS Installation}}
#{{RA/A|OS Installation}}
#{{RA/A|Wrap-up OS Installation}}
#{{RA/A|Create RAC Attack DVD}}
#{{RA/A|Prep for Oracle}}
{{RA/NO-ETOC|----
----
''Next: [[../Create Interconnect|Create Cluster]]''
}}

New Page Template

Follow this example to create new pages:

<noinclude>
{{Sidebox|{{:RAC Attack - Oracle Cluster Database at Home/Linux Install}}|}}
</noinclude>
''Optional intro header paragraph''
----
<br />
<ol><li></li><li>
From the '''SUMMARY''' screen, choose '''Create Virtual Machine'''. Name the new machine '''collabn1''' and select the '''RAC11g''' datastore.
[[File:RA-vmweb-createVM-name.png|none|border|550px]]
</li><li style="margin-top:3em">
Select '''Linux Operating System''' and choose '''Red Hat Enterprise Linux 5 (32-bit)'''.
[[File:RA-vmweb-createVM-guestOS.png|none|border|550px]]
</li><li style="margin-top:3em">
Choose to '''Create a New Virtual Disk'''.
[[File:RA-vmweb-createVM-disk.png|none|border|550px]]
</li><li style="margin-top:3em">
Set the disk size to '''30G''' and name the file '''collabn1/system.vmdk''' – leave all other options at their defaults and click '''Next'''.
[[File:RA-vmweb-createVM-disk-prop.png|none|border|550px]]
</li><li style="margin-top:3em">
Choose to '''Add a Network Adapter'''.
[[File:RA-vmweb-createVM-net.png|none|border|550px]]
</li><li style="margin-top:3em">
Choose to create a '''NAT''' network connection.
[[File:RA-vmweb-createVM-net-prop.png|none|border|550px]]
</li><li style="margin-top:3em">
Choose '''Don't Add a CD/DVD Drive'''.
[[File:RA-vmweb-createVM-cd.png|none|border|550px]]
</li><li style="margin-top:3em">
Choose '''Don't Add a Floppy Drive'''.
[[File:RA-vmweb-createVM-floppy.png|none|border|550px]]
</li><li style="margin-top:3em">
Choose '''Don't Add a USB Controller'''.
[[File:RA-vmweb-createVM-usb.png|none|border|550px]]
</li><li style="margin-top:3em">
Review the configuration and click Finish. Do not power on the virtual machine yet.
[[File:RA-vmweb-createVM-summary.png|none|border|550px]]
</li></ol>
{{RA/NAV|Download Oracle Enterprise Linux|Prep for OS Installation}}

Image Upload Template

Destination Filename: RA-[grouping]-[details].png

Screenshots

Screenshots are non-free media, so they must be uploaded locally to Wikibooks.

Microsoft-Only and Complete Window (Desktop not Visible)

- Do not use this template if it's a partial screenshot or if the desktop is visible (with icons from non-microsoft software)

Non-free media type: Screenshots of Microsoft Software

Summary:
{{information
|description = <Your Description of the Screenshot> - e.g. Screenshot of "run" dialog, about to launch "msinfo32"
|source = <Complete Product Name> - From ABOUT window; e.g. Microsoft Windows Vista Home Premium
|date = <YYYY-MM-DD> - date screenshot was taken or uploaded
|author = Microsoft Corporation, <Name of Person who Took Screenshot> - e.g. Jeremy Schneider
}}
{{non-free use rationale
|module = RAC Attack - Oracle Cluster Database at Home
|copyrights = Copyright <YYYY> Microsoft Corporation - From ABOUT window; e.g. 2007
|source = <Complete Product Name> - From ABOUT window; e.g. Microsoft Windows Vista Home Premium
|not free = Screenshots of proprietary software are copyrighted. It is impossible to create public domain or free screenshots of proprietary software. These screenshots are being used with permission from Microsoft to illustrate how to use the}}]]

All Others - Including Microsoft Partial-Window or Desktop

Non-free media type: Screenshot includes software interface or Screenshots of web pages

Summary:
{{information
|description = <Your Description of the Screenshot> - e.g. Screenshot of "run" dialog, about to launch "msinfo32"
|source = <URL, Product Name(s)> - URL and/or program name(s) from ABOUT window(s)
|date = <YYYY-MM-DD> - date screenshot was taken or uploaded
|author = <Company Name(s)>, <Name of Person who Took Screenshot> - companies whose software/website appears in this screenshot
}}
{{non-free use rationale
|module = RAC Attack - Oracle Cluster Database at Home
|copyrights = <Company Name(s)> - companies whose software/website appears in this screenshot
|source = <URL, Product Name(s)> - URL and/or program name(s) from ABOUT window(s)
}}]]

Diagrams and Other Illustrations

Try to use only content released under free licenses; these should preferentially be uploaded to Commons.
Roads, Towers, and Online Legal Help Online legal help systems have been a major force for good. Millions of people without the luxury of a personal lawyer have benefited from them. They come in many shapes and sizes, from many different worlds: · Legal aid programs and other nonprofit legal service organizations · Courts and government agencies · Law schools and universities · Commercial providers and startups · Private law firms and departments We have an embarrassment of riches when it comes to tools, platforms, and methodologies for applications that address legal needs. The diversity is a sign of health. But there’s also a lot of suboptimal duplication of effort and missed opportunities around scale and synergy. The map of coverage also remains very sparse. For most people, in most situations, there’s no immediately useful online resource, free or paid. Where is this all going? Where should it be going? (My focus here is on not-for-profit efforts, in North America. Commercial and international developments of course make this even more interesting.) A thought experiment Those who venture to supply online legal applications share many challenges. Getting content correct and keeping it current with limited human and financial resources is a big one. Applications need care and feeding. Users and developers need support. But there’s a shortage of organizational bandwidth. Imagine if we organized a shared resource to ‘mutalize’ some of the infrastructure needed by multiple content providers. What might such a resource look like? It would supply secure and scalable servers that application providers could use, without having to source, configure, and manage their own. It would provide an organized and accessible content collection, optimized for distribution and sharing, and cover more than one development platform. It would offer accounts for end users, developers, and content managers, with associated data storage and sharing (across time, apps, and people). 
Users’ answers to questions would be securely stored as tagged sets of data that can be accessed and edited by multiple applications. It would support users via email and live chat, bug tracking, and ‘ticket’ management. Such a service would provide training, continuing education, webinars, and navigable knowledge bases of relevant materials for developers and other project participants. It would help build community by hosting discussion fora, regular online meetings, and other arrangements that facilitate collaboration. It would offer one-on-one technical support to developers and managerial support to project teams. Custom statistical reports about usage would be available via a self-help dashboard. This imagined resource would support multiple human languages, integrate with external systems such as case management and e-filing, and offer specialized configurations for in-person and virtual clinics at which groups can be served simultaneously or asynchronously. You might think of it as an ecumenical collection of free legal apps with answer-saving and other valuable forms of automated assistance. LawHelp Interactive It turns out that we already have at least one example of a service that exemplifies all of the above qualities — LawHelp Interactive (LHI). LHI was first envisioned over 18 years ago. Planning began in late 2001. It arose in part from the ashes of AmeriCounsel, the dot-com adventure in which my colleague Bart Earle and I first gained experience with a large scale online document assembly deployment. LHI started out as National Public ADO (Automated Documents Online. Get it?) Pro Bono Net, a national nonprofit dedicated to access to justice, assumed responsibility for the service in 2006 and soon came up with a better name. (For more about the ambitious AmeriCounsel venture, including its Open Practice Tools initiative, check out my keynote at the 2001 CALI conference.) 
The federal Legal Services Corporation has played a central role in supporting LHI. It provided seed funding for initial R&D, and later acted as a strategic investor, supporting ongoing operations and improvements, as innovations in service delivery to unrepresented litigants and others were proven out. This has been a successful public/private model; today LSC provides only about half of LHI's budget.

LHI has been foremost, at least in terms of scale and impact, among free US resources that leverage the web and intelligent technology to advance access to justice and legal wellness. It has accumulated some 5000 modules, is used in over forty states, and offers its interface in seven languages. Millions of sessions have resulted in millions of customized documents. All without charge. (LHI is free for end users and nonprofit legal aid programs funded by LSC. Courts can subscribe for access.)

LHI has been a real catalyst for innovation. The program and community have pioneered new models of access to justice centered on online forms in many states that have dramatically improved the ability for those without lawyers to achieve justice on their own, including unbundled, limited scope services and remote services.

A substantial percentage of LHI usage is by lawyers, paralegals, advocates, court staff, and other professionals. From the beginning LHI was conceived as including a content commons, a collection of codified legal know-how that could be freely copied and remixed by participating contributors. A partnership with the Center for Computer-Assisted Legal Instruction (CALI) cemented an early determination to support multiple interfaces for end users. A2J 'guided interviews' have been part of LHI since the get-go. Driven by a spirit of continuous improvement, many enhancements have been delivered. There's been steadily increasing geographic and functional scope, including custom integrations with legal aid and court software systems.
And most of its development energy has been allocated to next-generation products that aren't yet a good fit for LHI's content and community, which depend on its 'classic' line. The HotDocs authoring tools are not free, and require Windows. That's a particular constraint for law students, many of whom use devices only running the Mac operating system. (You can run HotDocs interviews on just about any device, but need Windows to create and edit them.) Also, while the JavaScript interface offered by HotDocs is highly functional and stable, it looks increasingly outdated. LawHelp Interactive may seem like an incumbent surrounded by disruptors, but the future needs something like it. Like a caterpillar, LHI has morphed several times from early conceptions. What butterfly might emerge next? Where does it go from here? (The rest of this article lays out a bigger context in which LHI will likely play a role. I don't purport to speak for the project or Pro Bono Net.)

The spiraling ecosystem

In the meantime, others have brought new ideas and energies to this space. Odyssey Guide & File from Tyler Technologies emerged as a commercial alternative to CALI's A2J, aimed at courts that want to field interviews and electronic forms to simplify the filing process for self-represented litigants. For its part, CALI has implemented native document automation features (it previously relied on HotDocs to assemble documents), and moved to independently hosting guided interviews on its own A2J.org site. An excellent open source alternative finally arrived, in the form of Jonathan Pyle's docassemble. (See Making Mischief With Open-Source Legal Tech for an example of its power.) And several nimble players jumped in to offer easier interfaces for building and maintaining docassemble applications: Community Lawyer and Documate. The former now boasts its own content collection. Legal document automation tools have long been a commodity. I've used dozens of them.
Deep, reliable functionality combined with provider longevity is still rare. Free and easy go a long way, but maybe not far enough. HotDocs, for instance, includes so-far-unmatched facilities for automating complex sets of PDF forms. And history is littered with brilliant alternatives that failed to survive. Spaces adjacent to document services have likewise been busy. The Civil Resolution Tribunal's Solution Explorer, a pioneering expert system offering free legal information and tools, has been used over 100,000 times. New players are emerging regularly. Law schools have jumped into the game, with courses in which students build applications as part of their course work using tools like A2J Author, Community Lawyer, HotDocs, Neota Logic, and QnA Markup. (I've taught such courses at five different schools myself, and fellow teachers are active around the world.) Commercial players and startups have also been making waves. Neota is seeing worthy competition from entrants like Bryter; LegalZoom may be noticing ascent by disrupters like DoNotPay. There's a bit of a space race going on. Major investments are happening. Other platforms offer features and functions that LHI does not, but few have comparably robust fabrics of surrounding support. We've ended up in multiple camps, with mutually inconsistent tools and skills that are not readily transferred. This fragmentation is discussed more below.

New frontiers

There's no lack of new things that LHI and related services can and should do. One perpetual desire is to ease the authoring and maintenance of interactive content, especially by non-programmers such as domain experts. Some products and platforms have made authoring much simpler, at least for basic applications. This piece illustrates how vendors can trumpet ease-of-development advantages. Low-code and no-code are more than buzz words. Many of these services fall short in terms of accessibility.
Providing genuinely usable applications for those with serious cognitive or perceptual limitations via browser-agnostic Web sessions that need to present, elicit, and generate complex texts — even on smart phones — is an enormous challenge. All services could expand their interoperability with each other and with common third party tools like Clio, DocuSign, Google Drive, and Legal Server. (LHI was one of the earliest to work with the latter. Other services have also done impressive work in this area.) We should remain alert to new tools and paradigms for dynamic questioning, fact-specific guidance, and document generation. Those will include new forms of interaction — bots, text messages, and other conversational approaches. ‘Push’ — proactive communication of warnings and reminders to users — will also have a role. And we should be on the lookout for tools that support new kinds of assistance, ones that go beyond interactive questionnaires, custom instructions, and assembled documents. One of my candidates for a new field of endeavor has long been decision support. See A Decision Space for Legal Services and The Centrality of Choice in Legal Work. Tools that promote effective choices naturally also have usefulness in the online dispute resolution (ODR) context. We can clearly find ways to introduce more artificial intelligence into our online legal help environments, both with ‘good old fashioned AI’ like expert systems and next generation deep learning and pattern recognition systems. We could intelligently parse documents both to help users in specific situations and to infer models for use more generally. Open source and other forms of openness of course present great opportunities. See Opening Legal Knowledge Automation. Among other things, that could involve standardization of data elements and structures, shared ontologies, and variable namespaces. One aspect of all this that I’ve been particularly vocal about is quality. 
We could use more fanatical attention to it, maybe in a six sigma, zero-defects spirit. See The High Cost of Quality in Legal Apps and Substantive Legal Software Quality: A Gathering Storm?. Quality assurance — regarding both platforms and their content — would be facilitated by greater transparency and inspectability. The above is a very incomplete list. Which is both exhausting and encouraging. This piece from 2007 describes frontiers in legal document automation more generally, some of which remain uncrossed: Current Frontiers in Legal Drafting Systems.

Strategic choices

All of the providers face the challenge of funding and sustaining their efforts. And it's hard enough just to keep things going; trying to rebuild while operating at capacity can be like changing engines on a plane while in flight. One other shared concern is the specter of intensified regulatory scrutiny, e.g. by bar groups contending that assistance via software is tantamount to the unauthorized practice of law. My own view, articulated in places like Safe Harbors and Blue Oceans, is that there should be a bright line rule that making software available is not 'practice.' Which is not to say that we shouldn't be concerned about the quality of some of the published content. However, we can deal with bad actors without resorting to prior restraint. Even if such constraint were constitutional it would be bad policy. That a work of authorship can be made to do useful cognitive work doesn't deprive it of 1st Amendment protection. And freedom of expression is an empty promise without freedom to distribute what is expressed. One key question is how we might best work together. Should we try to be less disintegrated than we presently are? What's the right balance of centrality and distribution? What kinds of things are best accomplished together, and which happen best on the edge? Can we reap the benefits of cross-organizational thinking without losing those of autonomy?
Do we need a mother ship, or is a loosely coupled federation better? Are big shared environments good things? Is there a place for a neutral "Switzerland" of free interactive content, one that supports many application categories/types? Should content itself be platform independent? Clearly we don't want to put all of our eggs in one basket, and no organization should spread itself too thin or try to be all things to all people. Organizations of course should focus on their key commitments and core competencies. They also need to maintain continuity of teams.

Towering

As things now stand, centrifugal forces are in play, and lots of folks are off doing their own things, using different languages and approaches in multiple 'towers.' (We would resemble the tower of Babel if there was only one! I guess we're more like a medieval Italian city-state, with wealthy families competing to have the tallest structure.) Lots of hard-won wisdom remains trapped in silos. Some of this comes from 'not invented here' attitudes; some from welcome entrepreneurial zeal. But competition is generally healthy, even at the expense of wasteful duplication and reinvention. And there are clear benefits of diversity in our ecosystems. (See Knowledge Gardening and Civil Justice Engineering.) It's going to take a lot of villages (and villagers) to ameliorate the access to justice crisis. But shared resources can offer economies of scale. Some of those economies can be tapped through loosely coupled arrangements. There would seem to be much positive opportunity in better connective tissue. That of course raises governance challenges. And any arrangement will naturally involve costs and compromises. A facilitator of shared resources, collaborations, integrations, and interoperations could function as a benevolent natural monopoly. It could supply participants with valuable insight into each other's collections, and maybe even interchangeable parts.
In the widening gyre perhaps such a center cannot hold, but let's hope we won't need to settle for mere anarchy. What topology would make the most sense? No center, one center, or multiple centers? If center(s), what form could such entities take? Or is this all best left to the market? These would be good topics for a hackathon, or at least a designathon. Perhaps this socio-technical challenge will come up at this year's SubTech conference.

On the Road

We tend to take streets and highways for granted. They're actually quite impressive structures, albeit not very high. Ribbons of pavement wind through our neighborhoods, cities, and countrysides. They provide the immensely valuable commodity of safe, level ground on which arbitrary kinds of vehicles can move. Heterogeneous traffic flows on the interstate highway system with relatively little friction. Road building and maintenance don't offer much glory, but their effective practice is critical to modern life. Commerce needs a commodious and reliable substrate. We're blessed with lots of bright ideas and cool tools in the access-to-justice world. Inventors and visionaries remain essential. But we also need planners and managers. Institutions can help. To revisit an old trope, we may want a cathedral as well as a bazaar. Nimble authoring systems are great, but we also need distribution systems.

Engineering a Better Tomorrow

Last spring researchers at Cambridge University announced that they had synthesized the complete genetic material of the bacterium Escherichia coli — four million base pairs of DNA — and inserted it into functioning cells, which reproduced and survived. Quite a feat of engineering. Could legal technologists accomplish something as impressive? We've long had the tools and knowledge to help under-resourced people deal effectively with their legal needs. But we haven't yet made a major dent in the problem. We could do SO much more.
The need for these kinds of services is easily 100 times greater than what is presently provided. We’ve only scratched the surface of positive potential. Unsolved legal problems cause immense suffering. With adequate help, that suffering can often be avoided, or at least minimized. Yet many folks get little or no help, even to help themselves. Many go unrepresented in formal proceedings; even more are totally ‘unhelped’ across the vast range of law-related problems and opportunities. Vendors come and go; projects rise and fall. But aching needs remain. How about an Apollo program to end poverty of legal help? Imagine we were taking the best emerging technologies and accumulated know-how to address this need. How might we sculpt and nurture a system of systems that stands a chance of achieving that result? Our goal should be nothing less than the eradication of legal helplessness. A vibrant market of high-quality, reasonably priced services, supplemented by equally high-quality resources for those who can’t afford to pay. Those who need and want help should be able to get help. That moonshot will require decent roads as well as sturdy towers.
https://medium.com/@MarcLauritsen01/roads-towers-and-online-legal-help-57957a25767
CC-MAIN-2020-16
en
refinedweb
For some reason, I can't get the images to crop and display correctly, even though I think my script makes sense logically. My code is posted below. You can make an image around the resolution of 300x2000 or so to use with this to see the problem I am having. Attached is my practice image that is rough, but works for now. My code starts printing outside of the area that I want it to show (outside the showsize variable) and I can't figure out why. Any help with this problem would be much appreciated. It seems that the crops don't cut the images short enough, but all the information that I found about it makes me think my script should be working just fine. I've tried to annotate my code to explain what's going on.

from Tkinter import *
from PIL import Image, ImageTk

def main():
    root = Tk()
    root.title = ("Slot Machine")
    canvas = Canvas(root, width=1500, height=800)
    canvas.pack()
    im = Image.open("colors.png")
    wheelw = im.size[0] #wide of source image
    wheelh = im.size[1] #height of source image
    showsize = 400 #amount of source image to show at a time - part of 'wheel' you can see
    speed = 3 #spin speed of wheel
    bx1 = 250 #Box 1 x - where the box will appear on the canvas
    by = 250 #box 1 y
    numberofspins = 100 #spin a few times through before stopping
    cycle_period = 0 #amount of pause between each frame
    for spintimes in range(0,numberofspins):
        for y in range(wheelh,showsize,-speed): #spin to end of image, from bottom to top
            cropped = im.crop((0, y-showsize, wheelw, y)) #crop which part of wheel is seen
            tk_im = ImageTk.PhotoImage(cropped)
            canvas.create_image(bx1, by, image=tk_im) #display image
            canvas.update() # This refreshes the drawing on the canvas.
            canvas.after(cycle_period) # This makes execution pause
        for y in range (speed,showsize,speed): #add 2nd image to make spin loop
            cropped1 = im.crop((0, 0, wheelw, showsize-y)) #img crop 1
            cropped2 = im.crop((0, wheelh - y, wheelw, wheelh)) #img crop 2
            tk_im1 = ImageTk.PhotoImage(cropped1)
            tk_im2 = ImageTk.PhotoImage(cropped2)
            canvas.create_image(bx1, by, image=tk_im2) ##THIS IS WHERE THE PROBLEM IS..
            canvas.create_image(bx1, by + y, image=tk_im1) ##PROBLEM
            #For some reason these 2 lines are overdrawing where they should be.
            #as y increases, the cropped img size should decrease, but doesn't
            canvas.update() # This refreshes the drawing on the canvas
            canvas.after(cycle_period) # This makes execution pause
    root.mainloop()

if __name__ == '__main__':
    main()
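One likely culprit (my assumption, not something stated in the post): Tkinter's canvas.create_image anchors an image at its center by default, so as the two crops change height their centers shift and the strips overlap, even though the crop boxes themselves are fine. If the strips are instead drawn with anchor="nw" — the wrap-around strip at the window top, the remaining strip directly below it — the geometry lines up. This pure-Python sketch reuses the post's showsize/speed values to check that the two strips always tile the 400-pixel window exactly:

```python
# Pure arithmetic check -- no Tkinter or PIL needed. Assumes the two strips
# are drawn with anchor="nw": strip 2 (bottom of the wheel image) at the
# window top, strip 1 (top of the wheel image) directly below it.
bx1, by = 250, 250          # canvas position, as in the post
showsize, speed = 400, 3    # values from the post

for y in range(speed, showsize, speed):
    h2 = y                  # height of cropped2 (wrap-around strip)
    h1 = showsize - y       # height of cropped1 (remaining strip)
    top2, bottom2 = by, by + h2            # strip 2 occupies the window top
    top1, bottom1 = bottom2, bottom2 + h1  # strip 1 starts where strip 2 ends
    assert top1 == bottom2                 # strips tile with no gap or overlap
    assert bottom1 == by + showsize        # nothing is drawn past the window
```

If that is indeed the issue, passing anchor=NW to both create_image calls (and deleting or reusing the previous canvas items each frame, while keeping references to the PhotoImage objects so they aren't garbage-collected) should stop the overdraw.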
http://www.dreamincode.net/forums/topic/283518-cropping-and-displaying-images-using-pil-and-tkinter-problem/
Hi all, I hope this is not too newbie a question. I was an EJB2 developer in the past, and I made the switch to EJB3 with Hibernate. I have an entity defined like this:

@Entity
@Table(name="USER")
public class User implements Serializable {
    private Long id;
    private String full_name;
    private String username;
    private Level level;

    [...]

    @OneToOne(fetch=FetchType.EAGER, cascade=CascadeType.MERGE)
    public Level getLevel() {
        return level;
    }

    public void setLevel(Level level) {
        this.level = level;
    }

Sorry.. JBoss Version 4.0.5.GA

Anyone ? :(
https://developer.jboss.org/thread/26626
Christian Wyglendowski

I am a Network Administrator at a small college in Illinois. I began learning Python in 2002 while working as a PC Technician. It has been invaluable for systems administration and was just plain enjoyable to learn (or keep learning, I should say!).

Code Clinic

BrianvandenBroek came up with a great idea to do periodic programming problems with a group of others and then do a shared analysis afterwards to see the different approaches we all took on the problem. At this time, it is called the Python Code Clinic. Other participants with wiki pages are ChadCrabtree and DavidBroadwell.

Random Writer

Our first project was the Random Writer from the Stanford Nifty projects site. You can read more about the project at the Nifty site. I chose to tackle the project from an object oriented perspective. I have slowly been "getting it" as far as OOP goes and this proved to be some more good practice. Here is my base class, RandomWriter <--'how do I make that not link?' asked Christian.
See HelpForBeginners for why what I did works --BrianvandenBroek:

import random

class RandomWriter:
    def __init__(self, seedLen, outLen, inName, outName):
        #initialize instance variables from parameters
        self.seedLen = seedLen  #seed length in characters
        self.outLen = outLen    #output file length in characters
        self.outTotal = 0       #initialize total chars written to zero
        self.seed = None        #initialize seed to None
        #open files
        inFile = file(inName)   #open the input file
        self.outFile = file(outName, 'w')  #open the output file
        self.text = inFile.read()  #create string variable with contents of inFile
        inFile.close()          #bye, inFile
        self.textLen = len(self.text)  #get the length in chars of the text to analyze
        self._selectSeed()      #generate a seed from the text
        self.matches = []       #iv for matches
        self.newChar = ''       #iv for the next char from the text
        #write the initial seed to the output file
        for ch in self.seed:
            self._writeChar(ch)

    def _selectSeed(self):
        """get a random seed that is self.seedLen long"""
        pos = random.randrange(self.textLen)
        self.seed = self.text[pos:pos+self.seedLen]

    def _getMatches(self):
        """build a list of indexes of the current seed in the text"""
        matches = []
        match = self.text.find(self.seed)
        matches.append(match)
        while match != -1:
            match = self.text.find(self.seed, match+1)
            matches.append(match)
        return matches

    def _getSubChars(self):
        """build a list of the chars that come after our current seed in the text"""
        subChars = []
        for index in self.matches:
            if index >= 0:
                try:
                    ch = self.text[index+self.seedLen]
                except IndexError:  #oops! end of the text
                    ch = self.text[-1]  #get last char
                subChars.append(ch)
        return subChars

    def _writeChar(self, ch):
        """write char to output file and increment outTotal counter"""
        self.outFile.write(ch)
        self.outTotal += 1

    def _updateSeed(self, ch):
        """add the latest char to the end of the seed and drop the first char"""
        self.seed = self.seed[1:] + ch

    def Step(self):
        """process current seed, write probable subsequent char, build new seed"""
        self.matches = self._getMatches()
        subChars = self._getSubChars()
        nextChar = random.choice(subChars)  #grab a "random" subsequent character
        self._writeChar(nextChar)  #print "Wrote", nextChar
        self._updateSeed(nextChar)  #print "New seed is", self.seed

    def Run(self):
        """do a Step for every char to be written"""
        for i in range(self.outLen - self.outTotal):
            self.Step()

Here is my implementation file:

import sys
import randomwriter

def usage():
    print "python randomwrite.py SEEDLENGTH OUTLENGTH INFILE OUTFILE"

def main():
    #check for proper number of args
    if len(sys.argv) != 5:
        usage()
        sys.exit(1)
    seedLength = int(sys.argv[1])
    outLength = int(sys.argv[2])
    inFile = sys.argv[3]
    outFile = sys.argv[4]
    #begin error checking --------------------
    if seedLength < 1 or outLength < 1:
        print "SEEDLENGTH and OUTLENGTH need to be greater than zero."
        sys.exit(1)
    try:
        rw = randomwriter.RandomWriter(seedLength, outLength, inFile, outFile)
    except IOError:
        print "Error reading or writing files. Please double check file names and locations."
        sys.exit(1)
    if rw.seedLen > rw.textLen:
        print "The input file has to contain at least as many characters as SEEDLENGTH."
        sys.exit(1)
    #end error checking ----------------------
##    print "Initial seed is", rw.seed
##    print "Processing..."
    rw.Run()

if __name__ == '__main__':
    import profile
    profile.run("main()")

Analysis

ChadCrabtree already did some great analysis on his wiki page, and I am not going to duplicate that here. I am going to focus on what he brought to light about my algorithm and what I did to improve it.
For his analysis, Chad ran everyone's random writers through the Python profiler. When fed the command line args "5 500 tom.txt out.txt", my code fared better than the others by a small margin. However, when run with these (10 5000 tom.txt out.txt) parameters, my code lagged far behind Chad's implementation and fared similarly to David Broadwell's (see Chad's wiki page for the analysis details). In my approach, for every character to be written to the output file, I searched the input file for occurrences of the seed and grabbed the next character. That means that at some level (Python C code I guess) I was looping over the input file for every output character! Not very efficient for large output files. Chad's script was the fastest for large output values. He built a dictionary of seeds and subsequent characters and only had to loop over the source file once, no matter what the output size. I like this approach and have since written a subclass that incorporates such a cache, or index. Here is my subclass, FastRandomWriter:

class FastRandomWriter(RandomWriter):
    """Went Chad's route and implemented a one-pass cache of seeds->nextchars.
    This subclass actually lives up to its name, unlike the samples above."""
    def __init__(self, seedLen, outLen, inName, outName):
        RandomWriter.__init__(self, seedLen, outLen, inName, outName)
        self._cacheText()

    def _cacheText(self):
        chunkpos = 0
        chunk = self.text[:self.seedLen]
        self.cache = {}
        for i in range(self.textLen + 1 - self.seedLen):
            try:
                nextchar = self.text[chunkpos+self.seedLen]
            except IndexError:  #we've reached the end of the text
                nextchar = '\n'  #just stick a newline in as a value for the last key
            if self.cache.has_key(chunk):
                self.cache[chunk].append(nextchar)
            else:
                self.cache[chunk] = [nextchar]
            chunkpos += 1
            chunk = chunk[1:] + nextchar

    def Step(self):
        nextChar = random.choice(self.cache[self.seed])
        self._writeChar(nextChar)
        self._updateSeed(nextChar)

Here are the results from my original class:

C:\Documents and Settings\Christian\Desktop\randomwriter>python randomwrite.py 10 5000 ..\tom.txt out.txt

         29958 function calls in 11.322 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.002    0.002   11.308   11.308 <string>:1(?)
        1    0.013    0.013   11.322   11.322 profile:0(main())
        0    0.000             0.000          profile:0(profiler)
        1    0.000    0.000    0.000    0.000 random.py:135(randrange)
        1    0.000    0.000    0.000    0.000 random.py:198(randint)
     4990    0.063    0.000    0.063    0.000 random.py:229(choice)
        1    0.000    0.000   11.307   11.307 randomwrite.py:8(main)
        1    0.000    0.000    0.000    0.000 randomwriter.py:28(_selectSeed)
     4990   10.865    0.002   10.865    0.002 randomwriter.py:33(_getMatches)
        1    0.005    0.005    0.006    0.006 randomwriter.py:4(__init__)
     4990    0.069    0.000    0.069    0.000 randomwriter.py:43(_getSubChars)
     5000    0.049    0.000    0.049    0.000 randomwriter.py:55(_writeChar)
     4990    0.030    0.000    0.030    0.000 randomwriter.py:60(_updateSeed)
     4990    0.189    0.000   11.266    0.002 randomwriter.py:64(Step)
        1    0.036    0.036   11.301   11.301 randomwriter.py:74(Run)

And here are the results from FastRandomWriter:

C:\Documents and Settings\Christian\Desktop\randomwriter>python randomwrite.py 10 5000 ..\tom.txt out.txt

         19980 function calls in 2.769 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.551    0.551    2.757    2.757 <string>:1(?)
        1    0.013    0.013    2.769    2.769 profile:0(main())
        0    0.000             0.000          profile:0(profiler)
        1    0.000    0.000    0.000    0.000 random.py:135(randrange)
        1    0.000    0.000    0.000    0.000 random.py:198(randint)
     4990    0.036    0.000    0.036    0.000 random.py:229(choice)
        1    0.000    0.000    2.205    2.205 randomwrite.py:8(main)
        1    0.000    0.000    1.955    1.955 randomwriter.py:115(__init__)
        1    1.949    1.949    1.949    1.949 randomwriter.py:119(_cacheText)
     4990    0.116    0.000    0.225    0.000 randomwriter.py:135(Step)
        1    0.000    0.000    0.000    0.000 randomwriter.py:28(_selectSeed)
        1    0.006    0.006    0.006    0.006 randomwriter.py:4(__init__)
     5000    0.045    0.000    0.045    0.000 randomwriter.py:55(_writeChar)
     4990    0.028    0.000    0.028    0.000 randomwriter.py:60(_updateSeed)
        1    0.026    0.026    0.250    0.250 randomwriter.py:74(Run)

I will write more later ...
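For readers who want to experiment with the one-pass cache idea without the wiki code's file I/O, here is a minimal, self-contained sketch; the function and variable names (build_cache, generate) are mine, not part of the code above:

```python
# Minimal sketch of the seed -> next-char cache described above: one pass
# over the text builds the dictionary, then generation is a cheap lookup.
import random

def build_cache(text, seed_len):
    """Map every seed_len-character window of text to the chars that follow it."""
    cache = {}
    for i in range(len(text) - seed_len):
        cache.setdefault(text[i:i + seed_len], []).append(text[i + seed_len])
    return cache

def generate(text, seed_len, out_len, rng=random):
    """Emit up to out_len chars, always choosing a char seen after the current seed."""
    seed = text[:seed_len]
    cache = build_cache(text, seed_len)
    out = seed
    while len(out) < out_len and seed in cache:
        ch = rng.choice(cache[seed])
        out += ch
        seed = seed[1:] + ch
    return out
```

On a periodic input like "abababab" the output is deterministic, which makes the behavior easy to check by hand.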
https://wiki.python.org/moin/ChristianWyglendowski?highlight=(CategoryHomepage)
Uplug::XML::Writer - Perl extension for writing XML documents.

use XML::Writer;
use IO;

my $output = new IO::File(">output.xml");
my $writer = new XML::Writer(OUTPUT => $output);
$writer->startTag("greeting", "class" => "simple");
$writer->characters("Hello, world!");
$writer->endTag("greeting");
$writer->end();
$output->close();

Uplug::XML::Writer is basically a copy of XML::Writer version 0.4. It is included in Uplug for compatibility reasons. All credits go to the original authors. Note that the documentation is, therefore, also just a copy of the original documentation.

XML::Writer is a helper module for Perl programs that write an XML document. The module handles all escaping for attribute values and character data and constructs different types of markup, such as tags, comments, and processing instructions.

Create a new XML::Writer object:

my $writer = new XML::Writer(OUTPUT => $output, NEWLINES => 1);

Arguments are an anonymous hash array of parameters:

OUTPUT: An object blessed into IO::Handle or one of its subclasses (such as IO::File); if this parameter is not present, the module will write to standard output.

NEWLINES: A true or false value; if this parameter is present and its value is true, then the module will insert an extra newline before the closing delimiter of start, end, and empty tags to guarantee that the document does not end up as a single, long line.

(Any non-null value for standalone except 'no' will automatically be converted to 'yes'.)

$writer->doctype("html");

Add a comment to an XML document. If the comment appears outside the document element (either before the first start tag or after the last end tag), the module will add a carriage return after it to improve readability.

WARNING: you must not use these methods while you are writing a document, or the results will be unpredictable.

Add a preferred mapping between a Namespace URI and a prefix. See also the PREFIX_MAP constructor parameter. To set the default namespace, omit the $prefix parameter or set it to ''.

Remove a preferred mapping between a Namespace URI and a prefix. To set the default namespace, omit the $prefix parameter or set it to ''.
http://search.cpan.org/~tiedemann/uplug-main-0.3.7/lib/Uplug/XML/Writer.pm
On Wed, Apr 29, 2009 at 03:06:13PM -0700, Scott David Daniels wrote:
> You did not answer the question above, and I think the answer is the root
> of your misunderstanding. A class and a module are _not_the_same_thing_.
> sys is not a package, it is a module.
>>> Just because you put a class inside a module, does not mean
>>> that class magically does something by virtue of having the
>>> same name as the module.
>>>
>>> A module is a namespace to hold classes, functions, etc....
>>> A package is a namespace to hold modules (possibly more).
>>>
>>> I don't understand why you don't use files like:
>>>
>>> VLMLegacy/
>>>     __init__.py
>>>     Reader.py
>>>     VLM4997.py
>>>     WINGTL.py
> Unlike Java, we are free to have several things in a module:
> several classes, several functions, several constants....

These modules would grow to be hundreds of pages long and difficult to deal with when debugging a problem related to one obscure system without looking at (or potentially screwing up) any of the others. I prefer one class per module. This gets more into philosophy, but I figure any function or method that does not fit on one page is too big; and any source file that is more than 20 pages long should be broken in half. I like my modules in the 5-10 page size range, including the embedded Unix ManPages and the cvs history. But that's just my house style.

> Well, "VLM4997" is a _string_, and it has no attributes (nor methods)
> named "Header", "Plan", or "Conditions." And "type" is a perfectly awful
> name for a variable, since it hides the builtin named type. You seem to
> confuse names, files, and classes defined in files (at least in your
> writing).

Actually, I'm not. I am simply trying to use pseudo code to explain roughly what is going on. There will be a string that selects what the set of classes are to be used on any given iteration and it will be used to generate the name of the class and/or name of the module where it is to be found. I'm an old ObjC hacker.
I often put the class or method in a variable and do the bindings at runtime. I am already doing some of that sort of thing in this system with the method names and it works nicely. The point I take away from this is that packages and modules have dotted names, but Classes do not and there is no way to do exactly what I wanted to do. The dot syntax would have been quite nice (I quite like the "::" syntax in Perl) and would have made the code much clearer. The way you suggested with a 'typename_classname' generated using a from/import statement will just have to suffice.
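For what it's worth, the runtime binding discussed above can be written with the standard importlib module. A small sketch follows; the json / JSONDecoder names are illustrative stand-ins, not the poster's VLM-style modules:

```python
# Hypothetical sketch: resolve a class from (module name, class name) strings
# at runtime, instead of hard-coding imports. "json"/"JSONDecoder" are
# stand-in names for illustration only.
import importlib

def load_class(module_name, class_name):
    """Import module_name and return the class bound to class_name."""
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

decoder_cls = load_class("json", "JSONDecoder")
result = decoder_cls().decode('{"a": 1}')  # parses the JSON text
```

The same pattern works for project-specific modules: build the module and class names from the selector string, then instantiate the returned class as usual.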
https://mail.python.org/pipermail/python-list/2009-April/535381.html
This patchset introduces a system log namespace. It is the 2nd version. The link of the 1st version is. In that version, a syslog namespace was added into nsproxy and created through a new clone flag CLONE_SYSLOG when cloning a process. There was some discussion last November about the 1st version. This version takes that advice into account, and referred to Serge's patch(). Unlike the 1st version, in this patchset the syslog namespace is tied to a user namespace.

Rui Xiang (9):
  netfilter: use ns_printk in iptable context

 fs/proc/kmsg.c                 |  17 +-
 include/linux/printk.h         |   5 +-
 include/linux/syslog.h         |  79 ++++-
 include/linux/user_namespace.h |   2 +
 include/net/netfilter/xt_log.h |   6 +-
 kernel/printk.c                | 642 ++++++++++++++++++++++++-----------------
 kernel/sysctl.c                |   3 +-
 kernel/user.c                  |   3 +
 kernel/user_namespace.c        |   4 +
 net/netfilter/xt_LOG.c         |   4 +-
 10 files changed, 493 insertions(+), 272 deletions(-)

--
1.8.2.2
http://article.gmane.org/gmane.linux.kernel/1533621
iTerrainCellRenderProperties Struct Reference

This is a base class for per-cell renderer-specific properties. More...

#include <imesh/terrain2.h>

Inheritance diagram for iTerrainCellRenderProperties:

Detailed Description

This is a base class for per-cell renderer-specific properties. The classes which hold the render-related data that is specific to a given cell and renderer. Also provides a shader variable context for the cell.

Definition at line 131 of file terrain2.h.

Member Function Documentation

Get a copy of the properties object.

Get visibility flag (if it is not set, the cell does not get rendered).
- Returns: visibility flag

Set named parameter.
- Parameters:

Set visibility flag.
- Parameters:

The documentation for this struct was generated from the following file:
- imesh/terrain2.h

Generated for Crystal Space 1.2.1 by doxygen 1.5.3
http://www.crystalspace3d.org/docs/online/api-1.2/structiTerrainCellRenderProperties.html
#include <libunwind.h>

int unw_init_local(unw_cursor_t *c, unw_context_t *ctxt);

The unw_init_local() routine initializes the unwind cursor pointed to by c with the machine state in the context structure pointed to by ctxt. As such, the machine state pointed to by ctxt identifies the initial stack frame at which unwinding starts. The machine state must remain valid for the duration for which the cursor c is in use.

The unw_init_local() routine can be used only for unwinding in the address space of the current process (i.e., for local unwinding). For all other cases, unw_init_remote() must be used instead. From a behavioral point of view, the call:

    ret = unw_init_local(&cursor, &ucontext);

is equivalent to:

    ret = unw_init_remote(&cursor, unw_local_addr_space, &ucontext);

However, unwind performance may be better when using unw_init_local(). Also, unw_init_local() is available even when UNW_LOCAL_ONLY has been defined before including <libunwind.h>, whereas unw_init_remote() is not.

On successful completion, unw_init_local() returns 0. Otherwise the negative value of one of the error codes below is returned.

unw_init_local() is thread-safe as well as safe to use from a signal handler.

libunwind(3), unw_init_remote(3)

David Mosberger-Tang
http://www.makelinux.net/man/3/U/unw_init_local
Note: This was an April Fool’s Joke. Please do not take any information in this blog post seriously.

As you all know, the languages currently supported by Panda3D are Python and C++. Unfortunately, this forces the user to choose from a trade-off between simplicity and performance. Python is simple and fast to prototype with, however its performance is very poor. A CPU-intensive algorithm in Python will typically run hundreds of times slower than the same algorithm implemented in C++. C++ on the other hand provides almost native performance, but it comes with a plethora of inconveniences for the developer, the most notable being that it’s easy to induce a crash or cause your application to leak memory.

Enter Java, a language designed to be a middle-ground between these two goals. Java is a modern, high-level language with strong OO capabilities and garbage collection that doesn’t expose the coder to the dangers of manual memory management as C++. But Java is also an order of magnitude faster than Python. In light of these properties of the Java language, the development team has unanimously decided to adopt Java as the only supported language for the Panda3D API. As 1.7.0 has just been released, now is the perfect time to switch. The upcoming 1.7.3 release of Panda3D will drop all Python and C++ code, in favor of Java. Effectively today, development has commenced on a Perforce repository that drops support for Python and C++.

What does this mean to you? Let’s start by comparing how Panda usage will look in the future as opposed to now. This is the current basic Panda example in Python that you may recognize from the manual:

    from direct.showbase.ShowBase import ShowBase

    class MyApp(ShowBase):
        def __init__(self):
            ShowBase.__init__(self)
            # Load the environment model.
            self.environ = self.loader.loadModel("models/environment")
            # Reparent the model to render.
            self.environ.reparentTo(self.render)
            # Apply scale and position transforms on the model.
            self.environ.setScale(0.25, 0.25, 0.25)
            self.environ.setPos(-8, 42, 0)

    app = MyApp()
    app.run()

And this is how the same will be achieved now in Java:

    import org.panda3d.*;

    public class MyApp extends ShowBase {
        private NodePath environ;

        public MyApp() {
            this.environ = this.getLoader().loadModel("models/environment");
            // Reparent the model to render.
            this.environ.reparentTo(this.getRender());
            // Apply scale and position transforms on the model.
            this.environ.setScale(0.25, 0.25, 0.25);
            this.environ.setPos(-8, 42, 0);
        }

        public static void main(String args[]) {
            new MyApp().run();
        }
    }

Needless to say, this is a major improvement over the Python equivalent. This will definitely help Panda3D expand in the marketplace since Java alone has more demand than Python and C++ combined (see the graph below).

We are already seeing the first benefits of changing to Java. For example, we have replaced our build system, makepanda, with Java’s ant. This allows us to leverage the XML format to streamline the build process’ bottom-line in a monitored, decentralized way. We hope that this multi-tiered non-volatile migration process enables us to provide synergized encompassing software emulation through realigned composite management trends, and resulting in universal global process improvement in the end.

We are also in the process of officially renaming the engine into Janda3D. This is because the P in “Panda3D” stands for Python. This involves registering a new domain name and registering the trademark with the US Patent and Trademark Office, so it may take some time.

Please stay tuned until our next blog post, in which we will explain how we plan to make the networking system in Panda3D version 1.7.9 fully compatible with RFC 2324.

Posted in Uncategorized | 39 Comments »
http://www.panda3d.org/blog/2010/04/
Hello,

The topic below was opened in the Boost development mailing list, where it's been pointed out to me that it fits better here. You can also read the thread archive:

Regards
Bruno

---------- Forwarded message ----------

Hello,

I have written a little function that converts any Boost.Fusion sequence into a Python tuple (boost::python::tuple). If a sub-sequence is nested in the sequence, the result will also be a nested tuple (for instance, boost::make_tuple(0, std::make_pair(1, 2), 3) will give (0, (1, 2), 3) in Python). The source code is attached to this mail.

The principle is that any sequence previously adapted to Boost.Fusion will become a tuple in Python. So, by including the right boost/fusion/adapted/xxx header, one can convert a pair, a tuple, a boost::array, and obviously any of the sequences provided by Boost.Fusion. For example:

    #include <boost/python.hpp>
    #include <boost/fusion/adapted/std_pair.hpp>
    #include <boost/fusion/adapted/boost_tuple.hpp>
    #include <boost/fusion/container/generation/make_vector.hpp>
    #include "make_tuple_from_fusion_sequence.hpp"

    using namespace boost::python;

    tuple from_sequence()
    {
        return make_tuple_from_fusion(
            boost::fusion::make_vector(
                1,
                std::make_pair("first", "second"),
                2,
                boost::make_tuple('a', 'b', 'c'),
                3));
    }

    BOOST_PYTHON_MODULE(mymodule)
    {
        def("from_sequence", &from_sequence);
    }

In Python we get:

    >>> import mymodule
    >>> mymodule.from_sequence()
    (1, ('first', 'second'), 2, ('a', 'b', 'c'), 3)

Is there any interest in adding this function into Boost.Python? If yes, I can write the doc and tests, clean the source and maybe improve the implementation (for example, I feel that I could avoid the use of m_iteration with a better use of Boost.Fusion...).

Regards
Bruno

-------------- next part --------------
A non-text attachment was scrubbed...
Name: make_tuple_from_fusion.hpp
Type: text/x-c++hdr
Size: 1356 bytes
Desc: not available
URL: <>
https://mail.python.org/pipermail/cplusplus-sig/2009-February/014293.html
As a software developer I like to work with everything related to software localization, known as L10n. Besides being a developer, defining the architecture that will be adopted in a given project and doing the hard “FUN” work of writing the code, I’m also a translator, if you didn't know that yet.

One thing I've been trying to do recently is to use localized strings that are present in an external assembly [ DLL ] using the ResourceManager object. I have localized strings in resource [ .resx ] files that are specific to each locale I support. I place these .resx files in a separate class library project to keep things organized.

So, suppose the namespace of this class library is MyProject.L10n and the .resx file name is Localization.resx. This gives me access to a class named Localization within the code. I also have Localization.pt.resx. I support English and Portuguese locales in my project for now. This naming pattern allows me to have in the future a file called Localization.es-ES.resx for Castilian Spanish (as written and spoken in Spain) and another one called Localization.es-AR.resx for Argentine Spanish. During runtime the .NET framework will select the correct .resx file to extract the localized string from, based on the current culture the user has set while browsing my website.

After adding a reference to this class library, I'm able to use this code in my ASP.NET MVC project in a Razor view:

    MyProject.L10n.Localization.LocalizedString;

This works as expected, but it's not quite what I need. As you can see, the localized string key [ LocalizedString ] is hard coded.
I want to be able to use the GetString method of the ResourceManager object so that I can write code like this:

    ResourceManager.GetString(item.DynamicLocalizedStringValue);

The problem, and the catch here, is that in order to use the resource manager the way I want, I have to point it to the external assembly this way:

    grid.Column(
        columnName: "Type",
        header: Localization.Type,
        format: (item) => new ResourceManager("MyProject.L10n.Localization", typeof(Localization).Assembly).GetString(item.Type.ToString()))

This part does the trick:

    typeof(Localization).Assembly

In the code block above I’m using WebGrid, a new helper that comes with ASP.NET MVC 3; it simplifies the task of rendering tabular data. When I do item.Type.ToString() I’m actually getting different values for each row of my grid, and I pass this dynamic value to ResourceManager, which in return gives me the translated/localized version of a given string key.

Going even further, I’ve implemented a Razor helper method in a file called Helpers.cshtml and placed that file inside the App_Code folder. This is the helper’s code:

    @using System.Resources
    @using MyProject.L10n

    @helper GetLocalizedString(string stringValue)
    {
        ResourceManager rm = new ResourceManager("MyProject.L10n.Localization", typeof(Localization).Assembly);
        @rm.GetString(stringValue);
    }

Now it’s just a matter of calling the helper this way in whatever place/view I need it:

    grid.Column(
        columnName: "Type",
        header: Localization.Type,
        format: (item) => @Helpers.GetLocalizedString(item.Type.ToString()))

The above code is much clearer than the one I showed you before… Hope this post helps shed some light on this subject, since the only thing that needs to be done is to get a reference to the assembly that holds the Localization class and pass it to the ResourceManager’s constructor.
http://www.leniel.net/2011/10/resourcemanager-external-assembly-dll.html
Details

- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 5.3.1, 5.3.2, 5.4.0, 5.4.1
- Fix Version/s: NEEDS_REVIEW
- Labels: None
- Environment: Java 64-bit, Windows 2008 Server, CentOS 5 64-bit, Ubuntu 64-bit, OS X 10.5, 10.6

Description

I have created two classes, test.PutMessages and test.ReadMessages, that show this problem.

Steps to reproduce:

1) Start a 5.3.0 broker.
2) Start two message readers for two different correlations on the same queue:
   java -cp <yourclasspath> test.ReadMessages tcp://localhost:61616 TestQueue ForReader1
   java -cp <yourclasspath> test.ReadMessages tcp://localhost:61616 TestQueue ForReader2
3) Start two message producers for the two different correlations:
   java -cp <yourclasspath> test.PutMessages tcp://localhost:61616 TestQueue ForReader1
   java -cp <yourclasspath> test.PutMessages tcp://localhost:61616 TestQueue ForReader2
4) Looking at the output of the readers you started in step 2, you will see both read the messages for their correlation, with the time on the broker at about 1ms.
5) Stop the reader ForReader1; you will notice that the program ForReader2 is unaffected. Messages with correlation "ForReader1" back up on the queue, and the program ForReader2 continues reading normally.
6) Stop all classes, and stop the 5.3.0 broker. Start a 5.3.2 broker.
7) Repeat steps 1-5, except you'll notice that once you stop ForReader1, ForReader2 is affected, which it shouldn't be. ForReader2 will basically stop being able to read messages until you start ForReader1 again. ForReader2 will occasionally get messages, but incredibly slowly, and performance is ruined.
    package test;

    import java.net.*;
    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    /**
     * @author bwillard
     */
    public class PutMessages extends Thread {

        final private MessageProducer producer;
        final private String correlationID;
        final private Session session;

        // NOTE: the constructor body was garbled in the extracted attachment
        // ("public);"); the setup below is reconstructed from the class
        // fields and the usage in main().
        public PutMessages(URI uri, String queueName, String correlationID) throws Exception {
            this.correlationID = correlationID;
            Connection connection = new ActiveMQConnectionFactory(uri).createConnection();
            connection.start();
            session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue(queueName);
            producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        }

        public void run() {
            ObjectMessage message;
            String text;
            long counter = 0;
            while (true) {
                try {
                    Thread.sleep(5);
                    counter++;
                    message = session.createObjectMessage();
                    message.setJMSCorrelationID(correlationID);
                    text = "Message " + counter + " for consumer " + correlationID;
                    message.setObject(text);
                    producer.send(message);
                } catch (Exception exc) {
                    System.err.println("Error sending message");
                    exc.printStackTrace(System.err);
                }
            }
        }

        public static void main(String[] args) {
            try {
                URI uri = URI.create(args[0]);
                String queueName = args[1];
                String correlationID = args[2];
                new PutMessages(uri, queueName, correlationID).start();
            } catch (Exception exc) {
                exc.printStackTrace();
            }
        }
    }

    package test;

    import java.net.*;
    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    /**
     * @author bwillard
     */
    public class ReadMessages implements MessageListener {

        final private MessageConsumer consumer;
        final private String correlationID;
        final private Session session;

        // NOTE: as above, the constructor body is reconstructed.
        public ReadMessages(URI uri, String queueName, String correlationID) throws Exception {
            this.correlationID = correlationID;
            Connection connection = new ActiveMQConnectionFactory(uri).createConnection();
            connection.start();
            session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue(queueName);
            consumer = session.createConsumer(queue, "JMSCorrelationID='" + correlationID + "'");
            consumer.setMessageListener(this);
        }

        public void onMessage(Message msg) {
            long inTime, outTime, brokerTime;
            try {
                if (msg instanceof ObjectMessage) {
                    ObjectMessage txt = (ObjectMessage) msg;
                    inTime = txt.getLongProperty("JMSActiveMQBrokerInTime");
                    outTime = txt.getLongProperty("JMSActiveMQBrokerOutTime");
                    brokerTime = outTime - inTime;
                    System.out.println("Message waited " + brokerTime + "ms : " + txt.getObject().toString());
                }
            } catch (Exception exc) {
                System.err.println("Error reading message");
                exc.printStackTrace();
            }
        }

        public static void main(String[] args) {
            try {
                URI uri = URI.create(args[0]);
                String queueName = args[1];
                String correlationID = args[2];
                new ReadMessages(uri, queueName, correlationID);
            } catch (Exception exc) {
                exc.printStackTrace();
            }
        }
    }

I updated the issue because it's a major performance issue that also exists in 5.3.2 when using message selectors.

This is the config file I used in both brokers to show the problem.

Source files attached instead of pasted into a comment, sorry.

Has anyone had a chance to verify this is a real problem, or whether there is something wrong with my config? I am unable to upgrade past broker 5.3.0 because of it, and really want to be able to upgrade to resolve other issues I've been seeing. I also want to make sure this isn't also an issue in the 5.4 broker due out.

Confirmed this is still an issue in the 5.4 snapshot. Modifications to the 5.4.0 default config file still show the problem when running the provided sample code.

This is another case of - there are some strategies using named queues or virtual queues that can help, as outlined in the comments on AMQ-2217.

5.3.0 had a bug in this regard that could lead to an out-of-memory exception, as there was no limit on the size of the in-memory dispatch queue; it was as if maxPageSize == MAX_INT. With a very sparse selector, the broker would exhaust available memory. Set maxPageSize to MAX_INT to replicate.

Relates to the same issue of maxPageSize.

It would be great if you could provide a test case for this so we can see your configuration and setup. Of interest is the maxPageSize for the queue and the distribution of messages across the two selectors.
https://issues.apache.org/jira/browse/AMQ-2745
Wendy Chien commented on HADOOP-438:
------------------------------------

To fix this we're going to enforce a pathname limit in the Path class constructor (8K in length and 1K in depth). The constructor will throw an exception, which will be passed back to the client.

> DFS pathname limitation.
> ------------------------
>
> Key: HADOOP-438
> URL:
> Project: Hadoop
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.5.0, 0.4.0, 0.3.2, 0.3.1, 0.3.0, 0.2.1, 0.2.0, 0.1.1, 0.1.0
> Reporter: Konstantin Shvachko
> Fix For: 0.6.0
>
> I was trying to create a deep hierarchy of directories using DFS mkdirs().
> When the path to the leaf directory became long (~20000), DFS was still able to create
> directories with these names, but UTF8 started truncating long strings, resulting in
> incorrect logging of namespace edits. That later crashed the namenode during restart,
> when it was trying to reproduce file creation logged in the edits file with truncated names.
> UTF8 is deprecated now, so we will have to replace it with Text.
> With UTF8 we should enforce a pathname limit of 0xffff/3 = 21845.
> With Text it is going to be larger. Not sure what the exact number is.

--
This message is automatically generated by JIRA.
- If you think it was sent incorrectly contact one of the administrators:
- For more information on JIRA, see:
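The proposed constructor check can be illustrated with a hedged sketch (the class name, limits and exception type below are hypothetical, not the actual Hadoop Path code):

```java
// Hypothetical sketch of enforcing pathname limits in a constructor,
// in the spirit of the HADOOP-438 proposal; NOT the real
// org.apache.hadoop.fs.Path implementation.
public class BoundedPath {
    static final int MAX_LENGTH = 8 * 1024; // 8K characters
    static final int MAX_DEPTH  = 1024;     // 1K components

    private final String path;

    public BoundedPath(String path) {
        if (path.length() > MAX_LENGTH) {
            throw new IllegalArgumentException(
                "pathname too long: " + path.length() + " > " + MAX_LENGTH);
        }
        if (countComponents(path) > MAX_DEPTH) {
            throw new IllegalArgumentException(
                "pathname too deep: more than " + MAX_DEPTH + " components");
        }
        this.path = path;
    }

    // Depth = number of non-empty "/"-separated components.
    private static int countComponents(String p) {
        int depth = 0;
        for (String component : p.split("/")) {
            if (!component.isEmpty()) {
                depth++;
            }
        }
        return depth;
    }

    public int depth() {
        return countComponents(path);
    }
}
```

Because the check runs in the constructor, an over-long path fails immediately on the client side instead of being silently truncated later by UTF8 serialization.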
http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/200609.mbox/%3C149186.1157592383316.JavaMail.jira@brutus%3E
What is a Proposal?

"Proposals" definition: Proposals detail ideas, brainstorms or potential solutions to a known issue or problem in the code, community or process. Proposals may recommend changes we could make that may help us resolve issues we've encountered in the past. A few examples: a proposal could recommend accepting or adding a new technology, changing an API, or changing how we interact as a team or plan new releases of the software.

"Special Topics" versus "Proposals"

- Special Topics are higher level topics/questions/problems which need to be addressed or discussed. They may include some details or background about the problem domain, but should not suggest any particular solutions to the problem.
- Development Proposals should present potential solutions to the problem. One Special Topic (e.g. "How can we better handle Preservation Metadata?") may result in many Proposals (e.g. "Option #1", "Option #2", "Combo of Option #1 and #2", etc.)

Overview: Adding a new Proposal

This page is a place to post new proposals for review by the DSpace Community and/or Developers. New proposals can recommend any sort of change to solve an existing or perceived problem/issue. A few examples: a proposal could recommend accepting or adding a new technology, changing an API, or changing how we interact as a team or plan new releases of the software.

To add a new proposal:

- Please create a sub-page for the details of your proposal. In your details you should describe the reasons behind your proposal. Your proposal need not be fleshed out completely – even initial ideas are welcome, especially if they can start to answer some basic questions:
  - Why do you feel this will help DSpace Software or our Community as a whole?
  - What problem(s)/issue(s) are you trying to solve?
  - How do you feel your proposed solution could resolve these problems/issues?
- Once your proposal is ready for review/comments, add the appropriate label to classify your proposal below; by including an {excerpt} macro in your proposal page you can include a description of your proposal below.
- Feel free to add additional proposal categories below if your proposal does not fit into an existing category.
- Announce that you have a new proposal posted for comment/review. The best way to announce this is via either a mailing list, or at a DSpace developers meeting (if it is a technology proposal).

To comment on an existing proposal

- Visit the proposal page and either comment on the proposal directly, or add your thoughts on the "discussion" page.
- Please keep comments constructive. Give reasons why you agree/disagree to help us improve upon the proposal.

Development Proposals

Place for proposals about underlying development and technology changes in the software (e.g. API changes, architecture/design changes, etc.). Technology changes may need to be sub-categorized if we start to receive many different levels of proposals. To include your proposal here, create a new sub-page under this page.

- Metadata For All
- Hierarchical Metadata Support - LOM and MODS
- Google Analytics Statistics in DSpace
- Linked Open Data for DSpace — EPrints supports VOID and LoD, so should we.
- ORCID Integration
- Proposal to Update DC Registry and Add DCTERMS Registry
- Installer Brainstorms
- Proposal For Metadata Enhancement
- Item Versioning Support
- Metadata enhancement work or proposals summary — summary page of the metadata projects/priorities identified by the October 2011 community survey on improving metadata support, authored by the DSpace Community Advisory Team at the request of the DSpace Committers/Developers
- i18n Improvements Proposal
- Adding metadata authority controls and vocabularies to the data model — Open up rights to use a controlled vocabulary and link from an external source
- Enhancing the metadata available for Communities, Collections and Files — allow metadata not just at the Item level
- Improved or more transparent metadata flexibility — Simplify/make local customizations more accessible through the UI and expose RDF triples
- Develop support for additional metadata standards — Use the "Other" field to specify which standards
- Moving metadata related configurations from dspace.cfg to the database — Improve the verification/safety measures when editing/removing metadata fields
- Fedora Inside
- Standardizing the default namespaces — Currently, instead of creating a customized metadata schema, some DSpace repository managers edit the default registry, effectively breaking compliance with standard Dublin Core. This can create a problem for the portability of data to/from your repository. It has been proposed that in the future DSpace would include 3 different metadata schemas, to ensure that the metadata will be easily portable to other systems
- Refactor Packagers to support Chain of Command
- Maven Project Consolidation
- Refactoring the DSpace Domain Model
- Refactoring MediaFilterManager for greater reuse and flexibility
- Database Persistence of Configuration State — Design Direction Proposal to work on the elimination of configuration files in favor of storing configuration in the database wherever possible.
- VOID LoD Endpoint Descriptor — EPrints supports VOID and LoD, so should we?
- Migrate Search and Browse to DSpace Discovery — This is a proposal to replace the DSpace Search and Browse implementations completely with Solr
- Upgrade Process Improvements — Improve the upgrade process with tools that complete and test upgrades.
1 Comment

Bram Luyten (Atmire)

During the OR12 developer meeting it was argued that although this page already gives a good overview of ongoing development, it would be great to see which of these are actively being worked on at the moment (with any concrete target goals/dates where possible), and which ones are lingering. Given that the current listing displays the titles of sub-pages and their excerpts, would there be a way to include the last-modified dates of those pages? The maintainer of the page then only needs to take care that any recent developments are always logged on that page to keep their modified dates fresh in the listing. According to the documentation for the children macro, it should be possible to sort the listing by modified date. Trying this now. The original, alphabetically ordered list is still available at the bottom of the page, where you see the child pages again.
https://wiki.duraspace.org/display/DSPACE/Development+Proposals
Contents

- Introduction
- 1. How evolutionary algorithms first appeared
- 2. Evolutionary algorithms (methods)
- 3. Genetic algorithms (GA)
  - 3.1. Field of application
  - 3.2. Problems being solved
  - 3.3. Classic GA
  - 3.4. Search strategies
  - 3.5. Difference from the classic search of optimum
  - 3.6. Terminology of GA
- 4. Advantages of GA
- 5. Disadvantages of GA
- 6. Experimental part
  - 6.1. Search for the best combination of predictors with tabuSearch
  - 6.2. Search for the best parameters of TS
- 7. Ways and methods of improving qualitative characteristics
- Conclusion

Introduction

Many traders have long realized the necessity of a self-optimization that doesn't require the Expert Advisor to stop trading. There have been several related articles (article1, article2, article3, article4) published already. The experiments in these articles are based on the library suggested here. However, since they were published, new powerful optimization algorithms and new ways of applying them have emerged. In addition, I believe that these articles were written by programmers and were intended for programmers. In this article, I will try to shed light on the practical application of genetic algorithms (GA) without going deep into technical details, aiming specifically at traders. For users like me, it is important to know the principle of GA operation, the importance and value of the parameters that affect the quality and speed of GA convergence, and its other utilitarian properties. Therefore, I may repeat myself, but, nevertheless, I will begin with the history of GA's appearance, proceed with its description and get to identifying the parameters of the improved GA. We will also compare GA with some other evolutionary algorithms.

1. How evolutionary algorithms first appeared

The history of evolutionary calculations began with the development of various independent models.
These were mainly genetic algorithms and Holland's classifier systems, published in the early 60s; they gained universal recognition after the book "Adaptation in Natural and Artificial Systems" was released in 1975, becoming a classic in its field. In the 70s, within the random search framework, L.A. Rastrigin introduced some algorithms that used ideas of bionic behavior. The development of these ideas found reflection in the series of works by I.L. Bukatova on evolutionary modeling. Developing the ideas of M.L. Tsetlin about advisable and optimal behavior of stochastic automata, Y.I. Neumark suggested searching for a global extremum based on a team of independent automata that model the processes of development and elimination of species. Fogel and Walsh also contributed greatly to the development of evolutionary programming. Despite the difference in approaches, each of these "schools" based its strategy on a few principles that exist in nature and simplified them so they could be utilized on a computer.

Efforts focused on modeling evolution by analogy with systems of nature can be broken down into two major categories:

- Systems modeled on biological principles. They have been successfully utilized for function optimization tasks, and can be easily described in non-biological language.
- Systems that appear more realistic from a biological perspective, but are not particularly useful in an applied sense. They more closely resemble biological systems and are less directed (or not directed at all). They have complicated and interesting behavior and, apparently, will shortly find practical use.

Certainly, we cannot divide these aspects so strictly in practice. These categories are in fact two poles, with various computer systems lying between them. Closer to the first pole there are evolutionary algorithms such as Evolutionary Programming, Genetic Algorithms and Evolution Strategies, for example.
Closer to the second pole there are systems that can be classified as Artificial Life.

2. Evolutionary algorithms (methods)

Evolutionary algorithms are a division of artificial intelligence (the field of evolutionary modeling) that uses and models the processes of natural selection. Some of them:

- genetic algorithms — a heuristic search algorithm used for optimization and modeling through random selection, combination and variation of the desired parameters;
- genetic programming — automated creation or change of programs using genetic algorithms;
- evolutionary programming — similar to genetic programming, but the program's structure is fixed; only numeric values are subject to change;
- evolutionary strategies — resemble genetic algorithms, but only positive mutations are transmitted to the next generation;
- differential evolution;
- neuroevolution — similar to genetic programming, but the genomes are artificial neural networks, where evolution of weights occurs at a specified network topology, or, besides the evolution of weights, topology evolution is also carried out.

They all model the basic elements of the theory of biological evolution — the processes of selection, crossbreeding, mutation and reproduction. The behavior of species is determined by the environment. A set of species is referred to as a population. A population evolves according to rules of selection in line with a target function set by the environment. This way, every specimen (individual) in the population has its value in the environment assigned. Only the fittest species multiply. Recombination and mutation allow individuals to change and adjust to the environment. Such algorithms are referred to as adaptive search mechanisms.

Evolutionary methods (EM) are approximate (heuristic) methods for solving tasks of optimization and structure synthesis. Most EM are based on a statistical approach to researching situations and iterative approximation to the desired solution.
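The evolutionary-strategy idea mentioned above, keeping a mutation only when it does not worsen fitness, can be sketched in a few lines (a toy illustration only; it is not taken from any of the libraries mentioned in this article):

```python
import random

def one_plus_one_es(fitness, x0, sigma=0.3, steps=500, seed=7):
    """(1+1) evolution strategy: a single parent produces one mutated
    child per step; the child replaces the parent only if it is at
    least as fit (only 'positive' mutations survive)."""
    rng = random.Random(seed)
    parent = x0
    for _ in range(steps):
        child = parent + rng.gauss(0.0, sigma)  # mutation
        if fitness(child) >= fitness(parent):   # selection
            parent = child
    return parent

# Maximize a simple unimodal target whose optimum is at x = 2.
best = one_plus_one_es(lambda x: -(x - 2.0) ** 2, x0=8.0)
```

Because rejected mutations are simply discarded, the search can only move toward better fitness; the price is that, unlike simulated annealing, it has no mechanism for escaping a local optimum.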
Evolutionary calculations constitute one of the fields of artificial intelligence. When creating artificial intelligence systems using this approach, the emphasis is on building the initial model and the rules by which it can change (evolve). The model can be created using various methods: for example, it can be a neural network or a set of logical rules. Simulated annealing, genetic algorithms, PSO, ACO and genetic programming are among the main evolutionary methods.

Unlike precise methods of mathematical programming, evolutionary methods allow finding solutions close to optimal within reasonable time, and unlike other heuristic optimization methods they are characterized by a considerably smaller reliance on the features of the application (i.e., they are more universal), and in the majority of cases provide a better degree of approximation to the optimal solution. The universality of EM is also determined by their applicability to tasks with non-metrizable spaces of managed variables (meaning there may be linguistic values, i.e. values with no quantifiable measure, among the managed variables).

In simulated annealing, the process of minimizing the potential energy of a body during the annealing of parts is imitated. A change of some managed parameters takes place at the current point. A new point is always accepted when the target function improves and, with small probability, when it worsens.

The most important case of EM involves genetic methods and algorithms. Genetic algorithms (GA) are based on searching for the best solutions using inheritance and strengthening of useful features of multiple objects of a specific application in the process of imitating their evolution. The properties of objects are represented by values of parameters combined into a record that in EM is called a chromosome. GA operates on subsets of chromosomes called a population.
The imitation of genetic principles — stochastic selection of parents among members of the population, chromosomal crossover, selection of children to be included into the new generation of objects based on evaluation of the target function — leads to evolutionary improvement of the values of the target function (utility function) from generation to generation.

Among EM there are also methods that, unlike GA, operate on a single chromosome instead of many. For example, the discrete local search method called hill climbing is based on random changes of separate parameters (values of fields in a record or, in other words, values of genes in a chromosome). Such changes are called mutations. After every mutation, the fitness function is evaluated; the result of a mutation is saved in the chromosome only if fitness improved. In simulated annealing, the mutation result is saved with a certain probability that depends on the obtained value. In the Particle Swarm Optimization method, the behavior of multiple agents that aim to align their state with the state of the best agent is imitated. The ACO method is based on imitating the behavior of ants, which shorten their routes from a food source to their anthill.

3. Genetic algorithms (GA)

Genetic algorithms are adaptive search methods that have lately been commonly used for solving tasks of functional optimization. They are based on the genetic processes of biological organisms: biological populations evolve over several generations, following the rules of natural selection and the principle of survival of the fittest, as discovered by Charles Darwin. By imitating this process, genetic algorithms are capable of evolving solutions to real tasks, if they are coded appropriately.

In nature, species in a population compete against one another for various resources, such as food and water. Furthermore, population members of the same kind frequently compete for attracting a partner.
The individuals best adapted to their surroundings have better chances to produce offspring. Poorly adapted individuals either produce no offspring or only a few. This means that the genes of well-adapted individuals will spread, with a growing number of descendants in every new generation. A combination of successful traits from different parents can sometimes produce an "over-adapted" descendant who is fitter than either parent. In this way the species develops and adapts to its environment ever better. Genetic algorithms draw an analogy with this mechanism. They operate on a set of "individuals" — a population — each of which represents a possible solution to the problem. Each individual is evaluated by a measure of "fitness" according to how "good" the corresponding solution is. The fittest individuals get the opportunity to "produce" offspring by "crossbreeding" with other individuals of the population. This leads to the appearance of new individuals that combine some traits inherited from their parents. The least fit individuals are less likely to produce children, so the traits they carry gradually disappear from the population in the course of evolution. This is how a whole new population of feasible solutions is reproduced: by selecting the best representatives of the previous generation, crossbreeding them and obtaining new individuals. The new generation contains a higher proportion of the traits that the "good" members of the previous generation possess; in this way, good traits spread through the whole population from generation to generation. Crossbreeding the fittest individuals means that the most promising areas of the search space are explored. Ultimately, the population converges to an optimal solution. There are different ways of implementing the ideas of biological evolution within GA.

3.1.
Field of application

The real-world problems being solved can be viewed as a search for an optimal value, where the value is a complicated function of several input parameters. In some cases we are interested in the parameter values at which the function attains its exact optimum. In other cases the exact optimum is not required, and any value better than a given threshold can be considered a solution. In such cases genetic algorithms are often the most suitable method for searching for "good" values. The strength of a genetic algorithm lies in its ability to manipulate many parameters simultaneously. This feature of GA has been used in hundreds of applications, including aircraft design, tuning algorithm parameters and searching for stable states of systems of nonlinear differential equations. However, there are cases where GA does not perform as efficiently as expected. Suppose there is a real problem that involves searching for an optimal solution. How do we tell whether it is suitable for GA? There is no rigorous answer to this question yet. However, many researchers share the assumption that GA has good chances of being an efficient search procedure in the following cases:
- if the search space to be explored is large, and it is assumed not to be perfectly smooth and unimodal (i.e. it does not contain a single smooth extremum);
- if the fitness function is noisy;
- or if the task does not require the global optimum to be found strictly.
In other words, in situations when it is enough to find an acceptably "good" solution (which is quite common with real problems), GA competes with, and beats, other methods that do not use knowledge of the search space. If the search space is small, the solution can be found by exhaustive search, and you can rest assured that the best possible solution has been found, whereas GA would most likely converge to a local optimum rather than the globally better solution.
If the space is smooth and unimodal, any gradient algorithm (for example, steepest descent) will be more effective than GA. If there is additional information about the search space (as, for instance, for the well-known traveling salesman problem), search methods that use domain-specific heuristics frequently outperform any universal method such as GA. Given a fitness function of relatively complicated structure, search methods that keep a single solution at a time, such as simple descent, can get trapped in a local optimum. However, it is believed that genetic algorithms, since they work with a whole "population" of solutions, have a smaller chance of converging to a local optimum and operate reliably on a multi-extremal landscape. Of course, such assumptions do not strictly predict when GA will be an efficient search procedure competing with other procedures. The efficiency of GA strongly depends on details such as the encoding of solutions, the operators, the parameter settings, and the particular criterion of success. The theoretical work documented so far in the literature on genetic algorithms does not yet provide grounds for rigorous mechanisms of accurate prediction.

3.2. Problems being solved

Genetic algorithms are applied for solving the following tasks:
- function optimization;
- query optimization in databases;
- various problems on graphs (traveling salesman problem, graph coloring, finding matchings);
- tuning and training artificial neural networks;
- layout problems;
- scheduling;
- game strategies;
- approximation theory.

3.3. Classic GA

Operation of the simple GA. The simple GA randomly generates an initial population of structures. GA operation is an iterative process that continues until a set number of generations is reached or some other termination criterion is met.
Fitness-proportional selection, one-point crossover and mutation are applied in every generation of the algorithm. First, proportional selection assigns to each structure a probability Ps(i) equal to the ratio of its fitness to the total fitness of the population. Then all n individuals are selected (with replacement) for further genetic processing according to the value of Ps(i). The simplest proportional selection is roulette-wheel selection (Goldberg, 1989c): individuals are selected with n "spins" of a roulette wheel. The wheel contains one sector per member of the population, and the size of the i-th sector is proportional to the corresponding value Ps(i). With this selection, fitter members of the population are more likely to be chosen than less fit ones. After selection, the n selected individuals undergo crossover (sometimes called recombination) with a given probability Pc. The n strings are randomly split into n/2 pairs. For each pair, crossover is applied with probability Pc; accordingly, with probability 1-Pc crossover does not occur, and the two individuals pass unchanged to the mutation stage. If crossover does occur, the resulting children replace their parents and proceed to mutation. One-point crossover works as follows. First, one of the l-1 break points is selected at random. (A break point is the position between adjacent bits in the string.) Both parent structures are split into two segments at this point; then the corresponding segments of the different parents are concatenated, producing two child genotypes. After the crossover stage, the mutation operator is applied: in every string subjected to mutation, each bit is flipped to the opposite value with probability Pm. The population obtained after mutation overwrites the old one, which completes the loop of one generation. Subsequent generations are processed in the same way: selection, crossover, mutation.
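The generation loop just described (proportional selection, one-point crossover with probability Pc, bit-flip mutation with probability Pm) can be sketched in a language-agnostic way. The Python sketch below uses our own names (`roulette`, `next_generation`) and a toy OneMax fitness; it is an illustration, not any package's API:

```python
import random

def roulette(pop, fit, rng):
    """Fitness-proportional selection: pick one individual with
    probability fit_i / sum(fit) (one 'spin' of the wheel)."""
    total = sum(fit)
    r = rng.uniform(0, total)
    acc = 0.0
    for ind, f in zip(pop, fit):
        acc += f
        if r <= acc:
            return ind
    return pop[-1]

def next_generation(pop, fitness, pc=0.7, pm=0.01, rng=None):
    """One generation of the simple GA: proportional selection,
    one-point crossover with probability pc, bit-flip mutation with pm."""
    rng = rng or random.Random()
    fit = [fitness(ind) for ind in pop]
    selected = [roulette(pop, fit, rng) for _ in pop]
    children = []
    for a, b in zip(selected[::2], selected[1::2]):
        if rng.random() < pc:
            cut = rng.randrange(1, len(a))   # one of the l-1 break points
            a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
        children += [a, b]
    # Bit-flip mutation with probability pm per gene
    return [[g ^ 1 if rng.random() < pm else g for g in ind]
            for ind in children]

# Evolve OneMax (maximize the number of ones) for a few generations
rng = random.Random(0)
pop = [[rng.randint(0, 1) for _ in range(16)] for _ in range(20)]
for _ in range(30):
    pop = next_generation(pop, sum, rng=rng)
best = max(sum(ind) for ind in pop)
```

Note that the weak selection pressure of plain proportional selection is visible even on this toy problem, which is one motivation for the tournament and elitist schemes discussed next.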
Researchers of GA have by now proposed many other selection, crossover and mutation operators. The most common are listed below. First of all, tournament selection (Brindle, 1981; Goldberg and Deb, 1991). It runs n tournaments to select n individuals. Every tournament samples k elements from the population and selects the best of them; tournament selection with k=2 is the most common. Elitist selection methods (De Jong, 1975) guarantee that the best members of the population survive selection. The most common scheme compulsorily retains a single best individual if it did not pass through selection, crossover and mutation like the rest. Elitism can be combined with almost any standard selection method. Two-point crossover (Cavicchio, 1970; Goldberg, 1989c) and uniform crossover (Syswerda, 1989) are decent alternatives to the one-point operator. In two-point crossover, two break points are selected, and the parent chromosomes exchange the segment that lies between them. In uniform crossover, each bit of the first parent is inherited by the first child with a set probability; otherwise that bit goes to the second child, and vice versa.

Basic operators of a genetic algorithm

Selection operator. At this stage the fittest part of the population is selected for further reproduction; usually a fixed number of the fittest individuals is taken. It makes sense to drop "clones", i.e. individuals with an identical set of genes.

Crossbreeding operator. Most often, crossbreeding is performed over the two best individuals. The usual result is two individuals with components taken from their "parents". The goal of this operator is to spread "good" genes across the population and to pull the population together toward areas where its density is already high.

Mutation operator. The mutation operator simply changes an arbitrary number of elements of an individual to other arbitrary values.
In fact, it is a dissipative element: on the one hand it pulls the search out of local extrema, on the other it brings new information into the population.
- For a binary attribute, it inverts a bit.
- For a numeric attribute, it changes the value to another (most likely a neighboring) value.
- For a nominal attribute, it replaces the value with a different one.

Stop criteria:
- finding a global or suboptimal solution;
- reaching a "plateau";
- exhausting the number of generations allotted for evolution;
- exhausting the time allotted for evolution;
- exhausting the specified number of calls to the target function.

3.4. Search strategies

Search is one of the universal ways of finding a solution when the sequence of steps leading to the optimum is unknown. There are two search strategies: exploitation of the best solution, and exploration of the solution space. The gradient method is an example of a strategy that exploits the best solution for possible improvement while ignoring exploration of the rest of the search area. Random search is an example of the opposite strategy: it explores the solution space while ignoring the promising regions of the search area. A genetic algorithm is a class of general-purpose search methods that combine elements of both strategies; applying them keeps an acceptable balance between exploration and exploitation of the best solution. At the start of a genetic algorithm's operation, the population is random and contains diverse elements, so the crossbreeding operator performs a broad exploration of the solution space. As the fitness of the obtained solutions grows, the crossbreeding operator explores the neighborhood of each of them. In other words, the type of search strategy (exploitation of the best solution or exploration of the solution space) implemented by the crossbreeding operator is determined by the diversity of the population, not by the operator itself.

3.5.
Difference from the classic search for an optimum

In general, an algorithm for solving optimization problems is a sequence of computational steps that asymptotically converge to the optimal solution. Most classic optimization methods generate a deterministic sequence of computations based on the gradient or higher-order derivatives of the target function. These methods start from a single point of the search area; the solution is then gradually improved in the direction of fastest increase or decrease of the target function. With such a local approach there is a risk of getting stuck in a local optimum. A genetic algorithm searches in several directions simultaneously by using a population of possible solutions, and the transition from one population to the next avoids getting stuck in a local optimum. The population undergoes something similar to evolution: in every generation, relatively good solutions are reproduced while relatively bad ones die out. Genetic algorithms use probabilistic rules to determine which chromosome to reproduce or destroy, in order to direct the search toward areas of probable improvement of the target function. Many genetic algorithms have been implemented in recent years, and in most cases they differ drastically from the initial classic algorithm. For this reason the term "genetic algorithms" now describes not one model but a wide range of algorithm classes that sometimes bear little resemblance to each other. Researchers have experimented with various types of representations, crossover and mutation operators, special operators, and various approaches to reproduction and selection. Although the model of evolutionary development used in GA is greatly simplified in comparison with its natural analog, GA is nevertheless a powerful tool that can be successfully applied to a wide class of problems, including those that are difficult, and sometimes impossible, to solve by other methods.
However, GA, like other methods of evolutionary computation, does not guarantee finding the global solution in polynomial time; nor does it guarantee that the global solution will be found at all. Genetic algorithms are good at finding a "reasonably good" solution "reasonably fast". Where specialized methods exist for finding a solution, they will almost always beat GA in both speed and accuracy of the found solutions. The main advantage of GAs is that they can be applied even to complicated problems for which no specialized methods exist; and even where existing methods work well, GAs can be used for further improvement.

3.6. Terminology of GA

Since GA derives both from natural science (genetics) and from computer science, the terminology used is a mixture of natural and artificial terms. Terms relating GA to the solution of optimization problems are given in Table 1.1.

Classic (one-point) crossover. The "traditional" genetic algorithm uses one-point crossover, in which the two chromosomes are cut once at a chosen point and the resulting parts are then exchanged. Various other algorithms with other types of crossover, often involving more than one cut point, have also been devised. De Jong investigated the efficiency of multi-point crossover and concluded that two-point crossover gives an improvement, but adding further crossover points reduces the performance of the genetic algorithm. The problem with adding extra crossover points is that building blocks are more likely to be disrupted; the advantage of multiple crossover points, however, is that the state space can be explored more thoroughly.

Two-point crossover. In two-point crossover (and multi-point crossover in general), chromosomes are regarded as loops formed by joining the ends of the linear chromosome together.
To exchange a segment of one loop with a segment of another loop, two cut points must be selected. In this representation, one-point crossover can be viewed as two-point crossover with one of the cut points fixed at the start of the string. Hence two-point crossover solves the same task as one-point crossover, but more completely. A chromosome regarded as a loop can contain more building blocks, because they can "wrap around" at the end of the string. Many researchers now agree that, in general, two-point crossover is better than one-point crossover.

Uniform (homogeneous) crossover. Uniform crossover is fundamentally different from one-point crossover. Each gene of the offspring is created by copying the corresponding gene from one parent or the other, chosen according to a randomly generated crossover mask. Where the mask has a 1, the gene is copied from the first parent; where it has a 0, from the second parent. To create the second offspring the process is repeated with the parents swapped. A new crossover mask is randomly generated for each pair of parents.

Differential crossover. Apart from crossover, there are other crossbreeding methods. For instance, for searching for a minimum/maximum of a function of many real variables, "differential crossbreeding" is the most successful. We will briefly describe its concept. Let a and b be two individuals in the population, i.e. real-valued vectors on which our function depends. Then a child c is computed by the formula c = a + k*(a - b), where k is a certain real coefficient (which may depend on ||a - b||, the distance between the vectors). Mutation in this model is the addition of a short random vector to an individual. If the objective function is continuous, this model works well; it is even better if the function is smooth.

Inversion and reordering.
The order of genes in a chromosome is often critical for the building blocks that allow the algorithm to operate efficiently. Methods have been suggested for reordering the positions of genes in the chromosome during the run. One of them is inversion, which reverses the order of genes between two randomly selected positions in the chromosome. (When these methods are used, genes must carry some kind of "tag" so that they can be correctly identified regardless of their position in the chromosome.) The goal of reordering is to try to find a gene ordering with better evolutionary potential. Many researchers have applied inversion in their work, although it seems that few have tried to justify it or quantify its contribution. Goldberg and Bridges analyzed a reordering operator on a very small task and showed that it can give a certain advantage, but concluded that their methods would not have the same advantage on large tasks. Reordering also considerably enlarges the search space: the genetic algorithm not only tries to find good sets of gene values, it simultaneously tries to find their "right" order as well, which is a far harder problem.

What is epistasis? In genetics, the term "epistasis" is defined as the influence of a gene on an individual's fitness depending on the value of a gene present elsewhere. Geneticists use the term in the sense of a "switching" or "masking" effect: "A gene is considered epistatic when its presence suppresses the influence of a gene at another locus. Epistatic genes are sometimes called inhibitory because of their effect on other genes, which are described as hypostatic." In GA terminology this can be put as: "The fitness of an individual depends on the positions of genes in the genotype."

What is a false optimum? One of the fundamental principles of genetic algorithms is that chromosomes included in the templates (schemata) contained in the global optimum increase in frequency.
This is especially true for short, low-order templates, known as building blocks. Eventually these optimal templates meet at crossover, and a globally optimal chromosome is created. But if templates not contained in the global optimum increase in frequency faster than the others, the genetic algorithm is misled and moves away from the global optimum instead of toward it. This phenomenon is known as a false optimum (deception). A false optimum is a particular case of epistasis, and it has been analyzed in depth by Goldberg and others; it is directly linked with the harmful influence of epistasis in genetic algorithms. Statistically, a template will increase in frequency in the population if its fitness is higher than the average fitness of all templates in the population. A problem is labeled a false-optimum problem if the average fitness of the templates not contained in the global optimum is greater than the average fitness of the others. False-optimum problems are hard. However, Grefenstette wittily demonstrated that they are not always hard: after the first generation, a genetic algorithm no longer holds an objective sample of points in the search space, and therefore it cannot objectively evaluate the global average fitness of a template; it can only obtain a biased estimate of template fitness. Sometimes this bias helps the genetic algorithm converge (even on a problem that would otherwise present a strong false optimum).

What are inbreeding, outbreeding, selective choice and panmixia? There are several approaches to selecting a parent pair. The simplest of them is panmixia: a random choice of a parent pair, where both individuals making up the pair are randomly selected from the entire population. In this case, any individual can become a member of several pairs. Despite its simplicity, this approach is universal for solving various tasks.
However, it is rather sensitive to the population size: the efficiency of an algorithm implementing this approach decreases as the population grows. With the selective method of choosing individuals for a parent pair, only those individuals whose fitness is above the population average can become "parents", with equal probability among such candidates. This approach gives faster convergence of the algorithm. However, because of that fast convergence, selective choice of parent pairs is unsuitable when several extrema must be found: on such tasks the algorithm quickly settles on one of the solutions. Furthermore, for some classes of problems with a complicated fitness landscape, fast convergence can turn into premature convergence to a quasi-optimal solution. This drawback can be partially compensated by using a suitable selection mechanism that "slows down" the overly fast convergence of the algorithm. Inbreeding is a method in which the first member of the pair is chosen at random and the second is chosen to be as similar as possible to the first. A notion of similarity of individuals is also used in outbreeding; there, however, pairs are formed from individuals that are as distant as possible. The last two methods influence the behavior of a genetic algorithm in different ways. Inbreeding can be characterized by a tendency to concentrate the search in local nodes, which in effect splits the population into separate local groups around suspected-extremum areas of the landscape. Outbreeding, on the contrary, aims to prevent the algorithm from converging to already found solutions by forcing it to look through new, unexplored areas.

Dynamic self-organization of GA parameters. Frequently, the parameters of a genetic algorithm and the specific genetic operators are chosen by intuition, since there is no objective evidence that particular settings and operators are more advantageous.
However, we should not forget that the very point of GA lies in its dynamics, in the "softness" of the algorithm and of the computations it performs. So why not let the algorithm configure itself while solving the task and adapt to it? The easiest way is to organize the adaptation of the applied operators. For this purpose, we build several (the more the better) different operators of selection (elitist, random, roulette, ...), crossover (one-point, two-point, uniform, ...) and mutation (random single-element, absolute, ...) into the algorithm, and assign equal application probabilities to each operator. On every cycle of the algorithm we select one operator from each group (selection, crossover, mutation) according to the probability distribution, and we mark in the obtained individual which operator produced it. Then, if the new probability distribution is computed from the information contained in the population (the probability of applying an operator is proportional to the number of individuals in the population produced by that operator), the genetic algorithm acquires a mechanism of dynamic self-adaptation. This approach provides yet another advantage: there is no longer any need to worry about which random number generator is used (linear, exponential, etc.), since the algorithm dynamically changes the distribution itself.

Migration and artificial selection method. Unlike an ordinary GA, macro-evolution is performed here: not one but several populations are created, and the genetic search unites parents from different populations.

Interrupted (punctuated) equilibrium method. The method is based on the paleontological theory of punctuated equilibrium, which describes rapid evolution through volcanic and other changes of the earth's crust. To apply this method in technical tasks, it is advised to randomly shuffle the individuals in the population after every generation and then form new current generations.
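The operator self-adaptation scheme described above — re-estimating each operator's application probability from how many surviving individuals it produced — can be sketched as follows. This is an illustrative Python fragment with our own names (`adapt_probabilities`, the smoothing term is our addition so that no operator's probability ever drops to zero):

```python
from collections import Counter

def adapt_probabilities(population_ops, smoothing=1.0):
    """Re-estimate operator probabilities from the current population:
    p(op) is proportional to the number of surviving individuals
    produced by op, with add-one smoothing so no operator dies out."""
    counts = Counter(population_ops)
    ops = sorted(set(population_ops))
    total = len(population_ops) + smoothing * len(ops)
    return {op: (counts[op] + smoothing) / total for op in ops}

# Each individual in the population is tagged by the crossover
# operator that produced it
probs = adapt_probabilities(["one_point", "two_point", "one_point",
                             "uniform", "one_point"])
```

The resulting distribution is then used when drawing an operator for the next cycle, so operators that keep producing surviving individuals are applied more and more often.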
As in wildlife, both unconscious selection of parent pairs and artificial selection of parent pairs can be applied here. The results of both kinds of selection should then be randomly mixed; and instead of keeping the population size constant, it should be managed depending on the presence of the best individuals. Such a modification of the punctuated equilibrium method may shrink unpromising populations and enlarge populations containing the best individuals. The punctuated equilibrium method is a powerful stress method of changing the environment, used to escape local pits efficiently.

4. Advantages of GA

There are two main advantages of genetic algorithms over classic optimization methods.

1. GA imposes no significant mathematical requirements on the types of the target functions and constraints. The researcher does not have to simplify the model of the object, losing its adequacy, in order to artificially ensure the applicability of the available mathematical methods. The most diverse target functions and constraint types (linear and nonlinear), defined on discrete, continuous and mixed universal sets, can be used.

2. When using classic step-by-step methods, the global optimum can be found only when the problem has the convexity property. The evolutionary operations of genetic algorithms, on the other hand, allow the global optimum to be searched for efficiently.

Less fundamental but still important advantages of GA: a large number of free parameters that allow heuristics to be built in efficiently; efficient parallelization; performance at least as good as random search; and the connection with biology, which gives some hope for exceptional efficiency of GA.

5. Disadvantages of GA

- Multiple free parameters that turn "working with GA" into "playing with GA".
- Lack of a proof of convergence.
- On simple target functions (smooth, single extremum, etc.), genetic methods always lose in speed to simple search algorithms.

6. Experimental part

All experiments are performed in the R 3.2.4 language environment.
We use the data set for training the model and most of the functions from the previous article. The CRAN repository has a task view section containing a large number of dedicated packages for optimization and mathematical programming tasks. We will apply several different GA and EM methods to solve the above-mentioned tasks. There is only one requirement for the models participating in the optimization process — speed. It is unwise to use methods that train for hundreds of seconds: considering that every generation will contain at least 100 individuals, and the population will pass through several epochs (from one to dozens), the optimization process would stretch over an unacceptable time. In the previous articles we used two types of deep networks (with SAE and RBM initialization); both showed high speed and may well be used for genetic optimization. We are going to solve two optimization tasks: the search for the best combination of predictors, and the selection of optimal indicator parameters. To learn a new algorithm, we will apply XGBoost (Extreme Gradient Boosting), which is often used for the first task (predictor selection). As the sources state, it shows very good results in classification tasks. The algorithm is available for the R, Python and Java languages; in R it is implemented in the "xgboost" package, v. 0.4-3. To solve the second task (selection of optimal indicator parameters), we will use the simplest Expert Advisor, MACDSample, and see what can be obtained with it when using genetic optimization on the fly.

6.1. Search for the best combination of predictors

To solve the optimization task it is important to define the following: the parameters to be optimized; the optimization criterion — a scalar to be maximized/minimized (there can be more than one criterion); and the target (objective, fitness) function that calculates the value of the optimization criterion.
In our case the fitness function will successively: form the initial data frame; divide it into train/test sets; train the model; test the model; calculate the optimization criterion. The optimization criterion can be a standard metric such as Accuracy, Recall, Kappa or AUC, or one provided by the developer. We will use the classification error in this capacity (the objective function will actually return 1 − error, since tabuSearch maximizes its objective). The search for the best combination of predictors will be performed with the "tabuSearch" v.1.1 package, which is an extension of the HillClimbing algorithm. The tabuSearch algorithm optimizes a binary string using a user-defined objective function; as a result, it returns the best binary configuration, i.e. the one with the highest value of the objective function. We will use this algorithm to search for the best combination of predictors.

The main function:

tabuSearch(size = 10, iters = 100, objFunc = NULL, config = NULL, neigh = size, listSize = 9, nRestarts = 10, repeatAll = 1, verbose = FALSE)

Arguments:
- size — length of the optimized binary configuration;
- iters — number of iterations in the preliminary search of the algorithm;
- objFunc — a user-supplied function that evaluates the objective for a given binary string;
- config — starting configuration;
- neigh — number of neighboring configurations checked on every iteration. By default it equals the length of the binary string; if the number is less than the string length, neighbors are selected at random;
- listSize — size of the taboo list;
- nRestarts — maximum number of restarts in the intensification stage of the search;
- repeatAll — number of repetitions of the search;
- verbose — logical; if TRUE, the name of the current algorithm stage is printed, e.g. preliminary stage, intensification stage, diversification stage.

We will write the objective function and proceed to the experiments.
ObjFun <- function(th){
  require(xgboost)
  # Exit if the binary string is all zeros
  if (sum(th) == 0) return(0)
  # Names of predictors that correspond to 1s in the binary string
  sub <- subset[th != 0]
  # Create the structure for training the model
  dtrain <- xgb.DMatrix(data = x.train[ ,sub], label = y.train)
  # Train the model
  bst = xgb.train(params = par, data = dtrain, nrounds = nround, verbose = 0)
  # Calculate forecasts on the test set
  pred <- predict(bst, x.test[ ,sub])
  # Calculate the forecast error
  err <- mean(as.numeric(pred > 0.5) != y.test)
  # Return the quality criterion
  return(1 - err)
}

For the calculations we need to prepare the data sets for training and testing the model, and also to define the model's parameters and the initial configuration for optimization. We use the same data and functions as in the previous article (EURUSD/M30, 6000 bars as of 14.02.16). Listing with comments:

#---tabuSearch----------------------
require(tabuSearch)
require(magrittr)
require(xgboost)
# Initial dataframe
dt <- form.data(n = 34, z = 50, len = 0)
# Names of all predictors in the initial set
subset <- colnames(In())
set.seed(54321, kind = "L'Ecuyer-CMRG")
# Prepare the sets for training and testing
DT <- prepareTrain(x = dt[ ,subset], y = dt$y,
                   balance = FALSE, rati = 4/5,
                   mod = "stratified", norm = FALSE,
                   meth = method)
train <- DT$train
test <- DT$test
x.train <- train[ ,subset] %>% as.matrix()
y.train <- train$y %>% as.numeric() %>% subtract(1)
x.test <- test[ ,subset] %>% as.matrix()
y.test <- test$y %>% as.numeric() %>% subtract(1)
# Initial binary vector
th <- rep(1, length(subset))
# Model parameters
par <- list(max.depth = 3, eta = 1, silent = 0,
            nthread = 2, objective = 'binary:logistic')
nround = 10
# Initial configuration
conf <- matrix(1, 1, 17)
res <- tabuSearch(size = 17, iters = 10, objFunc = ObjFun,
                  config = conf, listSize = 9, nRestarts = 1)
# Maximum value of the objective function
max.obj <- max(res$eUtilityKeep)
# The best combination (binary vector)
best.comb <-
which.max(res$eUtilityKeep) %>% res$configKeep[., ]
# The best set of predictors
best.subset <- subset[best.comb != 0]

We start the optimization with ten iterations and see what the maximum quality criterion and the predictor set are.

> system.time(res <- tabuSearch(size = 17, iters = 10,
+                  objFunc = ObjFun, config = conf,
+                  listSize = 9, nRestarts = 1))
  user  system elapsed
 36.55    4.41   23.77
> max.obj
[1] 0.8
> best.subset
 [1] "DX"     "ADX"    "oscDX"  "ar"     "tr"     "atr"
 [7] "chv"    "cmo"    "vsig"   "rsi"    "slowD"  "oscK"
[13] "signal" "oscKST"
> summary(res)
Tabu Settings
  Type                                       = binary configuration
  No of algorithm repeats                    = 1
  No of iterations at each prelim search     = 10
  Total no of iterations                     = 30
  No of unique best configurations           = 23
  Tabu list size                             = 9
  Configuration length                       = 17
  No of neighbours visited at each iteration = 17
Results:
  Highest value of objective fn    = 0.79662
  Occurs # of times                = 2
  Optimum number of variables      = c(14, 14)

The calculations took approximately 37 seconds, with a prediction accuracy of about 0.8 using 14 predictors. This is a very good quality indicator for default settings. Let's do another calculation, this time with 100 iterations.

> system.time(res <- tabuSearch(size = 17, iters = 100,
+                  objFunc = ObjFun, config = conf,
+                  listSize = 9, nRestarts = 1))
   user  system elapsed
 377.28   42.52  246.34
> max.obj
[1] 0.8042194
> best.subset
 [1] "DX"     "ar"     "atr"    "cci"    "chv"    "cmo"
 [7] "sign"   "vsig"   "slowD"  "oscK"   "SMI"    "signal"
>

We can see that increasing the number of iterations increased the calculation time proportionally, but the forecast accuracy improved only slightly. This means that further quality gains must come from tuning the model's parameters. This is not the only algorithm and package for selecting the best set of predictors with a GA: you can also use the kofnGA and FSelector packages, and the gafs() function in the "caret" package implements predictor selection using a GA.

6.2. Searching for the best parameters of a TS

1.
Input data for the design. We will use the MACDSample Expert Advisor as an example. It implements an algorithm that generates signals when the macd and signal lines cross, using a single indicator:

MACD(x, nFast = 12, nSlow = 26, nSig = 9, maType, percent = TRUE, ...)

Arguments:

- maType – type of moving average to apply;
- percent – logical; if TRUE, the difference between the fast and slow MA is returned as a percentage, otherwise as a simple difference.

The MACD function returns two variables: macd — the difference between the fast MA and the slow MA, i.e. the speed at which the distance between the fast and slow MA changes; signal — an MA of this difference. MACD is a special case of a general oscillator applied to the price; it can also be used with any univariate time series. Time periods for MACD are frequently given as 26 and 12, but the function originally used the exponential constants 0.075 and 0.15, which are closer to 25.6667 and 12.3333 periods.

So our function has 7 parameters with the following ranges:

- p1 — price to calculate on (Close, Med, Typ, WClose)
- p2 — nFast (8:21)
- p3 — nSlow (13:54)
- p4 — nSig (3:13)
- p5 — MAtypeMACD – MA type for the macd line
- p6 — MatypeSig – MA type for the signal line
- p7 — percent (TRUE, FALSE)

p5, p6 = Cs(SMA, EMA, DEMA, ZLEMA).

Trading signals can be generated in different ways:

Option 1
Buy = macd > signal
Sell = macd < signal

Option 2
Buy = diff(macd) > 0
Sell = diff(macd) <= 0

Option 3
Buy = diff(macd) > 0 & macd > 0
Sell = diff(macd) < 0 & macd < 0

This gives another optimization parameter, signal (1:3). Finally, the last parameter is the depth of the optimization history, len = 300:1000 (the number of most recent bars over which optimization is performed). In total we have 9 optimization parameters. I have deliberately inflated their number to show that almost anything can serve as a parameter (numbers, strings, etc.). The optimization criterion is the quality ratio K in points (already described in detail in my previous publications).
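As a standalone illustration of the three signal options (not part of the Expert Advisor's code), they can be sketched with TTR's MACD on a synthetic price series; "macd" and "signal" are the column names returned by TTR::MACD:

```r
library(TTR)

set.seed(42)
price <- cumsum(rnorm(500)) + 100   # synthetic random-walk price series

md <- MACD(price, nFast = 12, nSlow = 26, nSig = 9,
           maType = "EMA", percent = TRUE)
md <- na.omit(md)

# Option 1: sign of the macd/signal line difference
sig1 <- sign(md[, "macd"] - md[, "signal"])

# Option 2: sign of the macd line's slope
dmacd <- diff(md[, "macd"])
sig2  <- ifelse(dmacd > 0, 1, -1)

# Option 3: slope and sign of the macd line must agree
macd.al <- tail(md[, "macd"], length(dmacd))   # align lengths after diff()
sig3 <- ifelse(dmacd > 0 & macd.al > 0, 1,
        ifelse(dmacd < 0 & macd.al < 0, -1, 0))

table(sig1); table(sig2); table(sig3)
```

Option 3 is the most conservative: it stays flat (0) whenever the slope and the sign of the macd line disagree.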
To optimize the parameters we need to define a fitness (objective) function that calculates the quality criterion, and to choose an optimization package. Let's start with the package. We will use the reliable, fast and, most importantly, thoroughly tested "rgenoud" package. Its main restriction is that the parameters must be either all integer or all floating-point. This is a mild restriction and is easily worked around. The genoud() function combines an evolutionary search algorithm with derivative-based methods (Newton or quasi-Newton) for solving various optimization problems; it can also be used for problems for which derivatives are not defined. Furthermore, using the cluster option, the function can use several computers, processors and cores for parallel computation.

genoud(fn, nvars, max = FALSE, pop.size = 1000, max.generations = 100, wait.generations = 10, hard.generation.limit = TRUE, starting.values = NULL, Domains = NULL, default.domains = 10, solution.tolerance = 0.001, ...)

Arguments:

- fn – objective function that is minimized (or maximized if max = TRUE). The first argument of the function must be the vector of parameters over which minimization is performed. The function should return a scalar (except when lexical = TRUE).
- nvars – number of parameters to be selected for the minimized function.
- max = FALSE – maximize (TRUE) or minimize (FALSE) the objective function.
- pop.size = 1000 – size of the population, i.e. the number of individuals used to solve the optimization problem. There are a few restrictions on this value: although the population size is requested from the user, the number is automatically corrected to satisfy the relevant constraints. These constraints derive from the requirements of some GA operators. In particular, the P6 operator (simple crossover) and P8 (heuristic crossover) require an even number of individuals, i.e. they need two parents. Therefore the pop.size variable must be even; if it is not, the population is increased to satisfy this constraint.
- max.generations = 100 — maximum number of generations.
This is the maximum number of generations that genoud will run when attempting to optimize the function. It is a soft limit: it binds genoud only if hard.generation.limit is set to TRUE; otherwise two soft triggers control when genoud stops: wait.generations and gradient.check. Even though max.generations does not bind the number of generations by default, it is still important because many operators use it to adjust their behavior; in essence, many operators become less random as the number of generations approaches the max.generations limit. If the limit is exceeded and genoud decides to continue working, it automatically increases the max.generations limit.

- wait.generations = 10. If the objective function does not improve within this number of generations, genoud concludes that the optimum has been found. If the gradient.check trigger is enabled, genoud starts counting wait.generations only once the gradients are zero within solution.tolerance. The other variables controlling termination are max.generations and hard.generation.limit.
- hard.generation.limit = TRUE. This logical variable determines whether max.generations is a binding constraint for genoud. When hard.generation.limit is set to FALSE, genoud may exceed max.generations if the objective function has improved within some number of generations (determined by wait.generations), or if the gradient (determined by gradient.check) is not zero.
- starting.values = NULL — vector or matrix containing parameter values that genoud will use at startup. With this option the user can insert one or more individuals into the starting population. If a matrix is provided, its columns must be the parameters and its rows the individuals. genoud creates the remaining individuals at random.
- Domains = NULL. This is an nvars * 2 matrix.
For each parameter, the first column holds the lower bound and the second column the upper bound. No individual in genoud's starting population will be generated outside these bounds, but some operators can produce offspring that lie outside them unless the boundary.enforcement flag is enabled. If the user does not provide values for Domains, genoud sets the domains via default.domains.

- default.domains = 10. If the user does not want to provide a domain matrix, the domains can still be set with this easy-to-use scalar option: genoud creates the domain matrix by setting the lower bound of every parameter to (-1) * default.domains and the upper bound to default.domains.
- solution.tolerance = 0.001. This is the tolerance level used in genoud. Numbers that differ by no more than solution.tolerance are treated as equal. This matters particularly when evaluating wait.generations and performing gradient.check.
- gr = NULL. A function providing the gradient for the BFGS optimizer. If it is NULL, numerical gradients are used instead.
- boundary.enforcement = 0. This variable determines the degree to which genoud respects the boundary constraints of the search area. Regardless of its value, none of the individuals in the starting generation will have parameter values outside the search area. boundary.enforcement has three possible values: 0 (anything goes), 1 (partial enforcement), and 2 (no boundary violations):
0: anything goes. This option allows any operator to create individuals outside the search area. Such individuals are included in the population if their fitness values are good enough. The bounds matter only when random individuals are generated.
1: partial enforcement.
This allows operators (especially those using the derivative-based optimizer, BFGS) to go outside the bounds of the search area while creating an individual; but when an operator selects an individual, it must be within the prescribed bounds.
2: no boundary violations. No evaluations will ever be requested outside the search area. In this case boundary enforcement is also applied to the BFGS algorithm, which prevents candidates from straying outside the bounds defined by Domains. Note that this causes the L-BFGS-B algorithm to be used for the optimization. That algorithm requires all fitness values and gradients to be defined and finite for every function evaluation. If this causes an error, it is advised to use the BFGS algorithm with boundary.enforcement = 1.

- lexical = FALSE. This option enables lexical optimization: optimization over several criteria, considered sequentially in the order returned by the fitness function. The fitness function used with this option must return a numerical vector of fitness values in lexical order. The option can take the values FALSE, TRUE, or an integer equal to the number of fitness criteria returned by the fitness function.
- gradient.check = TRUE. If this variable is TRUE, genoud will not start counting wait.generations until each gradient is close to zero within solution.tolerance. This variable has no effect if the max.generations limit has been exceeded and hard.generation.limit is set to TRUE. If BFGSburnin < 0, it will be ignored when gradient.check = TRUE.
- BFGS = TRUE. This variable determines whether the quasi-Newton derivative optimizer (BFGS) is applied to the best individual at the end of every generation after the initial one. Setting it to FALSE does not mean BFGS will never be applied; in particular, if you want BFGS never to be used, the P9 operator (local-minimum crossover) must be set to zero.
- data.type.int = FALSE.
This option sets the data type of the parameters of the optimized function. If the variable is TRUE, genoud searches for a solution over the integers when optimizing the parameters. With integer parameters, genoud never uses derivative information, which implies that the BFGS optimizer is never used, i.e. the BFGS flag is set to FALSE. It also implies that the P9 operator (local-minimum crossover) is set to zero and that gradient checking (as a convergence criterion) is disabled. Regardless of how other options are set, data.type.int takes priority: if genoud is told to search over an integer parameter space, gradient information is never considered. There is no option for mixing integer and floating-point parameters. If you want to mix the two types, you can declare an integer parameter and transform its integer range into a floating-point range inside the objective function. For example, if you need a search grid from 0.1 to 1.1, tell genoud to search from 10 to 110 and then divide this parameter by 100 in the fitness function.

- hessian = FALSE. When this flag is set to TRUE, genoud returns the Hessian matrix at the solution as part of its return list. The user can use this matrix to calculate standard errors.
- unif.seed = 812821. This sets the seed of the floating-point pseudo-random number generator used by genoud. genoud uses its own internal pseudo-random number generator (a Tausworthe-Lewis-Payne generator) to allow recursive and parallel calls to genoud.
- int.seed = 53058. This sets the seed of the integer generator used by genoud. genoud uses its own internal pseudo-random number generator (a Tausworthe-Lewis-Payne generator) to allow recursive and parallel calls to genoud.
- print.level = 2. This variable controls how much genoud prints about what it is doing.
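The integer-to-floating-point trick described above can be sketched in a few lines of plain R. The helper and objective below are illustrative stand-ins, not part of rgenoud:

```r
# Suppose we want to optimize a parameter on the grid 0.10, 0.11, ..., 1.10,
# but genoud is run with data.type.int = TRUE over the integer range 10:110.
# The fitness function rescales the integer before using it.

# Hypothetical objective: a simple parabola with its maximum at p = 0.37
objective <- function(p) -(p - 0.37)^2

fitness.int <- function(param) {
  p <- param[1] / 100          # map integer 10..110 to float 0.10..1.10
  objective(p)
}

# genoud would pass integer candidates such as c(37) to fitness.int;
# internally they are evaluated on the floating-point scale:
fitness.int(c(37))             # same as objective(0.37)
```

The same pattern is used in the Expert Advisor's fitness function below, where parameter 9 (an integer in 3:10) is multiplied by 100 to obtain a history length of 300 to 1000 bars.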
There are 4 possible levels: 0 (minimal printing), 1 (normal), 2 (detailed) and 3 (debugging). If level 2 is selected, genoud prints details about the population at each generation.

- share.type = 0. If share.type equals 1, then at startup genoud checks whether the project file exists (see project.path). If the file exists, it initializes its starting population from it. This option can be used with lexical, but not with the transform option.

Operators. genoud uses 9 operators. Integer values are assigned as weights to each of these operators (P1...P9). genoud calculates the sum s = P1 + P2 + ... + P9 and assigns each operator a weight W_n = s / P_n. The number of calls to an operator usually equals c_n = W_n * pop.size.

The P6 (simple crossover) and P8 (heuristic crossover) operators require an even number of individuals to operate, in other words they require two parents. Therefore the pop.size variable and the operator set must be such that these operators receive an even number of individuals; if this does not happen, genoud automatically increases the population to meet the constraint.

Strong uniqueness checks are built into the operators to guarantee that they produce offspring different from their parents, but this does not always happen.

The evolutionary algorithm in rgenoud uses the nine operators listed below.

- P1 = 50 – Cloning. The cloning operator simply makes copies of the best trial solution in the current generation (independently of this operator, rgenoud always keeps one instance of the best trial solution). Uniform mutation, boundary mutation and non-uniform mutation each operate on a single trial solution.
- P2 = 50 – Uniform mutation. Uniform mutation changes one parameter of a trial solution to a random value uniformly distributed over the domain defined for that parameter.
- P3 = 50 – Boundary mutation.
Boundary mutation replaces one parameter with one of the bounds of its domain.
- P4 = 50 – Non-uniform mutation. Non-uniform mutation shrinks one parameter towards one of the bounds, with the amount of shrinkage decreasing as the number of generations approaches the specified maximum number of generations.
- P5 = 50 – Polytope crossover. Polytope crossover (inspired by simplex search, Gill et al. 1981, p. 94–95) computes a trial solution that is a convex combination of as many trial solutions as there are parameters.
- P6 = 50 – Simple crossover. Simple crossover computes two trial solutions from two input trial solutions by swapping parameter values between the solutions after splitting them at a randomly chosen point. This operator can be especially effective if the ordering of the parameters in each trial solution is meaningful.
- P7 = 50 – Whole non-uniform mutation. Whole non-uniform mutation performs non-uniform mutation on all of the parameters of a trial solution.
- P8 = 50 – Heuristic crossover. Heuristic crossover uses two trial solutions to produce a new solution located along the vector that starts at one of the trial solutions.
- P9 = 0 — Local-minimum crossover: BFGS. Local-minimum crossover computes a new trial solution in two steps: first, BFGS runs for a preset number of iterations starting from the input solution; then a convex combination of the input solutions is computed, and BFGS is iterated again.

Remarks. The options that most affect quality are those defining the population size (pop.size) and the number of generations the algorithm runs (max.generations, wait.generations, hard.generation.limit and gradient.check). Search performance, as expected, improves as the population size and the number of generations increase. These and other options should be tuned manually for each particular problem.
Pay particular attention to the search domains (Domains and default.domains). Linear and non-linear constraints among the parameters can be expressed by users inside their fitness functions. For example, if the sum of parameters 1 and 2 must stay below 725, this condition can be embedded into the fitness function that the user asks genoud to maximize: if ((parm1 + parm2) >= 725) {return(-99999999)}. In this example a very bad fitness value is returned to genoud if the linear constraint is violated, and genoud then tries to find parameter values that satisfy it.

Let's write our fitness function. It should calculate the quality ratio of the MACD signals:

# fitness function-------------------------
fitnes <- function(param, test = FALSE){
  require(TTR)
  require(magrittr)
  # define variables
  x <- pr[param[1]]
  nFast <- param[2]
  nSlow <- param[3]
  nSig <- param[4]
  macdType <- MaType[param[5]]
  sigType <- MaType[param[6]]
  percent <- per[param[7]]
  len <- param[9] * 100
  # linear restriction for macd
  if (nSlow <= nFast) return(-Inf)
  # calculate macd
  md <- MACD(x = x, nFast = nFast, nSlow = nSlow,
             nSig = nSig, percent = percent,
             maType = list(list(macdType),
                           list(macdType),
                           list(sigType)))
  # calculate signals and shift one bar to the right
  sig <- signal(md, param[8]) %>% Lag()
  # calculate the balance over a history of length len
  bal <- cumsum(tail(sig, len) * tail(price[ ,'CO'], len))
  if (test) {bal <<- bal}
  # calculate the quality ratio (rounded down to an integer)
  K <- ((tail(bal, 1)/length(bal)) * 10 ^ Dig) %>% floor()
  # return the obtained optimization criterion
  return(unname(K))
}

Below is the listing that computes all the variables and functions:

require(Hmisc)
# Types of averages = 4 ---------------------------------------------
MaType <- Cs(SMA, EMA, DEMA, ZLEMA)
require(dplyr)
# Types of prices = 4 -----------------------------------------------
pr <- transmute(as.data.frame(price), Close = Close, Med = Med,
                Typ = (High + Low + Close)/3,
                WClose = (High + Low + 2*Close)/4)
# how to calculate?
per <- c(TRUE, FALSE)
# Types of signals = 3 --------------------------
signal <- function(x, type){
  x <- na.omit(x)
  dx <- diff(x[ ,1]) %>% na.omit()
  x <- tail(x, length(dx))
  switch(type,
         (x[ ,1] - x[ ,2]) %>% sign(),
         sign(dx),
         ifelse(sign(dx) == 1 & sign(x[ ,1]) == 1, 1,
                ifelse(sign(dx) == -1 & sign(x[ ,1]) == -1, -1, 0))
  )
}
# initial configuration---------------------------
par <- c(2, 12, 26, 9, 2, 1, 1, 3, 5)
# search area--------------------------------------
dom <- matrix(c(1, 4,    # price types
                8, 21,   # fast MA period
                13, 54,  # slow MA period
                3, 13,   # signal MA period
                1, 4,    # MA type for fast and slow
                1, 4,    # MA type for signal
                1, 2,    # percent type
                1, 3,    # signal option
                3, 10),  # history length [300:1000]
              ncol = 2, byrow = TRUE)
# create a cluster from the available processor cores
puskCluster <- function(){
  library(doParallel)
  library(foreach)
  cores <- detectCores()
  cl <- makePSOCKcluster(cores)
  registerDoParallel(cl)
  #clusterSetRNGStream(cl)
  return(cl)
}

Determine the quality ratio with the initial (usually default) parameters:

> K <- fitnes(par, test = TRUE)
> K
[1] 0
> plot(bal, t="l")

Fig. 1. Balance with default parameters

The result is very poor. To compare calculation speed, we will run the optimization on a single core and on a cluster of two processor cores.

On one core:

pr.max <- genoud(fitnes, nvars = 9, max = TRUE,
                 pop.size = 500, max.generation = 300,
                 wait.generation = 50,
                 hard.generation.limit = FALSE,
                 starting.values = par, Domains = dom,
                 boundary.enforcement = 1,
                 data.type.int = TRUE,
                 solution.tolerance = 0.01,
                 cluster = FALSE,
                 print.level = 2)
'wait.generations' limit reached.
No significant improvement in 50 generations.
Solution Fitness Value: 1.600000e+01

Parameters at the Solution:

 X[ 1] : 1.000000e+00
 X[ 2] : 1.400000e+01
 X[ 3] : 2.600000e+01
 X[ 4] : 8.000000e+00
 X[ 5] : 4.000000e+00
 X[ 6] : 1.000000e+00
 X[ 7] : 1.000000e+00
 X[ 8] : 1.000000e+00
 X[ 9] : 4.000000e+00

Solution Found Generation 5
Number of Generations Run 56

Thu Mar 24 13:06:29 2016
Total run time : 0 hours 8 minutes and 13 seconds

Optimal parameters (genotype):

> pr.max$par
[1]  1 14 26  8  4  1  1  1  4

Decoded (phenotype):

- price type pr[ ,1] = Close
- nFast = 14
- nSlow = 26
- nSig = 8
- macdType = ZLEMA
- sigType = SMA
- percent = TRUE
- signal = crossing of the macd and signal lines
- history length = 400 bars.

Let's see what the balance line looks like with the optimal parameters. To do so, we run the fitness function with these parameters and the test = TRUE option:

> K.opt <- fitnes(pr.max$par, test = TRUE)
> K.opt
[1] 16
> plot(bal, t="l")

Fig. 2. Balance with optimal parameters

This is an acceptable result that an Expert Advisor can work with.

We will now run the same calculation on a cluster of two cores:

# start the cluster
cl <- puskCluster()
# maximize the fitness function
# send the necessary variables and functions to every core in the cluster
clusterExport(cl, list("price", "pr", "MaType", "par", "dom", "signal",
                       "fitnes", "Lag", "Dig", "per"))
pr.max <- genoud(fitnes, nvars = 9, max = TRUE,
                 pop.size = 500, max.generation = 300,
                 wait.generation = 50,
                 hard.generation.limit = FALSE,
                 starting.values = par, Domains = dom,
                 boundary.enforcement = 1,
                 data.type.int = TRUE,
                 solution.tolerance = 0.01,
                 cluster = cl,
                 print.level = 2) # only for experiments.
                                  # Set to 0 in an EA
# stop the cluster
stopCluster(cl)
'wait.generations' limit reached.
No significant improvement in 50 generations.
Solution Fitness Value: 1.300000e+01

Parameters at the Solution:

 X[ 1] : 1.000000e+00
 X[ 2] : 1.900000e+01
 X[ 3] : 2.000000e+01
 X[ 4] : 3.000000e+00
 X[ 5] : 1.000000e+00
 X[ 6] : 2.000000e+00
 X[ 7] : 1.000000e+00
 X[ 8] : 2.000000e+00
 X[ 9] : 4.000000e+00

Solution Found Generation 10
Number of Generations Run 61

Thu Mar 24 13:40:08 2016
Total run time : 0 hours 3 minutes and 34 seconds

The run time is much better, but the quality is slightly lower. Even for such a simple task it is important to "play around" with the parameters. Let's try the simplest configuration:

pr.max <- genoud(fitnes, nvars = 9, max = TRUE,
                 pop.size = 500, max.generation = 100,
                 wait.generation = 10,
                 hard.generation.limit = TRUE,
                 starting.values = par, Domains = dom,
                 boundary.enforcement = 0,
                 data.type.int = TRUE,
                 solution.tolerance = 0.01,
                 cluster = FALSE,
                 print.level = 2)
'wait.generations' limit reached.
No significant improvement in 10 generations.

Solution Fitness Value: 1.500000e+01

Parameters at the Solution:

 X[ 1] : 3.000000e+00
 X[ 2] : 1.100000e+01
 X[ 3] : 1.300000e+01
 X[ 4] : 3.000000e+00
 X[ 5] : 1.000000e+00
 X[ 6] : 3.000000e+00
 X[ 7] : 2.000000e+00
 X[ 8] : 1.000000e+00
 X[ 9] : 4.000000e+00

Solution Found Generation 3
Number of Generations Run 14

Thu Mar 24 13:54:06 2016
Total run time : 0 hours 2 minutes and 32 seconds

This is a good result. And what about the balance?

> K
[1] 15
> plot(bal, t="l")

Fig. 3. Balance with optimal parameters

A very decent result in a reasonable time.

Let's conduct a few experiments to compare genetic algorithms with other evolutionary algorithms. First we will test SOMA (Self-Organising Migrating Algorithm), implemented in the "soma" package. This general-purpose stochastic optimization algorithm is similar to a genetic algorithm, although it is based on the concept of a series of "migrations" by a fixed set of individuals rather than on the development of successive generations.
It is robust to local minima and can be applied to any cost-minimization problem with a bounded parameter space.

The main function:

soma(costFunction, bounds, options = list(), strategy = "all2one", ...)

Details

There are several options controlling the optimization and its termination criteria. The default values used here are those recommended by the author (Zelinka, 2004).

- pathLength: the distance towards the leader that individuals may migrate. A value of 1 corresponds to the leader's position, and a value greater than 1 (recommended) allows some overshooting. The default is 3.
- stepLength: the granularity at which potential steps are evaluated. It is recommended that the path length not be an integer multiple of this value. The default is 0.11.
- perturbationChance: the probability that a given parameter is changed on any given step. The default is 0.1.
- minAbsoluteSep: the smallest absolute difference between the maximum and minimum values of the cost function. If the difference falls below this minimum, the algorithm terminates. The default is 0, meaning this termination criterion is never satisfied.
- minRelativeSep: the smallest relative difference between the maximum and minimum values of the cost function. If the difference falls below this minimum, the algorithm terminates. The default is 0.001.
- nMigrations: the maximum number of migrations before termination. The default is 20.
- populationSize: the number of individuals in the population. It is recommended that this value be somewhat larger than the number of parameters being optimized, and it should not be less than 2. The default is 10.

Since the algorithm performs minimization only, we revise our fitness function so that it returns the value with the opposite sign, and start the optimization.
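To see the soma() interface in isolation before running it on the trading fitness function, here is a minimal sketch (assuming the "soma" package is installed; the cost function is a stand-in) that minimizes a simple sphere function whose optimum is at the origin:

```r
library(soma)

# Cost function to minimize: sum of squares, minimum 0 at c(0, 0, 0)
sphere <- function(x) sum(x^2)

set.seed(1)
res <- soma(sphere,
            bounds = list(min = rep(-5, 3), max = rep(5, 3)),
            options = list(nMigrations = 20))

# Parameters of the best individual (the "leader") and its cost
best <- res$population[, res$leader]
best
res$cost[res$leader]
```

The result is read the same way as in the trading experiment below: `res$leader` indexes the best column of `res$population`, and `res$cost` holds each individual's cost.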
require(soma)
x <- soma(fitnes, bounds = list(min = c(1,8,13,3,1,1,1,1,3),
                                max = c(4,21,54,13,4,4,2,3,10)),
          options = list(minAbsoluteSep = 3,
                         minRelativeSep = -1,
                         nMigrations = 20,
                         populationSize = 20),
          opp = TRUE)
Output level is not set; defaulting to "Info"
* INFO: Starting SOMA optimisation
* INFO: Relative cost separation (-2.14) is below threshold (-1) - stopping
* INFO: Leader is #7, with cost -11

The best parameters:

> x$population[ ,7]
[1]  1.532332 15.391757 37.348099  9.860676  1.918848
[6]  2.222211  1.002087  1.182209  3.288627

Rounded down:

> x$population[ ,7] %>% floor
[1]  1 15 37  9  1  2  1  1  3

The best value of the fitness function is 11. This is acceptable for practical use, but there is room for improvement. The algorithm is fast but unstable in its results and requires fine tuning.

Generalized Simulated Annealing Function

This algorithm is implemented in the "GenSA" package. The function searches for the global minimum of a complex non-linear objective function with a large number of optima.

GenSA(par, fn, lower, upper, control = list(), ...)

Arguments:

- par — initial values of the components to be optimized. NULL by default, in which case default values are generated automatically.
- fn — the function to be minimized. A vector of parameters is passed to it for minimization, and it should return a scalar result.
- lower — vector of length length(par). Lower bounds of the components.
- upper — vector of length length(par). Upper bounds of the components.
- ... — allows the user to pass additional arguments to fn.
- control — control argument. A list that can be used to control the behavior of the algorithm:
- maxit — integer. Maximum number of iterations of the algorithm.
- threshold.stop — numerical. The program terminates once the objective function reaches the expected value threshold.stop. The default is NULL.
- nb.stop.improvement — integer. The program terminates if there is no improvement over nb.stop.improvement steps.
- smooth — logical. TRUE when the objective function is smooth, or differentiable almost everywhere in the parameter space; FALSE otherwise. The default is TRUE.
- temperature — numerical. Initial value of the temperature.
- simple.function — logical. TRUE means the objective function has only a few local minima. The default is FALSE, meaning the objective function is complicated, with many local minima.
- trace.mat — logical. TRUE by default, meaning the tracing matrix will be available in the value returned by the GenSA call.

The control components default to values suitable for a complex optimization task. For an ordinary optimization task of average complexity, GenSA can find a reasonable solution quickly, so it is advisable to let GenSA terminate earlier: by setting threshold.stop if threshold.stop is the expected function value; or by setting max.time if the user simply wants to run GenSA for max.time seconds; or by setting max.call if the user simply wants to run GenSA within max.call function calls. For very complex optimization tasks the user should increase maxit and temperature.

Let's run the optimization, limiting the maximum execution time to 60 seconds:

require(GenSA)
pr.max <- GenSA(par, fitnes, lower = c(1,8,13,3,1,1,1,1,3),
                upper = c(4,21,54,13,4,4,2,3,10),
                control = list(verbose = TRUE,
                               simple.function = TRUE,
                               max.time = 60),
                opp = TRUE)

The value of the fitness function and the optimal parameters:

> pr.max$value * (-1)
[1] 16
> par1 <- pr.max$par
> par1
[1]  1.789901 14.992866 43.854988  5.714345  1.843307
[6]  1.979723  1.324855  2.639683  3.166084

Rounded down:

> par1 <- pr.max$par %>% floor
[1]  1 14 43  5  1  1  1  2  3

Calculate the value of the fitness function with these parameters and look at the balance line:

> f1 <- fitnes(par1, test = TRUE)
> plot(-1 * bal, t="l")

Fig. 4. Balance with optimal parameters

The quality indicators are at a good level, and the calculations are surprisingly fast.
These and many similar algorithms (packages dfoptim, nloptr, CEoptim, DEoptim, RcppDE, etc.) optimize a function by a single criterion. For multi-criteria optimization, the mco package is intended.

7. Ways and methods of improving qualitative characteristics

The experiments we conducted showed the efficiency of genetic algorithms. To improve the quality indicators further, additional research is recommended using:

- multi-criteria optimization, for example optimizing the quality ratio and the maximum drawdown simultaneously. The "mco" package implements this capability;
- dynamic self-organization of the GA parameters. A possible package for this is "GA", which provides a wide range of selection, crossover and mutation operators;
- testing the possibility of applying genetic programming in the trading system.

Conclusion

We have considered the basic principles underlying evolutionary algorithms, their different types and features. Using the simple MACDSample Expert Advisor, our experiments have shown that optimizing the parameters of even such an elementary TS has a considerable positive effect. The short optimization time and the simplicity of programming make it possible to run the optimization while the EA is operating, without entering the market. And the absence of strict restrictions on the type of optimization parameters makes it possible to implement the most diverse kinds of optimization at various stages of the EA's operation. The most important part of the work is writing the fitness function correctly. I hope this article helps you see that this is not difficult, so that you can attempt to optimize your own Expert Advisors yourself.

Translated from Russian by MetaQuotes Software Corp.
Original article:
https://www.mql5.com/en/articles/2225
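The multi-criteria direction recommended above (e.g., maximizing the quality ratio while minimizing drawdown, which packages like mco handle via Pareto-based methods) boils down to keeping the set of non-dominated solutions. A minimal Python sketch of Pareto dominance follows; the objective pairs in the example are hypothetical, and this is a conceptual illustration, not the mco package's API.

```python
def dominates(a, b):
    # a Pareto-dominates b if a is at least as good in every objective
    # and strictly better in at least one (all objectives maximized here)
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    # keep only the non-dominated points, preserving input order
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, with hypothetical (quality ratio, negated drawdown) pairs, `pareto_front([(16, -2), (14, -1), (12, -3)])` keeps (16, -2) and (14, -1) but discards (12, -3), which is worse on both criteria.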
CC-MAIN-2016-40
en
refinedweb
seesaw-clj
Discussion of the Seesaw UI toolkit for Clojure. Google Groups

Cecil Westerhof 2016-07-19T10:18:57Z
Displaying a scaled down picture and drawing a rectangle on it
At the moment I am using ImageMagick on the command line with trial and error to crop the right part of the photo. For example: But this is a ‘bit’ of work. So I was thinking about writing a Clojure program to do the cropping for me. That would save

James Elliott 2016-07-18T05:07:04Z
Confusing doc string
I can't tell from reading this if id-of returns a string or a keyword. The first paragraph says string, the second says keyword. I am going to have to try it and see, but it would be nice to be spared that step in the future. :) seesaw.core/id-of [w] Returns the id of the given widget if the

James Elliott 2016-07-18T04:59:00Z
Need to rebuild documentation pages?
I was tripped up when trying to follow the example for a button-group listener, because the documentation at had the following: (listen bg :selection (fn [e] (if-let [s (selection e)] (println

胡傲果 2016-07-12T03:11:15Z
where to see config! options
I know config! could be used to set :text :background :size; is there a doc for these options? If I want to find an option to config, where do I go to find it?

James Elliott 2016-05-24T22:59:24Z
How can I make a button that shows a popup menu when you click it?
I have contextual-menu style popups working fine, but I would like to give new users a visual indicator that the contextual menu exists by having a button with a gear on it which, when clicked without modifier keys, brings up the same set of choices. But I am having trouble figuring out how to

James Elliott 2016-05-18T17:20:30Z
Using value with repeating structures
Hello, everyone, I just started using seesaw this past week in order to put together a user interface that will be helping coordinate visuals for a DJ at a music festival this weekend, and it has been delightful.
The current state of my code can be found at

Andrew Dabrowski 2016-05-15T21:22:44Z
Widgets inside a canvas?
Is it possible to place widgets at arbitrary positions inside a canvas? Apparently that can be done in Java, but I haven't come across any examples in seesaw. I tried a paint function like (fn [c g] (.add c button '(x y))) and although it didn't produce an error, it also didn't display the

Andreas Olsson 2016-03-21T20:35:08Z
Load image (maybe a jpg) into an image-buffer
Trying to import an image into an image-buffer but having trouble solving it. Here's a try: (def pic (seesaw.graphics/buffered-image 200 200)) (def dopic (.imageio.ImageIO.read pic (str (System/getProperty "user.dir") "\\resources\\grumpy.jpg"))) Am I totally off?

Andreas Olsson 2016-03-21T09:58:08Z
image-buffer problem... .setStroke??
Trying out the buffered-image, but I can't get line 17 right. How do I use it?

Amir Teymuri 2016-01-23T10:39:12Z
Understanding (show-options) and (show-events)
Often when I call (show-options) on a function it prints mostly the very same options as for other functions. For example, calling (show-options (border-panel)) and (show-options (label)) and (show-options (toolbar)) all include a :text option, of which only (label) supports :text, from

Amir Teymuri 2016-01-20T11:30:15Z
No printing in the REPL
In the tutorial there is the chapter on the listbox; I have tried to print out the selections, but it doesn't work for me. Maybe someone could point out what I am doing wrong and why I don't get anything printed out in the REPL? (def f (frame :title "sandiego")) (def lb (listbox :model [:d

Amir Teymuri 2016-01-19T23:12:13Z
Button event-handler
Adding an event-handler to a button, the actions are divided between the right and left buttons of the mouse. I was doing something like this: *(def f (frame :title "san-diego"))* *(def btn (button :text "btn" :font "monospaced-bold-40"))* *(listen btn :mouse-pressed #(config!
% :foreground :orange)*

Amir Teymuri 2016-01-19T09:17:06Z
Invoking :font option on (flow-panel)
According to (show-options (flow-panel)), (flow-panel) does support the :font option. How is :font to be invoked on flow-panel? This does not work: (config! my-frame :content (flow-panel :items ["FILE:" (text "here comes the TEXT")] :font "ARIAL-BOLD-100" :background :red))

Amir Teymuri 2016-01-16T14:27:15Z
Running seesaw and overtone libraries together
Hello, I want to use the overtone and seesaw namespaces in one project. However when I load them there seems to be a function named (select) which exists in both seesaw.core and overtone.core ((seesaw.core/select) (overtone.core/select)), why I can't

DB Conrado 2015-12-13T16:42:17Z
Beginning by doing some exercises from Deitel's book
Sup guys! I'm new to the community and also to the Clojure language itself. So, I began my learning by doing some exercises from Deitel's Java How to Program book, 6th ed. You can see what I've done at: Hope it helps other beginners like me

[email protected] 2015-11-29T18:25:07Z
setting gradient background color on button
I'm trying to change the background color for a button. Since the default theme uses gradients I'd like to paint it with gradients, too. Unfortunately the following gives an error: This works: (do (def my-button (button)) (config! my-button :background :red)) This doesn't work: (do (def

[email protected] 2015-11-29T14:25:12Z
again (please help!): center-align vertical labels
Hi, I'm still not getting anywhere trying to center-align labels in a vertical panel. This is what I have tried: (let [f (frame :content (vertical-panel :items [(label :text "One" :halign :left) (label :text "Four" :halign :center)

[email protected] 2015-11-23T09:45:55Z
center-align widgets in vertical-panel
Hi, I try to get a slider and a label above and below it to center-align.
I tried the following without success: (-> (frame :title "aligntest" :content (vertical-panel :items [(label :id "lbl01" :text "0" :halign :center

[email protected] 2015-11-22T18:33:52Z
JLayeredPane and JScrollBar?
Hi, is there support for JLayeredPane and JScrollBar in seesaw, or, if not, how would I add it?

[email protected] 2015-11-22T15:16:40Z
how to run a custom function when frame gets closed
Hi, is there a way to run a function when closing a frame in order to do some cleanup?

dark-h 2015-11-05T23:52:11Z
having trouble with seesaw drag and drop (dnd)
Hi all, I am a Clojure newbie so please bear with me: I am trying to develop a simple gui using seesaw whereby I want to drag and drop swing components (JButtons) from one container to another. But for some reason I can't make it work. I am pretty sure there is something fundamentally wrong but

LAWRENCE 2015-09-08T23:41:46Z
problem with .contains in Rectangle2D - seesaw.graphics rect
Hi, I am using a canvas and populating it with 32 sq rectangles. When I mouse click on a rectangle I want to find out if the rectangle I just clicked on is one in a list of rectangles I already have. In other words, is the mouse click location located in the list? The problem is I don't seem to

Andreas Olsson 2015-08-17T13:02:10Z
Newbie! Grid-panel, placement of widgets
This might be a stupid question but I'm coming from pythonWorld. Is it possible to decide where the widgets go, like ":column 1 :row 1"?

rNewCd 2015-07-27T23:45:37Z
Problem with seesaw in the repl
I am following this tutorial but I can't get seesaw working in the repl. I am using Clojure 1.6, and also had problems running lein repl, which you can see HERE

ilukinov 2015-07-22T13:12:28Z
How to make a widget blink?
Hello, I can't figure out how to make my field blink. I've tried this: (defn blink [] (config! (select root [:#my-input-id]) :background "#ff0000") (Thread/sleep 200) (config!
(select root [:#my-input-id]) :background "#00ff00")) And it seems like this will sleep and then set the last color

Austin Pocus 2015-07-09T00:29:38Z
Key events not firing
I'm trying to capture the :key-pressed and :key-released events with a (listen) call on the frame, but the events don't seem to be firing. Here's the code: (let [f (frame :title "Ainur" :on-close :exit :size [1024 :by 768] :content (border-pa

Jose Comesaña 2015-06-26T09:58:46Z
Does seesaw work with java 6 and java 7?
It seems it doesn't. Maybe I am doing something wrong?

qsys 2015-06-11T09:32:44Z
text input delay
Something that's pretty useful in some situations is an input component which performs an action if the value is changed, but only after some time. For example, text input with hints or suggestions, where the suggestions come only after x milliseconds, so that not on every keypress a service is

qsys 2015-06-10T13:22:04Z
incanter update chart - seesaw bindings
I'm using seesaw and incanter for a standalone application. The aim is to show a candle-stick graph, depending on the user input. Updates of the graph (and some labels and so forth) are done by using seesaw bindings. However, when I try to update the chart, I get an IllegalArgumentException:

Lawrence Krubner 2015-06-09T17:26:27Z
Avoid RejectedExecutionException in lein
About this: (defn -main [& args] (when-not (first args) (println "Usage: gaidica ") (System/exit 1)) (reset! api-key (first args)) (invoke-later (-> (make-frame) add-behaviors show!)) ; Avoid RejectedExecutionException in lein :( @(promise))

Enrique Manjavacas 2015-06-05T10:20:52Z
pack-all-columns in table-x
Hi, I am using the swingx table-x function and it displays nicely and all, but I was wondering what is the best way to programmatically call pack-all-columns when starting the application. I couldn't find any information about this so far. Thanks! Enrique

Corey Williams 2015-05-23T19:19:24Z
Scrollable canvas?
I'm trying to get an image set up so that you can scroll it. I've tried: (defn main-window [img] (let [scroll (scrollable (canvas :size [(.getWidth img) :by 400] :paint (image-painter img)))] (scroll! scroll :to :bottom) (frame :title "Main

Peter Marshall 2015-04-22T13:41:58Z
table.clj [seesaw "1.4.5"]
Hi, in table.clj line 237 the table model is updated with a column index of -1. This calls setValueAt on DefaultTableModel, which throws an IndexOutOfBoundsException when the column or row is out of bounds due to the underlying vector. I am seeing this from time to time, but TBH I'm struggling

Mike Holly 2015-04-14T18:26:17Z
Infinite loop
Hi there, I'm working on a project for fun which generates an image using genetic algorithms. Anyway, I'm wondering how best to structure the code. Basically I need to continuously augment and evaluate the "evolved" image. I know I have to use a separate thread and the invoke-later macro...

Alexandr 2015-03-11T12:59:02Z
Should I learn Seesaw or Swing
Hello everybody, I am new to Clojure and I have the task to build a GUI to perform some experiments. There should be several text fields to enter parameters and a button to start the experiment. The most important part is to plot graphs and plots showing performance while calculations are

Cecil Westerhof 2015-03-01T07:17:59Z
Disabling close on a showConfirmDialog
I have the following code: (JOptionPane/showConfirmDialog nil "Message" "Title" JOptionPane/YES_NO_CANCEL_OPTION) Works fine, but I would like to disable the close button. Is this

Cecil Westerhof 2015-02-28T21:35:01Z
:mnemonic does not work with button
I have the following code: (def adjust-dlg (dialog :title "Adjust Quotes" :on-close :nothing :resizable? false)) (def adjust-panel (JPanel.
(GridBagLayout.))) (def ^JTextField from-str (text :columns 25)) (def ^JTextField to-str (text :columns 25)) (grid-bag-layout

Cecil Westerhof 2015-02-27T18:44:18Z
Invoke-later
As I understood it, you need to use invoke-later for long running tasks to keep the GUI responsive. When not using it, the button I clicked keeps being selected as long as the command is running. (And the GUI does not show other actions.) When I use invoke-later the button returns almost immediately

Cecil Westerhof 2015-02-27T16:09:50Z
Retrieve text in listen function
I have: (text :columns 40 :listen [ :action (fn [e] (…)) ]) Is it possible to retrieve the text of the JTextField in the listener function? -- Cecil Westerhof

Cecil Westerhof 2015-02-27T15:54:07Z
What is the difference between action and action-performed
With text you have the events action and action-performed. What is the difference between those two? Because it looks like both are triggered when you press enter. -- Cecil Westerhof

Cecil Westerhof 2015-02-27T12:42:18Z
Why does show! move a frame after a hide
I have the following code: (def f (-> (frame :title "Hello", :content "Hello, Seesaw", :on-close :hide) pack!
show!)) When I hide the frame and do a show!, the frame is placed at the place it was originally

Cecil Westerhof 2015-02-27T07:42:53Z
Listen in a button
Looking at (doc button) I use: (button :text "Random Quotes" :listen [:mouse-clicked #(alert "NEXT!")]) But when I click on the button I get: Exception in thread "AWT-EventQueue-0" clojure.lang.ArityException: Wrong number of args (1) passed to: core/eval7049/fn--7050

Cecil Westerhof 2015-02-26T16:18:04Z
Why is the frame 4 times as high as necessary
I have the following code: (-> (frame :content (scrollable (table :model (seesaw.table/table-model :columns [{:key :name, :text "Name"} {:key :likes, :text "Likes"}] :rows

Cecil Westerhof 2015-02-26T10:11:34Z
Where to put scrollable
I have the following function: (defn show-random-quotes ( [] (show-random-quotes 10)) ( [nr] (let [f (frame :size [1000 :by 800] :title "Random Quotes") html-start (str "<html><table border='1' style='width:100%'><tr>"

Cecil Westerhof 2015-02-25T19:51:21Z
How to convert this Java code to Clojure/Seesaw
I am relatively new to Clojure and Swing. In some Java code there is: infoTableFrame = new JFrame(title); SwingUtilities.invokeLater(new Runnable() { public void run() { Document doc; HTMLEditorKit

Adam Matic 2015-01-31T14:30:44Z
(listen (select panel [:JRadioButton])) only registers 32 listeners?
Hi, I have a bunch of radio buttons on a panel, all with different ids. The select function returns a sequence of all of them, but apparently the listen function does not register more than 32 listener functions. Selecting any of the buttons changes text in a label. Is this a bug or am I doing

Mihail Ivanchev 2015-01-16T19:29:46Z
djnativeswing-clj: DJ Native Swing wrapper now available for Seesaw
Hello everyone, I wanted to notify you, the Seesaw users, that I just released the first version of a Clojure wrapper of DJ Native Swing -- a great Java library providing Swing widgets for native components.
It's fully Seesaw compatible and it's available here:

Adam Matic 2015-01-11T17:47:12Z
seesaw with jgraphx (mxGraph)
Hi, I'm trying to get some basic functionality of the jgraphx Java library, using seesaw in Clojure, but it doesn't seem to work. I'm new to Clojure and have some experience with Java, though not very much with Swing. I found a repl session code that does not

daveray 2015-01-05T04:55:02Z
1.4.5 release
Hi, just a quick note that I've pushed seesaw 1.4.5 to clojars with several bug fixes and small improvements from the last year. I guess I didn't realize how long it had been since the last release. Way it goes :) Cheers, Dave

Alex Seewald 2014-11-30T02:55:46Z
How To Understand The String Representations of Frames?
I am writing a seesaw application. When I call show! on the frame I declared, it does not show up on the screen. In order to locate the problem, I'm logging the string representation of the frame. Is there documentation somewhere describing the various fields and what values those fields are
https://groups.google.com/forum/feed/seesaw-clj/topics/atom_v1_0.xml?num=50
SUNY series in Global Politics
James N. Rosenau, editor

AFTER AUTHORITY ❖ War, Peace, and Global Politics in the 21st Century

Ronnie D. Lipschutz

State University of New York Press

Published by State University of New York Press, Albany
© 2000 State University Plaza, Albany, N.Y. 12246
Production by Michael Haggett
Marketing by Patrick Durocher

Library of Congress Cataloging-in-Publication Data
Lipschutz, Ronnie D.
After authority : war, peace, and global politics in the 21st century / Ronnie D. Lipschutz.
p. cm. — (SUNY series in global politics)
Includes bibliographical references and index.
ISBN 0-7914-4561-5 (hc : alk. paper). — ISBN 0-7914-4562-3 (pbk. : alk. paper)
1. World politics—1989– 2. War. 3. Peace. I. Title. II. Series.
D860.L55 2000
909.82'9—dc21
99-38551 CIP

To Lee Grodzins

Contents
Acknowledgments
1. Theory of Global Politics
2. The Worries of Nations
3. The Insecurity Dilemma
4. Arms and Affluence
5. Markets, the State, and War
6. The Social Contraction
7. The Princ(ipal)
8. Politics among People
Notes
Bibliography
Index

Acknowledgments

Once, it seems, we knew what to do.
—Bronislaw Szerszynski, “On Knowing What to Do”

This book has been a long time coming. It is the second in what I have come to think of as my “security trilogy.” The first was On Security (Columbia, 1995); the third is tentatively entitled Minds at Peace, and it should appear sometime early in the next millennium.
Although some of the preliminary thinking behind this volume occurred in the mid- to late-1980s, the ideas did not really germinate until I arrived at UC-Santa Cruz in 1990, and taught a senior seminar entitled “National Security and Interdependence.” Looking at the literature, I began to think more was needed in international relations than just epistemological debate and more was needed in foreign policy than simply “redefining security.” I tried, therefore, to write on globalization and national security during my first few years at UCSC, but the book refused to be written. Eventually, I gave up, and went on to other books and other projects. Sometimes, however, books come together quite unexpectedly, and when I returned to the project in 1997, I discovered that a number of papers and articles I had written, presented, and published fit together in what I thought (and what I hope you think) is an interesting and provocative way.

As is always the case with such books, they are the product of more than one person, although I take full responsibility for everything that appears here. In the course of thinking about and writing what appears here, I have incurred more debts to friends and colleagues than I am now able to recall. Among those who have, in one way or another, helped me along the way are Beverly Crawford, Ken Conca, Gene Rochlin, Peter Euben, Karen Litfin, James Rosenau, Hayward Alker (who suggested the title), Mary Ann Tetreault, and David Meyer (and, needless to say, many more). My wife, Mary, and my children, Eric and Maia, deserve the utmost thanks and love for showing such great forbearance in dealing with almost constant grumpiness. Finally, I dedicate this book to Lee Grodzins who, as my graduate advisor at MIT, saw that heavy-ion nuclear physics was not in my future.
Financial support for various parts of this book has come from a variety of sources, including: the Social Sciences Division and Academic Senate of UC-Santa Cruz, the UC Systemwide Institute on Global Conflict and Cooperation at UC-San Diego, the Center for German and European Studies at UC-Berkeley, the Pew Charitable Trusts, and the Lipschutz-Wieland Research Periphery.

Portions of chapter 2 originally appeared in Ronnie D. Lipschutz, “The Great Transformation Revisited,” Brown Journal of World Affairs 4, no. 1 (winter/spring 1997): 299–318. Copyright 1997 Brown Journal of World Affairs, reprinted by permission. Portions of chapters 3 and 4 originally appeared in Ronnie D. Lipschutz, “On Security,” pp. 1–23, and “Negotiating the Boundaries of Difference and Security at Millenium’s End,” pp. 212–28, in Ronnie D. Lipschutz (ed.), On Security (New York: Columbia University Press, 1995). Copyright 1995, Columbia University Press, reprinted by permission of the publisher. A different version of chapter 5 was published as Ronnie D. Lipschutz, “The Nature of Sovereignty and the Sovereignty of Nature: Problematizing the Boundaries between Self, Society, State, and System,” in Karen T. Litfin (ed.), The Greening of Sovereignty in World Politics (Cambridge: MIT Press, 1998). Copyright 1998 MIT Press, reprinted by permission. Portions of chapter 6 were originally published as Ronnie Lipschutz and Beverly Crawford, “Economic Globalization and the ‘New’ Ethnic Strife: What is to be Done?” San Diego: Institute on Global Conflict and Cooperation, UC-San Diego, (Policy Paper 25, May 1996). Copyright 1996 IGCC, reprinted by permission; Ronnie D. Lipschutz, “Seeking a State of One’s Own: An Analytical Framework for Assessing ‘Ethnic and Sectarian Conflicts’,” in: Beverly Crawford and Ronnie D. Lipschutz (eds.), The Myth of “Ethnic Conflict” (Berkeley: Institute of International and Area Studies, UC-Berkeley, 1998).
Copyright 1998 IIAS, reprinted by permission; and Ronnie D. Lipschutz with Judith Mayer, Global Civil Society and Global Environmental Governance (Albany: State University of New York Press, 1996), chap. 7. Different versions of chapter 7 appear in Jose V. Ciprut (ed.), “The State as Moral Authority in an Evolving Global Political Economy,” The Art of the Feud: Reconceptualizing International Relations (Westport, CT: Greenwood Publishing, forthcoming 2000); and David Jacobsen, Mathias Albert and Yosef Lapid (eds.), “(B)orders and (Dis)Orders: The Role of Moral Authority in Global Politics,” Identities, Borders and Order (Minneapolis: University of Minnesota Press, forthcoming 2000). Chapter 8 draws on a number of sources, including Ronnie D. Lipschutz, “Reconstructing World Politics: The Emergence of Global Society,” Millennium 21, no. 3 (winter 1992): 389–420 (published in revised form in Jeremy Larkins and Rick Fawn, eds., International Society after the Cold War, London, Macmillan, 1996). Copyright 1992, 1996, Millennium Publishing Group, reprinted by permission; Ronnie D. Lipschutz, “From Place to Planet: Local Knowledge and Global Environmental Governance,” Global Governance: A Review of Multilateralism and International Organization 3, no. 1 (January–April 1997): 83–102. Copyright 1997, Lynne Rienner Publishers, reprinted with permission of the publisher; Ronnie D. Lipschutz, “Members Only? Citizenship and Civic Virtue in a Time of Globalization,” International Politics 36, no. 2 (June 1999): 203–233. Copyright 1999, Kluwer Law International, reprinted by permission; and Ronnie D. Lipschutz, “Politics among People: Global Civil Society Reconsidered,” in Heidi Hobbs, (ed.), Pondering Postinternationalism (Albany: State University of New York Press, 2000).

1 ❖ THEORY OF GLOBAL POLITICS

The nation-state is in trouble. It is under siege by contradictory forces of its own making, and its leaders have no idea how to proceed.
Paradoxically, these forces are grounded in the end of the Cold War as well as the broadly held goals of economic growth and the extension of democracy and open markets throughout the world, the very things that are supposed to foster peace and stability. Why should this be so? As states open up to the world economy, they begin to lose one of the raisons d’être for which they first came into being: defense of the sovereign nation. Political change and economic globalization enhance the position of some groups and classes and erode that of others. Liberalization and structural reform reduce the welfare role of the state and cast citizens out on their own. As the state loses interest in the well-being of its citizens, its citizens lose interest in the well-being of the state. They look elsewhere for sources of identity and focuses for their loyalty. Some build new linkages within and across borders; others organize into groups determined to resist economic penetration or to eliminate political competitors. The state loses control in some realms and tries to exercise greater control in others. Military force is of little utility under such circumstances. While it remains the reserve currency of international relations, it is of limited use in changing the minds of people. Instead, police power and discipline, both domestic and foreign, are applied more and more. Even these don’t really work, as any cop on the beat can attest. Order is under siege; disorder is on the rise; authority is crumbling.

These are hardly new arguments. The search for a unifying theory of international politics and world order has been underway for centuries, if not longer.
Such ideas were offered by classical and premodern theorists of politics, such as Thucydides, Hobbes, Kant, List, and various geopoliticians, beginning with Admiral Mahan in the final decade of the 1800s, continuing with Halford Mackinder and Nicholas Spykman during the middle of the twentieth century, and ending with Colin Gray in the 1990s. After World War II, new theories were offered by Morgenthau, Aron, Waltz, and others. Most recently, in the wake of the Cold War’s end, these theories have been restated, albeit in a different form, by Samuel Huntington (1996), Benjamin Barber (1995), and Robert Kaplan (1994, 1996).

So why another book on the subject of war, peace, and global politics? One reason is that most of the others have it wrong. That the world is changing is doubted by only a few; how and why it is changing, and what is its trajectory, is hardly clear to anyone. The approach of the millennium has further enflamed the collective imagination, both popular and scholarly, adding fuel to the fire. But most books and films—The Coming Conflict with China (Bernstein and Munro, 1997), Independence Day and Armageddon, and the “Y2K” furor come to mind here—offer the reader (and the policymaker) a biblical dichotomy: the choice between order and chaos, light and darkness, civilization and barbarity. Order draws for its inspiration on both the recent (and antediluvian) pasts (Noble, 1997), suggesting that a world of well-defined nation-states, under American rule and discipline, still offers the best hope for reducing the risks of war and enhancing the possibilities for teleological human improvement.
Chaos reaches even farther back, to the authors of the Bible, as well as the writings of Hobbes, Rousseau, and others, who warned that, in the absence of government, there is only a “State of Nature,” the “war of every one against every one.” The reality (and here, I wish to avoid debates over what is “real” and what “real” means; see Kubálková, Onuf, and Kowert, 1998) is more likely to be found somewhere in between these two poles or even elsewhere. It is always difficult to ascertain the trajectories of change when one stands in the midst of that change. In a prescient 1991 inaugural lecture at the University College of Wales in Aberystwyth, site of the world’s first department of International Relations, Ken Booth put his finger on the central point. He argued that

sovereignty is disintegrating. States are less able to perform their traditional functions. Global factors increasingly impinge on all decisions made by governments. Identity patterns are becoming more complex, as people assert local loyalties but want to share in global values and lifestyles. The traditional distinction between “foreign” and “domestic” policy is less tenable than ever. And there is growing awareness that we are sharing a common world history. . . . The [metaphor for the] international system which is now developing . . . is of an egg-box containing the shells of sovereignty; but alongside it a global community omelette is cooking. (Booth, 1991:542)

What Booth did not pinpoint were the reasons for the “disintegration of sovereignty” or, for that matter, where it might lead. Indeed, although virtually everyone writing on the future of world politics takes as a starting point the decline in the sovereign prerogatives of the state, almost no one places the responsibility for this loss directly on the state itself.
It is not that the governments of contemporary states have meant to lose sovereignty; they were searching for means to further enhance their power, control and sovereignty. Rather, it was that certain institutional practices set in train after World War II have, paradoxically, reduced the sovereign autonomy that was, after all, the ultimate objective of the Allied forces in that war. Indeed, if there is a single central “unintended consequence” of the international politics and economics of the past fifty years, it is the replacement of the sovereign state by the sovereign individual as the subject of world politics.

In saying this, I do not mean to suggest that states are bound to disappear, or that the “legitimate monopoly of violence” will, somehow, be reassigned to tribes, clans, or individuals (although some, such as Kaplan [1996] and Martin van Creveld [1991], argue that, in many places, this has already happened). Instead, it is to argue that the project of “globalization” (an ill-defined and all-encompassing term, discussed in chapter 2), its commitment to individualism in politics, markets, and civil society, and the decline in the likelihood of large-scale wars and threats around which national mobilization can occur, have made reification of the individual the highest value of many societies, both developed and developing. But because globalization has different effects on different people, and some find themselves better off while others are worse off, individual sovereignty is not accepted by all as a positive value; there is reason to question, moreover, whether it should be regarded positively (Hirsch, 1995). The heedless pursuit of individual self-interest can have corrosive impacts on long-standing institutions, cultures, and hierarchies, and can lead to a degree of social destabilization that may collapse into uncontrolled violence and destruction. The implications of this process for sovereignty, authority, and security are manifold.
Whereas it used to be taken for granted that the nation-state was the object to be secured by the power of the state, the disappearance of singular enemies has opened a fundamental ontological hole, an insecurity dilemma, if you will. Inasmuch as different threats or threatening scenarios promise to affect different individuals and groups differently, there is no overarching enemy that can be used for purposes of mass mobilization (a theme of one of Huntington’s more recent articles; see Huntington, 1997). Those concerned about computer hackers penetrating their cyberspace are rarely the same as those concerned about whether they will still be welcome in their workplaces tomorrow. Whereas it used to be taken for granted that threats to security originated from without—from surprise attacks, invading armies, and agents who sometimes managed to turn citizens into traitors—globalization’s erosion of national authority has managed to create movements of “patriotic” dissidence whose targets are traitorous governments in the seats of national power.1 The old threats were countries with bombs; the new threats are individuals with mail privileges. The old threat was the electromagnetic pulse from exo-atmospheric nuclear detonations; the new threat is information warfare by rogue states, terrorist groups (and corporations?). The old threat was communist subversion by spies, sympathizers, and socialist teachers; the new threat is juvenile subversion by pornography on the World Wide Web. The old threat was aggressive dictators; the new threat is abusive parents. In short, loyalty to the state has been replaced by loyalty to the self, and national authority has been shouldered aside by self-interest. The world of the future might not be one of 200 or 500 or even 1,000 (semi-) sovereign states coexisting uneasily; it could well be one in which every individual is a state of her own, a world of 10 billion statelets, living in a true State of Nature.
What This Book Is About

This book reflects on these matters, on the “end” of authority, sovereignty, and national security at the conclusion of the twentieth century, and on the implications of that end for war, peace, and individual and global politics in the twenty-first. I am not so foolish as to argue here that these phenomena will cease to exist in the near future or that the state is doomed to disappear. And I have no intention of brushing over the genealogies of these concepts or, for that matter, the state and state system in speculating on the global political environment of the twenty-first century. But I do propose here that, in the long view of history, the two hundred-odd years between 1789 and 1989 were exceptional in that the nation-state was unchallenged by any other form of political organization at the global level.2 That exceptional period is now just about over. What will emerge over the coming decades is by no means determined or even clear. As the extent of social change becomes more evident, strong states could reassert their primacy and drive the world back into a new period of geopolitical competition (as could happen in East Asia; see, e.g., Bernstein and Munro, 1997). It is entirely possible that global civil society and institutions of transnational governance will, to a significant degree, supplement or supplant national governments, without undermining the basis for the nation (as appears to be taking place in Europe; see Lipschutz, 1996). Or, the resulting social tensions might be so severe as to cause a collapse into violent chaos and nonstate forms of governance (as some suggest is occurring in various parts of Africa and some urban agglomerations; see Jackson, 1990). Perhaps these, and other, forms of political community and action will coexist, as the medieval and the modern were forced to do during the transition from one to the other. I make few predictions, and no promises.
I begin, in chapter 2, with “The Worries of Nations.” One of the much-noted paradoxes of the 1990s is the coexistence of processes of integration and fragmentation, of globalism and particularism, of simultaneous centralization and decentralization, often in the very same place. James Rosenau (1990) has coined the rather unwieldy term “fragmegration” to describe this phenomenon, which he ascribes largely to the emergence of a “sovereignty-free” world in the midst of a “sovereignty-bound” one. Rosenau frames this “bifurcation” of world politics as a series of conceptual and practical “jailbreaks,” as people acquire the knowledge and capabilities to break out of the political and social structures that have kept them imprisoned for some centuries. Rosenau’s theory—if it can be called that—is an essentially liberal one and, while he acknowledges the importance of economic factors in the split between the two worlds, he shies away from recognizing the central role of material and economic change and the ancillary processes of social innovation and reorganization in this phenomenon. Without falling into a deterministic historical materialism, it is critical to recognize just how central “production,” as Robert Cox (1987) and Stephen Gill (1993) put it, is to the changes to which we are witness. Production is more than just the making of things (by which I mean material goods as well as knowledge); it is the making of particular things under particular forms of social organization to fulfill particular societal purposes (Latour, 1986). These purposes are not autonomous of the material basis of a society but neither are they superstructure to that base. The two constitute each other and, through practice, do so on a continuous and dynamically changing basis. Social organization then becomes the means by which things are produced and used to fulfill those purposes.
Lest this all seem too tautological, or functionalist, there is more at work here than just reproduction, as we shall see. Rosenau’s “fragmegration” is, thus, a consequence of more than just the acquisition of knowledge and skills in a postsovereign political space; it is a direct result of the particular ways in which production and purpose have been pursued and the forms of social organization established to facilitate that pursuit. The simultaneous conditions of integration and fragmentation are, then, part of the process of social innovation and reorganization that goes hand-in-hand with changes in production and purpose. Why, after two hundred or more years of state consolidation and centralization, this should happen now, is not immediately apparent, although the consequences are all too clear. Whether, on balance, this is to be regarded as a positive or negative development remains to be seen. What is clear is that there is no teleology invoked or involved here. I do, however, attribute recent changes to forces similar to those described by Karl Polanyi (1944/1957) in explaining the causes of the two World Wars, and to the ways in which knowledge and social innovation have transformed our relationship to the nation-state and to each other. In chapter 3, I turn to the “Insecurity Dilemma” and its relationship to globalization. What does it mean to be threatened? What does it mean to be secure? As in the myth of the Golden Fleece, the slaying of the Great Soviet Dragon seems to have given rise to a proliferation of smaller, poisonous lizards, most of which are merely annoying, but some of which might be deadly. The difficulty comes in telling the two apart. Integration and interdependence, it has long been supposed, foster communication, understanding, and peace, especially among democracies, but if fragmentation is taking place at the same time, in which direction does the arrow of safety point?
Forty years ago, John Herz (1959) pointed out how the efforts of some states to make themselves more secure often made other states feel less secure (see also Jervis, 1978). Inasmuch as intentions could not be known with certainty, while capabilities could be observed with surety, it was better to assume the worst of one’s neighbor. Today, with the proliferation of imagined threats—imagined in the sense that virtually none have, as yet, come to pass—even capabilities can no longer be fully scrutinized. Terrorists might have acquired weapons of mass destruction—but we do not know for sure.3 Illegal immigrants are subverting our cultures—but they are also supporting them. Mysterious diseases lurk in uncharted forests—but they can escape at a day’s notice, without warning. And even the state cannot protect everyone against these myriads of threats if it does not know whether or not they are real (Lipschutz, 1999b). The result is a wholesale transformation in the security apparatus of the state. Not only is it now directed against external enemies, whomever and wherever they might be, but also against domestic ones—and these just might be the boy or girl next door. Soldiers become cops. Cops acquire armored cars and tanks. Citizens are scrutinized for criminal proclivities. Criminals adopt military armaments and practices. Even the paranoid have enemies and, in a paranoid society, can anyone trust anyone except her/himself? (There may be good reason to be paranoid, as we shall see in chapter 7; the chances are that someone is watching you). Historically, the purpose of “security” was to protect state and society against war. In chapter 4, “Arms and Affluence,” I ask “Whatever happened to World War III?” War has long been a staple topic of film, fiction, and philosophy, if only because it is so uncommon. For those in the midst of battle, there is hardly a big picture: One’s focus is on survival from one moment to the next.
For those who are observers, it is the infrequency and extremities of war that are so fascinating. Yet, in virtually all discussions by international relations specialists, war is taken not as a social institution that can, somehow, be eliminated through deliberate political action, but as a “natural” outgrowth of human nature and relations between human collectives (see, e.g., Waltz, 1959). Where the interests of such collectives come into conflict, it is assumed, war will result; conversely, if collectives can negotiate over their interests, peace is possible. Experience suggests we be more cautious in making such unqualified claims. Paradoxically, while the war of all against all develops apace, the wars of state against state become ever more uncommon. The United States prepares itself for future regional wars, such as the one undertaken against Iraq, in the face of compelling evidence that such wars erupt no more than once every decade or two. In place of really existing war, we now confront virtual warfare, or what I call here “disciplinary deterrence.” This is war by other means: by example, by punishment, by public relations. It rests upon the United States not as world policeperson but as dominatrix, or global vice-principal strolling down the high school hallway, checking miscreants for hall passes. Violators, such as Iraq, get spanked (giving new meanings to bondage and domination), and serve as warning to others who might think about causing trouble. I return to the implications of this metaphor in chapter 7. Hobbes and Locke argued that Leviathan and the social contract were necessary to counter the State of Nature, a condition in which the sole moral stricture was to survive. Only through the state could men (and women) begin to build societies and civilizations. In chapter 5, “Markets, the State, and War,” I examine wars over nature, so-called resource wars that some think could take place over scarce water.
In these cases, the limits of nature are presumed to lead to conflict and war among those who require scarce natural goods (Lipschutz, 1989). This amounts to a political redistribution of access meant to redress the arbitrary boundaries of state and geography. The solution offered to impasses of this sort is exchange in the market, a practice and institution that, left to operate on its own under orderly conditions, can impose peace through the price mechanism. But markets are no less political than any other human institution; they require rules to operate properly, and someone must formulate such rules (Attali, 1997). Moreover, relying on markets to defuse conflicts over resources and environment could have the perverse effect of returning us to something much closer to the State of Nature through the naturalization of market relations. Naturalizing the market removes it from the domain of everyday politics by representing it as immutable and subject neither to change nor to external authority. This, as I point out, is an act of power and domination whose outcomes are quite unlikely to be equitable or legitimate. Indeed, letting the market work its magic may result in no more than a transitory “neoliberal” peace that ultimately leads to vast distributive inequities and a new round of violence (Lipschutz, 1999a). Most contemporary wars are neither between states nor about resources. Chapter 6, “The Social Contraction,” explores the causes and consequences of wars within nation-states, especially as manifested through what we have come to call “ethnic” or “sectarian” conflict. Conventional wisdom attributes these cultural wars to sociobiology, ancient animosities, and the need for human beings to differentiate themselves from one another.
Yet, there is a fundamental problem with such explanations: They fail to tell us in convincing fashion why such violence did not develop earlier or why earlier periods of violence were followed by times of relative peace and stability. Even such arguments as authoritarian governments “keeping the lid on the kettle” are no more than inaccurate metaphors; politics is neither classical mechanics nor thermodynamics nor even chaos theory. Rather than being understood as some sort of atavistic or premodern phenomenon, cultural conflict should be seen as a modern (or even postmodern) response to fundamental social change. The unachievable dream of political theorists and practitioners is stability, now and forever; the undeniable truth is change, always and everywhere. During periods of “normality,” change is slower and more predictable; it can be managed, up to a point. Over the past few decades, we have been witness to more rapid and less predictable changes, brought about by globalization and social innovation. These changes have destabilized the political hierarchies that rule over social orders—even democratic ones—and provided opportunities for those who might seek greater power and wealth to do so. The conflicts and clashes that result can tear societies apart. The tools for popular mobilization are both contextual and contingent; the phenomenon of social warfare, as Jim Seaton (1994) calls it, has changed only in form, but not in content. During the Cold War, political elites mobilized polities and gained power using the discourse of East versus West, Marxist versus Capitalist. Today, culture has become the language under which political action takes place, and elites operate accordingly. In all cases, it is the contractual basis of social order that is under challenge and being destroyed. When people find their prospects uncertain and dismal, they tend to go with those who can promise a better future.
Cultural solidarity draws on such teleological scenarios and pie in the sky, by and by. In chapter 7, “The Princ(ipal),” I explore how the state—especially the American state—is engaged in both international and domestic discipline in the effort to maintain political order amidst the disorder generated by globalization. While conventional wisdom sees the nation-state as a functional provider of security, identity, and welfare, it is better understood as an actor that seeks to project its own, unique, national morality into world politics. Each nation-state, as guardian of its own civil religion and inheritor of a moral authority bequeathed to it by Church and Prince (yes, even the United States!), is seen by its members as the total embodiment of good. In this ethnocentric ontology, therefore, all other nation-states come to be representatives of evil. Those states with power try to impose their moralities onto world politics, in the view that the triumph of good can follow only from total domination. If this is not possible, the next best thing is obedience. The globalization of markets, however, poses an unprecedented challenge to statist moralities. In market society, consumption is a good (and is good), and it is the individual’s responsibility to consume according to his or her needs and desires. Authority thus comes to rest within each individual, whose self-interested behavior becomes, ipso facto, a moral good (although some might call it nihilism). The state, seeking to reimpose order, is forced to demonstrate its authority by acting as a moral agent able to impose its wishes both abroad and at home. Culture wars are one result, for material girls and boys are not so easily lured back inside the old moral borders. Are politics in the twenty-first century destined to be so grim? Not necessarily. Trends are never destiny. We are constrained, but we can make choices.
In chapter 8, “Politics among People,” I suggest a more optimistic possibility. For better or worse, the end of the twentieth century has seen a gradual shift of political power away from the nation-state to the local and the global. Downward decentralization and upward concentration could be disempowering, or they might provide the means for global diversity and democratization. Some governance functions are becoming globalized; others are being devolved to the local level. If we are not to let the global capture the critical functions and leave the irrelevant ones to the local, it is necessary to find ways to have global rules and local diversity, a transnational politics that is both democratic and action-oriented. I suspect that “global civil society” might be one means of accomplishing this end, but there are other possibilities to offer, as we shall see. If we leave politics to the market, we will be able to choose among cereals, toilet paper, automobiles. If we bring politics back in, opportunities for choices will be broader, more appealing and more just. Political action is, therefore, an absolute necessity; if we fail to act, we may be fat but we will not be happy. The world, “after authority,” can be ours to fashion, if we so decide.

2 ❖ THE WORRIES OF NATIONS

Great Transformation

More than half a century has passed since Karl Polanyi penned those words. He wrote The Great Transformation in the midst of the greatest conflagration human civilization has yet known, and, ever since, his book has been regarded as one of the classics of modern political economy. Polanyi sought to explain why the twentieth century, then not yet half over, had already been rent by two great wars.
Where most blamed “accidents” for World War I, and Germany, Japan and the Great Depression for World War II, Polanyi found an explanation in the dreams and failures of nineteenth-century laissez-faire capitalists and the market processes originally set in train during the early years of the first Industrial Revolution, between 1800 and 1850. The nineteenth century was a time of social and technological innovation and reorganization at a scale theretofore unexperienced by anyone. It left an indelible mark on the world and its impacts are still being felt today. The “Great Transformation” led to the emergence of the modern nation-state as an active political and economic player in people’s everyday lives and turned it into an aggressive agent in international relations. It also resulted, in the twentieth century, in the two world wars. It would seem unlikely that a fifty-year-old book about events taking place almost two hundred years ago would have anything to say to us about either today or the future. Nonetheless, many of the same phenomena examined by Polanyi are, once again, at work today. In this chapter, I argue that we have entered a period of social change for which the history of the Industrial Revolution, and the events that followed, merit close scrutiny for contemporary parallels. To be sure, things are not the same, but there are a number of important similarities between then and now. In particular, as the twenty-first century begins, we find ourselves living through a period of social and technological innovation and reorganization, taking place not only within countries but also globally—a phenomenon that is often called “globalization.” We might expect that, as happened in the past, unanticipated social and political consequences will follow (on globalization, see, e.g., Gill and Mittleman, 1997; Sakamoto, 1994; Castells, 1996, 1997, 1998).
In the later chapters of this book, we shall see that these consequences may be violent or peaceful, integrative or fragmenting, bringing prosperity to some and poverty to others. For now, these are mostly only possibilities. At some point during the coming century, however, it is likely that new patterns of global politics will become clear. We may then be able to look back, as Polanyi did, and describe how events, processes of change, and human actions during the second half of the twentieth century led to the new patterns of the twenty-first. At this point, the future remains cloudy and we can only speculate. I begin this chapter with a general discussion of industrial revolutions and their impacts within nation-states and on relations between them. The key element here is social innovation and reorganization at scales running from the household to the global. I then turn to an analysis of the “Cold War Compromise,” the concerted attempt following World War II to avoid the reemergence of those conditions that were thought to have led to the two world wars, and especially World War II. The “compromise” represented the United States’ attempt to steer the global political and economic system toward stability and prosperity by reproducing, as much as possible, domestic American conditions abroad. As we shall see, the Compromise was largely a success, but it has had quite unforeseen results. I then describe the origins of the Third Industrial Revolution (a.k.a. the “information revolution”) in the great applied science projects of World War II (the Manhattan Project, in particular), which became the model for technological research and innovation during the decades that followed. More specifically, it was the mobilization of knowledge in the pursuit of a better world that, paradoxically, has served to undermine the very welfare state that gave birth to the teleological, self-interested, Web-centered global crusade on which we have embarked.
What Are Industrial Revolutions?

The causes and consequences of the social, political, and economic changes, and the seemingly continuous disorder and violence, both interstate and intrastate, that wracked Europe between 1750 and 1850 remain the subject of vociferous controversy (see, e.g., Mann, 1993). For some, it was the mechanization of industry—industrialization—that was central; for others, it was the transition from merchant capitalism to manufacturing and finance capitalism. Still others have argued that it was the destruction of the old post-Reformation hierarchical order by the Enlightenment and the French Revolution that was directly responsible for domestic and international disorder. In many ways, the central contradiction facing the societies of the time was the collapse of authority, as sovereign ruler gave way to sovereign people. Polanyi’s argument was, however, somewhat more subtle than this. He claimed that there was, in effect, a structural mismatch between the emerging system of liberal capitalism and then-existing social values and social relations of production. The enormous investments made in the new factory system by the holders of capital required workers—primarily male, as women were expected to remain at home—willing to work for wages. The workers were not willing to do so. At the beginning of the nineteenth century, society was not organized so as to facilitate the operation of a capitalist industrial system; labor, land, and money were hedged about with all kinds of customary and legal restrictions on use and sale. Indeed, the social organization of people’s lives was such that they had few incentives to leave the land or enter unregulated labor markets. To be sure, the first stages of capitalist production had already been in existence for some time, especially where woven goods were concerned, but these were mostly made through the cottage industry’s “putting-out” system, based in weavers’ homes.
The marriage of water and steam power with such industry, dating from the eighteenth century, made putting out and its social relations of production obsolete. Now it was possible to run multiple looms at one time in one place, with laborers working for a daily wage under the direction of a few on-site managers. But factory owners faced a problem: How could they get male weavers out of their homes and into the factories? The answer was, in effect, to undermine the social support systems that made it possible for them to stay at home, an objective accomplished through the introduction of a self-regulating market economy—that is, liberalization. In such an economy, labor, land, and money would be treated as what Polanyi called “fictitious commodities,” to be bought and sold without any kind of obvious political manipulation (although, to be implemented and made to work, such liberalization required major intervention into society and regulation of social relations; see Gill, 1995:9). Deregulation would ensure availability of the three commodities at least cost to capital and would, in turn, maximize capitalists’ return on investment. It would also generate the funds needed for further national economic expansion (for an exploration of this phenomenon in a contemporary context, see Edmunds, 1996). These were the circumstances under which the first stage of the Great Transformation took place. England, which had operated under principles of mercantilism for some 150 years, made the transition to a self-regulating market system, free trade, and the gold standard (Gilpin, 1977, 1987). Lands held as village commons or bound to particular uses by customary rules were transformed into alienable private property. (This process had begun in England some 150 years earlier, and continues today. 
Enclosure was recently written into the Mexican constitution with privatization of the ejidos; it is being effected through privatization of intellectual property rights; it is even being applied in implementation of the UN Framework Convention on Climate Change.) The Poor Laws, which had functioned to depress wages and pauperize the common people, were repealed and replaced by the “workhouse” and competitive labor markets that undermined residual social solidarity.1 And free trade made it possible to import cheap grains, which made food less costly and small-scale agriculture unremunerative. Polanyi dated “industrial capitalism as a social system” from 1834, the date of the Poor Law Reform. As he put it (1944/1957:83), “[N]ow man was detached from home and kind, torn from his roots and all meaningful environment.” What ensued was massive social change. Karl Marx put it more poetically in 1856 (1978:577–78), observing that “all that is solid melts into air” (the phrase also appears in chapter 1 of The Communist Manifesto). By mid-century, what had begun in England was being repeated throughout much of Western and Central Europe and the Americas, with attendant consequences (see, e.g., Berend and Ránki, 1979, esp. 9–120). Technological innovation in the wake of industrialization exposed the inefficiencies of the old order and led to the political legislation that reorganized social relations. But such reorganization was not cost free to ruling elites; it threatened the social stability that had been laboriously reestablished through repressive means and the balance of power after the Napoleonic Wars. The Concert of Europe was able to keep interstate peace, more or less, but it was hard pressed to address the domestic turmoil and disruption that followed social restructuring.
The newly emerging middle classes, heretofore largely excluded from political participation, saw their prospects under threat and began to agitate for political and economic reform that would give them both a say and a stake in the state. The Revolutions of 1848 were, in part, a result of this agitation; the repression that followed, a response (Gerö, 1995). Nationalism, and what later came to be called the welfare state, emerged from this crisis as deliberate political interventions designed to address both domestic political instability and challenges from without. Together, the two could be seen as a form of “social contract,” nationalism representing the commitment by the citizen to the well-being of the state, welfarism the commitment by the state to the well-being of the citizen (a point developed in chapter 6). To a considerable degree, such mutual obligations helped to temper the social disruption caused by the self-regulating market system. But this contract also, according to Polanyi, set the scene for the outbreak of World War I. The reason was that nationalism set states against one another, as emerging doctrines of geopolitics combined with forms of Social Darwinism, rooted in Charles Darwin’s theories of natural selection (but not advocated by Darwin himself), were extended from individual organisms as members of species to nations as representations of superior races (Agnew and Corbridge, 1995:57). As we shall see in chapter 5, according to German philosophers, who elaborated the biological and evolutionary metaphor, states could be seen as “natural” organisms that passed through specific stages of life. Thus, younger, more energetic states inevitably succeeded older, geriatric ones on the world stage. States must therefore continually seek individual advantage in order not to succumb prematurely to this cycle of Nature (Dalby, 1990:35).
The point here is not that the first Industrial Revolution led, ultimately, to the world wars of the twentieth century, although that is one important aspect of Polanyi’s argument. Rather, it is that modern capitalism was made feasible only through massive social innovation and reorganization (which are sometimes described as “strategies of accumulation”) affecting Europe, North America, and much of the rest of the world. When the first industrial entrepreneurs discovered that they could not entice labor out of their homes and into the factories in exchange for a full day’s pay, they found ways of rendering unviable the family and social structures that, in the towns and villages, had provided some degree of social support even in the midst of privation. Then, workers had no choice but to go into the factories. When, later in the nineteenth century, agitation by workers over low wages and undesirable working conditions led to the formation of the first labor unions, which elites saw as a threat to their control of state and economy (the “spectre haunting Europe”), new regulations and incentives were put in place to, once again, foster a restructuring of social units even while buffering labor and society against some of the worst features of industrial capitalism. Nevertheless, according to Polanyi, these were insufficient to maintain domestic stability. Governments found it necessary to further protect their citizens from the excesses of the system transmitted through the ups and downs of the business cycle, increasingly competitive national policies, and the surplus production capacity that in both the 1870s and 1930s led to major world depressions. Governments responded with growing degrees of protectionism, imperialism, and neomercantilism. Competition and suspicion led to arms races and mutual hostility. Eventually, wars broke out.
The Cold War Compromise

Polanyi’s book was published in 1944, the year that Allied policymakers gathered at Bretton Woods, New Hampshire, to put together their plan for a postwar economic system (Block, 1977; Kapstein, 1996:20). These men—and they were virtually all men, among whom were John Maynard Keynes and Harry Dexter White—were well aware of the history described by Polanyi. They recognized the inherent tension between states trying to reconcile their participation in an international economy with the need to maintain political satisfaction and stability at home; this, after all, had been the dilemma faced by both Allied and Axis powers during the 1930s. Hence, the economic system proposed by Keynes, White, and others was designed to allow countries to maintain full domestic employment and growth while simultaneously avoiding the consequences for domestic stability of trade imbalances and unregulated capital flows, along with semiliberalized trade to reduce the problem of surplus capacity (Gilpin, 1987). These goals were to be accomplished through fixed and stable exchange rates maintained by borrowing from and lending to an International Monetary Fund (IMF), provision of longer-term liquidity through reconstruction and development loans from the World Bank, free trade regulated by an International Trade Organization (ITO), and dollar-gold convertibility to provide an international medium of exchange (for discussions of the Bretton Woods institutions and how they were meant to work, see Block, 1977; Ruggie, 1983a, 1991, 1995). The Bretton Woods arrangements failed almost from the start. Efforts to restore convertibility of the pound sterling collapsed in the face of Britain’s enormous wartime debts, insufficient global liquidity, and the international preference for dollars. Convertibility was postponed.
Both the IMF and World Bank were undercapitalized, too, and the United States soon found it necessary to inject money into the international economy through grants, loans, and military assistance, which had its own negative consequences during the 1960s and 1970s in the “Triffin Dilemma.”2 The ITO never came into existence, although the GATT provided something of a substitute until the establishment of the World Trade Organization in 1995. The compromise of “embedded liberalism,” as John Ruggie (1983a) has called it, nonetheless remained on the books. Embedded liberalism was based on a commitment by national governments to the principles of nineteenth-century economic liberalism, with adequate safeguards and the recognition that a rapid return to such a system might well recreate the conditions of the 1930s. Inasmuch as full-blown liberalization was politically impossible in 1944, the Western allies agreed to move over time in the direction of a fully liberal system. There would be a gradual transition from a more protectionist and neomercantilist world to a more liberal one, in which “self-regulating markets” would be phased in through negotiations among states.3 As the dollar liquidity shortage began to bite toward the end of the 1940s, this more-or-less implicit agreement was greased by financial transfers through the Truman Doctrine, the Marshall Plan, the Korean War, and the Mutual Defense Act (see Pollard, 1985; the Mutual Defense Agency subsequently became the U.S. Agency for International Development, which, in 1998, was transformed into a wholly owned subsidiary of the U.S. Department of State). Full convertibility of Western currencies finally arrived in 1958, and successive GATT rounds served to dismantle many of the protectionist barriers that had been put up in the aftermath of World War II. Still, full-blown international liberalism was not yet in sight.
Although it is generally argued that the purpose of the Cold War liberalization project was both defensive and economic (as the conventional and revisionist accounts would have it), this is not quite correct.4 Rather, the intention of U.S. policy was to reproduce domestic American society (or, at least, its underlying structural conditions), as much as possible, the world over. The implicit reasoning behind this goal, although specious and faulty, was that stability and prosperity in the United States were made possible by capitalism, democracy, growth, freedom, and social integration. If such conditions could be replicated in other countries, everyone would become like the happy Americans (Packenham, 1973; see also Lederer and Burdick, 1958). They would not threaten each other, they would not fight each other, and the number of twentieth-century world wars would be limited to two.5 Whether or not the USSR, the Warsaw Pact, and miscellaneous radical regimes throughout the developing world posed a mortal threat to this project is largely irrelevant. The very existence of the Soviet bloc provided an external enemy that motivated fractious allies to compromise on liberalization (and defense), even when it was not to everyone’s taste or benefit. This ambitious project of liberalization from above came to an end in the late 1960s. Throughout the 1940s and 1950s, the economy of the free world was greased mostly by the dollars that the United States was able to spend abroad or transfer to its allies. The export of dollars helped to maintain high levels of international liquidity and growth, which was to America’s benefit. Already in the late 1950s, as noted earlier, Robert Triffin had warned that this state of affairs could not continue indefinitely. Other countries’ need for additional dollars would eventually reach a limit.
They might then demand gold in exchange, more gold than the United States had squirreled away in Fort Knox.6 The expenditures associated with the Vietnam War only hastened the day when the dollar-gold exchange standard would have to end. That day arrived in 1971 (Gowa, 1983). Not altogether coincidentally, it was during this same period that President Nixon enunciated his eponymous doctrine, which promised to place greater reliance on U.S. allies to maintain regional stability and security. Nixon and Kissinger meant to get the United States out of Vietnam, but the Nixon Doctrine had wider implications, too. In the future, countries would be expected to provide for their own defense rather than relying on the United States, although the latter would gladly sell to the former the armaments needed for this purpose. It was also during these years that the oil-producing countries finally began to demand higher prices for their product, so that they could purchase the weapons and technology needed to implement the doctrine. The oil embargoes, price hikes, gas lines, and inflation that followed were all of a piece (Schurmann, 1974, 1987; Saul, 1992).

The Third Industrial Revolution

These events, and those that followed later, might not have been the most important happenings during the 1960s and 1970s. There was another, much more subtle process underway whose significance had not yet been noticed fully, but whose origins could be traced back to the 1940s: the Third Industrial Revolution, or what is often called today the “information” or “electronics revolution.” This latest great transformation is usually ascribed to the invention of the transistor and the enormous increases in computing speed and capability that followed as more and more semiconductor devices could be crammed into smaller and smaller spaces.
But the information revolution is better understood not as a consequence of that invention but rather as a consequence of fundamental innovation in the social organization of scientific research and development and higher education that began during World War II. Prior to 1945, the economic systems of the industrialized countries were organized around consumer-oriented mass production, or “Fordism” (Rupert, 1995). Fordist production, characteristic of the Second Industrial Revolution, was especially widespread in the United States during the first half of the twentieth century, and well into the second half. It came to be emulated throughout the world, although it faltered during the Great Depression as the supply of manufactured goods and raw materials outstripped the demand of domestic and foreign consumers. The Allied victory in World War II was based on Fordist mass production, which only reinforced the virtues of this type of economy (Milward, 1977; Rochlin, 1985; for an argument that military Fordism is over, see Cohen, 1996). Subsequently, at the end of World War II, factories converted back to civilian production and, after a few ups and downs of the business cycle, Keynesian military spending helped to ensure that consumers would be able to purchase the products turned out by the factories with the wages they earned making the goods. What changed? In 1945, Bernard Brodie made the observation that, with the advent of nuclear weapons, everything had changed. The only function of the military, he said, would now be to prevent future wars (quoted in Freedman, 1983:44). Brodie was only half right; the bomb changed much more than he thought. Neither he nor anyone else recognized then that the development of the atomic bomb also signaled the beginning of the end for Fordism, marked by a subtle shift from production based on material capabilities to a system driven by intellectual ones.
The advent of the information revolution coincided with the origins of the “nuclear revolution” and, indeed, was inherent in it. The change did not come suddenly; just as the First Industrial Revolution had its roots in steam technology that was developed decades before 1800 and coexisted for some time with the putting-out system, and the Second in electricity and electrification of factories, so did Fordism continue to thrive even as it was becoming obsolete. For example, thinking that numbers would make the difference in World War III as they had in World War II, the initial American approach to defense and deterrence was to mass-produce enormous numbers of atomic and hydrogen bombs (some twenty-five thousand by the end of the 1950s) so as to bomb Russia to rubble. As time passed, however, it became obvious that total war with nuclear weapons might not be such a good idea. Most of the nuclear deterrence and arms-control debates of the following forty years pitted those advocating mass use of force (mutually assured destruction, or MAD) against those arguing for niche-targeted “finesse” (MIRVing and counterforce targeting; see Freedman, 1983). The mass production approach to war was obsolete almost as soon as the dust cleared over Hiroshima, but it had yet to be fully applied to science (although it was already being applied in some sectors; see, e.g., Burnham, 1941). In the aftermath of the successes of the Manhattan Project and other state-funded wartime projects, this new model of scientific research and production emerged, organized around “human capital.” Technological change and social innovation once again came into play in the service of the state.7 Science became highly institutionalized. Directed research and development became critical to maintaining the United States’ technological and military edge over its competitors. Education of the workforce in the intellectual tools and skills of this new world became essential.
Education itself was transformed, as it became clear that traditional rote learning—reading, writing, and ’rithmetic—was appropriate to creating a “cannon-fodder” citizenry for the mass armies of World Wars I and II, but would not produce the critically and scientifically trained cadres needed in this new era of U.S.-directed global management. In response, over the following decades, the American system of higher education expanded manyfold. In the 1960s, University of California President Clark Kerr called the new model the “multiversity”; others ridiculed it as the “educational cafeteria.” No matter; specializations proliferated. A college degree became a prerequisite to advancement and mobility out of the working class and into the “middle” class (aided and abetted in this by the GI Bill, Pell Grants, and other forms of educational “credit”). And, because intellectual ability and competence were not distributed by class, race, or gender, it also became necessary to provide access to these opportunities to women as well as minorities.8 Finally, just as had been the case in earlier times, the programs of the leading country were adopted by others (Gerschenkron, 1962; Crawford, 1995). The growth in numbers of educated cadres was not limited to the United States, because the American university model was universalized. Foreigners were encouraged to come to the United States to acquire the skills and training necessary to rationalize their own societies and make them more like America.9 Their way and tuition were often paid by the U.S. government as, for instance, in the “Atoms for Peace” program. Other countries recognized the prestige and political benefits inherent in systems of higher education, as well as their need for trained individuals so that they could compete in this new global system. They built national university systems, too.

The Revolution at Home

Left to its own devices, the information revolution might have gone nowhere.
Just as, in the absence of the impetus of markets and profits, the steam engine would have remained a curiosity with limited application, so was the dynamic of capitalism, combined with political and economic instability, required to really get this latest industrial revolution off the ground. That these elements were necessary to the new regime of accumulation (if not sufficient) is best seen in the trajectory and fate of the Soviet Union. The USSR was able to engineer the first steps of the transformation and acquire advanced military means comparable in most respects to the West’s,10 but eventually it was unable to engage in the social innovation necessary to reorganize the productive process and maintain growth rates (Crawford, 1995). In the United States, the education of cadres of citizens during the Cold War, the erosion of the political legitimacy of the state, and public protests during the 1960s were key parts of the process of social reorganization. The slow decline of American economic dominance was another. The political upheavals of the 1960s had their origins in the extension of American national interest to all parts of the globe during the 1940s and 1950s, as well as the growth of higher education. The expansion of interests meant that specialized knowledge about foreign societies, and their cultures, politics, and economics, was essential if the “free world” were to be managed for the benefit of the United States. The “old boy” banker-lawyer network that had supplied diplomats and specialists throughout much of the twentieth century (Barnet, 1973) could no longer meet the demand. The result was a system dedicated to the production of specially trained individuals, who could deal with foreign affairs and comparative politics, to staff embassies, the State Department, and other agencies, at home and abroad.
And, as I noted above, the emergence of a scientific problem-solving paradigm as the dominant model for managing the new global system also generated the need for large numbers of individuals trained in a variety of scientific disciplines. Growing numbers of highly skilled individuals were thus trained, with the expectation that they would participate in projects addressing social as well as scientific matters.11 But what would happen to these educated elites after college? In many countries, including the United States, new college graduates expected to find employment with their own national and state governments, state-owned and defense-related private industries, or systems of secondary or higher education. For some decades, there was a balance between graduates and jobs, supported by relatively steady economic growth rates. At some point, however, the supply of competent individuals began to exceed the official demand for their skills (Arenson, 1998). Moreover, as the failure in Vietnam demonstrated during the 1960s and 1970s, even the government’s mobilization of expertise in the pursuit of national security objectives did not always turn out successfully. One result of the Vietnam fiasco was a serious challenge to the legitimacy of Cold War politics; another was the breaking open of the culture of expertise, with all of its hegemonic restrictions on opposition to the “dominant paradigm” (Barnet, 1973). Competing centers of expertise, skills, and knowledge began to surface, epitomized in the global proliferation of “think tanks” and nongovernmental organizations of the right and left. These centers came to represent a system of analytical capabilities, knowledge, and practice parallel to that of the state, providing gainful employment to many “symbolic analysts,” as Robert Reich (1992) has called them, at all levels of society, and a series of way stations to those who might wish to move in and out of government positions.
Indeed, it is somewhat paradoxical that, even as Lyndon Johnson’s Great Society was increasingly excoriated for its domestic policy failures, conservative and liberal think tanks were only too happy to rush in with new, usually untested policy advice.

Into the Breach

Thus, the international political and economic turmoil of the 1970s—the collapse of the Bretton Woods currency exchange system, oil embargo and price hikes, recession, inflation, and implementation of the Nixon Doctrine (Schurmann, 1987)—provided the initial impetus to innovation and reorganization in industry and production. Among the effects were the shift from large, gas-guzzling cars to smaller, more fuel-efficient foreign ones—a trend now being reversed with the shift to SUVs as a result of extremely low oil prices—a greater reliance on market mechanisms to generate supplies of raw materials, and the emergence of what came to be called the “new international division of labor.” Of comparable importance in this transition were the growing social costs of the welfare state, which capital saw as a drag on profits, and an emerging attack on the “liberal” American government by Cold War conservatives. The fact that some of America’s allies and client states had successfully followed, and in some cases surpassed, the leader in terms of technological and social innovation was also crucial. This last change should not have come as a surprise, but it did. (Indeed, it is important to recognize that the postwar reorganization and economic development of Japan and Germany represented major successes of U.S. foreign policy!) Reestablishing growth rates and profits, suppressing inflation, and restoring economic management required a reorganization of social relations and relations of production, although this was not so evident in the 1970s and 1980s; moreover, what followed was certainly not carefully planned.
Nonetheless, one result of this change was that growing numbers of women and minorities began to enter the U.S. workforce. Not only did they need the money—incomes were subject to high rates of inflation during the 1970s, came under growing pressure as the 1981–82 recession began to bite, and grew more slowly between the mid-1970s and mid-1990s than during the 1950s and 1960s—they also commanded lower wages relative to white men. Moreover, as they acquired heretofore unheard-of purchasing power, women and minorities turned out to be good marketing tools and consumers for corporations seeking new markets (Elliott, 1997). Alternative lifestyles and new family structures became necessary and acceptable, in part because of social innovation, in part for economic reasons. As a result, gays and lesbians came out in growing numbers and they, too, offered an attractive niche market toward which capital could target new products and services. By the beginning of the 1980s, this transformation was in full swing, and so was the reaction against it. The conservatism of Ronald Reagan and his supporters is best understood as a backlash against the cultural and social change fostered by social innovation and reorganization, but it is difficult to argue that the Reaganauts did anything to slow it down. To the contrary: Reagan’s economic policies were designed to shrink the welfare state and squeeze inflation out of the economy, but they had a quite unintended effect on American society and the rest of the world. The 1981–82 recession reduced inflation but was devastating for Rust Belt “metal-bashing” industries—the core of Fordist production—in the United States and abroad. Liberalization, deindustrialization, privatization of the state, and the rise of finance capital actually worked to undermine families. Self-interest became the sure path to success, and parents and children were inculcated with a “what’s in it for me?” sensibility.
The road to profit was clearly marked, and did not involve the fostering of any sense of social or even familial solidarity. Spatial mobility was the key to upward mobility and, for some, the traditional nuclear family became an albatross. Adam Smith believed in the power of the “invisible hand,” but he had also expected that religious and social values would restrain people from uncontrollable self-interest (Coats, 1971, cited in Hirsch, 1995:137). Smith never reckoned with mass secularization, rampant consumerism, and the social indifference the morality of the market might foster. Pat Buchanan’s “culture war,” declared from the podium of the Republican National Convention in 1992, should have come as no surprise to anyone; the conflict had been brewing for years (Lind, 1991; see also Lipschutz, 1998b; Rupert, 1997). What was ironic, perhaps, was that Buchanan and his colleagues blamed political “liberals,” rather than hyperliberal capitalism, for the problems they saw destroying American society.12 To have put the blame on the real cause would have been to reveal to the listening public that the new economic system is not—indeed, cannot be—fair to everyone, and that those who begin with advantages will virtually always retain them (Hirsch, 1995).13 Admitting such a contradiction would be to repeat the fatal mistake of Mikhail Gorbachev, when he announced that the Communist Party of the Soviet Union was no longer the vanguard of socialist truth: Attack the legitimacy of your social system’s ideology, and there is no end to the destruction that might follow (Lipschutz, 1998b). It might happen, anyway, if the parallels between today and Polanyi’s Great Transformation are germane. There are three notable similarities between the two “transformations.” First, although it can hardly be said that there was a welfare state in England in 1800, there did exist various forms of social support for the poor.
These, as Polanyi and others pointed out, served to depress wages to the benefit of capital and also, it was argued, made it more attractive for people to go on relief than to work (Himmelfarb, 1995)—arguments that sound eerily familiar today. Second, the privatization of various forms of public property and commons, which had also provided a resource buffer for the rural poor, was deemed necessary to foster wider markets and provide the labor pool necessary for industrialism to develop. There are not many peasants left in the United States, but the downsizing and the dismantling of the state, and the drive to make corporations meaner, leaner, and more profitable, have eliminated large parts of the social safety net and job security, both of which could be thought of as a form of common-pool property right guaranteed to workers. The result has been to inject large numbers of college-educated but no longer appropriately skilled mid-level, middle-aged managers and civil servants into what is already a highly competitive labor market. This is a market in which much job creation is either in the lower-wage service sector or in areas, such as writing software code, requiring knowledge the newly unemployed do not possess and could acquire only with great difficulty and considerable expense. Third, “opportunity only knocks once.” As people find it necessary to move to where the jobs and money are, other considerations come second. High spatial mobility weakens families, ties to communities, and such other social-support systems as still exist in this country. Like the fabled elders left behind on ice floes by the Inuit, those who cannot move may be left behind or thrown into public shelters or out on the streets. Another interesting, but possibly more significant, parallel to the Great Transformation is the creation of new fictitious commodities akin to Polanyi’s labor, land, and money.
The first is embodied in the concept of human capital (or “human resources,” as it is more prosaically known). During the First Industrial Revolution, people found it necessary to sell their physical strength to capital and, during the second, their manual skills. Now, a premium is placed on intellectual strengths and capabilities and an individual’s ability to process and package information in ways that can be commodified and sold for premium prices. The second fictitious commodity is information, which has been transformed from a common-pool resource into “intellectual property” whose ownership is hedged all about with legal restrictions. While information and knowledge have long been bought, exchanged, and stolen, these activities have usually occurred in concert with the production and consumption of material goods. Today, however, even raw data on individual habits and behavior can be turned into proprietary information and sold. Sometimes, the very methods by which people accomplish their everyday objectives are gathered, processed, and resold to them (Have you used your supermarket club card lately?).14 Finally, the third fictitious commodity is the vast expansion in consumer credit, or what we might call “virtual money,” available primarily to those who are most likely to use it. Whereas the monetization of the English economy was a necessary prerequisite to undermining barter and direct exchange of goods, the creation of virtual money eliminates even the need for face-to-face transactions, inherent value in coinage, or the guarantee of legal tender by governments. Such funds appear virtually ex nihilo as physical and intellectual properties are securitized, as stocks rise on the strength of no apparent material causes, and as individual credit lines are magically increased through the daily mail.15 Of course, not everybody is automatically eligible to participate in this new system of fictitious commodities.
Many lack the required property or income qualifications to gain access. But as Stephen Gill (1995:22) has pointed out, such access is a prerequisite for citizenship in contemporary liberal democracy:

    [T]he substantive conception of citizenship involves not only a political-legal conception, but also an economic idea. Full citizenship requires not only a claim of political rights and obligations, but access to and participation in a system of production and consumption.

Beginning in adolescence, he argues, this acts to discipline and socialize consumers. Failure to meet the terms of economic citizenship, through late payments or bankruptcy, means social marginalization. The threat of exclusion keeps consumers in line. The result, says Gill, is the replacement of “traditional forms of discipline associated with the family and the school” with “market discipline” (1995:26; see also Drainville, 1995). In this way, the workers of the world of the future are bound into domination by the new global economy (points that are further elaborated in chapters 7 and 8). Whether this Third Industrial Revolution has yet reached its apogee is anybody’s guess (Paul Krugman has suggested that it will take at least fifty years to mature fully; 1994a:28–29). Two points, however, are clear. First, the social innovation and reorganization that has undermined the older material basis of American society—and much of the rest of the world, as well—cannot be halted on command. Contemporary change is a global phenomenon that some societies are carrying out more efficiently and equitably than others, but to quit the race would be to return to some form of neomercantilism and severe economic contraction at home and abroad, and this would play well neither in Peoria nor on Wall Street. Second, this Great Transformation is likely to be as severe as, if not worse than, the one that wracked Britain in the first part of the nineteenth century.
Not everyone will suffer equally, of course, or suffer at all, for that matter. Just as some did extremely well by the First and Second Industrial Revolutions, so will many benefit from this one. A global class of the better-off (numbering perhaps 1 billion, if that many) and a global class of the poor (as many as 8 to 10 billion) will emerge. Many members of the better-off class will reside in what today we call “developing countries”; a not inconsiderable number of the poor will live in the “industrialized” ones. If things work out, by the middle of the twenty-first century we might even see a global middle class that will provide bourgeois support for this new global order and, perhaps, demand some form of representative global democratization (see chapter 8). Then, again, we might not.

Spare Change in World Politics

What are the implications of these changes for state, society, citizen, and security? The answers to this question are treated in the following chapters. In one sense, the realist mantra—“The world is a dangerous place”—is correct. Life is full of risks, and it always ends in death. There may well be an asteroid somewhere out in space with Earth’s name written on it. But we should always ask: Dangerous for whom? Perhaps the world is dangerous, especially for those who would manipulate people and politics in pursuit of individual self-interest. We see an example of this in another literary classic, a work of fiction (even though it was not quite meant as a fiction when published in 1962). Toward the end of Eugene Burdick and Harvey Wheeler’s Cold War novel Fail-Safe, the president’s advisor on nuclear strategy, Harvard professor Walter Groteschele (modeled on Henry Kissinger, among others), contemplates his prospects after the thermonuclear annihilation of Moscow and New York City (a catastrophe due, in no small part, to his notions of danger in the world).
Foreseeing the likelihood of an end to the arms race between the two superpowers (whose danger has made him so prominent and well-off), Groteschele

    swung his attention to what his future work would be. If there were drastic cutbacks in military expenditures many businesses would be seriously affected; some of them would even be ruined. A man who understood government and big political movements could make a comfortable living advising the threatened industries. It was a sound idea, and Groteschele tucked it away in his mind with a sense of reassurance. (Burdick and Wheeler, 1962:272)

The postwar project of economic globalization has, perhaps unintentionally, shifted the discursive locus of sovereignty, security, and peace from the state to the individual. The state retains a dominant position in terms of military force, economic management, and so on, but for capitalism to grow successfully beyond the bounds of national markets and become truly global, social innovation must be allowed to take place across all kinds of borders. This can happen only if individuals (and the corporations and organizations they represent and populate) are allowed untrammeled access to all parts of the world and can be assured that they will not be expelled, thrown into jail, or killed if they wander across both figurative and literal borders. Not all governments follow this line, but global innovation is likely to bypass those that don’t. Places that, for one reason or another, find themselves excluded from this process of globalization are also strong candidates for recidivism, revanchism, and reaction. The former Yugoslavia and Myanmar are good examples (Lipschutz and Crawford, 1996; Gagnon, 1995). Even those in the thick of globalization, and reaping extensive benefits from it, are not very comfortable with its implications.
The movement of peoples across borders in the interest of social innovation provides entry not only to those seeking work, but also to some who might have other agendas. As we saw in the reactions to the Oklahoma City bombing in 1995 and the crash of TWA Flight 800 in July 1996, the initial impulse was to blame bombs and missiles in the hands of “foreign terrorists,” although subsequent evidence indicates this not to have been the case in either instance (Lipschutz, 1999b). Nevertheless, as countries lose sovereign control over their borders and the possibility of managing the movement of people, goods, and ideas, they seem to be focusing more closely on the new subjects of transnational sovereignty, the individuals, in the hope that keeping a watchful eye on such free subjects will serve also to discipline them (Gill, 1995; see also chapter 7). This is, most probably, a vain hope: people are very clever, and only the inept—who are not very dangerous—usually get caught. If (Cold) War made the state, and the state made (Cold) War, to paraphrase sociologist Charles Tilly, what is the state to do now? Some ultracompetitive entrepreneurs suggest that “business is war” and, so, we might have to rethink Tilly’s dictum. Wars are a messy business, and it might be prudent to clean them up. That effort is well under way.

3 ❖ THE INSECURITY DILEMMA

What is “security”? What does it mean to be “secure”? Who or what secures us? And why do we feel so insecure? Security demands certainty; to be uncertain about the present and future is to be insecure about them, as well. We try to reduce or eliminate uncertainties in order to become more secure. But risk analysts often tell us that the cost of eliminating a risk is infinite, which suggests that we can never be fully secure.
Security is, therefore, something of a chimera, inasmuch as only the dead can be absolutely sure that nothing about their condition will change (and even then, the promises of Christian millennialism augur some uncertainty about that future). For many, particularly in the United States, the absence of a coherent, concentrated threat or enemy seems to have become especially troubling (Huntington, 1997). The president and Pentagon warn darkly of surrounding dangers (Clinton, 1997). Some describe coming conflicts with non-Western civilizations (Huntington, 1996); others fear the collapse of pivotal states (Chase, Hill, and Kennedy, 1996); a growing number see in China a challenger to U.S. dominance (Bernstein and Munro, 1997). Environmental degradation and economic change are deemed to be security “threats,” while hackers and pornographers lurk in cyberspace, ready to steal information and poison young minds. The boy or girl next door could cut our throats, as we are told in films and articles “based on true stories.” Diseases are poised to escape from disappearing tropical forests, flying out on the next 757, to be deposited in the midst of urban insurrections. Drugs, illegal immigrants, and terrorists are everywhere. And a few far-sighted individuals (and film producers) even tell us that, somewhere out in distant space, there is a comet or asteroid with Earth’s name on it. The universe of threats seems infinite; the only limit is our imagination (Foster, 1994; Kugler, 1995). Why so many threats? Although a decade has passed since the “end” of the Cold War, the basic premises of U.S. national security policy remain uncertain, ill-defined, and contested.
Despite the precise language of President Clinton’s National Security Strategy (1997; see chapter 4), no consensual agreement on the nature or source of present or future threats has developed; no comprehensive strategy akin to containment has emerged; no stable policies regarding force structures and deployments have been formulated (Levin, 1994). The U.S. defense budget continues to grow, albeit more slowly than in the 1980s, but who is the target? NATO expands, but who is the enemy? The world of 170 states on the march against each other is a nostalgic memory; who or what now threatens to stalk us? And why, even though we are, in many ways, more secure than we have been for fifty years—especially with a decline in the probability of large-scale nuclear war—does the search for security continue, more frantically and, some might argue, more fruitlessly, than ever? Is it a failure of policy, or a flaw in reasoning? We face, in short, an insecurity dilemma. Forty years ago, John Herz (1959) formulated the idea of the “security dilemma,” a concept later picked up and further developed by Robert Jervis (1978). Both argued that many of the ostensibly defensive actions taken by states to make themselves more secure—development of new military technologies, accumulation of weapons, mobilization of troops—had the effect of making neighboring states less secure. There was no way of knowing whether the intentions behind military deployments were defensive or offensive; hence, it was better to be safe and assume the worst. The result was, in many instances, an arms spiral, as each side tried to match the acquisitions of its neighbor.1 While there were continual arguments over whether security policy should be based on observable capabilities—what the other side could do—or on intent—what the other side meant to do—there was, minimally, a material basis for arriving at assessments, whether correct or not.
Today, the basis for assessing threats and potential consequences is of a quite different character, for three fundamental reasons. First, those structural features of international politics that constrained and directed security policies and practices between 1947 and 1991 have vanished, even as most of the institutions and many of the capabilities associated with the Cold War remain in place. Institutions can find new ontologies, from which will follow policies, but these must have some fit to new political configurations or they will lose their legitimacy. Thus, we have NATO trying on a variety of new missions without being quite sure of their purpose. Is NATO to remain a security “blanket” for an expanded Western Europe, on standby against the possibility of a newly aggressive and imperial Russia (as many think was the purpose of inclusion of Poland, Hungary, and the Czech Republic)? Is it to become a security “regime,” encompassing all of Europe, as well as North America and the former Soviet republics, intended to provide psychotherapy for aggrieved countries and nations? Can it best function as a security “maker,” uniting its forces, as over Kosovo, to intervene in ethnic and other conflicts that, many fear, could undermine European stability? Or, should it concentrate on deterring the proliferation of weapons of mass destruction in the hands of “rogues” and “terrorists” (Erlanger, 1998)? In the end, the absence of what seems to be clear and definable threats leads to the “hammer-nail” conundrum: you fit the task to the tools rather than first defining the task and then choosing the tools.2 Second, the disappearance of nuclear bipolarity and the “Great Transformation” set in train by the Cold War have led individuals and groups to recover and re-articulate various frameworks of belief and practice, or “historical structures” (to use Robert Cox’s term; Cox, 1987), that create enemies where they did not exist before.
The result is the institutionalization of uncertainty, even in parts of the world that, for decades, seemed quite fixed and stable. Thus, speculate some analysts, civil conflict in Iraq, Yugoslavia, Somalia, Rwanda, and elsewhere would not have broken out had the Cold War not come to an end (leading some, such as John J. Mearsheimer, to predict that “we will soon miss the Cold War”; Mearsheimer, 1990a). As we shall see in chapter 6, the working assumption of such analyses is that these wars are, somehow, premodern or primordial, afflicting only places not fully socialized into twentieth-century modernity, and that such violence was prevented prior to 1989 by the pressures imposed on those countries by the United States and the Soviet Union. But it is also quite possible that such bloodlettings are very postmodern (see, e.g., Beck, 1992:9–16). Consequently, we might behold the futures of global politics in both the European Union and the world’s chaotic places. As globalization works its way on self, state, and society, we may see the emergence of the “insecurity dilemma” at the social level, rather than between the black-box states of classical realist politics. Finally, the anchors that once permitted self-reflective collectivities to fortify themselves and their friends from foes and threats are decomposing, making it ever more difficult to specify which self is to be made secure from what threat. A proliferation of new identities—as states, as cultures, as ethnies, as individuals—indicates that fundamental units of global political interaction have been destabilized, thereby rendering problematic the finding of new anchorages on which to base stable political relations. What is the political structure of a confrontation between Microsoft or Boeing and the European Union (Strange, 1996)? Can computer hackers wage war against the Pentagon?
Could every (wo)man be a country, if not an island?3 Such questions are not meant to lead to dictums such as “the state is obsolete” or “interdependence confounds sovereignty.” Rather, they are meant to suggest that the boundaries that, for forty-odd years, disciplined states and polities no longer do so. To rephrase Yeats’ oft-cited line, it is not that the center cannot hold; rather, it is that the margins cannot be contained. And make no mistake, new margins are emerging everywhere, even in the center (Luke, 1995; Enzensberger, 1994). The disintegration of conceptual containers gets only at the ideational core of the insecurity dilemma. Material processes have their consequences, too, and in today’s world, the struggle over security also arises from another phenomenon: changes in the material constitution of the state itself. Under the pressures of globalization and other systemic forces discussed in chapter 2, the state is being transformed into something different from what it was, even in the recent past. To make this new object “secure” implies different constructions of both threat and security than those with which we are familiar from the past fifty years. Under fluid conditions such as these, the very act of defining security becomes the subject of struggle, providing not only access to material resources and authority but also the opportunity to establish new boundaries of discourse and research (Thompson, 1979; Lipschutz, 1999a). Those who win the debate win more than just the prize, for they also get to mark those boundaries. Those who find themselves left outside have not only lost the game, they have been banished from politics, made outsiders. They may even become the new enemy. Ultimately, it would seem, the only boundary that is truly secure is the one drawn around the self—and even this is open to doubt—which suggests that security is more than just a material condition, and that insecurity might just be a fact of life.
Such insecurity is not to be confused with Hobbes’ State of Nature, however; rather, it is a condition associated with uncertainty, difference, and individuation, as we shall see. In this chapter, I address the twin problems of security and insecurity. I begin with a discussion of the end of the national security state, pointing out how U.S. Cold War policy undermined the very security system meant to protect the West during that period. I then turn to what I call the “insecurity dilemma” and ask why, if the level of global threats has diminished, societies feel so insecure. Pace Herz, Jervis, and others, the insecurity dilemma arises not from threats but from difference. In the third section of this chapter, I discuss how threats are constructed, and by whom. Finally, I conclude by arguing that we would do better to come to grips with insecurity and difference than to try to eliminate all those things we believe might threaten us because they are different and make us feel insecure.

The End of the National Security State?

It has become fashionable (once again!) to say that “states no longer matter” (Ohmae, 1991; 1995). Borders are porous—if they are there at all—and people, capital, goods, and information flow across them with both alacrity and disdain for political authorities.4 The result, some argue, must be productive of peace and harmonious relations among people, as they become comfortable with and trusting of one another through growing familiarity and similarity (an idea first proposed by Norman Angell in 1910; see Angell, 1910). There is a contrary school of thought that insists that states still do matter, more than ever, and that they will be with us for decades, if not centuries and longer, to come. Flows across borders do not foster peace and understanding; if anything, they illustrate just how different societies really are and how few interests they have in common.
The result, argue such contrarians, is likely to be increased friction, and even war.5 As is often the case, neither side in the debate has asked or answered the correct question. Moreover, advocates of both versions tend to reify the state, either in terms of its growing weakness or growing power. Consequently, there are only two “states” of the state, as it were: here or gone, on or off. But the state of the state is hardly a binary condition. Political comparativists never tire of pointing out that what international relations scholars and diplomats call “states” represent, in fact, a wide variety of political forms with an incredible diversity of domestic structures and actors (Jackson, 1990; Inayatullah, 1996). And as sociologists and others often suggest, states are, after all, made up of people acting alone and together in the pursuit of many different goals. Frequently, these goals are contradictory, and the group that “wins” is the one better able to bring to bear its power and capabilities in relevant fora (Smithson, 1996). To the extent that such efforts succeed in narrowing down the range of critical issues facing state and society, it may be possible to say that the state still “matters” in one realm or another (as we shall see, below).6 National security is often taken to be a matter where the state does matter: The survival of the state—and, by extension, society—is paramount. Consequently, where security is concerned the state must take the lead because no other institution, whether domestic or international, can provide comparable amounts of this “public good” to a specific polity. Therefore, the state continues to be important in at least this one realm—or so it is said. The flaw in this argument is that the need or demand for security is not fixed over time or across issue areas or, indeed, the same for all of the individuals and collectives that constitute a state’s society. 
During periods of high international tension, whether real or imagined, the state can force the priority of security policy; the argument that state and society might vanish under external onslaught carries considerable weight. Under other conditions, making such an argument is much more difficult. Some scholars of foreign policy, such as Graham Allison (1971) and John Steinbruner (1974), argued this point more than twenty-five years ago, articulating theories of “bureaucratic politics,” “high politics” versus “low politics,” and “cybernetic decision-making,” in order to explain the resolution of national security crises. A constant in all of these offerings was, however, that the state was central to the conceptualization of threats, formulation of responses, and implementation of security policy. It was also the primary object of that policy.7 In keeping with the search for universal laws and theories, as proposed by Hans Morgenthau and others in the discipline, the basic principles were broadly assumed to be true over both space and time. Yet, if we look at the state as an institution that has changed over time, and continues to change, we discover that such formulations obscure more than they reveal. Today’s “Great Powers” often have the same names as those of a century ago, and they are located in more-or-less the same places (although a few have shifted eastward or westward). We would nonetheless be hard put to argue that, in spite of historical and geographic continuity, they are the same. Changes have taken place not only in domestic politics and the external environment, but also in the relationship between citizen and state and in the very constitution and identity of the citizen herself (Drainville, 1995; Gill, 1995).
Such changes fundamentally alter both the national and international political environments in which state, society, and self exist, thereby rendering most discussions of “redefining security” almost beside the point.8 What is lacking in these old and new analytical frameworks? To repeat a point made earlier, conventional perspectives on national security ignore a critical existential factor: The state, the threats it faces, and the security policies that result are mental as well as material constructs (Buzan, 1991; Lipschutz, 1989). That is to say, the reproduction of the intellectual and emotional logics of the state and its need for security against “enemies” is as important to national security as the production of the technology, soldiers, and military hardware that are meant to provide the physical infrastructure of protection (Huntington, 1997). As the collapse of the Soviet Union indicated, even a materially powerful and evidently secure state can be undermined if the mental constructs supporting it come under sustained pressure, both domestic and international (Crawford, 1995). Indeed, it might be that, of all types of states extant in the world, it is the national security state that is most likely to be affected by the erosion of these nonmaterial constructs. What, then, is the national security state (NSS)? The NSS is best understood as a particular type of institution whose origins are found in the logics of the Industrial Revolution and the Social Darwinist geopolitics of the late nineteenth century. Through these two epistemological frameworks, the consolidation of geographically contiguous territories and the integration of societies within those territories became the sine qua non of national power and survival. The founders of national security states were animated by two overriding motivations.
First, they directly correlated national power with the domination of resources, territory, people, and violence; second, they directly correlated national power with a state-directed project of industrialization, nationalism, and social welfare. The NSS was premised further on a world of external threats—almost always state-centered in origin—directed against national autonomy and territory, from which the nation must be defended. The interests of state and citizen (and corporate actors, as well) were thereby seen to coincide, even in the economic and cultural spheres (Lipschutz, 1989: ch. 5). This process of national consolidation was neither quick nor simple. National states emerged only very slowly out of the monarchies and empires of the eighteenth and early nineteenth centuries, and they were constantly opposed and suppressed, as evidenced in the Congress of Vienna in 1815 and the counterrevolutions of 1848 and 1872. But the idea of the nation-state—an autonomous entity that contained within itself all those who met specified (and largely constructed) ascriptive requirements and excluded or assimilated those who did not (Brown, 1992)—proved more powerful in the longer run. To protect against revanchism and reaction, however, it was also necessary for states to develop military capabilities. By the end of the nineteenth century, moreover, it was clearly in the strategic interest of some nation-states that other territorial entities become nation-states, too. This would reduce the dominance of the European empires as well as the economic potential inherent, if not realized, in control of extensive territories. It would also transfer power to the more capital-intensive and concentrated nation-state (capital intensive in terms of both “human capital” and finance). 
It can be said fairly that the NSS reached its apogee during World War II, with total social and industrial mobilization by both Allied and Axis coalitions; even during the first decades of the Cold War, the two superpowers failed to achieve this level in either scale or scope (Friedberg, 1991; Davis, 1991). World War II nevertheless fatally weakened those empires that had survived World War I and, under an American logic of “divide and conquer,” the remaining empires slowly decomposed into smaller, militarily and economically weaker nation-states during the twenty years following 1945. The coincidence of interests between the NSS and its citizens has not always been either obvious or stable. This can be seen, for example, in the relationship between Nazi Germany and its Jewish residents. In that instance, Jews were claimed to be a threat to the “German people,” and anti-Semitism helped to recreate shared interests among non-Jewish Germans that had been dissolved by the spread of capitalism and the crises of the Weimar Republic. Historically, such antagonisms have developed, or have been cultivated, for political and strategic reasons. This is not what is happening today. With the trend toward individualism and the growing reliance on markets, what is good for General Motors is not always good for the United States (or vice versa). Today, the policies that generate national military power may very well create individual insecurity, and the actions of individuals in the market may very well weaken the state. While this trend began as long ago as the 1970s, the extent of the divergence between state and citizen only became really evident during the 1990s, as the supposed global threat posed by Communism receded and was replaced by more localized and inchoate ones. Why has this happened?
To understand the causes of the decomposition of the NSS, we need to look more closely at the intersection of security strategy and economic policy during the Cold War (Pollard, 1985; McCormick, 1996). For the NSS, this connection was manifest in neomercantilism, and during and after World War II, the neomercantilist geopolitical discourses of Mackinder (1919/1962, 1943), Spykman (1942, 1944), and others were transmuted into the containment policy attributed to George Kennan (Gaddis, 1982) and formalized in documents such as NSC-68 (Dalby, 1990; Agnew and Corbridge, 1995). There was, however, a contradiction inherent in containment. The neomercantilist geopolitical framework of the late nineteenth and early twentieth centuries was unsuited for the postwar period, especially as envisioned by the founders of the Bretton Woods system. Such a geopolitics treated the nation-state—or empire—as the natural unit of analysis and policy. The liberalization project of American postwar planners posited a non-imperial, open economic realm much larger than the national territorial space. As Fred Block (1977) has pointed out, such a system could not exist if limited to national capitalist markets. Consequently, a new unit of analysis and action emerged: the Free World. Inside the borders of the Free World, all states would be united in pursuit of common goals based on individualism and the human propensity to “truck and barter.” Outside would be those states whose mode of behavior was “unnatural,” spoken of in terms of “rotten apples” threatening the Free World’s future (a point further developed in chapter 7). The survival and success of the Free World thus depended on creating and extending boundaries around a “natural community” (Stone, 1988) that had not, heretofore, existed.
The survival and prosperity of the Free World on one side of the boundaries of containment came to rest upon keeping out the influences of the Soviet bloc on the other side of those boundaries. Indeed, the Free World could not have existed without the “Unfree World.” Within the Free World, however, the maintenance of community was more problematic, for it relied on broad acceptance of a hierarchy that often rankled lower-ranked members. Economic liberalism would make the Free World stronger, but it required a globalized version of neomercantilism in which those inside were restricted in dealing with those outside. Inasmuch as there was only so much that could be done to prevent such exchange from taking place, making the Free World work also required a shift of sovereignty from the state to the individual, the “natural” unit of interaction in the market. This, in turn, would prevent Free World states from asserting too strongly their national autonomy as against the economic rights of their citizens. To fully carry through this shift meant that the state would have to yield up its sovereign prerogatives to the market and loosen its control over the domestic economy, a move with security implications (Moravcsik, 1991). Free trade and comparative advantage apply not only to wool and wine, but also to guns and gyroscopes, goods with military potential. For this reason, COCOM, the Coordinating Committee, was established to prevent such goods from falling into Communist hands. Rather than regulating what could be produced domestically, the state was now allowed only to limit what could be exported (Pollard, 1985; Crawford, 1993; Mastanduno, 1991). The borders of Free World nations would be breached by flows of raw materials, manufactures, technology, capital, and even labor—in theory if not practice—in the name of growing and spreading markets (but see Ruggie, 1983a).
And the idea that the state was the “natural” unit of self-defense would gradually wither. The United States, as the core of this global system, was expected to remain technologically dominant, thereby retaining its edge and autonomy (although this is not what has actually come to pass, much to the dismay of numerous analysts and policymakers; see Sandholtz et al., 1992). In theory, all barriers to economic intercourse would have to fall to fully realize the potential of liberalization; in practice, there was (and continues to be) strong resistance to this on the part of some countries, although to little avail. The Soviet Union’s approach to economic control was not entirely dissimilar. Stalin sought to establish an economic sphere dominated by the USSR, while hewing more closely to the mercantilistic prescriptions of Friedrich List (1856; see also Davis, 1991; Crawford, 1993). Within what came to be called the “Soviet bloc,” a division of labor emerged too, but one directed by central planning rather than the “invisible hand” (Bunce, 1985). This unit was never as economically integrated as the Free World and, more to the point, actively sought to restrict the kind of exchange that, in its opposite number, fostered rapid technological innovation (Crawford, 1995). Ultimately, that strategy failed. But note: in a world of states and blocs organized around “national capitalism” rather than Free World liberalism, the Soviet bloc might well have measured up to anything the West could have offered.9 There is no way to verify such a counterfactual but, throughout the 1950s and 1960s, Soviet bloc growth rates and technological achievements were impressive and were seen to be quite threatening (Davis, 1991). Such area-based economic arrangements also played important domestic roles in the security strategies of both the United States and the Soviet Union. The NSS sought to maintain discipline within its borders in order to keep enemies out and citizens loyal.
I use the term “discipline” here not to denote militarization or regimentation but, rather, to describe a social regimen whereby those who questioned or challenged the premises of the NSS were either chastised or ostracized (see chapter 7). While such discipline was, quite clearly, much harsher on the Soviet side of the East-West divide, it was not wholly unknown in the West. By opting for autarky and authoritarianism, and an economy whose main customer was the state rather than the consumer, the Soviet approach made the task of social control that much easier (although more visible and generative of resistance). The United States, pursuing liberal economic and political organization, focused on individual well-being at home and state power abroad. This made social discipline more difficult, because it was premised on a particular type of mental and material conformity that penalized aberrant thoughts and practices through social ridicule and rejection, rather than on an outright totalitarianism that rewarded dissidence with prison or exile. People whose behavior went beyond accepted limits were tagged as Communists or social deviants, and offered the opportunity to rejoin the community only if they would recant their heretical beliefs. Many did. Those who didn’t were labeled “un-American” and blacklisted. This system of social discipline began to break down in the 1960s, but it has only been seriously undermined during the past two decades as hyperliberal tendencies have begun to bite deeply into American society (Hirsch, 1995; Barber, 1995).
The result of hyperliberalism has been a squeeze on labor and the privileging of capital.10 Gradually, the squeeze has been extended from the blue-collar to the white-collar workforce, as well as the military and defense sectors, with successive “downsizings” and mergers among corporations, as they struggle to reduce costs, improve balance sheets, and maintain share value (Nasar, 1994; Edmunds, 1996; Uchitelle, 1998). By now, whether or not it is statistically correct, there is a widespread perception within major segments of the American labor force that no forms of employment are secure (Uchitelle, 1994; Marshall, 1995a; New York Times, 1996). Policymakers and academics such as Robert Reich (1992: part II) argue that “symbolic analysts,” the production workers of the information age, are secure for the future; the reality is that new information technologies may make many of them redundant, too.11 Even as the U.S. economy continues to grow, so do the conditions for alienation, atomization, and social disintegration. The impacts of this change are visible in efforts to rediscipline society. Thus, policymakers struggle to find new threats and define new visions, strategies, and policies for making the world “more secure.” People, losing faith in their leaders and the state, take things into their own hands. Gated communities proliferate in order to keep out the chaos. The privatization of security continues apace and becomes another realm of commodification. Conservative disciplining of liberals and gays mounts. And the most popular television and film “true-life” stories and newscasts inform us just how insecure each of us should really feel. In a perverse inversion of Herz and Jervis, the national security state is brought down to the level of the household, and each one arms itself against the security dilemma posed by its neighbor across the hedge or fence.
Confronting the Insecurity Dilemma

The difficulties associated with “redefining security policy” (Krause and Williams, 1996, 1997) to meet these changing circumstances suggest the appearance of a fundamental ontological hole within national identities, especially that of the United States. In the absence of threats or enemies that affect equally all citizens of a country, there can be no overarching ontology of security, no shared identity differentiating the national self from threatening others, no consensus on what—if anything—should be done.12 No single real or constructed problem—short of the alien invasions and cometary impacts depicted in recent films—offers the comprehensive threat of total destruction once promised by East-West nuclear war. What are national security planners to do when, in succeeding beyond their wildest visions in making the country safe, they have also set the stage for domestic anarchy? The simple answer: find new sources of threat and insecurity, both internal and external. Security has been defined conventionally in terms of the state as a nonarticulated or non-internally-differentiated unit. The “black box” state of realism has always been understood to be a heuristic simplification. Nonetheless, that model does suggest that external threats affect all members of the polity living inside the box to a more-or-less equal degree. In the case of war, everyone’s future would be challenged; in the case of nuclear war, anyone (if not everyone) might die. Even Communist subversion could strike at any place, at any time. Whether this was ever “true” or not, it is no longer the case. This loss of total coverage is problematic: In place of comprehensive threats, the “new” ones discussed or imagined by policymakers, academics, and strategists affect only selected groups and classes within states, with differential impacts that depend, to a significant degree, on an individual’s economic, cultural, and social background.
The social consequences of poverty are not deemed to be a broadband threat; the social consequences of Internet pornography are. The impacts of generally poor health are of concern, but are not a national priority; the impacts of biological weapons in one or two cities are. A U.S. congressman can argue that “We can no longer define our national security in military terms alone. Our ignorance of world cultures and world languages represents a threat to our ability to remain a world leader” (San Francisco Chronicle, 1991a). A newspaper editorial can warn that “the major threats to security today are probably found in such disparate sources as the world’s overcrowded classrooms, understaffed health facilities, shrinking oil fields, diverted rivers, and holes in the ozone layer” (San Francisco Chronicle, 1991b). A conservative commentator can suggest that threats arise from “the explicit assault on Western culture by ‘politically correct’ radicals,” manifested in “multiculturalism” (Lind, 1991:40). And President Clinton (1997) can propose that “[T]he dangers we face are unprecedented in their complexity.” The segmentation of threats is especially manifest in different areas of American public and private life. For instance, in June 1996, then-CIA director John Deutch (U.S. Senate, 1996) told the Senate Governmental Affairs Committee of

[e]vidence that a number of countries around the world are developing the doctrine, strategies and tools to conduct information attacks. . . . International terrorist groups clearly have the capability to attack the information infrastructure of the United States, even if they use relatively simple means. . . . [A] large-scale attack on U.S. computer networks could cripple the nation’s energy, transportation, communications, banking, business and military systems, which are all dependent on computers that could be vulnerable to sabotage ranging from break-ins by unauthorized “hackers” to attacks with explosives.
Asked whether the threat of such attacks was comparable to those associated with nuclear, chemical, and biological weapons, Deutch replied, “I would say it was very, very close to the top.” Another witness warned that failure to prepare for such attacks could result in “an electronic Pearl Harbor” (U.S. Senate, 1996). None of the witnesses noted that most computer network breakdowns have, so far, been directly attributable to snafus in hardware and software.13

Of particular interest here are the content and implications of the language used to frame the dangers of information warfare. Deutch and his colleagues compare an incident with a fairly selective class impact, which would affect those tied into long-distance cyberspace systems, with two of the best-known images of war from the past fifty years, suggesting a quite improbable degree of disruption and destruction. While neither Pearl Harbor nor Hiroshima led to national destruction, the image of a “bolt from the blue,” drawn from nuclear war discourses, certainly suggests such a possibility.

Deutch’s testimony raises a further set of questions: If so-called international terrorists can use simple means to attack the information infrastructure, why have they not done so? Where are the nuclear suitcase bombs? Who has spread radiation and bacteria over American cities? When has anyone put drugs and poisons in urban water supplies? And against whom would the United States retaliate should such incidents occur? There is a not-so-subtle implication in Deutch’s statement that the United States—perhaps through the National Security Agency—is itself capable of conducting information attacks, and has practiced them. This, in turn, suggests self-induced fears generated by projecting U.S. national capabilities onto imagined others (see below). Ironically, Deutch’s warnings about “hackers” remind us that the villains might also be the boy or girl upstairs or next door or down the road!
Such rhetorical tactics are hardly new or innovative; Deutch’s objective is to mobilize legislators into action “by scaring the hell out of them,” as Senator Arthur Vandenberg counseled President Truman to do in 1947. The broadening of national security language to encompass a wide range of social issues and problems also has a long history. In the 1950s, education, health, and highways were brought under the “National Defense” blanket. During the 1980s, all manner of commercial research and development were deemed essential to national security. But what is missing from pronouncements such as Deutch’s is a conviction that all Americans are exposed equally to information warfare. The truth is that, although the American economy is heavily dependent on electronic software, hardware, and networks, warnings about information warfare are a lot like those about prospective climate change. Both might happen, but there is also a lot of handwaving going on. Still, why do we seem so eager to engage deeply with the former and not the latter? Is it because we already have the hammer?

The contemporary search for threats to which we can match our capabilities has a rather frantic quality about it, as though even those who warn about them are not wholly convinced that they are imminent or “real.” This suggests, in turn, that the process whereby contemporary national security policy is made is not so simple as discovering and specifying foreign “threats” to which we can then rationally respond.

New Threats or No Threats?

It is in this context that the insecurity dilemma emerges full-blown to challenge the ontology of the national security state: If there are no plausible threats, what is the purpose of the NSS? If imagined threats are selective and domestic, why continue to expand military capabilities? And if individuals are more concerned about themselves than their society, how can support for security policy be mobilized?
It is in this context, too, that the struggle to redefine security has been and is taking place. To understand both the insecurity dilemma and the struggle over “redefining security,” we must consider how security is constituted as both concept and practice. Conceptualizations of security—from which follow policy and practice—are to be found in discourses of security. Such discourses are neither strictly objective assessments nor purely analytical constructs of threat. They are, rather, the products of historical structures and processes, of struggles for power within the state, of conflicts between the societal groupings that inhabit states and the interests that besiege them. Hence, not only are there struggles over security among nations, there are also struggles over security among notions. Winning the right to define security provides not just access to resources but also the authority to articulate new definitions and discourses of security, thereby directing the policy that leads to real, material outcomes. As Karen Litfin (1994:13) points out,

As determinants of what can and cannot be thought, discourses delimit the range of policy options, thereby functioning as precursors to policy outcomes. . . . The supreme power is the power to delineate the boundaries of thought—an attribute not so much of specific agents as it is of discursive practices.

Discourses of security, however clearly articulated, thus remain fraught with contradictions that are ignored or minimized but that nonetheless provide important insights into them.14

How and where do discourses of threat and security originate? Barry Buzan (1991:37) has pointed out that “There is a cruel irony in [one] meaning of secure which is ‘unable to escape’.” To secure oneself is, therefore, a sort of trap, for one can never leave a secure place without incurring risks.
Moreover, security appears to be meaningless either as concept or practice without an “Other” to help specify the conditions of insecurity that must be guarded against. James Der Derian (1995), citing Nietzsche, points out that this “Other” is made manifest through differences that create terror and collective resentment of difference—leading to a state of fear—rather than a coming to terms with the positive potentials of difference. As these differences become less than convincing, or fail to be made manifest, however, their power to create fear and terror diminishes, and it becomes necessary to discover ever more menacing threats to reestablish difference. For this purpose, reality may no longer suffice.15 What is substituted, instead, is a dangerous world of imagined threats. Not imaginary threats, but threats conjured up as things that could happen.

Paradoxically, then, it becomes the imagined, unnamed party with the clandestinely assembled and crude atomic device, and not the thousands of reliable, high-yield warheads mounted on missiles poised to launch at a moment’s notice, that is used to create fear, terror, and calls for action. It is the speculation about mysterious actors behind blown-up buildings and fallen jetliners, and not the rather banal defects in wiring and fuel tanks, that creates the atmosphere for greater surveillance and control. It is suspicion of neighbors, thought to be engaged in subversive or surreptitious behaviors, listening to lewd lyrics or logged on to lascivious Web pages, and not concerns about inner-city health and welfare, that brings calls for state intervention.

None of this means that threats do not exist, or that these particular matters could not do substantial damage to U.S. society, if realized as imagined. Rather, it is to point out that imagination sets no limits on the threats we might conjure up.
As David Campbell (1992:2) argues,

[I]nfectious diseases, accidents, and political violence (among other factors) have consequences that can literally be understood in terms of life and death. But not all risks are equal, and not all risks are interpreted as dangers. Modern society contains within it a veritable cornucopia of danger; indeed, there is such an abundance of risk that it is impossible to objectively know all that threatens us. Those events or factors that we identify as dangerous therefore come to be ascribed as such only through an interpretation of their various dimensions of dangerousness.

Finally, although they might only be imagined, even threats that never come to pass can still have real, material consequences if they are treated as though they were real and imminent. And such treatments can be only too deadly. The weapons sent to the Siad Barre regime in Somalia during the late 1970s and 1980s were intended to counter an imagined Soviet “resource war” in the Horn of Africa (Lipschutz, 1989), but they proved quite enough to kill Americans and Somalis alike during the 1990s.

Two consequences follow from the production of a world of imagined threats. The first is that particular social issues may be recast in militarized terms. Thus, although the consumption of drugs within American society has domestic social and economic roots, the “war on drugs” is conducted largely within a military mind-set that turns parents and teachers into soldiers, children into threatened civilians, and inner-city residents into enemies (Campbell, 1992: chap. 7; see also Massing, 1998). The second consequence is that the long arm of the state’s security apparatus is extended into those realms of everyday life that otherwise might be considered to be insulated from it (Gill, 1995). Consider U.S.
laws that permit the government to examine the personal and professional lives of air travelers as a means of finding individuals whose personality profiles match those of putative “terrorists” (Broeder, 1996), or the militarization of urban police departments as part of a “war on crime” (Gaura and Wallace, 1997). All individuals, whether citizen or permanent resident, whether legal or illegal, become potential threats to state security, even though the absolute numbers of terrorists, criminals, and chronic drug users are quite small (I return to this point in chapter 7; see also Lipschutz, 1999b).

That security might be, therefore, socially constructed does not mean that there are not to be found real, material conditions that help to create particular interpretations of threats, or that such conditions are irrelevant to either the creation or undermining of the assumptions underlying security policy. But enemies often imagine their Others into being, via the projection of their worst fears onto the Other (as the United States did with Japan in the late 1980s and with China in the 1990s). In this respect, their relationship is intersubjective. To the extent that each acts on these projections, threats to each other acquire a material character. In other words, nuclear-tipped ICBMs are not mere figments of our imagination, but their targeting is a function of what we imagine their possessors might do to us (I return to this point in chapter 4).

Present at the Creation?

As I noted earlier, the ways in which social matters come under the security umbrella are only too familiar to those in the United States who grew up in the 1950s and 1960s, when interstate highways, mathematics, and social science all were subsumed under “national defense.” Today, however, a different logic is at work. Social welfare issues and matters of culture are cast as threats to the body politic, not as things to be brought within the security sphere.
How such threats and dangers can affect national security is not made clear. What such examples do suggest, however, is that threats are not necessarily the product of verifiable, objective material conditions outside of a state’s borders (or inside, for that matter). Rather, conceptions of threats arise, as argued above, out of discursive practices within states and, only secondarily, among states (Banerjee, 1991; Kull, 1985, 1988). As data flow in and information accumulates, someone must make sense of it. There is an excess of data that must be interpreted. Information that is not easily understood because of its uniqueness or complexity is likely to be interpreted in terms of existing frames of reference (as George Kennan did in the Long Telegram of 1946). The most available frames are those already widely accepted. QED.

If this is so, then we must also ask: “Who defines security?” Who proposes how the elements of national power should be mobilized, and to what end? Who has the legitimacy and power to make such proposals? Who is engaged in the social construction of threats and security policy? And how are those ideas disseminated and, finally, realized?16

The fundamental assumption underlying many discussions of “security” is that the creation and propagation of security discourses falls within the purview of certain authorized individuals and groups within a state’s institutions. They possess the legitimate right not only to define what constitutes a threat to security but also to specify which definitions of threat and security will be legitimate. Generally speaking, such individuals and groups are assumed to be aware of:

1. A consensual definition of what constitutes security—there is, in other words, an empirical reality to which the definition applies and on which all can agree;

2. Objective conditions of threat that stand regardless of any individual’s subjective position;

3.
Special knowledge of conditions that allows for the formulation and conduct of policies required by this single definition of security—that is, an understanding of causal relations that will point to determinate outcomes.17

Possession of what is presented as an unambiguous understanding of cause and effect enables these individuals and groups to define threats to security and, in response to specific conditions, formulate policy that will, in their judgement, best secure the state. To be sure, policymakers define security on the basis of a set of assumptions regarding vital interests, plausible enemies, and possible scenarios, all of which grow, to a not-insignificant extent, out of the specific historical and social context of a particular country and some understanding of what is “out there.”18 But while these interests, enemies, and scenarios have a material existence and, presumably, a real import for state security, they cannot be regarded simply as having some sort of “objective” reality independent of these constructions.19

Borrowing from the work of Ole Wæver (1995), I want to suggest that what actually happens in the formulation and implementation of security policy is quite different from the standard model. Wæver argues that elites securitize issues by engaging in “speech acts” that frame and freeze discourses. The very act of designating an issue or matter as having to do with security helps to establish and reproduce the conditions that bring that issue or matter into the security realm.20 As Wæver (1995:55) puts it,

By uttering “security,” a state-representative moves a particular development into a specific area, and thereby claims a special right to use whatever means are necessary to block it.
In intervening in this way, the tools applied by the state look very much like those used during the wars the state might launch if it chose to do so.21 Definitions and practices of security consequently emerge and change as a result of discourses and discursive actions intended to reproduce historical structures and subjects within states and among them (Banerjee, 1991).

Who are these security elites? That is, who “constructs” threats and makes security policy? As far as the process of making security is concerned, there are three potentially different answers. First, we can point to those individuals (or groups, interests, or classes; it doesn’t really matter which) responsible for overseeing the power of the state (e.g., the military, defense analysts in and out of government, etc.). Second, there are those responsible for overseeing the institutions of the state (policymakers, legislative representatives, bureaucrats). Finally, there are those responsible for overseeing the idea of the state (heads of state, leaders, national heroes or symbols, teachers, religious figures, etc.; see Buzan, 1991). Each of these groups may conceive of security somewhat differently, and they may intrude on each other’s turf, but under “normal” conditions there is little or no basic disagreement among them about the amount of security required.

In a cohesive, conceptually robust state, a broadly accepted definition of both national identity and the security speech acts needed to freeze that identity is developed and reinforced by each of these three groups as a form of Gramscian hegemony. Each group, in turn, contributes to the discourses that maintain that conventional wisdom. The authority and power of these groups, acting for and within the state, is marshaled against putative threats, both internal and external.
The institutions of the state oversee policies directed against these threats, and the specific “idea” of the state—and the identity of its citizens—comes to be reinforced in terms of, first, how the state stands and acts in relation to those threats and, second, the way those responsible for maintenance of the idea (through socialization) communicate this relationship. The outcome is a generally accepted, authorized (by authorities) consensus on what is to be protected, the means through which this is to be accomplished, and the consequences if such actions are not taken.

Such a consensus is by no means immutable. Things change. A catastrophe can undermine a consensual national epistemology, as in the case of Germany and Japan after World War II. But it is also possible that what might appear to others to be a disaster, for example, Iraq’s defeat in 1991, can provide an opportunity for reinforcement of that epistemology, as has been apparent in Iraq since 1991. The systemic changes discussed earlier can also undermine consensus, although much more slowly. Domestic and external forces can act so as to chip away at or splinter hegemonic discourses by undermining the ideational and material bases essential to their maintenance and the authority of those who profess them. If there is some question about the legitimacy of the state and its institutions, or the validity of its authority, those in positions of discursive power may decide to rearticulate the relationship between citizen identity and state idea. Russian president Yeltsin’s (unsuccessful) search for a new “national idea” is an example of this. Another involves the restoration or refurbishment of old epistemologies (as in “despite the end of the Cold War, the world remains a dangerous place; therefore rely on our judgement, which so often before has proved valid”).
To put my point another way, a consensual conception of security is stable only so long as people have a vested interest in the maintenance of that particular conception of security. If social change undermines the basis for this conception—for example, by diminishing the individual welfare of many people, by making the conception seem so remote as to be irrelevant, by forgetting the civic behaviors that once reminded everyone about that conception—consensus can and will break down. This may happen, as well, if a particular concept or construction is increasingly at odds with material evidence, or if state institutions are unable to “deliver the goods.” Particular discourses can also shift back and forth, as enemies become friends (Russia) and friends threaten to become enemies again (China). But contradictions between older definitions and changing material conditions can also lead to contestation between competing discourses of security (as in “computer hackers’ ability to engage in information warfare is paramount,” so we need new tools and organizations to catch them).22

The failure of any particular discourse to establish its hegemony means that discursive confusion and contestation over the meaning(s) of security can result among those who, for one reason or another, have a vested interest in a consensual construction. This interest, or the expected benefits, may well be material and not just a matter of patriotic loyalty to nation; by defining security in a particular way, one serves to legitimate a particular set of policy responses. Associated with these are very real armaments, force structures, diplomatic strategies, domestic economic policies, jobs, titles, and incomes (Smithson, 1996).
Wall Street has a particular stake in maintaining good relations with China; cultural conservatives and defense corporations have a stake in imagining a Chinese threat against which we must “be prepared.” The risk in the former is that such relations have an impact on domestic industry and employment, and could delegitimate that policy; the risk in the latter is that the People’s Republic of China (PRC) might take the conservatives seriously (especially if they can affect security policy) and respond in commensurate fashion.

A discursive remodeling of security may also reinforce the identification of citizens with their state, or it can further divide them. If threats to security can be framed in a particular fashion as, for example, arising from a particular enemy, the differentiation between “us” and “them” becomes clearer; states tend to be defined, at least in part, in terms of “negative organizing principles” (Buzan, 1991:79; see also Huntington, 1996, 1997). The security framework is also buttressed intellectually through the reinforcement or establishment of individual roles in a variety of structures and institutions—government, industry, academia. These are linked to a negative organizing principle and its substantiation, in the form of national security advisors and analysts, pundits and professors. Individuals, as well as states, can thus come to define themselves in relation to national security. But, as we are reminded by conspiracy theories about the New World Order, black helicopters, and UN forces in Canada, some discursive frames can decompose the formerly linked security of state and people (Lipschutz, 1998b).

To return to an earlier question: Who constructs and articulates contesting discourses of national security?
Among such people are mainstream “defense intellectuals” and strategic analysts, those individuals who, sharing a certain political culture, can agree on a common framework for defining security threats and policy responses (what might be called a security “episteme”). While their discourse is constructed around the interpretation of “real” incoming data, their analysis is framed in such a way as to, first, define the threat as they see it and, second, legitimate those responses that validate their construction of the threat (see, e.g., Schlesinger, 1991). To repeat: this does not mean that threats are imaginary. Rather, they are imagined and constructed in such a way as to reinforce existing predispositions and thereby legitimate them. This legitimation, in turn, helps to reproduce existing policy, or some variant of it, as well as the material basis for that policy.

Finally, we might ask: why “redefine security” at all? Who advocates such an idea? During the 1980s, at the time this argument was first made (Ullman, 1983; Mathews, 1989), the individuals comprising this group were an amorphous lot, lacking an integrated institutional base or intellectual framework (a situation that has slowly changed during the 1990s). Most tended to see consensual definitions and dominant discourses of security as failing to properly perceive or understand the objective threat environment, but they did not question the logic whereby threats and security were defined.
In other words, the redefiners proposed that the “real” threats to security were different from those that policymakers and defense authorities were generally concerned about, but that the threats were “really out there.” The redefiners argued further that the failure to recognize real threats could have two serious consequences. First, it might undermine state legitimacy, inasmuch as a national defense that did not serve to protect or enhance the general welfare (which is what “security” often comes to mean) would lose public support. Second, it would reproduce a response system whose costs would increasingly outweigh its benefits.

At the same time, however, the redefiners did not propose a shift away from state-based conceptions of security; rather, their arguments sought to buttress eroding state authority by delineating new realms for state action. Thus, for example, discussions of “environmental security” focused on the need for governments to establish themselves as meaningful actors in environmental protection as it related to state maintenance. This would mean establishing a sovereignty claim in a realm heretofore unoccupied, and defining that realm as critical in security terms.23 Interestingly (and, perhaps, predictably), since the concept of environmental security was first offered, it has gradually acquired acceptance among Western military and political institutions, not because the threats are necessarily evident or can be addressed through military means but because it provides them with a new mission, both conceptual and material. Many of the arguments for redefining security can be seen in retrospect, therefore, as part of an effort to shore up crumbling rationales for state sovereignty, a goal not so far from that desired by security authorities themselves.
The success of the redefiners has been considerable, especially in terms of the “new threats” being integrated into the existing machinery of national security (see chapter 4). A few analysts have argued that the purity of the security field must be maintained if it is to have any disciplinary rigor or meaning (Walt, 1991; Deudney, 1990). Others have written that traditional security concerns will soon reemerge, in one form or another, and that it is premature to turn our attention to new problems (Kugler, 1995). Certainly, the lexicon of “new threats” has been picked up and disseminated relentlessly, as seen in the 1997 Quadrennial Defense Review issued by the Pentagon (and addressed in chapter 4). It is less clear whether the U.S. military has a clearly conceived strategy for responding to these problems or whether it has simply offered new rationales for old policies.

Perhaps security is an outmoded practice—as slavery, colonialism, and the use of land mines have all come to be—both normatively and materially bankrupt. Or perhaps the more traditional functions of the state are being undermined by processes of interactive (and intersubjective) change: States and governments can no longer manage what they once did, and cannot yet manage what is new. Under these circumstances, it makes less and less sense to see the state as the referent object of security. Hence, we not only have to unwrap the ways in which changing material conditions affect the state materially, we also have to understand how these changes alter the very idea of the state—as well as the idea of security—thereby creating new referents of political activity and, perhaps, security.

(B)orders and (Dis)orders

To ask, then, the logical question: What is security?
In a book published in 1988, the authors could still argue that national borders remained authoritative and determining of security:

In the most basic sense, what the American people have to deal with when they adjust to the world outside U.S. frontiers is 170 [sic] assorted nation-states, each in control of a certain amount of the earth’s territory. These 170 nations, being sovereign, are able to reach decisions on the use of armed forces under their government’s control. They can decide to attack other nations. (Hartmann and Wendzel, 1988:3–4)

By 1989, it appeared that the roster of states had been fixed, the books closed for good. Only Antarctica remained an unresolved puzzle, where international agreements put overlapping national claims into indefinite abeyance. There were many “international” borders, to be sure, but these were understood to be fixed in number and location, inscribed in stone and on paper. States might draw imaginary lines, or “bordoids,” as Bruce Larkin (forthcoming) has stylized them, defining and encompassing “national interests” beyond their borders. They might extend their borders in a somewhat hypothetical fashion in order to bring allies into the sphere of blessedness, as in the practice of extended deterrence in Europe. They might effectively take over the machinery of other states, as in Central America and Central Asia, even as they paid obeisance to the sacred lines on the ground, claiming to be protecting the sovereignty of the fortunate victim. Enemies and threats were, however, always across the line.

Is it the lines themselves that are the problem? If so, this suggests that security discourse irreducibly invokes the authority of borders and boundaries, rather than their physical or imagined presence, for its power. Borders and boundaries presume categories of things, be they people, states, or “civilizations,” and categories presume differences between subjects on either side of the boundary.
The practical difficulty is always how authority is to be linked to these borders and boundaries in order to maintain difference and constrain change. What authority is capable of authorizing such lines? James Der Derian (1995:34) has pointed out that establishing borders involves the drawing of lines between the collective self and what is, in Nietzsche’s words, “alien and weaker.” In this way, the boundary between known and unknown is reified and secured. But such distinctions are not so easily made. Before 1989, Croat, Serb, and Muslim had lived together in relative peace as “Yugoslavs” for forty-five years. After 1991, the borders between them were, somehow, authorized so as to magnify small differences and turn them into an authorization for “ethnic cleansing.” These borders, moreover, were drawn both on the ground and in the mind, so that the “alien” could be identified, whatever his or her physical location (chapter 6).

Thus, conventionally and historically, borders have been drawn not only by dint of geography but also between the self and the enemy, between the realm of safety and the realm of danger, between tame zones and wild ones, between the supposedly known and the presumably unverifiable and unknown. Traditionally, it was practitioners of diplomacy and security who marked such borders between states, or between groups of states, and they did so as the authorities who drew the lines, maintained their integrity, and validated those characteristics, whether cultural or political, that distinguished insider from outsider, one side from the other.24 But such boundaries can be very fluid. Because they are as much conceptual as physical, the insider must be disciplined (or self-disciplined) to remain within them. Hence, when the authority of borders and boundaries weakens or disappears, the old (b)orders become disordered.
In retrospect, the revolutions of 1989 actualized what had already been underway for some time but had not been recognized: the fluidization, diminution, and dissolution of borders and intrastate boundaries. This was represented by a phenomenon that some observers had, in the past, called “interdependence.” But interdependence assumes the continuity of borders and boundaries, not their dissolution or the intermingling of previously separated groups (see chapter 5).

Paradoxically, as old borders disappeared, new ones emerged, first in the mind and only then on the ground (Dawson, 1996). Former comrades and compatriots now found themselves on opposite sides of borders, sometimes on the “wrong” side, as was the case with the 25 million Russians in the “near abroad.” New boundaries were drawn through what had once been states or titular republics, creating multiple identities where before there had been (nominally) only one. Even industrialized countries were not immune to this phenomenon, as new lines were drawn between “true” nationals and the children of immigrants who had never traveled to the old country. The post-1989 borders had much the same effect, with newly imagined nations militarizing their identities in order to establish their imagined autonomy from old ones. In doing so, these new nations rejected the old ones, rendering them both illegitimate and undesirable.

But new borders did not, and cannot, put an end to the old questions: Who are you? Who am I? Why am I here? Boundaries are always under challenge, and they must always be reestablished, not only on the ground but also in the mind. Here is where security is, ultimately, to be found; here is where insecurity is, ultimately, generated. The marking of borders and boundaries is never truly finalized, never finally set in stone. Borders are meant to discipline, but they also offer the opportunity of being crossed or transgressed.
Borders are lines on maps and markers on the ground, but border regions are rarely so neat. Borderlands are places where mixing occurs, or has occurred, or might occur. They are, in themselves, a contradiction to, a rejection of, the neatly drawn limits of the nation-state. Borderlands are thus a threat to the security supposedly established by the authorized borders precisely because they offer the possibility of people freely moving back and forth across lines without ever actually crossing borders. It is for this reason, as much as anything else, that border zones are sometimes cleansed of people in the name of security. How did this insecurity dilemma—the loss of firm boundaries—come to pass? As is so often the case in human affairs, the causal mechanism is overdetermined. Liberalization and globalization have been major factors, but the “nuclear wars” that were “fought” between 1950 and 1989 also played a central role. Those wars were never fought on the plains of Germany, as the planners of Flexible Response and the AirLand Battle thought they would be, but they were fought in the minds of the military, the policymakers, and a fearful public. What became clear during the 1980s was that no amount of drawing of lines or borders between friend and foe could limit the destruction that would follow if missiles should be launched and opposing armies thrown into battle. In the end, both sides would suffer immeasurable consequences.25 Nuclear deterrence, in other words, came to depend not on physical destructiveness, but rather on the maintenance of borders on the ground and in the mind: To be secure, one had to believe that, were the Other to cross the line, both the self and the Other would cease to exist; to maintain the line and be secure meant living with the risk that it might be crossed.
Although neither side would dare to physically cross the line, it was still possible that mental crossings—what was called “Finlandization” (a slur)—could occur. The threat of nothingness secured the ontology of being, but at great political cost to those who pursued the formula. Authority deemed the fiction necessary to survival. Since 1991, the nuclear threat has ceased to wield its old cognitive force, and the borders in the mind and on the ground have vanished, in spite of repeated efforts to draw them anew, perhaps farther East, perhaps elsewhere. To be sure, the United States and Russia still do not launch missiles against each other because both know the result would be annihilation. But the same is true for France and Britain, or China and Israel. It was the existence of the Other across the border that gave national security its power and authority; it is the disappearance of the border that has vanquished that power. Where Russia is now concerned, we are, paradoxically, not secure, because we see no need to be secured.26 France is fully capable of doing great damage to the United States, but that capability has no meaning in terms of U.S. security. In other words, if safety cannot be distinguished from danger, there is no border and, hence, no security problem. The debates over the expansion of NATO, and the decision to bring Poland, Hungary, and the Czech Republic in from the cold, have revived these very same questions. Who is inside? Who is outside? And why does it matter? As a multilateral security alliance aimed at the enemy to the East, NATO’s long-term mission had been to guard the border, to keep the Elbe the line separating the Free World from its unfree doppelgänger. Defining a new “mission” for NATO, or taking in new members, does not eliminate the conceptual insecurity arising from the new boundaries or lack thereof. A new line is drawn, but everyone is careful not to authorize a new meaning for it.
Yet, lines are fraught with meanings and so, inevitably, a meaning is sought for this one. Expansion is not directed against Russia, but it might be. NATO will not use its military power to suppress ethnic groups within Europe, but it has (and will, apparently). A Rapid Reaction Force would not intervene in civil wars, but who knows for sure? What else is NATO good for? Applying its military might against terrorists, computer hackers, pornographers and pederasts, drug smugglers, and illegal immigrants would be akin to killing a fly with a Peacekeeper (whether missile or Colt .45).27

Conclusion

The insecurity dilemma is a permanent condition of life. This is not, however, the same as the insecurity of a Hobbesian State of Nature, or the fear that arises when neighbors, whether in the house or the state next door, begin to arm. In today’s world, the insecurity dilemma arises out of uncertainty, out of a changing and never fully predictable world. Securing the self and the state against change works both ways: it seeks to freeze lines on the ground and in the mind, and it keeps baleful influences out, but also imprisons those protected within the iron cage. I can do no better in ending this chapter than to quote James Der Derian (1995: 34), who argues that “A safe life requires safe truths. The strange and the alien remain unexamined, the unknown becomes identified as evil, and evil provokes hostility—recycling the desire for security.” Surely we can do better than this.

4 ❖ ARMS AND AFFLUENCE

If you would have wealth, prepare for war.
—Unattributed

Whatever happened to World War III? For almost forty years, two great military alliances faced off across a line drawn through the center of a middling-sized peninsula, ready to destroy the world at a moment’s notice in order to save it.
John Lewis Gaddis (1987) called this time the “Long Peace,” Mary Kaldor (1990), “the imaginary war.”1 When it was over, a few sentimentalists warned that we would soon miss it, and tried to tell us why (Mearsheimer, 1990a, 1990b). For many—especially the 20 or 30 million who died in Third World wars—the violence was only too real. For other billions, there was a peace of sorts, purchased only at great cost (Schwartz, 1998) and perpetual terror. For the United States and its allies, it was a time of great opportunity and prosperity. The 10 trillion or more dollars expended on the Long Peace brought a period of unprecedented economic growth and wealth. Affluence, it often seemed, was possible only with arms. But even imagined wars must end (Iklé, 1971). Those that exist in the fevered fantasies of war gamers and war-game writers can continue on computer monitors everywhere but, after a time, those played out by nuclear strategists within the Beltway, or by Special Forces in far-off countries, begin to lose their raison d’être as well as their authority and ability to discipline when the “big one” does not come to pass as threatened. If threats are to retain their power to terrorize, therefore, they must be reimagined and fought, over and over, through words, through symbols and images, through languages and rhetorics. World War III was such a war. Although it never took place, it was always about to break out. Peace became a fragile interregnum of not-war. Preparations for the imaginary war were extensive and, in the end, the war that never happened was extravagantly expensive. A poorer world could not have afforded the peace of World War III, but it was that peace that made the West so rich that it could afford to imagine fighting World War III in Europe (and to actually fight it at a much lower level in other parts of the world). These days, there is no shortage of wars, despite the end of the Cold War. But these are minor wars, relatively speaking.
Big wars are too expensive to wage in human terms, although they have also become too costly not to imagine. No U.S. president, present or future, could afford the political costs of even a fraction of the fifty thousand American deaths suffered in Vietnam (apparently, the millions of Vietnamese who died carried no such costs for American presidents). At the same time, neither could a U.S. president, present or future, afford the political costs of abandoning preparations for the “big one.”2 The Gulf War cost more than $50 billion, but at a loss of less than two hundred American lives (more died in accidents before and after the fighting than in actual combat). The immediate costs of the war were paid for largely through grants from allies; the strategy, underlying technology, and resulting weaponry, however, were the products of the imagined World War III. In the short term, the Gulf War gave no great boost to the U.S. economy; in the longer term, the enduring “problem” of Iraq has provided a justification for not abandoning imagined wars. Pace Clausewitz, imagined war is the continuation of economics by other means. So, what does this mean for the future of war? The literature is vast and growing. For some, the next enemy has been chosen, and the coming war already imagined (Bernstein and Munro, 1997). But most theorists and journalists of war are like the fabled drunk, looking for opponents in the well-lit places, rather than in the shadows. For them, prevention of major wars through reliance on nukes, cruise missiles, and the electronic battlefield remains paramount (Cohen, 1996). Yet, such wars are the least likely. Moreover, most commentators seem to remain fascinated by the kill rather than the whip, by missile defense rather than mental offense.
Few analysts of war bother to ask why a war might begin or, for that matter, whether the kinds of wet-dream weapons that excite soldier and civilian alike even have a role in the wars of the future. After all, real war costs money—remember Bob Dole complaining about wasting those expensive cruise missiles? Preparations for war are good for business; deaths, however, are bad for politics. Most contemporary discussions of strategy and battle are, therefore, not about “real” war. They are better understood as “discourses of war” meant, in the absence of an omnipotent and omnicompetent enemy, to terrorize and discipline both friend and foe, citizen and immigrant, alike. A discourse, as noted in Chapter 3, is best understood as an authoritative framework that purports to explain cause and effect and, through practice and repetition, rules out and quashes alternative explanations (Litfin, 1994). Discourses are rooted in the real world, but their power comes from their narrative authority, and not their assumed descriptive or analytical “objectivity” (for an excellent discussion of discourses, see Hajer, 1993). There are, to be sure, many discourses of war that could be discussed here. With variations, however, the three that dominate American thinking are: the last/next war, wars in small countries far away, and imagined wars. In this chapter, I compare and contrast discourses of wars with really existing war, and argue that the two bear little relation to each other. I then discuss how several competing discourses of war have been framed, and assess their political and disciplinary character. Finally, I offer some speculative thoughts on the future of war, especially in a global system in transition: Ten years after, do we face perpetual peace or perpetual war?

Imagined Wars

In May 1997, the Clinton administration issued “A National Security Strategy for a New Century” (Clinton, 1997).
In it, the president waxed enthusiastic about the future: . . . peaceful conflict resolution and greater hope for the people of the world. But, do not get too excited; Clinton also warned darkly that ethnic conflict and outlaw states threaten regional stability; terrorism, drugs, organized crime and proliferation of weapons of mass destruction are global concerns that transcend national borders; and environmental damage and rapid population growth undermine economic prosperity and political stability in many countries. (Clinton, 1997) Needless to say, these are difficult problems to address. Moreover, they raise a host of additional questions: Whom do these phenomena threaten, and how? Do they affect everyone to an equal degree? How should we respond? Will our allies help? How much would it cost? Who will pay? Is there reason to think we can solve such problems? The detailed responses offered in this and other similar government documents focus on the availability and utility of military power. No surprises there: Deployment of military force remains the apotheosis of state sovereignty, an arena in which the state exercises its greatest discretion and control. Yet precisely how such capabilities are to be used to control or eliminate these “new” threats remains problematic, and this poses real epistemological difficulties for strategic planners. According to the Quadrennial Defense Review (QDR, 1997), issued by the Pentagon in April 1997, The security environment between now and 2015 will . . . likely be marked by the absence of a “global peer competitor” able to challenge the United States militarily around the world as the Soviet Union did during the Cold War. Furthermore, it is likely that no regional power or coalition will amass sufficient conventional military strength in the next 10 to 15 years to defeat our armed forces, once the full military potential of the United States is mobilized and deployed to the region of conflict (QDR, 1997: sec. 2, pp.
2–3). Why, then, have a Pentagon? Searching for contingencies that demand maintenance of the military, the authors of the QDR focus on “regional dangers,” and conclude that foremost among these [contingencies] is the threat of coercion and large-scale, cross-border aggression against U.S. allies and friends in key regions by hostile states with significant military power . . . [i]n Southwest Asia . . . [i]n the Middle East . . . [and i]n East Asia [on] the Korean peninsula . . . (QDR, 1997: section 2). But if strength brings peace, it seems also to be debilitating. Despite expressions of confidence regarding the future security environment, U.S. military power and dominance are, rather paradoxically, also portrayed as potential weaknesses. Enemies too cowardly to fight on the field of battle will find other ways to strike back. Thus, the QDR (1997: section 2, p. 2) warns that U.S. dominance in the conventional military arena may encourage adversaries to use such asymmetric means [e.g., terrorism and information warfare] to attack our forces and interests overseas and Americans at home. That is, they are likely to seek advantage over the United States by using unconventional approaches to circumvent or undermine our strengths while exploiting our vulnerabilities. (Emphasis added) In this analysis, capabilities become impediments, and the ability to act is transformed into a formula for paralysis. The apparent contradiction is not explicable, however, in conventional strategic terms. It arises because, on the one hand, nuclear deterrence has no effect at the substate level. On the other hand, both the costs of reallocating defense resources to respond appropriately and the costs of losing young men and women in conventional ground combat, whether interstate or intrastate, are too high in domestic political terms.
Moreover, such major changes might also be interpreted by some as “lack of resolve” and a sign of “weakness.” A nuclear World War III would have avoided such ontological difficulties. Tens of millions might die on the home front, but their deaths would make the loss of tens or hundreds of thousands on the front seem minor by comparison. Today’s conventional warriors suffer losses only on the battlefield, and each one is carefully counted by politicians (if not voters). Yet, as we were warned when NATO began to bomb Yugoslavia, if we do not prepare for such wars, or shift our attention to “not-wars,” aggressors (recalling Munich) will act aggressively against us.

Virtual Nukes

Is there an answer to this dilemma? Apparently there is: the electronic battlefield, whose ultimate application emerges through what is called the “Revolution in Military Affairs” (RMA). The RMA envisions technicians safely ensconced in bunkers thousands of miles from the fighting, using remote-control weapons to kill the enemy’s human soldiers, destroy its material infrastructure, and encircle its territory (Cohen, 1996). In practice, such technology is neither infallible nor cheap; to paraphrase Senator Everett Dirksen, when B-2 bombers cost a billion dollars each, the loss of even one or two means that pretty soon you are talking real money. Hence, the imagining of such wars becomes the means of avoiding them (that the recipients of such visions might not emerge unscathed is, it would seem, of no concern to policymakers and strategists). And, the communication of such imagined futures to the global audience becomes central to the RMA and its task of deterrence. This is not really new, of course; deterrence—especially of the nuclear variety—fulfills a similar communicative function.
Nuclear deterrence threatens to turn the world into a blackened cinder—to bring a decidedly non-Hegelian end to history—if a putative enemy chooses to transgress what the issuer of the threat considers the boundaries of acceptable behavior. The nuclear threat posits an imagined future if certain actions are taken, but there is no intention to turn the imaginary into the real; that would defeat the entire purpose of the exercise. Our experience with such scenario building suggests, however, that the credibility of such an imagined threat remains problematic, and this points to a structural flaw inherent in such deterrence. Whereas conventional deterrence consists of threats to punish the offender and can be tested, nuclear deterrence, according to most conventional wisdom, cannot. Who, after all, would sacrifice New York for Paris, or Los Angeles for an island in the South China Sea? The use of a single nuclear weapon, as in the “firing of a warning shot across the bow,” would prevent the enemy’s use of a second, according to some (see, e.g., Scheer, 1982), but implicit in that first step is the imagined escalation of a single nuclear explosion to thousands (Iklé, 1996). If such a calculus exists only in the imagination, however, it can hardly be said to constitute a material threat or one whose veracity can be demonstrated by example. Again, it is not action in response to a provocation that halts an offender, but imagination that disciplines prior to the initial offensive act. In his famous “ladder of escalation,” Herman Kahn (1965) sought to illustrate through imagined scenario-building that there were many nonnuclear way stations before Gehenna. Inasmuch as his was a theory that could not be fully tested, much less tolerated, the scientific method could not be vindicated for nuclear war.
Consequently, World War III, the Imaginary War, had to be conducted by other means, and enemies had to be disciplined not by really existing wars but by wholly imagined ones. The result was the war of things said and displayed, and the curious way in which nuclear weapons were used. By not being used in a literal sense, but only as a medium of exchange in an imagined exchange, nukes disciplined Americans, Europeans, and Soviets alike.3 The notion of use thus began to acquire a peculiar function. The threat to “use” nuclear weapons, as Thomas Schelling and others pointed out, was credible only to the degree that those in a position of power could convince not only others, but also themselves, that the weapons would be used under appropriate circumstances (Schelling, 1966: chap. 2). But such circumstances could never be too well-defined, for to specify actual conditions of attack might someday require an unwanted launch for the sake of maintaining credibility. The “use” of nuclear weapons consequently took the form of speech acts (Wæver, 1995), backed up by doctrine and deployment, but hedged all about with hypotheticals and conditionals. The plausibility of an imagined action poses a further epistemological problem, however, not only for the threatened but also for the threatener. To transform an imagined threat into one that might credibly be fulfilled, the issuer must behave in such a way that s/he actually believes that s/he would execute the imagined action. This will to act must be conveyed fully to the recipient of the threat, or it may be regarded as empty. Such intentionality requires a certain insouciance of speech that reduces the apocalyptic act to a mundane one (as documented by Robert Scheer (1982) in With Enough Shovels and Steven Kull (1988) in Minds at War; the original effort in this direction is Kahn, 1965).
How else is one to explain pronouncements such as that made by then-secretary of defense Caspar Weinberger (1982), who argued in 1982 testimony before the Senate Foreign Relations Committee that to deter successfully, we must be able—and must be seen to be able—to retaliate against any potential aggressor in such a manner that the costs we will exact will substantially exceed any gains he might hope to achieve through aggression. We, for our part, are under no illusions about the consequences of a nuclear war: we believe there would be no winners in such a war. But this recognition on our part is not sufficient to ensure effective deterrence or to prevent the outbreak of war: it is essential that the Soviet leadership understands this as well. (First emphasis added) In place of a thermonuclear holocaust, then, the nuclear establishment conducted a war of the imagination, of possible futures, of horrors best avoided. Aided and abetted by science-fiction films and novels, studies by the RAND Corporation and numberless institutes of strategic studies, and offhand remarks by policymakers and military officers, nuclear deterrence was raised to a perverse form of art(iculation), in which a convoluted but safe rhetoric came to substitute for risky and explicit action. Deterrence thus became a practice akin to telling ghost stories around the campfire: if one could scare oneself silly, perhaps others would be scared, as well (as Tom Lehrer put it, “If Brezhnev is scared, I’m scared”). But one would never want to become too scared, for to do so might be to lose self-control. . . . A graphic example of nuclear discipline—one of many—took place with the deployment of the intermediate-range nuclear “Euromissiles,” the Pershing-II and Ground Launched Cruise Missiles in Europe during the early 1980s.
These missiles were intended to fill an imagined lacuna in deterrence created by Soviet SS-20 intermediate-range nuclear missiles discovered in Eastern Europe during the mid-1970s. The SS-20s, it was claimed by then-West German chancellor Helmut Schmidt, imperiled the West by taking advantage of a “gap” in the hypothetical ladder of crisis escalation that might be climbed during a future confrontation over Berlin or some other point on the East-West borderline. In such a crisis, the gap could be used by the Soviets, according to Schmidt and others, to menace and discipline Western Europe simply through the motions of preparing to launch the SS-20s. Inasmuch as to actually let loose the SS-20s would have unpredictable, not to mention undesirable, consequences, the result of their presence in the East was to create a Western vision of an imagined future in which such threats might be issued or even executed. In the face of such an eventuality, the failure to respond appropriately could lead to the “Finlandization” of Western Europe, which would be forced to submit to demands made by the Soviet Union out of fear of the imagined future.4 Such demands, of course, had not been made, and never were. Indeed, it would have been considered exceptionally bad manners to actually make such a demand. Rather, they were demands that some in the West imagined might be forthcoming at some future date, and they were demands that, if met, would change Western Europe into something with a different identity and loyalty (a Greater Finland, perhaps?). Imagined threats could not be left alone; they had to generate material responses. To remedy the hole in the whole of nuclear deterrence, policymakers determined that NATO must deploy its own equivalent missiles, thereby countering one imagined war-fighting scenario with another.
Again, the Euromissiles were never intended to be launched; they were only put into Europe to fill an imagined gap that had not existed prior to the West’s awareness that the SS-20s had been deployed and pronouncements that they were, indeed, a threat (see Smith, 1984a, 1984b, 1984c, 1984d). To underline the imaginary quality of the threatened futures invoked by both East and West, in 1987, after some six years of off-again on-again negotiations, the gap disappeared, along with both sides’ missiles.5 As is true with most magical thinking, the “gap” had never been real in any objective sense; it was created through discourses of deterrence and the projection of imagined intentions onto the “Other.” A whole world of the future was created out of dreams, casting its unreal shadow on the present.6 Through virtual exchange of nuclear weapons was mutual deterrence assured. As we shall see, the Revolution in Military Affairs (RMA) and its electronic battlefield have problems of their own.

Fighting the Next War

It is a well-known cliché that, in planning for the next war, generals always fight the last one. One of the lessons of Vietnam seems to have been that nuclear deterrence and discipline have little, if any, impact on nonnuclear adversaries. The president of the United States could hardly contemplate nuking a regional adversary—even one in putative possession of a few atomic weapons—in response to a conventional provocation. How, then, could s/he respond to an “unconventional attack” with a small nuclear bomb in the mythical suitcase? On whom could a retaliatory device be dropped? Who could be punished?7 Fortunately for generals and strategists, the Gulf War intervened to suggest some answers. The classical image of war is one of a tightly controlled, well-executed pas de deux between two enemies, using the most advanced of weaponry, fighting along a well-defined front, each exerting maximum will.
This is the idealized war, the AirLand Battle of NATO (whose imagined clarity, Clausewitz warned us, would prove wholly illusory if it came to pass), the conflict reimagined by Tom Clancy (1987) in his mind-numbing Red Storm Rising. In that novel, Clancy pits the brains of NATO against the brawn of the Warsaw Pact, and NATO wins. The war turns out, however, to have been triggered by a misunderstanding. The war games fought by General Norman Schwarzkopf’s “Jedi Knights” and their counterparts in Iraq (Der Derian, 1992) in preparation for an imagined conflict in the Gulf—one that did take place—relied on a similar paradigm and, according to some, the war was also the result of a “misunderstanding” arising from Ambassador April Glaspie’s replies to Saddam Hussein’s inquiries. The wars to come, according to the conventional wisdom about the RMA, will look a lot like these two, although they are unlikely to be launched as a result of miscommunication. Fought out on electronic battlefields, prosecuted by means of real-time intelligence and weapons controlled through satellite uplinks, won by dint of superior technology, future wars will be something quite different but also much the same (Cohen, 1996; Libicki, 1996). There will be a front line, but our s/he-bots will face the enemy’s flesh-and-blood boys and girls. Technological capital will protect political capital, but the basic choreography will remain the same. Or will it? Most of today’s discourses of war are, somewhat paradoxically, characterized by conservatism in imagination even as they are prosecuted with the as-yet uninvented weapons and strategies of the future. These fantasies belie the form, moreover, of most contemporary wars, which follow Hobbesian lines more closely than Cartesian ones (Kaplan, 1996). Planning for future wars requires some idea of what they might look like, but where are we to look?
Inasmuch as both fortuna and fog play central roles in any battle, no two wars can be alike, much less resemble one another. But because the “last war” almost always comes as a surprise, in the absence of accurate prognostication it usually stands in as the model for the next. For better or worse, therefore, the Gulf War of 1991 has become the current model for the future, as well as the standard against which all debates are conducted (Libicki, 1996). This is not, it should be noted, the type of conflict that is either most often imagined—if fantasies are to be believed—or the most common—if newspaper column inches are measured. As an interstate war for which a detailed account is available, and with a clearly defined enemy and set of front lines, however, it is the most straightforward in terms of preparations and buildup (see the first quote from the QDR above). That, too, is reason enough that the Gulf War is unlikely to be repeated. Even determining under what circumstances war might erupt or be justified is itself problematic. Violations of fundamental material and social norms among states, such as the one that triggered the Gulf War, are exceedingly rare. When one state invades another, the violation of borders is clear.8 A moral outrage has taken place. More than this, such an infraction can be understood as a threat to all states, even those with no immediate interest in the violation. If a border can be crossed with impunity in one place, there is no reason that it cannot happen elsewhere. Consequently, not only do states react to border violations, alone and collectively, they also have reason to fortify their defenses against potential violations by their neighbors (for this is where it is presumed such attacks will come—but, because the United States has no overtly threatening neighbors, it is also why U.S. strategy is so problematic).
Given this reasoning, even though Gulf-type wars might be quite uncommon, the safest bet is that the Gulf War will occur again, somewhere, sometime. This was the path pursued by the Bush and Clinton administrations. In the early 1990s, a full review of U.S. strategy and forces was undertaken—the “Bottom-Up Review”—with an eye toward rationalizing the military while cutting the defense budget. Out of this exercise came the authoritative war-fighting strategy of the Clinton administration—and the official successor to containment—organized around “major regional conflicts,” or MRCs (a concept recently rechristened by Defense Secretary William Cohen as “major theater wars,” or MTWs). An MRC/MTW is modeled roughly on the Gulf War experience, multiplied by two. More to the point, the MRC strategy assumes the simultaneous outbreak of two such conflicts, for example, a reprise of the Gulf War and the Korean War at the same time. Although most strategic analysts agree that the probability of such a scenario is rather low, others have argued that the beginning of one regional war offers an opportune time for a regional “rogue,” such as North Korea, to launch an attack. The United States therefore requires a range of airlift, manpower, and light equipment capabilities that would allow it to respond expeditiously to provocations with rapid troop deployments, followed by the emplacement of large numbers of troops and heavy battlefield weapons—but in less time than the six-month buildup to the Gulf War. Whether American capabilities would permit response to two MRCs at the same time remains unclear, with the result that some propose a “block and hold” approach, whereby regional wars would be fought and won sequentially.
The MRC/MTW strategy—and its essential restatement in the 1997 QDR and the 1998 secretary of defense’s Annual Report to the President and the Congress—is best understood not so much as a warfighting plan as an effort to buttress the presumption of rationality in order to avoid the supposed miscommunication that preceded the Gulf War. It is also a (pre)cautionary tale for both friend and foe. Foes are meant to be cowed by these capabilities and threats of retribution should they try to change things—or so U.S. policymakers hope. The loudly articulated American ability to wage the MRC/MTW strategy is meant to incite in others imagined consequences and thereby prevent them from ever launching a war. The outcome of the Gulf War, and the repeated attacks on Iraq, serve as a morality tale for those who might choose not to believe. This, in turn, relieves the United States of having to fight such a war and, once again, conserves political capital at home. For FOUS (Friends of the United States), the strategy presents the world as a dangerous place, full of unseen although very real enemies but, nonetheless, in good hands under American management and tutelage. Meddle with this arrangement at the risk of loosing chaos upon the land (as France found out when it demanded a larger role in NATO by attempting to take over the European Southern Command). What is absent from these discussions and plans is the “why?” Why would so-called rogues—and these are the only countries that, according to Washington, threaten U.S. forces, allies, or interests—choose to do so? No rational reason can be given, and so irrational ones are offered instead.9 They hate us, but for no reason, since we have no designs on them. They desire vengeance, but for no reason, since we have never offended them.
They wish to injure us, but for no reason, since they have only been injured through their interference with our pursuit of order (a lesson more recently taught to Slobodan Milosevic and Yugoslavia). It is here that the MRC/MTW discourse of war, and the general U.S. strategy, begin to collapse or, at least, run into conceptual trouble. The MRC strategy assumes that all parties to a conflict operate on the basis of the same rational calculus to which the United States adheres, and that each side understands any given situation in similar terms. A failure by the other party to respond appropriately is then attributed to irrationality, insanity, or miscalculation—a “sane, rational” leader would not risk such injuries to society or self—rather than a logic or rationality that we might not understand or to which we might not subscribe.10 It seems safe to say that the number of wars begun by the insane and irrational is probably quite small, and that so-called misunderstandings (i.e., “human error”) are more frequently to blame. More to the point, if insanity or irrationality are to blame for wars, deterrence cannot work to prevent them. But there is a much deeper flaw in the assumptions underlying this discourse, as noted earlier. The failure to deter Iraq in 1990 and 1991 has been attributed to Ambassador April Glaspie and a lack of clarity in the messages sent by the United States. The remedy was and is to develop, deploy, and advertise capabilities so as to communicate clearly, without question, the costs to enemies that would follow from a violation of the status quo. To a determined government, however, the prospective cost in money and lives of a Gulf-scale war might not seem so great as to constrain it from launching an attack on U.S. interests or allies or failing to respond with alacrity when the bombs start to fall.
That much is clear from Iraqi behavior prior to, during, and after the Gulf War (and, more recently, by Yugoslavia’s), although it is a lesson yet to be learned by U.S. policymakers. Furthermore, rationality and irrationality, sanity and insanity might not even be the appropriate concepts to apply to this case. Assuming either rationality or irrationality (and nothing else) disregards questions of deep causality in explaining the onset of wars, ignores what is clearly a result of problematic histories of relations among and within states, and attributes events as they inexplicably occur to factors beyond anyone’s control (e.g., faulty genes, chemical imbalances, or Comet Hale-Bopp). Other causal processes simply drop out. This assumption of rationality—that a commitment of force (and the threat of escalation) deters a rational opponent—was, nevertheless, central to the Carter administration’s original “Rapid Deployment Force” (RDF), later configured into the “Central Command” (CC) under General Norman Schwarzkopf. It was, as well, the theory behind the military buildup prior to the four-day war of January 1991. It remains the logic behind U.S. disciplining of Iraq and others. Here, however, the conflation of nuclear and conventional discipline becomes truly problematic. The original purpose of the RDF/CC was to deter the Soviet Union from launching an attack through Iran toward the Gulf. In that imagined future, the RDF/CC was to have functioned as a tripwire (the same function filled by the 300,000 U.S. troops then in Western Europe) whose triggering would lead to the use of nuclear weapons. Working backward, then, it was the threat of imagined nuclear war that would secure American interests in the Gulf and serve to prevent the Soviets from initiating such an attack (but see Clancy, 1987). According to this logic, therefore, the threat to send military forces to oust Iraq from Kuwait should have been sufficient to accomplish that end.
This latter theory was tested and found wanting because it relied on nuclear threats backed up by virtual forces, and not conventional threats backed up by material ones. That the Gulf tripwire was never intended to be a “real” threat was originally indicated by the fact that the Central Command existed only on paper and could never have made it to the Gulf in time to become a nuclear sacrifice to Soviet aggression. Here, then, was the flaw that is to be studiously avoided by the MRC/MTW strategy. To make credible a threat to deploy and defeat an opponent, not only must the United States have the capability to deploy, it must also make manifest that capability without having actually to deploy. This might be accomplished through war games, rhetoric, and showing the flag, although such exercises suffer from several potential costs. First, it is extremely expensive to maintain such a capability, especially if it requires that troops and equipment be stationed at some distance from potential arenas of conflict (as is the case with the Central Command). Second, there is the very real chance that one’s bluff might be called. One might then be forced to fight and suffer casualties, on the battlefield and in the political wars at home. These are the very things the MRC strategy with its capital-intensive weaponry is meant to avoid. We might conclude, therefore, that the strategy wants some rethinking.

Disciplinary Warfare

This leaves us with one remaining question: Is the Gulf War the archetype of future wars? Are the electronic battlefields of the MRC/MTWs plausible? Or do they simply provide a distraction from lower-intensity, higher-probability conflicts that are so much more difficult to prevent or resolve? From this latter perspective, the wars in Chechnya, the Balkans, Central Africa, and elsewhere might be more appropriate as models, especially if predictions of continuing national fragmentation are borne out (see chapter 6).
From a techno-warrior’s point of view, however, Chechnya-like wars are of little interest and no consequence. Combatants engaged in block-by-block urban combat, using assault rifles, bazookas, and artillery of ancient provenance (relatively speaking) are at high risk of injury or death. Such wars are messy, difficult to orchestrate, and notoriously hard on weapons engineered with high-precision mechanics and fancy electronics. Moreover, the possibility of American involvement in such postmodern struggles—even in a peace-keeping capacity—always appears to be an occasion for policymakers to run for cover. Postmodern warfare is, consequently, regarded by industrialized country foreign ministries and militaries mostly as a nuisance; it is the high-tech stuff that is sexy and porky. But, because “real” war is costly and messy, it has become necessary to find a means of managing wayward parties who fall out of line and violate the principles of a world order whose form and rules are not always so clear. So, even as neighbors in far-away countries slaughter each other with clubs and machetes, the tools of future wars between the United States and unpredictable aggressors are on display in Time, Newsweek, The Economist, and on CNN. I call this disciplinary deterrence. Disciplinary deterrence is executed through demonstration, through publicity, through punishment. It is a means of engaging in war without the discomforts or dangers of battle. It relies on imagined rather than actual warfare, on the dissemination of detailed information about military capabilities rather than on their actual exercise in combat, on the proliferation of the image rather than the application of capabilities. It is a child of the media age, taking advantage of rapid communication and virtual simulations that look all too real. It communicates a none-too-subtle message to potential miscreants.
Finally, in its application to Iraq and, more recently, Yugoslavia, disciplinary deterrence warns others to stay in line (see chapter 7). There are other benefits to be had from disciplinary deterrence, too. Expenditures on high-tech equipment and strategies bolster local economies in important congressional districts while reducing the demand for combat forces (see Rochlin, 1997: chaps. 8–10; Kotz, 1988). For the United States, the costs of disciplinary deterrence are relatively low. The military equipment is in hand, because the defense sector cannot be downsized any further without serious political costs. Those in charge of the communications infrastructure, both military and civilian, are only too happy to report on the amazing feats of which the technology is capable (even if the information offered is not always correct). And the elites of all countries—even “rogue states”—pay close attention to CNN and other media outlets in order to keep up with cultural and political attitudes and activities in the United States. The required publicity about the technology (although not about tactics or intelligence) illustrates an emerging paradox associated with disciplinary deterrence and warfare: Whereas countries once tried to keep their military capabilities a secret, so as not to alert or alarm real or potential enemies, it has now become common practice to reveal such capabilities, in order to spread fear and foster caution. A typical example of this can be found in advertisements that regularly appear in The Economist. These are, presumably, read by elites and militaries the world over. Northrop Grumman tells the reader about “information warfare . . . the ability to exploit, deceive and disrupt adversary information systems while simultaneously protecting our own. Example: EA-6B Prowler” (emphasis in original). Continues the advertisement: In the future, conflicts will be resolved with information as well as hardware.
Northrop Grumman has the capability to create and integrate advanced Information Warfare technologies, such as electronic countermeasures and sensors. Northrop Grumman. Systems integration, defense electronics, military aircraft, precision weapons, commercial and military aerostructures. The right technologies. Right now. Accompanying the text is a shadow of an “EA-6B Prowler” superimposed over an unidentified landscape of land and water. The message? “(Y)our bomb here.” With sophisticated computer graphics lending verisimilitude to the scene, the weapons are once again used without ever being fired. The epistemological flaw in disciplinary warfare is that there is no here there. For the most part, disciplinary warfare is conducted against imagined enemies, with imaginary capabilities and the assumed worst of intentions. As pointed out earlier, where these enemies might choose to issue a challenge, or why, is not at all evident (while their failure to issue a challenge is likely to be interpreted as “disciplinary deterrence works!”). The only apparent reasoning is that we have what they want and they are going to try to get it. Projection is a weak reed on which to base policy or procurement. In retrospect, the Central Command can be seen as the United States’ first effort at conventional disciplinary deterrence. As I suggested above, disciplinary deterrence is a fairly recent innovation; prior to the 1950s, wars between countries were fought to retake lost territory or acquire new land. The advent of nuclear weapons, and their potential to destroy that which they were meant to protect, made large-scale mass warfare highly risky, if not obsolete (Mueller, 1989). But, even smaller-scale wars, such as that in Vietnam or Afghanistan, came to be less about territory or restoration of a status quo ante than about the imposition of a particular moral order on the local parties.
The apotheosis of the disciplinary approach to war may have taken place in 1991, therefore, when the American military machine, in concert with the other members of the “coalition,” ousted the Iraqi army from Kuwait without eliminating the regime that had violated the rules (a feat repeated in the Balkans in 1999). The Bush administration argued that the risk of Iraq’s fragmenting was too great to engage in social engineering within that country, and that a collapse of the regional “balance” could set off a land rush by predatory neighbors. Perhaps. More to the point, ever since that time, Iraq has existed in a state of limited sovereignty, as a zone of discipline and domination that the United States holds in semilegal bondage. The creation of “no-fly zones” in Iraq’s northern and southern regions, the constant surveillance of the country by satellites and spy planes, the regular (attempted) inspections of its industrial facilities by UN representatives, the repeated vetoes on the restoration of even limited trade privileges, and the periodic bombings have all reduced Iraq to a region over which the United States exercises a suzerainty that extends even to domestic affairs. But Iraq also fulfills a demonstration function, illustrating to other rogues and adventurers their fate should they get out of line. Even Iraq’s resistance plays into this game. Each time UN inspectors are prevented from going about their work, the United States begins, once again, to threaten punishment and war.11 This helps to explain, for example, the odd events of September 1996, when fighting took place in the Kurdish region in the north, while the United States loosed its cruise missiles on Iraqi radar stations in the south (it also explains the largely ineffective four-day bombing of December 1998). The U.S.
response is best understood not as retaliation for the Iraqi “invasion” of the Kurdish zone, but rather as a punishment inflicted by the international equivalent of a high-school vice principal, intended not merely to hurt Saddam Hussein but also to issue a warning to others who might think of stepping out of line. “We can do this with impunity,” the White House might have been saying, “You can run, but you cannot hide.” The demonstration has become more important than the effect. The loss of such opportunities would constitute a severe blow to U.S. policy (such as it is).

W(h)ither War?

World War III has come and gone. Some of us didn’t even notice. Yet its implements are still with us. Indeed, inasmuch as their production is essential to the economies of many countries, they continue to proliferate at an accelerating rate. What is the purpose of such weapons, if not to wage shooting wars? War is costly in terms of lives lost and capital destroyed, but peace has its own costs in terms of politics and power. Interstate wars will come and go, no doubt, but not nearly as often as some might believe. Still, the true believers can argue that the absence of such conflicts after the Gulf War of 1991 is proof positive that deterrence “works” and that the electronic battlefield, even when restricted to computer and TV screens, will help to “keep the peace.” But postmodern war is not about the borders between states or even imaginary civilizations; as I proposed in chapter 3, it is about those difficult-to-see boundaries between and among individuals and groups. Who draws these lines? Who makes them significant? If they cannot be mapped, how can they be controlled? And what about the wars wracking so many far-away places? As we shall see in subsequent chapters, war as a disciplinary exercise is not limited to the international realm or those living in “failed states”; it includes in its application those at home, too.
5 ❖ MARKETS, THE STATE, AND WAR

According to a study published in 1995 by the World Bank, the “Wars of the next century will be over water” (The Economist 1995b).1 Perhaps. Such warnings are not new. By now, the invocation of “water wars” is a commonplace, as a search of any bibliographic database will attest.2 The Bank, however, conveys special authority with its pronouncements, both because of its international standing and because of its long involvement in the planning and development of “water resources management.” Not only does the Bank rely on “experts” who are presumed to know everything there is to know about water and its use; as a central icon of the global economic system and a major funder of large-scale water supply systems, it must also be listened to, especially by those who may feel themselves short of water. But the Bank’s analysis leaves a number of questions unanswered or, at least, unsatisfactorily addressed. For instance, who will fight over water? According to The Economist (1995b), wherein appeared an article on the study, the Bank’s experts, and most other water scholars, believe that the Middle East is the likeliest crucible for future water wars. A long-term settlement between Israel and its neighbors will depend at least as much on fair allocation of water as of land. Egypt fears appropriation of the Nile’s waters . . . by upstream Sudan and Ethiopia. Iraq and Syria watch and wait as Turkey builds dams in the headwaters of the Euphrates. It is clear that the combatants will be states. But why would states fight over water? On this point, the Bank’s reasoning is less clear.
On the one hand, it is taken for granted that water is scarce in absolute terms, and that people and states (according to conventional economic and political analysis) naturally come into conflict over scarce resources.3 On the other hand, geography (or, to be more precise, Nature) has not seen fit to have rivers, drainages, and mountains remain constrained within the confines of national boundaries. Indeed, rivers act as excellent borders between countries because they are such prominent geographic features and are difficult to cross—although it is true that they have a tendency to wander back and forth, now and again. Nonetheless, the combination of geopolitical and neoclassical logics leads to the conclusion that, if resources are essential, scarce, and “in the wrong place,” states that lack them will go to war with states that have them.4 QED. What, then, is the solution offered by the Bank? Markets in water. But here emerges a paradox. First, we are warned of potential struggles over Nature’s scarcity and the possibility of war between sovereign entities. Then, quite suddenly, we are transported from the “State of Nature” to the nature of markets. In Nature, people fight and often come out losers; in markets, they bargain according to self-interest and come out winners. Thus, according to the Bank’s Vice-President for the Environment, the avoidance of water wars is to be found in what he calls “rational water management”—that is, in the transmogrification of economics from a doctrine of absolute scarcity and consequent conflict arising from the maldistribution of state sovereignty over resources to one of relative scarcity and exchange of resources—in this instance, money and water—between sovereign consumers in peaceful markets. How is this amazing transformation of interstate relations to be accomplished?
Quite simply: through the “appropriate pricing” of water at its “true” marginal cost—although it is seldom noted that the “true” marginal cost of water to people defending the national patrimony may be incalculable. This move, argues the Bank, will lead to the assumption of water’s “proper place as an economically valued and traded commodity” which, in turn, will result in efficient and sustainable use through technologies of conservation. As the author of the Economist article puts it (with no sense of irony whatsoever), “the time is coming when water must be treated as a valuable resource, like oil, not a free one like air.” Not, perhaps, an ideal parallel—especially insofar as the Persian Gulf War was more about the political impacts of oil prices than absolute supply (Lipschutz, 1992a, 1992b)—but the point is well taken: it is probably better to truck and barter in natural resources than it is to fight for them—if these are the only choices available. The Bank’s program for peace is based, of course, on a neoliberal economic framework; indeed, recalling the injunction of Franklin Roosevelt’s secretary of state, Cordell Hull, one might say “if water does not cross borders, soldiers will.” Trade is offered here as the solution to imagined wars, as a way to prevent conflicts that threaten but have not yet (and might never) occur.5 Yet, is it not conceivable that prognostications that predict water wars could drive the contending parties to another form of market-based exchange—in weapons—thereby heightening tension among them and bringing the prophecy to fulfillment? Or is it true that discourses do not kill people; people kill people? In this chapter, I examine the prospect of “resource wars,” a topic often framed as “environment and security” or “conflict and the environment” (see, e.g., Gleditsch, 1997).
I first present an exegesis of the “nature of sovereignty and the sovereignty of nature,” with particular reference to geopolitical discourses of sovereignty, scarcity, and security offered from the late nineteenth to the late twentieth century. I begin with a brief examination of the ideas of the classical geopolitical scholars—Mahan, Mackinder, Spykman, Gray—and the ways in which they sought to naturalize the relationship between geography and state power in order to legitimate efforts to redress scarcity through military means. I then turn to a discussion of sovereignty and property. I argue that sovereignty is best understood as a mode of exclusion, as a way to draw boundaries and establish rights of property against those who would transgress against the sovereign state. Paradoxically, perhaps, although sovereignty is at the core of the state system, its exclusivity sets up the very dilemmas of control and scarcity that geopolitics finds so problematic and conducive to struggle. The solution to this geopolitical dilemma was (and is) reliance on yet another naturalizing discourse, that of the market. Markets require the uneven distribution of resources and goods in order to function properly. Indeed, Malthus may have been “right,” as one environmentalist bumper sticker claims, but scarcity, as we define it today, is a necessary condition for markets and property rights to function. For better or worse, however, neomercantilist power politics stands in the way of free exchange. What then to do? Following World War II, the diffusion of “embedded liberalism” by the United States throughout the world helped to disseminate a new geopolitics that has, more recently, come to rely on the concept of interdependence in order to maintain the fiction, if not the fact, of political sovereignty.
In recent years, as a result, we have seen the emergence of discussions of, on the one hand, “limits to growth” and “sustainability” and, on the other, “environment and security,” both the direct descendants of earlier geopolitical discourses. As such, these discourses assume or attempt to reinstate sovereign boundaries where, perhaps, none should exist.

Geopolitics and “Natural Selection”

Scholars of the “science” of geopolitics believed that national autonomy and control were to be valued above all, and that to rely on the goodwill of others, or the “proper” functioning of international markets, was to court national disaster. Besides, territories could not be bought and sold; as parts of integral nation-states, they might be wrested or stolen in battle, but they were not for sale at any price.6 Classical geopolitics was a product of its time, the Age of Imperialism and Social Darwinism, not the more-contemporary Ages of Liberalism and Ecology (which are, nevertheless, related ideologies). It is no coincidence that the best-known progenitors of geopolitics were citizens of those Great Powers—Britain, Germany, the United States—who sought to legitimate international expansion and control through naturalized ideological covers. Classical geopolitics regarded the power, prosperity, and prospects of a state as fixed by geography and determined by inherent geographical features that could not be changed.7 As Nicholas J. Spykman (1942:41) put it: Power is in the last instance the ability to wage successful war, and in geography lies the clues to the problems of military and political strategy. The territory of a state is the base from which it operates in time of war and the strategic position which it occupies during the temporary armistice called peace. . . . Ministers come and ministers go, even dictators die, but mountain ranges stand unperturbed.
To give some of these scholars their due, not all treated geography as so fully binding on state autonomy and action. Halford Mackinder (1919/1962), an Englishman, was initially less of a geopolitical determinist than the American Spykman (1942; 1944). But World War II hardened the views of both, inasmuch as Germany’s efforts to expand appeared to vindicate Mackinder’s dictum about “heartland” and “rimland” powers (a dialectic later picked up by Colin Gray).8 Following World War II, a more vulgar geopolitical determinism came to dominate much realist theorizing as well as foreign policy analysis (Lipschutz, 1989; Dalby, 1990), rooted in no small degree in the sentiments of George Kennan’s “Long Telegram” (Gaddis, 1982). Such determinism became a routine part of every document to emerge from U.S. councils of strategy and counsels of war. A not untypical example can be found in NSC 94, “The Position of the United States with Respect to the Philippines”: From the viewpoint of the USSR, the Philippine Islands could be the key to Soviet control of the Far East inasmuch as Soviet domination of these islands would, in all probability, be followed by the rapid disintegration of the entire structure of anti-Communist defenses in Southeast Asia and their offshore island chain, including Japan. Therefore, the situation in the Philippines cannot be viewed as a local problem, since Soviet domination over these islands would endanger the United States’ military position in the Western Pacific and the Far East (cited in Lipschutz, 1989:103). More recently, Colin Gray (1988:15), quite possibly the last of the classical geopoliticians, argued that because it is rooted in geopolitical soil, the character of a country’s national security policy—as contrasted with the strategy and means of implementation—tends to show great continuity over time, although there can be an apparently cyclical pattern of change.
If Gray’s claim is correct, the facts of national fate are written in Nature. At the risk of national (or natural) disaster, there cannot be, and must not be, any struggle against such facts. Gray simply takes boundaries—in this case, those between the United States and the Soviet Union that, three years later, would cease to exist—as given and as “natural” as the “geopolitical soil” in which they are drawn. (In a subsequent book, Gray naturalizes culture as a product of geography in order to warn that the United States, as a maritime power, must remain on guard against Russia, a “heartland” power; see Gray, 1990.)9 The Age of Imperialism was also the age of Social Darwinism, as noted earlier, rooted in Charles Darwin’s ideas about natural selection, but extended from individual organisms as members of species to states. John Agnew and Stuart Corbridge (1995:57) argue that naturalized geopolitics [from 1875 to 1945] had the following principal characteristics: a world divided into imperial and colonized peoples, states with “biological needs” for territory/resources and outlets for enterprise, a “closed” world in which one state’s political-economic success was at another’s expense . . . , and a world of fixed geographical attributes and environmental conditions that had predictable effects on a state’s global status. According to German philosophers, states could be seen as “natural” organisms that passed through specific stages of life. As a result, younger, more energetic states would succeed older, geriatric ones on the world stage. In order not to succumb prematurely to this cycle of Nature, therefore, states must continually seek advantage over others.10 As Simon Dalby (1990:35) puts it: [S]tates were conceptualized in terms of organic entities with quasi-biological functioning. This was tied into Darwinian ideas of struggle producing progress.
Thus, expansion was likened to growth and territorial expansion was ipso facto a good thing. British and American geopoliticians held a somewhat different perspective, seeing progress tied to “mastery of the physical world” through science and technological innovation (for more recent invocations of this idea, see Simon, 1981, 1996; Homer-Dixon, 1995). But Nature was still heavily determining: [B]efore the First World War, the current European geopolitical vision linked the success of European civilization to a combination of temperate climate and access to the sea. Temperate climate encouraged the inhabitants to struggle to overcome adversity without totally exhausting their energies, hence allowing progress and innovation to lead to social development. Access to the sea encouraged exploration, expansion and trade, and led ultimately to the conquest of the rest of the world. (Dalby, 1990:35) Both perspectives—organic and innovative—helped to legitimate imperial expansion, colonialism, and conquest. The “life cycle” argument demanded adequate access to the material resources and space necessary to maintain national vitality—hence the German demand for colonies and, later, Lebensraum. The “struggle to survive” required both overseas outposts and physical position to command the vital geographic features that would provide natural advantage to those who held them—hence, British garrisons from Gibraltar to Hong Kong, the Canal Zone and Philippines under American suzerainty, French control of North Africa. Geopolitics was a “science” well-suited to the neomercantilism and gold standard of the late nineteenth and early twentieth centuries.
One hundred years of industrialization in Europe had provided the impetus to policies of state development as well as territorial unification under the rubric of “nation.” Each nation was the autochthonous offspring of the land where it lived—which created problems for those nations, such as the Germans, who were scattered throughout Central and Eastern Europe. Thus the national territory was not only sacred but “natural.” Only those within the natural borders of the nation-state could be mobilized to serve it and only those who were naturalized—that is, loyal to the nation-state—could be relied on to support it. It is no accident that borders, so fluid during the age of sovereigns, became rigid, with passports required, during the age of sovereignty. The ethnic cleansings and population transfers of the twentieth century illustrate this point. Woodrow Wilson’s doctrine of national self-determination helped to further this process by extending the mythic principle of organic nation to all those who could establish a recognized claim to such status. Where competing claims arose within specific territories, the strong tried to assimilate, eliminate, or expel the weaker (see chapter 6). We continue to observe this process in many places around the world, even as markets have rendered borders porous and control over them problematic (see, e.g., Strange, 1996). Indeed, as I have suggested in earlier chapters, the “culture wars” that have spread through a number of industrialized countries over the past decade are as much about restoring the mental borders of the nation as they are about expelling those with alien ideas and identities (Lipschutz, 1998b). Passports can be falsified; “true” beliefs cannot.
The fundamental erosion of state sovereignty by markets is nowhere seen more clearly than in the realm of genetics, where the geopolitical discourse of states has been transformed into a “geopolitics of the body.” The invocation of Nature to demonstrate the superiority of human groups is not a new phenomenon, and can be traced at least as far back as the ancient Greeks. During the past one hundred years, Darwin’s theories of evolution were used to legitimate the genetic superiority of some races and nations over others. The latter form of naturalization was greatly delegitimated as a result of its application by the enthusiasts of eugenics, but it has reemerged in a somewhat different form over the past twenty years or so in efforts to link IQ to academic and financial success (e.g., Herrnstein and Murray, 1994). This new genre of naturalization—genetic determinism—has developed as both science and ideology, and its parallels to older geopolitical and organic theories of nation and nationalism are worth noting. The contemporary scientific basis for genetic determinism is found in the various research efforts that seek to understand the basis for various congenital diseases and inherited characteristics, culminating in the Human Genome Project (Wingerson, 1991). The ideological manifestation, however, reflects a virtually pure version of liberal methodological individualism in its framing, to wit, an individual’s potential is almost wholly inherent in her/his genetic inheritance. Twin and sibling studies seem to suggest that society and environment are at best minor contributors to that potential, with the result that, in effect, one is already of the “elect” at birth (so one would do well to be careful in choosing one’s parents; Dahlem Workshop, 1993; but see also Harris, 1998). As is true with geography and the state, an individual’s “natural” inheritance is critical to that person’s development.
But the ways in which this particular (and not terribly innovative) insight is being used politically are rather alarming.11 In particular, genetic determinism is helping to reinstantiate a vulgar Hobbesian-genetic “war of all against all,” in which the individual has no one to blame but herself for anything that might befall her in the marketplace of life. Inasmuch as the state has been banished from this realm (except as a declining source of research funds), there is no one to turn to for protection against predation by others with superior genetic endowments or sufficient cash (Hanley, 1996). Another version of this ideology extrapolates natural inheritance back to race and ethnicity, arguing that society has no responsibility to redress historical inequities inasmuch as these are largely genetic in origin. Again, it is sink or swim in the genetic marketplace. In this world of hyperliberal Nature, as a result, a new form of sovereignty accrues to the individual. Here, control is exercised by those with good genes—which are scarce—or those who have the wealth to acquire them via the purchase of new medical techniques. Because in the marketplace wealth is power, money is also the key to preventing oneself from being contaminated by “bad” genes carried by the poor, the ill, the defective, or the alien. Such quality is transmitted, of course, to one’s offspring. As with classical geopolitics, the naturalized discourse of genetics follows the dominant ideology of the day and, in some of its more extreme expressions, involves an almost complete transfer of the “natural rights” associated with sovereignty from the state to the individual. The result is that sovereignty as an attribute of the state is disintegrating in both material and ideational terms.

Sovereignty, Property, Interdependence

What, exactly, is “sovereignty”?
Although the term continues to be the focus of vociferous controversy (Biersteker and Weber, 1996; Litfin, 1998)—especially as it appears to many to be “eroding”—here I follow Nicholas Onuf’s (1989) lead and conceptualize it as a property of liberalism. Onuf cites C. B. Macpherson’s description in this regard:

The individual is free inasmuch as he is the proprietor of his person and capacities. The human essence is freedom from dependence on the will of others, and freedom is a function of possession. Society has a lot of free individuals related to each other as proprietors of their own capacities and of what they have acquired by their exercise. (Macpherson, 1962:2, quoted in Onuf, 1989:165)

Onuf (1989:166) points out that “States are granted just those properties that liberalism grants to individuals,” among which are real estate, or property (this is easier to understand if we recall that, for the original sovereigns of the seventeenth century, states were property; see Elias, 1994). In a liberal system, individuals holding property are entitled to use it in any fashion except that which is deemed harmful to the interests or welfare of the community (Libecap, 1989; Ruggie, 1993). Indeed, this is precisely the wording of Principle 21 of the Stockholm Declaration: states have the right to exploit their own resources so long as this does not impact on the sovereignty of other states by constituting an illegal intrusion into the jurisdictional space of other states.12 What this implies, therefore, is that not only is sovereignty over property important; so also are the boundaries constituting property. Inside the boundaries of property, the state, like the individual landowner, is free from “dependence on the will of others”; outside the boundaries, it is not. That, at least, is the theory. Practice is quite different.
The individual property owner finds her sovereignty not only hedged about with restrictions but also subject to frequent intrusion by the wills of others. Indeed, the state has the prerogative of violating the sovereignty of individual private property in any number of settings and ways. These can range from investigations into the commission of crimes on, in, or through the use of specific personal or real property, to the creation of public rights-of-way for highways, pipelines, and communication cables, to the taking of property in the greater social “interest”—subject, of course, to just compensation (markets are involved only so far as setting the “value” of the property is concerned). In these situations, an owner of affected property has little recourse except to courts (or rebellion). Such is the power of law. The state, by contrast, has freed itself from such legal niceties through the fictions of international “anarchy” and “self-help,” which comprise the essential elements of the doctrine we call realism. This permits the state to physically resist violations of its property, on the one hand, while declaring a national “interest” in violating the property of (usually) weaker states, on the other. Realism and national interests legitimate a state’s right to transgress boundaries, notwithstanding the Stockholm Declaration and other international laws of a similar bent. For reasons that are beyond the scope of this chapter, egregious physical violations of territorial property and sovereignty are increasingly frowned on (Jackson, 1990). This has not, however, led to a diminution in violations of sovereignty; it simply means that such violations are legitimated under other names or processes (Inayatullah, 1996).
Recall, for example, that the distribution of resources among states is uneven, a condition often blamed on Nature and geography, with the result that one state finds itself needing to acquire such resources through interaction with another. This state of affairs is sometimes characterized as ecological interdependence, a situation whereby state borders, characterized as “natural” under sovereignty and anarchy, fail to correspond to those of physical and biological nature (Lipschutz and Conca, 1993). It is the tension between the sovereignty of Territory and the sovereignty of Nature that sets up the basis for problems such as “water wars” in the first place. Below, I will examine the concept of ecological interdependence more closely; here, I only point out that, while it is often taken to describe a physical phenomenon—the existence of ecological phenomena or ecosystems extending across national borders—the term may actually serve to obscure relations of domination and subordination between the states in question.13 As the Nazi finance minister, Count von Krosigk, put it in 1935:

If we fail to obtain through larger exports the larger imports of foreign raw materials required for our greatly increased domestic employment, then two courses only are open to us, increased home production or the demand for a share in districts from which we can get our raw materials ourselves. (Quoted in Royal Institute of International Affairs, 1936)

Inasmuch as the rights of property inherent in state sovereignty reify the possession or control of a resource, neighboring states who may find the resource “scarce” for lack of access to or control of it also find themselves in a condition of relative powerlessness with respect to it. Their only recourse in such a situation is to physically capture the resource or to purchase property rights to it and thereby “legally” come to control it.
Scarcity and the “Limits to Growth”

There is no need to recount in detail the genealogy of “scarcity” as a concept (even Thucydides mentions it; see 1954:10; also Dalby, 1995); suffice it to say that it is central to the theory and practice of neoclassical liberal economics. Here, I want to consider the relationship between scarcity and boundaries or, rather, between the conditions that differentiate absolute from relative scarcity, and the politics associated with both. Historically, people responded to a lack of food and water in a specific location, due to drought or war, by moving. But, while such conditions might lead to localized conflict or social instability, it is less clear that such deficiencies were causal factors in organized violence or war. The starving are rarely strong enough to wage war, and soldiers and guerrillas cannot fight if they are weak from hunger (as evidenced by reports concerning the distribution of scarce food supplies in famine-ridden North Korea). Revolutions and rebellions are led by those who are better off and, we can presume, lacking neither food nor water in absolute terms. Continuing the argument made above, therefore, relative scarcity can be seen as a product of control, of ownership, of property, of sovereignty, of markets. Economists tell us that absolute scarcity does not—indeed, cannot—occur if and when markets are operating properly, and that all scarcity is relative. Thus, in an “efficient” market, free of political intervention, when the supply of some good runs low, its price will rise and people will seek less expensive substitutes.14 Doomsayers, such as the Reverend Malthus and the Professors Meadows and Ehrlich, have thus been attacked for ignoring the rules of supply and demand (Simon, 1996). But if we insert boundaries into our equation, it turns out that the doomsayers do have something germane to say. Malthus (1803) was a prophet of absolute scarcity.
As is well known, he argued that geometric population growth would eventually outstrip the arithmetic growth of agricultural production. This would result in circumstances whereby food would run short in absolute terms, leading to widespread starvation and death. His analysis has been—and continues to be—criticized for not taking into account either basic economics or technological innovation, but these criticisms are not very fair. As a cleric, Malthus was undoubtedly more interested in distribution than in markets or capital and, from a strictly ecological perspective, he was right: when food runs short, populations crash.15 But it is less than clear that, for human societies, such crashes can be described as “natural.” Most animal populations have recourse neither to markets nor to the means of moving food from one place to another. They can move or change foods, of course, but if all neighboring niches are occupied and alternative foods are being consumed by others, the game may well be up.16 A similar notion of absolute scarcity was promulgated more than a century and a half later by Dennis and Donella Meadows and their colleagues (Meadows et al., 1972; Meadows, Meadows, and Randers, 1992) at MIT. They concluded that, given then-current trends in nonrenewable resource production, reserves, and consumption, and barring unforeseen circumstances or discoveries, the world would run short of various critical materials sometime during the twenty-first century. Meadows and his colleagues were also harshly attacked for ignoring the same factors as was Malthus. To the satisfaction of many, they were soon “proved wrong” by events. Even today, economists still take pleasure in pointing this out but, again, there was a sense in which the Meadowsists were not really interested in ecology, economics, or innovation, either.
What both crude Malthusianism and more sophisticated Meadowsism disregarded was the matter of the distribution of resources—that is, for whom would food and minerals be scarce? And why would such scarcity matter? Certainly, it is by no means clear that the depletion of global cobalt supplies would matter as much to Chinese peasants as to Cambridge academics. For this error, in any event, Meadows and his colleagues should be forgiven; economists tend to dismiss the same point, too, regarding distribution as a problem outside of their realm of concern and one that, in any event, can be addressed by economic growth. Their supply and demand curves do no more than illustrate the premise that, if scarcity drives prices too high relative to demand, markets will be out of equilibrium and no one will buy. Eventually, sellers will have to lower their prices, and buyers will be able to eat again. The reality is slightly more complicated, inasmuch as even properly functioning markets can foster maldistribution and relative scarcity. As Jean Dreze and Amartya Sen (1989; see also Sen, 1994) have pointed out, not everyone starves during a famine—indeed, food is often quite plentiful. What crude market analyses don’t take into account is that, even at market equilibrium, there may be those for whom prices are still too high. Those who have money can afford to buy; those who do not, starve. Scarcity is only relative in this instance, but some people (and countries) do go hungry. In other words, relative scarcity is also a condition of boundaries, in this instance political, cultural, or social ones. In some instances, these lines are found between the physical personas of individuals: I am of one caste (class, ethnie, religion); you are of another. My money and food are mine (or of my group), not yours (or your group’s). In other cases, the lines are drawn between countries: this land (and water) is ours, not yours.
At both extremes, the money, food, land, water, and whatever else must be kept inside that boundary in order to maintain individual and collective integrity, identity, and sovereignty—that is:

If I give you my money, so that you can buy food, I will have less and will not be able to live the way to which I am accustomed. This will lessen me. In doing this, I will also acknowledge a relationship with you that infringes on me and even acknowledges my obligations to you. If I do this, then I will not be who I have been because I will have yielded some of my autonomy to you. Moreover, because you have no money, you cannot buy from me; and because I have as much as I need, I don’t have to buy from you. Hence, I can remain sovereign and strong.

Or:

If we give you our water, so that you can grow food, we will have less and will not be able to live the way in which we have been accustomed. Then we will not be who we have been because we will have yielded some of our sovereignty to you. This will make us weaker.

In other words, the resources must remain sovereign property. The transfer of anything across a boundary—whether physical, political, social, cultural, or economic—serves to acknowledge the existence of a legitimate Other and thereby lessens sovereignty—that is, control and autonomy, whether individual or collective—in an absolute sense. It also creates social relations between and among actors that have little or nothing to do with markets or anarchy (but see Wendt, 1992). This is to say that sovereignty, whether individual or national, is about exclusion, autonomy, and keeping the Other out, both physically and mentally. It is also why uneven distribution is so central to international politics: it helps to perpetuate the hierarchy of power that, notwithstanding the acrobatics of neorealists, is central to its practice.
That was the purpose of the princes’ agreement at Westphalia; that is the point of the reification of methodological individualism today. Inside my boundary, I/we can act as we wish; outside of it, I/we cannot. By redrawing or, in some circumstances, abolishing lines, we could change this premise, but that would mean sharing what we have with others and having less for ourselves. It would change us. This is why autarchy—the abolition of relative scarcity through a redrawing of lines of control—was long the dream of realists and policymakers. But autarchy is economically costly and markets are politically costly. Interdependence, whether economic, ecological, or military, is one way of finessing this problem without redrawing boundaries, yielding sovereignty, or changing identity.

Embedded Liberalism and Interdependence

As I noted in chapter 3, after 1945 the United States found it useful to create the “Free World” in order to propagate economic liberalism beyond national markets. But creating such an imagined community was a difficult proposition. To institutionalize liberalism throughout the American sphere of influence required that states yield some portion of their individual sovereignty in two directions, both in the name of democracy and prosperity: to the systemic level, in the name of defense and free markets, and to the individual level, in the name of human rights and exchange in the market. The first move constrained states from asserting their autonomy too strongly and defecting from the Free World, by offering them increased wealth and the threat of helplessness if they tried to defect. The second strengthened the bonds of interest between the Free World, as an emergent natural community, and sovereign individuals by offering the same.
Conditions within this Free World economic area did not constitute “interdependence” as that concept was eventually articulated; rather, they were a manifestation of the Gramscian hegemony of the United States (Augelli and Murphy, 1988; Gill, 1993). The geopolitical discourse developed during the 1970s and 1980s to explain the conditions of the 1950s and 1960s invoked “hegemonic stability theory” to explain why this condition was good and right (Kindleberger, 1973; Gilpin, 1981; Keohane, 1984; Kennedy, 1988). It is helpful to jump ahead of our story for a moment to consider the “double hermeneutic” of hegemony.17 Originally, the concept was formulated as a term of socialist opprobrium, used primarily by the People’s Republic of China against both the United States and the USSR, but during the period between about 1975 and 1988, which corresponded to the period of generalized worries about American decline (Kennedy, 1988; Nye, 1990), hegemony was naturalized and given positive attributes. A “hegemon” now became a state destined for power and dominance as a result of the “natural” cycles of history and global political economy, one that took on the burdens of global economic and political management under anarchy (a description that, quite logically, fit the United States; see, e.g., Goldstein, 1988). The hegemon did this through the establishment of international regimes (Krasner, 1983) and, in so doing, served not only its own interests, first and foremost, but also those of allied countries, who were nonetheless free-riding on the hegemon. Under the skillful hands of non-Marxist scholars of international political economy (Gilpin, 1981), domination was thereby transmuted into a kind of benevolent stewardship—although some of these same scholars worried about what might happen to this particular world order “After Hegemony” (Keohane, 1984). The free riders, however, failed to appreciate the benefits and their good fortune in acquiescing to U.S.
“leadership” (Strange, 1983) and, Americans argued, their reluctance to share the burden played a major role in the political disorder of the 1970s and the renewed Soviet threat of the 1980s. Ungrateful wretches! The United States—the champion and protector of naturally free men [sic] and markets—had only the best interests of its allies in mind when it manipulated, disciplined, or coerced them. “Interdependence theory,” most closely associated with Robert O. Keohane and Joseph S. Nye Jr., was an academic product of the times (the 1970s) and its politics (the energy crisis), rather than an “objective” description or model of “reality.” On the one hand, interdependence theory tried to account for what seemed to be the end of American hegemony over the Free World, after the dollar devaluations, the end of dollar convertibility to gold, and the oil embargo of the early 1970s. On the other hand, it sought to justify certain policies and actions that might otherwise be politically unpopular at home and abroad. But, whereas hegemony theory derived in large part from realism, interdependence theory was liberal in origin (although, as Robert Keohane (1984) has demonstrated time and again, the two are perfectly compatible). Indeed, in their 1977 book Keohane and Nye proposed that “Interdependence in world politics refers to situations characterized by reciprocal effects among countries or actors in different countries” (1977/1989:8). What did they mean by “reciprocal effects”? Writing toward the end of the 1970s, in the aftermath of the first run-up in oil prices, they clearly had the distribution problem in mind. The United States no longer owned sufficient oil under its private sovereignty, within its national boundaries, at an acceptable price; others owned too much and were offering it at too high a price.
No one paid attention to the reciprocal effects on others of too much oil at too low a price, a condition that led originally to the establishment of OPEC in the early 1960s and, some decades later, to the Gulf War of 1990–91 (Lipschutz, 1989:129),18 and that has since resulted in the disappearance of several of the Seven Sisters (the major oil companies). For the United States, the “reciprocal effect” was primarily a problem of domestic politics, the threat of a disgruntled electorate forced to queue and pay twice or thrice the accustomed price for a gallon of gas. A few hot-headed analysts and policymakers proposed taking the oil back, inasmuch as it was “ours.” Cooler heads prevailed, but the United States was no longer the same nation it had been prior to 1973. Keohane and Nye further distinguished between sensitivity and vulnerability. If you are sensitive, they said, you feel the pain of an action by another but you can recover—that is, through adaptation you can eliminate the temporary infringement on your autonomy, sovereignty, and identity imposed by others. Then it is back to business as usual. If you are vulnerable, however, you feel the pain but cannot recover—your autonomy and sovereignty have been breached for good and you have been changed. You are now constrained within a relationship with the Other that, as much as you might dislike it, has served to establish a new element of your identity in terms of the Other (Wendt, 1992; Mercer, 1995). To be sure, such a merging of identities can exacerbate the sense of those differences that remain, a phenomenon most evident in the clash between religious fundamentalisms and secular humanisms. But the two identities have become mutually constitutive, not oppositional; as with an old married couple, neither can maintain her/his identity unbound from the other. By contrast, interdependence connotes transactions across boundaries and, consequently, some degree of separateness.
To speak, then, of interdependence is to make an effort to eliminate the breaches in boundaries between oneself and others—through markets, if at all possible—rather than to adapt to this new condition of mutual constitution.19 Whatever the merits of the distinction made by Keohane and Nye, however, both sensitivity and vulnerability posit impacts across borders, infringements on autonomy, and reductions in sovereignty. To eliminate either type of intrusion, adjustments must occur inside the boundaries—in the realm of domestic politics—so that the boundaries can be restored. Hence the contradiction: interdependence is acceptable if it allows us to maintain the boundaries and our national sovereignty; unacceptable if it does not. At best, this is a word game; at worst, a form of false consciousness that serves to perpetuate domination. Or, as Edward Said has written:

[T]his universal practice of designating in one’s mind a familiar space which is “ours” and an unfamiliar space beyond “ours” which is theirs is a way of making geographical distinctions that can be entirely arbitrary. (Said, 1979:54, quoted in Dalby, 1990:20)

In this manner lines are drawn around “natural” communities—that is, ones united by characteristics or culture whose origins are, supposedly, lost in history—alienated from one another by virtue of these differences. Keohane and Nye (1977/1989:7) acknowledged (but deplored) the ideological content of the concept of interdependence when they wrote:

Political leaders often use interdependence rhetoric to portray interdependence as a natural necessity, as a fact to which policy (and domestic interest groups) must adjust, rather than as a situation partially created by policy itself. . . . For those who wish the United States to retain world leadership, interdependence has become part of the new rhetoric, to be used against both economic nationalism at home and assertive challenges abroad.
(Emphasis added) Paradoxically, perhaps, the rhetoric and theory of interdependence were also intended to reinforce boundaries without sealing them off as those at home and abroad might wish. The unit of analysis remained the sovereign state; the impacts posited by Keohane and Nye impinged on states rather than individuals, classes, or other groupings; the responses would be taken by policymakers in the “national interest.” Consequently, political policies in response to these effects took the form of state-led actions. Thus, for example, the oil embargo of 1973 and subsequent price hikes were presented as being directed against countries and their citizens, portrayed as homogeneous entities, even though there were differential effects within and across the target states and on people (see chapter 3). Politicians fulminated about OPEC taking “our oil” and infringing on “our sovereignty,” even though the oil was “owned” by multinational corporations and stockholders, based in the industrialized states, who profited handsomely as prices rose. And state-led policies to redress these conditions—President Nixon’s ill-fated Project Independence and President Carter’s Synfuels Corporation, both the subjects of fierce domestic attack for their intervention into highly profitable oil markets—were presented as schemes to reduce Americans’ reliance not on petroleum, per se, but on the petroleum that the United States did not control. For other countries, the rhetoric of interdependence did not signal the equalization of power relations among allies so much as a U.S. effort to rationalize anew its “natural” leadership of the Free World. This coincided—not by accident—with the first stirrings of renewed Cold War, marked by the rise of the second Committee on the Present Danger (Sanders, 1983), the collapse of détente, and the political and academic reification of hegemonic stability theory.
The erosion of boundaries around the world led to the effort to reinforce them in Central Europe, through the emplacement of Euromissiles (see chapter 4), and to similar activities elsewhere. By the end of the 1970s, the discourse of economic interdependence had dissolved, to be replaced by reassertions of sovereignty and autonomy under Jimmy Carter and Ronald Reagan, and the legitimation of their policies through the newly created discourse of hegemony. You cannot, however, fool Mother Nature.

Limits to Sustainability?

The Stockholm Declaration was quite explicit about the environmental rights and responsibilities of states. States were sovereign entities where resources and environment were concerned, and they were expressly forbidden to engage in activities that negatively affected the sovereignty—the property rights—of other states. What, in practice, did this mean? First, states possessed the absolute right to do whatever they wished with the natural resources located within their boundaries. Second, states were absolutely enjoined from doing anything with their resources that would somehow affect the sovereignty of any other state. Third, because the environment was not subject to these imagined boundaries, states were admonished to protect it within the conditions implied by the first two principles. In other words, the Stockholm Declaration reiterated the absolute impermeability of the boundaries of states as a condition of protecting the environment. While there were (and are) any number of contradictions embedded in these principles, three stand out. First, in spite of long-standing evidence that Nature “respects no borders,” the agreements signed at Stockholm in 1972, at Rio in 1992, and elsewhere during the intervening twenty years and since continue to reify the state as the sole appropriate agent of control, management, and development where environment is concerned.
Second, the Stockholm Declaration grants to the state or its international agents the absolute right to discipline nonstate actors when degradation of environment and natural resources is involved—rather than the other way around—even to the point of appropriation through coercion (Peluso, 1992, 1993). That this might lead to further degradation, rather than protection, as well as to ever greater concentrations of power in the hands of those who encourage degradation, has never been immediately obvious and certainly has never been an argument that diplomats and policymakers have wanted to hear. And third, it has the effect of reinforcing the natural separation of political units rather than fostering the mutual and respectful relationships between them that might better serve the goal of environmental protection.20 The reification of borders and sovereignty as “natural” has thus had two difficult-to-reconcile consequences where environmental protection is concerned. On the one hand, as is often recognized, it fragments jurisdictions that might better be treated as single units. Thus, sulfur dioxide emissions from power plants in the midwestern United States are a domestic regulatory problem when they rain out in the Northeast, but an international transboundary problem when they rain out a few miles farther on, in Canada. On the other hand, in order to address such transboundary issues, this reification mandates “cooperation” among states, who prefer to protect their individual economic prerogatives. Under these conditions, cooperation is described as “difficult” and “unnatural,” because it goes against the grain of so-called anarchy (which is the term used to naturalize antagonism) and requires that states yield up a measure of their domestic sovereignty in order to address the consequences of the infringements on sovereignty enjoined by the Stockholm Declaration and periodically reiterated since then.
Both the rhetorics of “ecological interdependence” and “sustainable development” can thus be interpreted as responses to the ontological difficulties posed by Stockholm. More to the point, they can both be understood as direct descendants of the geopolitical discourses of the nineteenth and twentieth centuries. In 1987, the Brundtland Commission (also known as the World Commission on Environment and Development, or WCED) proposed that “The Earth is one but the world is not,” and argued that the “world in which human activities and their effects were neatly compartmentalized within nations” had begun to disappear (WCED, 1987:4).21 The commission further suggested “that the distribution of power and influence within society [sic; which society is not made clear] lies at the heart of most environmental and development challenges” (1987:38). In other words, according to the commission, even though the sources of the problems of environment and development were to be found at both supra- and subnational levels, sustainability would nonetheless depend on states acting within and across boundaries, even when it was not evidently in their interest, ability, or willingness to do so. The continued resistance of the United States to actually controlling its emissions of greenhouse gases is testimony to the faintness of this particular hope. A more fundamental difficulty, however, has to do with the question of who decides what is sustainable. From a crude ecological perspective, sustainability is frequently defined as a rough equivalence between inputs and outputs. This, it is widely assumed, is equivalent to conditions in the state of Nature, which are naturally “balanced.”22 To exceed this balanced condition for too long, or by too much, will run up against natural limits and lead to the outstripping of supply by demand. From an economic (and nineteenth-century organic) perspective, however, growth is “natural” and “good” (see chapter 7).
Consequently, sustainability rests on the continued accumulation of capital—and technology—that can be used to substitute for depleted resources or to rise above such limits. In its famous definition of the “solution,” the Brundtland Commission (WCED, 1987:8) finessed this problem by including both conceptions and ignoring the contradictions between them. The naturalization of biospheric limits, whatever they might be, is balanced in this definition by the naturalization of both technology and social organization that know no limits. These, in turn, are to be discovered by recourse to markets that will—how is never specified—find a balance between needs and growth (WCED, 1987:44). The second response to the ontological problem of boundaries and sovereignty—the conflation of statist realism and methodological individualism—is no more helpful. States fight over natural resources such as water because, as Peter Gleick (1994:8) has put it, “of its scarcity, the extent to which the supply is shared by more than one region or state, the relative power of the basin states, and the ease of access to alternative freshwater sources.” In other words, the “cause” of water wars is distribution, not supply, per se. The prevention of water war lies in “formal political agreements” that will address the distribution problem, through the allocation of property rights to water and the appropriate pricing of water in markets among the apparently “rational” users (Gleick, 1994). Voilà! Those naturalized factors that lead to interstate competition and war in the first place are subsumed by processes of economic competition and exchange in naturalized markets. These, it is presumed, will produce more efficient and, therefore, more peaceful outcomes. As I suggested earlier, however, the problem of unequal distribution will not go away; it will simply be shifted to those who lack the power to make trouble.
Sustainability will thus come to be defined not by the justice of distribution but by the judgement of markets. Ecological interdependence will fall before wealth rather than force of arms, as the rich disempower the poor. The boundaries will be naturalized once again and sovereignty restored to its rightful place in the hierarchy of Nature.

What Does It Mean to Be “Natural” Where States Are Concerned?

In the search for the causes of social conflict, both peaceful and violent, there is often the temptation to look for the things that can be counted, rather than the things that really count.23 It is easier to calculate per capita availability of water, or the welfare requirements of immigrants, than to change the social relations and hierarchies of power and domination that characterize states and societies. It is easier to invent rhetorics and discourses that, somehow, obfuscate and naturalize the inequities inherent in these relationships than to point out and act on the notion that, although things might be as they are, they do not have to remain that way. And, it is always easier to ascribe the causes of unpleasant conditions or events to a mysterious and deterministic history or Nature than it is to unravel the complications of a political economy that spans the globe while reaching into every nook and cranny where there are human beings. Markets, the state, and war are not “natural,” and to believe or act otherwise is to affirm the status quo as the best of all achievable worlds. All three are human institutions and, as such, are constructed and mutable. This is why water wars and water markets can be so easily juxtaposed in the language and reasoning of liberalism and neoclassical economics, even though creating open, transborder markets in water will not necessarily lead to “water peace.” After the dust settles, it will probably be more “efficient” for Palestine to sell its water to Tel Aviv than use it for West Bank agriculture.
Water will then flow across borders, becoming scarce on one side and plentiful on the other, thanks to control by markets rather than the military. How the people of Hebron and Nablus will feel about that remains to be seen.

6 ❖ THE SOCIAL CONTRACTION

Since the end of the Cold War, culture and identity have become prominent explanatory variables in international politics (see, e.g., Lapid and Kratochwil, 1996). Among the proponents of this notion are Benjamin Barber (1995), Robert Kaplan (1996), and Francis Fukuyama (1995a, 1995b). The best known, perhaps, is Samuel P. Huntington (1993; 1996), with his “clash of civilizations.” He (1996:19, 20) argues that the years after the Cold War witnessed the beginnings of dramatic changes in peoples’ identities and the symbols of those identities. Global politics began to be reconfigured along cultural lines. . . . In the post–Cold War world flags count and so do other symbols of cultural identity, including crosses, crescents, and even head coverings, because culture counts, and cultural identity is what is most meaningful to most people. In this and other recent works, both culture and identity have been invoked in essentialist terms, as factors that are as invariant as the earth on which they stand. States once came into conflict over raw materials (or so it is said; see Lipschutz, 1989; Westing, 1986); today they are liable to go to war over unfinished idea(l)s. Straits, peninsulas, and archipelagos were once the objects of military conquest; today religious sanctuaries, languages, and national mythologies are the subjects of occupation and de(con)struction. The result appears to be a new type of geopolitics, one that invokes not the physical landforms occupied by states but the mental platforms occupied by ethnies, religions, and nations.
In line with such geoculturalism, most of the forty or so wars that have erupted since 1989 have been characterized, both analytically and in the popular press, as “ethnic” or “sectarian,” oriented largely around conflict between cultures. Ordinarily, the origins of such wars are pictured as too ancient and arcane for the citizens of the modern world to understand, inasmuch as the “irrational” cultural factors deemed to be their cause have changed very little over time and no longer make any sense. The most that can be done, according to such logic, is to let them “burn out.” For the most part, however, such categorizations serve more to expel combatants from “history” than to explain what is going on, why such wars are taking place, or what might be done to stop or prevent them. More to the point, not only are essentialist cultural explanations unhelpful, they are wrong. So-called ethnic and sectarian conflicts are artifacts of changes within states driven, to no small degree, by forces associated with recent social transformations linked to global integration and external pressures for economic liberalization. Moreover, the fragmentation afflicting “weak” states, such as those in the Balkans, Central Asia, and Africa (Kaplan, 1996), is only the very visible tip of an iceberg that includes even those “strong” countries that are so prominent in the new global economy, including the United States (Rupert, 1997; Crawford and Lipschutz, 1998). As I suggested in chapter 2, political fragmentation and integration are part and parcel of the same process; they are part of a dialectic whose overall consequences cannot, as yet, be foreseen. I begin this chapter with a discussion of standard explanations of recent civil wars and conflicts, and offer a somewhat different account of why such wars have broken out.
I then take up the matter of state survival in an era of economic competition, liberalization, and deregulation, discussing the ways in which these economic processes foster both integration and fragmentation, and how this dialectic plays itself out in many places, threatening, perhaps, a proliferation of statelike entities. Next, I ask how many states are enough? Whereas, from the perspective of already existing states, there are too many to permit the creation of more, from the vantage of proto-states there are too few states, and reopening the books is an absolute necessity. The key question is, of course, too many or too few for what? I suggest here that, perhaps, there are too many states from the security perspective, and too few from the economic and cultural perspectives. Finally, I address the structural difficulties posed by those rules and norms of the international state system governing the establishment of new states. As posed in chapter 3, those rules are premised on exclusivity—indeed, the state system requires exclusivity if it is to operate as an anarchy—and, short of a return to a full-service welfare state that can overcome the centrifugal forces inherent in the pursuit of individual self-interest, we are faced here with an insoluble situation in which units of exclusivity will only get smaller and smaller until they can no longer divide.

Is There Ethnic Conflict?

For our purposes, it is possible to identify five general “theories” of ethnicity that have something to say about the conflicts that result “when ethnies collide.”1 The first suggests that ethnicity is biological. Proponents of this view argue that ethnic tensions are, somehow, “natural.” Observes one scholar, “people reflexively grasp at ethnic or national identifications or what passes for them” (Rule, 1992:519).
An alternative formulation, which falls back on sociobiology, argues that “the urge to define and reject the other goes back to our remotest human ancestors, and indeed beyond them to our animal predecessors” (Lewis, 1992:48). Another view, enunciated some years ago by then-secretary of state Warren Christopher, reiterated by Huntington (1993; 1996), and invoked by President Clinton (1999) to explain the bombing of Yugoslavia in 1999, cites “long histories” and primordiality, accounting for the emergence of ethnic politics and violence by invoking “centuries” of accumulated hatreds among primordial “nations.” Such hatreds, goes the argument, have exploded into war as a consequence of the end of the Cold War and the disappearance of the repressive mechanisms that kept them from boiling over for four decades. Indeed, as can be seen in the cases of Croatia and Serbia, Kosovo, South Asia, and other places, such invocations, akin to a form of historical materialism, serve to “naturalize” ethnic consciousness and conflict almost as much as do genetic and biological theories. Inasmuch as we cannot change historical consciousness, according to this view, we must allow it to work its logic out to the bitter end. A third perspective, most closely associated with Benedict Anderson (1991), but elaborated by others, is the idea of the imagined community. This view suggests that ethnicity and ethnic consciousness are social constructions best understood as the “intellectual projects” of a bourgeois intelligentsia. These projects arise when elites, using new modes of communication, seek to establish what Ernest Gellner (1983) has called a “high culture” that is distinctive from other, already existing ones (see also Mann, 1993). Such individuals are, not infrequently, to be found in the peripheral regions of empires or states, excluded from the ruling apparatus by reason of birth or class.
Because they are highly educated, peripheral intellectuals may be offered opportunities to assimilate into the ruling class, but to reach the top levels, they must renounce completely all of their natal culture. At the same time, these elites are also often aware of the cultural and political possibilities of an identity distinct from that of the center, in which they can play a formative role. Ethnicity, from this view, is cultural, and not inherently violent. But violence may develop when two ethnies, such as Jews and Palestinians, claim the same territory. A fourth perspective is the defensive one (Lake and Rothchild, 1998). Here, the logics of the state and state system begin to come into play. Historically, states have been defined largely in terms of the territory they occupy and the resources and populations they control. Hence, the state must impose clearly defined borders between itself and other states. To do this, the state must plausibly demonstrate that other states and groups pose a physical and ideological threat to its specific emergent “nation.” Herein, then, lies the logic for the politicization of group identity, or the emergence of “ethnicity” and “ethnic conflict”: self-defense.2 The last view is instrumental: Ethnicity is the result of projects meant to capture state power and control. But such a project is not, as we shall see, totally ahistorical, as rational-choice theory might lead us to believe. Rather, it is a response to the logics of the state system and globalization, drawing on historical and cultural elements already present (and sometimes free floating) within societies, and invoking “threats” (usually imagined) posed by other real or emergent ethnicities as a reason for its own formative and offense-oriented activities (Ra’anan, Mesner, Armes, and Martin, 1991). One might ask why such antagonisms are necessary; wouldn’t communal autonomy suffice?
Efforts to provide national/cultural autonomy to ethnic and religious groups were tried in the Ottoman and Hapsburg empires, but failed ultimately because they did not provide to these groups the power accorded to the dominant identity group in those empires and their subunits (Gagnon, 1995). Once the European state system became well-established and spread, only through a “state of one’s own” was it possible and desirable to acquire such power and position (Lipschutz, 1998a). All of these theories focus on cultural difference as the source of conflict and violence, and this view reaches its apotheosis in the work of Huntington (1996:21), who argues that people define themselves in terms of ancestry, religion, language, history, values, customs and institutions. They identify with cultural groups: tribes, ethnic groups, religious communities, nations, and, at the broadest level, civilizations. People use politics not just to advance their interests but also to define their identity. We know who we are only when we know who we are not and often only when we know whom we are against. (Emphasis added) In Huntington’s schema, culture, identity, and what he calls “civilization” are defined not through associational values, but in terms of enemies and what people are not. This is problematic for two reasons. First, there are many cases of cultures coexisting peacefully for extended periods of time. Second, Huntington assumes cultures not only to be stable but also static. While anthropologists continue to have serious disagreements about the definition of culture, we can define it here as the combination of social factors—norms, rules, laws, beliefs, relationships necessary to the reproduction of a society—with material factors that help produce subsistence and foster accumulation. 
Cultural functionalism is largely out of style in anthropology because it seeks to explain all features of a culture and its development by their specific purpose in production and reproduction. But such functionalism lies at the core of the geocultural perspective, whose advocates see ethnicity and culture as roughly equivalent to other countries’ supplies of raw materials and military technology. Huntington’s elements of culture fit this schema, although he regards those elements as basic and functionally common to all societies, rather than contextual and contingent. In addition to objecting to such deterministic functionalism, the anthropologist would also point out that cultures are neither static nor stagnant, and that major changes in internal and external environments are likely to disrupt an apparently stable society and make it into something else (as I argued with respect to security in chapter 3). Huntington, however, seems to believe that cultures and civilizations, like continents and oceans, are immutable and forever (see Gray, 1990). The parallels between classical geopolitics and geocultural politics have been noted in a number of places (see, e.g., Tuathail, 1997). As I suggested in chapter 5, the classical geopolitics of Mahan, Mackinder, and Spykman operated as a discourse of power and surveillance, a means of imposing a hegemonic order on an unruly world politics. Cold War geopolitics divided the world into West and East, good and evil, with perpetual contestation over the “shatter zones” of the Third World (adrift in some cartographic purgatory of nonalignment). Today, these neat geographic boundaries can no longer be drawn between states and across continents, and the shatter zones are to be found within countries as well as consciousnesses.
Yet, Huntington’s book (1996:26–27) offers tidily drawn maps whose geocultural borders, with a few exceptions, follow modern boundaries between states (a few oddities do show up hither and thither: an outpost of “Hindu civilization” in Guyana; Hong Kong remains “Western” in spite of its then forthcoming reunification with China; Circumpolar Civilization is entirely missing). There is yet another paradox here: In spite of its very material objectives (and, we must assume, substructure), geoculture, according to Huntington’s conceptualization, seems to lack any material basis. To be sure, geoculture is connected to great swaths of physical territory, “civilizations” much larger than the states that occupy those spaces, but neither geoculture nor these civilizations have any evident material or even institutional existence. The Islamic umma imagined by some and feared by others is much larger than the states it encompasses but, between Morocco and Malaysia, it is riddled by sectarian, political, economic, and social as well as cultural differences, even down to the local level. Geoculture shows no such variegation. People simply identify with those symbols—“crosses, crescents, and even head coverings”—that “tell them” who they are. Culture and identity, twinned together, thereby come to operate as a sort of proto-ideology, almost a form of “false consciousness.” And because ideologies are, of necessity, mutually exclusive, they must also be unremittingly hostile to one another. The inevitable conclusion is the “clash” predicted in Huntington’s title, and the replacement of the Cold War order with a new set of implacable enemies driven by an incomprehensible (and “irrational”) system of beliefs. The imputation of such explanatory power to geoculture is not only theoretically invalid, it is also empirically incorrect.
Most of the violent conflicts underway around the world today are domestic and involve often-similar ethnic, religious, or class-based groups, struggling to impose their specific version of order on their specific societies. Such social conflicts do appear to be contests for hearts, minds, and bodies, and combatants seem to feel no remorse in eliminating those whom they cannot convert—indeed, conversion is rarely an option. While most observers and policymakers view these wars as manifestations of chaos that must be “managed” (as detailed in Crocker and Hampton, 1996), it is perhaps more illuminating to see them as very much a product of contemporary (or even “postmodern”) times (Luke, 1995). What passes for culture in these wars is, at best, an instrumental tool for grabbing power and wealth. Postmodern social warfare has thus been mistakenly characterized as war between “cultures.” Huntington (1996) goes so far as to use the carnage in Bosnia as an archetype for his predictions of “clashes between civilizational cultures,” pointing to the “fault line” between Western and Orthodox Christianity as one of the “flash points for crisis and bloodshed.” Yet the tectonic metaphor is flawed. Just as earthquake faults are often notable for their invisibility prior to an event, such cultural fractures in Bosnia were, according to most reports, hardly apparent prior to 1990 (Gagnon, 1995). Moreover, except for periodic and usually infrequent tremors, faults tend to be very quiet. Drawing on Freud’s notion of the “narcissism of small differences,” it appears that culture wars are more likely to erupt between those whose ascriptive differences are, initially, minor but that can be magnified into matters of life and death and then reified so as to seem eternal and immutable (Lipschutz, 1998b). Still, how else are people to decide who is deserving of good? How else can one account for success and failure?
And what else is one to do when success and failure seem to contradict the rules and expectations of one’s experience? As one scholar of the “ideology of success and failure” in Western societies puts it, Society is considered to be “in order” and justice is considered “to be done” when those individuals, in general, attain success who “deserve” it, in accordance with the existing norms. If this does not happen, then people feel that “there is no justice” or that something is basically wrong. (Ichheiser, 1949:60, quoted in Farr, 1987:204) At the extreme, rationalization of such displacement may take one of two forms: self-blame or scapegoating. Self-blame is more common in the United States, given the high emphasis placed on individualism and entrepreneurism, but self-blame can also generate anger that is externalized onto scapegoats. Who or what is attacked—other countries, minorities, immigrants, or particular economic or political interests—depends on how the causes of displacement are explained and understood, and which narratives carry the greatest logical weight (Hajer, 1993). Academic models that explain job loss by comparative advantage and other such theories are, generally speaking, unintelligible to all but trained economists. Putting the blame on specific individuals or groups is much easier and has “the function of replacing incomprehensible phenomena by comprehensible ones by equating their origins with the intentions of certain persons” (Groh, 1987:19). Nevertheless, what ultimately happens in a specific country or place is permitted, but not determined, by overarching macrolevel structures. Violence is not inevitable. Such structures function, rather, by imposing certain demands and constraints on domestic possibilities. In such situations, people are offered the opportunity to make meaningful choices about their future, choices that do not involve constrained identities (Todarova, 1998).
The problem is that political and economic changes of virtually any type usually cut against the grain of prior stratification and corporatism. From the perspective of those who have benefited from such arrangements, any change is to be opposed.

If Not “Culture,” What Then?

The problem with the views and theories offered above is that, taken individually, each is incomplete. To be sure, ethnicity, religion, and culture have played prominent roles in Bosnia, Rwanda, Sri Lanka, Kashmir, Nigeria, Algeria, Georgia, Angola, Kosovo, Chechnya, and so on, but they are better understood as contingent factors, rather than either fundamental triggers of intrastate wars or ends in themselves. Each theory provides some element of the whole, but none, taken alone, is sufficient. Moreover, each assumes that the phenomenon we call “ethnicity” or “sectarianism” is, necessarily, the same today as it was 50, 200, or even 1,000 years ago. But the systems within which these phenomena and wars have emerged in recent years have not been static and, to the extent that systemic conditions impose changing demands and constraints on domestic political configurations, today’s “ethnicity” must be different from even that of 1950. But how? As indicated by a growing body of research (Crawford and Lipschutz, 1998), we must look beyond the five arguments to account for the implosion of existing states and the drive to establish new ones out of the pieces of the old. The causes of recent and ongoing episodes of social conflict are obviously correlated with the end of the Cold War,3 but they have been fueled in no small part by large-scale processes of economic and political change set in train long before 1989.
Specifically, as I argued in chapter 2, changes in the international “division of labor,” economic globalization, and the resulting pressures on countries to alter their domestic economic and political policies in order to more fully participate in the “community of nations”—all processes that began during the Cold War—have had deleterious effects on the relative stability of countries long after its end.4 As I noted in earlier chapters, Barry Buzan (1991) has argued that the state is composed of three elements: a material base, an administrative system, and an idea. He suggests that the “idea” of the state is equivalent in some way to nationalism, although he does not examine closely the role of the state itself, or its elites, in creating and sustaining this idea. What has become more evident in recent years is that nationalism (or “patriotism,” as it is called in the United States) is only the public face of a very complex citizen/civil society/state relationship. In industrialized, democratic countries, flag waving, anthem singing, and oath taking are public rituals that visibly unite the polity with the state. Such rituals can extend even to sports and similar activities, as evidenced by the nationalist hoopla that surrounds the supposedly internationalist Olympic Games. There is, however, more to this relationship than just ritual; there is a substructure, both material and cognitive, that might be called a “social contract.”5 This is an implicit understanding of the quid pro quos or entitlements provided to the citizen in return for her loyalty to the idea, institutions, and practices of the state (see chapter 8). All relatively stable nation-states are characterized by political and social arrangements that have some form of historical legitimacy.
The idea of the “social contract” is, conventionally, ascribed to Rousseau (1968) and Locke (1988), who argued that the state is the result of what amounts to a contractual agreement among people to yield up certain “natural” rights and freedoms in exchange for political stability and protection. Locke went so far as to argue that no state was legitimate that did not rule with the “consent of the governed,” a notion that retains its currency in the contemporary Washington consensus for “democratic enlargement” (Clinton, 1997; Mansfield and Snyder, 1995, offer a more skeptical view of this proposition). Rousseau’s theory of the origin of the state owed much to the notion of consent, as well, although he recognized that some sovereigns ruled through contempt, rather than consent, of the governed. Both philosophers also acknowledged the importance of material life to the maintenance of the social contract. My use of the term here is somewhat different, in that it does not assume a necessarily formalized expression of the social contract. Sometimes, these contracts are codified in written constitutions; at other times, they are not inscribed anywhere, but are found instead in the political and social institutions of a country (as in the United Kingdom or Israel). In either case, a social contract structures the terms of individual citizenship and inclusion in a country’s political community, the rules of political participation, the political relationship between the central state and its various regions, and the distribution of material resources within the country and to various individuals. Social contracts also tend to specify the roles that people may occupy within the country and society, and the relationships between these roles. Quite often, these social contracts are neither just, equitable, nor fair.
They are nevertheless widely accepted, and people tend not to dispute them actively, if only because such opposition can also affect their own material position and safety. The social contract is, therefore, a constitutive source of social and political stability within countries, and its erosion or destruction can become the trigger for conflict and war. I do not claim that these social contracts are necessarily respectful of human rights or economically efficient; only that, as historical constructs, they possess a certain degree of legitimacy and authority that allows societies to reproduce themselves in a fairly peaceful manner, over extended periods of time.6 Within the frameworks established by such social contracts, we often find stratified hierarchies, with dominators and dominated, powerful and powerless. Frequently, these roles and relationships have what we would call an “ethnic” or “religious” character as, for example, in the traditional caste system in India, or the “ethnic divisions of labor” once found throughout the lands of the former Ottoman Empire, institutionalized in the millet system, and still present throughout the Caucasus and Central Asia (as well as in some American cities; see Derlugian, 1998). Historically, these hierarchies have tended to change only rather slowly, on a generational scale, unless exposed to sudden and unexpected pressures such as war, invasion, famine, economic collapse, and so on. What is crucial is that these arrangements help to legitimate, in a Gramscian sense, the political framework within which a society exists, thereby reinforcing the citizen/civil society/state relationship. External threats to the nation and its inhabitants—whether real or imagined—can help to consolidate these social contracts as well as to facilitate changes deemed necessary for the continued reproduction of state and society. Threats make it possible to mobilize the citizenry in support of some national “interests” as opposed to others. 
Threats also help to legitimate domestic welfare policies and interventions that might, under other circumstances, be politically controversial and disruptive. The introduction into societies of radical changes that take effect over shorter time spans can, however, destabilize, delegitimize, and dissolve long-standing, authoritative, and authoritarian structures and relationships very quickly, as in Central and Eastern Europe in 1989, Yugoslavia in 1991, and Indonesia in 1998. A transition to market organization or democracy represents one such change; the collapse of a kleptocracy, another. The former provides, for example, an economic environment within which some individuals and groups can, quite quickly, become enriched, while others find themselves being impoverished, as in the case of post-Communist Russia. But even where markets and democracy are long established, as in the United States, economic liberalization, certain forms of deregulation (and reregulation), and hyperliberalism can also have the effect of undermining social stability and generating political dissatisfaction and alienation. These kinds of changes disrupt the rule-governed basis of people’s behavior and expectations, sending them in search of new rules, old rules, or no rules (Lipschutz, 1998a, 1998b). The past beckons at those times when change is pervasive; the present becomes illegitimate, nostalgia replaces reasoned discourse, politics becomes venal and, sometimes, violent. To make the point once again: it is not that external pressures are wholly to blame; rather, the political and social changes required of countries whose leaders and elites, both old and new, wish to participate more fully in the changing global economy tend to destabilize the “social contracts” and make them vulnerable to particular types of political mobilization and violence.
As Georgi Derlugian (1995:2) has put it, the causes of conflicts usually labeled “ethnic” are to be found in the prevailing processes in a state’s environment, that may be only tenuously divided into “external”—the interstate system and the world economy—and “internal” which, according to Charles Tilly, shapes the state’s structure and its relation to the subject population and determines who are the major actors within a particular polity, as well as how they approach political struggle. But the consequent dynamics are almost wholly internal. Serge Moscovici (1987:154) has argued that everyone knows what constitutes the notion of conspiracy. Conspiracy implies that members of a confession, party, or ethnicity . . . are united by an indissoluble bond. The object of such an alliance is to foment upheaval in society, pervert societal values, aggravate crises, promote defeat, and so on. The conspiracy mentality divides people into two classes. One class is pure, the other impure. These classes are not only distinct, but antagonistic. They are polar opposites: everything social, national, and so forth, versus what is antisocial or antinational, as the case may be. And Dieter Groh (1987:1) points out that human beings are continually getting into situations wherein they can no longer understand the world around them. Something happens to them that they feel they did not deserve. Their suffering is described as an injustice, a wrong, an evil, bad luck, a catastrophe. Because they themselves live correctly, act in an upright, just manner, go to the right church, belong to a superior culture, they feel that this suffering is undeserved. In the search for a reason why such evil things happen to them, they soon come upon another group, an opponent group to which they then attribute certain characteristics: This group obviously causes them to suffer by effecting dark, evil, and secretly worked out plans against them.
Thus the world around them is no longer as it should be. It becomes more and more an illusion, a semblance, while at the same time the evil that has occurred, or is occurring and is becoming more and more essential, takes place behind reality. Their world becomes unhinged, is turned upside down. [sic] In order to prevent damage to or destruction of their own group (religion, culture, nation, race) they must drive out, render harmless, or even destroy those—called “conspirators”—carrying out their evil plans in secret. That such conspiracies are bizarre, imagined, or socially constructed hardly matters if and when shooting starts. Bullets do kill.

Political Entrepreneurs and Social Contraction

Faced with pressures and processes that mandate change in domestic arrangements, both those who would lose status and those who would grasp it tend to see power in absolute and exclusionary terms. In order to limit the distribution of potential benefits, and to mobilize political constituencies in support of their efforts, such people often fall back on social/cultural identities that do incorporate ethnic, religious, and class elements. Rapid social, economic, and political changes create new opportunity structures for those who are in a position to take advantage of them.7 These “political entrepreneurs” are usually well-educated members of the professional classes or intelligentsia. As David Laitin (1985:302) puts it, they know how to provide “selective incentives” to particular individuals to join in the group effort. Communal groups will politicize when there is an entrepreneur who (perhaps instinctively) understands the constraints to organization of rational individual behavior. In other words, a political entrepreneur is one who is able to articulate, in a coherent and plausible fashion, the structure of opportunities and constraints that face a specified group of people as well as the potential costs of not acting collectively.
Such appeals have been especially persuasive in “times of trouble,” when societies are faced with high degrees of uncertainty, and particular groups within societies see their economic and social prospects under challenge. It is under these conditions that we find domestic differences emerging and developing into full-blown social conflict and warfare. To put the argument more prosaically, in social settings that are “underdetermined”—where rules and institutions have broken down or are being changed—opportunities often exist for acquiring both power and wealth. There are material benefits to social solidarity. Kinship can function as a form of social capital, establishing relations of trust even where they have not existed previously (Fukuyama, 1995a, 1995b). The political mobilization of ethnic, religious, and cultural identities is one means of taking advantage of such opportunities. Consequently, people do not grasp “reflexively” for their essential ethnic identity when political power and authority crumble. Instead, exclusive and oppositional identities, based on ethnic, religious, and class elements whose meaning is never too clear, are politically constructed and made virulent as those in power, or those who would grasp power, try to mobilize populations in support of their struggles with other elites for political power, social status, and economic resources (Laitin, 1985; Brass, 1976; Crawford and Lipschutz, 1998). As René Lemarchand (1994:77) has written in his insightful study of conflict and violence in Burundi, The crystallization of group identities is not a random occurrence; it is traceable to specific strategies, pursued by ethnic entrepreneurs centrally concerned with the mobilization of group loyalties on behalf of collective interests defined in terms of kinship, region or ethnicity. . . .
Clearly, one cannot overestimate the part played by individual actors in defining the nature of the threats posed to their respective communities, framing strategies designed to counter such threats, rallying support for their cause, bringing pressure to bear on key decision makers, and, in short, politicizing ethnoregional identities. And, he (1994:77) continues, The essential point to note is the centrality of the state both as an instrument of group domination and as an arena where segments of the dominant group compete among themselves to gain maximum control over patronage resources. So from this perspective the state, far from being a mere abstraction, emerges as a cluster of individual contestants and cliques actively involved in the struggle for control over the party, the army, the government, the civil service, and parastatal organizations. . . . Access to the state thus becomes a source of potential rewards for some groups and deprivations for others. (Emphasis added) Of course, political settings are never quite this simple. Many of the societies where political entrepreneurs are, or have been, at work are already characterized by class and social differences that parallel ethnic ones (Derlugian, 1998). The exacerbation of these differences, through an appeal to chauvinistic ideologies of identity, becomes a means for these elites to extract or negotiate for more economic resources, status, and power within a “state of their own.” In this fashion, political entrepreneurs are able to transform “ethnic” identities into tools of political mobilization and opposition. The collapse of Yugoslavia falls into this pattern,8 but it is apparent in any number of countries afflicted by ethnic conflict. Indeed, there is reason to think that even democratic capitalist countries could fall victim to this process (Lipschutz, 1998b).
There is nothing particularly new or novel about these arguments, or about the impacts of international economic change on the domestic politics of countries at different levels of development; Alexander Gerschenkron (1962) wrote about this in the early 1960s (see also Crawford, 1995). What is different now is that the processes of economic liberalization and integration, thought so important to national competitiveness and growth, have, on the one hand, undermined critical responsibilities of the state even as they have, on the other hand, created a whole set of demands for “new” states or comparable political entities. In a very real sense, however, this explanation of the sources of ethnic conflict does not account for the ways in which opportunities for social and political mobilization come into being; rather, it takes for granted that political communities can, and do, implode. What creates the necessary, if not sufficient, conditions for implosion is less clear.

Breaking Up Is Not So Hard to Do

The difficulty in explaining so-called ethnic and cultural violence may arise because of (1) our inability to see any kind of political formation other than the state; (2) a continuing ontological commitment to and epistemological fascination with the state and state system; and (3) our reluctance to see certain contradictions that inhere in both. Not only are states considered to be the “highest” form of political organization in existence today—at least, as scholars of international relations argue—they are also signifiers of legitimate power that, as Lemarchand notes above, bring wealth and status to individuals simply by virtue of the capacity to occupy dominant roles within them. Two consequences follow. First, in the contemporary world, legitimate representation can arise only through a state; no other form of political status or autonomy quite fits the bill (see chapter 8).
Second, as indicated above, control of a state also provides access to a considerable flow of wealth and power, via rents that can be extracted from domestic constituencies and international sources of finance. Not everyone takes advantage of political power for these “corrupt” ends— we like to think that Western democracies are, in particular, immune from such corruption9—but those who do do this in an overt fashion are more likely to find themselves ruling a potentially unstable country. There is a tendency among analysts and policymakers, moreover, to take for granted the fundamental ontological reality of the state and its ability to exercise meaningful control over what goes on within and across its borders (Mearsheimer, 1994), although the growing reach of the global economic system puts paid to this fantasy, even if not in the way that is commonly believed (Strange, 1996). Interdependence theory is, by now, almost a cliché, reaching a preposterous extreme in Kenichi Ohmae’s (1991) “borderless” world of nearly 6 billion atomized consumers. But realists continue to argue that states remain states and could, if they wished, reassert their hegemony over transnational economic, social, and cultural processes (Thomson, 1995). Interdependendistas, conversely, speak sorrowfully of the “erosion” of sovereignty, as though the material base of the state is carried away through slowly growing ravines, while leaving the mountains largely intact. Such theories cannot explain, however, how and why states might fall apart into smaller units, inasmuch as they largely ignore two critical factors. First, as Buzan’s conceptualization indicates, the state is a cognitive as well as a material construct, and it relies heavily on citizen loyalty for legitimacy, authority, and continuity.
Second, in the absence of mechanisms for reinforcing loyalty, such as nationalism, the introduction of markets can exacerbate rather than eliminate already existing social and cultural schisms even as it further undermines the basis of loyalty to the state (as indicated by the collapse of the Soviet Union; Crawford, 1995). Conversely, as seen in the People’s Republic of China, nationalism can become a powerful tool for maintaining loyalty to the state when markets are eroding older bases of state legitimacy (Wehrfritz, 1997). To repeat: for historical reasons and as discussed above, societies and states are usually organized along lines that tend to privilege some groups over others. Such privileges often have much to do with the national, ethnic, or even regional constitution of state and society. Many states have struggled to mute or eliminate such hierarchies, with varying degrees of success and failure, through juridical relief, internal resource transfers, and administrative fiat. Under appropriate conditions, markets can be efficient allocators of investment. They are, however, largely indifferent to national and ethnic distributions of power and wealth, except insofar as they delineate specific niches for production, services, and sales (Reiff, 1991; Elliott, 1997). And because markets do require rule structures to operate, those who can establish the rules are often able to do so to their individual or collective advantage. Moreover, as markets and economies are liberalized and opened to greater competition from abroad, conditions also favor those who have begun the new game with greater factor endowments. As any investor knows, you have to have money to make money. Left to its own devices, therefore, the market provides greater and more remunerative opportunities to those who are already well-off, and leaves farther behind those who are less wealthy and begin with fewer initial advantages.
Beyond this, as noted in earlier chapters, integration and fragmentation are linked consequences of the further globalization of capitalism, rather than independent phenomena as is sometimes assumed. The origins of global economic integration are to be found in the mid-nineteenth century, with the rise of English liberalism and the doctrine of free trade as propagated by the Manchester School (some argue that it began even earlier, in the sixteenth century). With fits, starts, and retreats, such integration has reached into more and more places in the world, creating myriad webs of material and cognitive linkages. The fact that such integration has become so widespread does not mean that all places in the world share in the resulting benefits (nor does it necessarily imply a fading away of the state). Indeed, it is uneven development, and the resulting disparities in growth and wealth, that make capitalism so dynamic. And, as I noted in chapter 2, it is the constant search for new combinations of factors of production and organization, and not states, that drives innovation, competition, and the rise and fall of regions and locales.10 The fact that there are multiple economic “systems” present in any one location simply adds to the dynamism of the process.11 Today’s comparative advantage may become tomorrow’s competitive drag. The larger political implications of this process have not been given much thought. Comparative advantage is no longer a feature of states as a whole—it never really has been, in any event—but, rather, of region and locale, where the combination of the material, technological, and intellectual is, perhaps, only briefly fortuitous (Noponen, Graham, and Markusen, 1993; Smith, 1989). The specific advantages of a place such as Silicon Valley—in many ways, a historical accident arising as much from the war in the Pacific as the result of deliberate policy12—may have only limited spillover in terms of a country as a whole.
The specific conditions that give rise to such development poles, moreover, seem not to be so easily reproduced wherever there is land available for a “science park.”13 Holders of capital can choose locations in which to invest. Cities, communities, places—and to a certain degree, labor—control a much more limited set of factors through which they can attract capital. Because the supply of capital is seen as limited (and probably is), competition among places to attract investment and jobs becomes more of a zero-sum game than the positive sum one argued by advocates of comparative advantage. For a country as a whole, where wealth is produced is thought to be immaterial; for towns and cities, it can be a matter of life or death. This point is not lost, for example, on those American states and cities that have established foreign trade offices and regularly send trade missions abroad (Shuman, 1992, 1994). Nor have the business opportunities arising from such competition been ignored; according to one article in the San Francisco Examiner (Trager, 1995) describing the activities of a consulting firm providing city and regional marketing programs for economic development, its activities resemble those of an international arms dealer—selling weapons to one ruler and then making a pitch to the neighboring potentate based on the new threat. Part of the pitch for these [economic development] programs is that a region needs its own program to survive against the rival programs of other areas. This could become the cause of considerable political antagonism against the neighbors who win and the authorities who are deemed responsible for the loss. 
As discussed above, how these particular dynamics play themselves out depends on the history and political economy of the specific state and society under consideration and preexisting social and political “differences” that, under the pressures of real, potential, or imagined competition, become triggers of antagonism. The critical point is that the disjunctures between past and future, and between places and regions within countries, can have politically destructive consequences for the state, because they can also delegitimate the cognitive and ideological basis for loyalty to state and society. As pointed out in chapter 2, the notion that individual self-interest can serve the social welfare is only valid under rather narrow conditions. Much of the time (and to a growing degree), the newly wealthy see no reason to contribute to state and society. The newly poor and those with declining prospects see that the state cares less and less about them. Both groups become alienated from state and society, although the former retreat into private enclaves while the latter seek to restore the status quo ante. Both moves contribute to the fragmentation and dissolution of the public political sphere. The result is that, with the global economic integration that reaches into more and more corners of the world, we find ourselves faced with dialectically linked integration and fragmentation that can play itself out in a number of different ways (see, e.g., Sakamoto, 1994). In the United States, for a number of historical reasons, potential divisions are geographical as well as class- and ethnicity-based. While it is difficult to envision the secession of individual states, not a few parts of the country have been abandoned by the rest as a result of integration and competition (Lipschutz, 1998b).
In other countries, such as the former Yugoslavia, the boundaries between jurisdictions were intended to be administrative, but were drawn up in ethnic or national terms. In yet other places, the dividing lines are linguistic, religious, clan-based, “tribal,” or even vaguely cultural (Derlugian, 1998). It goes without saying that those places in which people have fallen to killing each other have nothing to offer global capital—they have, quite literally, fallen out of “history”—but those places able to break away from the political grip of larger polities, as Slovenia escaped the competitive drag of Serbia, might be well-placed to participate in the global economy. Conversely, as seen in the tentative moves of Catalonia to assert its place among the “regions” of Europe, and the interminable discussions in Quebec about whether it would be better off alone than in the Canadian federation, the number of potential states or statelike entities appears to be quite large.

How Many Are Enough?

How many of the potential nations existing in the world are likely to seek a state? Approximately 50 countries signed the United Nations charter in 1945. In theory, those 50 represented virtually all of the population of the Allied countries and empires, inasmuch as the European powers fully expected to regain control over colonial territories occupied by the Axis or lost, for a time, to domestic insurgencies. By the mid-1970s, with the first wave of postwar decolonization just about over, UN membership had climbed to more than 150. Following the collapse of Yugoslavia and the Soviet Union, the number of states belonging to the UN passed 190. There is little reason to think the count will stop there, as suggested by an article in the Wall Street Journal (Davis, 1994) entitled “Global Paradox: Growth of Trade Binds Nations, but It Also Can Spur Separatism.” The author pondered whether we might see a world of 500 countries at some time in the future.
Another piece in the San Francisco Chronicle (Viviano, 1995), “World’s Wannabee Nations Sound Off,” told of the many ethnic, indigenous, and sectarian groups seeking political autonomy. Finally, there are World Wide Web sites listing hundreds of “microstates” and “micronations,” some serious, others not. In principle, there are few limits to the number of independent states that might come into being in the future; some have suggested the world’s 2,000-odd languages or 5,000-odd potential ethnies stand as an upper limit. In practice, however, there is considerable reluctance on the part of already existing countries to recognize new ones that have not been created with the consent of both government and governed, even though this specific requirement is quite elastic (see Biersteker and Weber, 1996). As testified to by efforts to reassemble shattered states, such as Cambodia and Somalia, there may also be a sub-rosa fear that successful nonstate forms of political community could be disruptive of the current structure of international politics. In other words, for the time being, the only normatively acceptable form of political community at the international level is the state. A proliferation of clans, tribes, city-states, trading leagues, social movement organizations, transnational identity coalitions, diasporas, and so on could raise questions of legitimacy and representation that might very well undermine the status of existing states, not to mention well-established hierarchies of power and wealth (see chapter 8). But there are also limits beyond which the “international community” will apparently not go in order to preserve existing international borders. The Bosnia “peace settlement” signed at Dayton, Ohio, in late 1995 suggests one such limit.
With all of its inherent flaws and contradictions, the agreement seemed to recognize that, in this case at least, juridical borders would make not the slightest difference to ethnic politics after fragmentation. The agreement maintained the fiction of a unitary Bosnia, albeit as a confederation of federations comprised of what are by now largely ethnically pure, semiautonomous territories. Provisions permitting repatriation to their former homes of refugees of ethnicity different than the dominant one have, for the most part, gone unfulfilled, although municipal elections have taken place, with displaced refugees being allowed to vote in their former towns of residence. The reality appears to be, however, that the Bosnian Croats treat mostly with Zagreb, and the Bosnian Serbs mostly with Belgrade. There is not much in the way of border controls between the respective ethnic zones and the mother countries, whereas there might well develop increasingly stronger controls between the ethnic zones within Bosnia. And the Bosniak (Muslim) government in Sarajevo will do what it can to maintain itself and expand. If a relative degree of peace can be established and maintained within the fiction of a state—as seems to be happening—the United States and Europe will be satisfied. A precedent will have been established that can be cited by others seeking a similar settlement (Campbell, 1997). From the perspective of the industrialized powers, and especially the United States, there are also both military and economic reasons to limit the number of independent states occupying the planet’s surface. While the United States once pursued a policy of “divide and conquer” in its efforts to dismember the European colonial powers, it has never had a real national interest in a proliferation of juridically sovereign states. For one thing, managing a highly fragmented world system is quite complicated and expensive—as evidenced in reductions in the number of U.S. 
embassies and consulates abroad. For another, the transaction costs of dealing with even a single new national government can be considerable, especially if it is located in a politically sensitive region. Too many states also pose a strategic nightmare (a point made implicitly by Chase, Hill and Kennedy, 1996). During the Cold War, each new country had the potential of becoming another cockpit of East-West conflict and, therefore, each existing state required minute attention (and control) lest the Soviets gain another salient into the West. This militated against changes in borders. Now, each additional state is one that could fall under the control of “rogues” or putative terrorists or into threatening disorder. The reported presence of Iranian and Afghan mujahideen on Bosniak territory during the civil war there gave nightmares to NATO commanders, the National Security Council, and the U.S. Congress (whether such fears were justified or not). But practically speaking, there is little to prevent the establishment of new states except the ability of more powerful countries to stop the process through active intervention and economic boycotts, something few of them have so far indicated a willingness to do, the intervention by NATO on behalf of Kosovo notwithstanding. But maybe more is better. From the economic and cultural perspectives, there is no reason not to have a world of 500 or more statelike political entities. In the past, big was preferable. Because the military prowess of a Great Power rested on its material and economic base, and the autonomy of that base required relatively high levels of self-reliance, large territories could provide both economies of scale and security. Neomercantilism made sense. This was the logic behind states with continental or even transcontinental scope, such as the United States, the Soviet Union, and European colonial empires.
Nowadays, however, within the structure of the new global division of labor, and the apparent prospects for global (if not local) peace, prosperity rests more than ever on comparative advantage and market niches (a point also argued in Rosecrance, 1996). The difficulties involved in getting the single European currency up and running—in part, exacerbated by the differential levels of wealth and development from northern to southern and western to eastern Europe—also suggest that such economies of scale may no longer matter as much as they once did. Fordist production for mass markets—both raw materials as well as consumer goods—leads to overcapacity, ruinous competition, and, perhaps, national bankruptcy; niche strategies allow regions and locales in industrialized countries to trade in similar, although not identical, goods and services, without having to broadly share the wealth with less-fortunate compatriots in other parts of their country. The result can only be detrimental to the political cohesiveness of existing nation-states.

Every House a State?

In one sense, the state has come full circle in its travels from Westphalia to “McWorld” (as Benjamin Barber puts it; 1995). When the original documents constituting state sovereignty were formulated and signed in the seventeenth century, the princes and their noble colleagues were seeking to protect themselves. States (and populations) were sovereign property, not the autonomous actors we imagine them to be today and, inasmuch as royal sovereignty was coterminous with territory, prince and state were the same. In essence, the Westphalian agreement said, “What is mine is mine, what is yours is yours, and we leave each other’s property alone.” Exclusion of the Other was, therefore, the watchword, for this was the best way to ensure that one’s property would be left alone. This did not rule out wars, of course, for there was nothing and no one to enforce such agreements.
As often as not, self-interest and family feuds overrode social niceties (Elias, 1994). The transition from royal state to nation-state was a gradual one, though well under way by the end of the eighteenth century. Still, the fundamental principle of state exclusivity did not change. Indeed, without it, the state as an entity with sole jurisdiction over a defined territory could not exist, precisely as it did not exist in this form prior to the Westphalian revolution. In its absence, we would be faced now, as then, with a form of “neomedievalism,” characterized by overlapping but differentiated political realms, governed by multilevel and sometimes coterminous authorities, with inhabitants confessing loyalty to several different units, depending on circumstances (see, e.g., Elias, 1994; Bull, 1977:264–76). As events turned out, the nation-state came to be defined by a shared, if artificial or imagined, nationality that was also exclusive: the citizen could not confess loyalty to more than one state at a time (even today, dual citizenship is rarely permitted by national authorities). The nation-state thus became, on the one hand, a container for all those who fit within a certain designated category and, on the other hand, a barrier to keep out those who did not fit within that category. It also became a means for the accumulation of power at the center as well as the division of power with other similar centers. To reinforce the claim to centralized power and generate exclusive allegiance to a single center, the state had to accomplish two ends. First, it had to eliminate competing claimants to legitimacy from within its putative jurisdiction; ethnic cleansing was state practice long before CNN began to transmit news stories and film from Bosnia and Rwanda.
Turning “peasants into Frenchmen” (Weber, 1976)—or whatever—could lead either to assimilation of peripheral nations into the nationality of the center, or it could result in the ruthless extermination of minorities by the center (Elias, 1994). Second, the state had to generate in those under its jurisdiction a parallel resistance to the attractions of other centers, that is, other nation-states and nationalities. In keeping with the precise demarcation of state territories beginning in the nineteenth century, it also became necessary to demarcate precisely the same boundaries within the minds of those living within the lines on the ground and to discipline those who, somehow, violated those boundaries (a point to which I return in the next chapter). Nationalism—and distinctions among types are irrelevant here—was, with very few exceptions, formulated as a doctrine of collective superiority and absolute morality vis-à-vis other nations, thereby serving to bind citizens to the state and to separate them from other states. This was not a one-way deal, of course; the state promised to provide security and political stability to those who signed up with it and not with another (not unlike the deals offered by competing telephone companies and Internet service providers today). This system of national exclusivity reached its apogee during the middle of the twentieth century, when some countries became, in effect, sealed containers from which there was no possibility of escape. The end of the Cold War not only unsealed the containers but provided the permissive conditions for new containers to be established, as groups of people found themselves both dispossessed from their position within the nation-state and increasingly resentful of their dispossession. Where might the processes of state fragmentation and social contraction stop, once they have been put into motion? Here, the contradictions between the nation-state and the market become critical.
To repeat briefly what has been said above as well as many times before: the liberal doctrine of self-interest calls into question the relationship between the self-interested actor and the larger context within which that actor finds him/her/itself, be it society and state or individual and society. At the international level, this tension is resolved through the dual fictions of anarchy and self-help; at the domestic level, however, taking the logic of self-interest to an extreme risks civil or social warfare and the replication of a Hobbesian “state of nature” at the neighborhood, household, or even individual level. All that is lacking, it would seem, are entrepreneurs with the military force to challenge the center. Given the chaos in some places around the world, and the growing obsession with “gang violence” in the United States and Europe, some might argue that this situation already holds in more places than we would care to acknowledge (Enzensberger, 1994).15

7 ❖ THE PRINC(IPAL)

On March 23, 1983, President Ronald Reagan appeared on American television to announce the end of the nuclear threat. A new military program, designed to protect the country against the possibility of a first-strike attack by Soviet nuclear-tipped intercontinental ballistic missiles, was about to be launched. The Strategic Defense Initiative (SDI), or “Star Wars” as it was almost immediately tagged by its detractors, was proffered to an increasingly restive public as a means of overcoming the moral dilemma inherent in mutual assured destruction (MAD): the holding hostage of one’s people to potential nuclear annihilation as a means of preventing the enemy from even contemplating such an attack.
This particular dilemma had already created political disorder throughout Europe and America, manifested most clearly in the Nuclear Freeze movement, the Catholic Bishops’ statement on nuclear weapons, and massive antinuclear protests throughout Western Europe (Meyer, 1990; Wirls, 1992). Reagan, seizing on citizens’ fears of nuclear war, offered SDI as an alternative means of protecting them, thereby attempting to render ineffective and impotent the arguments of “freezniks,” bishops, and other antinuclear activists. There were numerous critics of SDI. Most chose not to contest the program on moral grounds but, rather, to launch an attack on its technological (in)feasibility (see, e.g., Drell, Farley, and Holloway, 1985). This, they hoped, would blast to bits what some saw as a dangerous and destabilizing attempt to gain a viable first-strike capability against the Soviet Union, a capability that might trigger the very eventuality that everyone wished to avoid. But here SDI’s critics faced an insoluble dilemma: inasmuch as one could never prove conclusively that an effective shield could not be built, how could one justify halting a project that promised such an enticing vision?1 Ultimately, the defense sectors of the United States and its allies managed to absorb tens of billions of dollars in a largely fruitless attempt to develop the required technologies, although this has not deterred various parties from continuing to argue that a strategic defense system is feasible, desirable, and necessary, or the Clinton Administration and U.S. Congress from deciding to proceed with an SDI Junior (Rowny, 1997; see also Mandelbaum, 1996). What was largely ignored in the long-playing, choleric exchange over SDI was its essentially moral purpose.
SDI became a tool of the American state in providing an impenetrable shield not so much against missiles and accidental nuclear launches (whether from friend or foe) as in opposition to notions of détente, disarmament, and other indicators of declining determination and credibility vis-à-vis the USSR. In offering SDI, Ronald Reagan was promising to build a barrier that would redraw the wavering lines between the Free World and its unfree doppelgänger, between democracy and totalitarianism, between the “Evil Empire” and the “City on the Hill.” Indeed, SDI was a moral statement but, more than that, it was also a reimagining and reinforcing of the borders between nations, between liberal and socialist nationalisms, between what was to be permitted and what was absolutely forbidden. In this chapter, I explore these matters. I begin by describing the features of the moral-state, as I call it, and review briefly the antecedents to this phenomenon, beginning prior to 1648 with a specific focus on the ways in which states, as constituted following the Thirty Years War, also functioned as moral authorities. In historical terms, this authoritative role was first expressed through the person of the sovereign. Following the collapse of the universal moral authority of the Roman Catholic Church, the sovereign’s mandate to rule the state invoked God’s authorization. Although most contemporary democracies do not seek legitimacy through theocracy, these deeply buried roots nevertheless retain considerable influence. This is seen, in particular, in the emergence of nationalism—the “civil religion” of the state—as a new source of moral authority, a topic I address in the second section of the chapter. The emergence of state-centered nationalisms was a product of the secular Enlightenment—many of whose acolytes nevertheless subscribed to the authority of “Nature” and natural law (see Noble, 1997).
The state now came to provide bounded containers of moral authority within which some practices were prescribed in the name of national solidarity while others were proscribed on pain of ostracism or expulsion. At the limit, as discussed in chapter 6, some national elites found it expedient to eliminate whole classes and categories of people within their states’ borders, or to engage in large-scale civil warfare in order to establish domestic moral discipline. In the third section of the chapter, I examine the emerging contradiction between the moralities of nationalism and the rise of liberal individualism, especially as it developed after 1945. As I argued in earlier chapters, containment of the “Free World” and the “Soviet bloc” throughout the Cold War specified the perimeters of two dominant orders and thereby united two great polities, each within its own “sphere of moral influence.” Populations were disciplined not so much by the threat of physical punishment—although this was forthcoming in certain situations—as by the fear of being cast outside the “Realm of Order” into moral ambiguity (and damnation?). President Reagan’s invocation of the “Evil Empire” was thus as much an allusion to Satan and his legions as to Stalin and his. More recently, the United States has attempted to reimpose its global moral authority by way of what I called, in an earlier chapter, “disciplinary deterrence,” both at home and abroad, via public relations, demonstration, and, if necessary, public punishment. Finally, I address the collapse of state-centered moral authority in the New World Order of global liberalization. As I argued in chapter 2, old (b)orders have dissolved under the pressures of the global market, which, in turn, has become a sink for, rather than a source of, moral authority.
The ever-more-frantic search for new sources of moral authority therefore proceeds through a great number of channels—social, political, economic, ethnic, identity-based—but none is likely to provide the means for reestablishing borders and order. I conclude with a discussion of efforts to restore (b)orders, and speculate on the implications of such an impossible task for twenty-first century global politics.

Real-State or Moral-State?

The end of the Soviet Union destroyed utterly and finally the conceptual border between the good of the Free World and the evil of the “bad bloc,” thereby exposing the American people to all sorts of pernicious, malevolent, and immoral forces, beliefs, and tendencies. It should be no cause for wonder, consequently, that the domestic politics of morality, especially in the United States, have become so pronounced and full of inconsistencies (“get the government off of our backs but into the bedrooms of teenage mothers”) and have been extended ever more strongly into the international arena.2 Paradoxically, perhaps, the fundamental causal explanations for these contradictions are to be found not in domestic politics, as is conventionally thought; rather, the roots of this phenomenon lie in the very nature of the nation-state itself, in its somewhat uncertain place in the so-called international system, and in the spread of the norms and practices of political and economic liberalism, a point I have argued in earlier chapters. Far from being amoral, as is so often claimed, state behavior, as encoded in the language and practices of realism, nationalism, state-centricity, and anarchy, exemplifies morality in the extreme, with each unit representing a self-contained, exclusionary moral-state. How can this be? In contemporary international relations theory, the conventional perspective on the nation-state is largely a realist, functionalist one.
The state serves to protect itself and its citizens against external enemies, and to defend the sanctity of contracts and property rights from internal ones. Morality, as George Kennan (1985/86) and others have never tired of telling us, should play no role in the life of the real-state, for to do so is to risk both safety and credibility. But can the state stand simply for the protection of material interests and nothing else (Hirsch, 1995; Ellis and Kumar, 1983)? After all, the essential constitutive element of the nation-state—the nation—represents the eternal continuity of specific myths, beliefs, and values, usually with a teleological character. Conversely, the defeat of those elements, whether in war or peace, represents a mortal wound to the nation as well as to the authority and legitimacy of the state that protects it. This aspect of the state is largely ignored by the conventional wisdoms of both realism and liberalism (not to mention Marxism). Their advocates fail to historicize the state, seeing it as having no genealogy and thereby omitting from their stories of international politics one critical element: the European state, as heir to the authority of the Catholic Church, was originally constituted as a moral order, defining a prescriptive standard of legitimate authority through containment of its citizens within well-defined physical and moral (b)orders. And, with some changes, this remains the practice today. The legitimacy of the state does not grow simply out of material power; it also rests on the presumption that the state’s authority is both good and right (Brown, 1992: chaps. 2-3). And, although legitimacy is normally addressed only within the context of domestic politics (if then), history from the Thirty Years War on nonetheless illustrates that domestic legitimacy matters in international politics, too. One might argue, of course, that that was then and this is now.
The contemporary state no longer fulfills this moral role, and has not done so for many decades. Contemporary threats to state and polity are almost wholly material: terrorists throw bombs, illegal immigrants take resources, diseases trigger illness. I argue to the contrary: the modern nation-state acts not only to protect its inhabitants from threatening material forces, it also acts to limit their exposure to noxious ideas by establishing boundaries that discipline domestic behavior and beliefs. After all, what is a “terrorist” but someone with bad ideas? What is an “illegal” immigrant except someone who knowingly violates public norms? A state that cannot maintain such (b)orders becomes a prime candidate for disorder. And, as I shall argue below, it is in no small part the collapse of these moral borders that is responsible for much of the political disorder throughout the world today.3 More specifically, the kulturkampf that has wracked the United States (and other countries) since the end of the Cold War, and probably longer, is a struggle over where, and on whom, these moral borders should be inscribed. It is not a simple matter, however, of the moral versus the immoral (or amoral) within the confines of the 15 members of the European Union, the 50 American states, or the world’s 190-odd countries. Rather, the question is more properly understood as: Are the borders of our contemporary moral community to be national or global? If pernicious forces have free rein across formerly impermeable borders, how can the struggle stop at the water’s edge? And, if such miscreants threaten to penetrate the body politic with their black helicopters, Gurkha troops, and Soviet tanks, how can we not carry the culture war into the international realm (as Samuel Huntington and others have done)?
Consequently, on the one side of this struggle are those who would reinscribe the national, excluding or expelling all who do not live up to the moral standards of the Founding Fathers of the United States (there are no Founding Mothers), and extending the borders of that morality abroad through example and discipline (U.S. congressional prohibitions on family-planning funds to certain countries and the Helms-Burton Act restricting dealings with Cuba come to mind here). On the other side are those who, for better or worse, by virtue of choice or via the chances of change, find themselves swept up or away by the disintegration of national and moral (b)orders. This latter group is not identical with those captive to the contemporary events that give rise to refugees, migrants, and the casualties of wars and markets; its members freely make choices among and in support of difference in ways that the culture warriors resolutely abjure. And, as I noted above and in chapter 4, these struggles are not restricted to the domestic domain; in the global realm, moral conflict, disguised as “cultural” or religious difference, has come to replace the ideological blocs of the Cold War. In these struggles, the United States has taken on the role, not of world policeperson, as it is often said, but global dominatrix (both mistress and vice-princ(ipal)). But how can this be?

What Was Westphalia?

For most international relations (IR) scholars, and for mainstream IR theory, the defining moment of contemporary world politics was 1648, when the Treaty of Westphalia brought an end to the Thirty Years War. As David Campbell critically observes, accounts of this history “offer nothing less than an edifying tale of modernization in which we witness the overcoming of chaos and the establishment of order through the rise of sovereign states” (Campbell, 1992:47).
There is good reason to believe that the signers of Westphalia, and its predecessor, the Treaty of Augsburg, had nothing of this sort in mind at the time. It is only through the contingent and contextual lenses of subsequent centuries that such an orderly meaning was imposed on those events. Today, this teleological story of the state offers two central signifiers: anarchy and sovereignty. Through anarchy, we are told, the princes who put their names to the two treaties agreed that a universal authority—the Roman Catholic Church—would no longer stand over them. Through sovereignty, each prince would come to constitute the highest authority within each state and, enjoined from interfering in the affairs of any other, would have no authority anywhere outside of his state. This state of affairs, with its distinction between domestic “order” and the interstate “nonorder,” was subsequently reified through realist Hobbesianism, that is, hard interpretations of the writings of Thomas Hobbes and others (Walker, 1992). The princes were probably not very concerned about this particular inside/outside distinction; we might say that, in 1648, there was more concern with affairs of family than matters of state. Indeed, if we look at a map of sixteenth- and seventeenth-century Europe, we discover that relations between polities were much more intrafamilial than international. Moreover, relations within domestic orders—often scattered about the continent in discrete tracts—had as much to do with which branch and member of a family ruled over a specific territory as with each branch and individual’s religion (a point best illustrated by the intrafamily wars among British royalty and nobility; see Elias, 1994).
Hence, while Westphalia did not put an end to these intrafamily squabbles, it did for the most part do away with the remaining vestiges of feudal authority, replacing a confused medieval order with a clear hierarchy that placed prince or king above duke and lord, and invoked the moral authority of God, whether Protestant or Catholic, to bless and legitimate the new arrangements.4 Westphalia, in other words, was a social contract for European society with an embedded morality defining “good” behavior. It lacked many of the elements of domestic orders, to be sure, including a sovereign, but it did provide moral principles in place of an actual ruler. Those principles were frequently violated (although probably more often observed than not), but they did form the basis for a continent-wide society. Not altogether unintentionally, most late-twentieth-century mainstream IR theorists have been little concerned with the domestic implications of anarchy and sovereignty and have, instead, addressed the functional significance of the two practices for relations among states. Anarchy is said to imply “self-help,” or self-protection, while sovereignty is said to imply “self-interest” or, in its modern mode, accumulation (Inayatullah, 1996). I will not belabor these two points, inasmuch as they are the staple of every IR text published over the past 150 years (Schmidt, 1998). I will point out, however, that as practices, both presuppose modes of transnational regulation rather than the absence of rules and norms so often associated with them.5 More than this, both sovereignty and anarchy can be regarded as expressions of a state-centric morality that presumes a legitimate order within and illegitimate disorder without. The first point is best seen in Kenneth Waltz’s well-known (albeit flawed) invocation of the market as a structurally anarchic parallel to international politics (Waltz, 1979).
In invoking the headless market, Waltz draws on Adam Smith’s famous “invisible hand” to explain outcomes of relations between states but fails to recognize that the “invisible foot” of international politics might well produce results quite unlike the orderly outcome posited by Smith. The error committed by Waltz is to regard both markets and international politics as self-regulating, driven by no more than self-interest or power (Smith, by contrast, hoped that religious beliefs would constrain people’s appetites; see Hirsch, 1995). As social institutions, markets are subject to both implicit and explicit regulations. The market is governed, first of all, by the command “Thou shalt not kill.” Other rules follow. Walter Russell Mead (1995/96:14) makes a similar point about airports and air travel when he argues that, “Cutthroat competition between airlines coexists with common adherence to traffic and safety regulations without which airport operations would not be possible.” So it is between states. The two principles of anarchy and sovereignty are both constitutive of the international system as it is conceived and regulative of it, and they constitute moral boundaries for the state that preserve the fiction of international (dis)order and domestic order (Brown, 1992: chap. 5). On reflection, it also becomes clear that sovereignty and anarchy have moral and, in consequence, legal implications for domestic politics, too. As Hobbes (1962:132) put it:

[T]he multitude so united in one person, is called a COMMONWEALTH, in Latin CIVITAS. This is the generation of that great LEVIATHAN, or rather, to speak more reverently, of that mortal god, to which we owe under the immortal God, our peace and defense. (Emphasis added)

By establishing borders between states and permitting rulers to be sovereign within them, princes were granted the right to establish within their jurisdictions autonomous systems of law with both functional and moral content.
These systems enjoined certain activities in order to prevent consequences that would be disruptive of the order of the state—that is, order as the way things should be, according to the individual prince’s vision. Or, as Hobbes (1962:113) argued, “But when a covenant is made, then to break it is unjust: and the definition of INJUSTICE, is no other than the not performance of covenant.” Violation of the covenant is, therefore, not simply the breaking of the law; it is repudiation of the underlying moral code of the society. Hobbes argued that coercive power, entrusted to Leviathan, was necessary to ensure “performance of covenant” and the safety and security of each man who subscribed to that covenant. But even though the seventeenth century was quite violent, overt coercion was still relatively uncommon. Rather, it was the possibility of discipline and ostracism by the state (and the other subscribers to the covenant) as a result of a violation of order—not repeated day-to-day punishment—that kept subjects from violating the prince’s laws or the covenant (and continues to do so today).6 Most, if not all, of the legal systems of the time acknowledged, moreover, the hegemony of Christianity—later manifested in the “divine right of kings”—even if they disagreed on which particular version of the religion was to be practiced.7 Hence, although princes opposed a universal morality or empire that could impose sanctions on them against their wills, they sought to foster such an order within their own jurisdictions, based on their right to do so under God. The fact that war and interstate violence among princes did not cease after Westphalia does not mean, however, that morality was absent from their relations or that combatants were motivated by merely functional needs or appetites.
The moral basis of a political entity—its ontology—provides a justification for its existence as well as the implication that other entities are morally illegitimate if they reject the ontology of the first. John Ruggie (1989: 28) argues that Westphalia defined who had the “right to act as a power,” thereby including within its purview the numerous small and weak German principalities and states. The treaty acknowledged both a right of existence for these units and the right of each prince to impose his morality on his subjects. Westphalia did not, however, command that each prince recognize, accept the rule, or adopt the morality of others. War could thus be understood as both a moral and material event. To be conquered was punishment for immoral domestic beliefs and practices; to conquer was reward for moral domestic beliefs and practices.8 By agreement, therefore, although Westphalia commanded domestic morality and international amorality (the latter a rule rather than a condition), this did not prevent princes from trying to extend the boundaries of their domestic morality to engulf the domains of other, “immoral” princes. The original Westphalian system lasted only about 150 years, if that long. Although the royal sovereign was invested with authority via a mysterious God, Enlightenment efforts to introduce rationalism into political rule succeeded all too well, especially in Western Europe. Whereas some of the early empirical scientists, such as Newton, saw their work as illuminating the workings of a universe created by God (Noble, 1997), others took a more physicalist view. Gradually, religious morality was undermined by scientific experimentation and explanation, and philosophers and theorists sought to justify political order by reference to Nature (which some still equated with God, albeit a distant one; a somewhat exaggerated view of this change can be found in Saul, 1992). 
From this tendency there emerged what came to be called “nationalism.”

From Corpus Christi to Corpus Politicum

The first true nation-states, it is usually agreed, were Britain and France. In Britain, the modern “nation” emerged out of the Civil War of the seventeenth century, as Parliament fought with the king over the right of rule and the power of the purse. The Puritan Revolution represented an effort to impose on the state a moral order that was both Christian and a forerunner of capitalist individualism but that nonetheless had no external sources or referents of authority apart from God. Hence, the Puritans portrayed Rome and its adherents (including, putatively, any Catholic English sovereigns) as mortal enemies of Cromwell’s Commonwealth and England. This effort to purify the body politic of religious heresy was doomed to fail, however, so long as heretics could not be expelled from the nation’s territory or eliminated through extermination (a familiar problem even today).9 The Restoration, which put Charles II on the British throne, was as much a recognition of the intractability of the moral exclusion of a portion of the body politic itself as a reaction against the harshness of the Commonwealth and its attacks on certain elites. The emergence of the British nation during the following century—and the renewal of war with France during the 1700s—redrew the moral boundaries of society at the edges of the state, and established loyalty to king and country as a value above all others. In France, the Revolution launched a process whereby the source of state legitimacy was transferred from an increasingly discredited (and eventually dead) sovereign to the “people.” The French nation did not, however, attempt to establish a new moral order; that was left to the various and successive leaderships in the two centuries that followed. But the French Revolution did mark a major change in the ontology of the moral order of the state.
Whereas the princely state derived authority from God, the new French state derived its authority from a “natural” entity called the “nation.” Enlightenment rationalism sought explanations for the workings of the universe in science; even Hobbes looked to Nature to explain politics and provide a model for the Commonwealth. What could be more logical than to look for the origins of the nation in Nature? By the end of the nineteenth century, even though the very concept was less than a century old, nations had been transmogrified into constructs whose origins were lost in the dim mists of antiquity but whose continuity was attributed to their connections to specific territories and the “survival of the fittest” (Dalby, 1990; Agnew and Corbridge, 1995). As I have noted in earlier chapters, this new age of moral imperialism was rooted in Darwin’s ideas about natural selection, but extended from individual organisms as members of species to states (Darwin himself had no truck with these ideas). Members and leaders of nations that fifty years earlier had not even been imagined (Anderson, 1991) now competed to see whose history was more ancient and who had survived greater travails for longer periods of time. This became a means of establishing greater legitimacy and authority (a process that continues, even today, in places such as Kosovo, Rwanda, and Israel/Palestine). A more antediluvian history, in turn, established the moral right to occupy particular territorial spaces, and delegitimated the rights of all others to remain in those spaces (Berend and Ránki, 1979:80–96). Inherent, too, in such national organicism was a notion of “purity,” not only of origins but also of motives. Long-term survival could not be attributed simply to luck; it had, as well, to be a matter of maintaining one nation’s moral distinctiveness from those who were not of the nation, and of accounting for survival with a teleological national mythology.
Maintenance of such distinction through culture was not, however, enough; there also had to be dangers associated with difference. These dangers, often as not imagined into being (rather than being “real” in any objective sense), made concrete those borders separating one state from another.10 Those living in borderlands were forced to choose one side or the other. Those on the wrong side of such a border were, quite often, forcibly made to migrate across it, as with Native Americans during the nineteenth century, Greeks and Turks after World War I, Germans after World War II, Hindus and Muslims in 1947, Palestinians on the wrong side of the moving “Green Line” between 1947 and 1949, and many others since. Once again, a form of moral order was invoked and moral purity maintained. The apotheosis of this politics of danger took place during World War II in those areas of Europe that fell under Nazi rule. To the national socialist regime, guardian of the moral and biological purity of all Germans, whether within the Third Reich or not, races of a lower order were threats to both (Pois, 1986). The Nazi moral hierarchy could live with Slavs restricted to their place (although it intended eventually to eliminate them or force them to move further to the east). It could not tolerate Jews, Gypsies, and homosexuals, all of whom tended toward high mobility across social, geographical, and sexual borders, and who treated with what the national socialists regarded as “impure” ideas and practices (e.g., “Jewish science”). Inasmuch as containment in ghettos and camps was insufficient to protect the German nation from these impurities, extermination came to be seen as a necessity. And, so, millions died. Ethnic cleansing thus serves a double purpose. Whereas forced transfer leaves alive aggrieved populations whose territorial claims might, at some time in the future, gain international legitimacy and recognition, genocide does not.
Not only does it remove contenders for title to property, it also eliminates all witnesses to the deadly actions of the “moral community”—and, at times, as in towns and cities in the former Yugoslavia and other partitioned or cleansed territories, all physical traces, too.11 Any who are left behind will testify to the evil intentions of those Others who have so conveniently been eliminated or erased from the scene.

Nothing Succeeds like Success

In the United States, attacks on “liberals,” right-wing violence against the federal government and the “New World Order,” and conservative and religious fervor for “family values” (Bennett, 1998) can be understood as attempts to reimpose a nationalistic moral frame on what some think is becoming a socially anarchic society (Lipschutz, 1998b; Rupert, 1997). The kulturkampf at home is paralleled by the transformation of state practice from military-based to discipline-based behavior, especially where U.S. foreign policy is concerned (see chapter 4). A closer look suggests that the two are of a piece, as in the convergence of a draconian welfare policy with an increasingly vocal movement against immigrants—whatever their legal status—and their countries of origin. Welfare is deemed to sap the moral vitality of the poor, to foster promiscuity and illegitimacy and, more generally, to be a form of immoral “theft” from righteous citizens. Although statistics suggest that most welfare recipients are U.S. citizens, much political ire and fire has been directed at immigrants, whose moral claim to be in the United States is deemed to be weak or nonexistent (a sentiment held by some against immigrants in other countries, too; see Crawford and Lipschutz, 1998).
The film Independence Day, in which a disciplinary environmental sensibility (RECYCLE) complements a plot warning of “aliens stealing our resources,” nicely illustrates how domestic and foreign policy have come together around the extension of morality from the private (domestic) to the public (international) sphere (and further into the solar system and even interstellar space).12 How can we explain such behaviors? While the demise of social (and moral) discipline has been instrumental in the erosion of the citizen-state relationship (Drainville, 1995; see also chapter 8), this is a proximate rather than a primary cause. To explain the sources of social disorder—in this instance, the decline of the state’s moral authority—we must again look back to the immediate post–World War II period and the establishment of the Bretton Woods regime, which put in place the basis for the current social crisis. As I proposed in earlier chapters, the fundamental contradiction in the American and British goal of liberalizing the world economy was that the interests of citizen and state would coincide so long as there existed a threat against which only the state could protect the citizen. By extending the American economic system abroad, throughout the “Free World,” but pointedly drawing lines around the always threatening Soviet bloc, this arrangement generated broad support among Western publics and largely eliminated the security dilemma inside the Free World’s borders.13 At the end of World War II, of course, the Free World was not yet “free,”14 inasmuch as the Soviets had not yet been definitively tagged as the new enemy. Harry Truman’s felicitous doctrinal phrasing concerning “free peoples everywhere” provided the label; the imperialism of the dollar and the fear of Reds did the rest.
As the ex cathedra pronouncements of politicians, pundits, and pastors, and novels and films such as The Manchurian Candidate and Invasion of the Body Snatchers suggested, communism was a pathology of Nature, not an ideology of men; it took you over, you did not take it on (Lipschutz, 1997b: chap. 3). Keeping the enemy out and contained meant, therefore, not only imposing secure boundaries around the world but also imposing limits on one’s own self and behavior. The domino theory was not only about the fall of states; any rupture of containment could breach the individual self and expose it to evil. As I noted in chapter 3, the success and survival of the Free World depended on extending boundaries around a natural community (Stone, 1988) that had not, heretofore, existed. But in order to maintain its sovereignty and autonomy, this natural community had to be juxtaposed against another. Thus, on one side of the boundary of containment was to be found a unit (the Free World) whose sovereignty depended upon keeping out the influences of a unit on the other side (the bloc). The Free World could never have existed without the corresponding “unfree world.” Within the borders of the Free World, however, there remained a problem: the protection of state sovereignty and autonomy—heretofore regarded as the natural order of things—threatened to undermine the integrity of the whole. This was especially difficult from the American point of view, as illustrated in the famous confrontation between so-called isolationists and internationalists.15 The solution to the dilemma was a form of multilateral economic nationalism (Ruggie, 1983a, 1991, 1995). Inside the boundaries of the Free World, states were granted the right to manage their national economies, but only so long as they agreed to move toward and, eventually, adopt the tenets of an internationalized liberalism.
With respect to the area outside the boundaries, however, the Free World would, to the extent possible, remain neomercantilistic and self-contained, antagonistic to those who refused to “come in from the cold” (Pollard, 1985; Lipschutz, 1989; Crawford, 1993). Already in the late 1950s, the morality of this arrangement, and the security strategy based on nuclear “massive retaliation,” was being challenged by so-called peace movements opposed to the threat-based logic of East-West relations (Deudney, 1995). By the early 1980s, the Free World’s social contract was becoming fragile as a result of détente, a growing international emphasis on human rights, and the economic troubles that had begun during the 1970s. The former two threatened to undermine moral order within the Free World by turning friends into enemies and vice versa; the latter—especially inflation—threatened to undermine moral order within the United States. It required the renewal of a really cold Cold War during the 1980s to reestablish the moral polarities of East and West, and to excuse the vile behaviors of American allies in the name of meeting the greater moral threats of Soviet adventurism and loss of faith in America. Alas, to no avail! The subsequent collapse of Communism, and the much-trumpeted triumph of liberalism and democracy, fully undermined the moral authority of the West, inasmuch as there was no longer a global “evil” against which to pose a global “good.” As earlier chapters have shown, the efforts of some to reestablish a moral divide—as, for example, Samuel Huntington (1993, 1996) with his clashing civilizations—have not, so far, been conspicuously successful. To restore its moral authority in times to come, the nation-state must redraw the boundaries of good and evil, replacing disorder with new (b)orders. The United States government is attempting to restore order at home and abroad in two ways.
First, the notion of “democratization and enlargement,” offered during the first Clinton administration, represents an attempt to expand the boundaries of the “good world” (see Clinton, 1997). Those who follow democracy and free markets subscribe to a moral order that makes the world safe for Goodness (which, in turn, supports the now-conventional wisdom that democracies never go to war with each other; but see Mansfield and Snyder, 1995). Second, as described in chapter 4, disciplinary deterrence is being directed against so-called rogue states, terrorists, and others of the “bad bloc,” who are said to threaten the good world even though they possess only a fraction of the authority, influence, and destructive power of the latter.16 Ordinary deterrence is aimed against any state with the capabilities to threaten or attack. Disciplinary deterrence is different. It is an act of national morality, not of national interests.

Bondage, Domination, Discipline

To repeat the point made in chapter 4, disciplinary deterrence is warfare by other means: through demonstration, through publicity, through the equivalent of corporal punishment. The difficulty with disciplinary deterrence is that there is no there there, and it does not work very well. It is largely conducted against imagined enemies, with imagined capabilities and the worst of imagined intentions. Two men with explosives or cults with gas hardly pose a threat to the whole of the physical body politic; it is their ability to undermine faith in state authority that is so fearsome to those in power. And, as pointed out in earlier chapters, where “rogues” and other such enemies might choose to issue a challenge, or why they would do so, is not at all evident (see also Lipschutz, 1999b). But that these enemies represent the worst of all possible moral actors is hardly questioned by anyone.
Disciplinary deterrence is not, however, limited to renegades outside of the United States; it has also been extended into the domestic arena. For most of the Cold War, the threat of Communist subversion, and the fear of being identified as a Pinko Comsymp in some police agency’s files, were sufficient to keep U.S. citizens from straying too far from the Free World straight and narrow. Red baiting continued long after the Red Scares of the 1950s—one can even find it today, in the excoriation of so-called liberals (San Francisco Chronicle, 1997) and Marxist academics (Lind, 1991)—although the language of discipline and exclusion has become somewhat more sophisticated with the passage of time. Still, since the collapse of the Soviet Union it has been difficult for political and social elites to discipline an unruly polity; that things can get out of hand without strong guidance from above is the message of South Central (Los Angeles), Oklahoma City, Waco, and Ruby Ridge. Consequently, warnings routinely issued from on high that the “world is a dangerous place” serve to replace the disciplining threat of Communism (Kugler, 1995). Such warnings are, however, unduly vague. We are told that weapons of mass destruction—nuclear, biological, chemical—could turn up in a truck or suitcase (Myers, 1997). We are informed that laptop cyberterrorists are skulking around the Internet. We are instructed that some country’s missiles are bound, eventually, to land in Alaska, Hawaii, or even Los Angeles. Therefore, we must rely on and trust the authorities to prevent such eventualities, even though the damage done by one or several such devices would never approach the destructive potential that still rests in the arsenals of the nuclear weapons states (Lipschutz, 1999b). Unnamed terrorists—often implied to be Muslim—are discussed and dissed, but some of the most deadly actors turn out to be the “boy or girl next door” (Kifner, 1995). 
The Clinton administration further sows paranoia, seeking funding to track such neighbors by

creat[ing] a special computer tracking system to flag, or “profile,” passengers and identify those with suspicious travel patterns or criminal histories. . . . The names addresses, telephone numbers, travel histories and billing records of passengers would be run through a giant database that might lead to a search of the luggage of those deemed suspicious. (Broeder, 1996)

In a move reminiscent of COINTELPRO, the FBI establishes “counter terrorism task forces” in a dozen major U.S. cities that, according to a draft memorandum, are “dedicated full time to the investigation of acts of domestic and international terrorism and the gathering of intelligence and [sic] international terrorism” (Rosenfeld, 1997). The Justice Department disseminates funds for cities to prepare for biological terror attacks. Domestic police departments acquire military-type guns and armored vehicles and, as events in New York, Los Angeles, and elsewhere suggest, take on the role of occupying army. And the fearful mayor of New York City, convinced that he might be a target for foreign malcontents, barricades City Hall so that no citizen can enter without official permission. Clearly, disorder knows no borders.

Every Wo/man a State!

The state possessed by the siren song of its own moral efficacy is not yet an artifact of history; as illustrated by international indignation over Rwanda, Bosnia, and Kosovo, the acts of purification required by extreme nationalism are not so willingly accepted in today’s world as they once might have been. Interventions—on those rare occasions when they do take place—are still usually explained, however, by old statist moralities—the “balance of power” or some such—rather than humanistic ones. At the same time, moreover, a new phenomenon has emerged to challenge the logic of realism: the morality of the market has begun to displace the morality of the state.
One might easily say, of course, that the market has no morality. Driven by an ethic of self-interest, the individual is motivated only to consume as much as possible, within the constraints of the combined limit of her debit and credit cards. And yet, and yet. . . . There is a quite explicit morality associated with discourses of market liberalism and economic growth. According to Smithian principles, the behavior of individuals in free exchange, when taken together, leads to the collective betterment of society without the intervention of politics or power. The market is often offered as a “natural” institution, whose organic expansion is not unlike that of the Darwinian states of yore. Indeed, the contemporary mantra of economic competitiveness fuses the Social Darwinism of geopolitics with the Social Darwinism of the market: as always, only the fittest will survive. Those old welfare-state ideas of community are not only passé, they are the sure path to failure (see, e.g., Cohen, 1997). Hence, the unfettered market generates an unequivocal good that, logically, must also be morally desirable. Conversely, the intervention of politics or power obstructs this generation of good by being “inefficient,” and such meddling must therefore be immoral. The paradox that follows is that any equity brought about by politics comes to be regarded as bad (and immoral), while the inequities consequent on marketization are deemed regrettable but the natural consequence of human nature and good for those who get the short end of the stick (Himmelfarb, 1995; see also Szerszynski, 1996). In his 1973 biography of Eisenhower’s first secretary of state, John Foster Dulles, Townsend Hoopes (1973:286) wrote that Dulles believed that “American economic and technical superiority rested in large part on the moral superiority of the free enterprise system” (emphasis added). This was not an isolated belief, then or now.
According to the President’s Materials Policy Commission (1952:1)—the Paley Commission—established by President Truman in 1952 to examine the problem of raw materials supplies:

The United States, once criticized as the creator of a crassly materialistic order of things, is today throwing its might into the task of keeping alive the spirit of Man and helping beat back from the frontiers of the free world everywhere the threats of force and of a new Dark Age which rise from the Communist nations. In defeating this barbarian violence moral values will count most, but they must be supported by an ample materials base. Indeed, the interdependence of moral and material values has never been so completely demonstrated as today, when all the world has seen the narrowness of its escape from the now dead Nazi tyranny and has yet to know the breadth by which it will escape the live Communist one—both materialistic threats aimed to destroy moral and spiritual man. The use of materials to destroy or preserve is the very choice over which the world struggle today rages. (Emphasis added)

Such ideas, originating with the Calvinist notion of the elect, have been repeated again and again in countless political jeremiads (Bercovitch, 1978) and presidential speeches, of which Bill Clinton’s 1997 Inaugural Address is only one recent expression (and which his successor will, undoubtedly, repeat on January 21, 2001). There is a difference between Calvinism and consumerism, however. In times past, one’s material success was indicative of one’s moral superiority; today, one’s material consumption is indicative of one’s contribution to the moral uplifting of the world. Indeed, we might say that, in the emerging global moral economy, consumption becomes not only an individual good, but a collective moral and utilitarian “good,” too.
Consumption fosters prosperity, prosperity improves people’s well-being and contentment with the status quo, and the resultant stability of social relations is a morally desirable outcome. As President Clinton (1997) put it in “A National Security Strategy for a New Century,”. . . . Or, modifying slightly the late Deng Xiaoping’s dictum, “It is glorious to consume.” The dissemination throughout the world of liberal market principles, including liberalization, privatization, and structural adjustment, thus begins to acquire the character of a teleological moral crusade rather than the simple pursuit of national or self-interest. Public ownership and welfare spending are condemned as inefficient and wasteful and proscribed by international financial bodies and investors. Venal and bloated governments expend resources on projects that contribute to corruption and indolence, and undermine individuals’ efforts to improve their own position and status by dint of moral reasoning and good works. The discipline of the market rewards those who hew to its principles, whether state, corporation, or individual. And those who cannot or will not do so must be left to suffer the consequences of their economic apostasy.

It’s the Economy, Stupid!

Stephen Gill (1995) has written perceptively about the ways in which the “global panopticon” of liberal markets acts to impose its peculiar morality on both the credit-worthy and credit-risky. As I argued above, as the nation-state and nationalism have lost the moral authority they once commanded, such authority has shifted increasingly to the market and its disciplines (Strange, 1996). And there is more religion to the market than meets the eye. Those who don’t adhere to the standards of the credit-givers (and takers!)—whether individual or state—are cast out of the blessed innermost circle of the global economy.
To be readmitted requires a strict regimen of self-discipline, denial, and reestablishment of one’s good name. But even those with triple-A credit ratings and platinum plastic are not free of this moral regime. Inundated daily with bank offers of new credit cards and below-market interest rates, the credit-worthy are kept to the straight and narrow by fear of punishment should they violate the code of the credit-rating agencies. The proper response to such offers is, of course, “Get thee from me, Satan!” (although not everyone can rise above such temptation; ballooning consumer debt and growing numbers of bankruptcies in the United States indicate that backsliding is on the increase). Nonetheless, we see here the true genius of a globalized credit system. Whereas Church authority was akin to statist regulation—the same rules for everybody, with damnation bestowed through the collective judgement of the community—market-based morality relies on self-regulation (and self-damnation). Pie can be had now (none of that “by and by in the sky” stuff) and temporal salvation is keyed to individual capacity to carry the maximum credit load that s/he can bear—different strokes for different folks. As many of us know from experience, however, self-regulation is a weak reed on which to base a social system. Moreover, the desire to consume to the maximum of one’s individual credit limit does carry with it a larger consequence: the domestic social anarchy that arises from self-interest as the sole moral standard to which each individual consumer hews. Faced with this New World morality, can the nation-state recapture its moral authority and reimpose the borders of order? In some places, such as the former Yugoslavia, the agents of virulent ethnonationalisms have tried, but only with limited success.
More recently, in places such as Israel and Guatemala, the lure of riches in the market has come to outweigh the certainty of riches by forced appropriation (Lipschutz, 1999a). In other places, such as the United States and Europe, culture wars have become the chosen means to discipline those who would deviate from “traditional” social norms, in a forced effort to bring the heretics back in. But hedonism, cultural innovation, and social reorganization are hallmarks of the market so loved by the very conservatives who have launched these very domestic battles (Gabriel, 1997; Elliott, 1997). Short of reimposing a kind of quasi-theocratic autarchy on their societies—which, in any case, would be vigorously opposed by the cosmopolitan economic elites that benefit from globalization, and lead to disruption and upheaval on a massive scale—the nation-state has little to fall back on in facing this new world. National borders might be guarded by armies, navies and police armed to the teeth, but the borders of nationalist moralities, drawn in the minds of the “nation,” have always been fluid and difficult to demarcate. And imagination knows no boundaries. Carried to an extreme, the market will turn each of us into a nation of one, every man and woman a state, a world of 10 billion atomized, consuming countries. Then, indeed, will we enter into the “borderless world.”

8 ❖ POLITICS AMONG PEOPLE

One may also observe in one’s travel to distant countries the feelings of recognition and affiliation that link every human being to every other human being.
—Aristotle, Nicomachean Ethics

The pictures I have painted throughout this book are none too attractive; they might be pleasing to the logical eye but cannot be very appealing to the emotional one. Yet, such scenes of gloom, doom, conflict, war (and “liberal” peace; Lipschutz, 1999a) do not encompass the entire world.
As Kenneth Boulding (1977) once pointed out, at any particular moment, the number of people living peaceful lives is much, much greater than the number who are not. Why, then, focus on the bad to the exclusion of the good or promising? Why not try to portray positive possibilities rather than a bleak futurescape? We pay greater attention to social disorder, violent conflict, and war precisely because they are so outside the norm of everyday experience, because they “sell” in the media, and because they make us feel a need to do something. The result, however, is that we are left with the belief that the world truly is “a dangerous place,” that we are under constant threat, and that there is little that we, as individuals or members of our small groups and organizations, can do. To be sure, there are matters of pressing importance that could, under certain circumstances, seriously undermine the viability of human civilization but, except for nuclear war or an errant asteroid, none of them is likely to erupt very suddenly or have an instantly terminal effect. The critical question thus remains, as it was put at the beginning of the twentieth century: What is to be done? But “done” about what? And who is to decide? There are many problems, more than can possibly be addressed. It might seem odd, then, to assert that we do not lack for solutions to most of these problems, that we do “know” what to do. But, by and large, the solutions are primarily technical ones, in the sense that they propose to grow, make, or provide more: more food, more energy, more democracy, more capitalism, more peace agreements, more carbon dioxide sequestered in the ocean so that we can drive more cars. When the time comes to apply these solutions, however, things turn out not to be quite so simple (Stone, 1988). To put the matter prosaically, it is often easier to make a horse drink than to change the customary social behaviors of both groups and individuals (Scott, 1999).
Furthermore, when faced with a menu of possible choices about “what to do,” not every individual or group will select that option most desired by policymakers, economists, or psychologists. “Rational choice” does not mean singular possibilities, and even “irrational” choices usually have a purpose behind them. Later in this chapter, I offer a somewhat reflective perspective on the future of citizenship, political action, and civil society in a globalizing world, in the view that authority is possible only when people are members of a social institution whose goals they actively support (Drainville, 1996; Thomas, 1997). I also provide some thoughts about what “belonging” and “membership” might mean under these various arrangements. I argue there that, although the concept of global civil society has been underdefined, for a variety of reasons it remains a useful concept in terms of the matter of “after authority.” Drawing on the work of Michael Mann (1993), Stephen Gill (1993, 1995), Sakamoto (1994), and others, I attempt to offer a more concrete conceptualization, illustrating parallels between the emergence of the modern national state, citizenship, and domestic civil society, and a growing system of global governance and global civil society. Making the claim that global civil society is important to the future of global politics does not imply some sort of teleological “triumph” of reason or a world state, if only because not all of the transnational networks, coalitions, and actors making up global civil society are supportive of this postnational project. Some act through these networks in order to resist the state, while others engage in attacks on states as collaborators with institutions of global governance such as the United Nations.
Moreover, those economic actors deeply involved in hyperliberal globalization—primarily corporations and institutions of capital—also constitute an arm of a “global civil society” in their efforts to regulate politics at the transnational level and, in some instances, to intervene in domestic settings through sponsorship of functional projects at the local level. How these actors might view postnational politics and citizenship is not entirely clear.1 Finally, the ultimate form of these mutually constitutive “entities” is, as yet, underdetermined; a collapse back into a more traditional international state system cannot be ruled out, although it is highly unlikely, as I have made clear throughout this book. The growth of various mechanisms of transnational governance, strongly driven by processes linked to economic globalization, suggests otherwise. Prior to that discussion, however, I consider the question of choices: What choices are available; what might we do? I begin with a brief summary of my argument in this book and then ask: What happens after authority? Next, I turn to questions about the future of the nation-state: Will it survive? What will it do? Will something replace it? I discuss how the diffusion of jurisdictional authority from the state to other actors is both fragmenting and integrating global politics, but not in the conventionally understood territorial sense. Finally, I raise a challenge to the conventional “state/nonstate” dichotomy that characterizes the international relations and global politics literature, and propose that we need to go far beyond this binary if we are to understand and act on our future.

After Authority

In the preceding chapters of this book, I have argued that the basic problem we face is best understood as a disjunction between contemporary social change and people’s expectations about their individual and collective futures.
Changes in modes of production and reproduction have exposed us to what is, except during periods of war, a historically high rate of social innovation and reorganization. This change has progressed to the point that uncertainty has come to dominate politics, both domestic and global, in ways that were rarely the case, at such a large scale, in earlier times. We regard predictability (if not stability) as central to contemporary life. It is predictability that allows us to be reasonably certain that we can accomplish in the future what we have planned today; it is predictability that lets us go beyond fatalism to action.2 Or, to put the point another way, we expect both our actions and the actions of others to be sufficiently patterned and predictable that we do not have to live in a “State of Nature” in which neighbors are enemies and no one can be trusted.3 Hobbes (1962) thought it necessary to create a Leviathan that would prevent such a situation by imposing its authority on society. Already more than three hundred years ago, he recognized in the English Civil War the potential disorder inherent in the methodological individualism that followed the collapse of centralized religious authority and the rise of capitalism. He therefore sought to discover a new source of rule—possibly based in Nature’s mathematics or science—that could contain such disorder. But the joke, it turns out, has been on us. All along we have been told that the mythical “war of all against all,” in which life was “nasty, brutish, and short,” was a description of an antediluvian time, before authority and society, and that it was only the morality of state and society that brought humanity to a civilized condition.4 Instead, the State of Nature turns out to be our future, a condition that, unwittingly or not, Leviathan itself has let loose on the world. 
In this instance, I have argued, it is the globalization of markets that is undermining the state, a process set in train by the United States after World War II, a process that may soon approach its apotheosis in the “borderless world” (Ohmae, 1991, 1995). This is hardly a new or innovative argument—that the spread of market forces destroys the basis for the social contract that Hobbes and others thought so necessary to restrain human nature (Polanyi, 1944/1957)—and it is one that is harshly critiqued by any number of analysts (see The Economist, 1997; for counterarguments, see Hirsch, 1995; Ellis and Kumar, 1983). What is new is, perhaps, the inversion of the sequence of events, followed by the query: Who or what is to decide on an “authoritative allocation of values” in the absence of authority? The World Federalists sought (and continue to seek) global federation. William Ophuls (Ophuls and Boyan, 1992) and Robert Heilbroner (1991) proposed something akin to world dictatorship. George Bush’s New World Order rested on American hegemony and discipline. Perhaps in the future wealth or corporate tonnage will become the basis of authority, in which case Bill Gates might become the global “Grand Poobah” or Exxon-Mobil a new superpower. Or, conceivably, each individual and her appetite will become her sole source of authority; if so, even the corporation as we know it might not survive. Hyperbole, perhaps, but in some places not so far from the truth. What, then, is our future after authority? Most analysts peering into the future try to describe the big picture: The world will be richer. It will be happier. It will be poorer. It will be crowded. It will be violent. It will be wired. To be sure: it may be all of those.
But such global generalizations, encapsulating in very short sound bites the actions of the six (now) or 10 billion (by 2050 or so) individuals populating the Earth, do not tell us very much about what those people are up to, or will be up to. Yet, if the individual has become the new sovereign, as I have argued in this book, what people do will matter, whether they do it alone or in groups, whether they do it peacefully or violently, whether they do it for self-interest or the community. And why they do what they will do will matter, too, because, in the final analysis, their sources of authority for their actions will be important to politics. In making this point, I do not mean to suggest that many of the problems that give rise to both clichés and questions of rule and rules are, somehow, not transnational, transboundary, or world-encompassing in nature, or that national, cultural, and class differences are not implicated in them. I do mean to argue, however, that these problems, factors, and forces are not implicated in world politics in the ways that the Huntingtons, Barbers, Fukuyamas, Ohmaes, or Kaplans of our day might claim they are. Indeed, it is more probable that, after authority, authority will originate less in the actions of “great women and men” of state than in the patterns of everyday politics, of politics among people, locally and globally. These patterns will have less to do with hegemonic stories of a dangerous world and the actions that follow, and more to do with fairly mundane matters, with everyday questions of governance, citizenship, and even civic virtue: Who rules? Whose rules? What rules? What kind of rules? At what level? In what form? Who decides? On what basis? It is fair to say that these questions, and other similar ones, are already being answered.
People’s responses to them are evident in their patterns of behavior in a number of political arenas, in the reorganization of politics around functional issues such as environmental restoration and protection, human rights, gender, indigenous peoples, labor, and culture, as well as trade, investment, property rights, and product standards. While some of these patterns and tendencies might seem contradictory (and are), in that their orientations and consequences are often in opposition, they are all part of what I have called, elsewhere, “global civil society” and global governance (Lipschutz, 1996)5 and they are all generating new types of institutional roles and relations, memberships, and categories of belonging (indeed, were we speaking of nation and state, we might call these roles “citizenship”). To a growing degree, it is in functional arenas such as these, and the ways in which people act toward them and with each other, that we must look in order to see the emerging outlines of twenty-first century politics among people. The skeptical reader might rightfully ask, “What is the evidence for this global civil society and these new forms of citizenship? The state, after all, remains the most powerful and authoritative actor in global affairs. Moreover, if such things do exist, what do they presage?” Would they mean the disappearance of the state-system and a true “postsovereign, postinternational” world politics, as James Rosenau (1990; 1997) has put it? Are not “sovereignty-free” transnational networks and actors so dependent on the structures created and supported by states that they cannot exist without them? How could a global politics function if its basic units were not defined in territorial terms? And, are global civil society and new forms of citizenship plausible in the face of the communitarian and ethnic forces tearing apart so many countries? Can there be a truly democratic, transparent, and representative global politics without the state? 
The State(s) of Our Future

In recent years, speculation about the “future of the state” has been rife (as evident from this book and others cited throughout). What is most conspicuous, and provides the basis for solid skepticism about the unchanging nature of world politics, are seemingly contradictory tendencies evident in world politics, as we have seen in earlier chapters.6 On the one hand, we are offered the notion of a single world, integrated via a globalizing economy, in which the sovereign state appears to be losing much of its authority and control over domestic and foreign affairs (Ohmae, 1991; 1995; Strange, 1996; Woodall, 1995). These trends appear to point toward an eventual world state or federation, along the lines of the European Union, only bigger. On the other hand, and contrary to the expectations of neofunctionalists and others, we have seen once-unified countries fracture into war-ridden fragments, in which an ever shrinking state exercises sovereignty over diminishing bits of territory. Both processes involve, as Susan Strange (1996) put it, a “retreat of the state,” albeit in quite different ways. But they also suggest that integration will not lead to a world federation of states and regions even as fragmentation does not presage a return to national sovereignty and a more traditional international relations among five hundred or more states. So, what is going on? In The Great Transformation, Karl Polanyi (1944/1957) argued that the self-regulating market was an ideal that could not be fully achieved, lest it destroy human civilization; the two world wars almost accomplished this task (and the Third World War that never happened, but might yet, would surely do so).
In recent decades, we have tended to forget his prescient warnings.7 The globalization of production and capital over the past half century has been accompanied by liberalization and, at the rhetorical level at least, a commitment to the deregulation of markets. But in deregulation lies an apparent paradox of our times: a liberal economy cannot exist without rules—so, where are they? Indeed, as I noted in chapter 7, markets require rules in order to function in an orderly fashion (Mead, 1995/96; Attali, 1997). In the late nineteenth and early twentieth centuries, the first steps toward globalization were brought to a halt by national governments and elites who saw threats to their autonomy and prerogatives. The same pattern followed in the 1930s, and there are a few signs that this may be happening again today. Free traders and their economist supporters decry the protectionist trends they see developing in trade relations among the industrialized economies, warning that the world is going down the same path it has trodden before (Bergsten, 1996). Perhaps they are correct, perhaps not. It is certainly not beyond the realm of possibility that competitive geopolitical blocs could (re)emerge in the future, as feared by some observers of the European Union, the North American Free Trade Area, and the once-feared and now dormant New Asian Co-Prosperity Sphere under Japanese tutelage. There remains, however, enough residual collective memory, and the World Trade Organization, to suggest that such an outcome might be avoided. But there are good reasons, too, for arguing that contemporary international economic relations bear little if any resemblance to the 1930s. As I have noted throughout this book, nation-states are caught in a contradiction of their own making and, for all the parallels to the past, are treading down a path they have not walked before.
On the one hand, they are decentralizing, deregulating, and liberalizing in order to provide more attractive economic environments for financial capital and, as they do so, dismantling the safety net provided by the welfare state. That safety net, it should be noted, includes not only assurance of health and safety, environmental protection, public education, and so on, but also standard sets of rules that “level the economic playing field” and ensure the sanctity of contracts, the latter two both desired by capital. On the other hand, the shift of regulation from the national to the international level is creating a new skein of rules and regulations. Even the British-governed international economy of the nineteenth century, often idealized by gold bugs and free traders, was not a free-for-all. It was regulated, if only by the constraints of the gold standard and the resultant behavior of financiers in London and New York. Today’s markets are hardly self-regulating, either. While “deregulation” is the mantra repeated endlessly in virtually all national capitals and by all international capitalists, it is domestic deregulation vis-à-vis other producers that is desired, not the wholesale elimination of all rules (Vogel, 1996; Graham, 1996). Selective deregulation at home may create a lower-cost environment in which to produce, but deregulation everywhere creates uncertainty and economic instability. Hence, transnational regulation and global welfarism—the successors to Bretton Woods—are becoming increasingly important in keeping the global economic system together and working.8 The difficulty with the globalization of rules is, to repeat an earlier point: What rules and whose rules? Who pays for them? Who decides what they will say? And how are those decisions made? There is another important problem here.
With national economies, there was at least the possibility of addressing domestic maldistribution; a global economy hardly permits even this. As deregulated capitalism works its way within countries, the economic playing field develops pits, holes, and undulations, and the distribution of wealth both within and between countries, groups, and classes becomes more and more uneven. This, as might be expected, can pose political problems both domestically and internationally (see, e.g., Pollack, 1997; Kapstein, 1996). For example, in the United States and other industrialized countries, groups of small-scale fixed capitalists, property owners, workers, and others chafe under the new economic environment. But individual countries cannot move to reregulate because there are strong interests who benefit from domestic deregulation, and to reimpose political management might also be to give up a competitive advantage to other countries and their firms. The future does not, however, lie with petty capitalists or labor who operate within the limits of subnational economies; these groups are, generally speaking, of little interest to Wall Street—except for their role in domestic consumption—and they are rarely a locus of technological and organizational innovation.9 Profits are to be found in the high-tech and information industries, in transnational finance and investment, and in flexible production and accumulation. This means looking beyond national borders for ways in which to deploy capital, technology, and design in order to maximize returns and access to foreign markets. One obstacle to such moves is that the transaction costs associated with having to deal with 50- or 150-odd sets of national regulations can be quite high. 
High-tech, financial, and transnational sectors would, therefore, prefer to see the playing field made level among countries—preferably as inexpensively as possible, but level nonetheless—through single sets of rules that apply to all countries, much as is supposed to be the case within the European Union or among the members of international regimes (Vogel, 1995). And so, although it is often argued that there is no global government, and that regulatory harmonization is not only difficult but also unfair (Bhagwati, 1993), global regulations have been and are being promulgated all the time. The General Agreement on Tariffs and Trade, and its successor, the World Trade Organization, provide examples of regulatory harmonization for the benefit of capital and country. The Montreal Protocol on Substances that Deplete the Ozone Layer is a regulatory system designed to harmonize rules governing production of ozone-damaging substances. The Nuclear Non-Proliferation Treaty is intended to regulate the production and use of atomic bombs and fissile materials by its signatories. The human rights regime is meant to set a standard for the fair and just treatment of citizens by their states and governments as well as by their fellow citizens. International meetings such as the Conference on Population and Development in Cairo aim at the promulgation of a globally shared set of norms and rules. The ISO 14000 rules recently issued by the International Organization for Standardization are meant to provide a framework for “green” management by corporations. And even international financial institutions, such as the World Bank, are becoming involved in the provision of health and welfare services, albeit as a supplement to the large-scale projects they traditionally support.
Indeed, the raft of regimes and international institutions associated with the United Nations system and other transnational groups might be said to constitute something of an incipient international regulatory system (although there are many holes in this “safety net”). As such, it serves two critical functions. First, it sets in place norms and rules that are meant to apply everywhere, even though these standards are sometimes less rigorous than the citizens of particular countries would like, and observation and enforcement remain very problematic. Second, the system makes it possible for national governments to tell their citizens that a particular problem is being addressed but that they—both citizens and representatives—have no control over the content of the rules, and that domestic politics must not be permitted to intrude into either the promulgation or functioning of the rules. Note that political intervention into the market system is taking place here, albeit out of reach of domestic interest groups, lobbyists, and logrolling. The absence of accountability on the part of these global institutions is not so easily shrugged off, and serious questions are being raised about this matter (Gill, 1993, 1995). Nevertheless, we see here the beginnings of global governance (and taxation), although, as yet, not representation. There is little question that the “state” will remain a central actor in world politics for some time to come, by virtue of its capabilities, its material and discursive powers, and its domination of the political imaginary.
Nevertheless, what has been regarded as the hard core of jurisdictional authority of the state—a naturalized fiction if ever there was one—is diffusing away throughout an emergent, multilevel and quite diverse system of globalizing and localizing governance and behavior.10 Some have suggested that these changing patterns constitute a “new (or neo) medievalism”; others have proposed as organizing principles “heteronomy”11 or “heterarchy.”12 In discussing the first of these three concepts, Ole Wæver (1995: note 59) argues that “for some four centuries, political space was organized through the principle of territorially defined units with exclusive rights inside, and a special kind of relations on the outside: International relations, foreign policy, without any superior authority. There is no longer one level that is clearly the most important to refer to but, rather, a set of overlapping authorities” (first emphasis in original; second emphasis added). What is critical here is not political space, but political authority, in two senses: first, the ability to get things done, and second, recognition as the legitimate source of jurisdiction and action (as opposed to one’s ability to apply force or coercion in the more conventionally understood sense). As John Ruggie (1989: 28) has pointed out, in a political system—even a relatively unsocialized one—who has “the right to act as a power [or authority] is at least as important as an actor’s capability to force unwilling others to do its bidding” (emphasis added). In this neomedieval world, authority will arise more from the control of knowledge and the power that flows from that control than from outright material capabilities. The power to coerce will, of course, remain important, but most people do not need to be coerced; they want to be convinced. This is meant neither as a teleological nor a necessarily progressivist argument.
The eventual content of global governance and an international regulatory system could serve the interests of a narrow stratum of political and economic elites and prove profoundly conservative and reactionary (Gill, 1995). The result might be a repetition of previous catastrophes, as the pain of globalization bites deeply at home (Kapstein, 1996). There are disquieting trends to which one can point—such as the globalization of surveillance through information technologies and struggles to construct new, albeit bankrupt, states. Still, the future is not (yet) etched in stone.

Aux Armes, Citoyen?

What this discussion has not, so far, defined is the relationship between individuals and the new forms of political action inherent in the globalization of functional authority. That discussion requires an inquiry into the nature of membership in a political community, that is, citizenship. In its standard form, citizenship is defined as a collection of rights and obligations that give individuals a formal legal identity within a state and society. The Westernized (and, some would argue, masculinized) philosophical problem of how individuals come together to form political collectives in which they are members has puzzled political philosophers for centuries. The apparent tension between human beings as highly individualistic entities and the societies they nevertheless have created led historically to such propositions as Leviathan, the Social Contract, the Watchman State, Civil Republicanism, the Welfare State, and even the Invisible Hand. Indeed, the problem of how State and Society came to be remains something of a puzzle to Western theorists, even today. These are not, however, the only ways to conceive of citizenship.
As Bryan Turner (1997:5) puts it in the introduction to the first issue of a journal called Citizenship Studies, “[T]hese legal rights and obligations [of citizenship] have been put together historically as sets of social institutions such as the jury system, parliaments and welfare states.” Turner goes on to argue that a “political” conception of citizenship is typically focused on “political rights, the state and the individual,” whereas a “sociological” definition involves the nature of people’s entitlements to scarce resources, confers a particular cultural identity on individuals and groups, and includes the “idea of a political community as the basis of citizenship . . . typically the nation-state.” That is, when individuals become citizens, they not only enter into a set of institutions that confers upon them rights and obligations, they not only acquire an identity, they are not only socialized into civic virtues, but they also become members of a political community with a particular territory and history. (Turner, 1997:9) The advantage of this definition is that it ties together the material substructure of citizenship with the superstructure of rules, rights, behaviors, attitudes, and obligations that constitute the citizen vis-à-vis other citizens and the state. The market is part of this institutional structure and, historically, the rules underwriting the functioning of markets have been guaranteed by the authority and activities of the state.13 In “normal” times, the contradictions between substructure and superstructure are minimal—or are either not very evident or are foisted off on the poor and powerless as “natural”—and citizenship is a relatively stable construct. Under those circumstances, people follow the rules and expect to receive commensurate rewards in return (see chapter 6). Less and less, however, are these “normal” times, as I have argued throughout this book.
For better or worse, then, the tensions between globalization and fragmentation cannot be addressed by attempts to establish exclusive domains of society and citizen, either by philosophy or force. Such solutions attempt to “imagine communities” (Anderson, 1991) into being without taking into account the material forces that are, on the one hand, keeping imaginary communities from becoming “real” and, on the other hand, pulling real ones apart. The core problematic here is that the forces of globalization are disrupting the boundaries that, for the past two centuries, contained societies and national communities and provided the basis for contextual forms of citizenship and belonging (Shapiro and Alker, 1996). Attempts to reestablish these boundaries and the civic communities within them through disciplinary measures, whether domestic or foreign, risk reproducing the logic of antagonistic nation-states in a much more fragmented form, as newly imagined communities resist the hegemony of the old one (see the essays in Crawford and Lipschutz, 1998). If territorial units are no longer the logical focus for political loyalty, can some other form of political community substitute? What can replace the citizen’s allegiance to state as the new basis for politics? If the rampant individualism of the market is creating a world of 10 billion statelets, how can people come together to act collectively? In principle, states might be able to act against the tendency of marketization to diminish their authority within their national boundaries. In practice, and short of a repeat of a global crisis akin to the 1930s (which was, after all, one of the reasons for the post–World War II globalization project), it is difficult to imagine such a restoration taking place. For reasons I have discussed elsewhere, having to do with the notion of global governance (Lipschutz, 1996), we need to look beyond the nation-state to answer the questions posed above.
In what follows, I want to suggest that there is a real problem with thinking about alternatives to citizenship, especially if we are focused on ways to restore them within the “iron cage” of the contemporary nation-state form. In recent years, research into transnational social movements, nongovernmental organizations, global networks and coalitions, and global governance structures has represented the core of thinking about alternatives to the state among academics and intellectuals (see, e.g., Princen and Finger, 1994; Wapner, 1996; Mathews, 1997; Keck and Sikkink, 1998; Lipschutz, 1996 and the citations therein). Still, virtually all that has been written about these trends continues to use the language and framework of state and “nonstate” actors. As Paul Wapner and others have pointed out, this focus on the state and its “discontents” reflects a certain poverty of imagination about other types of nonnational, postnational political arrangements. In particular, this singular focus leaves out those other discussions that avoid or even ignore the state/nonstate dichotomy. It is at this very point that debates stall over the future of citizenship, politics, and authority under globalization. Is there another way to think about alternatives to the state? To develop this line of thought, I draw on a body of theory that, at first glance, might appear far removed from international relations theory: the work of Judith Butler.14 In her 1990 book, Gender Trouble, Butler uses the work of Michel Foucault to show how, in the case of gender, “juridical systems of power produce the subjects they invariably come to represent.” The question of “the subject” is crucial for politics, and for feminist politics in particular, because juridical subjects are invariably produced through certain exclusionary practices that do not “show” once the juridical structure of politics has been established. 
In other words, the political construction of the subject proceeds with certain legitimating and exclusionary aims, and these political operations are effectively concealed and naturalized by a political analysis that takes juridical structures as their foundation. Juridical power inevitably “produces” what it claims merely to represent; hence, politics must be concerned with this dual function of power: the juridical and the productive. (Butler, 1990:2) Butler (1990:2) goes on to argue that “the category of ‘women,’ the subject of feminism, is produced and restrained by the very structures of power through which emancipation is sought.” Elsewhere in the book (112), she writes that “if gender is not tied to [biological] sex, either causally or expressively, then gender is a kind of action that can potentially proliferate beyond the binary limits imposed by the apparent binary of sex” (parenthetical term added).15 Butler (1990:9–13) also points to the work of Luce Irigaray, who writes that, in the hegemonic discourses of gender with which we are most familiar (even if not in agreement), women are not the opposite of men but, rather (if I understand the argument correctly), are not-men (my words). Butler (1990:11) puts it thus: The female sex is thus also the subject that is not one. The relation between masculine and feminine cannot be represented in a signifying economy in which the masculine constitutes the closed circle of signifier and signified. (Emphasis in original) I quote Butler at length here because her analysis can be transposed from her focus on women (as “not-men”), and the term’s always-casual pairing with “men,” to what are conventionally called nonstate actors (i.e., not-state).
The term “nonstate actors” is paired with “state” just as casually in international politics literature to denote those political collectivities that act in the inter/transnational realm but that lack specific reified attributes of the state—territory, sovereignty, legitimate monopoly of violence. Such collective actors have the same relationship to the “signifying economy in which the [state] constitutes the closed circle of signifier and signified,” as the feminine has to the masculine. The state, in this “signifying economy,” becomes a “naturalistic necessity” (Butler, 1990:33) against which other political actors are treated and evaluated in terms of their (nonnatural) inability to replicate the symbolic and functional roles of a state for lack of appropriate (natural) tools. The result is that international relations (IR) scholars are always asking, “Yes, but can nonstate actors do what the state does?” when a more appropriate query might be “Is what the state has been doing even necessary?” To give an example, several years ago, as the war in Bosnia was nearing the end of its most violent phase, Michael Mandelbaum (1996), the “Christian A. Herter Professor of American Foreign Policy at the Paul H. Nitze School of Advanced International Studies, Johns Hopkins University, and director of the Project on East-West Relations at the Council on Foreign Relations,” attacked the Clinton administration for conducting “foreign policy as social work.” Responding to then-national security advisor Anthony Lake’s questionable argument that “I think Mother Teresa and Ronald Reagan were both trying to do the same thing,” Mandelbaum (1996:18) riposted: While Mother Teresa is an admirable person and social work a noble profession, conducting American foreign policy by her example [sic; it was not social work!] is an expensive proposition. The world is a big place filled with distressed people, all of whom, by these lights, have a claim to American attention.
Not only did Mandelbaum ignore the role of the American state in fostering the “noble profession” of social work, he also fell into the traditional realist trap of regarding such intervention as unworthy of state attention, presumably seeing it as an activity fit only for nonstate actors. The American state has military power for a purpose, and it must use it! A further, and much more fundamental, consequence of this binary treatment of state/nonstate was raised in chapters 4 and 6. Not only must a nation have a state of its own (or, perhaps, vice versa), a nation without a state is incomplete and impotent. It cannot act in international politics in a fully capable (and male) fashion; it must not even accept anything less than a fully sovereign (and potent) state. To remain a “not-state” is to not exist. Such views, to put it mildly, are absolute nonsense. More than that, they cast the matter of after authority in quite a different light than a choice between micro states and macro markets, and suggest a program that is quite distinct from the integration/fragmentation dichotomy of the state/not-state binary. To go beyond this, toward a “proliferation” of legitimate and authoritative actors in the inter/transnational realm, requires us to think (and act) quite differently where global politics are concerned; as Butler (1990:33) writes about gender: To expose the contingent acts that create the appearance of naturalistic necessity . . . is a task that now takes on the added burden of showing how the very notion of the subject, intelligible only through its appearance as gendered [or state/not-state], admits of possibilities that have been forcibly foreclosed by the various reifications of gender [state/not-state] that have constituted its contingent ontologies.
(Parenthetical terms added) In practice, this is a rather more difficult proposition to operationalize, but to the extent that theory informs practice (and practice informs theory), opening up for examination the possibilities beyond the binary can make a contribution to such a project. For the moment, it is incumbent upon us to recognize that even the term “state” is applied to many, rather different political entities. Such difference ought to be welcomed as providing openings for imagining communities, rather than being bemoaned or ignored. And, if we are to speculate on alternatives to citizenship in the nation-state, we ought first to look to the alternatives to the nation-state that already exist in some form. As Iris Marion Young (1990:234) argues, “A model of a transformed society must begin from the material structures that are given to us at this time in history.” Below, I discuss three alternatives, although these hardly exhaust the universe of possibilities. First, I expand on “global civil society,” which builds on parallels between state and domestic civil society, without necessarily postulating the emergence of a global state. The second involves the emergence of counterhegemonic social movements, which could provide the basis for new and innovative forms of political organization and action. The third focuses on the partial deterritorialization of political identity and political community.

Global Civil Society

In the heteronomous (dis)order of the future, authority will likely be distributed among many foci of political action, organized to address specific issue-areas rather than to exercise a generalized rule over a specific territory (Lipschutz, 1996: chap. 8). Territorially based political jurisdictions will continue to exist, but they will be complemented by others whose responsibilities will lie elsewhere.
As Crook, Pakulski, and Waters (1992:34–35) point out, the relationship between actors and jurisdictions might not necessarily follow logically from their apparent functions. Schools are as likely to engage in environmental restoration as environmental organizations are to become involved in education at the K–12 level (see also Lipschutz, 1996: chap. 5). Elsewhere, I have argued that “global civil society” could represent a structure of actors and networks within which these new authorities emerge (Lipschutz, 1997c). As conventionally understood, civil society includes those political, cultural, and social organizations of modern societies that have not been established or mandated by the state or created as part of the institutionalized political system of the state (e.g., political parties). These groups are, nevertheless, engaged in a variety of political activities.16 Globalizing the concept extends this arrangement into the transnational arena, where it constitutes a protosociety composed of local, national, and global institutions, corporations, and nongovernmental organizations. Global civil society can be understood as shorthand for both the actors and networks that constitute a “new spatial mosaic of global innovation” (Gordon, 1995:196) and the growth in neofunctional authority resulting from a “proliferation” of political actors beyond, above, and beside the state. But civil society and the state are not formations independent of each other; “sovereignty-bound” and “sovereignty-free,” as James Rosenau (1990) has put it, are not fully dichotomous conditions. A state relies on some version of civil society for its legitimacy; a civil society cannot exist without the authority conveyed by the state, whether it is democratic or not. Indeed, we might go so far as to say that the two are mutually constitutive and derive their roles and identities from their relationship with each other.
Of course, civic associations such as bowling leagues do operate quite autonomously of the state, yet we would be hard put to claim that they have no relationship whatsoever with the state. The members of a bowling league play according to “official” rules, wearing clothes and using equipment that have been vetted for safety by state officials, paying with money printed by the state, playing in a building whose every basic feature has been mandated by state regulations and constructed by virtue of state-granted permits, having arrived there in licensed vehicles on roads built by the state, etc., etc. By the same token, the state is continually reproduced by the beliefs and practices of civic associations, whether or not they are overtly political. The bowling league does little or nothing that deliberately and consciously helps to reproduce the state—aside, perhaps, from its individual members paying taxes and respecting its institutions. Nevertheless, the normal activities of the league’s members serve to reproduce the normal existence of those conditions that legitimate the state. Bowlers, after all, rarely blow up state buildings or take up arms against the government (and were they to do so, they might actually intensify the state’s presence and legitimacy). None of this is to argue that bowling leagues are authoritative, neofunctional entities, but neither are they mere venues for throwing balls and drinking beer. Why does this matter? It matters because declining state authority will, in all likelihood, be supplemented or replaced by, or sublimated in, some kind of alternative political framework, which could be similar to a world state or very different. The late Richard Gordon’s (1995) research suggested that the relationship of production to politics, and the politics of production, are changing rather radically from what they once were.
The strategies of corporate actors and other holders of capital take less and less cognizance of the residual authority and power of individual states to regulate them. More and more, they engage in individual and collective attempts to self-regulate (as, for example, in the multifarious activities of the International Organization for Standardization—ISO) or to generate supranational regulation (as in the World Trade Organization). These findings point also toward the fact that political community—even a federal state—is not restricted to discrete levels of government. As Theda Skocpol (1985:28) points out: On the one hand, states may be viewed as organizations through which official collectivities may pursue collective goals, realizing them more or less effectively given available state resources in relation to social settings. On the other hand, states may be viewed more macroscopically as configurations of organizations and action that influence the meanings and methods of politics for all groups and classes in society. Skocpol offers here a conception of the state that is, perhaps, too broad in encompassing society, but her point is, in my view, an important one. The state is more than just its constitution, agencies, rules, and roles, and it is embedded, as well, in a system of governance. From this view, state and civil society can be seen as mutually constitutive and, where the state engages in government, civil society often plays a role in governance. What is striking, especially in terms of relationships between nongovernmental organizations and institutionalized mechanisms of government, as well as capital and international regimes, is the growth of institutions of governance at and across all levels of analysis, from the local to the global (see, e.g., Leatherman, Pagnucco, and Smith, 1994: esp. pp. 23–28).
This growth suggests, to repeat the argument made above, that even though there is no world government, as such, there may well be an emerging system of global governance. Subsumed within this system of governance are both institutionalized regulatory arrangements—some of which we call “regimes”—and less formalized norms, rules, and procedures that pattern behavior without the presence of written constitutions or material power.17 This system is not a “state,” as we commonly understand the term, but it is statelike, in Skocpol’s second sense. Indeed, we can see emerging patterns of behavior in global politics based on alliances between coalitions in global civil society and various international governance arrangements (see, e.g., Wilmer, 1993, on indigenous peoples alliances). What constitutes the equivalent of citizenship in such a system of global governance? The interests of transnational capital are represented, to some degree, in the international financial regimes but, so far, there is little if any global regulation for the rest of us (Lipschutz, 1999d). There are no mechanisms for representation of anything other than nation-states (although a number of groups and organizations do have observer status in the United Nations General Assembly). There are only a very few judicial fora in which actors other than nation-states can bring international legal actions (although, again, this situation is slowly changing). The idea of the “world” citizen is a rather empty one, while arguments about global “cosmopolitanism” rarely acknowledge just how few are the members of this class. For the moment, therefore, the answer to this question is less than clear. In the longer term, however, we might expect to see the issues of membership and representation become central to support for the institutions and mechanisms of global governance. 
Counterhegemonic Social Movements

A second alternative is related to global civil society but focuses more on the emergence of what are called "counterhegemonic social movements." Robert Cox (1987) and Stephen Gill (1993, 1994), among others, have used a Gramscian framework to speculate on the political possibilities of organized opposition to the hegemonic tendencies of global capital and authority. They argue, in essence, that contemporary progressive social movements represent social forces challenging the "historic bloc" that comprises the contemporary nexus of power rooted in states and capital. According to Gill (1994:195):

Counter-hegemonic social movements and associated political organizations must mobilize their capabilities and create the possibility for the democratization of power and production. . . . [W]e might witness a 1990s version of Polanyi's "double movement" as social movements are remobilized and new coalitions are formed to protect society from the unfettered logic of disciplinary neoliberalism and its associated globalizing forces.

How these challengers will proceed is somewhat less clear, although Cox, Gill, and others believe that the growing discursive power of "organic intellectuals" may play a central role in mobilizing these movements. There is nothing new about organic intellectuals, per se; what is new is the scale at which they must do their work. Michael Mann (1993), drawing on the writings of Antonio Gramsci, has written about the emergence of national states in Europe and North America during the 1700s and 1800s, and the economic, political, and social revolutions and changes that took place throughout the "long" nineteenth century.
Put briefly, Mann sees the rise of organic intellectuals, who played an essential role in the creation of the modern state, as central to the transition from royal to popular sovereignty and contemporary conceptions of citizenship (they were all men, and were seeking to establish new loci of authority, so it is not surprising that citizenship was defined largely in male terms). They filled primarily a discursive role in a gradual process of social change, by developing and articulating the ideas and practices that animated the political and social upheavals of those times. Mann observes that, while material interests and needs were always central to popular mobilization, emotional and ideational incentives were at least as important. More than this, the ideas and arguments of the organic intellectuals were framed in terms of "progress," promising a better future through political, economic, and social reorganization. Nationalism, liberalism, socialism, and other "isms" that reified the strong state were teleological ideologies produced by these organic intellectuals. Without their communicating these arguments and putting them into practice, the nineteenth century might have been a much quieter, but less democratic, time. As it was, the centralized nation-states that have dominated world politics for the past century were, for better or worse, legitimated by the ideas of these intellectuals, if not constructed by them. As Mann (1993:42) has argued:

Capitalism and discursive literacy media were the dual faces of a civil society diffusing throughout eighteenth-century European civilization. They were not reducible to each other, although they were entwined. . . . Nor were they more than partly caged by dominant classes, churches, military elites, and states, although they were variably encouraged and structured by them. Thus, they were partly transnational and interstitial to other power organizations. . . .
Civil societies were always entwined with states—and they become more so during the long nineteenth century.

Here I would propose that the "organic intellectuals" that operate within these counterhegemonic social movements constitute a transnational cadre that could help to create the "double movement" discussed by Polanyi and Gill. I do not refer here to populist opposition to globalization, as put forth by both the left and the right. Such movements seek to restore the primacy of the nation-state in the regulation of spheres of production and social life, although they have rather different ideas about the ends of such a restoration. Rather, I refer to more nuanced critiques of current modes of transnational regulation and their lack of representation, transparency, and accountability. Globalization offers a space for political organizing and activism of which these organic intellectuals and the mobilizers and members of nascent political communities are well-positioned to take advantage.

Political Deterritorialization

Can we imagine political arrangements in which citizenship is possible yet not dependent on territorial units, such as the nation-state? A deterritorialized political community would have to be based not on space, but on flows; not on where people live, but what links them together. That is, the identity between politics and people would not be rooted in a specific piece of reified "homeland" whose boundaries, fixed in the mind and on the ground, excluded all others. Instead, to slightly revise Michael J. Shapiro, this identity would be based on "a heterogenous set of . . . power centers integrated through structures of communication" (Shapiro, 1997:206) as well as knowledge and practices specific to each community. Kathy Ferguson (1996:451–52) argues that collective identity based on control of territory sponsors a zero-sum calculation: either we belong here or they do.
One can imagine collective identities that are deterritorialized, knit together in some other ways, perhaps from shared memories, daily practices, concrete needs, specific relationships to people, locations, and histories. Such productions would be more narrative than territorial; they might not be so exclusive because they are not so relentlessly spatial. Connection to a particular place could still be honored as one dimension of identity, but its intensities could be leavened by less competitive claims. Participation in such identities could be self-consciously partial, constructed, mobile; something one does and redoes every day, not a docile space one simply occupies and controls. Empathy across collective identities constructed as fluid and open could enrich, rather than endanger, one's sense of who one is.

This is, perhaps, the most difficult political alternative to conceptualize, and we do not yet have much to go on. There is much talk, these days, of "virtual communities," composed of "netizens" linked through the Internet and the World Wide Web, but this is hardly a political entity, even in its broadest definition (e.g., Aizu, 1998). While the networks through which individuals and groups are connected provide conduits for communication of knowledge and patterns of behavior, collective political action in specific places is mediated by the networks. The idea of a web-based political community acting collectively (as opposed to wielding influence or lobbying political authorities) remains problematic (Lipschutz, 1996: chap. 3). A deterritorialized political community, it seems to me, must have a much stronger material base (Lipschutz, 1996: chap. 7). Iris Marion Young (1990:171) develops one such idea, offering an "egalitarian politics of difference," a "culturally pluralist democratic" politics through which "Difference . . .
emerges not as a description of the attributes of a group, but as a function of the relations between groups and the interactions of groups with institutions." These groups would be social groups, that is, "collective[s] of people who have an affinity with one another because of a set of practices or way of life." Such groups would be provided with "mechanisms for the effective recognition and representation of distinct voices and perspectives of those . . . constituent groups that are oppressed or disadvantaged" (Young, 1990:186, 184). Young proposes (1990:184) that such group representation implies

institutional mechanisms and public resources supporting (1) self-organization of group members so that they achieve collective empowerment and a reflective understanding of their collective experience and interests in the context of society; (2) group analysis and group generation of policy proposals in institutionalized contexts where decisionmakers are obliged to show that their deliberations have taken group perspectives into consideration; and (3) group veto power regarding specific policies that affect a group directly.

Such arrangements cannot, of course, be created ex nihilo. In the United States, the already-existing material structures are, Young argues, "large-scale industry and urban centers." Within this context, "neighborhood assemblies [could be] a basic unit of democratic participation, which might be composed of representatives from workplaces, block councils, local churches and clubs, and so on as well as individuals" (Young, 1990:234, 252). Saskia Sassen (1994, 1998) makes a somewhat similar argument in her work on global cities. Although this vision is attractive and well within our capabilities to pursue, there are, nonetheless, both conceptual and practical problems that remain to be addressed.
For example, it appears that, in Young's vision, such urban assemblies would remain part of a larger national unit, to which they would, presumably, profess some kind of loyalty and whose authority they would recognize as final. Sociologically, in the absence of national redistribution, dependence on locally available resources would render some cities relatively rich while others would be forced to struggle along in poverty (much as is the case within and between cities and countries today; for a critical look at notions of local autonomy, see Lipschutz, 1991). Without some kind of national group- or city-based assembly with real political power, national authorities would tend to ignore cities, as they already often do in the United States. Finally, how would such a scheme play out in other countries? None of this is to suggest that a city or region-based political system could not emerge in parallel to the state system; some writers, such as Kenichi Ohmae (1995), propose that this is already happening. There is a long history of successful city-states as well as city-based leagues, and some cities and groups of cities are deliberately reviving those forms (albeit for mostly economic reasons). Many cities already pursue their own "foreign policies," both economic and political. And, the urban political machines of the late nineteenth and early twentieth centuries certainly provide a model for a city-based, sociological conception of citizenship. Whether the city can provide the basis for global democratization, citizenship, and new foci of authority—especially when capital is so footloose and fancy-free and cities are competing with each other for capital investment like neighboring countries buying military weapons (see chapter 7)—will be demonstrated only through theory and activism.

One World or Many?
Do the foregoing notions suggest a global future that might, just possibly, be less dismal than that of the realists and catastrophists? Perhaps. But such a future will not happen without deliberate action. In the final chapter of The Great Transformation, Polanyi pointed, once again, to the way in which the loosing of the self-regulating market on nineteenth-century society in the interests of certain elites led to the inevitable destruction of that society:

Nineteenth century civilization . . . disintegrated as the result of . . . the measures which society adopted in order not to be, in its turn, annihilated by the action of the self-regulating market. . . . [T]he elementary requirements of an organized social life provided the century with its dynamics and produced the typical strains and stresses which ultimately destroyed that society. External wars merely hastened its destruction. (Polanyi, 1944/1957:249)

He (1944/1957:254) nevertheless ended on a brighter note, foreseeing after World War II "economic collaboration of governments and the liberty to organize national life at will" (emphasis in original; this is a formula that sounds very much like John Ruggie's embedded liberalism; 1983a, 1991, 1995). This would require

freedom to be extended and maintained under unbreakable rules. Juridical and actual freedom can be made wider and more general than ever before; regulation and control can achieve freedom not only for the few, but for all. Freedom not as an appurtenance of privilege, tainted at the source, but as a prescriptive right extending far beyond the narrow confines of the political sphere into the intimate organization of society itself. (Polanyi, 1944/1957:256)

How might this be accomplished under contemporary circumstances? In keeping with Polanyi's hopes for the post–World War II period, we should recognize the opportunities inherent in the Great Transformation now underway.
During this post–World War II (née Cold War) period, we could be witness to the democratization of societies and states through mechanisms of global governance and the proliferation of authorities, as well as the emancipation of peoples and cultures as states lose their historical roles as defensive containers and iron cages and become distinctive and diverse communities within a global society. The path to such emancipation will require our active involvement at all levels of politics and government, an involvement that must go beyond parties, elections, and indirect representation. Following this path also suggests that we will need to rethink the notion of "citizen" and "citizenship," and their relationship to new "authorities," as I suggested above. In the majority of democratic societies across the planet today, to be a citizen involves the exercise of a few civic duties mostly done grudgingly, if done at all, and a growing unwillingness to contribute to the general well-being of the society in which s/he lives. Returning to the argument I made in the opening paragraph of this book, it is easy to see why this is so. As the state has lost interest in the citizen, letting the market get the upper hand, the loyalty of the citizen to the state has weakened. This is not necessarily a bad thing, but it has served to undermine bonds of community and social reciprocity and, in so doing, fostered a host of "solutions" that only exacerbate atomization and alienation. The "ethnic" or "sectarian" solution to this problem—the creation of ever smaller and purer states—hardly seems viable in the longer term. The "globalist" answer—a world state or a planetary ethos—assumes what Dan Deudney (1993) has called "earth nationalism," one that will be broadly shared by all 6 or 10 billion of the world's present and future inhabitants.
World federalists propose some combination of the two, akin to the civic identity we see in Catalonia, "a country in Europe" and a province in Spain. In speculating on the possibilities of citizenship under globalization, and the consequences for democracy and representation under just and equitable authority, perhaps it is best to return to the notion of "neomedievalism." While the medieval world is hardly an attractive model on which to base a politics of the future, it nonetheless offers certain features worthy of notice. Power differentials were extreme and hierarchy was nearly absolute, but clients and patrons were enmeshed in a web of mutual rights and duties that bound them together and that could be called upon in specific situations. Moreover, the networks of relations and loyalties linking individuals were not all territorially based; as Wæver (1995) notes, there was "a set of overlapping authorities," some of which had little to do with space. The multiple levels of "citizenship" developing in Europe, alluded to above, represent only one possible form of political community. In a global political system of the future, we could imagine many political communities, some based on place, others on affiliations, but linked relationally rather than through domination by or loyalty to a single power. Such communities might be material as well as virtual, possessing, for example, the power to tax members, represent them in various political assemblies, and engage in functional activities such as provision of certain welfare services, environmental conservation, and education. A member, in turn, would hold "citizenship" in the community—and could simultaneously be a citizen of many such communities. Such citizens could be called on to serve their communities in specific political roles both within and in relation to other communities.
Indeed, opportunities for "public service" at this scale might very well generate an efflorescence of involvement in democratic politics, to the benefit of all. This will not happen automatically, nor is it likely to come about through the decisions of states and capital; agency is essential. Human beings are not bound to endlessly reproduce the forms and problems of the past, nor are they complete prisoners of the logics of the present. We are constrained by our histories, of course, and what we can do as a result might not always be what we would like to do. Nonetheless, along with constraints come opportunities and the possibility of imagining choices and then making them. More than ever, it is important to both individual and global politics that we recognize those choices and make them carefully. Our future might be better or worse as a result, but at least it will be a future that we have chosen.

Notes

Chapter 1. Theory of Global Politics

1. This is what happened in the old Soviet Bloc; today, it is taking place, as well, in the West.

2. I realize that the first date is arguable. I am using poetic license here.

3. Statements by former Soviet general Alexander Lebed, in September 1997, that one hundred nuclear "suitcase bombs" have gone missing in Russia can only further stir up fears along these lines. It is puzzling, however, that, if they are truly unaccounted for, none have turned up in the hands of miscreants. Are they, perhaps, in the possession of the United States?

Chapter 2. The Worries of Nations

1. Not everyone takes a dim view of the workhouses. Gertrude Himmelfarb (1995), for example, believes that ending the Poor Laws made people responsible for their individual well-being and fate. The concept of family solidarity has been examined by Francis Fukuyama (1995a, 1995b).

2. The Triffin Dilemma arose because U.S. dollars in the possession of foreign countries could be exchanged for gold.
Unfortunately, by 1960, the United States did not hold enough gold to redeem all of the dollars in circulation abroad. Were the demand for gold to outstrip U.S. gold stocks, the role of the dollar as an international reserve currency could be undermined or destroyed.

3. Again, it is important to recognize that the "self-regulating market" is a fiction; it must be supported by implicit or explicit agreements regarding rules of operation (Attali, 1997).

4. A conventional security-based account can be found in Gaddis (1982; 1987). An economic account is Pollard (1985). A revisionist economistic account can be found in McCormick (1996). A sophisticated and insightful analysis of the process discussed is Gill (1993: esp. 30–34).

5. In essence, this is the core of the so-called Washington consensus, the increasingly popular argument that democracies do not go to war with each other. For a critical assessment of this claim, see Mansfield and Snyder (1995).

6. The dollar was exchangeable for gold at the rate of $35/ounce. Americans could not hold gold bullion and only governments could officially request gold for their dollars. At this rate of exchange, Fort Knox held about $10 billion in gold.

7. Charles Tilly said "The state made war and war made the state." After World War II, the state made Cold War and Cold War made the state.

8. This was not the only reason underlying the extension of civil rights to African Americans and the implementation of affirmative action, of course. There was also a fear of urban revolt and a desire to show the world that the United States did not oppress its minorities.

9. This continues to be the case today, as evidenced by the high proportion of non-U.S. citizens receiving doctorates in scientific and engineering fields. According to Joseph Nye and William Owens (1996:29), "American higher education draws some 450,000 foreign students each year."

10.
For example, the Soviet Union's MIG fighters were as good or better than anything the United States had to offer, but their avionics used vacuum tubes rather than semiconductor devices. Tubes offered greater protection against the electromagnetic pulses associated with nuclear detonations, but the Soviets used them because they could not miniaturize the electronics.

11. The rise of the behavioralist model in the social sciences was part of this process, too.

12. To be entirely fair, Buchanan put the blame on free trade; that his analysis was, at best, partial and at worst, completely wrong, does not invalidate my argument here.

13. People might be offered equal opportunities to succeed, although even this is difficult to accomplish in practice. Even so, not everyone will seize those opportunities and succeed.

14. A fascinating essay on the commodification of consumer shopping habits can be found in Gladwell (1996), "The Science of Shopping." It is a simple matter to link the bar codes in a shopping cart to the name and address on the check or ATM card proffered in payment, enter them into a database, and sell the resulting information to the appropriate companies.

15. Stephen Kobrin (1997) argues that "e-money" poses a threat to the most fundamental perquisite of the state: taxation.

Chapter 3. The Insecurity Dilemma

1. Although, as a graduate student project during the mid-1980s, I tried to fit U.S. and Soviet nuclear missile deployments to different types of differential equations. I found that competition between the U.S. Navy and U.S. Air Force better explained the growth of American nuclear arsenals than did an arms race with the Soviet Union.

2. The hammer-nail conundrum is usually attributed to Abraham Maslow, who was supposed to have observed that "if all you have is a hammer, everything begins to look like a nail."

3. For an interesting list of "micronations" and hyperlinks to them, see.

4.
Border studies is a rapidly growing field; there is a Centre for Border Studies at the University of Wales, complete with peer-reviewed journal.

5. Ken Waltz attacked the idea of "peace through interdependence" almost thirty years ago, in "The Myth of National Interdependence" (1971).

6. Twenty years ago, Stephen Krasner (1978) and others argued that policymakers really did represent the singular interests of an autonomous actor called the "state." But even "strong" states no longer appear so unitary as they once might have been.

7. The intersubjectivity of national-security policy was never noted at this time. Threats were assumed to be real and objective; the state was assumed to protect society rather than itself. Under conditions of mutual assured destruction (MAD), preparations made for the continuity of state and government in the event of nuclear attack would have resulted, in all likelihood, in a state with no society to govern.

8. Although the literature on "redefining security" has proliferated over the past ten years, the two defining articles are probably Richard H. Ullman (1983) and Jessica Tuchman Mathews (1989).

9. Useful comparisons can be found in the Japanese and German economic spheres of the 1930s and 1940s; for a discussion of the latter, see Hirschman (1980).

10. It does not qualify as bona fide structural adjustment because the dollar remains dominant in the global economy and the United States has not yet been forced to reduce its budget deficits. But, it would be interesting to compare the effects of somewhat similar policies on labor in the United States and former Socialist countries.

11. For example, "electronic classrooms" may make it possible for one professor to lecture to many classrooms at the same time, thereby reducing labor costs for universities. See Marshall (1995b) and The Economist (1995a).

12. This is Huntington's (1997) argument, as well. The question is whether the "loss of enemies" is really such a problem.

13.
Note that one of the most wide-ranging of such incidents to date occurred as the result of satellite failure. For a couple of days, tens of millions of pagers and thousands of computerized gas pumps went off the air. For an insightful analysis of what can and does go wrong with computer-run complex systems, see Rochlin (1997).

14. As Karl Marx said, "Adam Smith's contradictions are of significance because they contain problems which it is true he does not resolve, but which he reveals by contradicting himself."

15. Of course, "reality" is a loaded word. Inasmuch as the world and its condition are described by language, there are limits to a truly objective description. You and I can agree that that thing over there is a tank, but I say it is for defensive purposes only, while you say it is for offensive purposes.

16. These are questions ordinarily not asked. Either definitions of security are taken to be objective and nonproblematic, or the state is reified even as security and anarchy are treated as intersubjective constructs.

17. This is, in essence, the argument put forth by Alexander George in Bridging the Gap (1993), although he does acknowledge that images of the enemy are often inaccurate and that acting on such images may lead to undesirable outcomes. A rather different perspective is offered by Smithson (1996).

18. In other words, the enemy, and the threat it presents, possess characteristics specific to the society defining them. See, e.g., Weldes (1992), Lipschutz (1989), and Campbell (1990, 1992).

19. To this, the realist would argue: "But states exist and the condition of anarchy means that there are no restraints on their behavior toward others!
Hence, threats must be material and real." As Nicholas Onuf (1989), Alex Wendt (1992), Mercer (1995), and Kubálková, Onuf, and Kowert (1998) have argued, even international anarchy is a social construction inasmuch as certain rules of behavior inevitably form the basis for such an arrangement (Lipschutz, 1992a).

20. This dialectical process is discussed rather nicely, albeit in a different context, by Harvey (1996).

21. This contradiction was apparent in the initial landing of U.S. Marines in Somalia in December, 1992. Demonstrably, there was a question of matching force to force in this case, but the ostensible goal of humanitarian assistance took on the appearance of a military invasion (with the added hyperreality of resistance offered only by the mass[ed] media waiting on shore).

22. Ordinarily, this dialectic might be expected to lead to a new social construction of or consensus around security. As I suggest below, for the United States at least, the contradictions are so great as to make it unlikely that any stable consensus will be forthcoming. See also Lipschutz (1999b).

23. This is not, however, to imply that state maintenance is the actual goal. Rather, the constructing of a nontraditional threat to security was seen during the last few years of the Cold War as a way of shifting resources away from the military and toward more socially focused needs. Some in the military—e.g., the Army Corps of Engineers—welcomed this shift as a way of redefining their mission, perhaps creating a "Green Corps" to send ashore in countries under environmental siege. For a detailed discussion, see Litfin (1998).

24. Often, borders are drawn down the middle of rivers running through valleys because they make such visible and convenient markers. Difference is thereby established even as the water and terrain on both sides are indistinguishable.

25.
Star Wars would have drawn a line—or a surface—in the sky, a dome within which the self would be secure and secured, and outside of which would remain the eternal threat of the Other, but few believed that such a surface could be made, much less made secure (see chapter 7).

26. Now, threats emerge because the lines of security, drawn around Russian nuclear facilities, have literally dissolved, allowing fissile materials to become commodified and objects of exchange. In the market, there are no boundaries, only risks.

27. Although these are, apparently, what the United States has proposed as NATO's new objectives (Erlanger, 1998).

Chapter 4. Arms and Affluence

1. Another book by the same name is Oakes (1994).

2. Indeed, this point is demonstrated repeatedly in the opprobrium incurred by Bill Clinton for never having served (or being seen to have wanted to serve) in the military.

3. Indeed, the essential task of deterrence was to convince the other that they would be used, although one would never want to get to the point that they might be used. A typical bit of scenario building can be found in Paul Nitze's famous 1976 article, "Deterring our Deterrent." For a full-blown exegesis of this point, see Luke (1989).

4. The term "Finlandization" is worthy of an entire paper in itself. One used to hear people say that to be like Finland would not be so bad; today, no one wants to be like Finland, which is in an economic slump brought on by the collapse of trade with the former Soviet Union.

5. There was, at the time, some controversy over why the Soviets had put the SS-20s into Eastern Europe. On the one hand, some argued that it was done to take advantage of the escalatory gap. On the other hand, some pointed to the deployment as simply the arcane workings of the Soviet military-industrial complex, which had taken one stage off of an unsuccessful, solid-fueled intercontinental ballistic missile, thereby turning it into a working intermediate-range one.
The latter argument would, of course, have implied a state beset by bureaucratic conflict and irrationality, rather than one bent on conquering the West.

6. It might be noted, in passing, that the eventual impacts of the SS-20s and Euromissiles were greater at home than in enemy territory. The waves of protest against the missiles in the West were viewed with great alarm in many NATO capitals. In the East, the episode was the occasion of growing contacts between Western peace activists and Eastern dissidents that, in the long run, must have contributed to the revolutions of 1989 and 1991. See, e.g., Meyer (1993).

7. Retaliation against Sudan and Afghanistan for the August 1998 bombings of U.S. embassies in Kenya and Tanzania provides one answer to these questions, although it is less clear whether this had the intended effect.

8. In addition to the Iraqi invasion of Kuwait, the number of clear and blatant invasions of one country's territory by another since 1950 is small by comparison with minor border incursions and civil conflicts: Korea (1950), Southeast Asia (1950s–1970s), the Six Day War (1967), the Indo-Pakistani War (1971), the October War (1973), the Ogaden War (1977), Vietnam's invasion of Cambodia and the Chinese riposte (1979), the Soviet invasion of Afghanistan (1979), Israeli invasions of Lebanon (1976, 1982), Grenada (1983), Panama (1989).

9. The notion of "irrationality" tends to blend into cultural explanations, whereby irreconcilable differences among cultures become the provocation to conflict. On this point, see especially Huntington (1996).

10. For a somewhat distorted but nonetheless interesting exploration of the impact of cultural differences on diplomacy, see Cohen (1991).

11. Indeed, the dance has begun to seem like farce, to the point that a character on The X-Files (December 6, 1998) can plausibly claim that Saddam Hussein is a guy from Brooklyn who, put into power by the CIA, rattles his sabers whenever the U.S.
government requires public distraction from other matters.

Chapter 5. Markets, the State, and War

1. This is not the precise wording of the document referred to. There, the author (Serageldin, 1995:2) writes, “Agreement on access to water is an important part of the peace accords between Israel and its neighbors. . . . As populations and demand for limited supplies of water increase, interstate and international frictions over water can be expected to intensify.”
2. To name just a few: Starr and Stoll, 1988; Starr, 1991, 1995; Beschorner, 1992; Lowi, 1992, 1993, 1995; Bulloch and Dawish, 1993; Kally with Fishelson, 1993; Hillel, 1994; Isaac and Shuval, 1994; Gleick, 1994; Murakami, 1995; Wolf, 1995.
3. “Global deficiencies and degradation of natural resources, both renewable and non-renewable, coupled with the uneven distribution of these raw materials, can lead to unlikely—and thus unstable—alliances, to national rivalries, and, of course, to war” (Westing, 1986: introduction).
4. “We are . . . talking about maintaining access to energy resources that are key—not just to the functioning of this country but the entire world. Our jobs, our way of life, our own freedom, and the freedom of friendly countries around the world would suffer if control of the world’s great oil reserves fell into the hands of Saddam Hussein” (President George Bush, 1990).
5. Although many scholars argue that wars between Israel and its neighbors have been about water, the evidence in support of this unicausal explanation remains thin.
6. The best example of this was the struggle over Alsace-Lorraine between France and Germany. Only the shedding of the nation’s blood could redeem the lost pieces of the organic nation-state. See Elias (1994).
7. As opposed to political geography, which studies the “relationship between geographical factors and political entities” (Weigert et al., 1957).
Geography can be changed, of course, as evidenced for example by the case of the Panama Canal. Oddly, perhaps, the canal served to enhance American power—it was now possible for the Navy to move from one ocean to the other more quickly—while also exacerbating vulnerability: any other power gaining access to the canal could now threaten the opposite U.S. coast more quickly.
8. The dictum was: “Who rules East Europe commands the Heartland; Who rules the Heartland commands the World-Island; Who rules the World-Island commands the World” (Mackinder, 1919/1962:150; see also Mackinder, 1943).
9. Interestingly, as I noted in chapter 3 and discuss in chapter 6, culture has become the most recent refuge for many of those international relations scholars who are unable to account otherwise for the vagaries of world politics; see, e.g., Fukuyama (1995b) and Huntington (1996).
10. More recent expressions of this still-common view can be found in Choucri and North (1975) and Organski and Kugler (1980).
11. Some writers, such as Richard Dawkins (1989), have gone so far as to argue that the appropriate unit of competition and survival is the individual “gene,” and that humans (and, presumably, other species) are only containers for them. Of course, by that argument, bacteria and viruses are probably “bound to win.”
12. Principle 21 enjoins states to recognize the “responsibility to ensure that activities within their jurisdiction or control do not cause damage to the environment of other States or of areas beyond their national jurisdiction.” Cited in Nanda (1995:86).
13. Do we ever speak of “ecological interdependence” with, say, our children, spouses, significant others, or existing between California and Nevada?
14. Some have noted, of course, that renewable resources are not subject to this particular economic logic, inasmuch as their flows are large and their stocks small or nonexistent.
But economists still argue that markets can prevent unsustainable depletion through the price mechanism. Unfortunately, by the time prices rise sufficiently to impel substitution, the renewable resource may be depleted beyond recovery.
15. The 1997–98 El Niño illustrated this proposition rather nicely: in the ocean, many species could follow the food from their normal feeding grounds, and blue whales, tuna, and marlin were seen or caught off the coast of Northern California. Anchovy, being less able to go with the flows, died in droves off the coast of Peru.
16. And the invocation to “free up” markets does little to address the immediate needs of those who have no food.
17. The “double hermeneutic” is a term used by Anthony Giddens to describe how scholars use the behavior of policymakers to formulate theories, and how policymakers, in turn, behave according to the dictates of theory; see the discussion in Dessler (1989).
18. Nor did they recognize that, given the nature of oil markets, even control of oil would not have prevented a generalized increase in prices; see Lipschutz (1992a).
19. Parallel to the argument about old married couples, divorce constitutes an effort to reestablish independence but usually serves to illustrate just how difficult it is to completely sever the bonds of matrimony.
20. This is best seen in discourses about population. The rich consume much more than the poor, but it is the rapidly growing numbers of poor who the rich fear will move north and cross borders (thereby ignoring the fact that the rich moved south centuries ago).
21. This ignores the obvious point that human activities have never been “neatly compartmentalized.”
22. The Newtonian “harmony of the spheres” remains with us today, even as ecologists warn us that ecosystemic balance and stability do not, for the most part, exist.
23. This is described as the “quantitative fallacy” by David Hackett Fischer (1970:90).

Chapter 6. The Social Contraction

1.
The recent literature on ethnic conflict is enormous. Among recent contributions are Lake and Rothchild (1998) and Crawford and Lipschutz (1998).
2. Although he does not subscribe to this logic, R. B. J. Walker (1992) provides numerous insights into the contradictions and pitfalls of territory and sovereignty in Inside/Outside: International Relations as Political Theory.
3. Although it is a rather crude measurement, a search of the University of California’s Melvyl bibliographic database under the subject category “ethnic relations” turned up 5,761 citations between 1987 and 1997, as compared to 3,197 prior to 1987 (with most of those being published during the 1980s).
4. “Stability” is obviously a tenuous concept. What appears to the outside or historical observer to be stable is usually quite dynamic. See, for example, the semifictional account of Visegrad, Bosnia, in Ivo Andric, The Bridge on the Drina (1977).
5. In the U.S. context, Huntington and others call it the “American Creed.” See also Lipschutz (1998b).
6. Culture can be understood as a form of social contract within a group of people that shares certain types of social characteristics. Usually, such a form is called “tradition” or “custom.”
7. The notion of actor choice in a structured context is discussed in Long and Long (1992).
8. V. P. Gagnon disagrees with me on this point; personal communication. See his article (Gagnon, 1995) as well as Woodward (1995).
9. Such rents accrue even in the absence of “corruption.” For example, on September 15, 1997, “first student” Chelsea Clinton arrived at Stanford University, after having flown from Washington, D.C. on Air Force One with her parents. Surely this perk is available to very few other college freshpersons.
10. On this point, Kenichi Ohmae is correct; see The End of the Nation State (1995).
11. By this I mean that in any one location, there are economic systems of local, regional, national, transnational, and global extent.
These are linked but not all of a single piece. Thus, for example, Silicon Valley is tightly integrated into the “global” economy, but some of its inhabitants are also participants in a service-based economy that, although coupled into global systems, is largely directed toward meeting “local” demand. For further discussions of the notion of “multiple” economies, see Gordon (1995). This section has also been informed by a conversation with Randall Germain of the University of Sheffield, April 20, 1996.
12. The term for such historical contingency is “path dependency.” See the discussion of this point in Krugman (1994a: chap. 9).
13. How intentional or fortuitous this was is, of course, the key question. Silicon Valley was hardly the product of chance; rather, it was the result of intentional mobilization of resources by the state in its pursuit of national security. The difficulty of establishing such a development pole is evidenced by the numerous failed research parks that litter the United States; the problems of maintaining a pole once established were illustrated by the relative collapse of the high-tech center on Route 128 around Boston in the late 1980s. Some of the difficulties facing policymakers who might like to repeat such mobilization are discussed in Crawford (1995).
14. The Microstate Network is at; the Micronations Page, at wwwl.execpc.com/~talossa/patsilor.htm.
15. One article (Hedges, 1995) on the Bosnian peace settlement suggested that “United Nations officials said that they expect NATO to initiate regional or neighborhood meetings to try and settle the complex claims and counterclaims that are sure to complicate the agreement.”

Chapter 7. The Princ(ipal)

1. Which is why more than $20 billion has been spent on antimissile defense research and development, and why several hundred million dollars and more continue to be spent on it each year.
2.
Susan Strange (1996) has also taken note of this phenomenon, but she ascribes it to the “retreat of the state.”
3. Note that this is hardly a new argument and that in making it, I do not propose the restoration of theocracy or a return to Victorian values, as proposed by Gertrude Himmelfarb (1995). I do, however, believe that norms and ethics are important; see Hirsch (1995) and the essays in Ellis and Kumar (1983).
4. I do not mean to imply that the Treaty of Westphalia actually was the means of accomplishing this; rather, it put the stamp of legitimacy on an arrangement that had been developing for some time.
5. Barry Buzan (1991) acknowledges this in his schema of anarchies ranging from “immature” to “mature,” but he retains survival in the state of nature as the rationale for movement toward greater maturity.
6. This point is evident, as well, in “traditional” societies and common-pool resource systems, where violation of the mutual bonds of obligation and responsibility can result in eviction from the community.
7. And this does not mean that we now subscribe to a secular order; see Bragg (1997).
8. The notion of “just war,” which represented an effort to impose morality on the conduct of war, does not contradict this argument, I think. Civilians were the subjects of the prince and his morality, not the source of that morality.
9. The Jews, who had earlier been expelled from England, were sufficiently powerless and few in number to make this practical; there were altogether too many Catholics, however, for either expulsion or extermination to be practical.
10. Thereby creating an inversion of Benedict Anderson’s (1991) “imagined communities,” which we might call “unimaginable communities.”
11. Although, as we see in claims being made against Switzerland and the former East Germany, extermination does not necessarily eliminate claims to property.
12.
One is left to wonder what might happen should we make contact with non-terrestrial life, whether intelligent or not. Recent films (Men in Black, Starship Troopers) suggest, in particular, that “bugs” are the enemy, although some, such as The Faculty, warn us about our familiars, as well; see Leary (1997).
13. That is not to say that domestic security was not a concern; the ever-vigilant search for ideological threats was pursued by a transnational network of intelligence and surveillance agencies whose capacity was often far in excess of any demonstrated need.
14. It was called the “Grand Area”; see Shoup and Minter (1977).
15. The distinction was never as great as claimed. The isolationists wanted to keep pernicious influences out; the internationalists wanted to keep them contained. Both aimed to avoid “contamination.”
16. The defection of Yassir Arafat from the bad bloc to the good bloc clearly demonstrates how membership in both has more to do with morality than power.

Chapter 8. Politics among People

1. I note here, as well, Susan Strange’s (1996) fierce attack on the notion of “global governance” in The Retreat of the State, which reminds us to always regard such neatly packaged concepts with a critical eye.
2. The very notion of cause and effect is rooted in the Enlightenment and the triumph of scientific reasoning, not to mention investment and rates of return. Even those who engage in risky, life-threatening activities expect to go back to work after their vacation is over.
3. The regime literature of the 1980s and 1990s (see, especially, Krasner, 1983) sought to discover and explain such patterned behavior among states in the “State of Nature.”
4. Admittedly, liberalism recognizes only the authority of a “watchperson state” that does not seek to regulate human behavior. Still, this requires a very narrow definition of “state” and a great divide between it and “civil society.”
5.
For a general overview of perspectives on civil society, see Walzer (1995), and Cohen and Arato (1992). For essays on governance, see Rosenau and Czempiel (1992).
6. James Rosenau (1990; 1997) has taken the contrary tendencies into account by theorizing “sovereignty-bound” and “sovereignty-free” actors. This, I think, does not capture the entire dynamic, in that some of the actors in the latter category would dearly love to move into the former.
7. At least, this is true in the political and policy realms; Polanyi still has an ardent following in both academia and intellectual circles.
8. The author of the Economist survey cited earlier (1997) argues that the source of international economic instability remains too much domestic regulation and government intervention.
9. This does not mean that small companies are not innovative; rather, that the owners of fixed property and small service-oriented businesses face high social costs relative to revenues and find it difficult to liquidate their assets and invest them elsewhere.
10. Rosenau and others have tagged the trend “glocalization,” although I find this term exceptionally grating.
11. Heteronomous: 1. Subject to external or foreign laws or domination; not autonomous. 2. Differing in development or manner of specialization, as the dissimilar segments of certain arthropods. My meaning here is the second, minus the detail about bugs.
12. The best-known discussion of the “new medievalism” is to be found in Bull (1977: 254–55, 264–76, 285–86, 291–94). The notion of “heteronomy” is found, among other places, in Ruggie (1983b: 274, n. 30). The term “heterarchy” comes from Bartlett and Ghoshal (1990), quoted in Gordon (1995: 181).
13.
More to the point, as I have noted before, the “market” is not a free-floating institution whose operation is guaranteed by the “laws of Nature,” as some would have it; it is underwritten by a set of embedded rules that are ideologically “naturalized” and that, consequently, seem to disappear.
14. In developing the following argument, I do not mean to ignore the growing body of literature by numerous scholars, both male and female, on the topic of feminism, gender, and international relations theory that has provided important insights into the constitution of world politics. See, for example, Tickner (1992), and Peterson and Runyan (1993).
15. I should note that this line of thought was triggered by Neil Easterbrook’s use of Butler’s work in “State, Heterotopia: The Political Imagination in Heinlein, Le Guin, and Delany” (1997).
16. By this definition, therefore, civil society includes social movements, various kinds of public interest groups, and corporations (although I am not explicitly discussing the last here), all of which do engage in politics of one sort or another. The state-civil society distinction is, sometimes, difficult to ascertain, as in the case of the World Wildlife Fund/Worldwide Fund for Animals and other similar organizations, which subcontract with state agencies.
17. This point is a heavily disputed one: To wit, is the international system so undersocialized as to make institutions only weakly constraining on behavior, as Stephen Krasner (1993) might argue, or are the fetters of institutionalized practices sufficiently strong to modify behavior away from chaos and even anarchy, as Nicholas Onuf (1989) might suggest?

Bibliography

Agnew, John, and Stuart Corbridge (1995). Mastering Space—Hegemony, Territory and International Political Economy. London: Routledge.
Aizu, Izumi (1998). “Emergence of Netizens in Japan and Its Cultural Implications for the Net Society.” Institute for HyperNetwork Society and GLOCOM.
Center for Global Communications, International University of Japan, at, May 8, 1998.
Allison, Graham (1971). Essence of Decision. Boston: Little, Brown.
Anderson, Benedict (1991). Imagined Communities: Reflections on the Origins and Spread of Nationalism. 2d ed. London: Verso.
Andric, Ivo (1977). The Bridge on the Drina. Trans. L. F. Edwards. Chicago: University of Chicago Press.
Angell, Norman (1910). The Great Illusion: A Study of the Relation of Military Power in Nations to Their Economic and Social Advantages. London: W. Heinemann.
Arenson, Karen W. (1998). “Questions about Future of Those Many Ph.D.’s [sic].” New York Times, November 11, national edition, p. A28.
Attali, Jacques (1997). “The Crash of Western Civilization—The Limits of Market and Democracy.” Foreign Policy 107 (summer):54–63.
Augelli, Enrico, and Craig Murphy (1988). America’s Quest for Supremacy and the Third World: A Gramscian Analysis. London: Pinter.
Banerjee, Sanjoy (1991). “Reproduction of Subjects in Historical Structures: Attribution, Identity, and Emotion in the Early Cold War.” International Studies Quarterly 35, no. 1 (March):19–38.
Barber, Benjamin R. (1995). Jihad vs. McWorld. New York: Times Books.
Barnet, Richard J. (1973). Roots of War—The Men and Institutions Behind U.S. Foreign Policy. Baltimore: Penguin.
Bartlett, C., and S. Ghoshal (1990). “Managing Innovation in the Transnational Corporation.” In C. Y. Doz and G. Hedlund (eds.), Managing the Global Firm (pp. 215–55). London: Routledge.
Beck, Ulrich (1992). Risk Society: Towards a New Modernity. Beverly Hills: Sage.
Bennett, William J. (1998). The Death of Outrage: Bill Clinton and the Assault on American Ideals. New York: The Free Press.
Bercovitch, Sacvan (1978). The American Jeremiad. Madison: University of Wisconsin Press.
Berend, Iván T., and György Ránki (1979). Underdevelopment and Economic Growth: Studies in Hungarian Social and Economic History. Budapest: Akadémiai Kiadó.
Bergsten, Fred (1996).
“Globalizing Free Trade.” Foreign Affairs 75, no. 3 (May/June):105–20.
Bernstein, Richard, and Ross H. Munro (1997). The Coming Conflict with China. New York: Knopf.
Beschorner, Natasha (1992). Water and Instability in the Middle East. London: Brassey’s for the International Institute for Strategic Studies.
Bhagwati, Jagdish (1993). “Trade and the Environment: The False Conflict?” In Durwood Zaelke, Paul Orbuch, and Robert F. Houseman (eds.), Trade and the Environment: Law, Economics, and Policy (pp. 159–60). Washington, D.C.: Island Press.
Biersteker, Thomas J., and Cynthia Weber (eds.) (1996). State Sovereignty as Social Construct. Cambridge: Cambridge University Press.
Block, Fred (1977). The Origins of International Economic Disorder. Berkeley: University of California Press.
Booth, Ken (1991). “Security in Anarchy.” International Affairs 67, no. 3: 527–45.
Boulding, Kenneth (1977). Stable Peace. Austin: University of Texas Press.
Bragg, Rick (1997). “Judge Lets God’s Law Mix with Alabama’s.” New York Times, February 15, national edition, p. A11.
Brass, Paul R. (1976). “Ethnicity and Nationality Formation.” Ethnicity 3, no. 3 (September): 225–239.
Broeder, John M. (1996). “Clinton Seeks $1.1 Billion to Fight Terror.” Los Angeles Times, September 10, p. A1.
Brown, Chris (1992). International Relations Theory—New Normative Approaches. New York: Columbia University Press.
Bull, Hedley (1977). The Anarchical Society. New York: Columbia University Press.
Bulloch, John, and Adel Dawish (1993). Water Wars: Coming Conflicts in the Middle East. New York: Victor Gollancz.
Bunce, Valerie (1985). “The Empire Strikes Back: The Evolution of the Eastern Bloc from a Soviet Asset to a Soviet Liability.” International Organization 39, no. 1 (winter):1–46.
Burdick, Eugene, and Harvey Wheeler (1962). Fail-Safe. New York: Dell.
Burnham, James (1941). The Managerial Revolution: What Is Happening in the World? New York: John Day.
Bush, President George (1990).
“Against Aggression in the Persian Gulf,” Dispatch 1, no. 1 (August 15) (Address to employees of the Pentagon, Washington, D.C.).
Butler, Judith (1990). Gender Trouble: Feminism and the Subversion of Identity. New York: Routledge.
Buzan, Barry (1991). People, States, and Fear. 2d ed. Boulder, Colo.: Lynne Rienner.
Campbell, David (1997). “ ‘Ethnic’ Bosnia and Its Partition: The Political Anthropology of International Diplomacy.” Paper prepared for presentation at the annual meeting of the International Studies Association, Toronto, Canada, March 18–22.
——— (1992). Writing Security. Minneapolis: University of Minnesota Press.
——— (1990). “Global Inscription: How Foreign Policy Constitutes the United States.” Alternatives 15:263–86.
Castells, Manuel (1996, 1997, 1998). The Information Age. 3 vols. Malden, Mass.: Blackwell.
Chase, Robert, Emily Hill, and Paul Kennedy (1996). “The Pivotal States.” Foreign Affairs 75, no. 1 (January/February): 33–51.
Choucri, Nazli, and Robert North (1975). Nations in Conflict. San Francisco: W. H. Freeman.
Clancy, Tom (1987). Red Storm Rising. New York: Berkley Books.
Clinton, President Bill (1999). “President Clinton’s Address on Airstrikes against Yugoslavia.” New York Times (March 24, 1999), at: 032599clinton-address-text.html (3/25/99).
——— (1997). “A National Security Strategy for a New Century.” Washington, D.C.: The White House, May, at: strategy97.htm.
Coats, A. W. (ed.) (1971). The Classical Economists and Economic Policy. London: Methuen.
Cohen, Eliot A. (1996). “A Revolution in Warfare.” Foreign Affairs 75, no. 2 (March/April):37–54.
Cohen, Jean L., and Andrew Arato (1992). Civil Society and Political Theory. Cambridge: MIT Press.
Cohen, Raymond (1991). Negotiating across Cultures. Washington, D.C.: U.S. Institute of Peace Press.
Cohen, Roger (1997). “For France, Sagging Self-Image and Esprit.” New York Times, February 11, national edition, p. A1.
Cox, Robert (1987).
Production, Power, and World Order: Social Forces in the Making of History. New York: Columbia University Press.
Crawford, Beverly (1995). “Hawks, Doves, but No Owls: International Economic Interdependence and Construction of the New Security Dilemma.” In Ronnie D. Lipschutz (ed.), On Security (pp. 149–86). New York: Columbia University Press.
——— (1993). Economic Vulnerability in International Relations. New York: Columbia University Press.
Crawford, Beverly, and Ronnie D. Lipschutz (1998). The Myth of “Ethnic Conflict”: Politics, Economics, and “Cultural” Violence. Berkeley, CA: International and Area Studies, University of California, Berkeley.
Crocker, Chester A., and Fen Osler Hampton, with Pamela Aall (eds.) (1996). Managing Global Chaos: Sources of and Responses to International Conflict. Washington, D.C.: U.S. Institute of Peace Press.
Crook, Stephen, Jan Pakulski, and Malcolm Waters (1992). Postmodernization: Change in Advanced Society. London: Sage.
Dahlem Workshop (1993). What Are the Mechanisms Mediating the Genetic and Environmental Determinants of Behavior? Twins as a Tool of Behavioral Genetics. Chichester/New York: Wiley.
Dalby, Simon (1995). “Neo-Malthusianism in Contemporary Geopolitical Discourse: Kaplan, Kennedy, and New Global Threats.” Paper prepared for presentation to a panel on “Discourse, Geography and Interpretation.” Annual meeting of the International Studies Association, Chicago, February.
——— (1990). Creating the Second Cold War: The Discourse of Politics. London/New York: Pinter/Guilford.
Davis, Bob (1994). “Global Paradox: Growth of Trade Binds Nations, but It Also Can Spur Separatism.” Wall Street Journal, June 20, Western ed., p. A1.
Davis, Christopher Mark (1991). “The Exceptional Soviet Case: Defense in an Autarkic System.” Dædalus 120, no. 4 (fall):113–34.
Dawkins, Richard (1989). The Selfish Gene. New ed. New York: Oxford University Press.
Dawson, Jane I. (1996). Eco-Nationalism.
Durham, N.C.: Duke University Press.
Der Derian, James (1996). “Eyeing the Other: Technical Oversight, Simulated Foresight, and Theoretical Blindspots in the Infosphere.” Talk given November 11, UC-Santa Cruz.
——— (1995). “The Value of Security: Hobbes, Marx, Nietzsche, and Baudrillard.” In Ronnie D. Lipschutz (ed.), On Security (pp. 24–45). New York: Columbia University Press.
——— (1992). Antidiplomacy: Spies, Terror, Speed, and War. Cambridge, Mass.: Blackwell.
Derluguian, Georgi M. (1995). “The Tale of Two Resorts: Abkhazia and Ajaria before and after the Soviet Collapse.” Berkeley, CA: Center for German and European Studies, University of California, Berkeley. Working Paper No. 6.2.
Dessler, David (1989). “The Use and Abuse of Social Science for Policy.” SAIS Review 9, no. 2 (summer–fall):203–23.
Deudney, Daniel (1995). “Political Fission: State Structure, Civil Society, and Nuclear Weapons in the United States.” In Ronnie D. Lipschutz (ed.), On Security (pp. 87–123). New York: Columbia University Press.
——— (1993). “Global Environmental Rescue and the Emergence of World Domestic Politics.” In Ronnie D. Lipschutz and Ken Conca (eds.), The State and Social Power in Global Environmental Politics (pp. 280–305). New York: Columbia University Press.
——— (1990). “The Case against Linking Environmental Degradation and National Security.” Millennium 19, no. 3 (winter):461–76.
Drainville, André C. (1996). “The Fetishism of Global Civil Society: Global Governance, Transnational Urbanism and Sustainable Capitalism in the World Economy.” Paper presented at the annual convention of the American Political Science Association, San Francisco, Calif., August 29–September 1.
——— (1995). “Of Social Spaces, Citizenship, and the Nature of Power in the World Economy.” Alternatives 20, no. 1 (January–March):51–79.
Drell, Sidney D., Philip J. Farley, and David Holloway (1985). The Reagan Strategic Defense Initiative: A Technical, Political, and Arms Control Assessment.
Cambridge, Mass.: Ballinger.
Dreze, Jean, and Amartya Sen (1989). Hunger and Public Action. New York: Oxford University Press.
Easterbrook, Neil (1997). “State, Heterotopia: The Political Imagination in Heinlein, Le Guin, and Delany.” In Donald M. Hassler and Clyde Wilcox (eds.), Political Science Fiction (pp. 43–75). Columbia: University of South Carolina Press.
The Economist (1997). “The Visible Hand: World Economy,” September 21–27 (survey).
——— (1995a). “A World without Jobs?” February 11, pp. 21–23.
——— (1995b). “Flowing Uphill,” August 12, p. 36.
Edmunds, John C. (1996). “Securities: The New World Wealth Machine.” Foreign Policy 104 (fall):118–38.
Elias, Norbert (1994). The Civilizing Process: State Formation and Civilization. Trans. Edmund Jephcott. Oxford: Blackwell.
Elliott, Stuart (1997). “Advertising—The New Campaign for 3 Musketeers Adds Diversity to Portray Contemporary America.” New York Times, February 12, national edition, p. C6.
Ellis, Adrian, and Krishan Kumar (eds.) (1983). Dilemmas of Liberal Democracies. London: Tavistock.
Enzenberger, Hans Magnus (1994). Civil Wars: From L.A. to Bosnia. New York: The New Press.
Erlanger, Steven (1998). “U.S. to Propose NATO Take On Increased Roles.” New York Times, December 7, p. A1.
Farr, Robert M. (1987). “Self/Other Relations and the Social Nature of Reality.” In Carl F. Graumann and Serge Moscovici (eds.), Changing Conceptions of Conspiracy (pp. 203–17). New York: Springer-Verlag.
Ferguson, Kathy (1996). “From a Kibbutz Journal: Reflections on Gender, Race, and Militarism in Israel.” In Michael J. Shapiro and Hayward R. Alker (eds.), Challenging Boundaries: Global Flows, Territorial Identities (pp. 435–54). Minneapolis: University of Minnesota Press.
Fischer, David Hackett (1970). Historians’ Fallacies: Toward a Logic of Historical Thought. New York: Harper & Row.
Foster, Gregory (1994). “Interrogating the Future.” Alternatives 19, no. 1 (winter):53–98.
Freedman, Lawrence (1983).
The Evolution of Nuclear Strategy. New York: St. Martin’s.
Friedberg, Aaron L. (1991). “The End of Autonomy: The United States after Five Decades.” Dædalus 120, no. 4 (fall):69–90.
Fukuyama, Francis (1995a). “Social Capital and the Global Economy.” Foreign Affairs 74, no. 5 (September/October):89–103.
——— (1995b). Trust: The Social Virtues and the Creation of Prosperity. New York: The Free Press.
Gabriel, Trip (1997). “Six Figures of Fun: Bonus Season on Wall Street.” New York Times, February 12, national edition, p. A19.
Gaddis, John Lewis (1987). The Long Peace: Inquiries into the History of the Cold War. New York: Oxford University Press.
——— (1982). Strategies of Containment. Oxford: Oxford University Press.
Gagnon, V. P. (1995). “Historical Roots of the Yugoslav Conflict.” In Milton J. Esman and Shibley Telhami (eds.), International Organizations and Ethnic Conflict (pp. 179–97). Ithaca, N.Y.: Cornell University Press.
Gaura, Alicia, and Bill Wallace (1997). “San Jose Seeks More Firepower for Cops: Mayor Wants to Buy Semiautomatic Guns.” San Francisco Chronicle, March 15, p. A1.
Gellner, Ernest (1983). Nations and Nationalism. Ithaca, N.Y.: Cornell University Press.
George, Alexander (1993). Bridging the Gap. Washington, D.C.: U.S. Institute of Peace Press.
Gerö, András (1995). Modern Hungarian Society in the Making: The Unfinished Experience. Trans. James Patterson and Eniko Koncz. Budapest: CEU Press.
Gerschenkron, Alexander (1962). Economic Backwardness in Historical Perspective. Cambridge, Mass.: Belknap Press of the Harvard University Press.
Gill, Stephen (1995). “The Global Panopticon? The Neoliberal State, Economic Life, and Democratic Surveillance.” Alternatives 20, no. 1 (January–March):1–50.
——— (1994). “Structural Change and Global Political Economy: Globalizing Elites and the Emerging World Order.” In Yoshikazu Sakomoto (ed.), Global Transformation: Challenges to the State System (pp. 169–99). Tokyo: United Nations University.
——— (1993). “Epistemology, Ontology, and the ‘Italian School’.” In Stephen Gill (ed.), Gramsci, Historical Materialism, and International Relations (pp. 21–48). Cambridge: Cambridge University Press.
Gill, Stephen, and James Mittleman (eds.) (1997). Innovation and Transformation in International Studies. Cambridge: Cambridge University Press.
Gilpin, Robert (1987). The Political Economy of International Relations. Princeton, N.J.: Princeton University Press.
——— (1981). War and Change in World Politics. Cambridge: Cambridge University Press.
——— (1977). “Economic Interdependence and National Security in Historical Perspective.” In Klaus Knorr and Frank N. Trager (eds.), Economic Issues and National Security (pp. 19–66). Lawrence, Kansas: Regents Press of Kansas.
Gladwell, Malcolm (1996). “The Science of Shopping.” The New Yorker 72, no. 33 (November 4):66–75.
Gleditsch, Nils Petter (1997). Conflict and the Environment. Dordrecht: Kluwer.
Gleick, Peter (1994). “Water, War, and Peace in the Middle East.” Environment 36, no. 3 (April):6–15, 35–42.
Goldstein, Joshua S. (1988). Long Cycles—Prosperity and War in the Modern Age. New Haven, Conn.: Yale University Press.
Gordon, Richard (1995). “Globalization, New Production Systems and the Spatial Division of Labor.” In Wolfgang Litek and Tony Charles (eds.), The Division of Labor—Emerging Forms of World Organisation in International Perspective (pp. 161–207). Berlin: Walter de Gruyter.
Gowa, Joanne (1983). Closing the Gold Window. Ithaca, N.Y.: Cornell University Press.
Graham, Edward M. (1996). Global Corporations and National Governments. Washington, D.C.: Institute for International Economics.
Gray, Colin S. (1990). War, Peace, and Victory: Strategy and Statecraft for the Next Century. New York: Simon & Schuster.
——— (1988). The Geopolitics of Super Power. Lexington: University Press of Kentucky.
Groh, Dieter (1987).
“The Temptation of Conspiracy Theory, or: Why Do Bad Things Happen to Good People?” In Carl F. Graumann and Serge Moscovici (eds.), Changing Conceptions of Conspiracy (pp. 1–37). New York: Springer-Verlag. Hajer, Maarten A. (1993). “Discourse Coalitions and the Institutionalization of Practice: The Case of Acid Rain in Great Britain.” In Frank Fischer and John Forester (eds.), The Argumentative Turn in Policy Analysis and Planning (pp. 43–76). Durham, N.C.: Duke University Press. Hanley, Charles J. (1996). “Blood Money.” San Francisco Examiner, April 21, p. A12. Associated Press wire service. Harris, Judith Rich (1998). The Nurture Assumption. New York: St. Martin’s. Hartmann, H., and Robert L. Wendzel (1988). Defending America’s Security. Washington: Pergamon-Brassey’s. Harvey, David (1996). Justice, Nature, and the Geography of Difference. London: Blackwell. Hedges, Chris (1995). “Pentagon Confident, but Some Serbs ‘Will Fight’: In Sarajevo Suburbs, Talk of Resistance.” New York Times, November 27, national edition, p. A6. Heilbroner, Robert L. (1991). An Inquiry into the Human Prospect: Looked at Again for the 1990s. 3rd ed. New York: Norton. Herrnstein, Richard, and Charles Murray (1994). The Bell Curve. New York: Basic Books. Herz, John H. (1959). International Politics in the Atomic Age. New York: Columbia University Press. Hillel, Daniel (1994). Rivers of Eden: The Struggle for Water and the Quest for Peace in the Middle East. New York: Oxford University Press. Himmelfarb, Gertrude (1995). The De-moralization of Society: From Victorian Virtues to Modern Values. New York: Knopf. Hirsch, Fred (1995). Social Limits to Growth. New ed. Cambridge: Harvard University Press. Hirschman, Albert O. (1980). National Power and the Structure of Foreign Trade. Berkeley: University of California Press, expanded edition; original edition, 1945. Hobbes, Thomas (1962). Leviathan. Ed. Michael Oakeshott. New York: Collier. Homer-Dixon, Thomas F. (1995).
“The Ingenuity Gap: Can Poor Countries Adapt to Resource Scarcity?” Population and Development Review 21, no. 3 (September):587–612. Hoopes, Townsend (1973). The Devil and John Foster Dulles. Boston: Atlantic-Little, Brown. Huntington, Samuel P. (1997). “The Erosion of American National Interests.” Foreign Affairs 76, no. 5 (September/October):28–49. ——— (1996). The Clash of Civilizations and the Remaking of World Order. New York: Simon & Schuster. ——— (1993). “The Clash of Civilizations.” Foreign Affairs 72, no. 3 (summer):22–49. Ichheiser, G. (1949). “Misunderstandings in Human Relations: A Study in False Social Perception.” American Journal of Sociology 60 (suppl.). Iklé, Fred C. (1996). “The Second Coming of the Nuclear Age.” Foreign Affairs 74, no. 1 (January–February):119–28. ——— (1971). Every War Must End. New York: Columbia University Press. Inayatullah, Naeem (1996). “Beyond the Sovereignty Dilemma: Quasi-states as Social Construct.” In Thomas J. Biersteker and Cynthia Weber (eds.), State Sovereignty as Social Construct (pp. 50–80). Cambridge: Cambridge University Press. Isaac, J., and H. Shuval (eds.) (1994). Water and Peace in the Middle East: Proceedings of the First Israeli-Palestinian International Academic Conference on Water, Zurich, Switzerland, 10–13 December 1992. Amsterdam: Elsevier. Jackson, Robert H. (1990). Quasi-states: Sovereignty, International Relations and the Third World. Cambridge: Cambridge University Press. Jervis, Robert (1978). “Cooperation under the Security Dilemma.” World Politics 30, no. 2 (January):167–214. Kahn, Herman (1965). On Escalation: Metaphors and Scenarios. New York: Praeger. Kaldor, Mary (1990). The Imaginary War: Understanding the East–West Conflict. Oxford: Blackwell. Kally, Elisha, with Gideon Fishelson (1993). Water and Peace: Water Resources and the Arab-Israeli Peace Process. Westport, Conn.: Praeger. Kaplan, Robert D. (1996).
The Ends of the Earth: A Journey at the Dawn of the Twenty-first Century. New York: Random House. ——— (1994). “The Coming Anarchy.” Atlantic Monthly, February, pp. 44–76. Kapstein, Ethan (1996). “Workers and the World Economy.” Foreign Affairs 75, no. 3 (May–June):16–37. Keck, Margaret E., and Kathryn Sikkink (1998). Activists beyond Borders: Advocacy Networks in International Politics. Ithaca, N.Y.: Cornell University Press. Kennan, George F. (1985/86). “Morality and Foreign Policy.” Foreign Affairs 64, no. 5 (winter):205–18. Kennedy, Paul (1988). The Rise and Fall of the Great Powers. New York: Random House. Keohane, Robert O. (1984). After Hegemony: Cooperation and Discord in the World Political Economy. Princeton, N.J.: Princeton University Press. Keohane, Robert O., and Joseph S. Nye (1977/1989). Power and Interdependence. Boston: Little, Brown. Kifner, John (1995). “Bombing Suspect: Portrait of a Man’s Frayed Life.” San Francisco Examiner, December 31, p. A4. New York Times wire service. Kindleberger, Charles P. (1973). The World in Depression, 1929–1939. Berkeley: University of California Press. Kobrin, Stephen (1997). “Electronic Cash and the End of National Markets.” Foreign Policy 107 (summer):54–64. Kotz, Nick (1988). Wild Blue Yonder: Money, Politics, and the B-1 Bomber. New York: Pantheon. Krasner, Stephen D. (1993). “Westphalia and All That.” In Judith Goldstein and Robert Keohane (eds.), Ideas and Foreign Policy (pp. 235–264). Ithaca, N.Y.: Cornell University Press. ——— (ed.) (1983). International Regimes. Ithaca, N.Y.: Cornell University Press. ——— (1978). Defending the National Interest. Princeton, N.J.: Princeton University Press. Krause, Keith, and Michael C. Williams (1996). “Broadening the Agenda of Security Studies: Politics and Methods.” Mershon International Studies Review 40, Suppl. 2 (October):229–54. ——— (eds.) (1997). Critical Security Studies: Concepts and Cases. Minneapolis: University of Minnesota Press.
Krugman, Paul (1994a). “Europe Jobless, America Penniless?” Foreign Policy 95 (summer):19–34. ——— (1994b). Peddling Prosperity—Economic Sense and Nonsense in the Age of Diminished Expectations. New York: Norton. Kubálková, Vendulka, Nicholas Onuf, and Paul Kowert (eds.) (1998). International Relations in a Constructed World. Armonk, N.Y.: M.E. Sharpe. Kugler, Richard (1995). Toward a Dangerous World. Santa Monica: RAND. Kull, Steven (1988). Minds at War: Nuclear Reality and the Inner Conflicts of Defense Policymakers. New York: Basic Books. ——— (1985). “Nuclear Nonsense.” Foreign Policy 58 (spring):28–52. Laitin, David (1985). “Hegemony and Religious Conflict: British Imperial Control and Political Cleavages in Yorubaland.” In Peter B. Evans, Dietrich Rueschemeyer, and Theda Skocpol (eds.), Bringing the State Back In (pp. 285–316). New York: Cambridge University Press. Lake, David A., and Donald S. Rothchild (eds.) (1998). The International Spread of Ethnic Conflict. Princeton, N.J.: Princeton University Press. Lapid, Yosef, and Friedrich Kratochwil (eds.) (1996). The Return of Culture and Identity in IR Theory. Boulder, Colo.: Lynne Rienner. Larkin, Bruce (forthcoming). War Scripts/Civic Scripts. Manuscript in preparation. Latour, Bruno, and Steve Woolgar (1986). Laboratory Life—The Construction of Scientific Facts. Princeton, N.J.: Princeton University Press; first edition, Sage, 1979. Leary, Warren E. (1997). “Science Fiction’s Microbe Peril from Mars is Unlikely but Possible, Panel Warns.” New York Times, March 7, national edition, p. A10. Leatherman, Janie, Ron Pagnucco, and Jackie Smith (1994). “International Institutions and Transnational Social Movement Organizations: Transforming Sovereignty, Anarchy, and Global Governance.” Kroc Institute for International Peace Studies, University of Notre Dame, August. Working Paper 5:WP:3. Lederer, William, and Eugene Burdick (1987). The Ugly American. New York: Fawcett; originally published in 1958.
Lemarchand, René (1994). Burundi: Ethnocide as Discourse and Practice. New York and Cambridge: Wilson Center and Cambridge University Press. Levin, N. D. (ed.) (1994). Prisms and Policy: U.S. Security Strategy after the Cold War. Santa Monica, Calif.: RAND. Lewis, Bernard (1992). “Muslims, Christians, and Jews: The Dream of Coexistence.” The New York Review of Books 39, no. 6, March 26, pp. 48–52. Libecap, Gary (1989). Contracting for Property Rights. Cambridge: Cambridge University Press. Libicki, Martin C. (1996). “Technology and Warfare.” Chap. 4 in Patrick M. Cronin (ed.), 2015: Power and Progress. National Defense University, Institute for National Strategic Studies, July, at inss/books/2015/ch4co.html. Lind, William S. (1991). “Defending Western Culture.” Foreign Policy 84 (fall):40–50. Lipschutz, Ronnie D. (ed.) (1999a). Beyond the Neo-liberal Peace. Special Issue of Social Justice 25, no. 4 (winter). ——— (1999b). “Terror in the Suites: Narratives of Fear and the Political Economy of Danger.” Global Society 14, no. 4 (October):409–437. ——— (1999c). “Members Only? Citizenship and Civic Virtue in a Time of Globalization.” International Politics 36, no. 2 (June):203–233. ———, with Cathleen Fogel (1999d). “Regulation for the Rest of Us—Global Civil Society and the Democratization of Global Politics.” Paper presented at the Workshop on Global Civil Society/Global Democracy, Rutgers University-Newark, June 4–5. ——— (1998a). “Seeking a State of One’s Own: An Analytical Framework for Assessing ‘Ethnic and Sectarian Conflicts’.” In Beverly Crawford and Ronnie D. Lipschutz (eds.), The Myth of “Ethnic Conflict” (pp. 44–77). Berkeley: Institute of International and Area Studies, UC-Berkeley. ——— (1998b). “From Culture Wars to Shooting Wars: Globalization and Cultural Conflict in the United States.” In Beverly Crawford and Ronnie D. Lipschutz (eds.), The Myth of “Ethnic Conflict” (pp. 394–433).
Berkeley: Institute of International and Area Studies, UC-Berkeley. ——— (1998c). “The Nature of Sovereignty and the Sovereignty of Nature: Problematizing the Boundaries between Self, Society, State, and System.” In Karen T. Litfin (ed.), The Greening of Sovereignty in World Politics (pp. 109–138). Cambridge: MIT Press. ——— (1997a). “The Great Transformation Revisited.” Brown Journal of International Affairs 4, no. 1 (winter/spring):299–318. ——— (1997b). What Did You Do in the Cold War, Daddy? Reading U.S. Foreign Policy in Contemporary Film and Fiction. Draft manuscript. ——— (1997c). “From Place to Planet: Local Knowledge and Global Environmental Governance.” Global Governance 3, no. 1 (January–April):83–102. ———, with Judith Mayer (1996). Global Civil Society and Global Environmental Governance. Albany: State University of New York Press. ——— (1995a). “On Security.” In Ronnie D. Lipschutz (ed.), On Security (pp. 1–23). New York: Columbia University Press. ——— (1995b). “Negotiating the Boundaries of Difference and Security at Millennium’s End.” In Ronnie D. Lipschutz (ed.), On Security (pp. 212–28). New York: Columbia University Press. ——— (1992a). “Reconstructing World Politics: The Emergence of Global Civil Society.” Millennium 21, no. 3 (winter):389–420. ——— (1992b). “Strategic Insecurity: Putting the Pieces Back Together in the Middle East.” In Harry Kreisler (ed.), Confrontation in the Gulf (pp. 113–26). Berkeley: Institute of International Studies, UC-Berkeley. ——— (1992c). “Raw Materials, Finished Ideals: Strategic Raw Materials and the Geopolitical Economy of U.S. Foreign Policy.” In Martha L. Cottam and Chih-yu Shih (eds.), Contending Dramas: A Cognitive Approach to International Organizations (pp. 101–26). New York: Praeger. ——— (1991). “Wasn’t the Future Wonderful? Resources, Environment, and the Emerging Myth of Global Sustainable Development.” Colorado Journal of International Environmental Law and Policy 2:35–54. ——— (1989).
When Nations Clash: Raw Materials, Ideology, and Foreign Policy. New York: Ballinger/Harper & Row. Lipschutz, Ronnie D., and Beverly Crawford (1996). “Economic Globalization and the ‘New’ Ethnic Strife: What Is to Be Done?” Institute on Global Conflict and Cooperation, University of California, San Diego, May, Policy Paper #25. Lipschutz, Ronnie D., and Ken Conca (1993). “The Implications of Global Ecological Interdependence.” In Ronnie D. Lipschutz and Ken Conca (eds.), The State and Social Power in Global Environmental Politics (pp. 327–43). New York: Columbia University Press. List, Friedrich (1856). National System of Political Economy. Philadelphia: J. B. Lippincott. Litfin, Karen (ed.) (1998). The Greening of Sovereignty in World Politics. Cambridge: MIT Press. ——— (1994). Ozone Discourses. New York: Columbia University Press. Locke, John (1988). On Civil Government: The Second Treatise. In Peter Laslett (ed.), Two Treatises of Government. Cambridge: Cambridge University Press, student edition. Long, Norman, and Ann Long (eds.) (1992). Battlefields of Knowledge: The Interlocking of Theory and Practice in Social Research and Development. London: Routledge. Lowi, Miriam R. (1995). “Rivers of Conflict, Rivers of Peace.” Journal of International Affairs 49, no. 1:123–44. ——— (1993). Water and Power: The Politics of a Scarce Resource in the Jordan River Basin. Cambridge: Cambridge University Press. ——— (1992). “West Bank Water Resources and the Resolution of Conflict in the Middle East.” Occasional Paper Series of the Project on Environmental Change and Acute Conflict no. 1 (September):29–60. Luke, Timothy W. (1995). “New World Order or Neo-World Orders: Power, Politics, and Ideology in Informationalizing Glocalities.” In Mike Featherstone, Scott Lash, and Roland Robertson (eds.), Global Modernities (pp. 91–107). London: Sage. ——— (1989). “On Post-War: The Significance of Symbolic Action in War and Deterrence.” Alternatives 14:343–62.
Mackinder, Halford J. (1919/1962). Democratic Ideals and Reality. New York: Norton. ——— (1943). “The Round World and the Winning of the Peace.” Foreign Affairs (July):595–605. Malthus, Thomas Robert (1803). An essay on the principle of population; or, A view of its past and present effect on human happiness; with an inquiry into our prospects respecting the future removal or mitigation of the evils which it occasions. London: Printed for J. Johnson, by T. Bensley. New edition, very much enlarged. Mandelbaum, Michael (1996). “Foreign Policy as Social Work.” Foreign Affairs 75, no. 1 (January–February):16–32. Mann, Michael (1993). The Sources of Social Power: The Rise of Classes and Nation-States, 1760–1914. Vol. 2. Cambridge: Cambridge University Press. Mansfield, Edward, and Jack Snyder (1995). “Democratization and War.” Foreign Affairs 74, no. 3 (May/June):79–97. Marshall, Jonathan (1995a). “Electronic Classes Give Students More Options When Teacher Is Far, Far Away.” San Francisco Chronicle, March 21, p. A1. ——— (1995b). “Don’t Tie Anger to Low Wages.” San Francisco Chronicle, May 29, p. D1. Marx, Karl (1978). “Speech at the Anniversary of the People’s Paper.” In Robert C. Tucker (ed.), The Marx-Engels Reader. 2d ed. (pp. 577–78). New York: Norton. Massing, Michael (1998). The Fix. New York: Simon & Schuster. Mastanduno, Michael (1991). “The United States Defiant: Export Controls in the Postwar Era.” Dædalus 120, no. 4 (fall):91–112. Mathews, Jessica Tuchman (1997). “Powershift.” Foreign Affairs 76, no. 1 (January/February):50–66. ——— (1989). “Redefining Security.” Foreign Affairs 68, no. 2 (spring):162–77. McCormick, Thomas (1996). America’s Half-Century. 2d ed. Baltimore: Johns Hopkins University Press. Macpherson, C. B. (1962). The Political Theory of Possessive Individualism. Oxford: Oxford University Press. Mead, Walter Russell (1995/96). “Trains, Planes, and Automobiles: The End of the Postmodern Moment.” World Policy Journal 12, no.
4 (winter):13–31. Meadows, Dennis, et al. (1972). Limits to Growth. Cambridge: MIT Press. Meadows, Donella H., Dennis L. Meadows, and Jorgen Randers (1992). Beyond the Limits: Confronting Global Collapse, Envisioning a Sustainable Future. Post Mills, Vt.: Chelsea Green. Mearsheimer, John J. (1994). “The False Promise of International Institutions.” International Security 19, no. 3 (winter):5–49. ——— (1990a). “Why We Will Soon Miss the Cold War.” The Atlantic 266, no. 2 (August):35–45. ——— (1990b). “Back to the Future: Instability in Europe after the Cold War.” International Security 15, no. 1 (summer):5–56. Mercer, Jonathan (1995). “Anarchy and Identity.” International Organization 49, no. 2 (spring):229–52. Meyer, David S. (1993). “Below, Beyond, Beside the State: Peace and Human Rights Movements and the End of the Cold War.” In David Skidmore and Valerie M. Hudson (eds.), The Limits of State Autonomy: Societal Groups and Foreign Policy Formulation (pp. 267–96). Boulder, Colo.: Westview Press. ——— (1990). A Winter of Discontent: The Nuclear Freeze and American Politics. New York: Praeger. Milward, Alan S. (1977). War, Economy, and Society, 1939–1945. Berkeley: University of California Press. Moravcsik, Andrew (1991). “Arms and Autarky in Modern European History.” Dædalus 120, no. 4 (fall):23–46. Moscovici, Serge (1987). “The Conspiracy Mentality.” In Carl F. Graumann and Serge Moscovici (eds.), Changing Conceptions of Conspiracy (pp. 151–69). New York: Springer-Verlag. Mueller, John (1989). Retreat from Doomsday: The Obsolescence of Major War. New York: Basic Books. Murakami, Masahiro (1995). Managing Water for Peace in the Middle East: Alternative Strategies. Tokyo: United Nations University Press. Myers, Laura (1997). “Art Imitates Life: Terrorism on Screen Has Some Validity.” Santa Cruz County Sentinel, August 19, p. A-6 (Associated Press wire service). Nanda, Ved P. (1995). International Environmental Law and Policy.
Irvington-on-Hudson, N.Y.: Transnational Publishers. Nasar, Sylvia (1994). “More Men in Prime of Life Spend Less Time Working.” New York Times, December 12, national edition, p. A1. The New York Times (1996). “The Downsizing of America.” March 3–9. Nitze, Paul (1976–77). “Deterring Our Deterrent.” Foreign Policy 25:195–210. Noble, David F. (1997). The Religion of Technology. New York: Knopf. Noponen, Heizi, Julie Graham, and Ann R. Markusen (eds.) (1993). Trading Industries, Trading Regions: International Trade, American Industry, and Regional Economic Development. New York: Guilford. Nye, Joseph S., Jr. (1990). Bound to Lead: The Changing Nature of American Power. New York: Basic Books. Nye, Joseph S., Jr., and William Owens (1996). “America’s Information Edge.” Foreign Affairs 75, no. 2 (March/April):20–36. Oakes, Guy (1994). The Imaginary War: Civil Defense and American Cold War Culture. New York: Oxford University Press. Ohmae, Kenichi (1995). The End of the Nation State. New York: Free Press. ——— (1991). The Borderless World: Power and Strategy in the Interlinked Economy. New York: HarperPerennial. Onuf, Nicholas (1989). World of Our Making: Rules and Rule in Social Theory and International Relations. Columbia: University of South Carolina Press. Ophuls, William, and A. Stephen Boyan, Jr. (1992). Ecology and the Politics of Scarcity Revisited: The Unraveling of the American Dream. New York: W. H. Freeman. Organski, A. F. K., and Jacek Kugler (1980). The War Ledger. Chicago: University of Chicago Press. Packenham, Robert A. (1973). Liberal America and the Third World: Political Development Ideas in Foreign Aid and Social Science. Princeton, N.J.: Princeton University Press. Peluso, Nancy Lee (1993). “Coercing Conservation: The Politics of State Resource Control.” In Ronnie D. Lipschutz and Ken Conca (eds.), The State and Social Power in Global Environmental Politics (pp. 46–70). New York: Columbia University Press. ——— (1992).
Rich Forests, Poor People—Resource Control and Resistance in Java. Berkeley: University of California Press. Peterson, V. Spike, and Anne Sisson Runyan (1993). Global Gender Issues. Boulder, Colo.: Westview Press. Pois, Robert A. (1986). National Socialism and the Religion of Nature. London: Croom Helm. Polanyi, Karl (1957). The Great Transformation. Boston: Beacon Press; original edition, 1944. Pollack, Andrew (1997). “Thriving, South Koreans Strike to Keep It That Way.” New York Times, January 17, national edition, p. A1. Pollard, Robert A. (1985). Economic Security and the Origins of the Cold War, 1945–1950. New York: Columbia University Press. President’s Materials Policy Commission (1952). Resources for Freedom. Washington, D.C.: U.S. Government Printing Office. Princen, Thomas, and Matthias Finger (eds.) (1994). Environmental NGOs in World Politics. London: Routledge. Quadrennial Defense Review (QDR) (1997). Washington, D.C.: The Pentagon. Ra’anan, Uri, Maria Mesner, Keith Armes, and Kate Martin (1991). State and Nation in Multi-ethnic Societies: The Breakup of Multinational States. Manchester: Manchester University Press. Reich, Robert (1992). The Work of Nations. New York: Vintage. Rieff, David (1991). “Multiculturalism’s Silent Partner.” Harper’s, August, pp. 62–72. Rochlin, Gene I. (1997). Trapped in the Net: The Unanticipated Consequences of Computerization. Princeton, N.J.: Princeton University Press. ——— (1985). “Shotguns and Sharpshooters: Command, Control, and the Search for Certainty in the U.S. Weapons Acquisition Process.” Berkeley: Institute of Governmental Studies, University of California, Working Paper 85–2. Rosecrance, Richard (1996). “The Rise of the Virtual State.” Foreign Affairs 75, no. 4 (July/August):45–61. Rosenau, James N. (1997). Along the Domestic-Foreign Frontier: Exploring Governance in a Turbulent World. Cambridge: Cambridge University Press. ——— (1990).
Turbulence in World Politics: A Theory of Change and Continuity. Princeton, N.J.: Princeton University Press. Rosenau, James N., and Ernst-Otto Czempiel (eds.) (1992). Governance without Government: Order and Change in World Politics. Cambridge: Cambridge University Press. Rosenfeld, Seth (1997). “FBI Wants S.F. Cops to Join Spy Squad.” San Francisco Examiner, January 12, p. A1. Rousseau, Jean-Jacques (1968). The Social Contract. Trans. Maurice Cranston. Harmondsworth: Penguin. Rowny, Edward L. (1997). “What Will Prevent a Missile Attack?” New York Times, January 24, national edition, p. A17. Royal Institute of International Affairs (1936). Raw Materials and Colonies. London: Royal Institute of International Affairs, Information Department paper no. 18. Rudolph, Susanne Hoeber, and James Piscatori (eds.) (1997). Transnational Religion and Fading States. Boulder, Colo.: Westview Press. Ruggie, John G. (1995). “At Home Abroad, Abroad at Home: International Liberalisation and Domestic Stability in the New World Economy.” Millennium 24, no. 3 (winter):507–26. ——— (1993). “Territoriality and Beyond: Problematizing Modernity in International Relations.” International Organization 47, no. 1 (winter):139–74. ——— (1991). “Embedded Liberalism Revisited: Institutions and Progress in International Economic Relations.” In Emanuel Adler and Beverly Crawford (eds.), Progress in International Relations (pp. 201–34). New York: Columbia University Press. ——— (1989). “International Structure and International Transformation: Space, Time, and Method.” In Ernst-Otto Czempiel and James N. Rosenau (eds.), Global Changes and Theoretical Challenges (pp. 21–35). Lexington, Mass.: Lexington Books. ——— (1983a). “International Regimes, Transactions, and Change: Embedded Liberalism in the Postwar Economic Order.” In Stephen D. Krasner (ed.), International Regimes (pp. 195–232). Ithaca, N.Y.: Cornell University Press. ——— (1983b).
“Continuity and Transformation in the World Polity: Toward a Neorealist Synthesis.” World Politics 35, no. 2 (January):261–85. Rule, James B. (1992). “Tribalism and the State.” Dissent 39, no. 4 (fall):519–23. Rupert, Mark (1997). “Globalization and the Reconstruction of Common Sense in the U.S.” In Stephen Gill and James Mittelman (eds.), Innovation and Transformation in International Studies. Cambridge: Cambridge University Press. ——— (1995). Producing Hegemony: The Politics of Mass Production and American Global Power. Cambridge: Cambridge University Press. Said, Edward (1979). Orientalism. New York: Viking. Sakamoto, Yoshikazu (ed.) (1994). Global Transformation: Challenges to the State System. Tokyo: United Nations University Press. Sanders, Jerry W. (1983). Peddlers of Crisis: The Committee on the Present Danger and the Politics of Containment. Boston: South End Press. Sandholtz, Wayne, et al. (1992). The Highest Stakes: The Economic Foundations of the Next Security System. New York: Oxford University Press. San Francisco Chronicle (1997). “Conservative Accuses Gingrich of Cozying Up to Liberals,” February 6, p. A8. ——— (1991a). “U.S. Pushing Language Studies,” December 26, p. A3. ——— (1991b). “Security Threats,” Editorial, December 28, p. A18. Sassen, Saskia (1998). Globalization and Its Discontents. Princeton, N.J.: Princeton University Press. ——— (1994). Cities in a World Economy. Thousand Oaks, Calif.: Pine Forge Press. Saul, John Ralston (1992). Voltaire’s Bastards: The Dictatorship of Reason in the West. New York: Free Press. Scheer, Robert (1982). With Enough Shovels. New York: Random House. Schelling, Thomas (1966). Arms and Influence. New Haven, Conn.: Yale University Press. Schlesinger, James (1991–92). “New Instabilities, New Priorities.” Foreign Policy 85 (winter):3–24. Schmidt, Brian C. (1998). The Political Discourse of Anarchy: A Disciplinary History of International Relations. Albany: State University of New York Press.
Schurmann, Franz (1987). The Foreign Politics of Richard Nixon: The Grand Design. Berkeley: Institute of International Studies, University of California, Berkeley. ——— (1974). The Logic of World Power. New York: Pantheon. Schwartz, Stephen I. (ed.) (1998). Atomic Audit: The Costs and Consequences of U.S. Nuclear Weapons Since 1940. Washington, D.C.: The Brookings Institution. Seaton, Jim (1994). “Social Warfare: The Setting for Stability Operations.” Paper prepared for the ISA Annual Meeting, Washington, D.C., March 29–April 1. Sen, Amartya (1994). “Population: Delusion and Reality.” New York Review of Books, September 22, pp. 62–71. Serageldin, Ismail (1995). Toward Sustainable Management of Water Resources. Washington, D.C.: The World Bank, August. Shapiro, Michael J. (1997). Violent Cartographies: Mapping Cultures of War. Minneapolis: University of Minnesota Press. Shapiro, Michael J., and Hayward R. Alker (eds.) (1996). Challenging Boundaries: Global Flows, Territorial Identities. Minneapolis: University of Minnesota Press. Shoup, Laurence H., and William Minter (1977). Imperial Brain Trust: The Council on Foreign Relations and United States Foreign Policy. New York: Monthly Review Press. Shuman, Michael H. (1994). Towards a Global Village: International Community Development Initiatives. London and Boulder, Colo.: Pluto Press. ——— (1992). “Dateline Main Street: Courts v. Local Foreign Policies.” Foreign Policy 86 (spring):158–77. Simon, Julian (1996). The Ultimate Resource 2. Rev. ed. Princeton, N.J.: Princeton University Press. ——— (1981). The Ultimate Resource. Princeton, N.J.: Princeton University Press. Skocpol, Theda (1985). “Bringing the State Back In: Strategies of Analysis in Current Research.” In Peter B. Evans, Dietrich Rueschemeyer, and Theda Skocpol (eds.), Bringing the State Back In (pp. 3–37). Cambridge: Cambridge University Press. Smith, Neil (1989).
“Uneven Development and Location Theory: Towards a Synthesis.” In New Models in Geography. Vol. 1 (pp. 142–63). London: Unwin Hyman. Smith, R. Jeffrey (1984a). “Missile Deployments Roil Europe.” Science 223 (January 27):371–76. ——— (1984b). “Missile Talks Doomed from the Start.” Science 223 (February 10):566–70. ——— (1984c). “Missile Deployments Shake European Politics.” Science 223 (February 17):665–67. ——— (1984d). “The Allure of High-Tech Weapons for Europe.” Science 223 (March 23):1269–72. Smithson, Amy E. (1996). “Growth Industry: The U.S. Arms Control Bureaucracy in the Late 1980s.” Ph.D. diss., George Washington University, Washington, D.C. Spykman, Nicholas J. (1944). The Geography of the Peace. Ed. H. R. Nicholl. New York: Harcourt, Brace. ——— (1942). America’s Strategy in World Politics: The United States and the Balance of Power. New York: Harcourt, Brace. Starr, Joyce (1995). Covenant over Middle Eastern Waters: Key to World Survival. New York: H. Holt. ——— (1991). “Water Wars.” Foreign Policy 82 (spring):17–30. Starr, Joyce, and Daniel C. Stoll (eds.) (1988). The Politics of Scarcity: Water in the Middle East. Boulder, Colo.: Westview Press. Steinbruner, John (1974). The Cybernetic Theory of Decision. Princeton, N.J.: Princeton University Press. Stone, Deborah (1988). Policy Paradox and Political Reason. New York: HarperCollins. Strange, Susan (1996). The Retreat of the State. Cambridge: Cambridge University Press. ——— (1983). “Cave! hic dragones: A Critique of Regime Analysis.” In Stephen D. Krasner (ed.), International Regimes (pp. 337–54). Ithaca, N.Y.: Cornell University Press. Szerszynski, Bronislaw (1996). “On Knowing What to Do: Environmentalism and the Modern Problematic.” In Scott Lash, Bronislaw Szerszynski, and Brian Wynne (eds.), Risk, Environment, and Modernity: Towards a New Ecology (pp. 104–37). London: Sage. Thomas, G. Dale (1997).
“Historical Uses of Civil Society and the Global Civil Society Debate.” Paper presented at the annual convention of the International Studies Association, Toronto, March 18–22. Thompson, Michael (1979). Rubbish Theory: The Creation and Destruction of Value. Oxford: Oxford University Press. Thomson, Janice E. (1995). “State Sovereignty and International Relations: Bridging the Gap between Theory and Empirical Research.” International Studies Quarterly 39, no. 2 (June):213–33. Thucydides (1954). The Peloponnesian War. Harmondsworth, U.K.: Penguin. Tickner, J. Ann (1992). Gender in International Relations. New York: Columbia University Press. Todorova, Maria (1998). “Identity (Trans)formation among Bulgarian Muslims.” In Beverly Crawford and Ronnie D. Lipschutz (eds.), The Myth of “Ethnic Conflict” (pp. 471–510). Berkeley: International and Area Studies Press, UC-Berkeley. Trager, Louis (1995). “All’s Fair in Selling Growth to Cities.” San Francisco Examiner, January 22, p. C-1. Tuathail, Gearóid Ó (1997). “At the End of Geopolitics? Reflections on a Plural Problematic at the Century’s End.” Alternatives 22, no. 1 (January–March):35–55. Turner, Bryan S. (1997). “Citizenship Studies: A General Theory.” Citizenship Studies 1, no. 1 (February):5–18. Uchitelle, Louis (1998). “Downsizing Comes Back, but the Outcry is Muted.” New York Times, December 7, national edition, p. A1. ——— (1994). “Changing Economy Spawns ‘Anxious Class’.” San Francisco Chronicle, November 21, p. A6. New York Times wire service. Ullman, Richard H. (1983). “Redefining Security.” International Security 8, no. 1 (summer):129–53. U.S. Senate (1996). Hearings on Information Warfare and the Security of the Government’s Computer Networks. Senate Governmental Affairs Committee, June 25, Congressional Quarterly Database. van Creveld, Martin (1991). The Transformation of War. New York: The Free Press. Viviano, Frank (1995). “World’s Wannabee Nations Sound Off.” San Francisco Chronicle, January 31, p. A1.
Vogel, David (1995). Trading Up—Consumer and Environmental Regulation in a Global Economy. Cambridge: Harvard University Press. Vogel, Steven K. (1996). Freer Markets, More Rules: Regulatory Reform in Advanced Industrial Countries. Ithaca, N.Y.: Cornell University Press. Wæver, Ole (1995). “Securitization and Desecuritization.” In Ronnie D. Lipschutz (ed.), On Security (pp. 44–86). New York: Columbia University Press. Walker, R. B. J. (1992). Inside/Outside: International Relations as Political Theory. Cambridge: Cambridge University Press. Walt, Stephen M. (1991). “The Renaissance of Security Studies.” International Studies Quarterly 35, no. 2 (June):211–39. Waltz, Kenneth (1979). Theory of International Politics. Reading, Mass.: Addison-Wesley. ——— (1971). “The Myth of National Interdependence.” In Charles Kindleberger (ed.), The International Corporation. Cambridge: MIT Press. ——— (1959). Man, the State, and War. New York: Columbia University Press. Walzer, Michael (ed.) (1995). Toward a Global Civil Society. Providence, R.I.: Berghahn Books. Wapner, Paul (1996). Environmental Activism and World Civil Politics. Albany: State University of New York Press. Weber, Eugen (1976). Peasants into Frenchmen: The Modernization of Rural France, 1870–1914. Stanford, Calif.: Stanford University Press. Wehrfritz, George (1997). “The Uses of the Past: Historians and Archeologists Dig Up Evidence to Support China’s Growing Nationalism.” Newsweek 130, no. 1 (July 7):44–45. Weigert, Hans W., et al. (1957). Principles of Political Geography. New York: Appleton-Century-Crofts. Weinberger, Caspar (1982). “United States Nuclear Deterrence Policy.” Testimony before the Foreign Relations Committee of the U.S. Senate, December 14. Weldes, Jutta (1992). Constructing National Interests: The Logic of U.S. National Security in the Post-War Era. Ph.D. diss., University of Minnesota, Minneapolis. Wendt, Alexander (1992).
“Anarchy Is What States Make of It: The Social Construction of Power Politics.” International Organization 46, no. 2 (spring):391–425. Westing, Arthur H. (ed.) (1986). Global Resources and International Conflict. Oxford: Oxford University Press. Wilmer, Franke (1993). The Indigenous Voice in World Politics. Newbury Park, Calif.: Sage. Wingerson, Lois (1991). Mapping Our Genes: The Genome Project and the Future of Medicine. New York: Plume. Wirls, Daniel (1992). Buildup: The Politics of Defense in the Reagan Era. Ithaca, N.Y.: Cornell University Press. Wolf, Aaron T. (1995). Hydropolitics along the Jordan River: Scarce Water and Its Impact on the Arab-Israeli Conflict. Tokyo: United Nations University Press. Woodall, Pam (1995). “The World Economy: Who’s in the Driving Seat?” The Economist, October 7, special insert. Woodward, Susan L. (1995). Balkan Tragedy: Chaos and Dissolution after the Cold War. Washington, D.C.: Brookings. World Commission on Environment and Development (WCED; Brundtland Commission) (1987). Our Common Future. Oxford: Oxford University Press. Young, Iris Marion (1990). Justice and the Politics of Difference. Princeton, N.J.: Princeton University Press. Index Agnew, John, 88 alienation, 44, 126, 182 aliens, 193n.12 Allison, Graham, 38–39 anarchy, 93, 97, 98, 103, 109, 132, 140, 193n.5, 195n.17; and markets, 140, 153–154; and morality, 139, 141, 145, 153; social construction of, 186n.19.
See also Hobbesianism; State of Nature Anderson, Benedict, 110, 193n.10 Andric, Ivo, 191n.4 Angell, Norman, 38 Annual Report to the President and the Congress (Secretary of Defense), 74 Arafat, Yassir, 194n.16 Aristotle, 155 arms race, 34 Aron, Raymond, 2 Atoms for Peace, 24 autarchy, 44, 89, 97, 154 authority: after, 157–160, 170, 173; of borders, 58–59, 135, 142–145, 154, 167; and citizens, 166, 178–179; diffusion, decline, and collapse of, 15, 118, 156, 157, 159, 165, 167, 179; of discourses, 65; and elites, 54, 165; establishment of, 27, 165; feudal, 139, 165; and globalization, 4, 118, 165, 168; and individualism, 4, 9, 166; moral, 134–135, 137, 139, 142–148, 153–154; new sources of, 135, 165, 171–179, 180; and security, 48, 52, 54, 61; of social contracts, 117; of states and political institutions, 5, 137, 139, 142–148, 156, 165, 173; threats to, 148; of the World Bank, 83. See also borders and boundaries; discipline; morality bordoids, 58 Bosnia peace settlement (Dayton Accords), 128, 192n.15 Boulding, Kenneth, 155 Bretton Woods system, 19, 26, 146, 162, 183–184n.2 Brezhnev, Leonid, 70 Bridge on the Drina, The, 191n.4 Brodie, Bernard, 22 Brundtland Commission (World Commission on Environment and Development), 103–104 Buchanan, Pat, 27–28, 184n.12 Bush, President George H., 159; administration of, 73, 80 Butler, Judith, 168–169, 170, 195n.15 Buzan, Barry, 49, 115, 123, 193n.5 ballistic missile defense, 133, 192n.1.
See also Strategic Defense Initiative Barber, Benjamin, 2, 107, 130 Barre, Siad, 50 bondage and domination, 8, 80, 148 Booth, Ken, 3 borderlands, 60 borderless world, 154, 158 borders and boundaries, 36, 37, 81, 88, 129, 138, 159, 191n.20; in Bosnia, 128; and discipline, 135, 137, 141, 167, 187n.25; fluidity and fragmentation of, 59, 126, 138, 154, 167; and identities, 90, 96, 100, 131, 134, 144, 146–148, 177–179; and geography, 84, 88, 112, 187n.24; moral authority of, 142–145, 153–154; and resources, 93, 94, 96, 102, 105; and security, 58–62, 130–131, 134, 146–147, 154; and sovereignty, 86, 89–90, 96–97, 102, 104, 112, 123, 137, 140, 167; and war, 68, 73, 84. See also authority; geopolitics; sovereignty; state border studies, 185n.4 Cable News Network (CNN), 78, 131 Campbell, David, 50, 138 capital, 72, 175; finance, 40, 125, 162; flows of, 43; social, 23, 29, 40, 72, 120 capitalism, 194n.9; dynamism of, 124; and factor endowments, 124–125; and the family, 27, 29; industrial, 18; and production, 15–16; and society, 15, 27–28, 158, 163, 173; and uneven development, 124, 163. See also economic growth; globalization; markets Carter, President Jimmy, 101, 102; administration of, 76 caste system, 117 Catholic Church, Roman, 134, 137, 139, 153; and England, 143 Centre for Border Studies, 185n.4 choice: actor, 191n.7; rational, theory of, 111, 156. See also irrationality; rationality Christopher, Warren, 109 Central Command, 76, 79 cities: and political assemblies, 178–179; and economic development as war, 125, 179; and municipal foreign policy, 125, 179. See also democratization; global civil society; governance citizens: and the economy, 30, 174; and the state, 39, 116, 126, 146 citizenship, 60, 156; conceptions of, 166–167, 168, 171–179, 180–182; and deterritorialization, 177–179; economic, 30, 174; and political action, 159–160, 166, 175–176, 178–179; social and sociological, 116, 166, 179.
See also democratization; global civil society; globalization; counterhegemonic social movements; political deterritorialization Citizenship Studies, 166 civil society, 4, 156, 172, 194n.5, 195n.16; and relations with the state, 172–173, 176, 194n.4, 195n.16. See also global civil society. Clancy, Tom, 72 “clash of civilizations” hypothesis, 107, 112–113, 148; lack of material basis for, 112 Clausewitz, Carl Von, 64, 72 Clinton, Chelsea, 192n.9 Clinton, President William Jefferson, 34, 66, 109, 152, 187n.2; administration of, 65, 73–74, 134, 148, 149, 170 Cohen, U.S. Secretary of Defense, William, 74 Cold War, 20, 102, 107, 129; and civil rights, 184n.8; Compromise, 14, 19–21; conservatives, 26, 102; cost of, 63; discipline, 149; end of, 147–148, 187n.23; and ethnic conflict, 109, 115; and higher education, 23–24, 183–184n.2; histories of, 184n.4; and identity, 107, 138; renewal of during the 1980s, 149; and world order, 113, 135, 137. See also containment; Soviet Union; United States collective action, 166–167. See also politics Coming Conflict with China, The, 2 Committee on the Present Danger, 102 commodification: of nuclear materials, 187n.26; of security, 45; of information about shopping habits, 185n.14 common-pool resources, 193n.6 Communist Manifesto, The, 17 comparative advantage, 124–125; and location, 123, 125, 182n.11, 192n.13 Conference on Population and Development, UN (Cairo), 164 Concert of Europe, 17 credit: as a fictitious commodity, 29; as a moral regime, 153.
See also fictitious commodities Cromwell, Oliver, 143 Crook, Stephen, 171 cultural: coexistence, 111; conservatism, 145, 154; difference and violence, 111, 188n.9, 188n.10; essentialism, 107, 108; functionalism, 111–112 culture: and citizenship, 166, 191n.6; definition of, 111–112; as explanatory variable in politics, 107, 190n.9; as historical materialism, 110; and identity, 107, 110, 111, 113, 166; nature of change in, 111–112; and the proliferation of states, 109; as a raw material, 112; and discourse, 9; war, 27, 90, 137–138, 145–146, 154. See also borders and boundaries; ethnicity; geoculture; identities; states currency exchange systems, 19, 20, 26, 183–184n.2. See also dollar cyberspace, 4; failures in, 186n.13; hackers in, 4, 33, 46, 47, 55; and Pearl Harbor, 47; threats to, 46–47, 149. See also enemies; threats; terrorism conflict: causes, 105, 109, 113, 115, 117; ethnic and sectarian, 108; and identity and difference, 100, 114; intrastate, 35, 113; metaphor of fault lines in explaining, 113–114; and opportunity structures, 114, 120; and politics of nostalgia, 118; postmodern, 36, 113; and social capital, 120. See also ethnic and sectarian conflict; threats; war Congress of Vienna, 40 conspiracy theories, 56, 118–119; and the New World Order, 138; social construction of, 119. See also scapegoating constitutions, as expressions of social contract, 116. See also social contract consumers, 152, 153, 184n.14; and credit, 29 containment, 34, 41, 74; and economics, 41–43; as geopolitical theory, 146–147. See also Cold War; geopolitics Coordinating Committee (COCOM), 42 Corbridge, Stuart, 88 corruption, 123, 192n.9 counterhegemonic social movements, 175–176.
See also citizenship; global civil society Cox, Robert, 6, 35, 175 credibility, 67; of deterrence, 70; and rationality, 74; of threats, 68, 69, 76 Dalby, Simon, 88 Darwin, Charles, 86, 88, 90, 143–144 Dawkins, Richard, 190n.11 decision-making: and bureaucratic politics, 39; and security, 52 democratic enlargement, 116, 148 democratization, 9; global, 31, 164, 174, 180–182; urban, 176–179; and war, 148, 184n.5. See also cities; citizenship; global civil society; politics; regimes Deng Xiaoping, 152 depressions and recessions: government responses to, 18–19, 26 Der Derian, James, 49, 59, 62 deregulation, 108, 161–163 Derlugian, Georgi, 118 deterrence, 61, 69, 81; credibility of, 70, 75, 188n.3; disciplinary, 8, 74, 78–80, 135, 148–150; extended, 58, 68, 76; nuclear, 67, 68, 71, 72; and the Revolution in Military Affairs, 68. See also credibility; disciplinary; nuclear deterrence Deudney, Dan, 181 Deutch, John, 46, 47 development, 122, 125; and history and political economy, 125; uneven, and capitalism, 124, 163. See also capitalism; economic growth; globalization; markets Dirksen, Senator Everett, 68 disciplinary: deterrence, 8, 74, 78–80, 135, 148–150; warfare, 77–80 discipline, 2, 8, 9; and borders, 59–60, 102–103, 167; economic, 30, 153; and markets, 150–152, 153; and security strategies, 43, 69, 70; in society and popular culture, 44, 145–146; and threats, 69, 71; and war, 65, 76, 80, 154. See also authority; borders and boundaries; security discourse(s), 85, 102, 105–106, 112, 175; authority of, 65; boundaries of, 37; definition of, 49, 65; of gender, 169; of genetics, 91; geopolitical, 41, 102; of market liberalism, 150; of population, 191n.2; of security, 48–50, 53, 55, 58; of war, 65, 73.
See also ideology discursive practices, 51; speech acts as, 53 disorder, 155, 171; causes of, 15, 158; domestic, in the United States, 149–150 distribution, 95, 105; of global wealth, 30; of oil, 99, 101; and state power, 97; of state financial resources and domestic power, 121. See also markets; power; states diversity: and education, 24; and women and minorities, 26–27 division of labor: international, 26, 112, 129; in the Soviet Bloc, 43. See also labor; production Dole, Robert, 65 dollar, 186n.10; and gold exchange standard, 19, 21, 99, 183–184n.2; and U.S. gold stocks, 21, 184n.6; and international liquidity, 19, 20 double hermeneutic, 98, 171, 190n.17 double movement, 161, 175, 176 Dreze, Jean, 96 Dulles, John Foster, 151 upward mobility, 24. See also expertise; knowledge Ehrlich, Paul and Anne, 94 Eisenhower, President Dwight D., 151 electronic: battlefield, 68, 72, 81; classrooms, 186n.11; money, 185n.15. See also virtual El Niño, 190n.15 embedded liberalism, 20, 86, 97, 147, 180 empires: European, 40–41, 129; and peripheral intellectuals, 110 enemies: and cultural difference, 111, 186n.17; imagined, 79, 148–150, 186n.17, 186n.18, 193n.12; lack of, 34, 186n.12; next United States’, 64, 148–150. See also rogue states; threats; terrorism English: Civil War, 143, 158; liberalism, 124 Enlightenment, 15, 135, 142, 143, 194n.2 entrepreneurs, political, 119–122, 132 environment: and migration, 94; and security, 85, 86; and state power, 89 episteme: security, 53, 55, 56. See also security; states; threats escalation: ladder of nuclear, 69, 70, 188n.5. See also nuclear deterrence ethnic: cleansing, 59, 90, 131, 135, 144–145, 150, 193n.9, 193n.11; relations, 191n.3 ethnic and sectarian conflict, 9, 35; and peace in Bosnia, 128; Easterbrook, Neil, 195n.15 ecological: balance and limits, 95, 104, 191n.22; interdependence, 93, 105.
See also limits to growth; resources; sustainability economic growth, 104, 163; and comparative advantage, 124; and disciplinary deterrence, 78; and domestic hierarchies, 115, 158; limits to, 102–105; and morality, 150; and peace, 1, 37, 129, 154; and political economy, 125, 192n.11; and the proliferation of states, 109, 127–130; and restructuring, 44, 123–124, 186n.10; and U.S. allies, 26; and U.S. foreign policy, 20; and social change, 114, 118, 123, 162. See also economy; globalization; markets Economist, The, 78, 83, 85 economy: and citizenship, 116, 167; and discipline, 30; international, 19, 122; and liberalism, 20, 124; state control of, 42. See also economic growth; globalization; markets; mercantilism education: and the Cold War, 23–24; growth in, 23–24, 25, 186n.11; higher, 184n.9; and in Chechnya, 77; and difference, 111; as a form of self-defense, 110; and instrumentalism, 110–111; and intrastate order, 113, 160; narrative sources of, 114; origins of, 108, 109–115; as postmodern warfare, 113; and state fragmentation, 108. See also conflict; political entrepreneurs; war ethnicity: and communal autonomy, 111; construction of, 120; as culture, 110; and hierarchy, 117; politicization of, 121; historical basis of, 100, 115, 143–144; naturalization of, 110; theories of, 109–111. See also citizenship; imagined communities; nationalism eugenics, 90 Euromissiles, 70–71, 102, 188n.5, 188n.6 European Union, 36, 161, 162, 163; and multinational corporations, 36; and the single currency, 129 expertise, competing centers of, 25–26; and global politics, 25. See also education; information; knowledge Exxon-Mobil, 159 federalism and citizenship, 178–179. See also citizenship; democratization Ferguson, Kathy, 177 fictitious commodities, 16; during the 1990s, 29 Finlandization, 61, 71, 188n.4 flows: of resources, 190n.14, 190n.18; transborder, 37, 43, 190n.12.
See also resources; scarcity Fordism, 22–23, 129; and education, 23; and the 1980s recession, 27; and production of nuclear weapons, 23. See also production Foucault, Michel, 168 fragmegration, 6. See also Rosenau, James. fragmentation, 6, 131, 167; and integration, dialectic of, 108, 124, 126, 157, 161, 170; of the public sphere, 126. See also globalization; markets; politics frames of reference, 51–52 free trade, 17, 124, 161; and peace, 37. See also economic growth; globalization; interdependence; liberalization; markets Free World, 42–43, 97–99, 101, 135, 136; borders of, 42, 61–62, 134, 146–147; and the sovereign individual, 42; as a natural community, 42, 98. See also Cold War; containment; United States French Revolution, 143 Freud, Sigmund, 114; and the “narcissism of small differences,” 114 functionalism and neofunctionalism, 160, 161, 172 future: of the state, 160–165; worlds, 159, 165, 179–182. See also imagined Germany: Nazi, 41, 89, 93, 144–145; Weimar, 41; and World War III, 60 Gerschenkron, Alexander, 122 Giddens, Anthony, 190n.17 Gill, Stephen, 6, 30, 153, 156, 175, 176 Glaspie, Ambassador April, 72, 75 Gleick, Peter, 105 global civil society, 5, 172; and authority, 156; and citizenship, 160, 171–174; and governance, 9, 160. See also democratization; governance globalization, 4, 60; and authority, 4, 135, 157, 159, 162, 165; and citizenship, 181–182; and conflict, 36; consequences of, 14, 32, 135, 179–182; defined, 14; and democracy, 31, 164, 180; and destabilization of hierarchies, 115, 154, 158; and domestic conflict, 108, 115; and economic change, 1, 118, 122; and identity, 36; inability to halt, 30; opposition to, 161, 163, 176; and production, 161; and regulation, 157, 163, 176; and the role of states, 3; and security, 7, 32, 36; and social change, 13–14, 118, 122, 158, 167; and stability, 9, 36, 115, 118, 158, 194n.8; and the U.S. national interest, 25.
See also capitalism; economic growth; liberalization; markets Gagnon, V.P., 191n.8 Gates, William, 159 Gellner, Ernest, 110 Gender Trouble, 168 General Agreement on Trade and Tariffs (GATT), 20, 163. See also World Trade Organization genetics, 90–91; determinism, 90, 190n.11; and the Human Genome Project, 90 geoculture, 108; and borders, 112; lack of material basis, 112; as manifested through symbols, 113. See also culture; “clash of civilizations” geography: political, 189n.7; and security, 88; and state power, 87. See also borders and boundaries geopolitics, 86–91; of the body, 90–91, 190n.11; and cartography, 112; and culture, 108, 112; definition of, 87; discourses of, 85, 87, 98, 112; doctrines of, 18, 84, 189n.7, 190n.8; and the domino theory, 146; and the Gulf War, 80; and international competition, 5, 162; and “shatter zones,” 112. See also ideology George, Alexander, 186n.17 glocalization, 194n.10. See also Rosenau, James God, 134, 139, 141, 142, 143 gold standard, 162. See also dollar Gorbachev, Mikhail, 28 Gordon, Richard, 173 governance: global and transnational, 5, 9, 156, 157, 160–165, 165, 168, 174, 180, 194n.1; local, 165, 174; and political action, 159–160, 174. See also authority; democratization; globalization Gramsci, Antonio, 175 Grand Area, 193n.14 Great Depression, 22 Gray, Colin, 2, 85, 87, 88 Great Powers, 39, 86, 129 Great Transformation, The, 13, 179 Groh, Dieter, 119 Ground Launched Cruise Missiles (GLCMs), 70–71. See also Euromissiles Gulf War, 80, 85, 99; as archetype of future wars, 72, 73, 74, 188n.8; costs of, 74; and disciplinary deterrence, 76, 80; lessons of, 76. See also disciplinary; Hussein, Saddam; Iraq; Major Regional Conflicts; war; United States harmonization of laws and regulations among states, 163–164. See also trade; markets, and rule structures hegemonic stability theory, 98, 102; definition of, 98; double hermeneutic of, 98.
See also geopolitics; ideology hegemony: of Christianity, 141; discursive, 55, 168; Gramscian, 53, 98, 117; really-existing, 123, 159 heterarchy, 165, 195n.12 heteronomy, 165, 171, 194–195n.11, 195n.12 Heilbroner, Robert, 159 Herz, John, 7, 34, 45 Himmelfarb, Gertrude, 183n.1, 193n.3 Hiroshima, 47 historical structures, 35, 111, 126 historic bloc, 175 Hobbes, Thomas, 2, 8, 37, 62, 139, 141, 143, 158 Hobbesianism, 72, 132, 139; and genetics, 91. See also anarchy; State of Nature Hoopes, Townsend, 151 Hull, Cordell, 85 human rights, 32, 117, 147, 164, 187n.21 Huntington, Samuel P., 2, 107, 109–113, 138, 159, 186n.12, 191n.5; and the “clash of civilizations,” 107, 112–113, 148. See also culture; geoculture; ideology Hussein, President Saddam, 72, 80, 189n.11, 189n.4. See also Gulf War; Iraq; rogue states hyperliberalism, 28, 157; and nature, 91. See also capitalism; economic growth; globalization; ideology; markets identities, 114; and borders, 60, 90, 96–97, 177; construction of, 100, 110, 113, 120, 177; national, 45, 53; proliferation of, 36; and threat from the Other, 100, 120; threats to, 110. See also borders and boundaries; culture; ethnicity ideology, 28, 146, 176, 193n.13; and culture, 113, 121; and historical structures, 35, 111; collapse of, 39, 54; interdependence rhetoric as, 101; naturalization of, 86; of success and failure, 114. See also authority; culture; geopolitics; markets; naturalization imagined: communities, 97, 110, 130, 144, 167, 171, 193n.10; consequences of war, 74; enemies, 79, 144, 148–150, 186n.17, 186n.18; futures, 68, 71; threats, 49–50, 68, 69, 144; wars, 63, 68, 70, 72, 78 imperialism: age of, 86; as response to economic depression, 18; and state expansion, 89; moral, 143 individual: and authority, 159; and citizenship, 39; and identity, 1; and loyalty, 1, 55, 126, 132, 167; and markets, 4, 90, 184n.13; and opportunity, 184n.13; and security, 37; and sovereignty of, 32; and welfare of, 44.
See also ideology; markets; self-interest individualism, 167; and the Free World, 42; methodological, 90, 97, 105, 158; and self-blame, 114. See also ideology; markets; self-interest industrial revolution, 15–19, 24; first, 13–14, 18, 23, 29, 30; second, 22, 30, 40; and impacts of, 14, 30; and social change, 13–14; and technology, 23; third, 15, 21–24, 30, 35. See also globalization; innovation; revolution information: interpretation of, 51; networks, 160; as private property, 29, 185n.14; and production, 22; revolution, 15, 22, 24; and security, 51; warfare, 46–48, 78–80. See also commodification; knowledge; networks; privatization innovation: by U.S. allies, 26; and food production, 95; social, 7, 13–14, 18, 23, 24, 26, 30, 32; technological, 17, 23, 26, 89, 163 insecurity dilemma, 4, 34, 36, 48, 60, 62. See also enemies; security; threats Inside/Outside: International Relations as Political Theory, 191n.2 interdependence, 59, 86, 97–102, 191n.19; definition of, 99–100; ecological, 93, 105, 190n.13; as ideology, 101; and peace, 7, 37, 85, 185n.5; theory, 99–101, 123. See also ecological; economic growth; globalization; markets; peace; war integration, 6, 122; and dialectic of fragmentation, 108, 126, 127–130, 170; and peace and war, 7, 108. See also economic growth; fragmentation; globalization; liberalization; markets International Monetary Fund (IMF), 19 International Organization for Standardization (ISO), 164, 173; ISO 14000, 164 International Trade Organization (ITO), 19 Iraq, 8, 78, 188n.8; discipline and punishment of, 80; failure to deter, 75. See also Gulf War; Hussein, Saddam Irigaray, Luce, 169 irrationality, 75, 113, 156, 188n.9; of intrastate war, 108. See also choice; rationality Islam: divisions within, 113; and umma, 112–113 Kerr, Clark, 23 Keynes, John Maynard, 19 Keynesianism, 22 Kissinger, Henry, 21 knowledge, 7; as capital, 29; mobilization of, for production, 15; and national security, 25; networks of, 177–178; and power, 165.
See also commodification; information; networks; politics; production Korean War, 20, 74 Krasner, Stephen, 185n.6, 195n.17 Krugman, Paul, 30 Kull, Steven, 70 Jedi Knights, 72 Jervis, Robert, 34, 45 Johnson, President Lyndon: and the Great Society, 26 justice, 114, 119; of social contracts, 117. See also social contract Kahn, Herman, 69, 70 Kant, Immanuel, 2 Kaplan, Robert, 2, 159 Kennan, George, 41, 52, 87, 137 Keohane, Robert O., 99–101 labor, 186n.10; demographics of, 26; and downsizing, 28, 44, 78; educated, 25; and the factory system, 15–16, 18; and information production, 44; and markets, 16; movement and migration, 32, 43; and restructuring during the 1990s, 28, 44; and security of employment, 44; unions, 18. See also education; globalization; knowledge Laitin, David, 120 Lake, Anthony, 170 Larkin, Bruce, 58 Lebed, Alexander, 183n.3 Leher, Tom, 70 Lemarchand, René, 121, 122 Leviathan, 8, 141, 158, 166 liberal economics: and distribution, 95; neoclassical, 94, 106. See also capitalism; economic growth; economy; Mandelbaum, Michael, 169–170 Manhattan Project: and mass production of knowledge, 23 markets: and anarchy, 140, 150; and authority of, 159; and citizenship, 167; and development, 124, 192n.11; and domestic stability, 123, 158; and initial factor endowments, 124, 163; and food and famine, 96; and genetics, 91; and human nature, 151; and human rights, 32; as a moral institution, 150–152, 153; naturalization of, 86, 104–105, 106; niche, 27, 129; opportunities in, 184n.13; and peace, 9, 129, 154, 185n.5; and power, 123–124; as religion, 153; and rule structures, 124, 140, 161, 162, 167, 184n.3, 195n.13; and states, 32, 84, 131; self-regulating, 13, 16, 20, 140, 161, 162, 179–180, 184n.3; supply and demand in, 94, 96, 104; and threats, 187n.26; in water, 84, 105, 106. See also capitalism; economic growth; economy; globalization; liberalization; production; reproduction Marshall Plan, 20.
See also Cold War; containment; United States Marx, Karl, 17, 186n.14 Maslow, Abraham, 185n.2 Meadows, Dennis, 94, 95 Meadows, Donella, 95 Mead, Walter Russell, 140 liberal economics (continued) globalization; liberalization; markets liberalism, 194n.4; conception of the state in, 137; economic, conditions for, 84–85, 97; embedded, 20, 86, 147; English, 142; and the Free World, 42. See also Cold War liberalization, 1, 16, 60, 108, 122; and the Cold War, 19–21, 40–43; as a moral crusade, 152; and the New World Order, 135; and the U.S. national interest, 25, 43. See also capitalism; Cold War; economic growth; economy; globalization; liberalization; markets limits to growth, 86, 94–97, 102–105. See also sustainability liquidity, international, 19–21. See also dollar List, Friedrich, 2, 43 Litfin, Karen, 48 Locke, John, 8, 116 Long Peace, The, 63 Mackinder, Halford, 2, 85, 87, 112 Mcpherson, C.B., 92 Mahan, Admiral Thomas, 2, 85, 112 Major Regional Conflicts (MRC), 74–77. See also Gulf War Mann, Michael, 156, 175–176 Malthus, Reverend Thomas, 86, 94, 95; critiques of, 95 Manchester School, 124 Mearsheimer, John J., 35 mercantilism and neomercantilism, 16, 18, 20, 30, 86, 89, 129; in the Free World, 42, 147; in the Soviet Union, 43. See also economic growth; globalization; markets middle class: global, 30 Middle East: and water wars, 83–85, 189n.1. See also Gulf War military force, 40; mobilization of, 40; and the state, 1, 129; and strategy, 60; utility of, 66 militarization: and domestic policy, 150; of social welfare issues, 50, 51.
See also Cold War; containment millet system, 117 Milosevic, President Slobodan, 75 mobilization: for war, 40; by political entrepreneurs, 120; political, 176 Montreal Protocol on Substances that Deplete the Ozone Layer, 164 morality: and consumption, 152, 153; in domestic politics, 136, 145, 152; in international politics, 136, 153; of markets, 9–10, 150–153; of states, 9–10, 136–137, 140, 148; and raw materials, 151; and science, 142; and welfare, 145. See also authority; borders and boundaries Morgenthau, Hans, 2, 39 Moscovici, Serge, 118 Mother Teresa, 170 mujuhadeen in Bosnia, 129 multinational corporations, 36, 41; and oil crises, 101 municipal foreign policy. See cities Mutual Assured Destruction (MAD), 23, 133, 185n.7. See also nuclear deterrence nation, 170; ancient origins of, 143; constitutive elements of, 136; organic, 89–90, 143, 144, 189n.6. See also culture; ethnicity; nation-state nationalism, 17, 40, 115–116, 123, 131, 134; Earth, 181; and genetic determinism, 90; as imagined community, 110, 131; and intellectuals, 110, 176; and self-determination, 90; as a source of moral authority, 135, 142, 154; and sports, 116. See also culture; ethnicity; ideology; nation-state national interests, 117, 148, 185n.6; and borders, 58, 93; and global management, 25, 98; and individual interests, 146; and interdependence, 101. See also national security; states national security: broadening of, 47; and crises, 38; and expertise, 25; language, 47; policy, intersubjectivity of, 185n.7; policymakers and, 52; and sovereignty, 5; and state survival, 38; United States’ policy, 34, 48–58. See also enemies; security; states; threats netizens, 177 networks of knowledge and practice, 160, 177–178. See also information; knowledge Newton, Isaac, 142, 191n.22 New World Order, 135, 137, 159 Nietzsche, Friedrich, 49, 59 Nitze, Paul, 188n.3 Nixon, President Richard, 21, 101; Doctrine, 21, 26 non-governmental organizations (NGOs), 25, 168.
See also democratization; global civil society; governance; politics non-state actors, 103, 157, 168–171 non-states: legitimacy of, 127–128, 170–171 North Atlantic Treaty Organization (NATO), 34, 68, 71, 72, 129, 188n.6, 192n.15; campaign against Yugoslavia, 68, 75, 76, 78, 80; function of, in 1990s, 35, 61–62, 109, 187n.27; military strategies of, 60, 72. See also United States Northrup Grumman, 79 nuclear deterrence, 23, 61, 133, 147, 185n.7; credibility of, 67, 70; and discipline, 70, 71, 72; extended, 58. See also credibility; deterrence; ideology; nuclear weapons; threats nuclear disarmament, 71 nuclear family, 27 Nuclear Non-Proliferation Treaty, 164 nuclear war, 45, 60 National Security State, 37–45; definition of, 40; and domestic discipline, 43–44; end of, 41. See also states “National Security Strategy for a New Century,” 34, 65, 152. See also Clinton, President William Jefferson nation-state, 162, 167; alternatives to, 171–179; and citizenship, 166–167, 175–176; emergence of, 40–41, 142–145, 156, 175–176; and empires, 41; future of, 157, 160–165; and markets, 131; morality of, 136–137; as political community, 166; original, 142–143; and self-determination, 90. See also authority; borders and boundaries; nationalism; states natural community, 42, 98, 100, 147–148 naturalization: of biospheric limits, 104; of boundaries, 105; of culture, 88, 107; of ethnicity, 110; of genders, 169; of geography and national power, 85, 87–88; of ideology, 86; of markets, 86, 104–105, 106, 150–151; of national borders, 89, 103; of poverty and weakness, 167; of the state, 169. See also ideology negative organizing principles, 55. See also enemies; Huntington, Samuel; threats neoliberal peace, 9 neomedievalism, 130, 165, 181, 195n.12 nuclear weapons, 22, 183n.1; and mass production of, 23; and targeting, 23; and threat to use, 69; and use in war, 70.
See also Cold War; containment; deterrence; nuclear deterrence; United States; war Nye, Joseph S., Jr., 99–101 Ohmae, Kenichi, 123, 159, 179, 192n.10 oil crises, 21, 26; domestic effects, 101; and oil prices, 99, 190n.18; and Project Independence, 101; and the Synfuels Corporation, 101. See also Gulf War Onuf, Nicholas, 92, 195n.17 ontology: of security, 45; of states, 9, 143 organic intellectuals, 175–176 Organization of Petroleum Exporting Countries (OPEC), 99, 101. See also oil crises “Others,” 49, 51, 59, 61, 97, 100, 109, 130, 145, 187n.25 Pakulski, Jan, 171 peace, 185n.5; as interval between wars, 64, 87; movements, 133, 147, 188n.6; numbers of people living in a condition of, 155. See also war People’s Republic of China (PRC), 98; and nationalism, 123; as potential threat to the United States, 55. See also United States Pershing-II missiles, 70–71. See also Euromissiles Polanyi, Karl, 7, 13–19, 28, 29, 161, 175, 176, 179–180, 194n.7 political deterritorialization, 177–179. See also citizenship; counterhegemonic social movements; democratization; global civil society; politics political economy: and local and regional development, 125; international, 98. See also economy; economic growth; globalization; liberalization political entrepreneurs, 119–122, 132. See also ethnic and sectarian conflict politics: as authoritative allocation of values, 159; feminist, 168–169, 195n.14; and global civil society, 172; and political action, 155, 159–160, 168, 177, 181–182; and representation, 178. See also democratization; global civil society; governance power: domestic distribution of, 124; juridical, and production of subjects, 168–169; legitimacy of, 122, 165; national, 40, 137; police, 2; and state legitimacy, 142, 165. See also authority Poor Laws, 16–17, 183n.1 President’s Materials Policy Commission, 151 princes, 139, 141–142; and their subjects, 193n.8 realism, 36, 45, 97, 105, 123, 179; conception of state in, 136, 137; definition of, 93.
See also authority; borders and boundaries; power; security; states regimes, international, 98, 163–164, 174, 194n.3; accountability of, 164, 174. See also democratization; governance Reich, Robert, 26, 44 reproduction: and culture, 112; of domestic American conditions abroad, 20; of nonmaterial basis of state, 39; of security policies, 56; of societies, 117; and the state, 172–173. See also culture; production resources: common pool, 193n.6; control of, 110; distribution of among states, 93–95, 97, 101, 191n.20; management of, 93, 84, 102; and morality, 151; price of, 97, 190n.18; renewable, 190n.14, 190n.15; shortages of, 21; substitution for, 104, 190n.14; supply of, 40, 84, 93–95, 190n.14; trade in, 85; wars over, 9, 50, 83–85, 94, 99, 105, 107, 189n.3, 189n.4. See also flows; scarcity; sustainability Retreat of the State, The, 193n.1. See also Strange, Susan revolution: bourgeois, history of, 17, 175–176; electronics/information, 22; French, 143; nuclear, 22; leaders of, 94, 175–176; social, 17. See also industrial revolution privatization: of commons, 16, 28; of intellectual property, 29; of security, 45. See also property rights production, 6; and capitalism, 15–19, 94, 124, 161, 192n.11, 192n.13; and culture, 112; and military infrastructure, 39; niche, 129; and property, 94; social relations of, 15–16; and surplus capacity, 18, 22; and war, 81. See also capitalism; economic growth; globalization; liberalization; markets; reproduction property rights: commons and common pool, 16, 28, 193n.6; enclosure and elimination of, 16, 92, 145; intellectual, 29; of nations, historical, 144; and sovereignty, 85, 92–94, 97; state protection of, 136. See also resources; sovereignty Puritan Revolution and Commonwealth, 153 Quadrennial Defense Review, 57, 66–67, 73, 74 quantitative fallacy, 191n.23 RAND Corporation, 70 Rapid Deployment Force (RDF). See Central Command rationality, 74, 156.
See also choice; irrationality Reagan, President Ronald, 27, 133, 134, 135, 170 Index Revolution in Military Affairs (RMA), 65, 68, 71, 72, 81; and demonstration effect, 75, 77, 78. See also disciplinary deterrence; United States rights: human, 147; natural, 91, 116 risk, 31, 33; and uncertainty, 35, 62 “rogue” states, 74, 75, 78, 80, 129, 148–150. See also enemies; security; threats; terrorism Roosevelt, President Franklin D., 85 Rosenau, James, 6, 160, 172, 194n.6, 194n.10 Rousseau, Jean Jacques, 2, 116 Ruggie, John, 20, 142, 165, 180 237 Said, Edward, 100 Sakamoto, Yoshikazu, 156 San Francisco Chronicle, 127 San Francisco Examiner, 125 Sassen, Saskia, 178 Satan, 153 scapegoating, 114, 119. See also conspiracy theories; ethnic and sectarian conflict scarcity, 94–97; absolute and relative, 94; and borders, 97; of resources, 9, 84, 191n.20; and struggle, 86; and substitution, 104, 190n.14; of water, 83–85. See also flows; resources; sustainability Scheer, Robert, 70 Schelling, Thomas, 69 Schmidt, Helmut, 70, 71 Schwartzkopf, General Norman, 72, 76 Seaton, Jim, 9 sectarian conflict. See ethnic and sectarian conflict security: and Cold War conflicts, 129; defining and redefining, 37, 45, 48, 52, 56–57, 58–62, 185n.8, 186n.16; dilemma, 34, 45; and domestic discipline, 43, 187n.25; of employment, 44; elites, 52, 53, 55, 56; existential, 39; and the future environment, 66– 67; and geography, 87, 147; and the household, 45; and individualism, 4, 32, 37; institutions, 35; and the natural environment, 85; and nuclear weapons, 61, 147; ontology of, 45; policy, making of, 48–58; policy, consensual basis of, 54; referent object of, 57; regime, 35; social construction of, 36, 48–58, 187n.23; as speech act, 53; and the state, 7–8, 131, 141; studies, 57; supply of and demand for, 38; and surveillance, 50, 80, 148–150; and uncertainty, 33, 35, 46, 54, 158. See also enemies; surveillance; threats; states; war securitization, 53. 
See also speech acts self-interest, 4, 5, 27, 130, 132, 153; and genetics, 90; and national security policy, 54; and self-regulation, 153; and 238 self-interest (continued) social warfare, 126; and sovereignty, 140, 159. See also individualism Sen, Amartya, 96 Shapiro, Michael J., 177 Silicon Valley, 125, 182n.11, 192n.13 Skocpol, Theda, 173, 174 Smith, Adam (1723–90), 27, 140, 150, 186n.14 social change, 13, 175–176, 191n.4; and capitalism, 15 social contract, 17, 116–119, 166, 191n.6; disintegration of, 118, 158; fairness of, 117, 123– 124; Free World, 147. See also authority; justice Social Darwinism, 18, 40, 86, 88, 151 social organization, 156; and production, 6, 15–16; and reproduction, 6; and scientific research, 22–23. See also production; reproduction social work and U.S. foreign policy, 170 sociobiology, 109 sovereignty, 6, 85; and borders and boundaries, 86, 92, 93, 96–97; definition of, 91–94; and domestic politics, 100, 140; disintegration of, 3, 90, 123, 161; and individualism, 4, 32, 42, 91, 92, 96–97, 140; and liberalism, 92; as mode of exclusion, 85; and morality, 139, 141, 147; and national security, 5, 32, 66; and property, 85, 92, 97, 130; and Index resources, 84; violations of, 93; and Westphalia, 130. See also borders and boundaries; property rights Soviet Union, 20, 21, 36, 88, 98, 183n.1, 188n.4, 188n.5; collapse of nonmaterial basis, 39; and domestic discipline, 43–44; inability to innovate, 24, 43, 184n.10; and the Resource War, 50; threats from, 76, 87, 99, 129, 133, 134, 135, 185n.1; and the SS–20s, 70–71, 188n.5, 188n.6. See also Cold War; containment; United States speech acts, 53, 69 Spykman, Nicholas, 2, 85, 87, 112 SS–20 missiles, 70–71, 188n.5, 188n.6. 
See also Euromissiles states: as biological organisms, 88–89, 104; and citizens, 39, 41, 166–167, 178–179; and citizens as threats, 51, 144, 148–150; and climate, 89; and comparative advantage, 124; composition and structure of, 38, 115, 173; cooperation among, 103; domination within, 121; and economic expansion, 16; and elites, 110–111, 121; and exclusivity, 109, 130, 167; and family politics, 139; forms of, 38; future of, 160– 165; and geography, 85, 87, 187n.24; and globalization, 3, 157, 160–165; goals and role of, 38, 39, 169; as heir to the Index Catholic Church, 137, 143; idea of, 53, 58, 84, 115, 123; and innovation, 89; institutions of, 5, 53, 115, 123, 173; intellectual reification of and commitment to, 122–123; and legitimacy, 54, 55, 116–117, 122, 123, 135, 137, 143, 160, 165, 172–173; and loyalty to, 5, 55, 110, 116, 160, 167; material basis of, 36, 53, 84, 88, 115, 123, 185n.15; as a mental construct, 39, 53, 130; and monopoly of violence, 3; and morality, 134, 136–137, 139, 141, 142–148; and multinational corporations, 41, 173; pivotal, 33; and power, 40, 44, 53, 87, 93, 97, 110, 111, 137, 160, 165; proliferation in numbers, 109, 127– 130, 181, 185n.3, 192n.14; and property rights, 92, 97; redefining, 39; as referent object of security, 57; relations with civil society, 172–173, 176, 194n.4, 195n.16; and rents accrued from rule of, 133, 192n.9; and social contracts, 116; and sports, 116; stability of, 117; stages of life, 18, 39, 88–89; survival of, 38, 136, 193n.5; and territory, 40, 79, 84, 86, 89, 102, 110, 157, 177–179; and welfare, 40, 126; world, 161. See also authority; borders and boundaries; enemies; globalization; governance; security; threats 239 State of Nature, 2, 5, 8, 9, 37, 84, 91, 132, 158, 193n.5, 194n.3. See also anarchy state system, international: and the end of bipolarity, 35; and globalization, 157; norms of, 109, 127; legitimacy of, 122. 
See also states Steinbruner, John, 39 Stockholm Declaration, 92, 93, 102, 103, 190n.12 Strange, Susan, 161, 182n.2, 193n.1 Strategic Defense Initiative, 133–134, 187n.25; moral purpose, 134; technical feasibility, 134 success and failure: explanation of, 114, 184n.13 surveillance, 80, 112, 148–150, 153, 165, 185n.14, 193n.13; by FBI abroad, 150. See also security; terrorism; threats sustainability, 86, 102–105, 190n.14; definition of, 104. See also flows; limits to growth; resources; scarcity symbolic analysts, 26, 44 think tanks, 25–26. See also education; expertise technological: change as a cause of disorder, 15; dominance, 43, 184n.10. See also innovation; social change terrorism, 32, 46, 129, 137, 149; counter–, 150, 188n.7; and surveillance, 50, 148–150; and asymmetric threats, 67, 148, 240 Index Union of Soviet Socialist Republics: See Soviet Union. United States: allies, 98; BottomUp Review, 74; capabilities and weaknesses, 67; Central Intelligence Agency, 189n.11; Congress, 46, 128, 134; decline during the 1980s, 98; and democratic enlargement, 116, 148, 184n.5; domestic politics, 99, 100–101, 136, 137, 147, 152, 187n.23; enemies, 64, 66, 79, 148–150, 186n.12; Federal Bureau of Investigation, 150; as hegemon and global manager, 98, 128, 147, 159; and international moral order, 138, 150– 153; leadership, 98–99, 101; legitimacy of institutions, 118, 191n.5; and morality in foreign policy, 146–148; Mutual Defense Act, 20; National Security Agency, 47; National Security Council, 129; national security policy, 34, 41, 48–58, 60, 74–80, 87, 146–148, 185n.1, 187n.23, 189n.7, 193n.15; and oil crises, 99, 101, 190n.18; and peacekeeping, 77, 170, 187n.21; Pentagon, 57, 66– 67; potential for fragmentation of, 108, 126, 132; relations with Peoples’ Republic of China, 55, 98; and rogue states, 74, 75, 129, 148–150; social discipline during the 1950s, 44, 147; and social work as foreign terrorism (continued) 183n.1. 
See also enemies; security; surveillance; threats Thirty Years War, 134, 138 threats, 4–5; assessment of, 35, 51; asymmetric, 67, 148, 183n.1; to the body politics, 51; breadth of impacts across polity, 45, 47; credibility of, 67, 68, 75; and discipline, 69, 137, 146–148, 193n.13; external, 117, 149; imagined, 49–50, 68, 69, 111, 144, 148–150, 155, 186n.18, 188n.3; impacts of, 46–47, 66; proliferation of, 34, 44– 46, 49, 66, 137; rhetoric of, 46, 47, 53, 155; social construction of, 36, 50, 51, 56, 148–150, 186n.17, 186n.18; to security, 33–34, 67, 149, 155, 187n.26, 193n.13; to state system, 73. See also borders and boundaries; enemies; identities; security; surveillance; terrorism Thucydides, 2, 94 Tilly, Charles, 32, 118, 184n.7 trade: blocs, 162; and harmonization of rules, 163; and peace, 37, 85; and strategic goods, 42; and weapons, 85. See also economic growth; globalization; markets Triffin Dilemma, 19, 21, 183n.2 Triffin, Robert, 21 Truman Doctrine, 20, 146 Truman, President Harry, 47, 146, 151 Turner, Brian, 166 Index policy, 170; threats to, 33–34, 44–45, 87, 99, 129, 137, 148–150, 187n.23. See also Cold War; containment; liberalization 241 Vandenburg, Senator Arthur, 47 Vietnam War, 25 virtual: communities, 177; money, 29, 185n.15; war, 8, 78. 
See also electronic von Krosigk, Count, 93 Wæver, Ole, 53, 165, 181 Walker, R.B.J., 191n.2 Wall Street Journal, 127 Waltz, Kenneth, 2, 140, 185n.5 Wapner, Paul, 168 war, 8, 188n.8; causes of, 38, 65, 73, 84, 105, 108, 109–115; civil, 9, 35, 108, 113, 115; conventional, 77; costs of, 63–64, 67–68; and democracy, 148, 184n.5; discourses of, 65, 148; on drugs, 50; and economic intercourse, 38, 185n.5; future, 72–73; imaginary, 63, 69, 73; interstate, 73; just, 193n.8; as a moral event, 142; and municipal economic development as, 125; low-technology, 77, 94; nuclear, 134; objectives of, 79; over idea(l)s, 108; over resources, 9, 50, 83–85, 94, 99, 105, 108, 189n.3, 189n.4; over water, 83–85, 189n.1, 189n.5; postmodern, 36, 158; social construction of, 64; as speech act, 69; virtual, 8. See also conflict; ethnic and sectarian conflict warfare: information, 46–48, 67; social, 10, 77, 115, 132. See also conflict; war; virtual Warsaw Pact, 20, 72 water: as a commodity, 85, 105; wars over, 83–85, 93, 105, 106, 189n.1, 189n.5. See also flows; Middle East; resources; scarcity Waters, Malcolm, 171 weapons of mass destruction: nuclear, 22, 183n.1; and terrorism, 7, 35, 46, 149. See also nuclear deterrence; nuclear weapons; threats; terrorism Weinberger, Caspar, 70 welfare and morality, 145, 151 welfare state, 17, 40, 166; attack on, 26, 145, 151; in nineteenth-century England, 28; reduction of, 27, 109, 162; securitization, of, 50– 51. See also globalization; liberalization; morality Westphalia, Treaty of, 138–139, 141–142, 193n.4; as a social contract for European society, 139–140. See also borders and boundaries; princes; social contract White, Harry Dexter, 19 Wilson, President Woodrow, 90 World Bank, 19, 83–85, 164 242 Index 184n.7; Three, 8, 23, 60–61, 63–64, 67, 69, 81, 161 World Wide Web, 177. 
See also electronic, networks, virtual X-Files, 189n.11 Yeats, William B., 36 Yeltsin, President Boris, 54 Young, Iris Marion, 171, 178 World Commission on Environment and Development (WCED), 103–104. See also sustainability World Federalism, 159 World Trade Organization, 20, 162, 163, 173. See also trade World War: One, 17, 40; Two, 3, 14, 15, 22, 23, 40–41, 54, 87, 144, 146, 158, 167, 180,
https://www.scribd.com/doc/98583999/After-Authority-War-Peace-And-Global-Politics-in-the-21st-Century
#include <hallo.h>

* William WAISSE [Sun, Dec 22 2002, 09:47:37PM]:

> > > > ... and would you like to pay your mirror provider for additional disk
> > > > space and traffic?

> You mean Debian has difficulties finding 10 GB of online disk space ?

What is "Debian"? You talk about space on mirrors, mirrors provided by
Debian friends. Space on highly available servers costs money.

> A bare 4 GB would be enough since sources are the same .

Woody only. Count Sarge and Sid.

> My LUG and myself could be interested providing disk space for this.

No one can forbid you to do such a thing. If you wish to do it, get
your hands dirty, contact people that are working on similar projects,
etc. This discussion comes to daylight every 1-3 months, some people
flame around, and it becomes quiet again. Why? Maybe because
optimisation does not give the performance gain they expected?

> > > i dont see how this would imply a spike in traffic. the whole point of
> > He? You do not need additional space for having two versions?
> Aditional space for binaries, yes, but nearly no additional traffic .

Wrong. While proxies have to cache only one version today, they would
have to cache multiple versions in your case.

Gruss/Regards,
Eduard.
--
#include <welt_ist_doch_ein_dorf.h>

Attachment: pgpUaru2sZys0.pgp
Description: PGP signature
https://lists.debian.org/debian-devel/2002/12/msg01385.html
Introduction

During the operation of most programs, errors may occasionally occur. Handling them adequately is one of the important aspects of high-quality, maintainable software. This article covers the main methods of error handling, recommendations for their use, and logging with MQL5 tools.

Error handling is a relatively difficult and controversial subject. There are many ways of handling errors, each with certain advantages and disadvantages. Many of these methods can be used together, but there is no universal formula: each specific task requires an adequate approach.

Basic methods of error handling

If a program encounters errors during its operation, it usually has to perform some action (or several actions) to keep functioning properly. Typical actions include the following.

Stop the program. For some errors the most appropriate action is to stop the running program. Normally these are critical errors that make further operation either pointless or simply dangerous. MQL5 interrupts execution automatically for certain run-time errors: for example, in the case of "division by zero" or "array out of range" the program ceases its operation. Other cases of termination the programmer must handle himself. For example, in Expert Advisors the ExpertRemove() function can be used:

Example of stopping an Expert Advisor with the ExpertRemove() function:

void OnTick()
  {
   bool resultOfSomeOperation=SomeOperation();
   if(!resultOfSomeOperation)
     {
      Alert("fail");
      ExpertRemove();
      return;
     }
  }

Convert incorrect values to the range of correct values. A certain value frequently has to fall within a specified range, but in some cases values outside of this range may appear. The value can then be forced back to the nearest acceptable boundary. A calculation of the open position volume can be used as an example.
If the resulting volume is outside the minimum and maximum available values, it can be forced to return within these borders: Example of converting incorrect values to the correct value range double VolumeCalculation() { double result=...; result=MathMax(result,SymbolInfoDouble(Symbol(),SYMBOL_VOLUME_MIN)); result=MathMin(result,SymbolInfoDouble(Symbol(),SYMBOL_VOLUME_MAX)); return result; } However, if for some reason a volume turned out higher than the maximum border, and the deposit is unable to sustain such load, then it is advisable to log and to abort the program execution. Fairly frequently this particular error is threatening for an account. Return an error value. In this case if an error occurs, then a certain method or function must return a predetermined value that will signal an error. For example, if our method or function has to return a string, then NULL may be returned in the case of an error. Example of an error value return: #define SOME_STR_FUNCTION_FAIL_RESULT (NULL) string SomeStrFunction() { string result=""; bool resultOfSomeOperation=SomeOperation(); if(!resultOfSomeOperation) { return SOME_STR_FUNCTION_FAIL_RESULT; } return result; } void OnTick() { string someStr=SomeStrFunction(); if(someStr==SOME_STR_FUNCTION_FAIL_RESULT) { Print("fail"); return; } } Nevertheless, such approach can lead to programming mistakes. If this action is not documented, or if a programmer doesn't familiarize himself with a document or a code implementation, then he will not be aware of the possible error value. Moreover, problems may occur, if a function or a method can return almost any value in the normal mode of operation, including the one with an error. Assign the execution result to a special global variable. Frequently this approach is applied for methods and functions that do not return any values. 
The idea is that the result of this method or function is assigned to a certain global variable, and then the value of this variable is checked in the calling code. For this purpose there is a default function (SetUserError()) in MQL5. Example of assigning an error code with SetUserError() #define SOME_STR_FUNCTION_FAIL_CODE (123) string SomeStrFunction() { string result=""; bool resultOfSomeOperation=SomeOperation(); if(!resultOfSomeOperation) { SetUserError(SOME_STR_FUNCTION_FAIL_CODE); return ""; } return result; } void OnTick() { ResetLastError(); string someStr=SomeStrFunction(); if(GetLastError()==ERR_USER_ERROR_FIRST+SOME_STR_FUNCTION_FAIL_CODE) { Print("fail"); return; } } In this case a programmer may be unaware of the possible errors, however, this approach allows to inform not only about an error, but also to indicate its specific code. This is particular important, if there are several sources of error. Return the execution result as a bool and the resulting value as a variable passed by a reference. Such approach is slightly better than the previous two, as it leads to a lower probability of programming mistakes. It's difficult not to notice, that a method or a function may be unable to operate properly: Example of returning a function operation result as a bool bool SomeStrFunction(string &value) { string resultValue=""; bool resultOfSomeOperation=SomeOperation(); if(!resultOfSomeOperation) { value=""; return false; } value=resultValue; return true; } void OnTick() { string someStr=""; bool result=SomeStrFunction(someStr); if(!result) { Print("fail"); return; } } This and the previously mentioned option can be combined, if there are several different errors, and we need to identify the exact one. A false can be returned, and a global variable have an error code assigned. 
Example of returning a function result as a bool and assigning an error code with SetUserError() #define SOME_STR_FUNCTION_FAIL_CODE_1 (123) #define SOME_STR_FUNCTION_FAIL_CODE_2 (124) bool SomeStrFunction(string &value) { string resultValue=""; bool resultOfSomeOperation=SomeOperation(); if(!resultOfSomeOperation) { value=""; SetUserError(SOME_STR_FUNCTION_FAIL_CODE_1); return false; } bool resultOfSomeOperation2=SomeOperation2(); if(!resultOfSomeOperation2) { value=""; SetUserError(SOME_STR_FUNCTION_FAIL_CODE_2); return false; } value=resultValue; return true; } void OnTick() { string someStr=""; bool result=SomeStrFunction(someStr); if(!result) { Print("fail, code = "+(string)(GetLastError()-ERR_USER_ERROR_FIRST)); return; } } However, this option is difficult for interpretation (when reading the code) and further support. Return the result as a value from the enumeration (enum), and the resulting value (if any) as a variable passed by a reference. If there are several types of possible errors, in case of failure this option allows to return a specific error type without the use of global variables. Only one value will correspond to the correct execution, and the rest will be considered as error. 
Example of returning a function operation result as an enumeration value (enum) enum ENUM_SOME_STR_FUNCTION_RESULT { SOME_STR_FUNCTION_SUCCES, SOME_STR_FUNCTION_FAIL_CODE_1, SOME_STR_FUNCTION_FAIL_CODE_2 }; ENUM_SOME_STR_FUNCTION_RESULT SomeStrFunction(string &value) { string resultValue=""; bool resultOfSomeOperation=SomeOperation(); if(!resultOfSomeOperation) { value=""; return SOME_STR_FUNCTION_FAIL_CODE_1; } bool resultOfSomeOperation2=SomeOperation2(); if(!resultOfSomeOperation2) { value=""; return SOME_STR_FUNCTION_FAIL_CODE_2; } value=resultValue; return SOME_STR_FUNCTION_SUCCES; } void OnTick() { string someStr=""; ENUM_SOME_STR_FUNCTION_RESULT result=SomeStrFunction(someStr); if(result!=SOME_STR_FUNCTION_SUCCES) { Print("fail, error = "+EnumToString(result)); return; } } Eliminating global variables is a very important advantage of this approach, as incompetent or negligent handling may cause serious issues. Return a result as a structure instance consisting of a Boolean variable or an enumeration value (enum) and a resulting value. This option is related to the previous method which eliminates the necessity for passing variables by a reference. The use of enum is preferable here, as it will allow to extend the list of possible execution results in the future. 
Example of returning a function operation result as a structure instance consisting of enumeration values (enum) and resulting values enum ENUM_SOME_STR_FUNCTION_RESULT { SOME_STR_FUNCTION_SUCCES, SOME_STR_FUNCTION_FAIL_CODE_1, SOME_STR_FUNCTION_FAIL_CODE_2 }; struct SomeStrFunctionResult { ENUM_SOME_STR_FUNCTION_RESULT code; char value[255]; }; SomeStrFunctionResult SomeStrFunction() { SomeStrFunctionResult result; string resultValue=""; bool resultOfSomeOperation=SomeOperation(); if(!resultOfSomeOperation) { result.code=SOME_STR_FUNCTION_FAIL_CODE_1; return result; } bool resultOfSomeOperation2=SomeOperation2(); if(!resultOfSomeOperation2) { result.code=SOME_STR_FUNCTION_FAIL_CODE_2; return result; } result.code=SOME_STR_FUNCTION_SUCCES; StringToCharArray(resultValue,result.value); return result; } void OnTick() { SomeStrFunctionResult result=SomeStrFunction(); if(result.code!=SOME_STR_FUNCTION_SUCCES) { Print("fail, error = "+EnumToString(result.code)); return; } string someStr=CharArrayToString(result.value); } Attempt an execution of operation few times. It is frequently worth attempting an operation several times before considering it unsuccessful. For example, if a file can't be read as it is used by another process, then several attempts with increasing time intervals should be made. There is a high possibility that another process will free the file, and our method or function will be able to refer to it. Example of making several attempts to open a file string fileName="test.txt"; int fileHandle=INVALID_HANDLE; for(int iTry=0; iTry<=10; iTry++) { fileHandle=FileOpen(fileName,FILE_TXT|FILE_READ|FILE_WRITE); if(fileHandle!=INVALID_HANDLE) { break; } Sleep(iTry*200); } Note: The example above shows only the essence of this approach, a practical application requires the occurring errors to be analyzed. If, for example, an error 5002 (Invalid file name) or 5003 (The file name is too long) occurs, then there is no point to make any further attempts. 
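The note above can be made concrete. Below is a minimal sketch (not from the original listing) of the same file-open loop extended with error analysis: it gives up immediately on the non-retryable codes 5002 (invalid file name) and 5003 (file name too long) mentioned above, and otherwise retries with an increasing pause. The retry count and delays are arbitrary illustration values.

```mql5
//--- attempt to open a file several times, but stop immediately on non-retryable errors
string fileName="test.txt";
int    fileHandle=INVALID_HANDLE;
for(int iTry=0; iTry<=10; iTry++)
  {
   ResetLastError();
   fileHandle=FileOpen(fileName,FILE_TXT|FILE_READ|FILE_WRITE);
   if(fileHandle!=INVALID_HANDLE)
     {
      break; // success, the handle is valid
     }
   int error=GetLastError();
   //--- 5002 = invalid file name, 5003 = file name too long: retrying is pointless
   if(error==5002 || error==5003)
     {
      Print("non-retryable file error: ",error);
      break;
     }
   Sleep(iTry*200); // increasing pause before the next attempt (Sleep() is unavailable in indicators)
  }
```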
Furthermore, keep in mind that this approach shouldn't be applied in systems where any overall performance slowdown is undesirable.

Notify the user explicitly. Users have to be explicitly notified about certain errors (via a pop-up window, a chart label, etc.). Explicit notifications are frequently combined with suspending or stopping the program completely. For example, if an account has insufficient funds, or a user has entered incorrect input parameter values, the user should be clearly notified about it.

Example of notifying a user about invalid input parameters:

input uint MAFastPeriod = 10;
input uint MASlowPeriod = 200;

int OnInit()
  {
//---
   if(MAFastPeriod>=MASlowPeriod)
     {
      Alert("The period of the fast moving average has to be less than the period of the slow moving average!");
      return INIT_PARAMETERS_INCORRECT;
     }
//---
   return(INIT_SUCCEEDED);
  }

There are certainly other methods of error handling; the list above only covers the most commonly used ones.

General recommendations for error handling

Choose an adequate level of error handling. Different programs have completely different error handling requirements. It is fine to skip error handling in a small script that will be used only a few times to check a minor idea and won't be shared with a third party. On the contrary, it is appropriate to process all possible errors in a project with hundreds or thousands of potential users. Always try to have a good understanding of the error processing level required in each specific case.

Choose an adequate level of user interaction. Explicit user interaction is required only for certain errors: otherwise the program can carry on operating without any user notifications. It is important to find a middle way: users shouldn't get bombarded with error warnings or, on the contrary, receive zero notifications in critical situations.
The following approach may offer a good solution: users are explicitly notified about any critical errors or situations that require their involvement, while everything else goes to log files.

Verify the results of all functions and methods that return them. If a function or method can return values that indicate errors, it is best to check them. An opportunity to improve the program's quality shouldn't be neglected.

Check the conditions before performing certain operations, if possible. For example, check the following before attempting to open a trade:

- Are trading robots allowed in the terminal: TerminalInfoInteger(TERMINAL_TRADE_ALLOWED).
- Are trading robots allowed on the account: AccountInfoInteger(ACCOUNT_TRADE_EXPERT).
- Is there a connection to the trading server: TerminalInfoInteger(TERMINAL_CONNECTED).
- Are the parameters of the trading operation correct: OrderCheck().

Keep an eye on how often various parts of the program run. A common example is a trailing stop loss that doesn't take into account the frequency of requests to the trading server. Such a function is normally called on every tick. If there is a continuous one-way price movement, or errors occur while modifying positions, the function can send trade modification requests on almost every tick (or multiple requests for multiple positions). When quotes arrive infrequently, this causes no problems. Otherwise there may be serious issues: extremely frequent trade modification requests may get the account disconnected from automated trading by the broker, followed by unpleasant conversations with customer support. The easiest solution is to limit how often trade modification requests are attempted: remember the time of the previous request and don't attempt another one until a certain interval (30 seconds in the example below) has passed.
Example of running a trailing stop loss function no more often than once every 30 seconds:

const int TRAILING_STOP_LOSS_SECONDS_INTERVAL=30;

void TrailingStopLoss()
  {
   static datetime prevModificationTime=0;
   if((int)TimeCurrent()-(int)prevModificationTime<=TRAILING_STOP_LOSS_SECONDS_INTERVAL)
     {
      return;
     }
   //--- Stop Loss modification
     {
      ...
      ...
      ...
      prevModificationTime=TimeCurrent();
     }
  }

The same problem may occur when you attempt to place too many pending orders within a short period of time, as the author has experienced himself.

Aim for an adequate balance of stability and correctness. A compromise between the code's stability and correctness should be found when writing a program. Stability implies that the program keeps working in the presence of errors, even if that leads to slightly inaccurate results. Correctness doesn't allow returning inaccurate results or performing wrong actions: results have to be either accurate or absent completely, which means it is better to stop the program than to return inaccurate results or do something else wrong. For example, if an indicator can't calculate something, it is better to show no signal than to shut down completely. On the contrary, it is better for a trading robot to stop its work than to open a trade with an excessive volume. Moreover, a trading robot can send a PUSH notification before stopping, allowing users to learn about the issue and handle it promptly.

Display useful information about errors. Try to make error messages informative. It is not enough for the program to respond with "unable to open a deal" and no further explanation. It is advisable to give a more specific message like "unable to open a deal: incorrect volume of the opened position (0.9999)". It doesn't matter whether the program displays error information in a pop-up window or in a log file.
In any case it should be sufficient for a user or a programmer (especially in the log file analysis) to understand the cause of an error and fix it, if possible. However, users shouldn't be overloaded with information: it is not necessary to display an error's code in a pop-up window, since a user is unable to do much with it. Logging with MQL5 tools Log files are normally created by the program specifically for programmers to facilitate the search of failure/error reasons and to evaluate the system's condition at a specific moment in time etc. In addition to that, logging can be used for software profiling. Levels of logging Messages received in the log files often carry different criticality and require different levels of attention. Logging levels are applied to separate messages with various criticality from each other and to have the ability to customize the criticality degree of displayed messages. Several logging levels are normally implemented: - Debug: debug messages. This level of logging is included in the development, debugging and commissioning stages. - Info: informative messages. They carry information about various system activities (e.g. start/end of operation, opening/closing of orders etc). This level messages usually don't require any reaction, but can significantly assist in studying the chains of events that led to operation errors. - Warning: warning messages. This level of messages may include a description of situations that led to errors that don't require user intervention. For example, if the calculated trade amount is less than the minimum, and the program has automatically corrected it, then it can be reported with a «Warning» level in the log file. - Error: error messages that require intervention. This logging level is typically used with the occurrence of errors linked to issues with saving a certain file, opening or modifying deals etc. 
In other words, this level includes errors that the program is unable to overcome itself and, therefore, requires a user's or programmer's intervention. - Fatal: critical error messages that disable further program operation. Such messages need to be treated instantly, and often a user's or programmer's notification via email, SMS, etc. is provided at this level. Soon we are going to show you, how PUSH notifications are used in MQL5. Maintaining log files The easiest way of maintaining log files with MQL5 tools is via the standard functions Print or PrintFormat. As a result, all messages will be sent to the Expert Advisor, indicator and terminal script log. Example of displaying messages in a common Experts log with Print() function double VolumeCalculation() { double result=...; if(result<SymbolInfoDouble(Symbol(),SYMBOL_VOLUME_MIN)) { Print("volume of a deal (",DoubleToString(result,2),") appeared to be less than acceptable and has been adjusted to "+DoubleToString(SymbolInfoDouble(Symbol(),SYMBOL_VOLUME_MIN),2)); result=SymbolInfoDouble(Symbol(),SYMBOL_VOLUME_MIN); } return result; } This approach has several disadvantages: - Messages from multiple programs can get mixed up in the total "bunch" and complicate the analysis. - Due to a log file's easy availability it can be accidentally or deliberately deleted by a user. - It is difficult to implement and configure logging levels. - It is impossible to redirect log messages to another source (external file, database, e-mail, etc.). - It is impossible to implement a compulsory rotation of log files (file replacement by data and time or upon reaching a certain size). Advantage of this approach: - It is sufficient to use the same function without having to invent anything. - In many cases the log file can be viewed directly in the terminal and it doesn't have to be searched separately. The implementation of a personal logging mechanism can eliminate all disadvantages of using Print() and PrintFormat(). 
However, if the code needs to be reused, moving to a new project will also require moving the logging mechanism (or abandoning its use in the code). The following can be considered as an example of a logging mechanism in MQL5.

Example of a custom logging mechanism implementation in MQL5:

//+------------------------------------------------------------------+
//|                                                       logger.mqh |
//|                                   Copyright 2015, Sergey Eryomin |
//|                                                                  |
//+------------------------------------------------------------------+
#property copyright "Sergey Eryomin"
#property link      ""

#define LOG(level, message) CLogger::Add(level, message+" ("+__FILE__+"; "+__FUNCSIG__+"; Line: "+(string)__LINE__+")")

//--- maximum number of files for operation of "a new log file for each new 1 Mb"
#define MAX_LOG_FILE_COUNTER (100000)
//--- number of bytes in a megabyte
#define BYTES_IN_MEGABYTE (1048576)
//--- maximum length of a log file's name
#define MAX_LOG_FILE_NAME_LENGTH (255)
//--- logging levels
enum ENUM_LOG_LEVEL
  {
   LOG_LEVEL_DEBUG,
   LOG_LEVEL_INFO,
   LOG_LEVEL_WARNING,
   LOG_LEVEL_ERROR,
   LOG_LEVEL_FATAL
  };
//--- logging methods
enum ENUM_LOGGING_METHOD
  {
   LOGGING_OUTPUT_METHOD_EXTERN_FILE, // external file
   LOGGING_OUTPUT_METHOD_PRINT        // Print function
  };
//--- notification methods
enum ENUM_NOTIFICATION_METHOD
  {
   NOTIFICATION_METHOD_NONE,  // disabled
   NOTIFICATION_METHOD_ALERT, // Alert function
   NOTIFICATION_METHOD_MAIL,  // SendMail function
   NOTIFICATION_METHOD_PUSH   // SendNotification function
  };
//--- log files restriction types
enum ENUM_LOG_FILE_LIMIT_TYPE
  {
   LOG_FILE_LIMIT_TYPE_ONE_DAY,     // new log file for every new day
   LOG_FILE_LIMIT_TYPE_ONE_MEGABYTE // new log file for every new 1 Mb
  };
//+------------------------------------------------------------------+
//|                                                                  |
//+------------------------------------------------------------------+
class CLogger
  {
public:
   //--- add a message to the log
   //--- Note:
   //--- if output mode to external file is on but it can't be executed,
   //--- then message output is done via Print()
   static void Add(const ENUM_LOG_LEVEL level,const string message)
     {
      if(level>=m_logLevel)
        {
         Write(level,message);
        }
      if(level>=m_notifyLevel)
        {
         Notify(level,message);
        }
     }
   //--- set logging levels
   static void SetLevels(const ENUM_LOG_LEVEL logLevel,const ENUM_LOG_LEVEL notifyLevel)
     {
      m_logLevel=logLevel;
      //--- a level of message output via notifications shouldn't be below a level of writing messages in a log file
      m_notifyLevel=fmax(notifyLevel,m_logLevel);
     }
   //--- set a logging method
   static void SetLoggingMethod(const ENUM_LOGGING_METHOD loggingMethod)
     {
      m_loggingMethod=loggingMethod;
     }
   //--- set a notification method
   static void SetNotificationMethod(const ENUM_NOTIFICATION_METHOD notificationMethod)
     {
      m_notificationMethod=notificationMethod;
     }
   //--- set a name for a log file
   static void SetLogFileName(const string logFileName)
     {
      m_logFileName=logFileName;
     }
   //--- set a type of restriction for a log file
   static void SetLogFileLimitType(const ENUM_LOG_FILE_LIMIT_TYPE logFileLimitType)
     {
      m_logFileLimitType=logFileLimitType;
     }

private:
   //--- messages with this and higher logging level will be stored in a log file/journal
   static ENUM_LOG_LEVEL m_logLevel;
   //--- messages with this and higher logging level will be sent as notifications
   static ENUM_LOG_LEVEL m_notifyLevel;
   //--- logging method
   static ENUM_LOGGING_METHOD m_loggingMethod;
   //--- notification method
   static ENUM_NOTIFICATION_METHOD m_notificationMethod;
   //--- name of log file
   static string m_logFileName;
   //--- type of restriction for a log file
   static ENUM_LOG_FILE_LIMIT_TYPE m_logFileLimitType;
   //--- a result of getting a file name for a log
   struct GettingFileLogNameResult
     {
      GettingFileLogNameResult(void)
        {
         succes=false;
         ArrayInitialize(value,0);
        }
      bool succes;
      char value[MAX_LOG_FILE_NAME_LENGTH];
     };
   //--- a result for checking the size of an existing log file
   enum ENUM_LOG_FILE_SIZE_CHECKING_RESULT
     {
      IS_LOG_FILE_LESS_ONE_MEGABYTE,
      IS_LOG_FILE_NOT_LESS_ONE_MEGABYTE,
      LOG_FILE_SIZE_CHECKING_ERROR
     };
   //--- write in a log file
   static void Write(const ENUM_LOG_LEVEL level,const string message)
     {
      switch(m_loggingMethod)
        {
         case LOGGING_OUTPUT_METHOD_EXTERN_FILE:
           {
            GettingFileLogNameResult getLogFileNameResult=GetLogFileName();
            //---
            if(getLogFileNameResult.succes)
              {
               string fileName=CharArrayToString(getLogFileNameResult.value);
               //---
               if(WriteToFile(fileName,GetDebugLevelStr(level)+": "+message))
                 {
                  break;
                 }
              }
           }
         case LOGGING_OUTPUT_METHOD_PRINT:
         default:
           {
            Print(GetDebugLevelStr(level)+": "+message);
            break;
           }
        }
     }
   //--- execute a notification
   static void Notify(const ENUM_LOG_LEVEL level,const string message)
     {
      if(m_notificationMethod==NOTIFICATION_METHOD_NONE)
        {
         return;
        }
      string fullMessage=TimeToString(TimeLocal(),TIME_DATE|TIME_SECONDS)+", "+Symbol()+" ("+GetPeriodStr()+"), "+message;
      //---
      switch(m_notificationMethod)
        {
         case NOTIFICATION_METHOD_MAIL:
           {
            if(TerminalInfoInteger(TERMINAL_EMAIL_ENABLED))
              {
               if(SendMail("Logger",fullMessage))
                 {
                  return;
                 }
              }
           }
         case NOTIFICATION_METHOD_PUSH:
           {
            if(TerminalInfoInteger(TERMINAL_NOTIFICATIONS_ENABLED))
              {
               if(SendNotification(fullMessage))
                 {
                  return;
                 }
              }
           }
        }
      //---
      Alert(GetDebugLevelStr(level)+": "+message);
     }
   //--- obtain a log file name for writing
   static GettingFileLogNameResult GetLogFileName()
     {
      if(m_logFileName=="")
        {
         InitializeDefaultLogFileName();
        }
      //---
      switch(m_logFileLimitType)
        {
         case LOG_FILE_LIMIT_TYPE_ONE_DAY:
           {
            return GetLogFileNameOnOneDayLimit();
           }
         case LOG_FILE_LIMIT_TYPE_ONE_MEGABYTE:
           {
            return GetLogFileNameOnOneMegabyteLimit();
           }
         default:
           {
            GettingFileLogNameResult failResult;
            failResult.succes=false;
            return failResult;
           }
        }
     }
   //--- get a log file name in case of restriction with "new log file for every new day"
   static GettingFileLogNameResult GetLogFileNameOnOneDayLimit()
     {
      GettingFileLogNameResult result;
      string fileName=m_logFileName+"_"+Symbol()+"_"+GetPeriodStr()+"_"+TimeToString(TimeLocal(),TIME_DATE);
      StringReplace(fileName,".","_");
      fileName=fileName+".log";
      result.succes=(StringToCharArray(fileName,result.value)==StringLen(fileName)+1);
      return result;
     }
   //--- get a log file name in case of restriction with "new log file for each new 1 Mb"
   static GettingFileLogNameResult GetLogFileNameOnOneMegabyteLimit()
     {
      GettingFileLogNameResult result;
      //---
      for(int i=0; i<MAX_LOG_FILE_COUNTER; i++)
        {
         ResetLastError();
         string fileNameToCheck=m_logFileName+"_"+Symbol()+"_"+GetPeriodStr()+"_"+(string)i;
         StringReplace(fileNameToCheck,".","_");
         fileNameToCheck=fileNameToCheck+".log";
         ResetLastError();
         bool isExists=FileIsExist(fileNameToCheck);
         //---
         if(!isExists)
           {
            if(GetLastError()==5018)
              {
               continue;
              }
           }
         //---
         if(!isExists)
           {
            result.succes=(StringToCharArray(fileNameToCheck,result.value)==StringLen(fileNameToCheck)+1);
            break;
           }
         else
           {
            ENUM_LOG_FILE_SIZE_CHECKING_RESULT checkLogFileSize=CheckLogFileSize(fileNameToCheck);
            if(checkLogFileSize==IS_LOG_FILE_LESS_ONE_MEGABYTE)
              {
               result.succes=(StringToCharArray(fileNameToCheck,result.value)==StringLen(fileNameToCheck)+1);
               break;
              }
            else if(checkLogFileSize!=IS_LOG_FILE_NOT_LESS_ONE_MEGABYTE)
              {
               break;
              }
           }
        }
      //---
      return result;
     }
   //---
   static ENUM_LOG_FILE_SIZE_CHECKING_RESULT CheckLogFileSize(const string fileNameToCheck)
     {
      int fileHandle=FileOpen(fileNameToCheck,FILE_TXT|FILE_READ);
      //---
      if(fileHandle==INVALID_HANDLE)
        {
         return LOG_FILE_SIZE_CHECKING_ERROR;
        }
      //---
      ResetLastError();
      ulong fileSize=FileSize(fileHandle);
      FileClose(fileHandle);
      //---
      if(GetLastError()!=0)
        {
         return LOG_FILE_SIZE_CHECKING_ERROR;
        }
      //---
      if(fileSize<BYTES_IN_MEGABYTE)
        {
         return IS_LOG_FILE_LESS_ONE_MEGABYTE;
        }
      else
        {
         return IS_LOG_FILE_NOT_LESS_ONE_MEGABYTE;
        }
     }
   //--- perform a log file name initialization by default
   static void InitializeDefaultLogFileName()
     {
      m_logFileName=MQLInfoString(MQL_PROGRAM_NAME);
      //---
      #ifdef __MQL4__
      StringReplace(m_logFileName,".ex4","");
      #endif
      #ifdef __MQL5__
      StringReplace(m_logFileName,".ex5","");
      #endif
     }
   //--- write a message in a file
   static bool WriteToFile(const string fileName,const string text)
     {
      ResetLastError();
      string fullText=TimeToString(TimeLocal(),TIME_DATE|TIME_SECONDS)+", "+Symbol()+" ("+GetPeriodStr()+"), "+text;
      int fileHandle=FileOpen(fileName,FILE_TXT|FILE_READ|FILE_WRITE);
      bool result=true;
      //---
      if(fileHandle!=INVALID_HANDLE)
        {
         //--- attempt to place the file pointer at the end of the file
         if(!FileSeek(fileHandle,0,SEEK_END))
           {
            Print("Logger: FileSeek() is failed, error #",GetLastError(),"; text = \"",fullText,"\"; fileName = \"",fileName,"\"");
            result=false;
           }
         //--- attempt to write the text in the file
         if(result)
           {
            if(FileWrite(fileHandle,fullText)==0)
              {
               Print("Logger: FileWrite() is failed, error #",GetLastError(),"; text = \"",fullText,"\"; fileName = \"",fileName,"\"");
               result=false;
              }
           }
         //---
         FileClose(fileHandle);
        }
      else
        {
         Print("Logger: FileOpen() is failed, error #",GetLastError(),"; text = \"",fullText,"\"; fileName = \"",fileName,"\"");
         result=false;
        }
      //---
      return result;
     }
   //--- get the current period as a string
   static string GetPeriodStr()
     {
      ResetLastError();
      string periodStr=EnumToString(Period());
      if(GetLastError()!=0)
        {
         periodStr=(string)Period();
        }
      StringReplace(periodStr,"PERIOD_","");
      //---
      return periodStr;
     }
   //---
   static string GetDebugLevelStr(const ENUM_LOG_LEVEL level)
     {
      ResetLastError();
      string levelStr=EnumToString(level);
      //---
      if(GetLastError()!=0)
        {
         levelStr=(string)level;
        }
      StringReplace(levelStr,"LOG_LEVEL_","");
      //---
      return levelStr;
     }
  };
ENUM_LOG_LEVEL CLogger::m_logLevel=LOG_LEVEL_INFO;
ENUM_LOG_LEVEL CLogger::m_notifyLevel=LOG_LEVEL_FATAL;
ENUM_LOGGING_METHOD CLogger::m_loggingMethod=LOGGING_OUTPUT_METHOD_EXTERN_FILE;
ENUM_NOTIFICATION_METHOD CLogger::m_notificationMethod=NOTIFICATION_METHOD_ALERT;
string CLogger::m_logFileName="";
ENUM_LOG_FILE_LIMIT_TYPE CLogger::m_logFileLimitType=LOG_FILE_LIMIT_TYPE_ONE_DAY;
//+------------------------------------------------------------------+

This code can be placed in a separate include file, for instance Logger.mqh, and saved in
<data_folder>/MQL5/Include (this file is attached to the article). Working with the CLogger class looks approximately as follows.

Example of using the custom logging mechanism:

#include <Logger.mqh>
//--- initialize the logger
void InitLogger()
  {
   //--- set logging levels:
   //--- DEBUG level for writing messages in a log file
   //--- ERROR level for notifications
   CLogger::SetLevels(LOG_LEVEL_DEBUG,LOG_LEVEL_ERROR);
   //--- set the notification type to PUSH notifications
   CLogger::SetNotificationMethod(NOTIFICATION_METHOD_PUSH);
   //--- set the logging method to writing to an external file
   CLogger::SetLoggingMethod(LOGGING_OUTPUT_METHOD_EXTERN_FILE);
   //--- set a name for log files
   CLogger::SetLogFileName("my_log");
   //--- set the log file restriction to "new log file for every new day"
   CLogger::SetLogFileLimitType(LOG_FILE_LIMIT_TYPE_ONE_DAY);
  }
int OnInit()
  {
   //---
   InitLogger();
   //---
   CLogger::Add(LOG_LEVEL_INFO,"");
   CLogger::Add(LOG_LEVEL_INFO,"---------- OnInit() -----------");
   LOG(LOG_LEVEL_DEBUG,"Example of debug message");
   LOG(LOG_LEVEL_INFO,"Example of info message");
   LOG(LOG_LEVEL_WARNING,"Example of warning message");
   LOG(LOG_LEVEL_ERROR,"Example of error message");
   LOG(LOG_LEVEL_FATAL,"Example of fatal message");
   //---
   return(INIT_SUCCEEDED);
  }

First, the InitLogger() function initializes all the logger parameters; then messages are written to the log.
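The LOG macro used in OnInit() above can be mimicked outside MQL5 as well. Here is a C++ sketch (illustrative names, not the article's code) showing how the call site is appended to the message before it reaches the logger:

```cpp
#include <string>

// Stand-in for CLogger::Add(): just record the last message.
static std::string g_lastMessage;
inline void AddToLog(const std::string &msg) { g_lastMessage = msg; }

// Like the article's LOG macro, append the file and line of the call site.
// (MQL5 also offers __FUNCSIG__; standard C++ has no portable equivalent,
// so only __FILE__ and __LINE__ are used here.)
#define LOG_AT(msg) \
    AddToLog(std::string(msg) + " (" + __FILE__ + \
             "; Line: " + std::to_string(__LINE__) + ")")
```

Because the macro expands at the call site, each message automatically records where in the source it was emitted, which is what makes this pattern useful for debugging.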
The result of this code is written to a log file with a name like "my_log_USDCAD_D1_2015_09_23.log" inside <data_folder>/MQL5/Files, with the following text:

2015.09.23 09:02:10, USDCAD (D1), INFO:
2015.09.23 09:02:10, USDCAD (D1), INFO: ---------- OnInit() -----------
2015.09.23 09:02:10, USDCAD (D1), DEBUG: Example of debug message (LoggerTest.mq5; int OnInit(); Line: 36)
2015.09.23 09:02:10, USDCAD (D1), INFO: Example of info message (LoggerTest.mq5; int OnInit(); Line: 38)
2015.09.23 09:02:10, USDCAD (D1), WARNING: Example of warning message (LoggerTest.mq5; int OnInit(); Line: 40)
2015.09.23 09:02:10, USDCAD (D1), ERROR: Example of error message (LoggerTest.mq5; int OnInit(); Line: 42)
2015.09.23 09:02:10, USDCAD (D1), FATAL: Example of fatal message (LoggerTest.mq5; int OnInit(); Line: 44)

In addition, messages of the ERROR and FATAL levels are sent as PUSH notifications. When the message level for writing to the log file is set to Warning (CLogger::SetLevels(LOG_LEVEL_WARNING,LOG_LEVEL_ERROR)), the output is the following:

2015.09.23 09:34:00, USDCAD (D1), WARNING: Example of warning message (LoggerTest.mq5; int OnInit(); Line: 40)
2015.09.23 09:34:00, USDCAD (D1), ERROR: Example of error message (LoggerTest.mq5; int OnInit(); Line: 42)
2015.09.23 09:34:00, USDCAD (D1), FATAL: Example of fatal message (LoggerTest.mq5; int OnInit(); Line: 44)

That is, messages below the WARNING level are not saved.

Public methods of the CLogger class and the LOG macro

Let's examine the public methods of the CLogger class and the LOG macro.

void SetLevels(const ENUM_LOG_LEVEL logLevel, const ENUM_LOG_LEVEL notifyLevel). Sets the logging levels.
const ENUM_LOG_LEVEL logLevel: messages at this and higher logging levels are stored in a log file/journal. Default: LOG_LEVEL_INFO.
const ENUM_LOG_LEVEL notifyLevel: messages at this and higher logging levels are sent as notifications. Default: LOG_LEVEL_FATAL.
Possible values for both:
- LOG_LEVEL_DEBUG
- LOG_LEVEL_INFO
- LOG_LEVEL_WARNING
- LOG_LEVEL_ERROR
- LOG_LEVEL_FATAL

void SetLoggingMethod(const ENUM_LOGGING_METHOD loggingMethod). Sets the logging method.
const ENUM_LOGGING_METHOD loggingMethod: the logging method. Default: LOGGING_OUTPUT_METHOD_EXTERN_FILE. Possible values:
- LOGGING_OUTPUT_METHOD_EXTERN_FILE: external file,
- LOGGING_OUTPUT_METHOD_PRINT: the Print function.

void SetNotificationMethod(const ENUM_NOTIFICATION_METHOD notificationMethod). Sets the notification method.
const ENUM_NOTIFICATION_METHOD notificationMethod: the notification method. Default: NOTIFICATION_METHOD_ALERT. Possible values:
- NOTIFICATION_METHOD_NONE: disabled,
- NOTIFICATION_METHOD_ALERT: the Alert function,
- NOTIFICATION_METHOD_MAIL: the SendMail function,
- NOTIFICATION_METHOD_PUSH: the SendNotification function.

void SetLogFileName(const string logFileName). Sets the log file name.
const string logFileName: the name of a log file. By default, the name of the program using the logger is used (see the private method InitializeDefaultLogFileName()).

void SetLogFileLimitType(const ENUM_LOG_FILE_LIMIT_TYPE logFileLimitType). Sets the restriction type for log files.
const ENUM_LOG_FILE_LIMIT_TYPE logFileLimitType: the restriction type. Default: LOG_FILE_LIMIT_TYPE_ONE_DAY. Possible values:
- LOG_FILE_LIMIT_TYPE_ONE_DAY: a new log file for every new day. Files with names like my_log_USDCAD_D1_2015_09_21.log, my_log_USDCAD_D1_2015_09_22.log, my_log_USDCAD_D1_2015_09_23.log, etc. will be created.
- LOG_FILE_LIMIT_TYPE_ONE_MEGABYTE: a new log file for each new 1 Mb. Files with names like my_log_USDCAD_D1_0.log, my_log_USDCAD_D1_1.log, my_log_USDCAD_D1_2.log, etc. will be created. A switch to the next file occurs once the previous file reaches 1 Mb.

void Add(const ENUM_LOG_LEVEL level, const string message). Adds a message to the log.
const ENUM_LOG_LEVEL level: the message level.
Possible values:
- LOG_LEVEL_DEBUG
- LOG_LEVEL_INFO
- LOG_LEVEL_WARNING
- LOG_LEVEL_ERROR
- LOG_LEVEL_FATAL

const string message: the text of the message.

In addition to the Add method, the LOG macro is implemented; it appends to the message the file name, function signature and line number of the place where the write to the log is performed:

#define LOG(level, message) CLogger::Add(level, message+" ("+__FILE__+"; "+__FUNCSIG__+"; Line: "+(string)__LINE__+")")

This macro can be particularly useful during debugging. Thus, the example shows a logging mechanism that allows:
- Configuring logging levels (DEBUG..FATAL).
- Setting the message level at which users should be notified.
- Choosing where the log is written: to the Expert log via Print(), or to an external file.
- For external file output: specifying a file name and setting restrictions on log files (a file for each date, or a file for every megabyte of logging).
- Specifying a notification type (Alert(), SendMail(), SendNotification()).

The proposed option is certainly just an example, and modification will be required for particular tasks (including removal of unnecessary functionality). For example, in addition to writing to an external file and the common log, writing to a database could be implemented as another logging method.

Conclusion

In this article we have considered error handling and logging with MQL5 tools. Correct error handling and relevant logging can considerably increase the quality of developed software and greatly simplify future support.

Translated from Russian by MetaQuotes Software Corp. Original article:
https://www.mql5.com/en/articles/2041
CC-MAIN-2016-40
en
refinedweb
§Action composition

This chapter introduces several ways to define generic action functionality.

§Reminder about actions

Previously, we said that an action is a Java method that returns a play.mvc.Result value. Actually, Play internally manages actions as functions. Because Java doesn't yet support first-class functions, an action provided by the Java API is an instance of play.mvc.Action:

public abstract class Action {
    public abstract CompletionStage<Result> call(Context ctx) throws Throwable;
}

For example, an action that logs the request before delegating to the wrapped action can be defined by extending play.mvc.Action.Simple:

public class VerboseAction extends play.mvc.Action.Simple {
    public CompletionStage<Result> call(Http.Context ctx) {
        Logger.info("Calling action for {}", ctx);
        return delegate.call(ctx);
    }
}

The play.mvc.Security.Authenticated and play.cache.Cached annotations and the corresponding predefined actions are shipped with Play. See the relevant API documentation for more information.

When an action is parameterized by an annotation, the annotation's values can be read inside the action via the configuration field:

public class VerboseAnnotationAction extends Action<VerboseAnnotation> {
    public CompletionStage<Result> call(Http.Context ctx) {
        if (configuration.value()) {
            Logger.info("Calling action for {}", ctx);
        }
        return delegate.call(ctx);
    }
}

An action can also pass values to subsequent code via the context args map:

public class PassArgAction extends play.mvc.Action.Simple {
    public CompletionStage<Result> call(Http.Context ctx) {
        ctx.args.put("user", User.findById(1234));
        return delegate.call(ctx);
    }
}

Then in an action you can get the arg like this:

@With(PassArgAction.class)
public static Result passArgIndex() {
    Object user = ctx().args.get("user");
    return ok(Json.toJson(user));
}

Next: HTTP Request Handlers / ActionCreator
https://www.playframework.com/documentation/2.5.x/JavaActionsComposition
On Thu, 2009-02-19 at 13:20 +0200, Dominique Leuenberger wrote: > Hi, > > this code snipet produces a new warning (especially on newer compilers): > > static av_always_inline int dv_guess_dct_mode(DVVideoContext *s, > uint8_t *data, int linesize) { > if (s->avctx->flags & CODEC_FLAG_INTERLACED_DCT) { > int ps = s->ildct_cmp(NULL, data, NULL, linesize, 8) - 400; > if (ps > 0) { > int is = s->ildct_cmp(NULL, data , NULL, > linesize<<1, 4) + > s->ildct_cmp(NULL, data + linesize, NULL, > linesize<<1, 4); > return (ps > is); > } > } else > return 0; > } > The 'issue' is reaching a non-void function without returning a value. > At first sight, of course you would say this is nonsense and a wrong > warning by the compiler, but by looking closer you can see that the > compiler is right: > > in case of the nested if (ps > 0) not being evaluated to true, no > return value is specified. the 'final' else is never reached. and thus > no return value given at all. > > I think the 'simplest' solution would be to just drop the else and > have return 0; unconditional happening at the end. On the other hand > I'm not sure if this is what it is supposed to do. It is. I'll fix it in a moment. Sorry for the trouble. Thanks, Roman.
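The fix Roman agrees to (dropping the else so that `return 0;` happens unconditionally at the end) can be sketched with a simplified stand-in. This is not the actual FFmpeg code: the real function calls s->ildct_cmp() on pixel data, whereas here the comparison scores are passed in directly to show only the control-flow change:

```cpp
// Simplified stand-in for dv_guess_dct_mode(). With the unconditional
// final return, every path through the function now returns a value;
// previously, the interlaced path with ps <= 0 fell off the end.
static int guess_dct_mode(int interlaced_flag, int ps, int is)
{
    if (interlaced_flag) {
        if (ps > 0)
            return ps > is;
    }
    return 0; // reached when the flag is off or ps <= 0
}
```

The observable behavior for the paths that previously returned a value is unchanged; the only difference is that the formerly undefined path now returns 0.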
http://ffmpeg.org/pipermail/ffmpeg-devel/2009-February/065579.html
User talk:Dorkadorkreece
From Uncyclopedia, the content-free encyclopedia

Welcome!

Hello, Dorkadorkreece 21:58, 18 December 2008 (UTC)

User:Dorkadorkreece/Hey Arnold!©

The construction tag on your article expired, so I moved it into your namespace where you can work on it at your leisure. Don't give up on it! -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 18:00, May 11, 2011 (UTC)

Hey, thanks alot! I'll be sure to keep working on it. --Dorkadorkreece 22:39, May 11, 2011 (UTC)

[[Talk:HowTo:Become An Hero]]

-Tires screeching- I did the pee review, it's there. VROOOOM! *Popping out drive-by-style* :47, May 14, 2011 (UTC)
- No problem bro, thank you for stopping by! If the article was as good as it was without the Mitchell thing, I would have given it an 8 in humor and 7 in concept, maybe even nominate it. For the "an" thing, I thaught it was a bit annoying, but also that it was stretching the joke too much. I mean, the 20th time isn't funny! It's my humble opinion. BTW, I also submitted an article to pee review, Bohemian Grove, if you know the subject or you feel like it, go take a look. :) Later bro. :15, May 15, 2011 (UTC)
- And I just want to remind you, my friend, that you should not get discouraged since for me giving a 7 in a department means it's very good: an 8 or 9 (on average) would mean Oh my God I've got to nominate this for the front page!!! A 10 is not likely to happen. I had an article that was featured (Heaven's Gate), another one that underwent a good pee-review, a little like yours, and it was trashed on VFH. It's unpredictable. But for the Mitchell thing, you really have to keep this illeterate shit that girl posted. If you could pretend in a subtle way that it was an adult, while not insisting too much on it, people that know will know, while people that don't know still won't know, you know? Good luck with it and tell me when you are through. I might give you still some other ideas. And Bohemian Grove is on VFH now.
XD Shameless self publicity. :37, May 15, 2011 (UTC)

You can't hide from me!

Dunno if you deleted your message because you figured it out already, but just in case: you oughta have a Move button up at the top, right next to History. It does what it says it does. If you don't want to leave a redirect behind, there should also be a checkbox you can press to suppress the redirect. Forgot if that's an admin-only feature or not. I don't think it is. Yeah. You should be able to do that too. -- 00:25, June 8, 2011 (UTC)
- Pretty simple, but mostly intuitive. I had trouble figuring it out for the first time. Create an article at User:Dorkadorkreece/Your Name Of Article Here, Bro, with the name of your article, bro in the place of "Your Name of Article Here, Bro". -- 21:42, June 22, 2011 (UTC)

UnNews:Users realize Facebook is as useless as a chair

This great UnNews does exactly what an UnNews should do: take a current event and tug it in many possible directions. I just don't like the headline: It states a conclusion rather than tell you what happened, breaking the superficial resemblance to a real news source. How about: UnNews:Users decide Facebook is as useless as a chair Spıke Ѧ 13:01 18-Feb-13
- Thanks! Or how about: UnNews:Many realize Facebook is as useless as a chair ? --Dorkadorkreece (talk) 13:11, February 18, 2013
- How should I change the title anyway? Do I need to delete the article and replace it with a new article with a better title and copy and paste all the text into the new one? --Dorkadorkreece (talk) 13:27, February 18, 2013 (UTC)

I like mine better, as "users" is specific and "many" (although more dramatic) is vague and, again, detracts from the resemblance to a news story. Changing a headline is achieved through the Move button. If you will let me do it, I can make it so that the old name will no longer work, so you won't have to "list the redirect on QVFD."
Spıke Ѧ 13:32 18-Feb-13

Done (as shown by the colors of the links above); I went with your word "realize". Spıke Ѧ 13:35 18-Feb-13
- Thanks alot! Do all UnNews articles go on the front page btw? --Dorkadorkreece (talk) 13:38, February 18, 2013 (UTC)

I served as Editor-in-Chief of UnNews in 2011, but have recently been promoted to Admin and consequently do very little that's creative. Romartus (who's also an Admin, and who somehow does) now picks the top 5 to go on the UnNews Front Page. As for what goes on the Uncyclopedia main page, there are about twenty recent articles (some of which may be UnNews) nominated to be featured, and you are welcome to vote on them. Spıke Ѧ 13:42 18-Feb-13
- Thanks for all the advice. --Dorkadorkreece (talk) 13:46, February 18, 2013 (UTC)

Also, I am giving you autopatrolled status. You won't notice a thing, but your edits won't show up on Special:RecentChanges with an exclamation point begging someone to check each one out. Happy editing! Spıke Ѧ 13:55 18-Feb-13
- Oh wow thanks! Is this like a permanent status for my whole profile? Or just for the article? --Dorkadorkreece (talk) 13:58, February 18, 2013 (UTC)

It is permanent to your username; but again, it doesn't affect what you do but only tends to make Patrollers less suspicious of you. In other words, I'm pretty sure you aren't a vandal! Spıke Ѧ 14:32 18-Feb-13

PS--There; you made the Front Page, on the first printing after your UnNews came out! Spıke Ѧ 18:29 18-Feb-13

History (TV channel)

By chance I came across your article whilst adding a link to Mahmoud Ahmadinejad. Has a lot of potential. Trust you come back and check :01, October 2, 2013 (UTC)

UnNews:Uncyclopedia Blackout a Success

I don't think you should waste time redoing an UnNews from two years ago, and I don't think you should revert an Admin, even though she is incorrect to state that "you weren't there for the sopa blackout here." I considered reverting you myself, but it is your article.
However, in addition to removing passives (hooray for that!), you also added a ton of errors in the first paragraph alone, and got rid of significant content (which was important to us when SOPA was a current issue), and the new ending about "suck penis" is always weak, no matter who is doing the sucking. Spıke Ѧ 00:14 7-Dec-13
- This is so late and probably irrelevant now, but if you think it should be reverted, then I'm fine with that. I don't know what errors I added but looking back it was probably pointless to edit it. --Dorkadorkreece (talk) 07:10, April 5, 2014 (UTC)

Wario

Welcome back! it's been a long time. You commented further on my archived talk page (where almost no one will ever look for it):
- Again, this is extremely late but I want to tie up loose ends from last year. I can see your points about how it is NOT funny to write stupid crap about Wario, but it I would like to point out that it WAS on my profile so I was writing it more for personal reasons to see how it would look; not so much about making it appealing to everyone else. Please let me know if this is not okay to do. Also, like you said, I didn't get much more than one sentence into writing it so it was very confusing for it to be deleted five minutes later. That's all. --Dorkadorkreece (talk) 07:16, April 5, 2014 (UTC)

"Writing...for personal reasons" is fine in the Sandbox but not in the encyclopedia, where "making it appealing to everyone else" is the whole point. But, reviewing the conversation, what you put in the encyclopedia was a tiny bit of what you had in the Sandbox and I had no way to know that more was coming. A better way to do this would have been to start it in your userspace and move it to the encyclopedia in one fell swoop. I had the separate problem of not knowing a thing about Wario, but fortunately another Admin did. Spıke Ѧ 11:15 5-Apr-14

UnNews:America celebrates 1,000th school shooting

Hi! I shortened your headline a bit. Will now put it on the Front Page.
Spıke Ѧ 12:15 30-Nov-14
http://uncyclopedia.wikia.com/wiki/User_talk:Dorkadorkreece
I'm currently trying the Amazon Web Services (AWS) with .NET and of course I'm browsing the German catalog using the German locale. The strange thing is that strings containing German umlaut characters (like ä, ö, ü, ...) arrived in .NET strings as '??'. I traced the protocols and found that AWS correctly states the use of UTF-8 in the Content-Type HTTP header, and the XML processing instruction also states UTF-8. The response as it arrives contains all umlaut characters correctly, so to me it looks like something is going wrong with the encoding in the deserialization step that maps the XML into .NET class members.

I found a solution that works. I'm using WSE 2.0 for the SOAP client and wrote a custom input filter that is very simple. I still believe it shouldn't be that way, although it works for me now.

public class EncodingFilter : SoapInputFilter
{
    public override void ProcessMessage(SoapEnvelope envelope)
    {
        envelope.Encoding = System.Text.Encoding.Unicode;
    }
}
http://blogs.msdn.com/b/juergenp/archive/2004/03/10/87273.aspx?Redirected=true&title=Web%20Services%20and%20I18N%20strangeness&summary=&source=Microsoft&armin=armin
Forum:Changing logo?

Note: This topic has been unedited for 2519 days. It is considered archived - the discussion is over. Do not add to unless it really needs a response.

What's the code to change the Uncyclopedia logo (for just one page, of course)? Is it really, really complicated or what? --Lenoxus 19:05, 19 May 2006 (UTC)

There is one for AAAAAAAAA! but there seems to be no code there... Bilky Asko Talk Here 19:17, 19 May 2006 (UTC)
- It can be changed by editing the Javascript. But you need an admin for that. -- Sir Codeine K·H·P·B·M·N·C·U·Bu. · (Harangue) 19:23, 19 May 2006 (UTC)
- A good alternative is adding a background behind the logo. The code is below:
- <span style="position:absolute;top:-50px;left:-175px;z-index:-1">[[Image:AWarn.gif|AWarn.gif]]</span> (Replace the AWarn.gif with your filename.) You can also change the offsets. I have made a preview that people are welcome to delete after it is viewed. Bilky Asko Talk Here 19:32, 19 May 2006 (UTC)
- Changing the logo for one page is hard, requires site-wide javascript manipulation, and is only done for reskins or very rarely for other pages (that are much improved by it). Changing the logo for a namespace is actually a bit easier and can be done with two lines in MediaWiki:Uncyclopedia.css. --Splaka 03:28, 20 May 2006 (UTC)
- You can change the logo to all pages just for your own user. To do this just add:
http://uncyclopedia.wikia.com/wiki/Forum:Changing_logo%3F